Dec 13 06:30:23 localhost kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 13 06:30:23 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 13 06:30:23 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 06:30:23 localhost kernel: BIOS-provided physical RAM map:
Dec 13 06:30:23 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 06:30:23 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 06:30:23 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 06:30:23 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 13 06:30:23 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 13 06:30:23 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 06:30:23 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 06:30:23 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 13 06:30:23 localhost kernel: NX (Execute Disable) protection: active
Dec 13 06:30:23 localhost kernel: APIC: Static calls initialized
Dec 13 06:30:23 localhost kernel: SMBIOS 2.8 present.
Dec 13 06:30:23 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 06:30:23 localhost kernel: Hypervisor detected: KVM
Dec 13 06:30:23 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 06:30:23 localhost kernel: kvm-clock: using sched offset of 4388818770 cycles
Dec 13 06:30:23 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 06:30:23 localhost kernel: tsc: Detected 2800.000 MHz processor
Dec 13 06:30:23 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 06:30:23 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 06:30:23 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 13 06:30:23 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 06:30:23 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 06:30:23 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 13 06:30:23 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 13 06:30:23 localhost kernel: Using GB pages for direct mapping
Dec 13 06:30:23 localhost kernel: RAMDISK: [mem 0x2d46a000-0x32a2cfff]
Dec 13 06:30:23 localhost kernel: ACPI: Early table checksum verification disabled
Dec 13 06:30:23 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 06:30:23 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 06:30:23 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 06:30:23 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 06:30:23 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 13 06:30:23 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 06:30:23 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 06:30:23 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 13 06:30:23 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 13 06:30:23 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 13 06:30:23 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 13 06:30:23 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 13 06:30:23 localhost kernel: No NUMA configuration found
Dec 13 06:30:23 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 13 06:30:23 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 13 06:30:23 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 13 06:30:23 localhost kernel: Zone ranges:
Dec 13 06:30:23 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 06:30:23 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 06:30:23 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 13 06:30:23 localhost kernel:   Device   empty
Dec 13 06:30:23 localhost kernel: Movable zone start for each node
Dec 13 06:30:23 localhost kernel: Early memory node ranges
Dec 13 06:30:23 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 06:30:23 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 13 06:30:23 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 13 06:30:23 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 13 06:30:23 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 06:30:23 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 06:30:23 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 13 06:30:23 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 06:30:23 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 06:30:23 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 06:30:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 06:30:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 06:30:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 06:30:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 06:30:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 06:30:23 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 06:30:23 localhost kernel: TSC deadline timer available
Dec 13 06:30:23 localhost kernel: CPU topo: Max. logical packages:   8
Dec 13 06:30:23 localhost kernel: CPU topo: Max. logical dies:       8
Dec 13 06:30:23 localhost kernel: CPU topo: Max. dies per package:   1
Dec 13 06:30:23 localhost kernel: CPU topo: Max. threads per core:   1
Dec 13 06:30:23 localhost kernel: CPU topo: Num. cores per package:     1
Dec 13 06:30:23 localhost kernel: CPU topo: Num. threads per package:   1
Dec 13 06:30:23 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 13 06:30:23 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 06:30:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 13 06:30:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 13 06:30:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 13 06:30:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 13 06:30:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 13 06:30:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 13 06:30:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 13 06:30:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 13 06:30:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 13 06:30:23 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 13 06:30:23 localhost kernel: Booting paravirtualized kernel on KVM
Dec 13 06:30:23 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 06:30:23 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 13 06:30:23 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 13 06:30:23 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 13 06:30:23 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 13 06:30:23 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 06:30:23 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 06:30:23 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 13 06:30:23 localhost kernel: random: crng init done
Dec 13 06:30:23 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 06:30:23 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 06:30:23 localhost kernel: Fallback order for Node 0: 0 
Dec 13 06:30:23 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 13 06:30:23 localhost kernel: Policy zone: Normal
Dec 13 06:30:23 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 06:30:23 localhost kernel: software IO TLB: area num 8.
Dec 13 06:30:23 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 13 06:30:23 localhost kernel: ftrace: allocating 49357 entries in 193 pages
Dec 13 06:30:23 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 13 06:30:23 localhost kernel: Dynamic Preempt: voluntary
Dec 13 06:30:23 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 06:30:23 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 13 06:30:23 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 13 06:30:23 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 13 06:30:23 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 13 06:30:23 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 06:30:23 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 06:30:23 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 13 06:30:23 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 13 06:30:23 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 13 06:30:23 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 13 06:30:23 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 13 06:30:23 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 06:30:23 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 13 06:30:23 localhost kernel: Console: colour VGA+ 80x25
Dec 13 06:30:23 localhost kernel: printk: console [ttyS0] enabled
Dec 13 06:30:23 localhost kernel: ACPI: Core revision 20230331
Dec 13 06:30:23 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 06:30:23 localhost kernel: x2apic enabled
Dec 13 06:30:23 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 06:30:23 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 06:30:23 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec 13 06:30:23 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 06:30:23 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 06:30:23 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 06:30:23 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 06:30:23 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 06:30:23 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 13 06:30:23 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 06:30:23 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 06:30:23 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 06:30:23 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 06:30:23 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 06:30:23 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 06:30:23 localhost kernel: x86/bugs: return thunk changed
Dec 13 06:30:23 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 06:30:23 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 06:30:23 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 06:30:23 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 06:30:23 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 06:30:23 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 06:30:23 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 13 06:30:23 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 13 06:30:23 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 13 06:30:23 localhost kernel: landlock: Up and running.
Dec 13 06:30:23 localhost kernel: Yama: becoming mindful.
Dec 13 06:30:23 localhost kernel: SELinux:  Initializing.
Dec 13 06:30:23 localhost kernel: LSM support for eBPF active
Dec 13 06:30:23 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 06:30:23 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 06:30:23 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 06:30:23 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 06:30:23 localhost kernel: ... version:                0
Dec 13 06:30:23 localhost kernel: ... bit width:              48
Dec 13 06:30:23 localhost kernel: ... generic registers:      6
Dec 13 06:30:23 localhost kernel: ... value mask:             0000ffffffffffff
Dec 13 06:30:23 localhost kernel: ... max period:             00007fffffffffff
Dec 13 06:30:23 localhost kernel: ... fixed-purpose events:   0
Dec 13 06:30:23 localhost kernel: ... event mask:             000000000000003f
Dec 13 06:30:23 localhost kernel: signal: max sigframe size: 1776
Dec 13 06:30:23 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 13 06:30:23 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 13 06:30:23 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 13 06:30:23 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 13 06:30:23 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 13 06:30:23 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 13 06:30:23 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec 13 06:30:23 localhost kernel: node 0 deferred pages initialised in 10ms
Dec 13 06:30:23 localhost kernel: Memory: 7763972K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 618220K reserved, 0K cma-reserved)
Dec 13 06:30:23 localhost kernel: devtmpfs: initialized
Dec 13 06:30:23 localhost kernel: x86/mm: Memory block size: 128MB
Dec 13 06:30:23 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 06:30:23 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 13 06:30:23 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 06:30:23 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 06:30:23 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 13 06:30:23 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 06:30:23 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 06:30:23 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 13 06:30:23 localhost kernel: audit: type=2000 audit(1765607422.053:1): state=initialized audit_enabled=0 res=1
Dec 13 06:30:23 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 13 06:30:23 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 06:30:23 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 06:30:23 localhost kernel: cpuidle: using governor menu
Dec 13 06:30:23 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 06:30:23 localhost kernel: PCI: Using configuration type 1 for base access
Dec 13 06:30:23 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 13 06:30:23 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 06:30:23 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 06:30:23 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 06:30:23 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 06:30:23 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 06:30:23 localhost kernel: Demotion targets for Node 0: null
Dec 13 06:30:23 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 06:30:23 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 13 06:30:23 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 13 06:30:23 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 06:30:23 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 06:30:23 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 06:30:23 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 06:30:23 localhost kernel: ACPI: Interpreter enabled
Dec 13 06:30:23 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 13 06:30:23 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 06:30:23 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 06:30:23 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 06:30:23 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 06:30:23 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 06:30:23 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [3] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [4] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [5] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [6] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [7] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [8] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [9] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [10] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [11] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [12] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [13] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [14] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [15] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [16] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [17] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [18] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [19] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [20] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [21] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [22] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [23] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [24] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [25] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [26] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [27] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [28] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [29] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [30] registered
Dec 13 06:30:23 localhost kernel: acpiphp: Slot [31] registered
Dec 13 06:30:23 localhost kernel: PCI host bridge to bus 0000:00
Dec 13 06:30:23 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 06:30:23 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 13 06:30:23 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 06:30:23 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 06:30:23 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 13 06:30:23 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 06:30:23 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 06:30:23 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 13 06:30:23 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 13 06:30:23 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 06:30:23 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 13 06:30:23 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 06:30:23 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 06:30:23 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 13 06:30:23 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 13 06:30:23 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 13 06:30:23 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 06:30:23 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 06:30:23 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 13 06:30:23 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 13 06:30:23 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 13 06:30:23 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 06:30:23 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 13 06:30:23 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 13 06:30:23 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 06:30:23 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 13 06:30:23 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 13 06:30:23 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 06:30:23 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 06:30:23 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 06:30:23 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 06:30:23 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 06:30:23 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 06:30:23 localhost kernel: iommu: Default domain type: Translated
Dec 13 06:30:23 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 06:30:23 localhost kernel: SCSI subsystem initialized
Dec 13 06:30:23 localhost kernel: ACPI: bus type USB registered
Dec 13 06:30:23 localhost kernel: usbcore: registered new interface driver usbfs
Dec 13 06:30:23 localhost kernel: usbcore: registered new interface driver hub
Dec 13 06:30:23 localhost kernel: usbcore: registered new device driver usb
Dec 13 06:30:23 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 06:30:23 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 06:30:23 localhost kernel: PTP clock support registered
Dec 13 06:30:23 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 13 06:30:23 localhost kernel: NetLabel: Initializing
Dec 13 06:30:23 localhost kernel: NetLabel:  domain hash size = 128
Dec 13 06:30:23 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 13 06:30:23 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 13 06:30:23 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 13 06:30:23 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 06:30:23 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 06:30:23 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 13 06:30:23 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 06:30:23 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 06:30:23 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 06:30:23 localhost kernel: vgaarb: loaded
Dec 13 06:30:23 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 06:30:23 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 06:30:23 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 06:30:23 localhost kernel: pnp: PnP ACPI init
Dec 13 06:30:23 localhost kernel: pnp 00:03: [dma 2]
Dec 13 06:30:23 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 13 06:30:23 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 06:30:23 localhost kernel: NET: Registered PF_INET protocol family
Dec 13 06:30:23 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 06:30:23 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 06:30:23 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 06:30:23 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 06:30:23 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 06:30:23 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 06:30:23 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 13 06:30:23 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 06:30:23 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 06:30:23 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 06:30:23 localhost kernel: NET: Registered PF_XDP protocol family
Dec 13 06:30:23 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 13 06:30:23 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 13 06:30:23 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 06:30:23 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 13 06:30:23 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 06:30:23 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 06:30:23 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 06:30:23 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73089 usecs
Dec 13 06:30:23 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 13 06:30:23 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 06:30:23 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 13 06:30:23 localhost kernel: ACPI: bus type thunderbolt registered
Dec 13 06:30:23 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 13 06:30:23 localhost kernel: Initialise system trusted keyrings
Dec 13 06:30:23 localhost kernel: Key type blacklist registered
Dec 13 06:30:23 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 13 06:30:23 localhost kernel: zbud: loaded
Dec 13 06:30:23 localhost kernel: integrity: Platform Keyring initialized
Dec 13 06:30:23 localhost kernel: integrity: Machine keyring initialized
Dec 13 06:30:23 localhost kernel: Freeing initrd memory: 87820K
Dec 13 06:30:23 localhost kernel: NET: Registered PF_ALG protocol family
Dec 13 06:30:23 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 13 06:30:23 localhost kernel: Key type asymmetric registered
Dec 13 06:30:23 localhost kernel: Asymmetric key parser 'x509' registered
Dec 13 06:30:23 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 13 06:30:23 localhost kernel: io scheduler mq-deadline registered
Dec 13 06:30:23 localhost kernel: io scheduler kyber registered
Dec 13 06:30:23 localhost kernel: io scheduler bfq registered
Dec 13 06:30:23 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 13 06:30:23 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 13 06:30:23 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 13 06:30:23 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 13 06:30:23 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 06:30:23 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 06:30:23 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 06:30:23 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 06:30:23 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 06:30:23 localhost kernel: Non-volatile memory driver v1.3
Dec 13 06:30:23 localhost kernel: rdac: device handler registered
Dec 13 06:30:23 localhost kernel: hp_sw: device handler registered
Dec 13 06:30:23 localhost kernel: emc: device handler registered
Dec 13 06:30:23 localhost kernel: alua: device handler registered
Dec 13 06:30:23 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 13 06:30:23 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 13 06:30:23 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 13 06:30:23 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 13 06:30:23 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 13 06:30:23 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 13 06:30:23 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 13 06:30:23 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 13 06:30:23 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 13 06:30:23 localhost kernel: hub 1-0:1.0: USB hub found
Dec 13 06:30:23 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 13 06:30:23 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 13 06:30:23 localhost kernel: usbserial: USB Serial support registered for generic
Dec 13 06:30:23 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 06:30:23 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 06:30:23 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 06:30:23 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 06:30:23 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 06:30:23 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 13 06:30:23 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 13 06:30:23 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 13 06:30:23 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 06:30:23 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-13T06:30:22 UTC (1765607422)
Dec 13 06:30:23 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 06:30:23 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 06:30:23 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 06:30:23 localhost kernel: usbcore: registered new interface driver usbhid
Dec 13 06:30:23 localhost kernel: usbhid: USB HID core driver
Dec 13 06:30:23 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 13 06:30:23 localhost kernel: Initializing XFRM netlink socket
Dec 13 06:30:23 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 13 06:30:23 localhost kernel: Segment Routing with IPv6
Dec 13 06:30:23 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 13 06:30:23 localhost kernel: mpls_gso: MPLS GSO support
Dec 13 06:30:23 localhost kernel: IPI shorthand broadcast: enabled
Dec 13 06:30:23 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 06:30:23 localhost kernel: AES CTR mode by8 optimization enabled
Dec 13 06:30:23 localhost kernel: sched_clock: Marking stable (1226069220, 159887650)->(1506132460, -120175590)
Dec 13 06:30:23 localhost kernel: registered taskstats version 1
Dec 13 06:30:23 localhost kernel: Loading compiled-in X.509 certificates
Dec 13 06:30:23 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 13 06:30:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 13 06:30:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 13 06:30:23 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 13 06:30:23 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 13 06:30:23 localhost kernel: Demotion targets for Node 0: null
Dec 13 06:30:23 localhost kernel: page_owner is disabled
Dec 13 06:30:23 localhost kernel: Key type .fscrypt registered
Dec 13 06:30:23 localhost kernel: Key type fscrypt-provisioning registered
Dec 13 06:30:23 localhost kernel: Key type big_key registered
Dec 13 06:30:23 localhost kernel: Key type encrypted registered
Dec 13 06:30:23 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 06:30:23 localhost kernel: Loading compiled-in module X.509 certificates
Dec 13 06:30:23 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 13 06:30:23 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 13 06:30:23 localhost kernel: ima: No architecture policies found
Dec 13 06:30:23 localhost kernel: evm: Initialising EVM extended attributes:
Dec 13 06:30:23 localhost kernel: evm: security.selinux
Dec 13 06:30:23 localhost kernel: evm: security.SMACK64 (disabled)
Dec 13 06:30:23 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 13 06:30:23 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 13 06:30:23 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 13 06:30:23 localhost kernel: evm: security.apparmor (disabled)
Dec 13 06:30:23 localhost kernel: evm: security.ima
Dec 13 06:30:23 localhost kernel: evm: security.capability
Dec 13 06:30:23 localhost kernel: evm: HMAC attrs: 0x1
Dec 13 06:30:23 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 13 06:30:23 localhost kernel: Running certificate verification RSA selftest
Dec 13 06:30:23 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 13 06:30:23 localhost kernel: Running certificate verification ECDSA selftest
Dec 13 06:30:23 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 13 06:30:23 localhost kernel: clk: Disabling unused clocks
Dec 13 06:30:23 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 13 06:30:23 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 13 06:30:23 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 13 06:30:23 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 13 06:30:23 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 13 06:30:23 localhost kernel: Run /init as init process
Dec 13 06:30:23 localhost kernel:   with arguments:
Dec 13 06:30:23 localhost kernel:     /init
Dec 13 06:30:23 localhost kernel:   with environment:
Dec 13 06:30:23 localhost kernel:     HOME=/
Dec 13 06:30:23 localhost kernel:     TERM=linux
Dec 13 06:30:23 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
Dec 13 06:30:23 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 06:30:23 localhost systemd[1]: Detected virtualization kvm.
Dec 13 06:30:23 localhost systemd[1]: Detected architecture x86-64.
Dec 13 06:30:23 localhost systemd[1]: Running in initrd.
Dec 13 06:30:23 localhost systemd[1]: No hostname configured, using default hostname.
Dec 13 06:30:23 localhost systemd[1]: Hostname set to <localhost>.
Dec 13 06:30:23 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 13 06:30:23 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 13 06:30:23 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 13 06:30:23 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 13 06:30:23 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 13 06:30:23 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 13 06:30:23 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 13 06:30:23 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 13 06:30:23 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 13 06:30:23 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 13 06:30:23 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 13 06:30:23 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 13 06:30:23 localhost systemd[1]: Reached target Local File Systems.
Dec 13 06:30:23 localhost systemd[1]: Reached target Path Units.
Dec 13 06:30:23 localhost systemd[1]: Reached target Slice Units.
Dec 13 06:30:23 localhost systemd[1]: Reached target Swaps.
Dec 13 06:30:23 localhost systemd[1]: Reached target Timer Units.
Dec 13 06:30:23 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 13 06:30:23 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 13 06:30:23 localhost systemd[1]: Listening on Journal Socket.
Dec 13 06:30:23 localhost systemd[1]: Listening on udev Control Socket.
Dec 13 06:30:23 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 13 06:30:23 localhost systemd[1]: Reached target Socket Units.
Dec 13 06:30:23 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 13 06:30:23 localhost systemd[1]: Starting Journal Service...
Dec 13 06:30:23 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 13 06:30:23 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 13 06:30:23 localhost systemd[1]: Starting Create System Users...
Dec 13 06:30:23 localhost systemd[1]: Starting Setup Virtual Console...
Dec 13 06:30:23 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 13 06:30:23 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 13 06:30:23 localhost systemd[1]: Finished Create System Users.
Dec 13 06:30:23 localhost systemd-journald[306]: Journal started
Dec 13 06:30:23 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/44548dc0241a47779df7ed38fb199087) is 8.0M, max 153.6M, 145.6M free.
Dec 13 06:30:23 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec 13 06:30:23 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec 13 06:30:23 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 13 06:30:23 localhost systemd[1]: Started Journal Service.
Dec 13 06:30:23 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 13 06:30:23 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 13 06:30:23 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 13 06:30:23 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 13 06:30:23 localhost systemd[1]: Finished Setup Virtual Console.
Dec 13 06:30:23 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 13 06:30:23 localhost systemd[1]: Starting dracut cmdline hook...
Dec 13 06:30:23 localhost dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Dec 13 06:30:23 localhost dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 06:30:23 localhost systemd[1]: Finished dracut cmdline hook.
Dec 13 06:30:23 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 13 06:30:23 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 06:30:23 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 13 06:30:23 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 13 06:30:23 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 13 06:30:23 localhost kernel: RPC: Registered udp transport module.
Dec 13 06:30:23 localhost kernel: RPC: Registered tcp transport module.
Dec 13 06:30:23 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 13 06:30:23 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 06:30:23 localhost rpc.statd[442]: Version 2.5.4 starting
Dec 13 06:30:23 localhost rpc.statd[442]: Initializing NSM state
Dec 13 06:30:23 localhost rpc.idmapd[447]: Setting log level to 0
Dec 13 06:30:23 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 13 06:30:23 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 13 06:30:23 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Dec 13 06:30:23 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 13 06:30:23 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 13 06:30:23 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 13 06:30:23 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 13 06:30:23 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 13 06:30:23 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 13 06:30:23 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 13 06:30:23 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 06:30:23 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 13 06:30:23 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 13 06:30:23 localhost systemd[1]: Reached target Network.
Dec 13 06:30:23 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 13 06:30:23 localhost systemd[1]: Starting dracut initqueue hook...
Dec 13 06:30:23 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 13 06:30:23 localhost kernel: libata version 3.00 loaded.
Dec 13 06:30:23 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 06:30:23 localhost kernel: scsi host0: ata_piix
Dec 13 06:30:23 localhost kernel: scsi host1: ata_piix
Dec 13 06:30:23 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 13 06:30:23 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 13 06:30:23 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 13 06:30:23 localhost kernel:  vda: vda1
Dec 13 06:30:24 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 13 06:30:24 localhost kernel: ata1: found unknown device (class 0)
Dec 13 06:30:24 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 06:30:24 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 13 06:30:24 localhost systemd-udevd[497]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 06:30:24 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 13 06:30:24 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 13 06:30:24 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 06:30:24 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 06:30:24 localhost systemd[1]: Found device /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 13 06:30:24 localhost systemd[1]: Reached target Initrd Root Device.
Dec 13 06:30:24 localhost systemd[1]: Reached target System Initialization.
Dec 13 06:30:24 localhost systemd[1]: Reached target Basic System.
Dec 13 06:30:24 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 13 06:30:24 localhost systemd[1]: Finished dracut initqueue hook.
Dec 13 06:30:24 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 13 06:30:24 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 13 06:30:24 localhost systemd[1]: Reached target Remote File Systems.
Dec 13 06:30:24 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 13 06:30:24 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 13 06:30:24 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266...
Dec 13 06:30:24 localhost systemd-fsck[552]: /usr/sbin/fsck.xfs: XFS file system.
Dec 13 06:30:24 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 13 06:30:24 localhost systemd[1]: Mounting /sysroot...
Dec 13 06:30:24 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 13 06:30:24 localhost kernel: XFS (vda1): Mounting V5 Filesystem cbdedf45-ed1d-4952-82a8-33a12c0ba266
Dec 13 06:30:24 localhost kernel: XFS (vda1): Ending clean mount
Dec 13 06:30:25 localhost systemd[1]: Mounted /sysroot.
Dec 13 06:30:25 localhost systemd[1]: Reached target Initrd Root File System.
Dec 13 06:30:25 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 13 06:30:25 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 13 06:30:25 localhost systemd[1]: Reached target Initrd File Systems.
Dec 13 06:30:25 localhost systemd[1]: Reached target Initrd Default Target.
Dec 13 06:30:25 localhost systemd[1]: Starting dracut mount hook...
Dec 13 06:30:25 localhost systemd[1]: Finished dracut mount hook.
Dec 13 06:30:25 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 13 06:30:25 localhost rpc.idmapd[447]: exiting on signal 15
Dec 13 06:30:25 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 13 06:30:25 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 13 06:30:25 localhost systemd[1]: Stopped target Network.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Timer Units.
Dec 13 06:30:25 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 13 06:30:25 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Basic System.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Path Units.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Remote File Systems.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Slice Units.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Socket Units.
Dec 13 06:30:25 localhost systemd[1]: Stopped target System Initialization.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Local File Systems.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Swaps.
Dec 13 06:30:25 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped dracut mount hook.
Dec 13 06:30:25 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 13 06:30:25 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 13 06:30:25 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 13 06:30:25 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 13 06:30:25 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 13 06:30:25 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 13 06:30:25 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 13 06:30:25 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 13 06:30:25 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 13 06:30:25 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 13 06:30:25 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 13 06:30:25 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 13 06:30:25 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Closed udev Control Socket.
Dec 13 06:30:25 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Closed udev Kernel Socket.
Dec 13 06:30:25 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 13 06:30:25 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 13 06:30:25 localhost systemd[1]: Starting Cleanup udev Database...
Dec 13 06:30:25 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 13 06:30:25 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 13 06:30:25 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Stopped Create System Users.
Dec 13 06:30:25 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 06:30:25 localhost systemd[1]: Finished Cleanup udev Database.
Dec 13 06:30:25 localhost systemd[1]: Reached target Switch Root.
Dec 13 06:30:25 localhost systemd[1]: Starting Switch Root...
Dec 13 06:30:25 localhost systemd[1]: Switching root.
Dec 13 06:30:25 localhost systemd-journald[306]: Journal stopped
Dec 13 08:21:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:21:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359856825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:21:20 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 08:21:20 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.150 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.178 248514 DEBUG nova.storage.rbd_utils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 07413df5-0bb8-42c2-95ff-13458d598139_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.182 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:21:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3037640602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.483 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.501 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.508 248514 DEBUG nova.compute.provider_tree [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.531 248514 DEBUG nova.scheduler.client.report [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.565 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.566 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.683 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.684 248514 DEBUG nova.network.neutron [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:21:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:21:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065062239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.744 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.746 248514 DEBUG nova.objects.instance [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lazy-loading 'pci_devices' on Instance uuid 07413df5-0bb8-42c2-95ff-13458d598139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.796 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:21:20 compute-0 nova_compute[248510]:   <uuid>07413df5-0bb8-42c2-95ff-13458d598139</uuid>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   <name>instance-00000019</name>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1462636274</nova:name>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:21:19</nova:creationTime>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:21:20 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:21:20 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:21:20 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:21:20 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:21:20 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:21:20 compute-0 nova_compute[248510]:         <nova:user uuid="bb27aa40b8134948b82eee1cf755ccc1">tempest-ListImageFiltersTestJSON-1727108935-project-member</nova:user>
Dec 13 08:21:20 compute-0 nova_compute[248510]:         <nova:project uuid="14e8ffb710fe4f92a0f68ad58c260f0f">tempest-ListImageFiltersTestJSON-1727108935</nova:project>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <system>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <entry name="serial">07413df5-0bb8-42c2-95ff-13458d598139</entry>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <entry name="uuid">07413df5-0bb8-42c2-95ff-13458d598139</entry>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     </system>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   <os>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   </os>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   <features>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   </features>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/07413df5-0bb8-42c2-95ff-13458d598139_disk">
Dec 13 08:21:20 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       </source>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:21:20 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/07413df5-0bb8-42c2-95ff-13458d598139_disk.config">
Dec 13 08:21:20 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       </source>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:21:20 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/console.log" append="off"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <video>
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     </video>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:21:20 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:21:20 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:21:20 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:21:20 compute-0 nova_compute[248510]: </domain>
Dec 13 08:21:20 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029789743506055313 of space, bias 1.0, pg target 0.8936923051816594 quantized to 32 (current 32)
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667350369986437 of space, bias 1.0, pg target 0.2002051109959311 quantized to 32 (current 32)
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.966071310985396e-07 of space, bias 4.0, pg target 0.0009559285573182475 quantized to 16 (current 32)
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.805 248514 INFO nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:21:20 compute-0 nova_compute[248510]: 2025-12-13 08:21:20.921 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:21:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 372 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 215 op/s
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.003 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.004 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.005 248514 INFO nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Using config drive
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.030 248514 DEBUG nova.storage.rbd_utils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 07413df5-0bb8-42c2-95ff-13458d598139_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.093 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.095 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.095 248514 INFO nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Creating image(s)
Dec 13 08:21:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3359856825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:21:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3037640602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1065062239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.137 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.163 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.195 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.199 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.246 248514 INFO nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Creating config drive at /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/disk.config
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.252 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwq8485sl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.293 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.294 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.295 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.295 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.321 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.326 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.401 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwq8485sl" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.429 248514 DEBUG nova.storage.rbd_utils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 07413df5-0bb8-42c2-95ff-13458d598139_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.433 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/disk.config 07413df5-0bb8-42c2-95ff-13458d598139_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.477 248514 DEBUG nova.network.neutron [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.478 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.568 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.569 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.569 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.569 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.569 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.571 248514 INFO nova.compute.manager [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Terminating instance
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.572 248514 DEBUG nova.compute.manager [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:21:21 compute-0 kernel: tap2b62e133-0b (unregistering): left promiscuous mode
Dec 13 08:21:21 compute-0 NetworkManager[50376]: <info>  [1765614081.9614] device (tap2b62e133-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:21:21 compute-0 ovn_controller[148476]: 2025-12-13T08:21:21Z|00150|binding|INFO|Releasing lport 2b62e133-0b9e-4c9d-9219-2a7e55c381ab from this chassis (sb_readonly=0)
Dec 13 08:21:21 compute-0 ovn_controller[148476]: 2025-12-13T08:21:21Z|00151|binding|INFO|Setting lport 2b62e133-0b9e-4c9d-9219-2a7e55c381ab down in Southbound
Dec 13 08:21:21 compute-0 ovn_controller[148476]: 2025-12-13T08:21:21Z|00152|binding|INFO|Removing iface tap2b62e133-0b ovn-installed in OVS
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.971 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.974 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:21.982 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:ac:d2 10.100.0.4'], port_security=['fa:16:3e:23:ac:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0d64e209-19e7-4ad3-a790-43d04d832838', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2b62e133-0b9e-4c9d-9219-2a7e55c381ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:21:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:21.985 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2b62e133-0b9e-4c9d-9219-2a7e55c381ab in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis
Dec 13 08:21:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:21.987 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5
Dec 13 08:21:21 compute-0 nova_compute[248510]: 2025-12-13 08:21:21.989 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.010 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53638d0e-777e-4295-904f-a26c4f28f88f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:22 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Dec 13 08:21:22 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 20.806s CPU time.
Dec 13 08:21:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.039 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1285804c-5f24-4e55-9620-16a4a79bc204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:22 compute-0 systemd-machined[210538]: Machine qemu-22-instance-00000014 terminated.
Dec 13 08:21:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.047 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7cbd7bc2-10e2-4d6e-80b2-f1afd2daf6dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.076 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f7f0a4-605b-47af-8226-14ff8e7d2fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.101 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[abd8b276-da30-447d-97b9-60a84eeed6d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 20, 'rx_bytes': 994, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 20, 'rx_bytes': 994, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279101, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.131 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1767c7-2f37-4806-bd63-a2252501eb94]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279102, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279102, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.133 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.135 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.142 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.141 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.142 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:21:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.142 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.142 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.215 248514 INFO nova.virt.libvirt.driver [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Instance destroyed successfully.
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.215 248514 DEBUG nova.objects.instance [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'resources' on Instance uuid 0d64e209-19e7-4ad3-a790-43d04d832838 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.239 248514 DEBUG nova.virt.libvirt.vif [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:18:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-301010878',display_name='tempest-ServersAdminTestJSON-server-301010878',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-301010878',id=20,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:18:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-pq0jl515',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:18:43Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=0d64e209-19e7-4ad3-a790-43d04d832838,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.239 248514 DEBUG nova.network.os_vif_util [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.240 248514 DEBUG nova.network.os_vif_util [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:ac:d2,bridge_name='br-int',has_traffic_filtering=True,id=2b62e133-0b9e-4c9d-9219-2a7e55c381ab,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b62e133-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.241 248514 DEBUG os_vif [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:ac:d2,bridge_name='br-int',has_traffic_filtering=True,id=2b62e133-0b9e-4c9d-9219-2a7e55c381ab,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b62e133-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.244 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b62e133-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.247 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.251 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.254 248514 INFO os_vif [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:ac:d2,bridge_name='br-int',has_traffic_filtering=True,id=2b62e133-0b9e-4c9d-9219-2a7e55c381ab,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b62e133-0b')
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.279 248514 DEBUG nova.compute.manager [req-74875409-2fd3-41e6-8fb5-1e321d0b3aa4 req-fb57fdba-baa3-47e0-97de-c17773874d2b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received event network-vif-unplugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.280 248514 DEBUG oslo_concurrency.lockutils [req-74875409-2fd3-41e6-8fb5-1e321d0b3aa4 req-fb57fdba-baa3-47e0-97de-c17773874d2b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.280 248514 DEBUG oslo_concurrency.lockutils [req-74875409-2fd3-41e6-8fb5-1e321d0b3aa4 req-fb57fdba-baa3-47e0-97de-c17773874d2b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.281 248514 DEBUG oslo_concurrency.lockutils [req-74875409-2fd3-41e6-8fb5-1e321d0b3aa4 req-fb57fdba-baa3-47e0-97de-c17773874d2b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.281 248514 DEBUG nova.compute.manager [req-74875409-2fd3-41e6-8fb5-1e321d0b3aa4 req-fb57fdba-baa3-47e0-97de-c17773874d2b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] No waiting events found dispatching network-vif-unplugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.281 248514 DEBUG nova.compute.manager [req-74875409-2fd3-41e6-8fb5-1e321d0b3aa4 req-fb57fdba-baa3-47e0-97de-c17773874d2b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received event network-vif-unplugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:21:22 compute-0 ceph-mon[76537]: pgmap v1664: 321 pgs: 321 active+clean; 372 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 215 op/s
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.312 248514 DEBUG nova.compute.manager [req-87217f62-c967-47ad-89a6-d9ed1d540777 req-13a71e33-1386-4ecd-9728-0b5e403535fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.312 248514 DEBUG oslo_concurrency.lockutils [req-87217f62-c967-47ad-89a6-d9ed1d540777 req-13a71e33-1386-4ecd-9728-0b5e403535fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.313 248514 DEBUG oslo_concurrency.lockutils [req-87217f62-c967-47ad-89a6-d9ed1d540777 req-13a71e33-1386-4ecd-9728-0b5e403535fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.313 248514 DEBUG oslo_concurrency.lockutils [req-87217f62-c967-47ad-89a6-d9ed1d540777 req-13a71e33-1386-4ecd-9728-0b5e403535fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.313 248514 DEBUG nova.compute.manager [req-87217f62-c967-47ad-89a6-d9ed1d540777 req-13a71e33-1386-4ecd-9728-0b5e403535fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Processing event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.314 248514 DEBUG nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.319 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.323 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Instance spawned successfully.
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.323 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.340 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614082.3283288, 2e309dc2-3cab-4ecf-8be7-eab85790a0da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.341 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] VM Resumed (Lifecycle Event)
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.356 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.357 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.358 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.358 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.359 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.359 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.366 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.370 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.399 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.431 248514 INFO nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Took 14.58 seconds to spawn the instance on the hypervisor.
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.432 248514 DEBUG nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.599 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.668 248514 INFO nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Took 16.23 seconds to build instance.
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.673 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] resizing rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.763 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/disk.config 07413df5-0bb8-42c2-95ff-13458d598139_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.763 248514 INFO nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Deleting local config drive /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/disk.config because it was imported into RBD.
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.842 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:22 compute-0 systemd-machined[210538]: New machine qemu-30-instance-00000019.
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.854 248514 DEBUG nova.objects.instance [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lazy-loading 'migration_context' on Instance uuid 9b6188af-75f0-4213-89c2-bd3eb72960b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:21:22 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-00000019.
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.874 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.875 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Ensure instance console log exists: /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.876 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.876 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.876 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.878 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.896 248514 WARNING nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.907 248514 DEBUG nova.virt.libvirt.host [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.909 248514 DEBUG nova.virt.libvirt.host [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.922 248514 DEBUG nova.virt.libvirt.host [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.923 248514 DEBUG nova.virt.libvirt.host [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.924 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.924 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.924 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.924 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.925 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.925 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.925 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.925 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.926 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.926 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.926 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.926 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:21:22 compute-0 nova_compute[248510]: 2025-12-13 08:21:22.929 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 399 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 228 op/s
Dec 13 08:21:23 compute-0 nova_compute[248510]: 2025-12-13 08:21:23.108 248514 INFO nova.virt.libvirt.driver [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Deleting instance files /var/lib/nova/instances/0d64e209-19e7-4ad3-a790-43d04d832838_del
Dec 13 08:21:23 compute-0 nova_compute[248510]: 2025-12-13 08:21:23.110 248514 INFO nova.virt.libvirt.driver [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Deletion of /var/lib/nova/instances/0d64e209-19e7-4ad3-a790-43d04d832838_del complete
Dec 13 08:21:23 compute-0 nova_compute[248510]: 2025-12-13 08:21:23.172 248514 INFO nova.compute.manager [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Took 1.60 seconds to destroy the instance on the hypervisor.
Dec 13 08:21:23 compute-0 nova_compute[248510]: 2025-12-13 08:21:23.173 248514 DEBUG oslo.service.loopingcall [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:21:23 compute-0 nova_compute[248510]: 2025-12-13 08:21:23.173 248514 DEBUG nova.compute.manager [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:21:23 compute-0 nova_compute[248510]: 2025-12-13 08:21:23.173 248514 DEBUG nova.network.neutron [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:21:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:21:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929085001' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:21:23 compute-0 nova_compute[248510]: 2025-12-13 08:21:23.915 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.986s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:23 compute-0 nova_compute[248510]: 2025-12-13 08:21:23.945 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:23 compute-0 nova_compute[248510]: 2025-12-13 08:21:23.953 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.064 248514 DEBUG nova.network.neutron [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.108 248514 INFO nova.compute.manager [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Took 0.93 seconds to deallocate network for instance.
Dec 13 08:21:24 compute-0 ceph-mon[76537]: pgmap v1665: 321 pgs: 321 active+clean; 399 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 228 op/s
Dec 13 08:21:24 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1929085001' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.265 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614084.2595425, 07413df5-0bb8-42c2-95ff-13458d598139 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.267 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] VM Resumed (Lifecycle Event)
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.271 248514 DEBUG nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.278 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.287 248514 INFO nova.virt.libvirt.driver [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Instance spawned successfully.
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.289 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.414 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.416 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:21:24 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2275496492' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.541 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.544 248514 DEBUG nova.objects.instance [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b6188af-75f0-4213-89c2-bd3eb72960b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.580 248514 DEBUG oslo_concurrency.processutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.618 248514 DEBUG nova.compute.manager [req-0263d3a9-7f1c-4830-9111-d034042c24dd req-7bea2224-8461-4ac5-b3d6-425b7d054c55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.619 248514 DEBUG oslo_concurrency.lockutils [req-0263d3a9-7f1c-4830-9111-d034042c24dd req-7bea2224-8461-4ac5-b3d6-425b7d054c55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.620 248514 DEBUG oslo_concurrency.lockutils [req-0263d3a9-7f1c-4830-9111-d034042c24dd req-7bea2224-8461-4ac5-b3d6-425b7d054c55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.620 248514 DEBUG oslo_concurrency.lockutils [req-0263d3a9-7f1c-4830-9111-d034042c24dd req-7bea2224-8461-4ac5-b3d6-425b7d054c55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.620 248514 DEBUG nova.compute.manager [req-0263d3a9-7f1c-4830-9111-d034042c24dd req-7bea2224-8461-4ac5-b3d6-425b7d054c55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.620 248514 WARNING nova.compute.manager [req-0263d3a9-7f1c-4830-9111-d034042c24dd req-7bea2224-8461-4ac5-b3d6-425b7d054c55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 for instance with vm_state active and task_state None.
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.662 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.665 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:21:24 compute-0 nova_compute[248510]:   <uuid>9b6188af-75f0-4213-89c2-bd3eb72960b7</uuid>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   <name>instance-0000001a</name>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1010980373</nova:name>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:21:22</nova:creationTime>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:21:24 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:21:24 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:21:24 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:21:24 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:21:24 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:21:24 compute-0 nova_compute[248510]:         <nova:user uuid="bb27aa40b8134948b82eee1cf755ccc1">tempest-ListImageFiltersTestJSON-1727108935-project-member</nova:user>
Dec 13 08:21:24 compute-0 nova_compute[248510]:         <nova:project uuid="14e8ffb710fe4f92a0f68ad58c260f0f">tempest-ListImageFiltersTestJSON-1727108935</nova:project>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <system>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <entry name="serial">9b6188af-75f0-4213-89c2-bd3eb72960b7</entry>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <entry name="uuid">9b6188af-75f0-4213-89c2-bd3eb72960b7</entry>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     </system>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   <os>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   </os>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   <features>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   </features>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9b6188af-75f0-4213-89c2-bd3eb72960b7_disk">
Dec 13 08:21:24 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       </source>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:21:24 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9b6188af-75f0-4213-89c2-bd3eb72960b7_disk.config">
Dec 13 08:21:24 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       </source>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:21:24 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/console.log" append="off"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <video>
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     </video>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:21:24 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:21:24 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:21:24 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:21:24 compute-0 nova_compute[248510]: </domain>
Dec 13 08:21:24 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:21:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.699 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.699 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.700 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.700 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.700 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.701 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.710 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.957 248514 DEBUG nova.compute.manager [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received event network-vif-plugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.957 248514 DEBUG oslo_concurrency.lockutils [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.957 248514 DEBUG oslo_concurrency.lockutils [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.958 248514 DEBUG oslo_concurrency.lockutils [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.958 248514 DEBUG nova.compute.manager [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] No waiting events found dispatching network-vif-plugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.958 248514 WARNING nova.compute.manager [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received unexpected event network-vif-plugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab for instance with vm_state deleted and task_state None.
Dec 13 08:21:24 compute-0 nova_compute[248510]: 2025-12-13 08:21:24.958 248514 DEBUG nova.compute.manager [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received event network-vif-deleted-2b62e133-0b9e-4c9d-9219-2a7e55c381ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 420 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 6.2 MiB/s wr, 276 op/s
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.069 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.069 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614084.2609031, 07413df5-0bb8-42c2-95ff-13458d598139 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.070 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] VM Started (Lifecycle Event)
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.095 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.100 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.102 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.104 248514 INFO nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Using config drive
Dec 13 08:21:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:21:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2144514618' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.274 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.284 248514 INFO nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Took 6.56 seconds to spawn the instance on the hypervisor.
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.284 248514 DEBUG nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.285 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.304 248514 DEBUG oslo_concurrency.processutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.308 248514 DEBUG nova.compute.provider_tree [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.336 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.337 248514 DEBUG nova.scheduler.client.report [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.355 248514 INFO nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Took 8.80 seconds to build instance.
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.361 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.389 248514 INFO nova.scheduler.client.report [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Deleted allocations for instance 0d64e209-19e7-4ad3-a790-43d04d832838
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.393 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "07413df5-0bb8-42c2-95ff-13458d598139" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2275496492' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:25 compute-0 NetworkManager[50376]: <info>  [1765614085.4694] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Dec 13 08:21:25 compute-0 NetworkManager[50376]: <info>  [1765614085.4707] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.479 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.541 248514 INFO nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Creating config drive at /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/disk.config
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.546 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphs519wrp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:25 compute-0 ovn_controller[148476]: 2025-12-13T08:21:25Z|00153|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 08:21:25 compute-0 ovn_controller[148476]: 2025-12-13T08:21:25Z|00154|binding|INFO|Releasing lport c8850a7f-53d6-4776-8c0e-7fb474fe52de from this chassis (sb_readonly=0)
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.579 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.695 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphs519wrp" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.728 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:25 compute-0 nova_compute[248510]: 2025-12-13 08:21:25.736 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/disk.config 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:26 compute-0 nova_compute[248510]: 2025-12-13 08:21:26.345 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/disk.config 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:26 compute-0 nova_compute[248510]: 2025-12-13 08:21:26.347 248514 INFO nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Deleting local config drive /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/disk.config because it was imported into RBD.
Dec 13 08:21:26 compute-0 systemd-machined[210538]: New machine qemu-31-instance-0000001a.
Dec 13 08:21:26 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-0000001a.
Dec 13 08:21:26 compute-0 ceph-mon[76537]: pgmap v1666: 321 pgs: 321 active+clean; 420 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 6.2 MiB/s wr, 276 op/s
Dec 13 08:21:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2144514618' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:26 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 13 08:21:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 420 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.048 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614087.0484254, 9b6188af-75f0-4213-89c2-bd3eb72960b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.050 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] VM Resumed (Lifecycle Event)
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.055 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.057 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.062 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Instance spawned successfully.
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.063 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.089 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.094 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.106 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.107 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.107 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.107 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.108 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.110 248514 INFO nova.compute.manager [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Terminating instance
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.111 248514 DEBUG nova.compute.manager [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.114 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.115 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.116 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.117 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.118 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.119 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.127 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.128 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614087.0485342, 9b6188af-75f0-4213-89c2-bd3eb72960b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.129 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] VM Started (Lifecycle Event)
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.183 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.188 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:21:27 compute-0 kernel: tap4b46a3fe-06 (unregistering): left promiscuous mode
Dec 13 08:21:27 compute-0 NetworkManager[50376]: <info>  [1765614087.2156] device (tap4b46a3fe-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00155|binding|INFO|Releasing lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 from this chassis (sb_readonly=0)
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00156|binding|INFO|Setting lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 down in Southbound
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00157|binding|INFO|Removing iface tap4b46a3fe-06 ovn-installed in OVS
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.237 248514 DEBUG nova.compute.manager [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-changed-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.238 248514 DEBUG nova.compute.manager [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing instance network info cache due to event network-changed-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.238 248514 DEBUG oslo_concurrency.lockutils [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.238 248514 DEBUG oslo_concurrency.lockutils [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.238 248514 DEBUG nova.network.neutron [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing network info cache for port 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.242 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:e5:d6 10.100.0.8'], port_security=['fa:16:3e:69:e5:d6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6b1e2b65-1398-4af8-9e8a-a8b99630eef8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4b46a3fe-06cf-4169-9e52-49c6d076be13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.243 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4b46a3fe-06cf-4169-9e52-49c6d076be13 in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.245 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.249 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.263 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[564cb8cd-8492-41d5-8956-87d4397c992a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.270 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.285 248514 INFO nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Took 6.19 seconds to spawn the instance on the hypervisor.
Dec 13 08:21:27 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Deactivated successfully.
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.286 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:27 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Consumed 21.317s CPU time.
Dec 13 08:21:27 compute-0 systemd-machined[210538]: Machine qemu-21-instance-00000013 terminated.
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.303 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8366698e-134a-473a-af58-6857d820beed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.306 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[af00807b-b0d3-491e-a0fa-f898c152c243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 kernel: tap4b46a3fe-06: entered promiscuous mode
Dec 13 08:21:27 compute-0 NetworkManager[50376]: <info>  [1765614087.3496] manager: (tap4b46a3fe-06): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Dec 13 08:21:27 compute-0 systemd-udevd[279462]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.349 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ac771df5-9770-43c9-8ea6-fee212935a13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 kernel: tap4b46a3fe-06 (unregistering): left promiscuous mode
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.369 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00158|binding|INFO|Claiming lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 for this chassis.
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00159|binding|INFO|4b46a3fe-06cf-4169-9e52-49c6d076be13: Claiming fa:16:3e:69:e5:d6 10.100.0.8
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.382 248514 INFO nova.virt.libvirt.driver [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Instance destroyed successfully.
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.383 248514 DEBUG nova.objects.instance [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'resources' on Instance uuid 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.385 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:e5:d6 10.100.0.8'], port_security=['fa:16:3e:69:e5:d6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6b1e2b65-1398-4af8-9e8a-a8b99630eef8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4b46a3fe-06cf-4169-9e52-49c6d076be13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.388 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5699bc-7c12-4dfb-ad23-94f9a8a07418]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 22, 'rx_bytes': 994, 'tx_bytes': 1116, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 22, 'rx_bytes': 994, 'tx_bytes': 1116, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279479, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00160|binding|INFO|Setting lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 ovn-installed in OVS
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00161|binding|INFO|Setting lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 up in Southbound
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00162|binding|INFO|Releasing lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 from this chassis (sb_readonly=1)
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00163|if_status|INFO|Dropped 8 log messages in last 88 seconds (most recently, 77 seconds ago) due to excessive rate
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00164|if_status|INFO|Not setting lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 down as sb is readonly
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00165|binding|INFO|Removing iface tap4b46a3fe-06 ovn-installed in OVS
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00166|binding|INFO|Releasing lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 from this chassis (sb_readonly=0)
Dec 13 08:21:27 compute-0 ovn_controller[148476]: 2025-12-13T08:21:27Z|00167|binding|INFO|Setting lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 down in Southbound
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.412 248514 DEBUG nova.virt.libvirt.vif [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1421874999',display_name='tempest-ServersAdminTestJSON-server-1421874999',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1421874999',id=19,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:18:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-fdarv985',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:18:07Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=6b1e2b65-1398-4af8-9e8a-a8b99630eef8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.413 248514 DEBUG nova.network.os_vif_util [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.414 248514 DEBUG nova.network.os_vif_util [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:e5:d6,bridge_name='br-int',has_traffic_filtering=True,id=4b46a3fe-06cf-4169-9e52-49c6d076be13,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b46a3fe-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.415 248514 DEBUG os_vif [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:e5:d6,bridge_name='br-int',has_traffic_filtering=True,id=4b46a3fe-06cf-4169-9e52-49c6d076be13,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b46a3fe-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.415 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:e5:d6 10.100.0.8'], port_security=['fa:16:3e:69:e5:d6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6b1e2b65-1398-4af8-9e8a-a8b99630eef8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4b46a3fe-06cf-4169-9e52-49c6d076be13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.418 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b46a3fe-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.420 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[51fd8a46-b2f3-4aa3-bc78-ec358f8cc83a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279483, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279483, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.422 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.423 248514 INFO nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Took 7.79 seconds to build instance.
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.425 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.426 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.426 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.426 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.427 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.428 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4b46a3fe-06cf-4169-9e52-49c6d076be13 in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.430 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.430 248514 INFO os_vif [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:e5:d6,bridge_name='br-int',has_traffic_filtering=True,id=4b46a3fe-06cf-4169-9e52-49c6d076be13,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b46a3fe-06')
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.446 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8e31fddf-2099-402c-9585-6e9e933d618b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.457 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "9b6188af-75f0-4213-89c2-bd3eb72960b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.484 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[30821045-ef49-46db-a55c-17f9fb506c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.488 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1f56b1fd-b326-4716-a907-b5ab23fc41e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.520 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4b6318-2416-4ae0-9be8-61f47bca7934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.541 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa1c054-945b-437a-9152-18cec614c21a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 24, 'rx_bytes': 994, 'tx_bytes': 1200, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 24, 'rx_bytes': 994, 'tx_bytes': 1200, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279507, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.567 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11948aac-8aa3-4586-9db1-29be516278de]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279508, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279508, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.569 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.572 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.572 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.572 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.573 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.573 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.574 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4b46a3fe-06cf-4169-9e52-49c6d076be13 in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.575 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.598 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5437aea7-fd10-430b-a4e7-0ce9bb2f4738]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.637 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b0073c1c-db41-481b-aaa5-0b80e6edb0cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.644 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[28e8e5ae-5209-4377-b2c9-7ff4b902d68e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.682 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b500be-945e-4419-ba2c-715faeae340a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.715 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a62a0760-de53-48d2-924d-789b4a72d14b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 26, 'rx_bytes': 994, 'tx_bytes': 1284, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 26, 'rx_bytes': 994, 'tx_bytes': 1284, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279515, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.738 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42be9cfe-5ba1-407d-bccc-29101b07d4c6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279516, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279516, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.743 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.746 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.746 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.747 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.747 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.763 248514 INFO nova.virt.libvirt.driver [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Deleting instance files /var/lib/nova/instances/6b1e2b65-1398-4af8-9e8a-a8b99630eef8_del
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.764 248514 INFO nova.virt.libvirt.driver [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Deletion of /var/lib/nova/instances/6b1e2b65-1398-4af8-9e8a-a8b99630eef8_del complete
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.822 248514 INFO nova.compute.manager [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Took 0.71 seconds to destroy the instance on the hypervisor.
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.824 248514 DEBUG oslo.service.loopingcall [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.824 248514 DEBUG nova.compute.manager [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:21:27 compute-0 nova_compute[248510]: 2025-12-13 08:21:27.825 248514 DEBUG nova.network.neutron [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:21:28 compute-0 sshd-session[279242]: Connection closed by 45.156.129.122 port 43493
Dec 13 08:21:28 compute-0 nova_compute[248510]: 2025-12-13 08:21:28.592 248514 DEBUG nova.network.neutron [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:21:28 compute-0 nova_compute[248510]: 2025-12-13 08:21:28.614 248514 INFO nova.compute.manager [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Took 0.79 seconds to deallocate network for instance.
Dec 13 08:21:28 compute-0 nova_compute[248510]: 2025-12-13 08:21:28.666 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:28 compute-0 nova_compute[248510]: 2025-12-13 08:21:28.667 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:28 compute-0 nova_compute[248510]: 2025-12-13 08:21:28.848 248514 DEBUG nova.compute.manager [req-ecfa77a5-f48b-4602-b255-a3d3cb5c478a req-c08a5e02-c37b-4dc9-8dcb-b235da6ab1a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-vif-deleted-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:28 compute-0 nova_compute[248510]: 2025-12-13 08:21:28.870 248514 DEBUG oslo_concurrency.processutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:28 compute-0 ovn_controller[148476]: 2025-12-13T08:21:28Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:0a:44 10.100.0.14
Dec 13 08:21:28 compute-0 ovn_controller[148476]: 2025-12-13T08:21:28Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:0a:44 10.100.0.14
Dec 13 08:21:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 336 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.7 MiB/s wr, 452 op/s
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.273 248514 DEBUG nova.network.neutron [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updated VIF entry in instance network info cache for port 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.274 248514 DEBUG nova.network.neutron [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.299 248514 DEBUG oslo_concurrency.lockutils [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:29 compute-0 ceph-mon[76537]: pgmap v1667: 321 pgs: 321 active+clean; 420 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Dec 13 08:21:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.930 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-vif-unplugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.930 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.930 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.931 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.931 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] No waiting events found dispatching network-vif-unplugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.931 248514 WARNING nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received unexpected event network-vif-unplugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 for instance with vm_state deleted and task_state None.
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.931 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.931 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.931 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] No waiting events found dispatching network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 WARNING nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received unexpected event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 for instance with vm_state deleted and task_state None.
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.933 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] No waiting events found dispatching network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.933 248514 WARNING nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received unexpected event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 for instance with vm_state deleted and task_state None.
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.933 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.933 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:21:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1294930675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.934 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.934 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.934 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] No waiting events found dispatching network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.934 248514 WARNING nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received unexpected event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 for instance with vm_state deleted and task_state None.
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.960 248514 DEBUG oslo_concurrency.processutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.969 248514 DEBUG nova.compute.provider_tree [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.987 248514 DEBUG nova.scheduler.client.report [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:21:29 compute-0 nova_compute[248510]: 2025-12-13 08:21:29.991 248514 DEBUG nova.compute.manager [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.025 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.163 248514 INFO nova.compute.manager [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] instance snapshotting
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.399 248514 INFO nova.scheduler.client.report [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Deleted allocations for instance 6b1e2b65-1398-4af8-9e8a-a8b99630eef8
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.489 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:30 compute-0 ceph-mon[76537]: pgmap v1668: 321 pgs: 321 active+clean; 336 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.7 MiB/s wr, 452 op/s
Dec 13 08:21:30 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1294930675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.610 248514 INFO nova.virt.libvirt.driver [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Beginning live snapshot process
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.864 248514 DEBUG nova.virt.libvirt.imagebackend [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.879 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.882 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.883 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.883 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.884 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.885 248514 INFO nova.compute.manager [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Terminating instance
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.887 248514 DEBUG nova.compute.manager [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:21:30 compute-0 kernel: tapc85d366f-1e (unregistering): left promiscuous mode
Dec 13 08:21:30 compute-0 NetworkManager[50376]: <info>  [1765614090.9402] device (tapc85d366f-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.951 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:30 compute-0 ovn_controller[148476]: 2025-12-13T08:21:30Z|00168|binding|INFO|Releasing lport c85d366f-1e88-41c4-9425-0e5f62b77e99 from this chassis (sb_readonly=0)
Dec 13 08:21:30 compute-0 ovn_controller[148476]: 2025-12-13T08:21:30Z|00169|binding|INFO|Setting lport c85d366f-1e88-41c4-9425-0e5f62b77e99 down in Southbound
Dec 13 08:21:30 compute-0 ovn_controller[148476]: 2025-12-13T08:21:30Z|00170|binding|INFO|Removing iface tapc85d366f-1e ovn-installed in OVS
Dec 13 08:21:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:30.973 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:e7:e6 10.100.0.11'], port_security=['fa:16:3e:25:e7:e6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3fffafca-321d-4611-8940-da963b356ca1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c85d366f-1e88-41c4-9425-0e5f62b77e99) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:21:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:30.974 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c85d366f-1e88-41c4-9425-0e5f62b77e99 in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis
Dec 13 08:21:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 336 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.7 MiB/s wr, 372 op/s
Dec 13 08:21:30 compute-0 ovn_controller[148476]: 2025-12-13T08:21:30Z|00171|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 08:21:30 compute-0 ovn_controller[148476]: 2025-12-13T08:21:30Z|00172|binding|INFO|Releasing lport c8850a7f-53d6-4776-8c0e-7fb474fe52de from this chassis (sb_readonly=0)
Dec 13 08:21:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:30.976 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5
Dec 13 08:21:30 compute-0 nova_compute[248510]: 2025-12-13 08:21:30.990 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:30.999 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[933d588b-cfda-4173-b128-015ff7d5225b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:31 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Deactivated successfully.
Dec 13 08:21:31 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Consumed 22.114s CPU time.
Dec 13 08:21:31 compute-0 systemd-machined[210538]: Machine qemu-20-instance-00000012 terminated.
Dec 13 08:21:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.053 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f1bbd440-6601-48c0-b3b9-ecaa0a5cc62d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.058 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[72f5d070-8167-4647-9ae1-404cecb55c74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.092 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[34ad6154-ccfb-426c-a551-caf694dccd4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.126 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1bed1e77-4fd2-4abc-b2bb-e11323557a7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 28, 'rx_bytes': 994, 'tx_bytes': 1368, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 28, 'rx_bytes': 994, 'tx_bytes': 1368, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279586, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.136 248514 INFO nova.virt.libvirt.driver [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Instance destroyed successfully.
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.137 248514 DEBUG nova.objects.instance [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'resources' on Instance uuid 3fffafca-321d-4611-8940-da963b356ca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:21:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.151 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f5d2548-26e6-4944-8a53-965013944a02]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279596, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279596, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.153 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.155 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.158 248514 DEBUG nova.virt.libvirt.vif [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:17:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-364828820',display_name='tempest-ServersAdminTestJSON-server-364828820',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-364828820',id=18,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:17:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-t7ff1bcv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:17:29Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=3fffafca-321d-4611-8940-da963b356ca1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.158 248514 DEBUG nova.network.os_vif_util [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.160 248514 DEBUG nova.network.os_vif_util [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:25:e7:e6,bridge_name='br-int',has_traffic_filtering=True,id=c85d366f-1e88-41c4-9425-0e5f62b77e99,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc85d366f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.160 248514 DEBUG os_vif [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:e7:e6,bridge_name='br-int',has_traffic_filtering=True,id=c85d366f-1e88-41c4-9425-0e5f62b77e99,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc85d366f-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.162 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.162 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc85d366f-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.163 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.166 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.166 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:21:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.167 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.167 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.175 248514 DEBUG nova.storage.rbd_utils [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] creating snapshot(1829f1286d3742fb834c1b1b06a524cc) on rbd image(07413df5-0bb8-42c2-95ff-13458d598139_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.209 248514 INFO os_vif [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:e7:e6,bridge_name='br-int',has_traffic_filtering=True,id=c85d366f-1e88-41c4-9425-0e5f62b77e99,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc85d366f-1e')
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.489 248514 INFO nova.virt.libvirt.driver [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Deleting instance files /var/lib/nova/instances/3fffafca-321d-4611-8940-da963b356ca1_del
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.490 248514 INFO nova.virt.libvirt.driver [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Deletion of /var/lib/nova/instances/3fffafca-321d-4611-8940-da963b356ca1_del complete
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.542 248514 INFO nova.compute.manager [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Took 0.65 seconds to destroy the instance on the hypervisor.
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.542 248514 DEBUG oslo.service.loopingcall [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.542 248514 DEBUG nova.compute.manager [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.543 248514 DEBUG nova.network.neutron [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:21:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Dec 13 08:21:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Dec 13 08:21:31 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.615 248514 DEBUG nova.storage.rbd_utils [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] cloning vms/07413df5-0bb8-42c2-95ff-13458d598139_disk@1829f1286d3742fb834c1b1b06a524cc to images/d29e6f1b-81ef-4730-8c93-4aec7533b72b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.735 248514 DEBUG nova.storage.rbd_utils [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] flattening images/d29e6f1b-81ef-4730-8c93-4aec7533b72b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:21:31 compute-0 nova_compute[248510]: 2025-12-13 08:21:31.821 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.037 248514 DEBUG nova.compute.manager [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received event network-vif-unplugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.037 248514 DEBUG oslo_concurrency.lockutils [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.038 248514 DEBUG oslo_concurrency.lockutils [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.038 248514 DEBUG oslo_concurrency.lockutils [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.038 248514 DEBUG nova.compute.manager [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] No waiting events found dispatching network-vif-unplugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.038 248514 DEBUG nova.compute.manager [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received event network-vif-unplugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.038 248514 DEBUG nova.compute.manager [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received event network-vif-plugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.038 248514 DEBUG oslo_concurrency.lockutils [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.039 248514 DEBUG oslo_concurrency.lockutils [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.039 248514 DEBUG oslo_concurrency.lockutils [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.039 248514 DEBUG nova.compute.manager [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] No waiting events found dispatching network-vif-plugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.039 248514 WARNING nova.compute.manager [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received unexpected event network-vif-plugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 for instance with vm_state active and task_state deleting.
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.295 248514 DEBUG nova.storage.rbd_utils [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] removing snapshot(1829f1286d3742fb834c1b1b06a524cc) on rbd image(07413df5-0bb8-42c2-95ff-13458d598139_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.466 248514 DEBUG nova.network.neutron [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.488 248514 INFO nova.compute.manager [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Took 0.95 seconds to deallocate network for instance.
Dec 13 08:21:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Dec 13 08:21:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.575 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.576 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:32 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Dec 13 08:21:32 compute-0 ceph-mon[76537]: pgmap v1669: 321 pgs: 321 active+clean; 336 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.7 MiB/s wr, 372 op/s
Dec 13 08:21:32 compute-0 ceph-mon[76537]: osdmap e169: 3 total, 3 up, 3 in
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.603 248514 DEBUG nova.storage.rbd_utils [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] creating snapshot(snap) on rbd image(d29e6f1b-81ef-4730-8c93-4aec7533b72b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:21:32 compute-0 nova_compute[248510]: 2025-12-13 08:21:32.728 248514 DEBUG oslo_concurrency.processutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 345 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 8.6 MiB/s rd, 4.4 MiB/s wr, 433 op/s
Dec 13 08:21:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:21:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676856328' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:33 compute-0 nova_compute[248510]: 2025-12-13 08:21:33.317 248514 DEBUG oslo_concurrency.processutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:33 compute-0 nova_compute[248510]: 2025-12-13 08:21:33.323 248514 DEBUG nova.compute.provider_tree [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:21:33 compute-0 nova_compute[248510]: 2025-12-13 08:21:33.344 248514 DEBUG nova.scheduler.client.report [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:21:33 compute-0 nova_compute[248510]: 2025-12-13 08:21:33.372 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:33 compute-0 sshd-session[279517]: Connection closed by 45.156.129.120 port 49243 [preauth]
Dec 13 08:21:33 compute-0 nova_compute[248510]: 2025-12-13 08:21:33.422 248514 INFO nova.scheduler.client.report [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Deleted allocations for instance 3fffafca-321d-4611-8940-da963b356ca1
Dec 13 08:21:33 compute-0 nova_compute[248510]: 2025-12-13 08:21:33.511 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Dec 13 08:21:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Dec 13 08:21:33 compute-0 ceph-mon[76537]: osdmap e170: 3 total, 3 up, 3 in
Dec 13 08:21:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/676856328' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:33 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Dec 13 08:21:33 compute-0 nova_compute[248510]: 2025-12-13 08:21:33.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:21:33 compute-0 nova_compute[248510]: 2025-12-13 08:21:33.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:21:33 compute-0 nova_compute[248510]: 2025-12-13 08:21:33.775 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.134 248514 DEBUG nova.compute.manager [req-5787667f-7983-409a-8bd2-71ca44dbb3a7 req-5a3c64c2-7b88-40fe-9009-13aa5437b77b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received event network-vif-deleted-c85d366f-1e88-41c4-9425-0e5f62b77e99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.287 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.288 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.288 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.289 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.524 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.525 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.526 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.526 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.527 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.528 248514 INFO nova.compute.manager [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Terminating instance
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.529 248514 DEBUG nova.compute.manager [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:21:34 compute-0 kernel: tap7f8a109e-e2 (unregistering): left promiscuous mode
Dec 13 08:21:34 compute-0 NetworkManager[50376]: <info>  [1765614094.6325] device (tap7f8a109e-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:21:34 compute-0 ovn_controller[148476]: 2025-12-13T08:21:34Z|00173|binding|INFO|Releasing lport 7f8a109e-e262-4847-8430-ac7944dace5c from this chassis (sb_readonly=0)
Dec 13 08:21:34 compute-0 ovn_controller[148476]: 2025-12-13T08:21:34Z|00174|binding|INFO|Setting lport 7f8a109e-e262-4847-8430-ac7944dace5c down in Southbound
Dec 13 08:21:34 compute-0 ovn_controller[148476]: 2025-12-13T08:21:34Z|00175|binding|INFO|Removing iface tap7f8a109e-e2 ovn-installed in OVS
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:34.651 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0a:44 10.100.0.14'], port_security=['fa:16:3e:e6:0a:44 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a14d8b88-7aec-468f-a550-881364e4d95e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7f8a109e-e262-4847-8430-ac7944dace5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:21:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:34.653 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7f8a109e-e262-4847-8430-ac7944dace5c in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis
Dec 13 08:21:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:34.655 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:21:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:34.656 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[332c3a52-846d-4ace-950d-1327fdd641e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:34.657 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 namespace which is not needed anymore
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.665 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:21:34 compute-0 ceph-mon[76537]: pgmap v1672: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 345 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 8.6 MiB/s rd, 4.4 MiB/s wr, 433 op/s
Dec 13 08:21:34 compute-0 ceph-mon[76537]: osdmap e171: 3 total, 3 up, 3 in
Dec 13 08:21:34 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000011.scope: Deactivated successfully.
Dec 13 08:21:34 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000011.scope: Consumed 13.474s CPU time.
Dec 13 08:21:34 compute-0 systemd-machined[210538]: Machine qemu-28-instance-00000011 terminated.
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.788 248514 INFO nova.virt.libvirt.driver [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance destroyed successfully.
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.788 248514 DEBUG nova.objects.instance [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'resources' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.805 248514 DEBUG nova.virt.libvirt.vif [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:17:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-247332834',display_name='tempest-ServersAdminTestJSON-server-247332834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-247332834',id=17,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-88vex2g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virti
o',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:20Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=a14d8b88-7aec-468f-a550-881364e4d95e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.805 248514 DEBUG nova.network.os_vif_util [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.806 248514 DEBUG nova.network.os_vif_util [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.807 248514 DEBUG os_vif [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.810 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.810 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f8a109e-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.838 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:21:34 compute-0 nova_compute[248510]: 2025-12-13 08:21:34.843 248514 INFO os_vif [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2')
Dec 13 08:21:34 compute-0 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [NOTICE]   (272057) : haproxy version is 2.8.14-c23fe91
Dec 13 08:21:34 compute-0 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [NOTICE]   (272057) : path to executable is /usr/sbin/haproxy
Dec 13 08:21:34 compute-0 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [WARNING]  (272057) : Exiting Master process...
Dec 13 08:21:34 compute-0 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [WARNING]  (272057) : Exiting Master process...
Dec 13 08:21:34 compute-0 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [ALERT]    (272057) : Current worker (272059) exited with code 143 (Terminated)
Dec 13 08:21:34 compute-0 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [WARNING]  (272057) : All workers exited. Exiting... (0)
Dec 13 08:21:34 compute-0 systemd[1]: libpod-563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864.scope: Deactivated successfully.
Dec 13 08:21:34 compute-0 podman[279781]: 2025-12-13 08:21:34.863937944 +0000 UTC m=+0.063990577 container died 563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 08:21:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-937067be80a0f92a68338f93cc6d8182d7c2bbc7c8ec6a16edddf5b92b0e5e64-merged.mount: Deactivated successfully.
Dec 13 08:21:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864-userdata-shm.mount: Deactivated successfully.
Dec 13 08:21:34 compute-0 podman[279781]: 2025-12-13 08:21:34.959345704 +0000 UTC m=+0.159398337 container cleanup 563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:21:34 compute-0 systemd[1]: libpod-conmon-563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864.scope: Deactivated successfully.
Dec 13 08:21:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 306 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Dec 13 08:21:35 compute-0 podman[279827]: 2025-12-13 08:21:35.361374216 +0000 UTC m=+0.374140216 container remove 563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 08:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.368 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[45f44cbe-2f28-4557-a1e5-6b9919cac6ab]: (4, ('Sat Dec 13 08:21:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 (563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864)\n563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864\nSat Dec 13 08:21:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 (563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864)\n563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.370 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[81d44609-ae93-468c-829a-42bacd138459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.371 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:21:35 compute-0 nova_compute[248510]: 2025-12-13 08:21:35.373 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:35 compute-0 kernel: tape3139e1f-d0: left promiscuous mode
Dec 13 08:21:35 compute-0 nova_compute[248510]: 2025-12-13 08:21:35.399 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.406 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac734f0-7dac-4eb7-9646-730f7ce62bdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.422 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c58d0d0b-8e61-4875-9db7-e32dbcbde432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.424 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b70149c-607f-4140-8362-cc226511e6c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.442 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c855eaea-9a1b-43e3-85c5-17c9f1ee7772]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641488, 'reachable_time': 32059, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279842, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.444 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.445 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e73d5388-5596-4c57-9c37-d2c46d011847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:21:35 compute-0 systemd[1]: run-netns-ovnmeta\x2de3139e1f\x2dd005\x2d4b75\x2d9ad6\x2d07f3fbaa8fb5.mount: Deactivated successfully.
Dec 13 08:21:35 compute-0 nova_compute[248510]: 2025-12-13 08:21:35.837 248514 INFO nova.virt.libvirt.driver [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Snapshot image upload complete
Dec 13 08:21:35 compute-0 nova_compute[248510]: 2025-12-13 08:21:35.837 248514 INFO nova.compute.manager [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Took 5.67 seconds to snapshot the instance on the hypervisor.
Dec 13 08:21:36 compute-0 sudo[279844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:21:36 compute-0 sudo[279844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:21:36 compute-0 sudo[279844]: pam_unix(sudo:session): session closed for user root
Dec 13 08:21:36 compute-0 sudo[279869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:21:36 compute-0 sudo[279869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.145 248514 INFO nova.virt.libvirt.driver [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deleting instance files /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e_del
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.147 248514 INFO nova.virt.libvirt.driver [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deletion of /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e_del complete
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.195 248514 INFO nova.compute.manager [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Took 1.67 seconds to destroy the instance on the hypervisor.
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.196 248514 DEBUG oslo.service.loopingcall [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.196 248514 DEBUG nova.compute.manager [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.196 248514 DEBUG nova.network.neutron [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:21:36 compute-0 ovn_controller[148476]: 2025-12-13T08:21:36Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:c0:80 10.100.0.7
Dec 13 08:21:36 compute-0 ovn_controller[148476]: 2025-12-13T08:21:36Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:c0:80 10.100.0.7
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.498 248514 DEBUG nova.compute.manager [req-47826ce0-3b3d-41f0-a6f1-ed5782c6a8fe req-e7e96be3-0368-4877-a323-7109b3ad2967 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-unplugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.498 248514 DEBUG oslo_concurrency.lockutils [req-47826ce0-3b3d-41f0-a6f1-ed5782c6a8fe req-e7e96be3-0368-4877-a323-7109b3ad2967 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.499 248514 DEBUG oslo_concurrency.lockutils [req-47826ce0-3b3d-41f0-a6f1-ed5782c6a8fe req-e7e96be3-0368-4877-a323-7109b3ad2967 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.499 248514 DEBUG oslo_concurrency.lockutils [req-47826ce0-3b3d-41f0-a6f1-ed5782c6a8fe req-e7e96be3-0368-4877-a323-7109b3ad2967 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.499 248514 DEBUG nova.compute.manager [req-47826ce0-3b3d-41f0-a6f1-ed5782c6a8fe req-e7e96be3-0368-4877-a323-7109b3ad2967 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-unplugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.499 248514 DEBUG nova.compute.manager [req-47826ce0-3b3d-41f0-a6f1-ed5782c6a8fe req-e7e96be3-0368-4877-a323-7109b3ad2967 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-unplugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:36 compute-0 ceph-mon[76537]: pgmap v1674: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 306 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Dec 13 08:21:36 compute-0 sudo[279869]: pam_unix(sudo:session): session closed for user root
Dec 13 08:21:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:21:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:21:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:21:36 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:21:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:21:36 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:21:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:21:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:21:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:21:36 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:21:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:21:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:21:36 compute-0 sudo[279925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:21:36 compute-0 sudo[279925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:21:36 compute-0 sudo[279925]: pam_unix(sudo:session): session closed for user root
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.876 248514 DEBUG nova.network.neutron [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.902 248514 INFO nova.compute.manager [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Took 0.71 seconds to deallocate network for instance.
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.964 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.965 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 306 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.979 248514 DEBUG nova.compute.manager [req-aded1f0b-a200-48e3-96fc-fde7e7bf2425 req-a962aacb-cc55-4d15-9ecc-d7c3f8f27a5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-deleted-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:36 compute-0 sudo[279950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:21:36 compute-0 sudo[279950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:21:36 compute-0 nova_compute[248510]: 2025-12-13 08:21:36.996 248514 DEBUG nova.scheduler.client.report [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.017 248514 DEBUG nova.scheduler.client.report [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.018 248514 DEBUG nova.compute.provider_tree [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.048 248514 DEBUG nova.scheduler.client.report [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.075 248514 DEBUG nova.scheduler.client.report [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.179 248514 DEBUG oslo_concurrency.processutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.212 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614082.2091837, 0d64e209-19e7-4ad3-a790-43d04d832838 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.214 248514 INFO nova.compute.manager [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] VM Stopped (Lifecycle Event)
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.257 248514 DEBUG nova.compute.manager [None req-d7602709-8c1c-4d1d-8b6d-1112f487ee50 - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.324 248514 DEBUG nova.compute.manager [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:37 compute-0 podman[279986]: 2025-12-13 08:21:37.379374251 +0000 UTC m=+0.095465803 container create 0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.388 248514 INFO nova.compute.manager [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] instance snapshotting
Dec 13 08:21:37 compute-0 podman[279986]: 2025-12-13 08:21:37.314321079 +0000 UTC m=+0.030412631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:21:37 compute-0 systemd[1]: Started libpod-conmon-0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1.scope.
Dec 13 08:21:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:21:37 compute-0 podman[279986]: 2025-12-13 08:21:37.527633363 +0000 UTC m=+0.243724935 container init 0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:21:37 compute-0 podman[279986]: 2025-12-13 08:21:37.539567737 +0000 UTC m=+0.255659279 container start 0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:21:37 compute-0 fervent_jackson[280021]: 167 167
Dec 13 08:21:37 compute-0 systemd[1]: libpod-0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1.scope: Deactivated successfully.
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.619 248514 INFO nova.virt.libvirt.driver [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Beginning live snapshot process
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.648 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Updating instance_info_cache with network_info: [{"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:21:37 compute-0 nova_compute[248510]: 2025-12-13 08:21:37.799 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:21:37 compute-0 podman[279986]: 2025-12-13 08:21:37.831535418 +0000 UTC m=+0.547626990 container attach 0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Dec 13 08:21:37 compute-0 podman[279986]: 2025-12-13 08:21:37.832452851 +0000 UTC m=+0.548544413 container died 0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:21:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:21:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3615211835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:21:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:21:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:21:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:21:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:21:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:21:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf1b48197f33311cf523717315ddad01124d303a6ec05a44809cc41f24e07cbd-merged.mount: Deactivated successfully.
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.310 248514 DEBUG oslo_concurrency.processutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.325 248514 DEBUG nova.virt.libvirt.imagebackend [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.331 248514 DEBUG nova.compute.provider_tree [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.372 248514 DEBUG nova.scheduler.client.report [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.402 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.453 248514 INFO nova.scheduler.client.report [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Deleted allocations for instance a14d8b88-7aec-468f-a550-881364e4d95e
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.551 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.599 248514 DEBUG nova.storage.rbd_utils [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] creating snapshot(1a2ff40d88ae4e96ad7be6f96c000c7c) on rbd image(9b6188af-75f0-4213-89c2-bd3eb72960b7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:21:38 compute-0 podman[279986]: 2025-12-13 08:21:38.65942888 +0000 UTC m=+1.375520442 container remove 0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 13 08:21:38 compute-0 systemd[1]: libpod-conmon-0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1.scope: Deactivated successfully.
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.724 248514 DEBUG nova.compute.manager [req-6ff438ce-af54-4ad8-acfd-2a49f7c4810a req-09247596-ca63-43b3-a2b4-c255ab26a624 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.725 248514 DEBUG oslo_concurrency.lockutils [req-6ff438ce-af54-4ad8-acfd-2a49f7c4810a req-09247596-ca63-43b3-a2b4-c255ab26a624 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.725 248514 DEBUG oslo_concurrency.lockutils [req-6ff438ce-af54-4ad8-acfd-2a49f7c4810a req-09247596-ca63-43b3-a2b4-c255ab26a624 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.726 248514 DEBUG oslo_concurrency.lockutils [req-6ff438ce-af54-4ad8-acfd-2a49f7c4810a req-09247596-ca63-43b3-a2b4-c255ab26a624 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.726 248514 DEBUG nova.compute.manager [req-6ff438ce-af54-4ad8-acfd-2a49f7c4810a req-09247596-ca63-43b3-a2b4-c255ab26a624 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.726 248514 WARNING nova.compute.manager [req-6ff438ce-af54-4ad8-acfd-2a49f7c4810a req-09247596-ca63-43b3-a2b4-c255ab26a624 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received unexpected event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with vm_state deleted and task_state None.
Dec 13 08:21:38 compute-0 nova_compute[248510]: 2025-12-13 08:21:38.791 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:21:38 compute-0 podman[280100]: 2025-12-13 08:21:38.829042328 +0000 UTC m=+0.027172270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:21:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 293 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 9.9 MiB/s wr, 425 op/s
Dec 13 08:21:39 compute-0 podman[280100]: 2025-12-13 08:21:39.138667914 +0000 UTC m=+0.336797836 container create 8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 08:21:39 compute-0 ceph-mon[76537]: pgmap v1675: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 306 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Dec 13 08:21:39 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3615211835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:39 compute-0 nova_compute[248510]: 2025-12-13 08:21:39.383 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:39 compute-0 systemd[1]: Started libpod-conmon-8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19.scope.
Dec 13 08:21:39 compute-0 podman[280115]: 2025-12-13 08:21:39.442763855 +0000 UTC m=+0.261925243 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 08:21:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69fe06e57cca821899762daaee06dae2af9a602bb536d4847a6382805736894/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69fe06e57cca821899762daaee06dae2af9a602bb536d4847a6382805736894/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69fe06e57cca821899762daaee06dae2af9a602bb536d4847a6382805736894/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69fe06e57cca821899762daaee06dae2af9a602bb536d4847a6382805736894/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69fe06e57cca821899762daaee06dae2af9a602bb536d4847a6382805736894/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:39 compute-0 podman[280116]: 2025-12-13 08:21:39.487948168 +0000 UTC m=+0.303387714 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:21:39 compute-0 podman[280100]: 2025-12-13 08:21:39.488898671 +0000 UTC m=+0.687028613 container init 8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:21:39 compute-0 podman[280114]: 2025-12-13 08:21:39.489134847 +0000 UTC m=+0.308435359 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 08:21:39 compute-0 podman[280100]: 2025-12-13 08:21:39.498128108 +0000 UTC m=+0.696258030 container start 8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:21:39 compute-0 podman[280100]: 2025-12-13 08:21:39.518297175 +0000 UTC m=+0.716427117 container attach 8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:21:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:21:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Dec 13 08:21:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Dec 13 08:21:39 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Dec 13 08:21:39 compute-0 nova_compute[248510]: 2025-12-13 08:21:39.838 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:40 compute-0 musing_davinci[280165]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:21:40 compute-0 musing_davinci[280165]: --> All data devices are unavailable
Dec 13 08:21:40 compute-0 systemd[1]: libpod-8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19.scope: Deactivated successfully.
Dec 13 08:21:40 compute-0 podman[280100]: 2025-12-13 08:21:40.047789907 +0000 UTC m=+1.245919829 container died 8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 08:21:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:21:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:21:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:21:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:21:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:21:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:21:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 293 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.9 MiB/s wr, 353 op/s
Dec 13 08:21:41 compute-0 ceph-mon[76537]: pgmap v1676: 321 pgs: 321 active+clean; 293 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 9.9 MiB/s wr, 425 op/s
Dec 13 08:21:41 compute-0 ceph-mon[76537]: osdmap e172: 3 total, 3 up, 3 in
Dec 13 08:21:41 compute-0 nova_compute[248510]: 2025-12-13 08:21:41.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:41 compute-0 nova_compute[248510]: 2025-12-13 08:21:41.720 248514 DEBUG nova.storage.rbd_utils [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] cloning vms/9b6188af-75f0-4213-89c2-bd3eb72960b7_disk@1a2ff40d88ae4e96ad7be6f96c000c7c to images/1b5bf03c-4e74-4ede-b890-c9dcb4210f68 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:21:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f69fe06e57cca821899762daaee06dae2af9a602bb536d4847a6382805736894-merged.mount: Deactivated successfully.
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.229 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.230 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.230 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.230 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.381 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614087.3801205, 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.381 248514 INFO nova.compute.manager [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] VM Stopped (Lifecycle Event)
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.414 248514 DEBUG nova.compute.manager [None req-44a69ce5-d245-4e8c-be5c-abcd7d532ef1 - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:42 compute-0 podman[280100]: 2025-12-13 08:21:42.618129517 +0000 UTC m=+3.816259459 container remove 8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 08:21:42 compute-0 systemd[1]: libpod-conmon-8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19.scope: Deactivated successfully.
Dec 13 08:21:42 compute-0 sudo[279950]: pam_unix(sudo:session): session closed for user root
Dec 13 08:21:42 compute-0 ceph-mon[76537]: pgmap v1678: 321 pgs: 321 active+clean; 293 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.9 MiB/s wr, 353 op/s
Dec 13 08:21:42 compute-0 sudo[280246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:21:42 compute-0 sudo[280246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:21:42 compute-0 sudo[280246]: pam_unix(sudo:session): session closed for user root
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.800 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.801 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:21:42 compute-0 nova_compute[248510]: 2025-12-13 08:21:42.801 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:42 compute-0 sudo[280271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:21:42 compute-0 sudo[280271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:21:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 293 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.9 MiB/s wr, 342 op/s
Dec 13 08:21:43 compute-0 podman[280327]: 2025-12-13 08:21:43.125379941 +0000 UTC m=+0.029722573 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:21:43 compute-0 podman[280327]: 2025-12-13 08:21:43.265322858 +0000 UTC m=+0.169665450 container create bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Dec 13 08:21:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:21:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/406369122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:43 compute-0 nova_compute[248510]: 2025-12-13 08:21:43.364 248514 DEBUG nova.storage.rbd_utils [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] flattening images/1b5bf03c-4e74-4ede-b890-c9dcb4210f68 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:21:43 compute-0 systemd[1]: Started libpod-conmon-bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f.scope.
Dec 13 08:21:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:21:43 compute-0 nova_compute[248510]: 2025-12-13 08:21:43.541 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:43 compute-0 podman[280327]: 2025-12-13 08:21:43.54268387 +0000 UTC m=+0.447026482 container init bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_merkle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:21:43 compute-0 podman[280327]: 2025-12-13 08:21:43.554195593 +0000 UTC m=+0.458538185 container start bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Dec 13 08:21:43 compute-0 vigorous_merkle[280364]: 167 167
Dec 13 08:21:43 compute-0 systemd[1]: libpod-bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f.scope: Deactivated successfully.
Dec 13 08:21:43 compute-0 podman[280327]: 2025-12-13 08:21:43.732867244 +0000 UTC m=+0.637209926 container attach bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_merkle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 08:21:43 compute-0 podman[280327]: 2025-12-13 08:21:43.734124875 +0000 UTC m=+0.638467467 container died bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_merkle, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 08:21:43 compute-0 ovn_controller[148476]: 2025-12-13T08:21:43Z|00176|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 08:21:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/406369122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:43 compute-0 nova_compute[248510]: 2025-12-13 08:21:43.816 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:21:43 compute-0 nova_compute[248510]: 2025-12-13 08:21:43.817 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:21:43 compute-0 nova_compute[248510]: 2025-12-13 08:21:43.825 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:21:43 compute-0 nova_compute[248510]: 2025-12-13 08:21:43.825 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:21:43 compute-0 nova_compute[248510]: 2025-12-13 08:21:43.832 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:21:43 compute-0 nova_compute[248510]: 2025-12-13 08:21:43.833 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:21:43 compute-0 nova_compute[248510]: 2025-12-13 08:21:43.838 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:44 compute-0 nova_compute[248510]: 2025-12-13 08:21:44.034 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:21:44 compute-0 nova_compute[248510]: 2025-12-13 08:21:44.037 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3723MB free_disk=59.8763771802187GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:21:44 compute-0 nova_compute[248510]: 2025-12-13 08:21:44.037 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:44 compute-0 nova_compute[248510]: 2025-12-13 08:21:44.037 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6d86184a97ad6e115bce82d4d6ff9e2dad79904073f715d4a4c1883a895a56b-merged.mount: Deactivated successfully.
Dec 13 08:21:44 compute-0 nova_compute[248510]: 2025-12-13 08:21:44.295 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:21:44 compute-0 nova_compute[248510]: 2025-12-13 08:21:44.296 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 07413df5-0bb8-42c2-95ff-13458d598139 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:21:44 compute-0 nova_compute[248510]: 2025-12-13 08:21:44.296 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9b6188af-75f0-4213-89c2-bd3eb72960b7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:21:44 compute-0 nova_compute[248510]: 2025-12-13 08:21:44.296 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:21:44 compute-0 nova_compute[248510]: 2025-12-13 08:21:44.296 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:21:44 compute-0 nova_compute[248510]: 2025-12-13 08:21:44.376 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:44 compute-0 nova_compute[248510]: 2025-12-13 08:21:44.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:21:44 compute-0 nova_compute[248510]: 2025-12-13 08:21:44.841 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 317 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.2 MiB/s wr, 261 op/s
Dec 13 08:21:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:21:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3231662131' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:45 compute-0 nova_compute[248510]: 2025-12-13 08:21:45.069 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:45 compute-0 nova_compute[248510]: 2025-12-13 08:21:45.077 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:21:45 compute-0 nova_compute[248510]: 2025-12-13 08:21:45.094 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:21:45 compute-0 podman[280327]: 2025-12-13 08:21:45.098366036 +0000 UTC m=+2.002708648 container remove bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:21:45 compute-0 nova_compute[248510]: 2025-12-13 08:21:45.121 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:21:45 compute-0 nova_compute[248510]: 2025-12-13 08:21:45.122 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:45 compute-0 ceph-mon[76537]: pgmap v1679: 321 pgs: 321 active+clean; 293 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.9 MiB/s wr, 342 op/s
Dec 13 08:21:45 compute-0 systemd[1]: libpod-conmon-bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f.scope: Deactivated successfully.
Dec 13 08:21:45 compute-0 podman[280411]: 2025-12-13 08:21:45.283143357 +0000 UTC m=+0.034866620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:21:45 compute-0 podman[280411]: 2025-12-13 08:21:45.418778428 +0000 UTC m=+0.170501661 container create 9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 13 08:21:45 compute-0 systemd[1]: Started libpod-conmon-9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab.scope.
Dec 13 08:21:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab07785f0dc02080facdf63b43305c60dbf69c1c75360ac920fe10bfbb68d1e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab07785f0dc02080facdf63b43305c60dbf69c1c75360ac920fe10bfbb68d1e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab07785f0dc02080facdf63b43305c60dbf69c1c75360ac920fe10bfbb68d1e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab07785f0dc02080facdf63b43305c60dbf69c1c75360ac920fe10bfbb68d1e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:45 compute-0 nova_compute[248510]: 2025-12-13 08:21:45.586 248514 DEBUG nova.storage.rbd_utils [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] removing snapshot(1a2ff40d88ae4e96ad7be6f96c000c7c) on rbd image(9b6188af-75f0-4213-89c2-bd3eb72960b7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:21:45 compute-0 podman[280411]: 2025-12-13 08:21:45.594988808 +0000 UTC m=+0.346712071 container init 9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 08:21:45 compute-0 podman[280411]: 2025-12-13 08:21:45.604621596 +0000 UTC m=+0.356344839 container start 9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:21:45 compute-0 podman[280411]: 2025-12-13 08:21:45.616398716 +0000 UTC m=+0.368121959 container attach 9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:21:45 compute-0 lucid_yonath[280435]: {
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:     "0": [
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:         {
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "devices": [
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "/dev/loop3"
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             ],
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_name": "ceph_lv0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_size": "21470642176",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "name": "ceph_lv0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "tags": {
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.cluster_name": "ceph",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.crush_device_class": "",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.encrypted": "0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.objectstore": "bluestore",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.osd_id": "0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.type": "block",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.vdo": "0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.with_tpm": "0"
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             },
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "type": "block",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "vg_name": "ceph_vg0"
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:         }
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:     ],
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:     "1": [
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:         {
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "devices": [
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "/dev/loop4"
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             ],
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_name": "ceph_lv1",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_size": "21470642176",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "name": "ceph_lv1",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "tags": {
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.cluster_name": "ceph",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.crush_device_class": "",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.encrypted": "0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.objectstore": "bluestore",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.osd_id": "1",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.type": "block",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.vdo": "0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.with_tpm": "0"
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             },
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "type": "block",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "vg_name": "ceph_vg1"
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:         }
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:     ],
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:     "2": [
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:         {
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "devices": [
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "/dev/loop5"
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             ],
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_name": "ceph_lv2",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_size": "21470642176",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "name": "ceph_lv2",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "tags": {
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.cluster_name": "ceph",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.crush_device_class": "",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.encrypted": "0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.objectstore": "bluestore",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.osd_id": "2",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.type": "block",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.vdo": "0",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:                 "ceph.with_tpm": "0"
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             },
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "type": "block",
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:             "vg_name": "ceph_vg2"
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:         }
Dec 13 08:21:45 compute-0 lucid_yonath[280435]:     ]
Dec 13 08:21:45 compute-0 lucid_yonath[280435]: }
Dec 13 08:21:45 compute-0 systemd[1]: libpod-9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab.scope: Deactivated successfully.
Dec 13 08:21:45 compute-0 podman[280411]: 2025-12-13 08:21:45.934828089 +0000 UTC m=+0.686551332 container died 9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:21:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab07785f0dc02080facdf63b43305c60dbf69c1c75360ac920fe10bfbb68d1e5-merged.mount: Deactivated successfully.
Dec 13 08:21:45 compute-0 podman[280411]: 2025-12-13 08:21:45.976150297 +0000 UTC m=+0.727873540 container remove 9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 08:21:45 compute-0 systemd[1]: libpod-conmon-9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab.scope: Deactivated successfully.
Dec 13 08:21:46 compute-0 sudo[280271]: pam_unix(sudo:session): session closed for user root
Dec 13 08:21:46 compute-0 sudo[280467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:21:46 compute-0 sudo[280467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:21:46 compute-0 sudo[280467]: pam_unix(sudo:session): session closed for user root
Dec 13 08:21:46 compute-0 nova_compute[248510]: 2025-12-13 08:21:46.116 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:21:46 compute-0 nova_compute[248510]: 2025-12-13 08:21:46.130 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614091.1272292, 3fffafca-321d-4611-8940-da963b356ca1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:21:46 compute-0 nova_compute[248510]: 2025-12-13 08:21:46.130 248514 INFO nova.compute.manager [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] VM Stopped (Lifecycle Event)
Dec 13 08:21:46 compute-0 ceph-mon[76537]: pgmap v1680: 321 pgs: 321 active+clean; 317 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.2 MiB/s wr, 261 op/s
Dec 13 08:21:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3231662131' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:46 compute-0 nova_compute[248510]: 2025-12-13 08:21:46.143 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:21:46 compute-0 nova_compute[248510]: 2025-12-13 08:21:46.163 248514 DEBUG nova.compute.manager [None req-3550348f-188e-4ceb-9477-b6ab80ea8317 - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:46 compute-0 sudo[280492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:21:46 compute-0 sudo[280492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:21:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Dec 13 08:21:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Dec 13 08:21:46 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Dec 13 08:21:46 compute-0 nova_compute[248510]: 2025-12-13 08:21:46.436 248514 DEBUG nova.storage.rbd_utils [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] creating snapshot(snap) on rbd image(1b5bf03c-4e74-4ede-b890-c9dcb4210f68) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:21:46 compute-0 podman[280529]: 2025-12-13 08:21:46.490897476 +0000 UTC m=+0.042913708 container create ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:21:46 compute-0 systemd[1]: Started libpod-conmon-ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3.scope.
Dec 13 08:21:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:21:46 compute-0 podman[280529]: 2025-12-13 08:21:46.470716508 +0000 UTC m=+0.022732770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:21:46 compute-0 podman[280529]: 2025-12-13 08:21:46.627170202 +0000 UTC m=+0.179186464 container init ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:21:46 compute-0 podman[280529]: 2025-12-13 08:21:46.63683568 +0000 UTC m=+0.188851922 container start ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:21:46 compute-0 jovial_fermat[280560]: 167 167
Dec 13 08:21:46 compute-0 systemd[1]: libpod-ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3.scope: Deactivated successfully.
Dec 13 08:21:46 compute-0 podman[280529]: 2025-12-13 08:21:46.641902925 +0000 UTC m=+0.193919257 container attach ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 08:21:46 compute-0 podman[280529]: 2025-12-13 08:21:46.645738819 +0000 UTC m=+0.197755051 container died ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 08:21:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a027d176d76e75f24cd42c1fd0e3c6d6cac7c3ea8530a1f1948bdb32eb3d63b-merged.mount: Deactivated successfully.
Dec 13 08:21:46 compute-0 podman[280529]: 2025-12-13 08:21:46.698237762 +0000 UTC m=+0.250253994 container remove ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 08:21:46 compute-0 systemd[1]: libpod-conmon-ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3.scope: Deactivated successfully.
Dec 13 08:21:46 compute-0 podman[280583]: 2025-12-13 08:21:46.941581936 +0000 UTC m=+0.095153295 container create cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ride, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:21:46 compute-0 podman[280583]: 2025-12-13 08:21:46.875584511 +0000 UTC m=+0.029155960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:21:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 317 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 827 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Dec 13 08:21:46 compute-0 systemd[1]: Started libpod-conmon-cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128.scope.
Dec 13 08:21:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e26afcea6b15848d23cdc5167579d6b583640a8fec0913288ffef0434393f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e26afcea6b15848d23cdc5167579d6b583640a8fec0913288ffef0434393f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e26afcea6b15848d23cdc5167579d6b583640a8fec0913288ffef0434393f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e26afcea6b15848d23cdc5167579d6b583640a8fec0913288ffef0434393f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:21:47 compute-0 podman[280583]: 2025-12-13 08:21:47.0412092 +0000 UTC m=+0.194780589 container init cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 08:21:47 compute-0 podman[280583]: 2025-12-13 08:21:47.050568201 +0000 UTC m=+0.204139560 container start cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ride, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:21:47 compute-0 podman[280583]: 2025-12-13 08:21:47.054489667 +0000 UTC m=+0.208061016 container attach cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ride, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:21:47 compute-0 ovn_controller[148476]: 2025-12-13T08:21:47Z|00177|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 08:21:47 compute-0 nova_compute[248510]: 2025-12-13 08:21:47.389 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Dec 13 08:21:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Dec 13 08:21:47 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Dec 13 08:21:47 compute-0 ceph-mon[76537]: osdmap e173: 3 total, 3 up, 3 in
Dec 13 08:21:47 compute-0 lvm[280678]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:21:47 compute-0 lvm[280677]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:21:47 compute-0 lvm[280677]: VG ceph_vg0 finished
Dec 13 08:21:47 compute-0 lvm[280678]: VG ceph_vg1 finished
Dec 13 08:21:47 compute-0 lvm[280680]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:21:47 compute-0 lvm[280680]: VG ceph_vg2 finished
Dec 13 08:21:47 compute-0 friendly_ride[280599]: {}
Dec 13 08:21:47 compute-0 systemd[1]: libpod-cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128.scope: Deactivated successfully.
Dec 13 08:21:47 compute-0 systemd[1]: libpod-cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128.scope: Consumed 1.420s CPU time.
Dec 13 08:21:47 compute-0 podman[280583]: 2025-12-13 08:21:47.956727699 +0000 UTC m=+1.110299078 container died cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 08:21:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-33e26afcea6b15848d23cdc5167579d6b583640a8fec0913288ffef0434393f4-merged.mount: Deactivated successfully.
Dec 13 08:21:48 compute-0 podman[280583]: 2025-12-13 08:21:48.022425008 +0000 UTC m=+1.175996367 container remove cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ride, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 08:21:48 compute-0 systemd[1]: libpod-conmon-cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128.scope: Deactivated successfully.
Dec 13 08:21:48 compute-0 sudo[280492]: pam_unix(sudo:session): session closed for user root
Dec 13 08:21:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:21:48 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:21:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:21:48 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:21:48 compute-0 sudo[280698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:21:48 compute-0 sudo[280698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:21:48 compute-0 sudo[280698]: pam_unix(sudo:session): session closed for user root
Dec 13 08:21:48 compute-0 ceph-mon[76537]: pgmap v1682: 321 pgs: 321 active+clean; 317 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 827 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Dec 13 08:21:48 compute-0 ceph-mon[76537]: osdmap e174: 3 total, 3 up, 3 in
Dec 13 08:21:48 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:21:48 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:21:48 compute-0 nova_compute[248510]: 2025-12-13 08:21:48.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:21:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.5 MiB/s wr, 200 op/s
Dec 13 08:21:49 compute-0 nova_compute[248510]: 2025-12-13 08:21:49.257 248514 INFO nova.virt.libvirt.driver [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Snapshot image upload complete
Dec 13 08:21:49 compute-0 nova_compute[248510]: 2025-12-13 08:21:49.257 248514 INFO nova.compute.manager [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Took 11.87 seconds to snapshot the instance on the hypervisor.
Dec 13 08:21:49 compute-0 nova_compute[248510]: 2025-12-13 08:21:49.386 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:21:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Dec 13 08:21:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Dec 13 08:21:49 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Dec 13 08:21:49 compute-0 nova_compute[248510]: 2025-12-13 08:21:49.776 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614094.7748296, a14d8b88-7aec-468f-a550-881364e4d95e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:21:49 compute-0 nova_compute[248510]: 2025-12-13 08:21:49.777 248514 INFO nova.compute.manager [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] VM Stopped (Lifecycle Event)
Dec 13 08:21:49 compute-0 nova_compute[248510]: 2025-12-13 08:21:49.844 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:50 compute-0 ceph-mon[76537]: pgmap v1684: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.5 MiB/s wr, 200 op/s
Dec 13 08:21:50 compute-0 ceph-mon[76537]: osdmap e175: 3 total, 3 up, 3 in
Dec 13 08:21:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.2 MiB/s wr, 177 op/s
Dec 13 08:21:52 compute-0 nova_compute[248510]: 2025-12-13 08:21:52.562 248514 DEBUG nova.compute.manager [None req-961f4ab9-ab6f-4cca-aeb4-147677bc1314 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:52 compute-0 ceph-mon[76537]: pgmap v1686: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.2 MiB/s wr, 177 op/s
Dec 13 08:21:52 compute-0 nova_compute[248510]: 2025-12-13 08:21:52.939 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:52 compute-0 nova_compute[248510]: 2025-12-13 08:21:52.940 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:52 compute-0 nova_compute[248510]: 2025-12-13 08:21:52.959 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:21:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.8 MiB/s wr, 177 op/s
Dec 13 08:21:53 compute-0 nova_compute[248510]: 2025-12-13 08:21:53.044 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:53 compute-0 nova_compute[248510]: 2025-12-13 08:21:53.045 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:53 compute-0 nova_compute[248510]: 2025-12-13 08:21:53.052 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:21:53 compute-0 nova_compute[248510]: 2025-12-13 08:21:53.052 248514 INFO nova.compute.claims [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:21:53 compute-0 nova_compute[248510]: 2025-12-13 08:21:53.208 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:21:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3755280805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:53 compute-0 nova_compute[248510]: 2025-12-13 08:21:53.774 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:53 compute-0 nova_compute[248510]: 2025-12-13 08:21:53.783 248514 DEBUG nova.compute.provider_tree [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:21:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3755280805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:21:53 compute-0 nova_compute[248510]: 2025-12-13 08:21:53.928 248514 DEBUG nova.scheduler.client.report [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:21:53 compute-0 nova_compute[248510]: 2025-12-13 08:21:53.965 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:53 compute-0 nova_compute[248510]: 2025-12-13 08:21:53.966 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.022 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.022 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.054 248514 INFO nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.097 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.198 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.199 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.200 248514 INFO nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Creating image(s)
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.222 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.248 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.273 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.278 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.352 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.354 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.354 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.354 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.375 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.379 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f983ed7f-13a4-496d-b8e9-60768d90efe6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.409 248514 DEBUG nova.policy [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e8aa7edd2151436caa0fd25f361298fd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2495263e4f944deda2647b578d06bb21', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.478 248514 DEBUG nova.compute.manager [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.544 248514 INFO nova.compute.manager [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] instance snapshotting
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.673 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f983ed7f-13a4-496d-b8e9-60768d90efe6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:21:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.746 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] resizing rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:21:54 compute-0 ceph-mon[76537]: pgmap v1687: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.8 MiB/s wr, 177 op/s
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.839 248514 DEBUG nova.objects.instance [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lazy-loading 'migration_context' on Instance uuid f983ed7f-13a4-496d-b8e9-60768d90efe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.843 248514 INFO nova.virt.libvirt.driver [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Beginning live snapshot process
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.871 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.872 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Ensure instance console log exists: /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.873 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.873 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.873 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Dec 13 08:21:54 compute-0 nova_compute[248510]: 2025-12-13 08:21:54.988 248514 DEBUG nova.virt.libvirt.imagebackend [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:21:55 compute-0 nova_compute[248510]: 2025-12-13 08:21:55.282 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Successfully created port: b07d4534-1cb5-41ec-b0c4-3e820159fe8e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:21:55 compute-0 nova_compute[248510]: 2025-12-13 08:21:55.329 248514 DEBUG nova.storage.rbd_utils [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] creating snapshot(10d5fc17bda04d4a89230048af381972) on rbd image(07413df5-0bb8-42c2-95ff-13458d598139_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:21:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:55.403 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:55.404 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:21:55.404 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:21:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Dec 13 08:21:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Dec 13 08:21:55 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Dec 13 08:21:56 compute-0 ceph-mon[76537]: pgmap v1688: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Dec 13 08:21:56 compute-0 ceph-mon[76537]: osdmap e176: 3 total, 3 up, 3 in
Dec 13 08:21:56 compute-0 nova_compute[248510]: 2025-12-13 08:21:56.071 248514 DEBUG nova.storage.rbd_utils [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] cloning vms/07413df5-0bb8-42c2-95ff-13458d598139_disk@10d5fc17bda04d4a89230048af381972 to images/54bbec98-39a0-4d8e-b157-e4a05b0adf64 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:21:56 compute-0 nova_compute[248510]: 2025-12-13 08:21:56.180 248514 DEBUG nova.storage.rbd_utils [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] flattening images/54bbec98-39a0-4d8e-b157-e4a05b0adf64 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:21:56 compute-0 nova_compute[248510]: 2025-12-13 08:21:56.273 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Successfully created port: 33293def-d398-4fee-865f-a61997489b67 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:21:56 compute-0 nova_compute[248510]: 2025-12-13 08:21:56.678 248514 DEBUG nova.storage.rbd_utils [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] removing snapshot(10d5fc17bda04d4a89230048af381972) on rbd image(07413df5-0bb8-42c2-95ff-13458d598139_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:21:56 compute-0 nova_compute[248510]: 2025-12-13 08:21:56.860 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Successfully created port: bd554014-5cc7-4f34-b4a0-03ae7cc1f530 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:21:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 17 KiB/s wr, 12 op/s
Dec 13 08:21:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Dec 13 08:21:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Dec 13 08:21:57 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Dec 13 08:21:57 compute-0 nova_compute[248510]: 2025-12-13 08:21:57.060 248514 DEBUG nova.storage.rbd_utils [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] creating snapshot(snap) on rbd image(54bbec98-39a0-4d8e-b157-e4a05b0adf64) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:21:58 compute-0 ceph-mon[76537]: pgmap v1690: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 17 KiB/s wr, 12 op/s
Dec 13 08:21:58 compute-0 ceph-mon[76537]: osdmap e177: 3 total, 3 up, 3 in
Dec 13 08:21:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Dec 13 08:21:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Dec 13 08:21:58 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Dec 13 08:21:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 208 op/s
Dec 13 08:21:59 compute-0 ceph-mon[76537]: osdmap e178: 3 total, 3 up, 3 in
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.225 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Successfully updated port: b07d4534-1cb5-41ec-b0c4-3e820159fe8e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.385 248514 DEBUG nova.compute.manager [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-changed-b07d4534-1cb5-41ec-b0c4-3e820159fe8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.386 248514 DEBUG nova.compute.manager [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Refreshing instance network info cache due to event network-changed-b07d4534-1cb5-41ec-b0c4-3e820159fe8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.387 248514 DEBUG oslo_concurrency.lockutils [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.387 248514 DEBUG oslo_concurrency.lockutils [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.387 248514 DEBUG nova.network.neutron [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Refreshing network info cache for port b07d4534-1cb5-41ec-b0c4-3e820159fe8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.392 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.631 248514 DEBUG oslo_concurrency.lockutils [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.631 248514 DEBUG oslo_concurrency.lockutils [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.632 248514 DEBUG nova.objects.instance [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.643 248514 DEBUG nova.network.neutron [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.665 248514 DEBUG nova.objects.instance [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_requests' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.679 248514 DEBUG nova.network.neutron [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:21:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:21:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Dec 13 08:21:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Dec 13 08:21:59 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.834 248514 INFO nova.virt.libvirt.driver [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Snapshot image upload complete
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.835 248514 INFO nova.compute.manager [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Took 5.29 seconds to snapshot the instance on the hypervisor.
Dec 13 08:21:59 compute-0 nova_compute[248510]: 2025-12-13 08:21:59.848 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:00 compute-0 nova_compute[248510]: 2025-12-13 08:22:00.288 248514 DEBUG nova.policy [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:22:00 compute-0 nova_compute[248510]: 2025-12-13 08:22:00.323 248514 DEBUG nova.network.neutron [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:00 compute-0 nova_compute[248510]: 2025-12-13 08:22:00.343 248514 DEBUG oslo_concurrency.lockutils [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:00 compute-0 nova_compute[248510]: 2025-12-13 08:22:00.645 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Successfully updated port: 33293def-d398-4fee-865f-a61997489b67 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:22:00 compute-0 ceph-mon[76537]: pgmap v1693: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 208 op/s
Dec 13 08:22:00 compute-0 ceph-mon[76537]: osdmap e179: 3 total, 3 up, 3 in
Dec 13 08:22:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 13 MiB/s wr, 245 op/s
Dec 13 08:22:01 compute-0 nova_compute[248510]: 2025-12-13 08:22:01.104 248514 DEBUG nova.network.neutron [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Successfully created port: 9516b135-3bb5-4da4-942f-d044cad93bd4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:22:02 compute-0 nova_compute[248510]: 2025-12-13 08:22:02.088 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Successfully updated port: bd554014-5cc7-4f34-b4a0-03ae7cc1f530 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:22:02 compute-0 nova_compute[248510]: 2025-12-13 08:22:02.112 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:02 compute-0 nova_compute[248510]: 2025-12-13 08:22:02.112 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquired lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:02 compute-0 nova_compute[248510]: 2025-12-13 08:22:02.112 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:22:02 compute-0 ceph-mon[76537]: pgmap v1695: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 13 MiB/s wr, 245 op/s
Dec 13 08:22:02 compute-0 nova_compute[248510]: 2025-12-13 08:22:02.421 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:22:02 compute-0 nova_compute[248510]: 2025-12-13 08:22:02.791 248514 DEBUG nova.compute.manager [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-changed-33293def-d398-4fee-865f-a61997489b67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:02 compute-0 nova_compute[248510]: 2025-12-13 08:22:02.791 248514 DEBUG nova.compute.manager [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Refreshing instance network info cache due to event network-changed-33293def-d398-4fee-865f-a61997489b67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:22:02 compute-0 nova_compute[248510]: 2025-12-13 08:22:02.792 248514 DEBUG oslo_concurrency.lockutils [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 210 op/s
Dec 13 08:22:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:03.292 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:03 compute-0 nova_compute[248510]: 2025-12-13 08:22:03.293 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:03.293 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:22:03 compute-0 nova_compute[248510]: 2025-12-13 08:22:03.384 248514 DEBUG nova.network.neutron [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Successfully updated port: 9516b135-3bb5-4da4-942f-d044cad93bd4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:22:03 compute-0 nova_compute[248510]: 2025-12-13 08:22:03.403 248514 DEBUG oslo_concurrency.lockutils [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:03 compute-0 nova_compute[248510]: 2025-12-13 08:22:03.404 248514 DEBUG oslo_concurrency.lockutils [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:03 compute-0 nova_compute[248510]: 2025-12-13 08:22:03.404 248514 DEBUG nova.network.neutron [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:22:03 compute-0 nova_compute[248510]: 2025-12-13 08:22:03.688 248514 WARNING nova.network.neutron [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it
Dec 13 08:22:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:22:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 56K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 3836 syncs, 3.55 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6771 writes, 29K keys, 6771 commit groups, 1.0 writes per commit group, ingest: 35.51 MB, 0.06 MB/s
                                           Interval WAL: 6772 writes, 2436 syncs, 2.78 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 08:22:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:04.296 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:04 compute-0 nova_compute[248510]: 2025-12-13 08:22:04.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:04 compute-0 ceph-mon[76537]: pgmap v1696: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 210 op/s
Dec 13 08:22:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:22:04 compute-0 nova_compute[248510]: 2025-12-13 08:22:04.850 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 171 op/s
Dec 13 08:22:06 compute-0 ceph-mon[76537]: pgmap v1697: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 171 op/s
Dec 13 08:22:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Dec 13 08:22:07 compute-0 ceph-mon[76537]: pgmap v1698: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Dec 13 08:22:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 4.9 KiB/s wr, 14 op/s
Dec 13 08:22:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:22:09
Dec 13 08:22:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:22:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:22:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'vms', '.mgr', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.meta']
Dec 13 08:22:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.254 248514 DEBUG nova.compute.manager [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-changed-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.254 248514 DEBUG nova.compute.manager [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Refreshing instance network info cache due to event network-changed-bd554014-5cc7-4f34-b4a0-03ae7cc1f530. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.255 248514 DEBUG oslo_concurrency.lockutils [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.619 248514 DEBUG nova.network.neutron [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.713 248514 DEBUG oslo_concurrency.lockutils [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.716 248514 DEBUG nova.virt.libvirt.vif [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.717 248514 DEBUG nova.network.os_vif_util [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.717 248514 DEBUG nova.network.os_vif_util [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.718 248514 DEBUG os_vif [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.719 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.719 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.723 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9516b135-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.724 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9516b135-3b, col_values=(('external_ids', {'iface-id': '9516b135-3bb5-4da4-942f-d044cad93bd4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:b5:6d', 'vm-uuid': '2e309dc2-3cab-4ecf-8be7-eab85790a0da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:09 compute-0 NetworkManager[50376]: <info>  [1765614129.7275] manager: (tap9516b135-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.736 248514 INFO os_vif [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b')
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.737 248514 DEBUG nova.virt.libvirt.vif [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.737 248514 DEBUG nova.network.os_vif_util [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.738 248514 DEBUG nova.network.os_vif_util [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.740 248514 DEBUG nova.virt.libvirt.guest [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] attach device xml: <interface type="ethernet">
Dec 13 08:22:09 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:96:b5:6d"/>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   <target dev="tap9516b135-3b"/>
Dec 13 08:22:09 compute-0 nova_compute[248510]: </interface>
Dec 13 08:22:09 compute-0 nova_compute[248510]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 08:22:09 compute-0 kernel: tap9516b135-3b: entered promiscuous mode
Dec 13 08:22:09 compute-0 NetworkManager[50376]: <info>  [1765614129.7544] manager: (tap9516b135-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.753 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:09 compute-0 ovn_controller[148476]: 2025-12-13T08:22:09Z|00178|binding|INFO|Claiming lport 9516b135-3bb5-4da4-942f-d044cad93bd4 for this chassis.
Dec 13 08:22:09 compute-0 ovn_controller[148476]: 2025-12-13T08:22:09Z|00179|binding|INFO|9516b135-3bb5-4da4-942f-d044cad93bd4: Claiming fa:16:3e:96:b5:6d 10.100.0.3
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.764 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:b5:6d 10.100.0.3'], port_security=['fa:16:3e:96:b5:6d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=9516b135-3bb5-4da4-942f-d044cad93bd4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.766 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 9516b135-3bb5-4da4-942f-d044cad93bd4 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.769 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:22:09 compute-0 ovn_controller[148476]: 2025-12-13T08:22:09Z|00180|binding|INFO|Setting lport 9516b135-3bb5-4da4-942f-d044cad93bd4 ovn-installed in OVS
Dec 13 08:22:09 compute-0 ovn_controller[148476]: 2025-12-13T08:22:09Z|00181|binding|INFO|Setting lport 9516b135-3bb5-4da4-942f-d044cad93bd4 up in Southbound
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.774 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.787 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7115ba-1899-4fd3-b6ef-db755008f457]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.814 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updating instance_info_cache with network_info: [{"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:09 compute-0 systemd-udevd[281088]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.829 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8d19c971-ec9e-4c20-9add-65c7bc9bc6e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.833 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[91b471ca-0709-46ce-bbac-d1fd62ca2810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:09 compute-0 NetworkManager[50376]: <info>  [1765614129.8474] device (tap9516b135-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:22:09 compute-0 NetworkManager[50376]: <info>  [1765614129.8479] device (tap9516b135-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.851 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Releasing lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.852 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Instance network_info: |[{"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.853 248514 DEBUG oslo_concurrency.lockutils [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.853 248514 DEBUG nova.network.neutron [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Refreshing network info cache for port 33293def-d398-4fee-865f-a61997489b67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.858 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Start _get_guest_xml network_info=[{"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.865 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3e81594d-2f87-462d-8176-d7889dc54bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.870 248514 WARNING nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:22:09 compute-0 podman[281061]: 2025-12-13 08:22:09.880083621 +0000 UTC m=+0.083829126 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.888 248514 DEBUG nova.virt.libvirt.host [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.889 248514 DEBUG nova.virt.libvirt.host [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.893 248514 DEBUG nova.virt.libvirt.host [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.894 248514 DEBUG nova.virt.libvirt.host [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.895 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.894 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2856ae-375b-4a4b-a01e-d6765cb0d46a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281121, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.896 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.897 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.897 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.897 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.898 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.898 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.899 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.899 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.900 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.900 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.900 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.904 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:09 compute-0 podman[281060]: 2025-12-13 08:22:09.906706526 +0000 UTC m=+0.111076567 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.914 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a575fc18-1e21-4601-a3e2-f8fe10c7366b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665441, 'tstamp': 665441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281128, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665444, 'tstamp': 665444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281128, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:09 compute-0 podman[281057]: 2025-12-13 08:22:09.915439381 +0000 UTC m=+0.118814687 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.916 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.919 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.920 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.920 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.920 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.968 248514 DEBUG nova.virt.libvirt.driver [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.968 248514 DEBUG nova.virt.libvirt.driver [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.969 248514 DEBUG nova.virt.libvirt.driver [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:54:c0:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.969 248514 DEBUG nova.virt.libvirt.driver [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:96:b5:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:09 compute-0 nova_compute[248510]: 2025-12-13 08:22:09.993 248514 DEBUG nova.virt.libvirt.guest [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:09 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:22:09</nova:creationTime>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:22:09 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:22:09 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:22:09 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:22:09 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:09 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:22:09 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:22:09 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:22:09 compute-0 nova_compute[248510]:     <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 08:22:09 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:22:09 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:09 compute-0 nova_compute[248510]:     <nova:port uuid="9516b135-3bb5-4da4-942f-d044cad93bd4">
Dec 13 08:22:09 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:22:09 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:09 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:22:09 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:22:09 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:22:10 compute-0 nova_compute[248510]: 2025-12-13 08:22:10.022 248514 DEBUG oslo_concurrency.lockutils [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:22:10 compute-0 ceph-mon[76537]: pgmap v1699: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 4.9 KiB/s wr, 14 op/s
Dec 13 08:22:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:22:10 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984943527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:10 compute-0 nova_compute[248510]: 2025-12-13 08:22:10.515 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:10 compute-0 nova_compute[248510]: 2025-12-13 08:22:10.537 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:10 compute-0 nova_compute[248510]: 2025-12-13 08:22:10.541 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:22:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 4.3 KiB/s wr, 12 op/s
Dec 13 08:22:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:22:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3001.8 total, 600.0 interval
                                           Cumulative writes: 14K writes, 56K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 4359 syncs, 3.38 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6706 writes, 25K keys, 6706 commit groups, 1.0 writes per commit group, ingest: 26.22 MB, 0.04 MB/s
                                           Interval WAL: 6706 writes, 2621 syncs, 2.56 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 08:22:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:22:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426856428' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.112 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.114 248514 DEBUG nova.virt.libvirt.vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:54Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.114 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.115 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.116 248514 DEBUG nova.virt.libvirt.vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:54Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.116 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.117 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.117 248514 DEBUG nova.virt.libvirt.vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:54Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.117 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.118 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.121 248514 DEBUG nova.objects.instance [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lazy-loading 'pci_devices' on Instance uuid f983ed7f-13a4-496d-b8e9-60768d90efe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.150 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:22:11 compute-0 nova_compute[248510]:   <uuid>f983ed7f-13a4-496d-b8e9-60768d90efe6</uuid>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   <name>instance-0000001b</name>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersTestMultiNic-server-1051085888</nova:name>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:22:09</nova:creationTime>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <nova:user uuid="e8aa7edd2151436caa0fd25f361298fd">tempest-ServersTestMultiNic-1741413593-project-member</nova:user>
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <nova:project uuid="2495263e4f944deda2647b578d06bb21">tempest-ServersTestMultiNic-1741413593</nova:project>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <nova:port uuid="b07d4534-1cb5-41ec-b0c4-3e820159fe8e">
Dec 13 08:22:11 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.33" ipVersion="4"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <nova:port uuid="33293def-d398-4fee-865f-a61997489b67">
Dec 13 08:22:11 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.1.74" ipVersion="4"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <nova:port uuid="bd554014-5cc7-4f34-b4a0-03ae7cc1f530">
Dec 13 08:22:11 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.211" ipVersion="4"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <system>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <entry name="serial">f983ed7f-13a4-496d-b8e9-60768d90efe6</entry>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <entry name="uuid">f983ed7f-13a4-496d-b8e9-60768d90efe6</entry>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     </system>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   <os>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   </os>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   <features>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   </features>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/f983ed7f-13a4-496d-b8e9-60768d90efe6_disk">
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/f983ed7f-13a4-496d-b8e9-60768d90efe6_disk.config">
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:22:11 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:27:d2:76"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <target dev="tapb07d4534-1c"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:ce:8c:81"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <target dev="tap33293def-d3"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:ad:27:90"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <target dev="tapbd554014-5c"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/console.log" append="off"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <video>
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     </video>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:22:11 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:22:11 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:22:11 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:22:11 compute-0 nova_compute[248510]: </domain>
Dec 13 08:22:11 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.151 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Preparing to wait for external event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.151 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.151 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.151 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.151 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Preparing to wait for external event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.152 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.152 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.152 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.152 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Preparing to wait for external event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.152 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.152 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.153 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.153 248514 DEBUG nova.virt.libvirt.vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:54Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.154 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.154 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.155 248514 DEBUG os_vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.155 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.156 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.156 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.159 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.160 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07d4534-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.160 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb07d4534-1c, col_values=(('external_ids', {'iface-id': 'b07d4534-1cb5-41ec-b0c4-3e820159fe8e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:d2:76', 'vm-uuid': 'f983ed7f-13a4-496d-b8e9-60768d90efe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:11 compute-0 NetworkManager[50376]: <info>  [1765614131.1631] manager: (tapb07d4534-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.162 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.170 248514 INFO os_vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c')
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.171 248514 DEBUG nova.virt.libvirt.vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:54Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.172 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.172 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.173 248514 DEBUG os_vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.173 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.174 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.174 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.178 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33293def-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.178 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap33293def-d3, col_values=(('external_ids', {'iface-id': '33293def-d398-4fee-865f-a61997489b67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ce:8c:81', 'vm-uuid': 'f983ed7f-13a4-496d-b8e9-60768d90efe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.179 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:11 compute-0 NetworkManager[50376]: <info>  [1765614131.1807] manager: (tap33293def-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.182 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.186 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.189 248514 INFO os_vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3')
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.190 248514 DEBUG nova.virt.libvirt.vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:54Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.190 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.191 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.192 248514 DEBUG os_vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.192 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.193 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.193 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.196 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.197 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd554014-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.197 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd554014-5c, col_values=(('external_ids', {'iface-id': 'bd554014-5cc7-4f34-b4a0-03ae7cc1f530', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:27:90', 'vm-uuid': 'f983ed7f-13a4-496d-b8e9-60768d90efe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:11 compute-0 NetworkManager[50376]: <info>  [1765614131.1996] manager: (tapbd554014-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.201 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.208 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.210 248514 INFO os_vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c')
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.375 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.375 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.375 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No VIF found with MAC fa:16:3e:27:d2:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.376 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No VIF found with MAC fa:16:3e:ce:8c:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.376 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No VIF found with MAC fa:16:3e:ad:27:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.377 248514 INFO nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Using config drive
Dec 13 08:22:11 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1984943527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:11 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3426856428' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.488 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.537 248514 DEBUG oslo_concurrency.lockutils [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.538 248514 DEBUG oslo_concurrency.lockutils [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.538 248514 DEBUG nova.objects.instance [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.723 248514 DEBUG nova.network.neutron [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updated VIF entry in instance network info cache for port 33293def-d398-4fee-865f-a61997489b67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.723 248514 DEBUG nova.network.neutron [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updating instance_info_cache with network_info: [{"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.823 248514 DEBUG oslo_concurrency.lockutils [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.824 248514 DEBUG oslo_concurrency.lockutils [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.825 248514 DEBUG nova.network.neutron [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Refreshing network info cache for port bd554014-5cc7-4f34-b4a0-03ae7cc1f530 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.885 248514 INFO nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Creating config drive at /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/disk.config
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.891 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsh53b5em execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:11 compute-0 ovn_controller[148476]: 2025-12-13T08:22:11Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:b5:6d 10.100.0.3
Dec 13 08:22:11 compute-0 ovn_controller[148476]: 2025-12-13T08:22:11Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:b5:6d 10.100.0.3
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.972 248514 DEBUG nova.objects.instance [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_requests' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:11 compute-0 nova_compute[248510]: 2025-12-13 08:22:11.990 248514 DEBUG nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.024 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsh53b5em" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.054 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.060 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/disk.config f983ed7f-13a4-496d-b8e9-60768d90efe6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.289 248514 DEBUG nova.policy [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:22:12 compute-0 ceph-mon[76537]: pgmap v1700: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 4.3 KiB/s wr, 12 op/s
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.871 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/disk.config f983ed7f-13a4-496d-b8e9-60768d90efe6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.811s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.872 248514 INFO nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Deleting local config drive /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/disk.config because it was imported into RBD.
Dec 13 08:22:12 compute-0 NetworkManager[50376]: <info>  [1765614132.9287] manager: (tapb07d4534-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Dec 13 08:22:12 compute-0 kernel: tapb07d4534-1c: entered promiscuous mode
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:12 compute-0 ovn_controller[148476]: 2025-12-13T08:22:12Z|00182|binding|INFO|Claiming lport b07d4534-1cb5-41ec-b0c4-3e820159fe8e for this chassis.
Dec 13 08:22:12 compute-0 ovn_controller[148476]: 2025-12-13T08:22:12Z|00183|binding|INFO|b07d4534-1cb5-41ec-b0c4-3e820159fe8e: Claiming fa:16:3e:27:d2:76 10.100.0.33
Dec 13 08:22:12 compute-0 NetworkManager[50376]: <info>  [1765614132.9495] manager: (tap33293def-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Dec 13 08:22:12 compute-0 kernel: tap33293def-d3: entered promiscuous mode
Dec 13 08:22:12 compute-0 systemd-udevd[281275]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:22:12 compute-0 systemd-udevd[281274]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:22:12 compute-0 NetworkManager[50376]: <info>  [1765614132.9659] manager: (tapbd554014-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Dec 13 08:22:12 compute-0 systemd-udevd[281277]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:22:12 compute-0 NetworkManager[50376]: <info>  [1765614132.9792] device (tapb07d4534-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:22:12 compute-0 NetworkManager[50376]: <info>  [1765614132.9798] device (tapb07d4534-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:22:12 compute-0 NetworkManager[50376]: <info>  [1765614132.9814] device (tap33293def-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:22:12 compute-0 NetworkManager[50376]: <info>  [1765614132.9819] device (tap33293def-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:22:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 5.5 KiB/s wr, 12 op/s
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.994 248514 DEBUG nova.compute.manager [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.994 248514 DEBUG oslo_concurrency.lockutils [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.994 248514 DEBUG oslo_concurrency.lockutils [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.994 248514 DEBUG oslo_concurrency.lockutils [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.995 248514 DEBUG nova.compute.manager [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.995 248514 WARNING nova.compute.manager [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 for instance with vm_state active and task_state None.
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.995 248514 DEBUG nova.compute.manager [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.995 248514 DEBUG oslo_concurrency.lockutils [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.995 248514 DEBUG oslo_concurrency.lockutils [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.995 248514 DEBUG oslo_concurrency.lockutils [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.996 248514 DEBUG nova.compute.manager [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.996 248514 WARNING nova.compute.manager [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 for instance with vm_state active and task_state None.
Dec 13 08:22:12 compute-0 nova_compute[248510]: 2025-12-13 08:22:12.996 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:12 compute-0 kernel: tapbd554014-5c: entered promiscuous mode
Dec 13 08:22:12 compute-0 NetworkManager[50376]: <info>  [1765614132.9987] device (tapbd554014-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:22:12 compute-0 NetworkManager[50376]: <info>  [1765614132.9997] device (tapbd554014-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:12Z|00184|if_status|INFO|Not updating pb chassis for 33293def-d398-4fee-865f-a61997489b67 now as sb is readonly
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:12Z|00185|binding|INFO|Claiming lport 33293def-d398-4fee-865f-a61997489b67 for this chassis.
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:12Z|00186|binding|INFO|33293def-d398-4fee-865f-a61997489b67: Claiming fa:16:3e:ce:8c:81 10.100.1.74
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:12Z|00187|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:12.996 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:d2:76 10.100.0.33'], port_security=['fa:16:3e:27:d2:76 10.100.0.33'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.33/24', 'neutron:device_id': 'f983ed7f-13a4-496d-b8e9-60768d90efe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-647203e6-db87-411a-8603-ed4b91cb4212', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f2b755-adc8-4d52-9a0b-2240b0923f42, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b07d4534-1cb5-41ec-b0c4-3e820159fe8e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.004 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b07d4534-1cb5-41ec-b0c4-3e820159fe8e in datapath 647203e6-db87-411a-8603-ed4b91cb4212 bound to our chassis
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.006 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 647203e6-db87-411a-8603-ed4b91cb4212
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:13Z|00188|binding|INFO|Setting lport b07d4534-1cb5-41ec-b0c4-3e820159fe8e ovn-installed in OVS
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.006 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:13Z|00189|binding|INFO|Claiming lport bd554014-5cc7-4f34-b4a0-03ae7cc1f530 for this chassis.
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:13Z|00190|binding|INFO|bd554014-5cc7-4f34-b4a0-03ae7cc1f530: Claiming fa:16:3e:ad:27:90 10.100.0.211
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:13Z|00191|binding|INFO|Setting lport b07d4534-1cb5-41ec-b0c4-3e820159fe8e up in Southbound
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.013 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:8c:81 10.100.1.74'], port_security=['fa:16:3e:ce:8c:81 10.100.1.74'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.74/24', 'neutron:device_id': 'f983ed7f-13a4-496d-b8e9-60768d90efe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c824dbae-6ef3-43b5-8ec9-f4bc95c906d6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=33293def-d398-4fee-865f-a61997489b67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.022 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:27:90 10.100.0.211'], port_security=['fa:16:3e:ad:27:90 10.100.0.211'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.211/24', 'neutron:device_id': 'f983ed7f-13a4-496d-b8e9-60768d90efe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-647203e6-db87-411a-8603-ed4b91cb4212', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f2b755-adc8-4d52-9a0b-2240b0923f42, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bd554014-5cc7-4f34-b4a0-03ae7cc1f530) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:13 compute-0 systemd-machined[210538]: New machine qemu-32-instance-0000001b.
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.047 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac4966e-2b90-470b-aec8-377bbdd105b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.049 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap647203e6-d1 in ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:13Z|00192|binding|INFO|Setting lport 33293def-d398-4fee-865f-a61997489b67 ovn-installed in OVS
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:13Z|00193|binding|INFO|Setting lport 33293def-d398-4fee-865f-a61997489b67 up in Southbound
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.052 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.052 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap647203e6-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.052 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[73b0297a-d781-4e20-93d3-777bffd862c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.053 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9ade395b-8d9e-463a-b453-450da8b52606]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000001b.
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:13Z|00194|binding|INFO|Setting lport bd554014-5cc7-4f34-b4a0-03ae7cc1f530 ovn-installed in OVS
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.069 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.069 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[78ffe802-2d1f-4fed-89ad-6e9bca1a2af2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.097 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a15b7dc-ba52-47f2-9ac1-3949eeb60b1d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:13Z|00195|binding|INFO|Setting lport bd554014-5cc7-4f34-b4a0-03ae7cc1f530 up in Southbound
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.133 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[df399358-a443-4de3-bf8a-8ae80faeccbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.138 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7a2e4d-e38c-4631-9e45-4819316c7c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 NetworkManager[50376]: <info>  [1765614133.1401] manager: (tap647203e6-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/89)
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.176 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f0afaf44-2993-4156-9db8-61ff320e9c62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.180 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[17aa60d3-2414-4a46-8d56-f496a1b0c8ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 NetworkManager[50376]: <info>  [1765614133.2099] device (tap647203e6-d0): carrier: link connected
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.220 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[22d3e147-49d3-4898-aa7e-0232930b60ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.243 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35e91cee-5d28-4fb8-81c9-bc77d92df9d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap647203e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:b0:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671042, 'reachable_time': 17366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281314, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.263 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a94affea-bb0e-485d-9da0-f6867aaf8be7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:b0ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671042, 'tstamp': 671042}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281315, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.285 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f1667fc1-030d-4bfc-b0a8-26bbb55b89ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap647203e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:b0:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671042, 'reachable_time': 17366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281316, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.292 248514 DEBUG nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Successfully created port: cea82a7d-e92d-4ac6-ba47-854ec9905fd2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.310 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "1c17e7b7-7062-48d2-a30f-b387929244d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.310 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "1c17e7b7-7062-48d2-a30f-b387929244d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.333 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[721857fe-eb87-4abc-994c-0e79fb7d332e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.341 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.413 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2e570b-f63b-43d9-880e-0c0cd97aff5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.415 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647203e6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.415 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.415 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647203e6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:13 compute-0 NetworkManager[50376]: <info>  [1765614133.4183] manager: (tap647203e6-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Dec 13 08:22:13 compute-0 kernel: tap647203e6-d0: entered promiscuous mode
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.420 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.421 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap647203e6-d0, col_values=(('external_ids', {'iface-id': '2a2d6eba-8a85-4872-98d3-6dab02d46408'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.422 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:13 compute-0 ovn_controller[148476]: 2025-12-13T08:22:13Z|00196|binding|INFO|Releasing lport 2a2d6eba-8a85-4872-98d3-6dab02d46408 from this chassis (sb_readonly=0)
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.440 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/647203e6-db87-411a-8603-ed4b91cb4212.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/647203e6-db87-411a-8603-ed4b91cb4212.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.441 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[125d13c6-841c-4fb3-8e5e-072ad3329c26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.443 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-647203e6-db87-411a-8603-ed4b91cb4212
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/647203e6-db87-411a-8603-ed4b91cb4212.pid.haproxy
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 647203e6-db87-411a-8603-ed4b91cb4212
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:22:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.444 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'env', 'PROCESS_TAG=haproxy-647203e6-db87-411a-8603-ed4b91cb4212', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/647203e6-db87-411a-8603-ed4b91cb4212.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.567 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.568 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.578 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.578 248514 INFO nova.compute.claims [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.584 248514 DEBUG nova.network.neutron [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updated VIF entry in instance network info cache for port bd554014-5cc7-4f34-b4a0-03ae7cc1f530. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.584 248514 DEBUG nova.network.neutron [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updating instance_info_cache with network_info: [{"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.621 248514 DEBUG oslo_concurrency.lockutils [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.622 248514 DEBUG nova.compute.manager [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-changed-9516b135-3bb5-4da4-942f-d044cad93bd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.622 248514 DEBUG nova.compute.manager [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing instance network info cache due to event network-changed-9516b135-3bb5-4da4-942f-d044cad93bd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.623 248514 DEBUG oslo_concurrency.lockutils [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.623 248514 DEBUG oslo_concurrency.lockutils [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.623 248514 DEBUG nova.network.neutron [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing network info cache for port 9516b135-3bb5-4da4-942f-d044cad93bd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.661 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614133.6613116, f983ed7f-13a4-496d-b8e9-60768d90efe6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.662 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] VM Started (Lifecycle Event)
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.825 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.835 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614133.6614385, f983ed7f-13a4-496d-b8e9-60768d90efe6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.835 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] VM Paused (Lifecycle Event)
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.868 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.872 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:22:13 compute-0 nova_compute[248510]: 2025-12-13 08:22:13.907 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:22:13 compute-0 podman[281392]: 2025-12-13 08:22:13.827212521 +0000 UTC m=+0.031465846 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.008 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:14 compute-0 podman[281392]: 2025-12-13 08:22:14.068771581 +0000 UTC m=+0.273024886 container create 0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 08:22:14 compute-0 systemd[1]: Started libpod-conmon-0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56.scope.
Dec 13 08:22:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265fb3e3eb8d96b0e0a6d35b9b283058e328a9688b28ba3696775b5ebfa0aa7f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.173 248514 DEBUG nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Successfully updated port: cea82a7d-e92d-4ac6-ba47-854ec9905fd2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.200 248514 DEBUG oslo_concurrency.lockutils [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:14 compute-0 podman[281392]: 2025-12-13 08:22:14.211194569 +0000 UTC m=+0.415447904 container init 0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 08:22:14 compute-0 podman[281392]: 2025-12-13 08:22:14.217743 +0000 UTC m=+0.421996305 container start 0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 08:22:14 compute-0 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [NOTICE]   (281432) : New worker (281434) forked
Dec 13 08:22:14 compute-0 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [NOTICE]   (281432) : Loading success.
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.442 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:22:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092307132' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.641 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.649 248514 DEBUG nova.compute.provider_tree [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.656 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 33293def-d398-4fee-865f-a61997489b67 in datapath 527d2c60-2d6f-4195-aeaa-9dd99258fb5b unbound from our chassis
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.658 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 527d2c60-2d6f-4195-aeaa-9dd99258fb5b
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.672 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f03dc76-d565-42e3-a1f0-2c1b7ae1f301]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.673 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap527d2c60-21 in ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.676 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap527d2c60-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d879c19e-5c52-4f2e-af28-48ef17de840e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.677 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff1ff4b-8536-46fc-a2bf-e1a5107f3e5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.691 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4b307ab5-bd39-485d-b404-5741d8db5eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.717 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a76313fb-a145-4d72-aa72-ba53ffdb9e36]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.724 248514 DEBUG nova.compute.manager [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-changed-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.725 248514 DEBUG nova.compute.manager [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing instance network info cache due to event network-changed-cea82a7d-e92d-4ac6-ba47-854ec9905fd2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.725 248514 DEBUG oslo_concurrency.lockutils [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.741 248514 DEBUG nova.scheduler.client.report [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.751 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a82b2524-19a9-4279-a4ec-47338976a087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.760 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5ffd8939-d476-4c3d-9701-a647d568feaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 NetworkManager[50376]: <info>  [1765614134.7606] manager: (tap527d2c60-20): new Veth device (/org/freedesktop/NetworkManager/Devices/91)
Dec 13 08:22:14 compute-0 systemd-udevd[281302]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.765 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.766 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.789 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[39d46a06-8dce-4d3c-932c-616f516f61cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.793 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[83d5bd0d-e760-4733-9e3a-809393b36396]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 NetworkManager[50376]: <info>  [1765614134.8181] device (tap527d2c60-20): carrier: link connected
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.823 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[21acf589-2ec0-4831-a416-3a0f1da5d162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.829 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.830 248514 DEBUG nova.network.neutron [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.844 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[997c9337-ca6f-47fc-9963-8aa0795925b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d2c60-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:a6:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671203, 'reachable_time': 30096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281455, 'error': None, 'target': 'ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.861 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4d51634d-f964-44f1-beaa-f358669fe26a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:a661'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671203, 'tstamp': 671203}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281456, 'error': None, 'target': 'ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.863 248514 INFO nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:22:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.877 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e49b66-a61e-4375-83ef-170750b4e150]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d2c60-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:a6:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671203, 'reachable_time': 30096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281457, 'error': None, 'target': 'ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.895 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:22:14 compute-0 ceph-mon[76537]: pgmap v1701: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 5.5 KiB/s wr, 12 op/s
Dec 13 08:22:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4092307132' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.913 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5e0f72-7719-4788-92f0-e50c60831379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:22:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2561537915' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:22:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:22:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2561537915' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.992 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[10d1ec3d-a337-4f34-9d92-f349585fe12f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 18 KiB/s wr, 18 op/s
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.994 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.995 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d2c60-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.996 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:14 compute-0 nova_compute[248510]: 2025-12-13 08:22:14.996 248514 INFO nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Creating image(s)
Dec 13 08:22:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.997 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap527d2c60-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:15 compute-0 NetworkManager[50376]: <info>  [1765614134.9999] manager: (tap527d2c60-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Dec 13 08:22:15 compute-0 kernel: tap527d2c60-20: entered promiscuous mode
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:15.003 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap527d2c60-20, col_values=(('external_ids', {'iface-id': 'a4843e01-b61c-4c89-820d-c2bc9f310806'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:15 compute-0 ovn_controller[148476]: 2025-12-13T08:22:15Z|00197|binding|INFO|Releasing lport a4843e01-b61c-4c89-820d-c2bc9f310806 from this chassis (sb_readonly=0)
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:15.023 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/527d2c60-2d6f-4195-aeaa-9dd99258fb5b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/527d2c60-2d6f-4195-aeaa-9dd99258fb5b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:15.024 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a1833d-5668-4206-a0f3-6b452b052cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.024 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:15.025 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-527d2c60-2d6f-4195-aeaa-9dd99258fb5b
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/527d2c60-2d6f-4195-aeaa-9dd99258fb5b.pid.haproxy
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 527d2c60-2d6f-4195-aeaa-9dd99258fb5b
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:22:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:15.025 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'env', 'PROCESS_TAG=haproxy-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/527d2c60-2d6f-4195-aeaa-9dd99258fb5b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.052 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.076 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.080 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.109 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.150 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.151 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.152 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.152 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.177 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.182 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1c17e7b7-7062-48d2-a30f-b387929244d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.273 248514 DEBUG nova.network.neutron [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.274 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.488 248514 DEBUG nova.network.neutron [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updated VIF entry in instance network info cache for port 9516b135-3bb5-4da4-942f-d044cad93bd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:22:15 compute-0 podman[281581]: 2025-12-13 08:22:15.39633135 +0000 UTC m=+0.026019342 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.489 248514 DEBUG nova.network.neutron [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.515 248514 DEBUG oslo_concurrency.lockutils [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.516 248514 DEBUG oslo_concurrency.lockutils [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.517 248514 DEBUG nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.686 248514 WARNING nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.687 248514 WARNING nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.756 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.756 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.756 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.757 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.757 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Processing event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.757 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.758 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.759 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.759 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.759 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No event matching network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e in dict_keys([('network-vif-plugged', '33293def-d398-4fee-865f-a61997489b67'), ('network-vif-plugged', 'bd554014-5cc7-4f34-b4a0-03ae7cc1f530')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.760 248514 WARNING nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received unexpected event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e for instance with vm_state building and task_state spawning.
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.760 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.760 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.760 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.760 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.760 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Processing event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.761 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.761 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.761 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.761 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.761 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No event matching network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 in dict_keys([('network-vif-plugged', 'bd554014-5cc7-4f34-b4a0-03ae7cc1f530')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.761 248514 WARNING nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received unexpected event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 for instance with vm_state building and task_state spawning.
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.762 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.762 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.762 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.762 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.762 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Processing event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.762 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.763 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.763 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.763 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.763 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.763 248514 WARNING nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received unexpected event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 for instance with vm_state building and task_state spawning.
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.764 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.769 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614135.7689528, f983ed7f-13a4-496d-b8e9-60768d90efe6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.770 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] VM Resumed (Lifecycle Event)
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.773 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.780 248514 INFO nova.virt.libvirt.driver [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Instance spawned successfully.
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.781 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.791 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:15 compute-0 podman[281581]: 2025-12-13 08:22:15.802617177 +0000 UTC m=+0.432305129 container create 854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.817 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.821 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.821 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.822 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.823 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.823 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.823 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:15 compute-0 nova_compute[248510]: 2025-12-13 08:22:15.858 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:22:15 compute-0 systemd[1]: Started libpod-conmon-854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a.scope.
Dec 13 08:22:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55d01b3ee3deca83f683b6cfde9ad826f72c632d13fcddd102d1648a05c1801/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:16 compute-0 nova_compute[248510]: 2025-12-13 08:22:16.088 248514 INFO nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Took 21.89 seconds to spawn the instance on the hypervisor.
Dec 13 08:22:16 compute-0 nova_compute[248510]: 2025-12-13 08:22:16.089 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2561537915' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:22:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2561537915' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:22:16 compute-0 ceph-mon[76537]: pgmap v1702: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 18 KiB/s wr, 18 op/s
Dec 13 08:22:16 compute-0 nova_compute[248510]: 2025-12-13 08:22:16.162 248514 INFO nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Took 23.15 seconds to build instance.
Dec 13 08:22:16 compute-0 nova_compute[248510]: 2025-12-13 08:22:16.187 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:16 compute-0 nova_compute[248510]: 2025-12-13 08:22:16.200 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:16 compute-0 podman[281581]: 2025-12-13 08:22:16.429784204 +0000 UTC m=+1.059472186 container init 854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 08:22:16 compute-0 podman[281581]: 2025-12-13 08:22:16.43611153 +0000 UTC m=+1.065799482 container start 854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:22:16 compute-0 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [NOTICE]   (281603) : New worker (281605) forked
Dec 13 08:22:16 compute-0 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [NOTICE]   (281603) : Loading success.
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.672 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bd554014-5cc7-4f34-b4a0-03ae7cc1f530 in datapath 647203e6-db87-411a-8603-ed4b91cb4212 unbound from our chassis
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.675 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 647203e6-db87-411a-8603-ed4b91cb4212
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.694 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3709c8a0-252a-4cd7-93ef-54c8d3c80d4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.729 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[374e1634-82d5-4dce-845e-c086f7cf23dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.733 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[155f87d4-b060-4242-bc04-2f89393fd895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.770 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3b66890f-3155-49c3-a3e1-69ad1bd5805c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.793 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe013365-18a6-4a48-bf4b-beefec972617]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap647203e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:b0:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671042, 'reachable_time': 17366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281619, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.816 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[df096661-eb22-4741-bbba-dd87f1b87ceb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap647203e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671058, 'tstamp': 671058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281620, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap647203e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671062, 'tstamp': 671062}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281620, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.818 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647203e6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.821 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647203e6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.822 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:16 compute-0 nova_compute[248510]: 2025-12-13 08:22:16.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.822 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap647203e6-d0, col_values=(('external_ids', {'iface-id': '2a2d6eba-8a85-4872-98d3-6dab02d46408'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.823 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:16 compute-0 nova_compute[248510]: 2025-12-13 08:22:16.827 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1c17e7b7-7062-48d2-a30f-b387929244d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:16 compute-0 nova_compute[248510]: 2025-12-13 08:22:16.902 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] resizing rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:22:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 18 KiB/s wr, 9 op/s
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.440 248514 DEBUG nova.objects.instance [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c17e7b7-7062-48d2-a30f-b387929244d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.456 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.456 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Ensure instance console log exists: /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.457 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.457 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.457 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.458 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.466 248514 WARNING nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.473 248514 DEBUG nova.virt.libvirt.host [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.474 248514 DEBUG nova.virt.libvirt.host [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.478 248514 DEBUG nova.virt.libvirt.host [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.478 248514 DEBUG nova.virt.libvirt.host [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.479 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.479 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.479 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.479 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.479 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.480 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.480 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.480 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.480 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.480 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.480 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.481 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.483 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.985 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.986 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.986 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.986 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.986 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.988 248514 INFO nova.compute.manager [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Terminating instance
Dec 13 08:22:17 compute-0 nova_compute[248510]: 2025-12-13 08:22:17.989 248514 DEBUG nova.compute.manager [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:22:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:22:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/108519056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.055 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.088 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.096 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:18 compute-0 kernel: tapb07d4534-1c (unregistering): left promiscuous mode
Dec 13 08:22:18 compute-0 NetworkManager[50376]: <info>  [1765614138.1832] device (tapb07d4534-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00198|binding|INFO|Releasing lport b07d4534-1cb5-41ec-b0c4-3e820159fe8e from this chassis (sb_readonly=0)
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00199|binding|INFO|Setting lport b07d4534-1cb5-41ec-b0c4-3e820159fe8e down in Southbound
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.194 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00200|binding|INFO|Removing iface tapb07d4534-1c ovn-installed in OVS
Dec 13 08:22:18 compute-0 kernel: tap33293def-d3 (unregistering): left promiscuous mode
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.205 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:d2:76 10.100.0.33'], port_security=['fa:16:3e:27:d2:76 10.100.0.33'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.33/24', 'neutron:device_id': 'f983ed7f-13a4-496d-b8e9-60768d90efe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-647203e6-db87-411a-8603-ed4b91cb4212', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f2b755-adc8-4d52-9a0b-2240b0923f42, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b07d4534-1cb5-41ec-b0c4-3e820159fe8e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.206 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b07d4534-1cb5-41ec-b0c4-3e820159fe8e in datapath 647203e6-db87-411a-8603-ed4b91cb4212 unbound from our chassis
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.208 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 647203e6-db87-411a-8603-ed4b91cb4212
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.210 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 NetworkManager[50376]: <info>  [1765614138.2124] device (tap33293def-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 kernel: tapbd554014-5c (unregistering): left promiscuous mode
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.230 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7643c6b8-f39a-4f29-a512-152793779460]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00201|binding|INFO|Releasing lport 33293def-d398-4fee-865f-a61997489b67 from this chassis (sb_readonly=0)
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00202|binding|INFO|Setting lport 33293def-d398-4fee-865f-a61997489b67 down in Southbound
Dec 13 08:22:18 compute-0 NetworkManager[50376]: <info>  [1765614138.2360] device (tapbd554014-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00203|binding|INFO|Removing iface tap33293def-d3 ovn-installed in OVS
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.237 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.241 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.246 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:8c:81 10.100.1.74'], port_security=['fa:16:3e:ce:8c:81 10.100.1.74'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.74/24', 'neutron:device_id': 'f983ed7f-13a4-496d-b8e9-60768d90efe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c824dbae-6ef3-43b5-8ec9-f4bc95c906d6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=33293def-d398-4fee-865f-a61997489b67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00204|binding|INFO|Releasing lport bd554014-5cc7-4f34-b4a0-03ae7cc1f530 from this chassis (sb_readonly=0)
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00205|binding|INFO|Setting lport bd554014-5cc7-4f34-b4a0-03ae7cc1f530 down in Southbound
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00206|binding|INFO|Removing iface tapbd554014-5c ovn-installed in OVS
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.270 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.273 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:27:90 10.100.0.211'], port_security=['fa:16:3e:ad:27:90 10.100.0.211'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.211/24', 'neutron:device_id': 'f983ed7f-13a4-496d-b8e9-60768d90efe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-647203e6-db87-411a-8603-ed4b91cb4212', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f2b755-adc8-4d52-9a0b-2240b0923f42, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bd554014-5cc7-4f34-b4a0-03ae7cc1f530) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.275 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[60dab543-f185-4a28-96ae-5b2ddec08b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.279 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4124fe-d66f-43b8-b7a3-46c4688cd974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.284 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Dec 13 08:22:18 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001b.scope: Consumed 2.718s CPU time.
Dec 13 08:22:18 compute-0 systemd-machined[210538]: Machine qemu-32-instance-0000001b terminated.
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.312 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec7c53b-19d2-43e9-a670-d7530ef0154e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.318 248514 DEBUG nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.333 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7a075ac6-86aa-4bb0-919b-4cadef8d234f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap647203e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:b0:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671042, 'reachable_time': 17366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281765, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.352 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[271da28a-969e-4490-9152-0c4071256e7b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap647203e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671058, 'tstamp': 671058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281766, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap647203e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671062, 'tstamp': 671062}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281766, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.354 248514 DEBUG oslo_concurrency.lockutils [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.355 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647203e6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.355 248514 DEBUG oslo_concurrency.lockutils [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.356 248514 DEBUG nova.network.neutron [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing network info cache for port cea82a7d-e92d-4ac6-ba47-854ec9905fd2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.359 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.362 248514 DEBUG nova.virt.libvirt.vif [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.362 248514 DEBUG nova.network.os_vif_util [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.363 248514 DEBUG nova.network.os_vif_util [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.364 248514 DEBUG os_vif [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.365 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.366 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.368 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647203e6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.369 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcea82a7d-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.369 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.369 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcea82a7d-e9, col_values=(('external_ids', {'iface-id': 'cea82a7d-e92d-4ac6-ba47-854ec9905fd2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:29:df', 'vm-uuid': '2e309dc2-3cab-4ecf-8be7-eab85790a0da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.369 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap647203e6-d0, col_values=(('external_ids', {'iface-id': '2a2d6eba-8a85-4872-98d3-6dab02d46408'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.370 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.371 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.371 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 33293def-d398-4fee-865f-a61997489b67 in datapath 527d2c60-2d6f-4195-aeaa-9dd99258fb5b unbound from our chassis
Dec 13 08:22:18 compute-0 NetworkManager[50376]: <info>  [1765614138.3721] manager: (tapcea82a7d-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.373 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 527d2c60-2d6f-4195-aeaa-9dd99258fb5b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.374 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.374 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f977cb-e460-42f9-bbf0-49e045a48a18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.375 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b namespace which is not needed anymore
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.381 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.383 248514 INFO os_vif [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9')
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.384 248514 DEBUG nova.virt.libvirt.vif [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.384 248514 DEBUG nova.network.os_vif_util [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.385 248514 DEBUG nova.network.os_vif_util [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.388 248514 DEBUG nova.virt.libvirt.guest [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] attach device xml: <interface type="ethernet">
Dec 13 08:22:18 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:3a:29:df"/>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   <target dev="tapcea82a7d-e9"/>
Dec 13 08:22:18 compute-0 nova_compute[248510]: </interface>
Dec 13 08:22:18 compute-0 nova_compute[248510]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 08:22:18 compute-0 systemd-udevd[281737]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:22:18 compute-0 NetworkManager[50376]: <info>  [1765614138.4053] manager: (tapcea82a7d-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Dec 13 08:22:18 compute-0 kernel: tapcea82a7d-e9: entered promiscuous mode
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.419 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00207|binding|INFO|Claiming lport cea82a7d-e92d-4ac6-ba47-854ec9905fd2 for this chassis.
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00208|binding|INFO|cea82a7d-e92d-4ac6-ba47-854ec9905fd2: Claiming fa:16:3e:3a:29:df 10.100.0.4
Dec 13 08:22:18 compute-0 NetworkManager[50376]: <info>  [1765614138.4218] device (tapcea82a7d-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:22:18 compute-0 NetworkManager[50376]: <info>  [1765614138.4224] device (tapcea82a7d-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:22:18 compute-0 NetworkManager[50376]: <info>  [1765614138.4267] manager: (tap33293def-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Dec 13 08:22:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.430 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:29:df 10.100.0.4'], port_security=['fa:16:3e:3a:29:df 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=cea82a7d-e92d-4ac6-ba47-854ec9905fd2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:18 compute-0 NetworkManager[50376]: <info>  [1765614138.4431] manager: (tapbd554014-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00209|binding|INFO|Setting lport cea82a7d-e92d-4ac6-ba47-854ec9905fd2 ovn-installed in OVS
Dec 13 08:22:18 compute-0 ovn_controller[148476]: 2025-12-13T08:22:18Z|00210|binding|INFO|Setting lport cea82a7d-e92d-4ac6-ba47-854ec9905fd2 up in Southbound
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.452 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.462 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.463 248514 INFO nova.virt.libvirt.driver [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Instance destroyed successfully.
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.463 248514 DEBUG nova.objects.instance [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lazy-loading 'resources' on Instance uuid f983ed7f-13a4-496d-b8e9-60768d90efe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.479 248514 DEBUG nova.virt.libvirt.vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:22:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:22:16Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.480 248514 DEBUG nova.network.os_vif_util [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.480 248514 DEBUG nova.network.os_vif_util [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.481 248514 DEBUG os_vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.482 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.483 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d4534-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.486 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.492 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.502 248514 INFO os_vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c')
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.504 248514 DEBUG nova.virt.libvirt.vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:22:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:22:16Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.504 248514 DEBUG nova.network.os_vif_util [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.505 248514 DEBUG nova.network.os_vif_util [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.505 248514 DEBUG os_vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.507 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.507 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33293def-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.509 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.511 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.513 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.516 248514 INFO os_vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3')
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.517 248514 DEBUG nova.virt.libvirt.vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:22:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:22:16Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.518 248514 DEBUG nova.network.os_vif_util [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.520 248514 DEBUG nova.network.os_vif_util [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.520 248514 DEBUG os_vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.527 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd554014-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.536 248514 INFO os_vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c')
Dec 13 08:22:18 compute-0 ceph-mon[76537]: pgmap v1703: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 18 KiB/s wr, 9 op/s
Dec 13 08:22:18 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/108519056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:18 compute-0 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [NOTICE]   (281603) : haproxy version is 2.8.14-c23fe91
Dec 13 08:22:18 compute-0 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [NOTICE]   (281603) : path to executable is /usr/sbin/haproxy
Dec 13 08:22:18 compute-0 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [WARNING]  (281603) : Exiting Master process...
Dec 13 08:22:18 compute-0 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [ALERT]    (281603) : Current worker (281605) exited with code 143 (Terminated)
Dec 13 08:22:18 compute-0 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [WARNING]  (281603) : All workers exited. Exiting... (0)
Dec 13 08:22:18 compute-0 systemd[1]: libpod-854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a.scope: Deactivated successfully.
Dec 13 08:22:18 compute-0 podman[281824]: 2025-12-13 08:22:18.571259521 +0000 UTC m=+0.070627761 container died 854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.613 248514 DEBUG nova.virt.libvirt.driver [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.613 248514 DEBUG nova.virt.libvirt.driver [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.613 248514 DEBUG nova.virt.libvirt.driver [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:54:c0:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.613 248514 DEBUG nova.virt.libvirt.driver [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:96:b5:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.614 248514 DEBUG nova.virt.libvirt.driver [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:3a:29:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.647 248514 DEBUG nova.virt.libvirt.guest [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:18 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:22:18</nova:creationTime>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:22:18 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:22:18 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:22:18 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:22:18 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:18 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:22:18 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:22:18 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:22:18 compute-0 nova_compute[248510]:     <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 08:22:18 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:22:18 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:18 compute-0 nova_compute[248510]:     <nova:port uuid="9516b135-3bb5-4da4-942f-d044cad93bd4">
Dec 13 08:22:18 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:22:18 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:18 compute-0 nova_compute[248510]:     <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 08:22:18 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:22:18 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:18 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:22:18 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:22:18 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.676 248514 DEBUG oslo_concurrency.lockutils [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a-userdata-shm.mount: Deactivated successfully.
Dec 13 08:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c55d01b3ee3deca83f683b6cfde9ad826f72c632d13fcddd102d1648a05c1801-merged.mount: Deactivated successfully.
Dec 13 08:22:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:22:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890616477' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:18 compute-0 podman[281824]: 2025-12-13 08:22:18.948532944 +0000 UTC m=+0.447901184 container cleanup 854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:22:18 compute-0 systemd[1]: libpod-conmon-854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a.scope: Deactivated successfully.
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.972 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.876s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:18 compute-0 nova_compute[248510]: 2025-12-13 08:22:18.974 248514 DEBUG nova.objects.instance [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c17e7b7-7062-48d2-a30f-b387929244d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 548 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Dec 13 08:22:19 compute-0 nova_compute[248510]: 2025-12-13 08:22:19.033 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:22:19 compute-0 nova_compute[248510]:   <uuid>1c17e7b7-7062-48d2-a30f-b387929244d9</uuid>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   <name>instance-0000001c</name>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <nova:name>tempest-TenantUsagesTestJSON-server-1085101637</nova:name>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:22:17</nova:creationTime>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:22:19 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:22:19 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:22:19 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:22:19 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:19 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:22:19 compute-0 nova_compute[248510]:         <nova:user uuid="e022e18c9c6c4da890d5bdf86cffc2a6">tempest-TenantUsagesTestJSON-1897461749-project-member</nova:user>
Dec 13 08:22:19 compute-0 nova_compute[248510]:         <nova:project uuid="f9c57acfa75d48db92e25886eccd2ee1">tempest-TenantUsagesTestJSON-1897461749</nova:project>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <system>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <entry name="serial">1c17e7b7-7062-48d2-a30f-b387929244d9</entry>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <entry name="uuid">1c17e7b7-7062-48d2-a30f-b387929244d9</entry>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     </system>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   <os>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   </os>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   <features>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   </features>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/1c17e7b7-7062-48d2-a30f-b387929244d9_disk">
Dec 13 08:22:19 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:22:19 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/1c17e7b7-7062-48d2-a30f-b387929244d9_disk.config">
Dec 13 08:22:19 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:22:19 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/console.log" append="off"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <video>
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     </video>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:22:19 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:22:19 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:22:19 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:22:19 compute-0 nova_compute[248510]: </domain>
Dec 13 08:22:19 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:22:19 compute-0 nova_compute[248510]: 2025-12-13 08:22:19.282 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:19 compute-0 nova_compute[248510]: 2025-12-13 08:22:19.282 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:19 compute-0 nova_compute[248510]: 2025-12-13 08:22:19.283 248514 INFO nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Using config drive
Dec 13 08:22:19 compute-0 nova_compute[248510]: 2025-12-13 08:22:19.304 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:19 compute-0 podman[281887]: 2025-12-13 08:22:19.422912278 +0000 UTC m=+0.452577768 container remove 854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.430 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[32ef09a6-5d8d-44e0-94b3-ac14f92a076f]: (4, ('Sat Dec 13 08:22:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b (854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a)\n854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a\nSat Dec 13 08:22:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b (854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a)\n854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.432 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[04109abb-5fbc-4146-b474-6eb0a47380a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.433 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d2c60-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:19 compute-0 kernel: tap527d2c60-20: left promiscuous mode
Dec 13 08:22:19 compute-0 nova_compute[248510]: 2025-12-13 08:22:19.436 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:19 compute-0 nova_compute[248510]: 2025-12-13 08:22:19.453 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.457 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce780c2-8b0f-4d11-a3e4-e23309d56c3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.474 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea017096-1880-4721-91d7-9c474455f648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.476 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5752487c-e1ba-4a78-bd60-8868d4e8154d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.491 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc5f633-9ebc-450e-9741-090e5c4e0bf9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671195, 'reachable_time': 35319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281921, 'error': None, 'target': 'ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.494 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.494 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfce3cc-7b40-4643-86f8-0cbccec1bd92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.495 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bd554014-5cc7-4f34-b4a0-03ae7cc1f530 in datapath 647203e6-db87-411a-8603-ed4b91cb4212 unbound from our chassis
Dec 13 08:22:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d527d2c60\x2d2d6f\x2d4195\x2daeaa\x2d9dd99258fb5b.mount: Deactivated successfully.
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.497 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 647203e6-db87-411a-8603-ed4b91cb4212, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.497 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[34f19bd8-5e88-4d29-a20b-d93ac89df789]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.498 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212 namespace which is not needed anymore
Dec 13 08:22:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Dec 13 08:22:19 compute-0 nova_compute[248510]: 2025-12-13 08:22:19.670 248514 INFO nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Creating config drive at /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/disk.config
Dec 13 08:22:19 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/890616477' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:19 compute-0 nova_compute[248510]: 2025-12-13 08:22:19.676 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjfn0rfvg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:22:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Cumulative writes: 11K writes, 49K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3291 syncs, 3.64 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5030 writes, 21K keys, 5030 commit groups, 1.0 writes per commit group, ingest: 23.45 MB, 0.04 MB/s
                                           Interval WAL: 5030 writes, 1906 syncs, 2.64 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 08:22:19 compute-0 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [NOTICE]   (281432) : haproxy version is 2.8.14-c23fe91
Dec 13 08:22:19 compute-0 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [NOTICE]   (281432) : path to executable is /usr/sbin/haproxy
Dec 13 08:22:19 compute-0 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [WARNING]  (281432) : Exiting Master process...
Dec 13 08:22:19 compute-0 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [WARNING]  (281432) : Exiting Master process...
Dec 13 08:22:19 compute-0 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [ALERT]    (281432) : Current worker (281434) exited with code 143 (Terminated)
Dec 13 08:22:19 compute-0 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [WARNING]  (281432) : All workers exited. Exiting... (0)
Dec 13 08:22:19 compute-0 systemd[1]: libpod-0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56.scope: Deactivated successfully.
Dec 13 08:22:19 compute-0 podman[281939]: 2025-12-13 08:22:19.715608877 +0000 UTC m=+0.126558768 container died 0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 08:22:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Dec 13 08:22:19 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Dec 13 08:22:19 compute-0 nova_compute[248510]: 2025-12-13 08:22:19.814 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjfn0rfvg" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:22:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56-userdata-shm.mount: Deactivated successfully.
Dec 13 08:22:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-265fb3e3eb8d96b0e0a6d35b9b283058e328a9688b28ba3696775b5ebfa0aa7f-merged.mount: Deactivated successfully.
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.002 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:20 compute-0 podman[281939]: 2025-12-13 08:22:20.003158899 +0000 UTC m=+0.414108760 container cleanup 0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.006 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/disk.config 1c17e7b7-7062-48d2-a30f-b387929244d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:20 compute-0 systemd[1]: libpod-conmon-0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56.scope: Deactivated successfully.
Dec 13 08:22:20 compute-0 ovn_controller[148476]: 2025-12-13T08:22:20Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:29:df 10.100.0.4
Dec 13 08:22:20 compute-0 ovn_controller[148476]: 2025-12-13T08:22:20Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:29:df 10.100.0.4
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.242 248514 DEBUG nova.compute.manager [req-eea39376-436c-421e-b12d-8a6c57cff8db req-1f7bd137-7870-4cb9-8bc8-a164c84f44e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-unplugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.243 248514 DEBUG oslo_concurrency.lockutils [req-eea39376-436c-421e-b12d-8a6c57cff8db req-1f7bd137-7870-4cb9-8bc8-a164c84f44e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.243 248514 DEBUG oslo_concurrency.lockutils [req-eea39376-436c-421e-b12d-8a6c57cff8db req-1f7bd137-7870-4cb9-8bc8-a164c84f44e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.243 248514 DEBUG oslo_concurrency.lockutils [req-eea39376-436c-421e-b12d-8a6c57cff8db req-1f7bd137-7870-4cb9-8bc8-a164c84f44e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.243 248514 DEBUG nova.compute.manager [req-eea39376-436c-421e-b12d-8a6c57cff8db req-1f7bd137-7870-4cb9-8bc8-a164c84f44e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-unplugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.243 248514 DEBUG nova.compute.manager [req-eea39376-436c-421e-b12d-8a6c57cff8db req-1f7bd137-7870-4cb9-8bc8-a164c84f44e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-unplugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.270 248514 DEBUG nova.network.neutron [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updated VIF entry in instance network info cache for port cea82a7d-e92d-4ac6-ba47-854ec9905fd2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.270 248514 DEBUG nova.network.neutron [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.300 248514 DEBUG oslo_concurrency.lockutils [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029750218993480873 of space, bias 1.0, pg target 0.8925065698044262 quantized to 32 (current 32)
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0021996237752399867 of space, bias 1.0, pg target 0.659887132571996 quantized to 32 (current 32)
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.001002730076268e-07 of space, bias 4.0, pg target 0.0009601203276091521 quantized to 16 (current 32)
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:22:20 compute-0 podman[281991]: 2025-12-13 08:22:20.860044484 +0000 UTC m=+0.834882424 container remove 0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 08:22:20 compute-0 ceph-mon[76537]: pgmap v1704: 321 pgs: 321 active+clean; 548 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Dec 13 08:22:20 compute-0 ceph-mon[76537]: osdmap e180: 3 total, 3 up, 3 in
Dec 13 08:22:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.871 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[99fdbcd2-8918-4e4e-92a6-7e36f877b4dc]: (4, ('Sat Dec 13 08:22:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212 (0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56)\n0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56\nSat Dec 13 08:22:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212 (0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56)\n0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.874 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[503e9c9b-69e4-4ab8-adc7-f76cebf9cba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.877 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647203e6-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.881 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:20 compute-0 kernel: tap647203e6-d0: left promiscuous mode
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.902 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d25c96c8-3276-4de5-a8d1-cd931dc5af98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.919 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4494864e-0092-4b53-bab7-79e39767733f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.921 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[87ad7547-7437-425a-951d-72c85dba274c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.936 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/disk.config 1c17e7b7-7062-48d2-a30f-b387929244d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.929s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:20 compute-0 nova_compute[248510]: 2025-12-13 08:22:20.937 248514 INFO nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Deleting local config drive /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/disk.config because it was imported into RBD.
Dec 13 08:22:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.940 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe096d85-41db-4fee-aa9d-14bcbb7d9823]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671034, 'reachable_time': 21712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282027, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.943 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:22:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d647203e6\x2ddb87\x2d411a\x2d8603\x2ded4b91cb4212.mount: Deactivated successfully.
Dec 13 08:22:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.943 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b331b650-ceb7-43d3-82d4-003dc266f284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.946 158419 INFO neutron.agent.ovn.metadata.agent [-] Port cea82a7d-e92d-4ac6-ba47-854ec9905fd2 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis
Dec 13 08:22:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.947 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:22:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.969 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af9f7781-5dae-4733-a268-d2b2e26ff90c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 548 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 132 op/s
Dec 13 08:22:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.006 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[95b5758a-2e5d-432a-90ea-4449f61a4152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.011 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6fa750-070c-4437-ad1f-687eb94f5a54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:21 compute-0 systemd-machined[210538]: New machine qemu-33-instance-0000001c.
Dec 13 08:22:21 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-0000001c.
Dec 13 08:22:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.042 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[33102e56-63a3-43f2-ba18-8c2fa8fc2803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.067 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f27f64c3-371b-4a1e-92ee-f6f2665ab444]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282043, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:21 compute-0 nova_compute[248510]: 2025-12-13 08:22:21.087 248514 INFO nova.virt.libvirt.driver [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Deleting instance files /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6_del
Dec 13 08:22:21 compute-0 nova_compute[248510]: 2025-12-13 08:22:21.088 248514 INFO nova.virt.libvirt.driver [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Deletion of /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6_del complete
Dec 13 08:22:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.093 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b29fee50-25cb-4ae2-9cb5-f5a11c1c1795]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665441, 'tstamp': 665441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282045, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665444, 'tstamp': 665444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282045, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.095 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:21 compute-0 nova_compute[248510]: 2025-12-13 08:22:21.097 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:21 compute-0 nova_compute[248510]: 2025-12-13 08:22:21.098 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.099 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.099 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.099 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.099 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:21 compute-0 nova_compute[248510]: 2025-12-13 08:22:21.177 248514 INFO nova.compute.manager [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Took 3.19 seconds to destroy the instance on the hypervisor.
Dec 13 08:22:21 compute-0 nova_compute[248510]: 2025-12-13 08:22:21.178 248514 DEBUG oslo.service.loopingcall [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:22:21 compute-0 nova_compute[248510]: 2025-12-13 08:22:21.178 248514 DEBUG nova.compute.manager [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:22:21 compute-0 nova_compute[248510]: 2025-12-13 08:22:21.178 248514 DEBUG nova.network.neutron [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.349 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614142.349334, 1c17e7b7-7062-48d2-a30f-b387929244d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.350 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] VM Resumed (Lifecycle Event)
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.353 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.354 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.358 248514 INFO nova.virt.libvirt.driver [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Instance spawned successfully.
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.359 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:22:22 compute-0 ceph-mon[76537]: pgmap v1706: 321 pgs: 321 active+clean; 548 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 132 op/s
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.398 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.406 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.410 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.410 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.411 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.411 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.412 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.413 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.450 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.451 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614142.3525267, 1c17e7b7-7062-48d2-a30f-b387929244d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.451 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] VM Started (Lifecycle Event)
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.483 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.488 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.496 248514 INFO nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Took 7.50 seconds to spawn the instance on the hypervisor.
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.496 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.524 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.563 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.564 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.564 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.564 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.564 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.565 248514 WARNING nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received unexpected event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e for instance with vm_state active and task_state deleting.
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.565 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-unplugged-33293def-d398-4fee-865f-a61997489b67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.565 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.566 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.566 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.566 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-unplugged-33293def-d398-4fee-865f-a61997489b67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.566 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-unplugged-33293def-d398-4fee-865f-a61997489b67 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.567 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.567 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.567 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.567 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.568 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.568 248514 WARNING nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received unexpected event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 for instance with vm_state active and task_state deleting.
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.568 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-unplugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.568 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.569 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.569 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.569 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-unplugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.569 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-unplugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.569 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.570 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.570 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.570 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.570 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.571 248514 WARNING nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received unexpected event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 for instance with vm_state active and task_state deleting.
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.571 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.571 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.571 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.571 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.572 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.572 248514 WARNING nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 for instance with vm_state active and task_state None.
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.572 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.572 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.573 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.573 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.573 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.573 248514 WARNING nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 for instance with vm_state active and task_state None.
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.576 248514 INFO nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Took 9.05 seconds to build instance.
Dec 13 08:22:22 compute-0 nova_compute[248510]: 2025-12-13 08:22:22.602 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "1c17e7b7-7062-48d2-a30f-b387929244d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 492 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Dec 13 08:22:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Dec 13 08:22:23 compute-0 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 08:22:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Dec 13 08:22:23 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Dec 13 08:22:23 compute-0 nova_compute[248510]: 2025-12-13 08:22:23.529 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:24 compute-0 nova_compute[248510]: 2025-12-13 08:22:24.272 248514 DEBUG oslo_concurrency.lockutils [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-3861ef01-74c8-4321-b36e-79090caaf6dc" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:24 compute-0 nova_compute[248510]: 2025-12-13 08:22:24.273 248514 DEBUG oslo_concurrency.lockutils [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-3861ef01-74c8-4321-b36e-79090caaf6dc" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:24 compute-0 nova_compute[248510]: 2025-12-13 08:22:24.273 248514 DEBUG nova.objects.instance [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:24 compute-0 nova_compute[248510]: 2025-12-13 08:22:24.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:24 compute-0 ceph-mon[76537]: pgmap v1707: 321 pgs: 321 active+clean; 492 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Dec 13 08:22:24 compute-0 ceph-mon[76537]: osdmap e181: 3 total, 3 up, 3 in
Dec 13 08:22:24 compute-0 nova_compute[248510]: 2025-12-13 08:22:24.833 248514 DEBUG nova.network.neutron [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:24 compute-0 nova_compute[248510]: 2025-12-13 08:22:24.856 248514 INFO nova.compute.manager [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Took 3.68 seconds to deallocate network for instance.
Dec 13 08:22:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:22:24 compute-0 nova_compute[248510]: 2025-12-13 08:22:24.904 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:24 compute-0 nova_compute[248510]: 2025-12-13 08:22:24.905 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:24 compute-0 nova_compute[248510]: 2025-12-13 08:22:24.918 248514 DEBUG nova.objects.instance [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_requests' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:24 compute-0 nova_compute[248510]: 2025-12-13 08:22:24.934 248514 DEBUG nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:22:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 423 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.7 MiB/s wr, 310 op/s
Dec 13 08:22:25 compute-0 nova_compute[248510]: 2025-12-13 08:22:25.035 248514 DEBUG oslo_concurrency.processutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:25 compute-0 nova_compute[248510]: 2025-12-13 08:22:25.314 248514 DEBUG nova.policy [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:22:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:22:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/321899503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:25 compute-0 nova_compute[248510]: 2025-12-13 08:22:25.875 248514 DEBUG oslo_concurrency.processutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.840s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:25 compute-0 nova_compute[248510]: 2025-12-13 08:22:25.884 248514 DEBUG nova.compute.provider_tree [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:22:25 compute-0 nova_compute[248510]: 2025-12-13 08:22:25.903 248514 DEBUG nova.scheduler.client.report [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:22:25 compute-0 nova_compute[248510]: 2025-12-13 08:22:25.938 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:25 compute-0 nova_compute[248510]: 2025-12-13 08:22:25.954 248514 DEBUG nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Successfully updated port: 3861ef01-74c8-4321-b36e-79090caaf6dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:22:25 compute-0 ceph-mon[76537]: pgmap v1709: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 423 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.7 MiB/s wr, 310 op/s
Dec 13 08:22:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/321899503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:25 compute-0 nova_compute[248510]: 2025-12-13 08:22:25.981 248514 DEBUG oslo_concurrency.lockutils [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:25 compute-0 nova_compute[248510]: 2025-12-13 08:22:25.982 248514 DEBUG oslo_concurrency.lockutils [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:25 compute-0 nova_compute[248510]: 2025-12-13 08:22:25.982 248514 DEBUG nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:22:25 compute-0 nova_compute[248510]: 2025-12-13 08:22:25.984 248514 INFO nova.scheduler.client.report [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Deleted allocations for instance f983ed7f-13a4-496d-b8e9-60768d90efe6
Dec 13 08:22:26 compute-0 nova_compute[248510]: 2025-12-13 08:22:26.064 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:26 compute-0 nova_compute[248510]: 2025-12-13 08:22:26.192 248514 WARNING nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it
Dec 13 08:22:26 compute-0 nova_compute[248510]: 2025-12-13 08:22:26.193 248514 WARNING nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it
Dec 13 08:22:26 compute-0 nova_compute[248510]: 2025-12-13 08:22:26.193 248514 WARNING nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it
Dec 13 08:22:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 423 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 22 KiB/s wr, 157 op/s
Dec 13 08:22:27 compute-0 nova_compute[248510]: 2025-12-13 08:22:27.319 248514 DEBUG nova.compute.manager [req-7ba32e55-f01d-4520-9484-bc109c1281a6 req-ebc34e57-6bb1-44d3-8d0f-0a2098cf7898 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-deleted-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:27 compute-0 nova_compute[248510]: 2025-12-13 08:22:27.319 248514 DEBUG nova.compute.manager [req-7ba32e55-f01d-4520-9484-bc109c1281a6 req-ebc34e57-6bb1-44d3-8d0f-0a2098cf7898 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-deleted-b07d4534-1cb5-41ec-b0c4-3e820159fe8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:27 compute-0 nova_compute[248510]: 2025-12-13 08:22:27.422 248514 DEBUG nova.compute.manager [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-changed-3861ef01-74c8-4321-b36e-79090caaf6dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:27 compute-0 nova_compute[248510]: 2025-12-13 08:22:27.422 248514 DEBUG nova.compute.manager [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing instance network info cache due to event network-changed-3861ef01-74c8-4321-b36e-79090caaf6dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:22:27 compute-0 nova_compute[248510]: 2025-12-13 08:22:27.422 248514 DEBUG oslo_concurrency.lockutils [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:28 compute-0 ceph-mon[76537]: pgmap v1710: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 423 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 22 KiB/s wr, 157 op/s
Dec 13 08:22:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Dec 13 08:22:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Dec 13 08:22:28 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Dec 13 08:22:28 compute-0 nova_compute[248510]: 2025-12-13 08:22:28.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 372 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 29 KiB/s wr, 224 op/s
Dec 13 08:22:29 compute-0 nova_compute[248510]: 2025-12-13 08:22:29.497 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:29 compute-0 ceph-mon[76537]: osdmap e182: 3 total, 3 up, 3 in
Dec 13 08:22:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:22:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Dec 13 08:22:29 compute-0 ovn_controller[148476]: 2025-12-13T08:22:29Z|00211|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 08:22:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Dec 13 08:22:30 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Dec 13 08:22:30 compute-0 nova_compute[248510]: 2025-12-13 08:22:30.050 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:30 compute-0 ceph-mon[76537]: pgmap v1712: 321 pgs: 321 active+clean; 372 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 29 KiB/s wr, 224 op/s
Dec 13 08:22:30 compute-0 ceph-mon[76537]: osdmap e183: 3 total, 3 up, 3 in
Dec 13 08:22:30 compute-0 nova_compute[248510]: 2025-12-13 08:22:30.906 248514 DEBUG nova.compute.manager [req-3377f1a2-c6fb-43d5-af2b-dfc72f571f4b req-2bc06b3e-f419-4eb6-91db-a32283eff8d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-deleted-33293def-d398-4fee-865f-a61997489b67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 372 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 10 KiB/s wr, 207 op/s
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.079 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "9b6188af-75f0-4213-89c2-bd3eb72960b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.080 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "9b6188af-75f0-4213-89c2-bd3eb72960b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.080 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "9b6188af-75f0-4213-89c2-bd3eb72960b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.081 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "9b6188af-75f0-4213-89c2-bd3eb72960b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.081 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "9b6188af-75f0-4213-89c2-bd3eb72960b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.082 248514 INFO nova.compute.manager [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Terminating instance
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.083 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "refresh_cache-9b6188af-75f0-4213-89c2-bd3eb72960b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.083 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquired lock "refresh_cache-9b6188af-75f0-4213-89c2-bd3eb72960b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.083 248514 DEBUG nova.network.neutron [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.305 248514 DEBUG nova.network.neutron [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.673 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "1c17e7b7-7062-48d2-a30f-b387929244d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.674 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "1c17e7b7-7062-48d2-a30f-b387929244d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.674 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "1c17e7b7-7062-48d2-a30f-b387929244d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.675 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "1c17e7b7-7062-48d2-a30f-b387929244d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.675 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "1c17e7b7-7062-48d2-a30f-b387929244d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.676 248514 INFO nova.compute.manager [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Terminating instance
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.677 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "refresh_cache-1c17e7b7-7062-48d2-a30f-b387929244d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.677 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquired lock "refresh_cache-1c17e7b7-7062-48d2-a30f-b387929244d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.678 248514 DEBUG nova.network.neutron [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.816 248514 DEBUG nova.network.neutron [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.835 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Releasing lock "refresh_cache-9b6188af-75f0-4213-89c2-bd3eb72960b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.836 248514 DEBUG nova.compute.manager [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:22:31 compute-0 nova_compute[248510]: 2025-12-13 08:22:31.859 248514 DEBUG nova.network.neutron [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:22:32 compute-0 ceph-mon[76537]: pgmap v1714: 321 pgs: 321 active+clean; 372 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 10 KiB/s wr, 207 op/s
Dec 13 08:22:32 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Dec 13 08:22:32 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001a.scope: Consumed 14.604s CPU time.
Dec 13 08:22:32 compute-0 systemd-machined[210538]: Machine qemu-31-instance-0000001a terminated.
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.064 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Instance destroyed successfully.
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.064 248514 DEBUG nova.objects.instance [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lazy-loading 'resources' on Instance uuid 9b6188af-75f0-4213-89c2-bd3eb72960b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.322 248514 DEBUG nova.network.neutron [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.334 248514 INFO nova.virt.libvirt.driver [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Deleting instance files /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7_del
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.335 248514 INFO nova.virt.libvirt.driver [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Deletion of /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7_del complete
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.347 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Releasing lock "refresh_cache-1c17e7b7-7062-48d2-a30f-b387929244d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.347 248514 DEBUG nova.compute.manager [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:22:32 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Dec 13 08:22:32 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001c.scope: Consumed 10.831s CPU time.
Dec 13 08:22:32 compute-0 systemd-machined[210538]: Machine qemu-33-instance-0000001c terminated.
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.411 248514 INFO nova.compute.manager [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Took 0.57 seconds to destroy the instance on the hypervisor.
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.411 248514 DEBUG oslo.service.loopingcall [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.411 248514 DEBUG nova.compute.manager [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.412 248514 DEBUG nova.network.neutron [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.572 248514 INFO nova.virt.libvirt.driver [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Instance destroyed successfully.
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.572 248514 DEBUG nova.objects.instance [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lazy-loading 'resources' on Instance uuid 1c17e7b7-7062-48d2-a30f-b387929244d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.840 248514 INFO nova.virt.libvirt.driver [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Deleting instance files /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9_del
Dec 13 08:22:32 compute-0 nova_compute[248510]: 2025-12-13 08:22:32.841 248514 INFO nova.virt.libvirt.driver [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Deletion of /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9_del complete
Dec 13 08:22:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 324 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 829 KiB/s rd, 15 KiB/s wr, 109 op/s
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.294 248514 DEBUG nova.network.neutron [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.302 248514 INFO nova.compute.manager [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Took 0.95 seconds to destroy the instance on the hypervisor.
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.302 248514 DEBUG oslo.service.loopingcall [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.303 248514 DEBUG nova.compute.manager [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.303 248514 DEBUG nova.network.neutron [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.314 248514 DEBUG nova.network.neutron [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.335 248514 INFO nova.compute.manager [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Took 0.92 seconds to deallocate network for instance.
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.340 248514 DEBUG nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.382 248514 DEBUG oslo_concurrency.lockutils [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.384 248514 DEBUG oslo_concurrency.lockutils [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.385 248514 DEBUG nova.network.neutron [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing network info cache for port 3861ef01-74c8-4321-b36e-79090caaf6dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.389 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.389 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.391 248514 DEBUG nova.virt.libvirt.vif [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.391 248514 DEBUG nova.network.os_vif_util [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.392 248514 DEBUG nova.network.os_vif_util [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.392 248514 DEBUG os_vif [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.393 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.394 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.398 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.398 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3861ef01-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.399 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3861ef01-74, col_values=(('external_ids', {'iface-id': '3861ef01-74c8-4321-b36e-79090caaf6dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:de:08', 'vm-uuid': '2e309dc2-3cab-4ecf-8be7-eab85790a0da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:33 compute-0 NetworkManager[50376]: <info>  [1765614153.4016] manager: (tap3861ef01-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.400 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.414 248514 INFO os_vif [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74')
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.415 248514 DEBUG nova.virt.libvirt.vif [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.415 248514 DEBUG nova.network.os_vif_util [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.416 248514 DEBUG nova.network.os_vif_util [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.418 248514 DEBUG nova.virt.libvirt.guest [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] attach device xml: <interface type="ethernet">
Dec 13 08:22:33 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:82:de:08"/>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   <target dev="tap3861ef01-74"/>
Dec 13 08:22:33 compute-0 nova_compute[248510]: </interface>
Dec 13 08:22:33 compute-0 nova_compute[248510]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 08:22:33 compute-0 kernel: tap3861ef01-74: entered promiscuous mode
Dec 13 08:22:33 compute-0 systemd-udevd[282115]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:22:33 compute-0 NetworkManager[50376]: <info>  [1765614153.4334] manager: (tap3861ef01-74): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Dec 13 08:22:33 compute-0 ovn_controller[148476]: 2025-12-13T08:22:33Z|00212|binding|INFO|Claiming lport 3861ef01-74c8-4321-b36e-79090caaf6dc for this chassis.
Dec 13 08:22:33 compute-0 ovn_controller[148476]: 2025-12-13T08:22:33Z|00213|binding|INFO|3861ef01-74c8-4321-b36e-79090caaf6dc: Claiming fa:16:3e:82:de:08 10.100.0.10
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.434 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.444 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:de:08 10.100.0.10'], port_security=['fa:16:3e:82:de:08 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-2007287726', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-2007287726', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3861ef01-74c8-4321-b36e-79090caaf6dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.445 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3861ef01-74c8-4321-b36e-79090caaf6dc in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.447 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:22:33 compute-0 NetworkManager[50376]: <info>  [1765614153.4502] device (tap3861ef01-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:22:33 compute-0 NetworkManager[50376]: <info>  [1765614153.4513] device (tap3861ef01-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:22:33 compute-0 ovn_controller[148476]: 2025-12-13T08:22:33Z|00214|binding|INFO|Setting lport 3861ef01-74c8-4321-b36e-79090caaf6dc ovn-installed in OVS
Dec 13 08:22:33 compute-0 ovn_controller[148476]: 2025-12-13T08:22:33Z|00215|binding|INFO|Setting lport 3861ef01-74c8-4321-b36e-79090caaf6dc up in Southbound
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.457 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.460 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614138.460027, f983ed7f-13a4-496d-b8e9-60768d90efe6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.461 248514 INFO nova.compute.manager [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] VM Stopped (Lifecycle Event)
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.467 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64036a02-ce55-4d75-8480-18d07309e263]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.489 248514 DEBUG nova.compute.manager [None req-9a3ca6fd-57e4-45cf-908a-2f8cbecefcc9 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.506 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[029f7f9c-7ecb-43fd-a2c9-ea79cf48678a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.510 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[01d98796-7671-4835-ab77-f0083b560601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.518 248514 DEBUG nova.virt.libvirt.driver [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.519 248514 DEBUG nova.virt.libvirt.driver [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.519 248514 DEBUG nova.virt.libvirt.driver [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:54:c0:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.519 248514 DEBUG nova.virt.libvirt.driver [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:96:b5:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.520 248514 DEBUG nova.virt.libvirt.driver [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:3a:29:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.520 248514 DEBUG nova.virt.libvirt.driver [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:82:de:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.539 248514 DEBUG oslo_concurrency.processutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.545 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8031ff-0586-4e10-9190-84ea7e398e6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.568 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dc044f6d-fceb-4868-a863-2f0dac2de0bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282172, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.573 248514 DEBUG nova.virt.libvirt.guest [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:33 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:22:33</nova:creationTime>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:22:33 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 08:22:33 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     <nova:port uuid="9516b135-3bb5-4da4-942f-d044cad93bd4">
Dec 13 08:22:33 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 08:22:33 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 08:22:33 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:22:33 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:33 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:22:33 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:22:33 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.592 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[902eaf18-6d8a-4de4-8d75-a4e3d3514d6b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665441, 'tstamp': 665441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282173, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665444, 'tstamp': 665444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282173, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.594 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.596 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.597 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.598 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.598 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.598 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.610 248514 DEBUG oslo_concurrency.lockutils [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-3861ef01-74c8-4321-b36e-79090caaf6dc" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:33 compute-0 nova_compute[248510]: 2025-12-13 08:22:33.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:22:34 compute-0 ceph-mon[76537]: pgmap v1715: 321 pgs: 321 active+clean; 324 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 829 KiB/s rd, 15 KiB/s wr, 109 op/s
Dec 13 08:22:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:22:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2356560262' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.097 248514 DEBUG oslo_concurrency.processutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.104 248514 DEBUG nova.compute.provider_tree [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.136 248514 DEBUG nova.scheduler.client.report [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.167 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.209 248514 INFO nova.scheduler.client.report [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Deleted allocations for instance 9b6188af-75f0-4213-89c2-bd3eb72960b7
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.226 248514 DEBUG nova.network.neutron [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.246 248514 DEBUG nova.network.neutron [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.273 248514 INFO nova.compute.manager [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Took 0.97 seconds to deallocate network for instance.
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.282 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "9b6188af-75f0-4213-89c2-bd3eb72960b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.339 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.340 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.427 248514 DEBUG oslo_concurrency.processutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.575 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "07413df5-0bb8-42c2-95ff-13458d598139" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.575 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "07413df5-0bb8-42c2-95ff-13458d598139" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.576 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "07413df5-0bb8-42c2-95ff-13458d598139-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.576 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "07413df5-0bb8-42c2-95ff-13458d598139-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.577 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "07413df5-0bb8-42c2-95ff-13458d598139-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.579 248514 INFO nova.compute.manager [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Terminating instance
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.580 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "refresh_cache-07413df5-0bb8-42c2-95ff-13458d598139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.580 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquired lock "refresh_cache-07413df5-0bb8-42c2-95ff-13458d598139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.580 248514 DEBUG nova.network.neutron [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.663 248514 DEBUG nova.compute.manager [req-b65fef49-129b-4b68-8fb2-81160e02db8b req-622ad759-d378-4d74-b046-7210e8d5af37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-3861ef01-74c8-4321-b36e-79090caaf6dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.663 248514 DEBUG oslo_concurrency.lockutils [req-b65fef49-129b-4b68-8fb2-81160e02db8b req-622ad759-d378-4d74-b046-7210e8d5af37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.663 248514 DEBUG oslo_concurrency.lockutils [req-b65fef49-129b-4b68-8fb2-81160e02db8b req-622ad759-d378-4d74-b046-7210e8d5af37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.664 248514 DEBUG oslo_concurrency.lockutils [req-b65fef49-129b-4b68-8fb2-81160e02db8b req-622ad759-d378-4d74-b046-7210e8d5af37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.664 248514 DEBUG nova.compute.manager [req-b65fef49-129b-4b68-8fb2-81160e02db8b req-622ad759-d378-4d74-b046-7210e8d5af37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-3861ef01-74c8-4321-b36e-79090caaf6dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.664 248514 WARNING nova.compute.manager [req-b65fef49-129b-4b68-8fb2-81160e02db8b req-622ad759-d378-4d74-b046-7210e8d5af37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-3861ef01-74c8-4321-b36e-79090caaf6dc for instance with vm_state active and task_state None.
Dec 13 08:22:34 compute-0 nova_compute[248510]: 2025-12-13 08:22:34.752 248514 DEBUG nova.network.neutron [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:22:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:22:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Dec 13 08:22:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Dec 13 08:22:34 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Dec 13 08:22:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:22:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1939969010' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 200 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 13 KiB/s wr, 141 op/s
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.020 248514 DEBUG oslo_concurrency.processutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.027 248514 DEBUG nova.compute.provider_tree [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:22:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2356560262' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:35 compute-0 ceph-mon[76537]: osdmap e184: 3 total, 3 up, 3 in
Dec 13 08:22:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1939969010' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.046 248514 DEBUG nova.scheduler.client.report [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:22:35 compute-0 ovn_controller[148476]: 2025-12-13T08:22:35Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:82:de:08 10.100.0.10
Dec 13 08:22:35 compute-0 ovn_controller[148476]: 2025-12-13T08:22:35Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:de:08 10.100.0.10
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.077 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.101 248514 INFO nova.scheduler.client.report [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Deleted allocations for instance 1c17e7b7-7062-48d2-a30f-b387929244d9
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.169 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "1c17e7b7-7062-48d2-a30f-b387929244d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.279 248514 DEBUG nova.network.neutron [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updated VIF entry in instance network info cache for port 3861ef01-74c8-4321-b36e-79090caaf6dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.280 248514 DEBUG nova.network.neutron [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.300 248514 DEBUG oslo_concurrency.lockutils [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.332 248514 DEBUG nova.network.neutron [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.360 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Releasing lock "refresh_cache-07413df5-0bb8-42c2-95ff-13458d598139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.361 248514 DEBUG nova.compute.manager [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.373 248514 DEBUG oslo_concurrency.lockutils [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-9516b135-3bb5-4da4-942f-d044cad93bd4" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.374 248514 DEBUG oslo_concurrency.lockutils [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-9516b135-3bb5-4da4-942f-d044cad93bd4" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.400 248514 DEBUG nova.objects.instance [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.423 248514 DEBUG nova.virt.libvirt.vif [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.424 248514 DEBUG nova.network.os_vif_util [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.426 248514 DEBUG nova.network.os_vif_util [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.432 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.435 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.440 248514 DEBUG nova.virt.libvirt.driver [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Attempting to detach device tap9516b135-3b from instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.441 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:96:b5:6d"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <target dev="tap9516b135-3b"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]: </interface>
Dec 13 08:22:35 compute-0 nova_compute[248510]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.448 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:22:35 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000019.scope: Deactivated successfully.
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.453 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface>not found in domain: <domain type='kvm' id='29'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <name>instance-00000018</name>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <uuid>2e309dc2-3cab-4ecf-8be7-eab85790a0da</uuid>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:22:33</nova:creationTime>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:22:35 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000019.scope: Consumed 14.801s CPU time.
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:port uuid="9516b135-3bb5-4da4-942f-d044cad93bd4">
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:22:35 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <resource>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </resource>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <system>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <entry name='serial'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <entry name='uuid'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </system>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <os>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </os>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <features>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </features>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk' index='2'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config' index='1'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:54:c0:80'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target dev='tap10aa2df4-a7'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:96:b5:6d'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target dev='tap9516b135-3b'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='net1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:3a:29:df'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target dev='tapcea82a7d-e9'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='net2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:82:de:08'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target dev='tap3861ef01-74'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='net3'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <source path='/dev/pts/4'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       </target>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/4'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <source path='/dev/pts/4'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </console>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </input>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </input>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </input>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </graphics>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <video>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </video>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c312,c966</label>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c312,c966</imagelabel>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:22:35 compute-0 nova_compute[248510]: </domain>
Dec 13 08:22:35 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.454 248514 INFO nova.virt.libvirt.driver [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap9516b135-3b from instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da from the persistent domain config.
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.454 248514 DEBUG nova.virt.libvirt.driver [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] (1/8): Attempting to detach device tap9516b135-3b with device alias net1 from instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.456 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:96:b5:6d"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <target dev="tap9516b135-3b"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]: </interface>
Dec 13 08:22:35 compute-0 nova_compute[248510]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 08:22:35 compute-0 systemd-machined[210538]: Machine qemu-30-instance-00000019 terminated.
Dec 13 08:22:35 compute-0 kernel: tap9516b135-3b (unregistering): left promiscuous mode
Dec 13 08:22:35 compute-0 NetworkManager[50376]: <info>  [1765614155.5676] device (tap9516b135-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:22:35 compute-0 ovn_controller[148476]: 2025-12-13T08:22:35Z|00216|binding|INFO|Releasing lport 9516b135-3bb5-4da4-942f-d044cad93bd4 from this chassis (sb_readonly=0)
Dec 13 08:22:35 compute-0 ovn_controller[148476]: 2025-12-13T08:22:35Z|00217|binding|INFO|Setting lport 9516b135-3bb5-4da4-942f-d044cad93bd4 down in Southbound
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:35 compute-0 ovn_controller[148476]: 2025-12-13T08:22:35Z|00218|binding|INFO|Removing iface tap9516b135-3b ovn-installed in OVS
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.580 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.585 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:b5:6d 10.100.0.3'], port_security=['fa:16:3e:96:b5:6d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=9516b135-3bb5-4da4-942f-d044cad93bd4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.586 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Received event <DeviceRemovedEvent: 1765614155.585611, 2e309dc2-3cab-4ecf-8be7-eab85790a0da => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.586 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 9516b135-3bb5-4da4-942f-d044cad93bd4 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.588 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.588 248514 DEBUG nova.virt.libvirt.driver [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Start waiting for the detach event from libvirt for device tap9516b135-3b with device alias net1 for instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.588 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.593 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface>not found in domain: <domain type='kvm' id='29'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <name>instance-00000018</name>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <uuid>2e309dc2-3cab-4ecf-8be7-eab85790a0da</uuid>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:22:33</nova:creationTime>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:port uuid="9516b135-3bb5-4da4-942f-d044cad93bd4">
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:22:35 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <resource>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </resource>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <system>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <entry name='serial'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <entry name='uuid'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </system>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <os>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </os>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <features>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </features>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk' index='2'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config' index='1'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:54:c0:80'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target dev='tap10aa2df4-a7'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:3a:29:df'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target dev='tapcea82a7d-e9'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='net2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:82:de:08'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target dev='tap3861ef01-74'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='net3'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <source path='/dev/pts/4'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       </target>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/4'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <source path='/dev/pts/4'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </console>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </input>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </input>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </input>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </graphics>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <video>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </video>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c312,c966</label>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c312,c966</imagelabel>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:22:35 compute-0 nova_compute[248510]: </domain>
Dec 13 08:22:35 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.594 248514 INFO nova.virt.libvirt.driver [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap9516b135-3b from instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da from the live domain config.
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.595 248514 DEBUG nova.virt.libvirt.vif [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.595 248514 DEBUG nova.network.os_vif_util [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.596 248514 DEBUG nova.network.os_vif_util [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.597 248514 DEBUG os_vif [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.599 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.600 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9516b135-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.602 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.605 248514 INFO os_vif [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b')
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.607 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:22:35</nova:creationTime>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 08:22:35 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:22:35 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:35 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:22:35 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:22:35 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.608 248514 INFO nova.virt.libvirt.driver [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Instance destroyed successfully.
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.608 248514 DEBUG nova.objects.instance [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lazy-loading 'resources' on Instance uuid 07413df5-0bb8-42c2-95ff-13458d598139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.608 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1c5ab4-4533-483d-8f56-d1189ba6dd30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.647 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[701b7303-204f-46c6-9185-73785c05ff89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.651 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b0eccf63-bbcb-40db-9ebe-326d6c15df13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.690 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a482edcf-2e75-48fd-a53c-fb59ae512a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.711 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0c8aec-8d00-47af-946a-d4526e28ce5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282248, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.733 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d829feca-b34a-40e5-9c90-7218b470aa48]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665441, 'tstamp': 665441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282249, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665444, 'tstamp': 665444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282249, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.736 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.739 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.741 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.741 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.742 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.742 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.883 248514 DEBUG nova.compute.manager [req-181eb090-e259-4925-9d2d-e1927a9ecb5d req-2715ca68-9f1c-48b8-865f-c6289282d593 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-unplugged-9516b135-3bb5-4da4-942f-d044cad93bd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.884 248514 DEBUG oslo_concurrency.lockutils [req-181eb090-e259-4925-9d2d-e1927a9ecb5d req-2715ca68-9f1c-48b8-865f-c6289282d593 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.884 248514 DEBUG oslo_concurrency.lockutils [req-181eb090-e259-4925-9d2d-e1927a9ecb5d req-2715ca68-9f1c-48b8-865f-c6289282d593 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.884 248514 DEBUG oslo_concurrency.lockutils [req-181eb090-e259-4925-9d2d-e1927a9ecb5d req-2715ca68-9f1c-48b8-865f-c6289282d593 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.885 248514 DEBUG nova.compute.manager [req-181eb090-e259-4925-9d2d-e1927a9ecb5d req-2715ca68-9f1c-48b8-865f-c6289282d593 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-unplugged-9516b135-3bb5-4da4-942f-d044cad93bd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.885 248514 WARNING nova.compute.manager [req-181eb090-e259-4925-9d2d-e1927a9ecb5d req-2715ca68-9f1c-48b8-865f-c6289282d593 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-unplugged-9516b135-3bb5-4da4-942f-d044cad93bd4 for instance with vm_state active and task_state None.
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.944 248514 INFO nova.virt.libvirt.driver [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Deleting instance files /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139_del
Dec 13 08:22:35 compute-0 nova_compute[248510]: 2025-12-13 08:22:35.945 248514 INFO nova.virt.libvirt.driver [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Deletion of /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139_del complete
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.014 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.014 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.015 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.020 248514 INFO nova.compute.manager [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Took 0.66 seconds to destroy the instance on the hypervisor.
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.020 248514 DEBUG oslo.service.loopingcall [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.020 248514 DEBUG nova.compute.manager [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.021 248514 DEBUG nova.network.neutron [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:22:36 compute-0 ceph-mon[76537]: pgmap v1717: 321 pgs: 321 active+clean; 200 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 13 KiB/s wr, 141 op/s
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.134 248514 DEBUG nova.network.neutron [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.150 248514 DEBUG nova.network.neutron [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.168 248514 INFO nova.compute.manager [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Took 0.15 seconds to deallocate network for instance.
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.219 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.220 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.297 248514 DEBUG oslo_concurrency.processutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.609 248514 DEBUG oslo_concurrency.lockutils [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.771 248514 DEBUG nova.compute.manager [req-2abf7e1f-2ec4-484b-943b-0236ce677969 req-2872deae-a0fe-4e83-8c4c-7936f5fa99d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-3861ef01-74c8-4321-b36e-79090caaf6dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.772 248514 DEBUG oslo_concurrency.lockutils [req-2abf7e1f-2ec4-484b-943b-0236ce677969 req-2872deae-a0fe-4e83-8c4c-7936f5fa99d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.772 248514 DEBUG oslo_concurrency.lockutils [req-2abf7e1f-2ec4-484b-943b-0236ce677969 req-2872deae-a0fe-4e83-8c4c-7936f5fa99d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.772 248514 DEBUG oslo_concurrency.lockutils [req-2abf7e1f-2ec4-484b-943b-0236ce677969 req-2872deae-a0fe-4e83-8c4c-7936f5fa99d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.772 248514 DEBUG nova.compute.manager [req-2abf7e1f-2ec4-484b-943b-0236ce677969 req-2872deae-a0fe-4e83-8c4c-7936f5fa99d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-3861ef01-74c8-4321-b36e-79090caaf6dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.772 248514 WARNING nova.compute.manager [req-2abf7e1f-2ec4-484b-943b-0236ce677969 req-2872deae-a0fe-4e83-8c4c-7936f5fa99d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-3861ef01-74c8-4321-b36e-79090caaf6dc for instance with vm_state active and task_state None.
Dec 13 08:22:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:22:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3226999218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.882 248514 DEBUG oslo_concurrency.processutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.889 248514 DEBUG nova.compute.provider_tree [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.906 248514 DEBUG nova.scheduler.client.report [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.937 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:36 compute-0 nova_compute[248510]: 2025-12-13 08:22:36.981 248514 INFO nova.scheduler.client.report [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Deleted allocations for instance 07413df5-0bb8-42c2-95ff-13458d598139
Dec 13 08:22:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 200 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 9.3 KiB/s wr, 93 op/s
Dec 13 08:22:37 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3226999218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:37 compute-0 nova_compute[248510]: 2025-12-13 08:22:37.073 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "07413df5-0bb8-42c2-95ff-13458d598139" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Dec 13 08:22:38 compute-0 ceph-mon[76537]: pgmap v1718: 321 pgs: 321 active+clean; 200 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 9.3 KiB/s wr, 93 op/s
Dec 13 08:22:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Dec 13 08:22:38 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.216 158419 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 930ebf1c-b554-4b96-90af-54fc159022b7 with type ""
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.218 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:de:08 10.100.0.10'], port_security=['fa:16:3e:82:de:08 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-2007287726', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-2007287726', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3861ef01-74c8-4321-b36e-79090caaf6dc) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.219 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3861ef01-74c8-4321-b36e-79090caaf6dc in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.220 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:22:38 compute-0 ovn_controller[148476]: 2025-12-13T08:22:38Z|00219|binding|INFO|Removing iface tap3861ef01-74 ovn-installed in OVS
Dec 13 08:22:38 compute-0 ovn_controller[148476]: 2025-12-13T08:22:38Z|00220|binding|INFO|Removing lport 3861ef01-74c8-4321-b36e-79090caaf6dc ovn-installed in OVS
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.224 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.236 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1df9cfbc-55c7-4844-86f2-f05170844610]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.240 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.270 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8bac078f-1d45-4cdd-9a1c-9e241ca4ed6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.274 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a696914f-9f99-4034-8c58-62f9b2a210d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.316 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ddc2a5-b892-4302-8478-8ca06cc9194c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.342 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5909248c-aaec-4926-b9b1-7ba914f800ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282278, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.363 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11f05204-06c3-4df9-a860-873c9a5834f6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665441, 'tstamp': 665441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282279, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665444, 'tstamp': 665444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282279, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.366 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.371 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.372 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.372 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.372 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.611 248514 DEBUG nova.compute.manager [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.611 248514 DEBUG oslo_concurrency.lockutils [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.612 248514 DEBUG oslo_concurrency.lockutils [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.612 248514 DEBUG oslo_concurrency.lockutils [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.612 248514 DEBUG nova.compute.manager [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.613 248514 WARNING nova.compute.manager [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 for instance with vm_state active and task_state None.
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.613 248514 DEBUG nova.compute.manager [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-deleted-9516b135-3bb5-4da4-942f-d044cad93bd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.614 248514 INFO nova.compute.manager [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Neutron deleted interface 9516b135-3bb5-4da4-942f-d044cad93bd4; detaching it from the instance and deleting it from the info cache
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.614 248514 DEBUG nova.network.neutron [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.646 248514 DEBUG nova.objects.instance [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'system_metadata' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.679 248514 DEBUG nova.objects.instance [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'flavor' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.703 248514 DEBUG nova.virt.libvirt.vif [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.703 248514 DEBUG nova.network.os_vif_util [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.704 248514 DEBUG nova.network.os_vif_util [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.708 248514 DEBUG nova.virt.libvirt.guest [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.712 248514 DEBUG nova.virt.libvirt.guest [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface>not found in domain: <domain type='kvm' id='29'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <name>instance-00000018</name>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <uuid>2e309dc2-3cab-4ecf-8be7-eab85790a0da</uuid>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:22:35</nova:creationTime>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:22:38 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <resource>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </resource>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <system>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <entry name='serial'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <entry name='uuid'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </system>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <os>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </os>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <features>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </features>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk' index='2'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config' index='1'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:54:c0:80'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target dev='tap10aa2df4-a7'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:3a:29:df'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target dev='tapcea82a7d-e9'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='net2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:82:de:08'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target dev='tap3861ef01-74'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='net3'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <source path='/dev/pts/4'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       </target>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/4'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <source path='/dev/pts/4'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </console>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </input>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </input>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </input>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </graphics>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <video>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </video>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c312,c966</label>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c312,c966</imagelabel>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:22:38 compute-0 nova_compute[248510]: </domain>
Dec 13 08:22:38 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.713 248514 DEBUG nova.virt.libvirt.guest [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.716 248514 DEBUG nova.virt.libvirt.guest [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface>not found in domain: <domain type='kvm' id='29'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <name>instance-00000018</name>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <uuid>2e309dc2-3cab-4ecf-8be7-eab85790a0da</uuid>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:22:35</nova:creationTime>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:22:38 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <resource>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </resource>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <system>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <entry name='serial'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <entry name='uuid'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </system>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <os>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </os>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <features>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </features>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk' index='2'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config' index='1'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:54:c0:80'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target dev='tap10aa2df4-a7'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:3a:29:df'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target dev='tapcea82a7d-e9'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='net2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:82:de:08'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target dev='tap3861ef01-74'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='net3'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <source path='/dev/pts/4'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       </target>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/4'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <source path='/dev/pts/4'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </console>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </input>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </input>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </input>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </graphics>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <video>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </video>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c312,c966</label>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c312,c966</imagelabel>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:22:38 compute-0 nova_compute[248510]: </domain>
Dec 13 08:22:38 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.717 248514 WARNING nova.virt.libvirt.driver [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Detaching interface fa:16:3e:96:b5:6d failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9516b135-3b' not found.
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.718 248514 DEBUG nova.virt.libvirt.vif [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.718 248514 DEBUG nova.network.os_vif_util [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.718 248514 DEBUG nova.network.os_vif_util [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.719 248514 DEBUG os_vif [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.720 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9516b135-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.721 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.724 248514 INFO os_vif [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b')
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.724 248514 DEBUG nova.virt.libvirt.guest [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:22:38</nova:creationTime>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 08:22:38 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:22:38 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:22:38 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:22:38 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:22:38 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.844 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.844 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.845 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.845 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.846 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.847 248514 INFO nova.compute.manager [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Terminating instance
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.848 248514 DEBUG nova.compute.manager [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:22:38 compute-0 kernel: tap10aa2df4-a7 (unregistering): left promiscuous mode
Dec 13 08:22:38 compute-0 NetworkManager[50376]: <info>  [1765614158.9035] device (tap10aa2df4-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:22:38 compute-0 ovn_controller[148476]: 2025-12-13T08:22:38Z|00221|binding|INFO|Releasing lport 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 from this chassis (sb_readonly=0)
Dec 13 08:22:38 compute-0 ovn_controller[148476]: 2025-12-13T08:22:38Z|00222|binding|INFO|Setting lport 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 down in Southbound
Dec 13 08:22:38 compute-0 ovn_controller[148476]: 2025-12-13T08:22:38Z|00223|binding|INFO|Removing iface tap10aa2df4-a7 ovn-installed in OVS
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.913 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.915 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.927 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:c0:80 10.100.0.7'], port_security=['fa:16:3e:54:c0:80 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71562f64-f92d-4728-bc7f-33bdc44249e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.929 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.931 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.931 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:38 compute-0 kernel: tapcea82a7d-e9 (unregistering): left promiscuous mode
Dec 13 08:22:38 compute-0 NetworkManager[50376]: <info>  [1765614158.9441] device (tapcea82a7d-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.948 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb2fce4-4900-4746-895f-3ab029e1eca6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:38 compute-0 ovn_controller[148476]: 2025-12-13T08:22:38Z|00224|binding|INFO|Releasing lport cea82a7d-e92d-4ac6-ba47-854ec9905fd2 from this chassis (sb_readonly=0)
Dec 13 08:22:38 compute-0 ovn_controller[148476]: 2025-12-13T08:22:38Z|00225|binding|INFO|Setting lport cea82a7d-e92d-4ac6-ba47-854ec9905fd2 down in Southbound
Dec 13 08:22:38 compute-0 ovn_controller[148476]: 2025-12-13T08:22:38Z|00226|binding|INFO|Removing iface tapcea82a7d-e9 ovn-installed in OVS
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.950 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.952 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.958 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:29:df 10.100.0.4'], port_security=['fa:16:3e:3a:29:df 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=cea82a7d-e92d-4ac6-ba47-854ec9905fd2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:38 compute-0 kernel: tap3861ef01-74 (unregistering): left promiscuous mode
Dec 13 08:22:38 compute-0 NetworkManager[50376]: <info>  [1765614158.9715] device (tap3861ef01-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:22:38 compute-0 nova_compute[248510]: 2025-12-13 08:22:38.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.991 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0fae3da0-0648-4d30-a146-e33aa1cf9129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.994 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[74aaa533-c558-45ff-9360-3e2f7764ef41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:39 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000018.scope: Deactivated successfully.
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.029 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b865823c-13b0-4111-a0aa-89c3f85a6688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:39 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000018.scope: Consumed 16.862s CPU time.
Dec 13 08:22:39 compute-0 systemd-machined[210538]: Machine qemu-29-instance-00000018 terminated.
Dec 13 08:22:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 121 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 25 KiB/s wr, 164 op/s
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.051 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[72a41ca1-6af6-4296-bec6-583003baf7b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282303, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:39 compute-0 NetworkManager[50376]: <info>  [1765614159.0704] manager: (tap10aa2df4-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.070 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c234e823-5d9c-4de6-9917-1f8d686b64cf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665441, 'tstamp': 665441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282304, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665444, 'tstamp': 665444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282304, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.072 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.074 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 NetworkManager[50376]: <info>  [1765614159.0821] manager: (tapcea82a7d-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.093 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.093 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.093 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.093 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.094 158419 INFO neutron.agent.ovn.metadata.agent [-] Port cea82a7d-e92d-4ac6-ba47-854ec9905fd2 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis
Dec 13 08:22:39 compute-0 NetworkManager[50376]: <info>  [1765614159.0951] manager: (tap3861ef01-74): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.095 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ca92864-3b70-4794-9db1-fa08128cef92, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fce347-4ff0-42a6-a9c5-95dd8e3ef20b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.097 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 namespace which is not needed anymore
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.108 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Instance destroyed successfully.
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.109 248514 DEBUG nova.objects.instance [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'resources' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:39 compute-0 ceph-mon[76537]: osdmap e185: 3 total, 3 up, 3 in
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.127 248514 DEBUG nova.virt.libvirt.vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.127 248514 DEBUG nova.network.os_vif_util [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.128 248514 DEBUG nova.network.os_vif_util [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:c0:80,bridge_name='br-int',has_traffic_filtering=True,id=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10aa2df4-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.128 248514 DEBUG os_vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:c0:80,bridge_name='br-int',has_traffic_filtering=True,id=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10aa2df4-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.130 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10aa2df4-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.131 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.133 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.138 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.141 248514 INFO os_vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:c0:80,bridge_name='br-int',has_traffic_filtering=True,id=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10aa2df4-a7')
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.142 248514 DEBUG nova.virt.libvirt.vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.142 248514 DEBUG nova.network.os_vif_util [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.143 248514 DEBUG nova.network.os_vif_util [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.143 248514 DEBUG os_vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.144 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcea82a7d-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.145 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.149 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.151 248514 INFO os_vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9')
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.152 248514 DEBUG nova.virt.libvirt.vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.153 248514 DEBUG nova.network.os_vif_util [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.153 248514 DEBUG nova.network.os_vif_util [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.154 248514 DEBUG os_vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.156 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.156 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3861ef01-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.159 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.162 248514 INFO os_vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74')
Dec 13 08:22:39 compute-0 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[278670]: [NOTICE]   (278674) : haproxy version is 2.8.14-c23fe91
Dec 13 08:22:39 compute-0 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[278670]: [NOTICE]   (278674) : path to executable is /usr/sbin/haproxy
Dec 13 08:22:39 compute-0 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[278670]: [WARNING]  (278674) : Exiting Master process...
Dec 13 08:22:39 compute-0 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[278670]: [ALERT]    (278674) : Current worker (278676) exited with code 143 (Terminated)
Dec 13 08:22:39 compute-0 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[278670]: [WARNING]  (278674) : All workers exited. Exiting... (0)
Dec 13 08:22:39 compute-0 systemd[1]: libpod-c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7.scope: Deactivated successfully.
Dec 13 08:22:39 compute-0 podman[282374]: 2025-12-13 08:22:39.264407137 +0000 UTC m=+0.051695864 container died c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 08:22:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7-userdata-shm.mount: Deactivated successfully.
Dec 13 08:22:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d74ef57e5fc97f32c1e7bb2787cfa72693667d60ba34de8c1be9c45989ad24ab-merged.mount: Deactivated successfully.
Dec 13 08:22:39 compute-0 podman[282374]: 2025-12-13 08:22:39.327499221 +0000 UTC m=+0.114787948 container cleanup c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:22:39 compute-0 systemd[1]: libpod-conmon-c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7.scope: Deactivated successfully.
Dec 13 08:22:39 compute-0 podman[282406]: 2025-12-13 08:22:39.402242552 +0000 UTC m=+0.047452230 container remove c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.410 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8274f557-5483-4499-be48-4bbcab19708b]: (4, ('Sat Dec 13 08:22:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 (c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7)\nc146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7\nSat Dec 13 08:22:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 (c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7)\nc146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.414 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[98729fe1-dd1b-4530-b739-031855ee8309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.415 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 kernel: tap1ca92864-30: left promiscuous mode
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.440 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b880eced-405b-4af3-b7d5-f975beb60934]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.459 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[49efc520-58b3-4a7f-beba-a101dfb981e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.460 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79580822-bc9c-4083-b0bf-22f36138267b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.481 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[614a8b0a-ee39-4f82-a80e-be66be9ef18a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665417, 'reachable_time': 43079, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282422, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.484 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:22:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.484 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[75d49722-2f07-4169-8fc9-9bf1e3bdbdaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d1ca92864\x2d3b70\x2d4794\x2d9db1\x2dfa08128cef92.mount: Deactivated successfully.
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.500 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.507 248514 INFO nova.virt.libvirt.driver [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Deleting instance files /var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da_del
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.508 248514 INFO nova.virt.libvirt.driver [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Deletion of /var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da_del complete
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.581 248514 INFO nova.compute.manager [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Took 0.73 seconds to destroy the instance on the hypervisor.
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.581 248514 DEBUG oslo.service.loopingcall [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.582 248514 DEBUG nova.compute.manager [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:22:39 compute-0 nova_compute[248510]: 2025-12-13 08:22:39.582 248514 DEBUG nova.network.neutron [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:22:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:22:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:22:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:22:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:22:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:22:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:22:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:22:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Dec 13 08:22:40 compute-0 ceph-mon[76537]: pgmap v1720: 321 pgs: 321 active+clean; 121 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 25 KiB/s wr, 164 op/s
Dec 13 08:22:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Dec 13 08:22:40 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Dec 13 08:22:40 compute-0 nova_compute[248510]: 2025-12-13 08:22:40.280 248514 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port 3861ef01-74c8-4321-b36e-79090caaf6dc could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Dec 13 08:22:40 compute-0 nova_compute[248510]: 2025-12-13 08:22:40.280 248514 DEBUG nova.network.neutron [-] Unable to show port 3861ef01-74c8-4321-b36e-79090caaf6dc as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666
Dec 13 08:22:40 compute-0 nova_compute[248510]: 2025-12-13 08:22:40.819 248514 DEBUG nova.compute.manager [req-d53d3697-49b8-498b-b1d3-8d96dfc040fd req-2e56f19b-3b9a-4c2a-931b-5990e439a45d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-unplugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:40 compute-0 nova_compute[248510]: 2025-12-13 08:22:40.820 248514 DEBUG oslo_concurrency.lockutils [req-d53d3697-49b8-498b-b1d3-8d96dfc040fd req-2e56f19b-3b9a-4c2a-931b-5990e439a45d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:40 compute-0 nova_compute[248510]: 2025-12-13 08:22:40.820 248514 DEBUG oslo_concurrency.lockutils [req-d53d3697-49b8-498b-b1d3-8d96dfc040fd req-2e56f19b-3b9a-4c2a-931b-5990e439a45d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:40 compute-0 nova_compute[248510]: 2025-12-13 08:22:40.820 248514 DEBUG oslo_concurrency.lockutils [req-d53d3697-49b8-498b-b1d3-8d96dfc040fd req-2e56f19b-3b9a-4c2a-931b-5990e439a45d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:40 compute-0 nova_compute[248510]: 2025-12-13 08:22:40.820 248514 DEBUG nova.compute.manager [req-d53d3697-49b8-498b-b1d3-8d96dfc040fd req-2e56f19b-3b9a-4c2a-931b-5990e439a45d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-unplugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:40 compute-0 nova_compute[248510]: 2025-12-13 08:22:40.820 248514 DEBUG nova.compute.manager [req-d53d3697-49b8-498b-b1d3-8d96dfc040fd req-2e56f19b-3b9a-4c2a-931b-5990e439a45d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-unplugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:22:40 compute-0 podman[282425]: 2025-12-13 08:22:40.997790662 +0000 UTC m=+0.080574525 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 08:22:41 compute-0 podman[282424]: 2025-12-13 08:22:41.001533794 +0000 UTC m=+0.087812324 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd)
Dec 13 08:22:41 compute-0 podman[282423]: 2025-12-13 08:22:41.033186664 +0000 UTC m=+0.122766595 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 13 08:22:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 121 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 21 KiB/s wr, 93 op/s
Dec 13 08:22:41 compute-0 ceph-mon[76537]: osdmap e186: 3 total, 3 up, 3 in
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.257 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-deleted-3861ef01-74c8-4321-b36e-79090caaf6dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.258 248514 INFO nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Neutron deleted interface 3861ef01-74c8-4321-b36e-79090caaf6dc; detaching it from the instance and deleting it from the info cache
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.258 248514 DEBUG nova.network.neutron [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.293 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Detach interface failed, port_id=3861ef01-74c8-4321-b36e-79090caaf6dc, reason: Instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.294 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-unplugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.294 248514 DEBUG oslo_concurrency.lockutils [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.294 248514 DEBUG oslo_concurrency.lockutils [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.295 248514 DEBUG oslo_concurrency.lockutils [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.295 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-unplugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.295 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-unplugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.295 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.295 248514 DEBUG oslo_concurrency.lockutils [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.296 248514 DEBUG oslo_concurrency.lockutils [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.296 248514 DEBUG oslo_concurrency.lockutils [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.296 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.296 248514 WARNING nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 for instance with vm_state active and task_state deleting.
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.542 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.543 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.562 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.639 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.640 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.650 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.650 248514 INFO nova.compute.claims [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:22:41 compute-0 nova_compute[248510]: 2025-12-13 08:22:41.783 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.142 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Dec 13 08:22:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Dec 13 08:22:42 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.170 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.170 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:22:42 compute-0 ceph-mon[76537]: pgmap v1722: 321 pgs: 321 active+clean; 121 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 21 KiB/s wr, 93 op/s
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.170 248514 DEBUG oslo_concurrency.lockutils [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.171 248514 DEBUG nova.network.neutron [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.172 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.183028) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614162183256, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2187, "num_deletes": 255, "total_data_size": 3594986, "memory_usage": 3661608, "flush_reason": "Manual Compaction"}
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614162210747, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3518133, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30633, "largest_seqno": 32819, "table_properties": {"data_size": 3507892, "index_size": 6607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20816, "raw_average_key_size": 20, "raw_value_size": 3487649, "raw_average_value_size": 3466, "num_data_blocks": 289, "num_entries": 1006, "num_filter_entries": 1006, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765613960, "oldest_key_time": 1765613960, "file_creation_time": 1765614162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 27769 microseconds, and 8940 cpu microseconds.
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.210801) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3518133 bytes OK
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.210826) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.215205) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.215240) EVENT_LOG_v1 {"time_micros": 1765614162215231, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.215264) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3585746, prev total WAL file size 3585746, number of live WAL files 2.
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.216462) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3435KB)], [68(7540KB)]
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614162216638, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11239185, "oldest_snapshot_seqno": -1}
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.276 248514 DEBUG nova.network.neutron [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5760 keys, 9528601 bytes, temperature: kUnknown
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614162298788, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9528601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9488955, "index_size": 24176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 144996, "raw_average_key_size": 25, "raw_value_size": 9384187, "raw_average_value_size": 1629, "num_data_blocks": 988, "num_entries": 5760, "num_filter_entries": 5760, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.299036) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9528601 bytes
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.300812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.7 rd, 115.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 7.4 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6284, records dropped: 524 output_compression: NoCompression
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.300828) EVENT_LOG_v1 {"time_micros": 1765614162300820, "job": 38, "event": "compaction_finished", "compaction_time_micros": 82210, "compaction_time_cpu_micros": 24658, "output_level": 6, "num_output_files": 1, "total_output_size": 9528601, "num_input_records": 6284, "num_output_records": 5760, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614162301398, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614162302451, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.216294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.302513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.302517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.302519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.302520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:22:42 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.302523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.311 248514 INFO nova.compute.manager [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Took 2.73 seconds to deallocate network for instance.
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.374 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:22:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80217683' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.430 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.437 248514 DEBUG nova.compute.provider_tree [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.459 248514 DEBUG nova.scheduler.client.report [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.487 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.488 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.491 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.549 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.549 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.573 248514 INFO nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.608 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.612 248514 DEBUG oslo_concurrency.processutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.709 248514 INFO nova.network.neutron [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Port 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.710 248514 INFO nova.network.neutron [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Port 9516b135-3bb5-4da4-942f-d044cad93bd4 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.710 248514 INFO nova.network.neutron [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Port cea82a7d-e92d-4ac6-ba47-854ec9905fd2 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.710 248514 INFO nova.network.neutron [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Port 3861ef01-74c8-4321-b36e-79090caaf6dc from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.710 248514 DEBUG nova.network.neutron [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.728 248514 DEBUG oslo_concurrency.lockutils [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.752 248514 DEBUG oslo_concurrency.lockutils [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-9516b135-3bb5-4da4-942f-d044cad93bd4" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 7.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.756 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.758 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.758 248514 INFO nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Creating image(s)
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.779 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.796 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.818 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.822 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.862 248514 DEBUG nova.policy [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e8aa7edd2151436caa0fd25f361298fd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2495263e4f944deda2647b578d06bb21', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.867 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.909 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.909 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.910 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.910 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.931 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:42 compute-0 nova_compute[248510]: 2025-12-13 08:22:42.935 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 96 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 28 KiB/s wr, 189 op/s
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.107 248514 DEBUG nova.compute.manager [req-64a53367-4a04-47a4-a976-22df26f64560 req-b53a059d-0330-4069-9f0f-d3649a0c1692 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.108 248514 DEBUG oslo_concurrency.lockutils [req-64a53367-4a04-47a4-a976-22df26f64560 req-b53a059d-0330-4069-9f0f-d3649a0c1692 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.109 248514 DEBUG oslo_concurrency.lockutils [req-64a53367-4a04-47a4-a976-22df26f64560 req-b53a059d-0330-4069-9f0f-d3649a0c1692 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.109 248514 DEBUG oslo_concurrency.lockutils [req-64a53367-4a04-47a4-a976-22df26f64560 req-b53a059d-0330-4069-9f0f-d3649a0c1692 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.109 248514 DEBUG nova.compute.manager [req-64a53367-4a04-47a4-a976-22df26f64560 req-b53a059d-0330-4069-9f0f-d3649a0c1692 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.109 248514 WARNING nova.compute.manager [req-64a53367-4a04-47a4-a976-22df26f64560 req-b53a059d-0330-4069-9f0f-d3649a0c1692 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 for instance with vm_state deleted and task_state None.
Dec 13 08:22:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:22:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580774684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:43 compute-0 ceph-mon[76537]: osdmap e187: 3 total, 3 up, 3 in
Dec 13 08:22:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/80217683' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.220 248514 DEBUG oslo_concurrency.processutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.227 248514 DEBUG nova.compute.provider_tree [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.244 248514 DEBUG nova.scheduler.client.report [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.258 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.288 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.322 248514 INFO nova.scheduler.client.report [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Deleted allocations for instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.330 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] resizing rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.414 248514 DEBUG nova.objects.instance [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lazy-loading 'migration_context' on Instance uuid 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.422 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.435 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.435 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Ensure instance console log exists: /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.436 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.436 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.437 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.584 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Successfully created port: 4dd4aba8-8ce4-4b2e-92b4-879959570e8d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.664 248514 DEBUG nova.compute.manager [req-8c980d3a-54f6-4202-87a2-d206b955f545 req-01f08685-5a14-454d-88d7-3b9c81bf9d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-deleted-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.665 248514 DEBUG nova.compute.manager [req-8c980d3a-54f6-4202-87a2-d206b955f545 req-01f08685-5a14-454d-88d7-3b9c81bf9d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-deleted-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:22:43 compute-0 nova_compute[248510]: 2025-12-13 08:22:43.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:22:44 compute-0 nova_compute[248510]: 2025-12-13 08:22:44.160 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:44 compute-0 ceph-mon[76537]: pgmap v1724: 321 pgs: 321 active+clean; 96 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 28 KiB/s wr, 189 op/s
Dec 13 08:22:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3580774684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:44 compute-0 nova_compute[248510]: 2025-12-13 08:22:44.368 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Successfully created port: ee08056b-cf18-46d7-9fea-542c2ec040ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:22:44 compute-0 nova_compute[248510]: 2025-12-13 08:22:44.501 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:44 compute-0 nova_compute[248510]: 2025-12-13 08:22:44.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:22:44 compute-0 nova_compute[248510]: 2025-12-13 08:22:44.844 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:44 compute-0 nova_compute[248510]: 2025-12-13 08:22:44.845 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:44 compute-0 nova_compute[248510]: 2025-12-13 08:22:44.845 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:44 compute-0 nova_compute[248510]: 2025-12-13 08:22:44.845 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:22:44 compute-0 nova_compute[248510]: 2025-12-13 08:22:44.845 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:22:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Dec 13 08:22:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Dec 13 08:22:45 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Dec 13 08:22:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 77 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 3.0 MiB/s wr, 179 op/s
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.171 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.362 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Successfully updated port: 4dd4aba8-8ce4-4b2e-92b4-879959570e8d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:22:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:22:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2605465771' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.449 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.626 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.628 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4367MB free_disk=59.959420564584434GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.628 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.628 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.703 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.703 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.703 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.750 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.836 248514 DEBUG nova.compute.manager [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-changed-4dd4aba8-8ce4-4b2e-92b4-879959570e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.837 248514 DEBUG nova.compute.manager [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Refreshing instance network info cache due to event network-changed-4dd4aba8-8ce4-4b2e-92b4-879959570e8d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.838 248514 DEBUG oslo_concurrency.lockutils [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.838 248514 DEBUG oslo_concurrency.lockutils [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:45 compute-0 nova_compute[248510]: 2025-12-13 08:22:45.839 248514 DEBUG nova.network.neutron [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Refreshing network info cache for port 4dd4aba8-8ce4-4b2e-92b4-879959570e8d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:22:46 compute-0 ceph-mon[76537]: osdmap e188: 3 total, 3 up, 3 in
Dec 13 08:22:46 compute-0 ceph-mon[76537]: pgmap v1726: 321 pgs: 321 active+clean; 77 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 3.0 MiB/s wr, 179 op/s
Dec 13 08:22:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2605465771' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:46 compute-0 nova_compute[248510]: 2025-12-13 08:22:46.227 248514 DEBUG nova.network.neutron [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:22:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:22:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/65429836' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:46 compute-0 nova_compute[248510]: 2025-12-13 08:22:46.353 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:46 compute-0 nova_compute[248510]: 2025-12-13 08:22:46.361 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:22:46 compute-0 nova_compute[248510]: 2025-12-13 08:22:46.383 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:22:46 compute-0 nova_compute[248510]: 2025-12-13 08:22:46.419 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:22:46 compute-0 nova_compute[248510]: 2025-12-13 08:22:46.419 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:47 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/65429836' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 77 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 2.6 MiB/s wr, 155 op/s
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.062 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614152.0614407, 9b6188af-75f0-4213-89c2-bd3eb72960b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.063 248514 INFO nova.compute.manager [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] VM Stopped (Lifecycle Event)
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.096 248514 DEBUG nova.compute.manager [None req-c5efbff1-d9bd-4341-a027-23553c6d38c5 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.294 248514 DEBUG nova.network.neutron [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.354 248514 DEBUG oslo_concurrency.lockutils [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.415 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Successfully updated port: ee08056b-cf18-46d7-9fea-542c2ec040ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.419 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.435 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.436 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquired lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.436 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.570 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614152.5687995, 1c17e7b7-7062-48d2-a30f-b387929244d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.570 248514 INFO nova.compute.manager [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] VM Stopped (Lifecycle Event)
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.591 248514 DEBUG nova.compute.manager [None req-06ad8612-b386-4c35-bfdc-1f80e2249245 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.903 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.998 248514 DEBUG nova.compute.manager [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-changed-ee08056b-cf18-46d7-9fea-542c2ec040ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.998 248514 DEBUG nova.compute.manager [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Refreshing instance network info cache due to event network-changed-ee08056b-cf18-46d7-9fea-542c2ec040ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:22:47 compute-0 nova_compute[248510]: 2025-12-13 08:22:47.998 248514 DEBUG oslo_concurrency.lockutils [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:48 compute-0 ceph-mon[76537]: pgmap v1727: 321 pgs: 321 active+clean; 77 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 2.6 MiB/s wr, 155 op/s
Dec 13 08:22:48 compute-0 sudo[282738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:22:48 compute-0 sudo[282738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:22:48 compute-0 sudo[282738]: pam_unix(sudo:session): session closed for user root
Dec 13 08:22:48 compute-0 sudo[282763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:22:48 compute-0 sudo[282763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:22:48 compute-0 sudo[282763]: pam_unix(sudo:session): session closed for user root
Dec 13 08:22:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:22:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:22:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:22:48 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:22:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:22:48 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:22:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:22:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:22:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:22:48 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:22:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:22:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:22:48 compute-0 sudo[282819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:22:48 compute-0 sudo[282819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:22:48 compute-0 sudo[282819]: pam_unix(sudo:session): session closed for user root
Dec 13 08:22:49 compute-0 sudo[282844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:22:49 compute-0 sudo[282844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:22:49 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:22:49 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:22:49 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:22:49 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:22:49 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:22:49 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:22:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 88 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 2.7 MiB/s wr, 151 op/s
Dec 13 08:22:49 compute-0 nova_compute[248510]: 2025-12-13 08:22:49.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:49 compute-0 podman[282880]: 2025-12-13 08:22:49.398399705 +0000 UTC m=+0.055650371 container create 92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:22:49 compute-0 systemd[1]: Started libpod-conmon-92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738.scope.
Dec 13 08:22:49 compute-0 podman[282880]: 2025-12-13 08:22:49.367671839 +0000 UTC m=+0.024922525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:22:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:22:49 compute-0 podman[282880]: 2025-12-13 08:22:49.494638786 +0000 UTC m=+0.151889512 container init 92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:22:49 compute-0 podman[282880]: 2025-12-13 08:22:49.504474418 +0000 UTC m=+0.161725114 container start 92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 08:22:49 compute-0 nova_compute[248510]: 2025-12-13 08:22:49.504 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:49 compute-0 podman[282880]: 2025-12-13 08:22:49.509560243 +0000 UTC m=+0.166810949 container attach 92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:22:49 compute-0 systemd[1]: libpod-92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738.scope: Deactivated successfully.
Dec 13 08:22:49 compute-0 flamboyant_kirch[282896]: 167 167
Dec 13 08:22:49 compute-0 podman[282880]: 2025-12-13 08:22:49.51430195 +0000 UTC m=+0.171552636 container died 92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kirch, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:22:49 compute-0 conmon[282896]: conmon 92729bc230c83be32b6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738.scope/container/memory.events
Dec 13 08:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ee4c0a75fdd675b94646a8807186fb7de064d80e12e5b8a1c606ce84d9f9cc9-merged.mount: Deactivated successfully.
Dec 13 08:22:49 compute-0 podman[282880]: 2025-12-13 08:22:49.563850571 +0000 UTC m=+0.221101237 container remove 92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kirch, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:22:49 compute-0 systemd[1]: libpod-conmon-92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738.scope: Deactivated successfully.
Dec 13 08:22:49 compute-0 podman[282921]: 2025-12-13 08:22:49.746546791 +0000 UTC m=+0.049995653 container create f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:22:49 compute-0 nova_compute[248510]: 2025-12-13 08:22:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:22:49 compute-0 systemd[1]: Started libpod-conmon-f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c.scope.
Dec 13 08:22:49 compute-0 podman[282921]: 2025-12-13 08:22:49.725780799 +0000 UTC m=+0.029229691 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:22:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e519fc0fc1f17c30c8a8c735ee4c09faa7773bc35526d3c86c9cfdf8795c07f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e519fc0fc1f17c30c8a8c735ee4c09faa7773bc35526d3c86c9cfdf8795c07f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e519fc0fc1f17c30c8a8c735ee4c09faa7773bc35526d3c86c9cfdf8795c07f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e519fc0fc1f17c30c8a8c735ee4c09faa7773bc35526d3c86c9cfdf8795c07f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e519fc0fc1f17c30c8a8c735ee4c09faa7773bc35526d3c86c9cfdf8795c07f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:49 compute-0 podman[282921]: 2025-12-13 08:22:49.851207658 +0000 UTC m=+0.154656540 container init f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 08:22:49 compute-0 podman[282921]: 2025-12-13 08:22:49.859887082 +0000 UTC m=+0.163335954 container start f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 08:22:49 compute-0 podman[282921]: 2025-12-13 08:22:49.863950732 +0000 UTC m=+0.167399754 container attach f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 08:22:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:22:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Dec 13 08:22:49 compute-0 nova_compute[248510]: 2025-12-13 08:22:49.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Dec 13 08:22:50 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Dec 13 08:22:50 compute-0 ceph-mon[76537]: pgmap v1728: 321 pgs: 321 active+clean; 88 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 2.7 MiB/s wr, 151 op/s
Dec 13 08:22:50 compute-0 ceph-mon[76537]: osdmap e189: 3 total, 3 up, 3 in
Dec 13 08:22:50 compute-0 ecstatic_mahavira[282939]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:22:50 compute-0 ecstatic_mahavira[282939]: --> All data devices are unavailable
Dec 13 08:22:50 compute-0 systemd[1]: libpod-f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c.scope: Deactivated successfully.
Dec 13 08:22:50 compute-0 podman[282921]: 2025-12-13 08:22:50.440717819 +0000 UTC m=+0.744166681 container died f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 08:22:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e519fc0fc1f17c30c8a8c735ee4c09faa7773bc35526d3c86c9cfdf8795c07f4-merged.mount: Deactivated successfully.
Dec 13 08:22:50 compute-0 podman[282921]: 2025-12-13 08:22:50.502171683 +0000 UTC m=+0.805620545 container remove f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:22:50 compute-0 systemd[1]: libpod-conmon-f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c.scope: Deactivated successfully.
Dec 13 08:22:50 compute-0 sudo[282844]: pam_unix(sudo:session): session closed for user root
Dec 13 08:22:50 compute-0 nova_compute[248510]: 2025-12-13 08:22:50.594 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614155.5921252, 07413df5-0bb8-42c2-95ff-13458d598139 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:50 compute-0 nova_compute[248510]: 2025-12-13 08:22:50.595 248514 INFO nova.compute.manager [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] VM Stopped (Lifecycle Event)
Dec 13 08:22:50 compute-0 nova_compute[248510]: 2025-12-13 08:22:50.622 248514 DEBUG nova.compute.manager [None req-70d8d6a6-aa7e-4ac6-b234-12704c71be6a - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:50 compute-0 sudo[282971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:22:50 compute-0 sudo[282971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:22:50 compute-0 sudo[282971]: pam_unix(sudo:session): session closed for user root
Dec 13 08:22:50 compute-0 sudo[282996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:22:50 compute-0 sudo[282996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:22:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 88 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.7 MiB/s wr, 80 op/s
Dec 13 08:22:51 compute-0 podman[283032]: 2025-12-13 08:22:51.055714477 +0000 UTC m=+0.045887981 container create b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:22:51 compute-0 systemd[1]: Started libpod-conmon-b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0.scope.
Dec 13 08:22:51 compute-0 podman[283032]: 2025-12-13 08:22:51.036576525 +0000 UTC m=+0.026750049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:22:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:22:51 compute-0 podman[283032]: 2025-12-13 08:22:51.156435108 +0000 UTC m=+0.146608702 container init b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:22:51 compute-0 podman[283032]: 2025-12-13 08:22:51.165273875 +0000 UTC m=+0.155447389 container start b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 08:22:51 compute-0 podman[283032]: 2025-12-13 08:22:51.169203072 +0000 UTC m=+0.159376746 container attach b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:22:51 compute-0 nervous_stonebraker[283048]: 167 167
Dec 13 08:22:51 compute-0 systemd[1]: libpod-b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0.scope: Deactivated successfully.
Dec 13 08:22:51 compute-0 podman[283032]: 2025-12-13 08:22:51.172672298 +0000 UTC m=+0.162845802 container died b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 13 08:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-76b7b6e53f7535157188657df7670bf6c60bda50e49ca8149c449154f812f04a-merged.mount: Deactivated successfully.
Dec 13 08:22:51 compute-0 podman[283032]: 2025-12-13 08:22:51.217927452 +0000 UTC m=+0.208100956 container remove b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:22:51 compute-0 systemd[1]: libpod-conmon-b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0.scope: Deactivated successfully.
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.334 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Updating instance_info_cache with network_info: [{"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.357 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Releasing lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.357 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Instance network_info: |[{"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.357 248514 DEBUG oslo_concurrency.lockutils [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.358 248514 DEBUG nova.network.neutron [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Refreshing network info cache for port ee08056b-cf18-46d7-9fea-542c2ec040ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.361 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Start _get_guest_xml network_info=[{"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.368 248514 WARNING nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.373 248514 DEBUG nova.virt.libvirt.host [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.374 248514 DEBUG nova.virt.libvirt.host [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.378 248514 DEBUG nova.virt.libvirt.host [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.378 248514 DEBUG nova.virt.libvirt.host [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.379 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.379 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.379 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.379 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.379 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.380 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.380 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.380 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.380 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.380 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.380 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.381 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.384 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:51 compute-0 podman[283072]: 2025-12-13 08:22:51.430882598 +0000 UTC m=+0.044421336 container create 0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_darwin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:22:51 compute-0 systemd[1]: Started libpod-conmon-0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383.scope.
Dec 13 08:22:51 compute-0 podman[283072]: 2025-12-13 08:22:51.411315116 +0000 UTC m=+0.024853894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:22:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0e980e54cfa8571cc9d0a34c25617f246d9407ce08c809530a4134380b8ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0e980e54cfa8571cc9d0a34c25617f246d9407ce08c809530a4134380b8ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0e980e54cfa8571cc9d0a34c25617f246d9407ce08c809530a4134380b8ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0e980e54cfa8571cc9d0a34c25617f246d9407ce08c809530a4134380b8ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:51 compute-0 podman[283072]: 2025-12-13 08:22:51.537239837 +0000 UTC m=+0.150778595 container init 0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 08:22:51 compute-0 podman[283072]: 2025-12-13 08:22:51.546896545 +0000 UTC m=+0.160435283 container start 0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:22:51 compute-0 podman[283072]: 2025-12-13 08:22:51.552448672 +0000 UTC m=+0.165987530 container attach 0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_darwin, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:22:51 compute-0 confident_darwin[283089]: {
Dec 13 08:22:51 compute-0 confident_darwin[283089]:     "0": [
Dec 13 08:22:51 compute-0 confident_darwin[283089]:         {
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "devices": [
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "/dev/loop3"
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             ],
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_name": "ceph_lv0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_size": "21470642176",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "name": "ceph_lv0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "tags": {
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.cluster_name": "ceph",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.crush_device_class": "",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.encrypted": "0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.objectstore": "bluestore",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.osd_id": "0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.type": "block",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.vdo": "0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.with_tpm": "0"
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             },
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "type": "block",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "vg_name": "ceph_vg0"
Dec 13 08:22:51 compute-0 confident_darwin[283089]:         }
Dec 13 08:22:51 compute-0 confident_darwin[283089]:     ],
Dec 13 08:22:51 compute-0 confident_darwin[283089]:     "1": [
Dec 13 08:22:51 compute-0 confident_darwin[283089]:         {
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "devices": [
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "/dev/loop4"
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             ],
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_name": "ceph_lv1",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_size": "21470642176",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "name": "ceph_lv1",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "tags": {
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.cluster_name": "ceph",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.crush_device_class": "",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.encrypted": "0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.objectstore": "bluestore",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.osd_id": "1",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.type": "block",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.vdo": "0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.with_tpm": "0"
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             },
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "type": "block",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "vg_name": "ceph_vg1"
Dec 13 08:22:51 compute-0 confident_darwin[283089]:         }
Dec 13 08:22:51 compute-0 confident_darwin[283089]:     ],
Dec 13 08:22:51 compute-0 confident_darwin[283089]:     "2": [
Dec 13 08:22:51 compute-0 confident_darwin[283089]:         {
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "devices": [
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "/dev/loop5"
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             ],
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_name": "ceph_lv2",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_size": "21470642176",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "name": "ceph_lv2",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "tags": {
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.cluster_name": "ceph",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.crush_device_class": "",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.encrypted": "0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.objectstore": "bluestore",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.osd_id": "2",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.type": "block",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.vdo": "0",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:                 "ceph.with_tpm": "0"
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             },
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "type": "block",
Dec 13 08:22:51 compute-0 confident_darwin[283089]:             "vg_name": "ceph_vg2"
Dec 13 08:22:51 compute-0 confident_darwin[283089]:         }
Dec 13 08:22:51 compute-0 confident_darwin[283089]:     ]
Dec 13 08:22:51 compute-0 confident_darwin[283089]: }
Dec 13 08:22:51 compute-0 systemd[1]: libpod-0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383.scope: Deactivated successfully.
Dec 13 08:22:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:22:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437399676' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:51 compute-0 podman[283120]: 2025-12-13 08:22:51.928004732 +0000 UTC m=+0.033017394 container died 0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.945 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfa0e980e54cfa8571cc9d0a34c25617f246d9407ce08c809530a4134380b8ca-merged.mount: Deactivated successfully.
Dec 13 08:22:51 compute-0 podman[283120]: 2025-12-13 08:22:51.972328844 +0000 UTC m=+0.077341496 container remove 0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.973 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:51 compute-0 nova_compute[248510]: 2025-12-13 08:22:51.978 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:51 compute-0 systemd[1]: libpod-conmon-0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383.scope: Deactivated successfully.
Dec 13 08:22:52 compute-0 sudo[282996]: pam_unix(sudo:session): session closed for user root
Dec 13 08:22:52 compute-0 ceph-mon[76537]: pgmap v1730: 321 pgs: 321 active+clean; 88 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.7 MiB/s wr, 80 op/s
Dec 13 08:22:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3437399676' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.073 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.073 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.090 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:22:52 compute-0 sudo[283155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:22:52 compute-0 sudo[283155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:22:52 compute-0 sudo[283155]: pam_unix(sudo:session): session closed for user root
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.180 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.180 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:52 compute-0 sudo[283199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:22:52 compute-0 sudo[283199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.189 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.189 248514 INFO nova.compute.claims [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.325 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:52 compute-0 podman[283237]: 2025-12-13 08:22:52.498228117 +0000 UTC m=+0.051732996 container create dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:22:52 compute-0 systemd[1]: Started libpod-conmon-dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35.scope.
Dec 13 08:22:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:22:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/843865063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:22:52 compute-0 podman[283237]: 2025-12-13 08:22:52.476911331 +0000 UTC m=+0.030416240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.571 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.574 248514 DEBUG nova.virt.libvirt.vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1320257136',display_name='tempest-ServersTestMultiNic-server-1320257136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1320257136',id=29,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-dy8jmgm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:42Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=7299d5b2-6ea4-47fc-b16b-fcb6dd741e38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.574 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.575 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.576 248514 DEBUG nova.virt.libvirt.vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1320257136',display_name='tempest-ServersTestMultiNic-server-1320257136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1320257136',id=29,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-dy8jmgm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:42Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=7299d5b2-6ea4-47fc-b16b-fcb6dd741e38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.577 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.578 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.579 248514 DEBUG nova.objects.instance [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:52 compute-0 podman[283237]: 2025-12-13 08:22:52.587109126 +0000 UTC m=+0.140614035 container init dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:22:52 compute-0 podman[283237]: 2025-12-13 08:22:52.594377535 +0000 UTC m=+0.147882424 container start dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.596 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:22:52 compute-0 nova_compute[248510]:   <uuid>7299d5b2-6ea4-47fc-b16b-fcb6dd741e38</uuid>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   <name>instance-0000001d</name>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersTestMultiNic-server-1320257136</nova:name>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:22:51</nova:creationTime>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <nova:user uuid="e8aa7edd2151436caa0fd25f361298fd">tempest-ServersTestMultiNic-1741413593-project-member</nova:user>
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <nova:project uuid="2495263e4f944deda2647b578d06bb21">tempest-ServersTestMultiNic-1741413593</nova:project>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <nova:port uuid="4dd4aba8-8ce4-4b2e-92b4-879959570e8d">
Dec 13 08:22:52 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.93" ipVersion="4"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <nova:port uuid="ee08056b-cf18-46d7-9fea-542c2ec040ab">
Dec 13 08:22:52 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.1.173" ipVersion="4"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <system>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <entry name="serial">7299d5b2-6ea4-47fc-b16b-fcb6dd741e38</entry>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <entry name="uuid">7299d5b2-6ea4-47fc-b16b-fcb6dd741e38</entry>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     </system>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   <os>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   </os>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   <features>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   </features>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk">
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk.config">
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:22:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:57:be:c0"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <target dev="tap4dd4aba8-8c"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:ca:98:f1"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <target dev="tapee08056b-cf"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/console.log" append="off"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <video>
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     </video>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:22:52 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:22:52 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:22:52 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:22:52 compute-0 nova_compute[248510]: </domain>
Dec 13 08:22:52 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.598 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Preparing to wait for external event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:22:52 compute-0 podman[283237]: 2025-12-13 08:22:52.598154868 +0000 UTC m=+0.151659757 container attach dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.598 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.598 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.598 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.599 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Preparing to wait for external event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.599 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.599 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.599 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.600 248514 DEBUG nova.virt.libvirt.vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1320257136',display_name='tempest-ServersTestMultiNic-server-1320257136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1320257136',id=29,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-dy8jmgm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:42Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=7299d5b2-6ea4-47fc-b16b-fcb6dd741e38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.600 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:52 compute-0 zen_jang[283272]: 167 167
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.601 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.601 248514 DEBUG os_vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:22:52 compute-0 systemd[1]: libpod-dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35.scope: Deactivated successfully.
Dec 13 08:22:52 compute-0 podman[283237]: 2025-12-13 08:22:52.602996937 +0000 UTC m=+0.156501826 container died dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jang, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.604 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.604 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.608 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.608 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dd4aba8-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.609 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4dd4aba8-8c, col_values=(('external_ids', {'iface-id': '4dd4aba8-8ce4-4b2e-92b4-879959570e8d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:be:c0', 'vm-uuid': '7299d5b2-6ea4-47fc-b16b-fcb6dd741e38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.631 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:52 compute-0 NetworkManager[50376]: <info>  [1765614172.6336] manager: (tap4dd4aba8-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.634 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.639 248514 INFO os_vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c')
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.640 248514 DEBUG nova.virt.libvirt.vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1320257136',display_name='tempest-ServersTestMultiNic-server-1320257136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1320257136',id=29,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-dy8jmgm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:42Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=7299d5b2-6ea4-47fc-b16b-fcb6dd741e38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.640 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.641 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.642 248514 DEBUG os_vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:22:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f358277d9612cb73ba2ef2899ea42aba3a80a5ffb9d21dd7a6538eaa8d9c697-merged.mount: Deactivated successfully.
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.643 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.644 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.647 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee08056b-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.647 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee08056b-cf, col_values=(('external_ids', {'iface-id': 'ee08056b-cf18-46d7-9fea-542c2ec040ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:98:f1', 'vm-uuid': '7299d5b2-6ea4-47fc-b16b-fcb6dd741e38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:52 compute-0 NetworkManager[50376]: <info>  [1765614172.6498] manager: (tapee08056b-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:52 compute-0 podman[283237]: 2025-12-13 08:22:52.654929046 +0000 UTC m=+0.208433935 container remove dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.656 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.657 248514 INFO os_vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf')
Dec 13 08:22:52 compute-0 systemd[1]: libpod-conmon-dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35.scope: Deactivated successfully.
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.709 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.710 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.710 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No VIF found with MAC fa:16:3e:57:be:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.710 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No VIF found with MAC fa:16:3e:ca:98:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.711 248514 INFO nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Using config drive
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.757 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.774 248514 DEBUG nova.network.neutron [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Updated VIF entry in instance network info cache for port ee08056b-cf18-46d7-9fea-542c2ec040ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.775 248514 DEBUG nova.network.neutron [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Updating instance_info_cache with network_info: [{"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.825 248514 DEBUG oslo_concurrency.lockutils [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:52 compute-0 podman[283320]: 2025-12-13 08:22:52.856215154 +0000 UTC m=+0.059779613 container create 4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 08:22:52 compute-0 systemd[1]: Started libpod-conmon-4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f.scope.
Dec 13 08:22:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:22:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19294390' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:52 compute-0 podman[283320]: 2025-12-13 08:22:52.830949602 +0000 UTC m=+0.034514091 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.939 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:52 compute-0 nova_compute[248510]: 2025-12-13 08:22:52.945 248514 DEBUG nova.compute.provider_tree [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:22:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada536aef97689b3183eff2bad3fa2e226de4bc56ce82586a56fc2ca3cbef986/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada536aef97689b3183eff2bad3fa2e226de4bc56ce82586a56fc2ca3cbef986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada536aef97689b3183eff2bad3fa2e226de4bc56ce82586a56fc2ca3cbef986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada536aef97689b3183eff2bad3fa2e226de4bc56ce82586a56fc2ca3cbef986/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:52 compute-0 podman[283320]: 2025-12-13 08:22:52.966603213 +0000 UTC m=+0.170167672 container init 4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:22:52 compute-0 podman[283320]: 2025-12-13 08:22:52.974880297 +0000 UTC m=+0.178444736 container start 4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:22:52 compute-0 podman[283320]: 2025-12-13 08:22:52.978844395 +0000 UTC m=+0.182408834 container attach 4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bhaskara, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:22:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 88 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 436 KiB/s wr, 17 op/s
Dec 13 08:22:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/843865063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/19294390' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.108 248514 DEBUG nova.scheduler.client.report [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.135 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.136 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.182 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.183 248514 DEBUG nova.network.neutron [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.207 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.208 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.210 248514 INFO nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.241 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.245 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.251 248514 INFO nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Creating config drive at /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/disk.config
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.258 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdsb0e8ew execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.350 248514 DEBUG nova.policy [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.388 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.389 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.398 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.399 248514 INFO nova.compute.claims [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.403 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.405 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.405 248514 INFO nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Creating image(s)
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.433 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.478 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.516 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.525 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.556 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdsb0e8ew" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.599 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.605 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/disk.config 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.638 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.639 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.640 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.640 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.660 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.664 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:53 compute-0 lvm[283534]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:22:53 compute-0 lvm[283530]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:22:53 compute-0 lvm[283530]: VG ceph_vg0 finished
Dec 13 08:22:53 compute-0 lvm[283534]: VG ceph_vg2 finished
Dec 13 08:22:53 compute-0 lvm[283533]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:22:53 compute-0 lvm[283533]: VG ceph_vg1 finished
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.786 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/disk.config 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.791 248514 INFO nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Deleting local config drive /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/disk.config because it was imported into RBD.
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.796 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:53 compute-0 admiring_bhaskara[283337]: {}
Dec 13 08:22:53 compute-0 systemd[1]: libpod-4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f.scope: Deactivated successfully.
Dec 13 08:22:53 compute-0 systemd[1]: libpod-4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f.scope: Consumed 1.504s CPU time.
Dec 13 08:22:53 compute-0 podman[283320]: 2025-12-13 08:22:53.850356981 +0000 UTC m=+1.053921420 container died 4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec 13 08:22:53 compute-0 kernel: tap4dd4aba8-8c: entered promiscuous mode
Dec 13 08:22:53 compute-0 systemd-udevd[283529]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:22:53 compute-0 NetworkManager[50376]: <info>  [1765614173.8946] manager: (tap4dd4aba8-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Dec 13 08:22:53 compute-0 ovn_controller[148476]: 2025-12-13T08:22:53Z|00227|binding|INFO|Claiming lport 4dd4aba8-8ce4-4b2e-92b4-879959570e8d for this chassis.
Dec 13 08:22:53 compute-0 ovn_controller[148476]: 2025-12-13T08:22:53Z|00228|binding|INFO|4dd4aba8-8ce4-4b2e-92b4-879959570e8d: Claiming fa:16:3e:57:be:c0 10.100.0.93
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.897 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:53 compute-0 NetworkManager[50376]: <info>  [1765614173.9097] device (tap4dd4aba8-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:22:53 compute-0 NetworkManager[50376]: <info>  [1765614173.9108] device (tap4dd4aba8-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:22:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.912 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:be:c0 10.100.0.93'], port_security=['fa:16:3e:57:be:c0 10.100.0.93'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.93/24', 'neutron:device_id': '7299d5b2-6ea4-47fc-b16b-fcb6dd741e38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9222573e-9ff1-438c-aa91-fa531c4ff949', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7b24070-0889-4199-b066-7f798c438fcf, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4dd4aba8-8ce4-4b2e-92b4-879959570e8d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.913 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4dd4aba8-8ce4-4b2e-92b4-879959570e8d in datapath 9222573e-9ff1-438c-aa91-fa531c4ff949 bound to our chassis
Dec 13 08:22:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.916 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9222573e-9ff1-438c-aa91-fa531c4ff949
Dec 13 08:22:53 compute-0 NetworkManager[50376]: <info>  [1765614173.9207] manager: (tapee08056b-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Dec 13 08:22:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ada536aef97689b3183eff2bad3fa2e226de4bc56ce82586a56fc2ca3cbef986-merged.mount: Deactivated successfully.
Dec 13 08:22:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.940 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e547e5c8-2cc6-4ffa-bacc-711337218eb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.943 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9222573e-91 in ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:22:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.948 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9222573e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:22:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.949 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0a53fb-50ad-4c1e-89d8-1f73ca91f804]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.950 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[41b885a0-26b5-4d51-8f53-ca29d583fd86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:53 compute-0 kernel: tapee08056b-cf: entered promiscuous mode
Dec 13 08:22:53 compute-0 NetworkManager[50376]: <info>  [1765614173.9522] device (tapee08056b-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:22:53 compute-0 NetworkManager[50376]: <info>  [1765614173.9540] device (tapee08056b-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:22:53 compute-0 ovn_controller[148476]: 2025-12-13T08:22:53Z|00229|binding|INFO|Claiming lport ee08056b-cf18-46d7-9fea-542c2ec040ab for this chassis.
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.954 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:53 compute-0 ovn_controller[148476]: 2025-12-13T08:22:53Z|00230|binding|INFO|ee08056b-cf18-46d7-9fea-542c2ec040ab: Claiming fa:16:3e:ca:98:f1 10.100.1.173
Dec 13 08:22:53 compute-0 ovn_controller[148476]: 2025-12-13T08:22:53Z|00231|binding|INFO|Setting lport 4dd4aba8-8ce4-4b2e-92b4-879959570e8d ovn-installed in OVS
Dec 13 08:22:53 compute-0 nova_compute[248510]: 2025-12-13 08:22:53.961 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:53 compute-0 systemd-machined[210538]: New machine qemu-34-instance-0000001d.
Dec 13 08:22:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.965 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c4209abe-b259-4da0-b68c-648b0fcff36a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:53 compute-0 ovn_controller[148476]: 2025-12-13T08:22:53Z|00232|binding|INFO|Setting lport 4dd4aba8-8ce4-4b2e-92b4-879959570e8d up in Southbound
Dec 13 08:22:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.968 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:98:f1 10.100.1.173'], port_security=['fa:16:3e:ca:98:f1 10.100.1.173'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.173/24', 'neutron:device_id': '7299d5b2-6ea4-47fc-b16b-fcb6dd741e38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77039756-6444-4fde-8e43-817fb80a54e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ef8942f-4ab0-4f06-afc6-d773195832b4, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ee08056b-cf18-46d7-9fea-542c2ec040ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:53 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000001d.
Dec 13 08:22:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.993 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[15ae8e7b-a759-474c-bb4f-f6c9ce547f37]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 podman[283320]: 2025-12-13 08:22:53.999482634 +0000 UTC m=+1.203047073 container remove 4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bhaskara, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 08:22:54 compute-0 ovn_controller[148476]: 2025-12-13T08:22:54Z|00233|binding|INFO|Setting lport ee08056b-cf18-46d7-9fea-542c2ec040ab ovn-installed in OVS
Dec 13 08:22:54 compute-0 ovn_controller[148476]: 2025-12-13T08:22:54Z|00234|binding|INFO|Setting lport ee08056b-cf18-46d7-9fea-542c2ec040ab up in Southbound
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.012 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.028 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6fa469-df9b-42b8-a829-c7c1046879fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 systemd[1]: libpod-conmon-4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f.scope: Deactivated successfully.
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.037 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:54 compute-0 NetworkManager[50376]: <info>  [1765614174.0446] manager: (tap9222573e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.043 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3465b458-79e5-4036-9e3e-3bdc24df143c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 sudo[283199]: pam_unix(sudo:session): session closed for user root
Dec 13 08:22:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:22:54 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:22:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:22:54 compute-0 ceph-mon[76537]: pgmap v1731: 321 pgs: 321 active+clean; 88 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 436 KiB/s wr, 17 op/s
Dec 13 08:22:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.087 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1306a2fc-5382-4365-96c3-a4bd6428b260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.092 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d52ab6f5-9c4e-45ac-90fd-eb310f3bb586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:22:54 compute-0 NetworkManager[50376]: <info>  [1765614174.1190] device (tap9222573e-90): carrier: link connected
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.124 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad2d661-09c2-476f-ae09-627877e95e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.126 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614159.105946, 2e309dc2-3cab-4ecf-8be7-eab85790a0da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.127 248514 INFO nova.compute.manager [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] VM Stopped (Lifecycle Event)
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.136 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] resizing rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.143 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[77b4a375-a3e4-4065-a1e0-997ccc94a5c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9222573e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:40:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675133, 'reachable_time': 22562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283685, 'error': None, 'target': 'ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 sudo[283663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.164 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d123c88-64f5-4103-86a6-bcdad5645022]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:4065'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675133, 'tstamp': 675133}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283703, 'error': None, 'target': 'ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 sudo[283663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:22:54 compute-0 sudo[283663]: pam_unix(sudo:session): session closed for user root
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.181 248514 DEBUG nova.compute.manager [None req-206b2a67-5f37-4f61-89e1-e08e15808743 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.187 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f47bdf96-8b02-42db-9e1e-2e505f38093f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9222573e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:40:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675133, 'reachable_time': 22562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283717, 'error': None, 'target': 'ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.216 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a334df17-d416-4dbc-8d85-1a4dcfd63420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.238 248514 DEBUG nova.compute.manager [req-b49e2b37-021e-46d9-9914-acc1eee6c2ea req-0b0192f0-2af6-41aa-94e3-2d5d84befe66 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.238 248514 DEBUG oslo_concurrency.lockutils [req-b49e2b37-021e-46d9-9914-acc1eee6c2ea req-0b0192f0-2af6-41aa-94e3-2d5d84befe66 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.239 248514 DEBUG oslo_concurrency.lockutils [req-b49e2b37-021e-46d9-9914-acc1eee6c2ea req-0b0192f0-2af6-41aa-94e3-2d5d84befe66 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.239 248514 DEBUG oslo_concurrency.lockutils [req-b49e2b37-021e-46d9-9914-acc1eee6c2ea req-0b0192f0-2af6-41aa-94e3-2d5d84befe66 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.240 248514 DEBUG nova.compute.manager [req-b49e2b37-021e-46d9-9914-acc1eee6c2ea req-0b0192f0-2af6-41aa-94e3-2d5d84befe66 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Processing event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.246 248514 DEBUG nova.objects.instance [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'migration_context' on Instance uuid 3b43a9c7-85e7-4558-bd2f-e4712882021e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.275 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.275 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Ensure instance console log exists: /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.276 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.276 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.276 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.286 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c38cb2f6-b4be-4c75-96c3-1ff5ccf5ee1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.287 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9222573e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.288 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.288 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9222573e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:54 compute-0 kernel: tap9222573e-90: entered promiscuous mode
Dec 13 08:22:54 compute-0 NetworkManager[50376]: <info>  [1765614174.2910] manager: (tap9222573e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.291 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.293 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9222573e-90, col_values=(('external_ids', {'iface-id': '45a4178e-17fc-4302-986d-b435f576d8a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:54 compute-0 ovn_controller[148476]: 2025-12-13T08:22:54Z|00235|binding|INFO|Releasing lport 45a4178e-17fc-4302-986d-b435f576d8a7 from this chassis (sb_readonly=0)
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.314 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9222573e-9ff1-438c-aa91-fa531c4ff949.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9222573e-9ff1-438c-aa91-fa531c4ff949.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.314 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.315 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a7a9b3-f2de-4ec6-bb4d-fe53391adb90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.316 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-9222573e-9ff1-438c-aa91-fa531c4ff949
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/9222573e-9ff1-438c-aa91-fa531c4ff949.pid.haproxy
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 9222573e-9ff1-438c-aa91-fa531c4ff949
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.317 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949', 'env', 'PROCESS_TAG=haproxy-9222573e-9ff1-438c-aa91-fa531c4ff949', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9222573e-9ff1-438c-aa91-fa531c4ff949.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.376 248514 DEBUG nova.compute.manager [req-a3aa708a-237f-4362-9d79-a0cd19d299da req-f1f3ac8a-0aa7-4ef7-b387-d5382af43c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.377 248514 DEBUG oslo_concurrency.lockutils [req-a3aa708a-237f-4362-9d79-a0cd19d299da req-f1f3ac8a-0aa7-4ef7-b387-d5382af43c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.378 248514 DEBUG oslo_concurrency.lockutils [req-a3aa708a-237f-4362-9d79-a0cd19d299da req-f1f3ac8a-0aa7-4ef7-b387-d5382af43c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.378 248514 DEBUG oslo_concurrency.lockutils [req-a3aa708a-237f-4362-9d79-a0cd19d299da req-f1f3ac8a-0aa7-4ef7-b387-d5382af43c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.378 248514 DEBUG nova.compute.manager [req-a3aa708a-237f-4362-9d79-a0cd19d299da req-f1f3ac8a-0aa7-4ef7-b387-d5382af43c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Processing event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.381 248514 DEBUG nova.network.neutron [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Successfully created port: bc4158d8-4963-4009-a434-0a0106941c9d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:22:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:22:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1194278152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.454 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.460 248514 DEBUG nova.compute.provider_tree [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.477 248514 DEBUG nova.scheduler.client.report [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.507 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.530 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.532 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.590 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.590 248514 DEBUG nova.network.neutron [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.625 248514 INFO nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.650 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:22:54 compute-0 podman[283766]: 2025-12-13 08:22:54.718537665 +0000 UTC m=+0.059078556 container create 73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.746 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.747 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.748 248514 INFO nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Creating image(s)
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.771 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:54 compute-0 systemd[1]: Started libpod-conmon-73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99.scope.
Dec 13 08:22:54 compute-0 podman[283766]: 2025-12-13 08:22:54.686882305 +0000 UTC m=+0.027423206 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.792 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:22:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea3c1eb258931fd7cf42baba3d7c54850af94ec03301e14b33f172aaaa8b9257/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:54 compute-0 podman[283766]: 2025-12-13 08:22:54.821979543 +0000 UTC m=+0.162520434 container init 73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.824 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.827 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:54 compute-0 podman[283766]: 2025-12-13 08:22:54.828595876 +0000 UTC m=+0.169136747 container start 73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 08:22:54 compute-0 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [NOTICE]   (283839) : New worker (283842) forked
Dec 13 08:22:54 compute-0 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [NOTICE]   (283839) : Loading success.
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.859 248514 DEBUG nova.policy [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '79d4b34b8bd3452cb5b8c0954166f397', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06fbab937d6444558229b2351632e711', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.905 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.906 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.907 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.908 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.921 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ee08056b-cf18-46d7-9fea-542c2ec040ab in datapath 77039756-6444-4fde-8e43-817fb80a54e0 unbound from our chassis
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.923 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77039756-6444-4fde-8e43-817fb80a54e0
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.938 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.939 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[303f9ece-b4da-4443-b5a5-010bef4e1864]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.940 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77039756-61 in ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.943 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77039756-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.943 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8bd19c-fa65-46bc-a0c8-dd79a8b67d64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 nova_compute[248510]: 2025-12-13 08:22:54.943 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.944 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3608d5a1-2555-4554-933c-e249e0f43812]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.957 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b8772324-eb47-4ff9-b111-441c8a539b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.977 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6524c45-5962-4472-a494-8f9d8919c184]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.024 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1fee4fea-e86c-49e6-b61e-e8c297a18312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:55 compute-0 NetworkManager[50376]: <info>  [1765614175.0322] manager: (tap77039756-60): new Veth device (/org/freedesktop/NetworkManager/Devices/108)
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.033 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9a322a40-016f-4d22-b8dc-2d6a544ff21d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 90 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 475 KiB/s wr, 18 op/s
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.065 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.067 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614175.0670154, 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.067 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] VM Started (Lifecycle Event)
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.073 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.080 248514 INFO nova.virt.libvirt.driver [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Instance spawned successfully.
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.080 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.080 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6e11c3e5-1d97-406b-ac3d-b57056490cbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.085 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[83614b17-b71f-4e8e-9d76-3d57e43f317d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:22:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1194278152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:22:55 compute-0 NetworkManager[50376]: <info>  [1765614175.1153] device (tap77039756-60): carrier: link connected
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.135 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b9433a08-ed78-4d4b-b3cd-8550138e44a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.163 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5540a6-949e-4eec-af92-5a256b5bee77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77039756-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:96:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675233, 'reachable_time': 31379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283943, 'error': None, 'target': 'ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.182 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5de2cef9-5698-4af9-88bc-59e4ecbb8ae1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:964f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675233, 'tstamp': 675233}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283944, 'error': None, 'target': 'ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.202 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09bd067d-43fe-4164-9170-341fb6a4cf92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77039756-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:96:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675233, 'reachable_time': 31379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283945, 'error': None, 'target': 'ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.244 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4d943d-0756-442c-af5d-a634694721bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.287 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.316 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cff934e5-1c2e-410d-8f83-30d7dba7fa8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.318 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77039756-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.318 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.319 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77039756-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:55 compute-0 NetworkManager[50376]: <info>  [1765614175.3214] manager: (tap77039756-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Dec 13 08:22:55 compute-0 kernel: tap77039756-60: entered promiscuous mode
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.323 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77039756-60, col_values=(('external_ids', {'iface-id': '1b2300c4-8e8d-459a-85ca-05cdd90f9306'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:55 compute-0 ovn_controller[148476]: 2025-12-13T08:22:55Z|00236|binding|INFO|Releasing lport 1b2300c4-8e8d-459a-85ca-05cdd90f9306 from this chassis (sb_readonly=0)
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.325 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77039756-6444-4fde-8e43-817fb80a54e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77039756-6444-4fde-8e43-817fb80a54e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.332 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0775640e-a921-48a6-a27d-d4617a8fbfc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.333 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-77039756-6444-4fde-8e43-817fb80a54e0
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/77039756-6444-4fde-8e43-817fb80a54e0.pid.haproxy
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 77039756-6444-4fde-8e43-817fb80a54e0
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.334 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0', 'env', 'PROCESS_TAG=haproxy-77039756-6444-4fde-8e43-817fb80a54e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77039756-6444-4fde-8e43-817fb80a54e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.363 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] resizing rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.401 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.404 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.404 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.405 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.409 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.410 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.410 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.411 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.411 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.411 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.418 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.455 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.455 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614175.0730948, 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.455 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] VM Paused (Lifecycle Event)
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.460 248514 DEBUG nova.objects.instance [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lazy-loading 'migration_context' on Instance uuid dc64fea4-e9a8-47e7-8a3a-d01897fc81de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.498 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.501 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.501 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Ensure instance console log exists: /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.501 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.502 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.502 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.504 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614175.0736177, 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.504 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] VM Resumed (Lifecycle Event)
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.533 248514 INFO nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Took 12.78 seconds to spawn the instance on the hypervisor.
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.534 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.534 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.542 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.587 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.623 248514 INFO nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Took 14.00 seconds to build instance.
Dec 13 08:22:55 compute-0 nova_compute[248510]: 2025-12-13 08:22:55.646 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:55 compute-0 podman[284050]: 2025-12-13 08:22:55.734430137 +0000 UTC m=+0.057527697 container create d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:22:55 compute-0 systemd[1]: Started libpod-conmon-d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7.scope.
Dec 13 08:22:55 compute-0 podman[284050]: 2025-12-13 08:22:55.702909561 +0000 UTC m=+0.026007141 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:22:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:22:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d34cd0ef81b6a8c0e2cc74104c29ceeace88d06c0627064001b1c7c88fec1e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:22:55 compute-0 podman[284050]: 2025-12-13 08:22:55.824608208 +0000 UTC m=+0.147705798 container init d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 08:22:55 compute-0 podman[284050]: 2025-12-13 08:22:55.831026146 +0000 UTC m=+0.154123706 container start d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 08:22:55 compute-0 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [NOTICE]   (284069) : New worker (284071) forked
Dec 13 08:22:55 compute-0 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [NOTICE]   (284069) : Loading success.
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.019 248514 DEBUG nova.network.neutron [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Successfully updated port: bc4158d8-4963-4009-a434-0a0106941c9d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:22:56 compute-0 ceph-mon[76537]: pgmap v1732: 321 pgs: 321 active+clean; 90 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 475 KiB/s wr, 18 op/s
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.126 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.126 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.127 248514 DEBUG nova.network.neutron [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.293 248514 DEBUG nova.compute.manager [req-2f0b3003-36a0-41fb-a23b-6e31fef1de11 req-ee25b5b2-f15a-45d3-9ed0-c9e1c03e1dbc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.294 248514 DEBUG oslo_concurrency.lockutils [req-2f0b3003-36a0-41fb-a23b-6e31fef1de11 req-ee25b5b2-f15a-45d3-9ed0-c9e1c03e1dbc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.294 248514 DEBUG oslo_concurrency.lockutils [req-2f0b3003-36a0-41fb-a23b-6e31fef1de11 req-ee25b5b2-f15a-45d3-9ed0-c9e1c03e1dbc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.294 248514 DEBUG oslo_concurrency.lockutils [req-2f0b3003-36a0-41fb-a23b-6e31fef1de11 req-ee25b5b2-f15a-45d3-9ed0-c9e1c03e1dbc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.295 248514 DEBUG nova.compute.manager [req-2f0b3003-36a0-41fb-a23b-6e31fef1de11 req-ee25b5b2-f15a-45d3-9ed0-c9e1c03e1dbc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] No waiting events found dispatching network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.295 248514 WARNING nova.compute.manager [req-2f0b3003-36a0-41fb-a23b-6e31fef1de11 req-ee25b5b2-f15a-45d3-9ed0-c9e1c03e1dbc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received unexpected event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d for instance with vm_state active and task_state None.
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.328 248514 DEBUG nova.network.neutron [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Successfully created port: 627622b8-ef54-4181-bd8d-e8e82650b143 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.333 248514 DEBUG nova.network.neutron [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.552 248514 DEBUG nova.compute.manager [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.552 248514 DEBUG oslo_concurrency.lockutils [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.553 248514 DEBUG oslo_concurrency.lockutils [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.553 248514 DEBUG oslo_concurrency.lockutils [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.553 248514 DEBUG nova.compute.manager [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] No waiting events found dispatching network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.554 248514 WARNING nova.compute.manager [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received unexpected event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab for instance with vm_state active and task_state None.
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.554 248514 DEBUG nova.compute.manager [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.554 248514 DEBUG nova.compute.manager [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing instance network info cache due to event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:22:56 compute-0 nova_compute[248510]: 2025-12-13 08:22:56.555 248514 DEBUG oslo_concurrency.lockutils [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 90 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 475 KiB/s wr, 18 op/s
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.899 248514 DEBUG nova.network.neutron [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.950 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.951 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Instance network_info: |[{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.953 248514 DEBUG nova.network.neutron [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Successfully updated port: 627622b8-ef54-4181-bd8d-e8e82650b143 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.954 248514 DEBUG oslo_concurrency.lockutils [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.955 248514 DEBUG nova.network.neutron [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.958 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Start _get_guest_xml network_info=[{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.964 248514 WARNING nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.972 248514 DEBUG nova.virt.libvirt.host [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.973 248514 DEBUG nova.virt.libvirt.host [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.976 248514 DEBUG nova.virt.libvirt.host [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.977 248514 DEBUG nova.virt.libvirt.host [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.977 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.977 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.978 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.978 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.978 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.979 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.979 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.979 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.979 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.980 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.980 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.980 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:22:57 compute-0 nova_compute[248510]: 2025-12-13 08:22:57.983 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.022 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.023 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquired lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.023 248514 DEBUG nova.network.neutron [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.332 248514 DEBUG nova.network.neutron [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:22:58 compute-0 ceph-mon[76537]: pgmap v1733: 321 pgs: 321 active+clean; 90 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 475 KiB/s wr, 18 op/s
Dec 13 08:22:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:22:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2012624721' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.590 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.616 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.621 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.794 248514 DEBUG nova.compute.manager [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.795 248514 DEBUG nova.compute.manager [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing instance network info cache due to event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.795 248514 DEBUG oslo_concurrency.lockutils [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.830 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.830 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.831 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.831 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.831 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.833 248514 INFO nova.compute.manager [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Terminating instance
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.834 248514 DEBUG nova.compute.manager [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:22:58 compute-0 kernel: tap4dd4aba8-8c (unregistering): left promiscuous mode
Dec 13 08:22:58 compute-0 NetworkManager[50376]: <info>  [1765614178.8727] device (tap4dd4aba8-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:22:58 compute-0 ovn_controller[148476]: 2025-12-13T08:22:58Z|00237|binding|INFO|Releasing lport 4dd4aba8-8ce4-4b2e-92b4-879959570e8d from this chassis (sb_readonly=0)
Dec 13 08:22:58 compute-0 ovn_controller[148476]: 2025-12-13T08:22:58Z|00238|binding|INFO|Setting lport 4dd4aba8-8ce4-4b2e-92b4-879959570e8d down in Southbound
Dec 13 08:22:58 compute-0 ovn_controller[148476]: 2025-12-13T08:22:58Z|00239|binding|INFO|Removing iface tap4dd4aba8-8c ovn-installed in OVS
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:58.896 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:be:c0 10.100.0.93'], port_security=['fa:16:3e:57:be:c0 10.100.0.93'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.93/24', 'neutron:device_id': '7299d5b2-6ea4-47fc-b16b-fcb6dd741e38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9222573e-9ff1-438c-aa91-fa531c4ff949', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7b24070-0889-4199-b066-7f798c438fcf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4dd4aba8-8ce4-4b2e-92b4-879959570e8d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:58.897 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4dd4aba8-8ce4-4b2e-92b4-879959570e8d in datapath 9222573e-9ff1-438c-aa91-fa531c4ff949 unbound from our chassis
Dec 13 08:22:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:58.898 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9222573e-9ff1-438c-aa91-fa531c4ff949, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:22:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:58.900 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a933d7cf-73b1-4662-a84d-7c093053a531]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:58.900 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949 namespace which is not needed anymore
Dec 13 08:22:58 compute-0 kernel: tapee08056b-cf (unregistering): left promiscuous mode
Dec 13 08:22:58 compute-0 NetworkManager[50376]: <info>  [1765614178.9096] device (tapee08056b-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.915 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:58 compute-0 ovn_controller[148476]: 2025-12-13T08:22:58Z|00240|binding|INFO|Releasing lport ee08056b-cf18-46d7-9fea-542c2ec040ab from this chassis (sb_readonly=0)
Dec 13 08:22:58 compute-0 ovn_controller[148476]: 2025-12-13T08:22:58Z|00241|binding|INFO|Setting lport ee08056b-cf18-46d7-9fea-542c2ec040ab down in Southbound
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:58 compute-0 ovn_controller[148476]: 2025-12-13T08:22:58Z|00242|binding|INFO|Removing iface tapee08056b-cf ovn-installed in OVS
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.920 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:58.926 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:98:f1 10.100.1.173'], port_security=['fa:16:3e:ca:98:f1 10.100.1.173'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.173/24', 'neutron:device_id': '7299d5b2-6ea4-47fc-b16b-fcb6dd741e38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77039756-6444-4fde-8e43-817fb80a54e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ef8942f-4ab0-4f06-afc6-d773195832b4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ee08056b-cf18-46d7-9fea-542c2ec040ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:22:58 compute-0 nova_compute[248510]: 2025-12-13 08:22:58.944 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:58 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Dec 13 08:22:58 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Consumed 4.842s CPU time.
Dec 13 08:22:58 compute-0 systemd-machined[210538]: Machine qemu-34-instance-0000001d terminated.
Dec 13 08:22:59 compute-0 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [NOTICE]   (283839) : haproxy version is 2.8.14-c23fe91
Dec 13 08:22:59 compute-0 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [NOTICE]   (283839) : path to executable is /usr/sbin/haproxy
Dec 13 08:22:59 compute-0 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [WARNING]  (283839) : Exiting Master process...
Dec 13 08:22:59 compute-0 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [ALERT]    (283839) : Current worker (283842) exited with code 143 (Terminated)
Dec 13 08:22:59 compute-0 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [WARNING]  (283839) : All workers exited. Exiting... (0)
Dec 13 08:22:59 compute-0 systemd[1]: libpod-73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99.scope: Deactivated successfully.
Dec 13 08:22:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 180 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 153 op/s
Dec 13 08:22:59 compute-0 podman[284169]: 2025-12-13 08:22:59.055639632 +0000 UTC m=+0.047581883 container died 73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.061 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 NetworkManager[50376]: <info>  [1765614179.0689] manager: (tapee08056b-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.086 248514 INFO nova.virt.libvirt.driver [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Instance destroyed successfully.
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.087 248514 DEBUG nova.objects.instance [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lazy-loading 'resources' on Instance uuid 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99-userdata-shm.mount: Deactivated successfully.
Dec 13 08:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea3c1eb258931fd7cf42baba3d7c54850af94ec03301e14b33f172aaaa8b9257-merged.mount: Deactivated successfully.
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.105 248514 DEBUG nova.virt.libvirt.vif [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1320257136',display_name='tempest-ServersTestMultiNic-server-1320257136',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1320257136',id=29,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:22:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-dy8jmgm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:22:55Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=7299d5b2-6ea4-47fc-b16b-fcb6dd741e38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.106 248514 DEBUG nova.network.os_vif_util [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.107 248514 DEBUG nova.network.os_vif_util [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.108 248514 DEBUG os_vif [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.111 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.111 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dd4aba8-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.119 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.125 248514 INFO os_vif [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c')
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.126 248514 DEBUG nova.virt.libvirt.vif [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1320257136',display_name='tempest-ServersTestMultiNic-server-1320257136',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1320257136',id=29,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:22:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-dy8jmgm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:22:55Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=7299d5b2-6ea4-47fc-b16b-fcb6dd741e38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.127 248514 DEBUG nova.network.os_vif_util [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.128 248514 DEBUG nova.network.os_vif_util [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.128 248514 DEBUG os_vif [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:22:59 compute-0 podman[284169]: 2025-12-13 08:22:59.128997859 +0000 UTC m=+0.120940110 container cleanup 73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.129 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.130 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee08056b-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.133 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.136 248514 INFO os_vif [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf')
Dec 13 08:22:59 compute-0 systemd[1]: libpod-conmon-73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99.scope: Deactivated successfully.
Dec 13 08:22:59 compute-0 podman[284217]: 2025-12-13 08:22:59.212902525 +0000 UTC m=+0.056474762 container remove 73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 08:22:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:22:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1434445151' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.221 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef782ef4-215d-4a08-9775-ae189bd01e73]: (4, ('Sat Dec 13 08:22:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949 (73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99)\n73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99\nSat Dec 13 08:22:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949 (73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99)\n73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.223 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf76aa8-ad71-44c1-b084-6bd31fef38e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.226 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9222573e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 kernel: tap9222573e-90: left promiscuous mode
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.289 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.291 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.292 248514 DEBUG nova.virt.libvirt.vif [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.293 248514 DEBUG nova.network.os_vif_util [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.292 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[df1592ed-9fbe-426e-b315-c267641cd28b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.293 248514 DEBUG nova.network.os_vif_util [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.295 248514 DEBUG nova.objects.instance [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b43a9c7-85e7-4558-bd2f-e4712882021e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.308 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe808b6-6aa7-4a5d-b315-ffe6259fd7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.310 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b2be95d6-d215-482d-99f0-89be56774d39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.312 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:22:59 compute-0 nova_compute[248510]:   <uuid>3b43a9c7-85e7-4558-bd2f-e4712882021e</uuid>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   <name>instance-0000001e</name>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <nova:name>tempest-tempest.common.compute-instance-2036693049</nova:name>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:22:57</nova:creationTime>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:22:59 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:22:59 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:22:59 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:22:59 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:22:59 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:22:59 compute-0 nova_compute[248510]:         <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:22:59 compute-0 nova_compute[248510]:         <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:22:59 compute-0 nova_compute[248510]:         <nova:port uuid="bc4158d8-4963-4009-a434-0a0106941c9d">
Dec 13 08:22:59 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <system>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <entry name="serial">3b43a9c7-85e7-4558-bd2f-e4712882021e</entry>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <entry name="uuid">3b43a9c7-85e7-4558-bd2f-e4712882021e</entry>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     </system>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   <os>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   </os>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   <features>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   </features>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3b43a9c7-85e7-4558-bd2f-e4712882021e_disk">
Dec 13 08:22:59 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:22:59 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config">
Dec 13 08:22:59 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       </source>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:22:59 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:37:48:0d"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <target dev="tapbc4158d8-49"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/console.log" append="off"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <video>
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     </video>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:22:59 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:22:59 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:22:59 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:22:59 compute-0 nova_compute[248510]: </domain>
Dec 13 08:22:59 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.313 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Preparing to wait for external event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.313 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.313 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.314 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.314 248514 DEBUG nova.virt.libvirt.vif [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.315 248514 DEBUG nova.network.os_vif_util [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.316 248514 DEBUG nova.network.os_vif_util [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.316 248514 DEBUG os_vif [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.317 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.317 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.318 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.322 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc4158d8-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.322 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc4158d8-49, col_values=(('external_ids', {'iface-id': 'bc4158d8-4963-4009-a434-0a0106941c9d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:48:0d', 'vm-uuid': '3b43a9c7-85e7-4558-bd2f-e4712882021e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.324 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 NetworkManager[50376]: <info>  [1765614179.3253] manager: (tapbc4158d8-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.332 248514 INFO os_vif [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49')
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.332 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc42fa5-16d9-461e-907f-13e2eb044a44]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675123, 'reachable_time': 15938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284250, 'error': None, 'target': 'ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d9222573e\x2d9ff1\x2d438c\x2daa91\x2dfa531c4ff949.mount: Deactivated successfully.
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.336 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.336 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4684caba-5002-43c1-8cdc-453b13abdaec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.338 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ee08056b-cf18-46d7-9fea-542c2ec040ab in datapath 77039756-6444-4fde-8e43-817fb80a54e0 unbound from our chassis
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.339 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77039756-6444-4fde-8e43-817fb80a54e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.341 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[897dfcba-7182-444e-9b9a-6669516a5123]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.341 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0 namespace which is not needed anymore
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.421 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.421 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.422 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:37:48:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.422 248514 INFO nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Using config drive
Dec 13 08:22:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2012624721' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1434445151' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.447 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.487 248514 INFO nova.virt.libvirt.driver [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Deleting instance files /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_del
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.488 248514 INFO nova.virt.libvirt.driver [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Deletion of /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_del complete
Dec 13 08:22:59 compute-0 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [NOTICE]   (284069) : haproxy version is 2.8.14-c23fe91
Dec 13 08:22:59 compute-0 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [NOTICE]   (284069) : path to executable is /usr/sbin/haproxy
Dec 13 08:22:59 compute-0 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [WARNING]  (284069) : Exiting Master process...
Dec 13 08:22:59 compute-0 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [WARNING]  (284069) : Exiting Master process...
Dec 13 08:22:59 compute-0 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [ALERT]    (284069) : Current worker (284071) exited with code 143 (Terminated)
Dec 13 08:22:59 compute-0 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [WARNING]  (284069) : All workers exited. Exiting... (0)
Dec 13 08:22:59 compute-0 systemd[1]: libpod-d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7.scope: Deactivated successfully.
Dec 13 08:22:59 compute-0 podman[284270]: 2025-12-13 08:22:59.502584 +0000 UTC m=+0.053813027 container died d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.509 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7-userdata-shm.mount: Deactivated successfully.
Dec 13 08:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d34cd0ef81b6a8c0e2cc74104c29ceeace88d06c0627064001b1c7c88fec1e1-merged.mount: Deactivated successfully.
Dec 13 08:22:59 compute-0 podman[284270]: 2025-12-13 08:22:59.549146596 +0000 UTC m=+0.100375623 container cleanup d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:22:59 compute-0 systemd[1]: libpod-conmon-d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7.scope: Deactivated successfully.
Dec 13 08:22:59 compute-0 podman[284317]: 2025-12-13 08:22:59.615238974 +0000 UTC m=+0.044616660 container remove d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.621 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9b339ec4-624d-4e82-b9f5-dee20628ce6f]: (4, ('Sat Dec 13 08:22:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0 (d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7)\nd408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7\nSat Dec 13 08:22:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0 (d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7)\nd408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.623 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbb82ec-a85a-4898-b3ca-7a60ad8527fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.624 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77039756-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:22:59 compute-0 kernel: tap77039756-60: left promiscuous mode
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.630 248514 DEBUG nova.compute.manager [req-aab2c34c-de6e-4db9-b861-aaa04216d003 req-cd254797-04fe-4089-a168-26e5655ff96f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-unplugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.630 248514 DEBUG oslo_concurrency.lockutils [req-aab2c34c-de6e-4db9-b861-aaa04216d003 req-cd254797-04fe-4089-a168-26e5655ff96f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.631 248514 DEBUG oslo_concurrency.lockutils [req-aab2c34c-de6e-4db9-b861-aaa04216d003 req-cd254797-04fe-4089-a168-26e5655ff96f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.631 248514 DEBUG oslo_concurrency.lockutils [req-aab2c34c-de6e-4db9-b861-aaa04216d003 req-cd254797-04fe-4089-a168-26e5655ff96f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.631 248514 DEBUG nova.compute.manager [req-aab2c34c-de6e-4db9-b861-aaa04216d003 req-cd254797-04fe-4089-a168-26e5655ff96f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] No waiting events found dispatching network-vif-unplugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.631 248514 DEBUG nova.compute.manager [req-aab2c34c-de6e-4db9-b861-aaa04216d003 req-cd254797-04fe-4089-a168-26e5655ff96f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-unplugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.631 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.649 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eec6cfb8-7313-459e-8da3-9aeec1af3e35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.662 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[76445a6a-bda5-410b-aaa4-2be59d9d0309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.664 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ced348be-1f03-4109-9846-cd50c7b60818]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c897ece-dfc6-4be2-bd1d-a3f58b72ee5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675223, 'reachable_time': 19379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284335, 'error': None, 'target': 'ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.690 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:22:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.690 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[bec26c5a-d6ee-44b1-8fc8-b58484fad400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.694 248514 INFO nova.compute.manager [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Took 0.86 seconds to destroy the instance on the hypervisor.
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.694 248514 DEBUG oslo.service.loopingcall [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.695 248514 DEBUG nova.compute.manager [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.695 248514 DEBUG nova.network.neutron [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.878 248514 DEBUG nova.network.neutron [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.900 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Releasing lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.901 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Instance network_info: |[{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.901 248514 DEBUG oslo_concurrency.lockutils [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.901 248514 DEBUG nova.network.neutron [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.904 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Start _get_guest_xml network_info=[{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.912 248514 WARNING nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.919 248514 DEBUG nova.virt.libvirt.host [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.919 248514 DEBUG nova.virt.libvirt.host [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.927 248514 DEBUG nova.virt.libvirt.host [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.927 248514 DEBUG nova.virt.libvirt.host [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.928 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.928 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.928 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.929 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.929 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.929 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.929 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.929 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.930 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.930 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.930 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.930 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:22:59 compute-0 nova_compute[248510]: 2025-12-13 08:22:59.933 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:23:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d77039756\x2d6444\x2d4fde\x2d8e43\x2d817fb80a54e0.mount: Deactivated successfully.
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.150 248514 INFO nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Creating config drive at /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/disk.config
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.157 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwyuyeklz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.217 248514 DEBUG nova.network.neutron [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updated VIF entry in instance network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.219 248514 DEBUG nova.network.neutron [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.242 248514 DEBUG oslo_concurrency.lockutils [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.301 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwyuyeklz" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.327 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.331 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/disk.config 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:00 compute-0 ceph-mon[76537]: pgmap v1734: 321 pgs: 321 active+clean; 180 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 153 op/s
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.483 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/disk.config 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.484 248514 INFO nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Deleting local config drive /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/disk.config because it was imported into RBD.
Dec 13 08:23:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:23:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1657982056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.520 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:00 compute-0 kernel: tapbc4158d8-49: entered promiscuous mode
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.547 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:00 compute-0 NetworkManager[50376]: <info>  [1765614180.5502] manager: (tapbc4158d8-49): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Dec 13 08:23:00 compute-0 systemd-udevd[284143]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:23:00 compute-0 ovn_controller[148476]: 2025-12-13T08:23:00Z|00243|binding|INFO|Claiming lport bc4158d8-4963-4009-a434-0a0106941c9d for this chassis.
Dec 13 08:23:00 compute-0 ovn_controller[148476]: 2025-12-13T08:23:00Z|00244|binding|INFO|bc4158d8-4963-4009-a434-0a0106941c9d: Claiming fa:16:3e:37:48:0d 10.100.0.6
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.558 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.558 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:48:0d 10.100.0.6'], port_security=['fa:16:3e:37:48:0d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3b43a9c7-85e7-4558-bd2f-e4712882021e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd758160-971f-4f0e-a4ca-a13304d3c491', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bc4158d8-4963-4009-a434-0a0106941c9d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.559 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bc4158d8-4963-4009-a434-0a0106941c9d in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.561 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:23:00 compute-0 NetworkManager[50376]: <info>  [1765614180.5643] device (tapbc4158d8-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:23:00 compute-0 NetworkManager[50376]: <info>  [1765614180.5649] device (tapbc4158d8-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:23:00 compute-0 ovn_controller[148476]: 2025-12-13T08:23:00Z|00245|binding|INFO|Setting lport bc4158d8-4963-4009-a434-0a0106941c9d ovn-installed in OVS
Dec 13 08:23:00 compute-0 ovn_controller[148476]: 2025-12-13T08:23:00Z|00246|binding|INFO|Setting lport bc4158d8-4963-4009-a434-0a0106941c9d up in Southbound
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.578 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39e290f9-ee72-4739-b13b-ccfc74f461d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.579 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ca92864-31 in ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.581 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ca92864-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.581 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4ef5a0-57bb-402f-aa65-6cc7c36f737e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.582 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd0dbf5-e0d5-4ed6-8de0-48c3990f6128]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 systemd-machined[210538]: New machine qemu-35-instance-0000001e.
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.596 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[42e3d90c-b223-40ac-b2db-152402c7b9a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000001e.
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.612 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d136a47-bd2f-41e1-a4d8-504c01813d26]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.673 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[db316b40-6884-4524-94ff-9c3bfde056c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.679 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[85d31762-c319-4e5d-9ec7-14489c8787ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 NetworkManager[50376]: <info>  [1765614180.6799] manager: (tap1ca92864-30): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.729 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0fc4d8-1e66-49c2-94b8-ca709d17ba3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.733 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[26d6a6a2-e09d-4a2f-888c-5cae840d3ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 NetworkManager[50376]: <info>  [1765614180.7627] device (tap1ca92864-30): carrier: link connected
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.772 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6836d4e4-0aa8-427f-92cc-875c7da376ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.795 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[453731f2-49cc-4f29-b22e-302a2af730f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 19285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284479, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.815 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac13849-be19-494b-8ce4-2490f743b118]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:e2af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675797, 'tstamp': 675797}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284480, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.839 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e77927ca-97e4-4af7-81b9-78028e373e82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 19285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284481, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.875 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4972be9f-ec83-4704-b4ac-94fc8da899b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.932 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e06c2d07-b1d5-4a5e-8ce9-1e75d7042a05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.933 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.934 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.934 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:00 compute-0 kernel: tap1ca92864-30: entered promiscuous mode
Dec 13 08:23:00 compute-0 NetworkManager[50376]: <info>  [1765614180.9371] manager: (tap1ca92864-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.940 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:00 compute-0 ovn_controller[148476]: 2025-12-13T08:23:00Z|00247|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 08:23:00 compute-0 nova_compute[248510]: 2025-12-13 08:23:00.960 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.961 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ca92864-3b70-4794-9db1-fa08128cef92.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ca92864-3b70-4794-9db1-fa08128cef92.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.962 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f68dfcb4-2050-4cdf-909c-b85a781e0096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.963 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/1ca92864-3b70-4794-9db1-fa08128cef92.pid.haproxy
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:23:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.965 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'env', 'PROCESS_TAG=haproxy-1ca92864-3b70-4794-9db1-fa08128cef92', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ca92864-3b70-4794-9db1-fa08128cef92.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:23:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 180 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Dec 13 08:23:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:23:01 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1846145538' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.141 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-unplugged-ee08056b-cf18-46d7-9fea-542c2ec040ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.142 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.142 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.143 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.143 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] No waiting events found dispatching network-vif-unplugged-ee08056b-cf18-46d7-9fea-542c2ec040ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.143 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-unplugged-ee08056b-cf18-46d7-9fea-542c2ec040ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.144 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.144 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.144 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.145 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.145 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] No waiting events found dispatching network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.145 248514 WARNING nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received unexpected event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab for instance with vm_state active and task_state deleting.
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.145 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.146 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.146 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.146 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.147 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Processing event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.160 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.161 248514 DEBUG nova.virt.libvirt.vif [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-153788010',display_name='tempest-FloatingIPsAssociationTestJSON-server-153788010',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-153788010',id=31,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06fbab937d6444558229b2351632e711',ramdisk_id='',reservation_id='r-m786w5ky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-609563086',owner_user_name='tempest-FloatingIPsAssociationTestJSON-609563086-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:54Z,user_data=None,user_id='79d4b34b8bd3452cb5b8c0954166f397',uuid=dc64fea4-e9a8-47e7-8a3a-d01897fc81de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.162 248514 DEBUG nova.network.os_vif_util [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converting VIF {"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.163 248514 DEBUG nova.network.os_vif_util [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.164 248514 DEBUG nova.objects.instance [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc64fea4-e9a8-47e7-8a3a-d01897fc81de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.194 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.195 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614181.1951368, 3b43a9c7-85e7-4558-bd2f-e4712882021e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.196 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] VM Started (Lifecycle Event)
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.202 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.206 248514 INFO nova.virt.libvirt.driver [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Instance spawned successfully.
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.207 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.263 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:23:01 compute-0 nova_compute[248510]:   <uuid>dc64fea4-e9a8-47e7-8a3a-d01897fc81de</uuid>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   <name>instance-0000001f</name>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-153788010</nova:name>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:22:59</nova:creationTime>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:23:01 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:23:01 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:23:01 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:23:01 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:23:01 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:23:01 compute-0 nova_compute[248510]:         <nova:user uuid="79d4b34b8bd3452cb5b8c0954166f397">tempest-FloatingIPsAssociationTestJSON-609563086-project-member</nova:user>
Dec 13 08:23:01 compute-0 nova_compute[248510]:         <nova:project uuid="06fbab937d6444558229b2351632e711">tempest-FloatingIPsAssociationTestJSON-609563086</nova:project>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:23:01 compute-0 nova_compute[248510]:         <nova:port uuid="627622b8-ef54-4181-bd8d-e8e82650b143">
Dec 13 08:23:01 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <system>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <entry name="serial">dc64fea4-e9a8-47e7-8a3a-d01897fc81de</entry>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <entry name="uuid">dc64fea4-e9a8-47e7-8a3a-d01897fc81de</entry>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     </system>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   <os>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   </os>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   <features>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   </features>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk">
Dec 13 08:23:01 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       </source>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:23:01 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk.config">
Dec 13 08:23:01 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       </source>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:23:01 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:4a:b2:83"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <target dev="tap627622b8-ef"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/console.log" append="off"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <video>
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     </video>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:23:01 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:23:01 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:23:01 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:23:01 compute-0 nova_compute[248510]: </domain>
Dec 13 08:23:01 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.264 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Preparing to wait for external event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.264 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.264 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.264 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.265 248514 DEBUG nova.virt.libvirt.vif [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-153788010',display_name='tempest-FloatingIPsAssociationTestJSON-server-153788010',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-153788010',id=31,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06fbab937d6444558229b2351632e711',ramdisk_id='',reservation_id='r-m786w5ky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-609563086',owner_user_name='tempest-FloatingIPsAssociationTestJSON-609563086-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:54Z,user_data=None,user_id='79d4b34b8bd3452cb5b8c0954166f397',uuid=dc64fea4-e9a8-47e7-8a3a-d01897fc81de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.265 248514 DEBUG nova.network.os_vif_util [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converting VIF {"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.266 248514 DEBUG nova.network.os_vif_util [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.266 248514 DEBUG os_vif [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.268 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.269 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.269 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.274 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap627622b8-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.274 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap627622b8-ef, col_values=(('external_ids', {'iface-id': '627622b8-ef54-4181-bd8d-e8e82650b143', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:b2:83', 'vm-uuid': 'dc64fea4-e9a8-47e7-8a3a-d01897fc81de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:01 compute-0 NetworkManager[50376]: <info>  [1765614181.2771] manager: (tap627622b8-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.280 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.282 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.284 248514 DEBUG nova.network.neutron [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.285 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.287 248514 INFO os_vif [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef')
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.289 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.289 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.289 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.290 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.290 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.290 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.327 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.327 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614181.195357, 3b43a9c7-85e7-4558-bd2f-e4712882021e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.327 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] VM Paused (Lifecycle Event)
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.335 248514 INFO nova.compute.manager [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Took 1.64 seconds to deallocate network for instance.
Dec 13 08:23:01 compute-0 podman[284555]: 2025-12-13 08:23:01.356047522 +0000 UTC m=+0.047312856 container create e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.369 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.375 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614181.2038863, 3b43a9c7-85e7-4558-bd2f-e4712882021e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.375 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] VM Resumed (Lifecycle Event)
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.378 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.378 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.379 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] No VIF found with MAC fa:16:3e:4a:b2:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.379 248514 INFO nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Using config drive
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.406 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:01 compute-0 systemd[1]: Started libpod-conmon-e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b.scope.
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.415 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.417 248514 INFO nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Took 8.01 seconds to spawn the instance on the hypervisor.
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.417 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.417 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.418 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:01 compute-0 podman[284555]: 2025-12-13 08:23:01.331466737 +0000 UTC m=+0.022732081 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.431 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:23:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4067aa9b3f2b0ef0dd9859d3a57acc4d914a6c704c9a2c14d2ad026817e68f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.455 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:23:01 compute-0 podman[284555]: 2025-12-13 08:23:01.460147836 +0000 UTC m=+0.151413190 container init e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 08:23:01 compute-0 podman[284555]: 2025-12-13 08:23:01.468484252 +0000 UTC m=+0.159749576 container start e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 13 08:23:01 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1657982056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:01 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1846145538' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:01 compute-0 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [NOTICE]   (284593) : New worker (284595) forked
Dec 13 08:23:01 compute-0 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [NOTICE]   (284593) : Loading success.
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.535 248514 DEBUG oslo_concurrency.processutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.653 248514 INFO nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Took 9.50 seconds to build instance.
Dec 13 08:23:01 compute-0 nova_compute[248510]: 2025-12-13 08:23:01.675 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.009 248514 DEBUG nova.compute.manager [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.009 248514 DEBUG oslo_concurrency.lockutils [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.010 248514 DEBUG oslo_concurrency.lockutils [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.010 248514 DEBUG oslo_concurrency.lockutils [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.010 248514 DEBUG nova.compute.manager [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] No waiting events found dispatching network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.010 248514 WARNING nova.compute.manager [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received unexpected event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d for instance with vm_state deleted and task_state None.
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.010 248514 DEBUG nova.compute.manager [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-deleted-ee08056b-cf18-46d7-9fea-542c2ec040ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.011 248514 DEBUG nova.compute.manager [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-deleted-4dd4aba8-8ce4-4b2e-92b4-879959570e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:23:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2364112568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.128 248514 DEBUG oslo_concurrency.processutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.137 248514 DEBUG nova.compute.provider_tree [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.154 248514 INFO nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Creating config drive at /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/disk.config
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.159 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4xc8mx3r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.195 248514 DEBUG nova.network.neutron [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updated VIF entry in instance network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.197 248514 DEBUG nova.network.neutron [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.199 248514 DEBUG nova.scheduler.client.report [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.223 248514 DEBUG oslo_concurrency.lockutils [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.226 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.260 248514 INFO nova.scheduler.client.report [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Deleted allocations for instance 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.300 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4xc8mx3r" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.326 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.331 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/disk.config dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.373 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:02 compute-0 ceph-mon[76537]: pgmap v1735: 321 pgs: 321 active+clean; 180 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Dec 13 08:23:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2364112568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.506 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/disk.config dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.508 248514 INFO nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Deleting local config drive /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/disk.config because it was imported into RBD.
Dec 13 08:23:02 compute-0 NetworkManager[50376]: <info>  [1765614182.5810] manager: (tap627622b8-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Dec 13 08:23:02 compute-0 kernel: tap627622b8-ef: entered promiscuous mode
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.587 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:02 compute-0 ovn_controller[148476]: 2025-12-13T08:23:02Z|00248|binding|INFO|Claiming lport 627622b8-ef54-4181-bd8d-e8e82650b143 for this chassis.
Dec 13 08:23:02 compute-0 ovn_controller[148476]: 2025-12-13T08:23:02Z|00249|binding|INFO|627622b8-ef54-4181-bd8d-e8e82650b143: Claiming fa:16:3e:4a:b2:83 10.100.0.9
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.606 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:b2:83 10.100.0.9'], port_security=['fa:16:3e:4a:b2:83 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dc64fea4-e9a8-47e7-8a3a-d01897fc81de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06fbab937d6444558229b2351632e711', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fa325bd2-c57a-49fb-8dd9-f45405c95b4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f47224ac-d05f-46db-ac07-cb476b38b044, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=627622b8-ef54-4181-bd8d-e8e82650b143) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.614 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 627622b8-ef54-4181-bd8d-e8e82650b143 in datapath 62193ff6-aaa1-401a-b1e0-512e67752a9e bound to our chassis
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.616 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.619 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62193ff6-aaa1-401a-b1e0-512e67752a9e
Dec 13 08:23:02 compute-0 systemd-machined[210538]: New machine qemu-36-instance-0000001f.
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.623 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:02 compute-0 ovn_controller[148476]: 2025-12-13T08:23:02Z|00250|binding|INFO|Setting lport 627622b8-ef54-4181-bd8d-e8e82650b143 ovn-installed in OVS
Dec 13 08:23:02 compute-0 ovn_controller[148476]: 2025-12-13T08:23:02Z|00251|binding|INFO|Setting lport 627622b8-ef54-4181-bd8d-e8e82650b143 up in Southbound
Dec 13 08:23:02 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-0000001f.
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.635 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b221c77-5686-4d3a-a5f6-7a32694d0920]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.637 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62193ff6-a1 in ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.639 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62193ff6-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.639 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eee16be2-4cab-4384-87e0-87b6ee9c35d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[218f68e0-01bd-4e31-a359-0761175874e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 systemd-udevd[284680]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.657 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1576e0f9-2d45-479b-a8c1-1c034388c88a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 NetworkManager[50376]: <info>  [1765614182.6711] device (tap627622b8-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:23:02 compute-0 NetworkManager[50376]: <info>  [1765614182.6728] device (tap627622b8-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a1e061-b7b6-4d25-b010-eaed35a1cadc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.715 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cc33832e-08b0-4f38-ad19-bcd87d5f4d50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 systemd-udevd[284684]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:23:02 compute-0 NetworkManager[50376]: <info>  [1765614182.7293] manager: (tap62193ff6-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/117)
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.728 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[47b61247-621f-4709-a239-ceff47eeb62e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.764 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d0dae41e-3f07-4e22-8cf5-1d28d4b24288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.769 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3ada246d-6432-41bb-953f-85f9992f695a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 NetworkManager[50376]: <info>  [1765614182.7999] device (tap62193ff6-a0): carrier: link connected
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.807 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f95928ae-3698-4133-a443-4767430dc5a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.829 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55797a0f-cc41-485b-98e5-025f497064cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62193ff6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:33:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676001, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284712, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.851 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e2bf2840-67c7-4c28-855b-5908fd1ea686]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:33dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676001, 'tstamp': 676001}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284713, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.872 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5186b55e-22d3-4381-acd6-aeac5b3fb5c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62193ff6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:33:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676001, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284714, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3629f4-1dc0-4cdc-84dc-f1f58b02260f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.988 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0905b9a-4cec-4f25-bc8a-74008c296405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.990 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62193ff6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.991 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.991 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62193ff6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:02 compute-0 kernel: tap62193ff6-a0: entered promiscuous mode
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.994 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:02 compute-0 NetworkManager[50376]: <info>  [1765614182.9977] manager: (tap62193ff6-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Dec 13 08:23:02 compute-0 nova_compute[248510]: 2025-12-13 08:23:02.999 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:03.005 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62193ff6-a0, col_values=(('external_ids', {'iface-id': '67d122d2-811d-4aa8-bdde-aafc5e939b2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:03 compute-0 ovn_controller[148476]: 2025-12-13T08:23:03Z|00252|binding|INFO|Releasing lport 67d122d2-811d-4aa8-bdde-aafc5e939b2b from this chassis (sb_readonly=0)
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.042 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:03.043 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62193ff6-aaa1-401a-b1e0-512e67752a9e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62193ff6-aaa1-401a-b1e0-512e67752a9e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:03.044 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[018906c3-8f88-40f3-a7d2-f6b900f9b778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:03.045 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-62193ff6-aaa1-401a-b1e0-512e67752a9e
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/62193ff6-aaa1-401a-b1e0-512e67752a9e.pid.haproxy
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 62193ff6-aaa1-401a-b1e0-512e67752a9e
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:23:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:03.046 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'env', 'PROCESS_TAG=haproxy-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62193ff6-aaa1-401a-b1e0-512e67752a9e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:23:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 170 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.070 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614183.0700932, dc64fea4-e9a8-47e7-8a3a-d01897fc81de => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.071 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] VM Started (Lifecycle Event)
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.096 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.100 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614183.0714447, dc64fea4-e9a8-47e7-8a3a-d01897fc81de => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.101 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] VM Paused (Lifecycle Event)
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.124 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.137 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.169 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.249 248514 DEBUG nova.compute.manager [req-0ac51951-6f21-4af6-93f1-3187b1db44a5 req-1aac2784-e529-4ab9-8098-4ca00a6ffbcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.250 248514 DEBUG oslo_concurrency.lockutils [req-0ac51951-6f21-4af6-93f1-3187b1db44a5 req-1aac2784-e529-4ab9-8098-4ca00a6ffbcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.250 248514 DEBUG oslo_concurrency.lockutils [req-0ac51951-6f21-4af6-93f1-3187b1db44a5 req-1aac2784-e529-4ab9-8098-4ca00a6ffbcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.250 248514 DEBUG oslo_concurrency.lockutils [req-0ac51951-6f21-4af6-93f1-3187b1db44a5 req-1aac2784-e529-4ab9-8098-4ca00a6ffbcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.250 248514 DEBUG nova.compute.manager [req-0ac51951-6f21-4af6-93f1-3187b1db44a5 req-1aac2784-e529-4ab9-8098-4ca00a6ffbcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:23:03 compute-0 nova_compute[248510]: 2025-12-13 08:23:03.251 248514 WARNING nova.compute.manager [req-0ac51951-6f21-4af6-93f1-3187b1db44a5 req-1aac2784-e529-4ab9-8098-4ca00a6ffbcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received unexpected event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d for instance with vm_state active and task_state None.
Dec 13 08:23:03 compute-0 podman[284788]: 2025-12-13 08:23:03.466022522 +0000 UTC m=+0.057681942 container create 70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:23:03 compute-0 systemd[1]: Started libpod-conmon-70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08.scope.
Dec 13 08:23:03 compute-0 podman[284788]: 2025-12-13 08:23:03.433146582 +0000 UTC m=+0.024806002 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:23:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58e74a911db8950501b6e1ba22b29e7f74e9f24881f0a836706699e6a74d9c3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:03 compute-0 podman[284788]: 2025-12-13 08:23:03.574835522 +0000 UTC m=+0.166494962 container init 70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 08:23:03 compute-0 podman[284788]: 2025-12-13 08:23:03.580950493 +0000 UTC m=+0.172609913 container start 70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 08:23:03 compute-0 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [NOTICE]   (284807) : New worker (284809) forked
Dec 13 08:23:03 compute-0 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [NOTICE]   (284807) : Loading success.
Dec 13 08:23:04 compute-0 ceph-mon[76537]: pgmap v1736: 321 pgs: 321 active+clean; 170 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Dec 13 08:23:04 compute-0 nova_compute[248510]: 2025-12-13 08:23:04.547 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:23:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 235 op/s
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.443 248514 DEBUG nova.compute.manager [req-7b5f5d30-bb9e-47b7-aa0e-5d0520368238 req-51727e1e-d3cb-4c9a-a6e5-eb7257565502 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.443 248514 DEBUG oslo_concurrency.lockutils [req-7b5f5d30-bb9e-47b7-aa0e-5d0520368238 req-51727e1e-d3cb-4c9a-a6e5-eb7257565502 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.444 248514 DEBUG oslo_concurrency.lockutils [req-7b5f5d30-bb9e-47b7-aa0e-5d0520368238 req-51727e1e-d3cb-4c9a-a6e5-eb7257565502 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.444 248514 DEBUG oslo_concurrency.lockutils [req-7b5f5d30-bb9e-47b7-aa0e-5d0520368238 req-51727e1e-d3cb-4c9a-a6e5-eb7257565502 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.444 248514 DEBUG nova.compute.manager [req-7b5f5d30-bb9e-47b7-aa0e-5d0520368238 req-51727e1e-d3cb-4c9a-a6e5-eb7257565502 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Processing event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.445 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.454 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614185.4534385, dc64fea4-e9a8-47e7-8a3a-d01897fc81de => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.454 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] VM Resumed (Lifecycle Event)
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.457 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.460 248514 INFO nova.virt.libvirt.driver [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Instance spawned successfully.
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.461 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.489 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.497 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.503 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.504 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.504 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.505 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.505 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.506 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.537 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.577 248514 INFO nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Took 10.83 seconds to spawn the instance on the hypervisor.
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.578 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.642 248514 INFO nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Took 12.28 seconds to build instance.
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.668 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:05 compute-0 nova_compute[248510]: 2025-12-13 08:23:05.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:05.858 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:23:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:05.859 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:23:06 compute-0 nova_compute[248510]: 2025-12-13 08:23:06.288 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:06 compute-0 ovn_controller[148476]: 2025-12-13T08:23:06Z|00253|binding|INFO|Releasing lport 67d122d2-811d-4aa8-bdde-aafc5e939b2b from this chassis (sb_readonly=0)
Dec 13 08:23:06 compute-0 ovn_controller[148476]: 2025-12-13T08:23:06Z|00254|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 08:23:06 compute-0 ceph-mon[76537]: pgmap v1737: 321 pgs: 321 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 235 op/s
Dec 13 08:23:06 compute-0 nova_compute[248510]: 2025-12-13 08:23:06.497 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:06 compute-0 ovn_controller[148476]: 2025-12-13T08:23:06Z|00255|binding|INFO|Releasing lport 67d122d2-811d-4aa8-bdde-aafc5e939b2b from this chassis (sb_readonly=0)
Dec 13 08:23:06 compute-0 ovn_controller[148476]: 2025-12-13T08:23:06Z|00256|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 08:23:06 compute-0 nova_compute[248510]: 2025-12-13 08:23:06.659 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:06 compute-0 nova_compute[248510]: 2025-12-13 08:23:06.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:06 compute-0 NetworkManager[50376]: <info>  [1765614186.7191] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Dec 13 08:23:06 compute-0 NetworkManager[50376]: <info>  [1765614186.7197] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Dec 13 08:23:06 compute-0 nova_compute[248510]: 2025-12-13 08:23:06.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:06 compute-0 ovn_controller[148476]: 2025-12-13T08:23:06Z|00257|binding|INFO|Releasing lport 67d122d2-811d-4aa8-bdde-aafc5e939b2b from this chassis (sb_readonly=0)
Dec 13 08:23:06 compute-0 ovn_controller[148476]: 2025-12-13T08:23:06Z|00258|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 08:23:06 compute-0 nova_compute[248510]: 2025-12-13 08:23:06.830 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 231 op/s
Dec 13 08:23:08 compute-0 ceph-mon[76537]: pgmap v1738: 321 pgs: 321 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 231 op/s
Dec 13 08:23:08 compute-0 nova_compute[248510]: 2025-12-13 08:23:08.523 248514 DEBUG nova.compute.manager [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:08 compute-0 nova_compute[248510]: 2025-12-13 08:23:08.524 248514 DEBUG oslo_concurrency.lockutils [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:08 compute-0 nova_compute[248510]: 2025-12-13 08:23:08.524 248514 DEBUG oslo_concurrency.lockutils [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:08 compute-0 nova_compute[248510]: 2025-12-13 08:23:08.524 248514 DEBUG oslo_concurrency.lockutils [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:08 compute-0 nova_compute[248510]: 2025-12-13 08:23:08.524 248514 DEBUG nova.compute.manager [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] No waiting events found dispatching network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:23:08 compute-0 nova_compute[248510]: 2025-12-13 08:23:08.524 248514 WARNING nova.compute.manager [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received unexpected event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 for instance with vm_state active and task_state None.
Dec 13 08:23:08 compute-0 nova_compute[248510]: 2025-12-13 08:23:08.525 248514 DEBUG nova.compute.manager [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:08 compute-0 nova_compute[248510]: 2025-12-13 08:23:08.525 248514 DEBUG nova.compute.manager [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing instance network info cache due to event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:23:08 compute-0 nova_compute[248510]: 2025-12-13 08:23:08.525 248514 DEBUG oslo_concurrency.lockutils [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:08 compute-0 nova_compute[248510]: 2025-12-13 08:23:08.525 248514 DEBUG oslo_concurrency.lockutils [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:08 compute-0 nova_compute[248510]: 2025-12-13 08:23:08.525 248514 DEBUG nova.network.neutron [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:23:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.5 MiB/s wr, 297 op/s
Dec 13 08:23:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:23:09
Dec 13 08:23:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:23:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:23:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'volumes', '.mgr', '.rgw.root', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'images']
Dec 13 08:23:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:23:09 compute-0 nova_compute[248510]: 2025-12-13 08:23:09.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:09.862 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:23:10 compute-0 ceph-mon[76537]: pgmap v1739: 321 pgs: 321 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.5 MiB/s wr, 297 op/s
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:23:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:23:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 173 op/s
Dec 13 08:23:11 compute-0 nova_compute[248510]: 2025-12-13 08:23:11.292 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:11 compute-0 nova_compute[248510]: 2025-12-13 08:23:11.314 248514 DEBUG nova.network.neutron [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updated VIF entry in instance network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:11 compute-0 nova_compute[248510]: 2025-12-13 08:23:11.315 248514 DEBUG nova.network.neutron [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:11 compute-0 nova_compute[248510]: 2025-12-13 08:23:11.352 248514 DEBUG oslo_concurrency.lockutils [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:11 compute-0 podman[284821]: 2025-12-13 08:23:11.981960585 +0000 UTC m=+0.061472548 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 08:23:11 compute-0 podman[284820]: 2025-12-13 08:23:11.990413283 +0000 UTC m=+0.073600397 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:23:12 compute-0 podman[284819]: 2025-12-13 08:23:12.023043358 +0000 UTC m=+0.107571105 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:23:12 compute-0 ceph-mon[76537]: pgmap v1740: 321 pgs: 321 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 173 op/s
Dec 13 08:23:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 137 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 425 KiB/s wr, 186 op/s
Dec 13 08:23:13 compute-0 ovn_controller[148476]: 2025-12-13T08:23:13Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:48:0d 10.100.0.6
Dec 13 08:23:13 compute-0 ovn_controller[148476]: 2025-12-13T08:23:13Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:48:0d 10.100.0.6
Dec 13 08:23:14 compute-0 nova_compute[248510]: 2025-12-13 08:23:14.084 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614179.0825927, 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:14 compute-0 nova_compute[248510]: 2025-12-13 08:23:14.085 248514 INFO nova.compute.manager [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] VM Stopped (Lifecycle Event)
Dec 13 08:23:14 compute-0 ceph-mon[76537]: pgmap v1741: 321 pgs: 321 active+clean; 137 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 425 KiB/s wr, 186 op/s
Dec 13 08:23:14 compute-0 nova_compute[248510]: 2025-12-13 08:23:14.553 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:23:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/887234214' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:23:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:23:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/887234214' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:23:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:23:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 156 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.0 MiB/s wr, 174 op/s
Dec 13 08:23:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/887234214' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:23:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/887234214' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:23:16 compute-0 nova_compute[248510]: 2025-12-13 08:23:16.115 248514 DEBUG nova.compute.manager [None req-13018656-a484-4227-ad64-2b2b7957f4a0 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:16 compute-0 nova_compute[248510]: 2025-12-13 08:23:16.296 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:16 compute-0 ceph-mon[76537]: pgmap v1742: 321 pgs: 321 active+clean; 156 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.0 MiB/s wr, 174 op/s
Dec 13 08:23:16 compute-0 nova_compute[248510]: 2025-12-13 08:23:16.713 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:16 compute-0 nova_compute[248510]: 2025-12-13 08:23:16.714 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:16 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 13 08:23:16 compute-0 nova_compute[248510]: 2025-12-13 08:23:16.732 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:23:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 156 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Dec 13 08:23:17 compute-0 nova_compute[248510]: 2025-12-13 08:23:17.099 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:17 compute-0 nova_compute[248510]: 2025-12-13 08:23:17.100 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:17 compute-0 nova_compute[248510]: 2025-12-13 08:23:17.109 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:23:17 compute-0 nova_compute[248510]: 2025-12-13 08:23:17.109 248514 INFO nova.compute.claims [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:23:17 compute-0 nova_compute[248510]: 2025-12-13 08:23:17.346 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:17 compute-0 ovn_controller[148476]: 2025-12-13T08:23:17Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:b2:83 10.100.0.9
Dec 13 08:23:17 compute-0 ovn_controller[148476]: 2025-12-13T08:23:17Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:b2:83 10.100.0.9
Dec 13 08:23:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:23:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3151020671' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:23:17 compute-0 nova_compute[248510]: 2025-12-13 08:23:17.962 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:17 compute-0 nova_compute[248510]: 2025-12-13 08:23:17.969 248514 DEBUG nova.compute.provider_tree [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:23:17 compute-0 nova_compute[248510]: 2025-12-13 08:23:17.988 248514 DEBUG nova.scheduler.client.report [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.018 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.019 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.100 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.101 248514 DEBUG nova.network.neutron [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.125 248514 INFO nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.152 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.303 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.306 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.306 248514 INFO nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Creating image(s)
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.329 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.354 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.375 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.378 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.430 248514 DEBUG nova.policy [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.448 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.449 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.449 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.450 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.477 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.482 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d503913e-a05e-47d4-9366-db4426b9aac1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:18 compute-0 ceph-mon[76537]: pgmap v1743: 321 pgs: 321 active+clean; 156 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Dec 13 08:23:18 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3151020671' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.786 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d503913e-a05e-47d4-9366-db4426b9aac1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.856 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] resizing rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:23:18 compute-0 nova_compute[248510]: 2025-12-13 08:23:18.947 248514 DEBUG nova.objects.instance [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'migration_context' on Instance uuid d503913e-a05e-47d4-9366-db4426b9aac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:23:19 compute-0 nova_compute[248510]: 2025-12-13 08:23:19.048 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:23:19 compute-0 nova_compute[248510]: 2025-12-13 08:23:19.049 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Ensure instance console log exists: /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:23:19 compute-0 nova_compute[248510]: 2025-12-13 08:23:19.050 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:19 compute-0 nova_compute[248510]: 2025-12-13 08:23:19.050 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:19 compute-0 nova_compute[248510]: 2025-12-13 08:23:19.051 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 229 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 192 op/s
Dec 13 08:23:19 compute-0 nova_compute[248510]: 2025-12-13 08:23:19.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:23:20 compute-0 nova_compute[248510]: 2025-12-13 08:23:20.048 248514 DEBUG nova.network.neutron [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Successfully created port: e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:23:20 compute-0 nova_compute[248510]: 2025-12-13 08:23:20.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:20 compute-0 ovn_controller[148476]: 2025-12-13T08:23:20Z|00259|binding|INFO|Releasing lport 67d122d2-811d-4aa8-bdde-aafc5e939b2b from this chassis (sb_readonly=0)
Dec 13 08:23:20 compute-0 ovn_controller[148476]: 2025-12-13T08:23:20Z|00260|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 08:23:20 compute-0 nova_compute[248510]: 2025-12-13 08:23:20.372 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:20 compute-0 ceph-mon[76537]: pgmap v1744: 321 pgs: 321 active+clean; 229 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 192 op/s
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017437611022420622 of space, bias 1.0, pg target 0.5231283306726187 quantized to 32 (current 32)
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006668854095107248 of space, bias 1.0, pg target 0.20006562285321744 quantized to 32 (current 32)
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.289488535883092e-07 of space, bias 4.0, pg target 0.000874738624305971 quantized to 16 (current 32)
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:23:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:23:21 compute-0 nova_compute[248510]: 2025-12-13 08:23:21.019 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:21 compute-0 nova_compute[248510]: 2025-12-13 08:23:21.020 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:21 compute-0 nova_compute[248510]: 2025-12-13 08:23:21.043 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:23:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 229 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 575 KiB/s rd, 5.4 MiB/s wr, 126 op/s
Dec 13 08:23:21 compute-0 nova_compute[248510]: 2025-12-13 08:23:21.106 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:21 compute-0 nova_compute[248510]: 2025-12-13 08:23:21.106 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:21 compute-0 nova_compute[248510]: 2025-12-13 08:23:21.115 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:23:21 compute-0 nova_compute[248510]: 2025-12-13 08:23:21.116 248514 INFO nova.compute.claims [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:23:21 compute-0 nova_compute[248510]: 2025-12-13 08:23:21.282 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:21 compute-0 nova_compute[248510]: 2025-12-13 08:23:21.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:23:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3337626705' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:23:21 compute-0 nova_compute[248510]: 2025-12-13 08:23:21.933 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.188 248514 DEBUG nova.compute.provider_tree [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.205 248514 DEBUG nova.scheduler.client.report [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.226 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.227 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.277 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.277 248514 DEBUG nova.network.neutron [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.302 248514 INFO nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.332 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.589 248514 DEBUG nova.network.neutron [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Successfully updated port: e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:23:22 compute-0 ceph-mon[76537]: pgmap v1745: 321 pgs: 321 active+clean; 229 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 575 KiB/s rd, 5.4 MiB/s wr, 126 op/s
Dec 13 08:23:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3337626705' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.678 248514 DEBUG nova.policy [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '79d4b34b8bd3452cb5b8c0954166f397', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06fbab937d6444558229b2351632e711', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.700 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.700 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.701 248514 DEBUG nova.network.neutron [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.709 248514 DEBUG nova.compute.manager [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.710 248514 DEBUG nova.compute.manager [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing instance network info cache due to event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.710 248514 DEBUG oslo_concurrency.lockutils [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.742 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.744 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.744 248514 INFO nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Creating image(s)
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.766 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.791 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.813 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.817 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.896 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.898 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.898 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.899 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.921 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.925 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4403714d-3521-4409-9c3b-59d655fc999d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:22 compute-0 nova_compute[248510]: 2025-12-13 08:23:22.958 248514 DEBUG nova.network.neutron [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:23:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 246 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 604 KiB/s rd, 6.0 MiB/s wr, 141 op/s
Dec 13 08:23:23 compute-0 nova_compute[248510]: 2025-12-13 08:23:23.231 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4403714d-3521-4409-9c3b-59d655fc999d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:23 compute-0 nova_compute[248510]: 2025-12-13 08:23:23.299 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] resizing rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:23:23 compute-0 nova_compute[248510]: 2025-12-13 08:23:23.386 248514 DEBUG nova.objects.instance [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lazy-loading 'migration_context' on Instance uuid 4403714d-3521-4409-9c3b-59d655fc999d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:23:23 compute-0 nova_compute[248510]: 2025-12-13 08:23:23.390 248514 DEBUG nova.network.neutron [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Successfully created port: d462a8a0-34ee-4682-ac5c-f7632b5ad39c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:23:23 compute-0 nova_compute[248510]: 2025-12-13 08:23:23.407 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:23:23 compute-0 nova_compute[248510]: 2025-12-13 08:23:23.408 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Ensure instance console log exists: /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:23:23 compute-0 nova_compute[248510]: 2025-12-13 08:23:23.408 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:23 compute-0 nova_compute[248510]: 2025-12-13 08:23:23.408 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:23 compute-0 nova_compute[248510]: 2025-12-13 08:23:23.408 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:24 compute-0 nova_compute[248510]: 2025-12-13 08:23:24.592 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:24 compute-0 ceph-mon[76537]: pgmap v1746: 321 pgs: 321 active+clean; 246 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 604 KiB/s rd, 6.0 MiB/s wr, 141 op/s
Dec 13 08:23:24 compute-0 nova_compute[248510]: 2025-12-13 08:23:24.946 248514 DEBUG nova.network.neutron [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:24 compute-0 nova_compute[248510]: 2025-12-13 08:23:24.982 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:24 compute-0 nova_compute[248510]: 2025-12-13 08:23:24.983 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Instance network_info: |[{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:23:24 compute-0 nova_compute[248510]: 2025-12-13 08:23:24.984 248514 DEBUG oslo_concurrency.lockutils [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:24 compute-0 nova_compute[248510]: 2025-12-13 08:23:24.984 248514 DEBUG nova.network.neutron [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:23:24 compute-0 nova_compute[248510]: 2025-12-13 08:23:24.987 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Start _get_guest_xml network_info=[{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:23:24 compute-0 nova_compute[248510]: 2025-12-13 08:23:24.992 248514 WARNING nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:23:24 compute-0 nova_compute[248510]: 2025-12-13 08:23:24.998 248514 DEBUG nova.virt.libvirt.host [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.000 248514 DEBUG nova.virt.libvirt.host [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:23:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.010 248514 DEBUG nova.virt.libvirt.host [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.012 248514 DEBUG nova.virt.libvirt.host [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.013 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.013 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.014 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.014 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.015 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.015 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.015 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.016 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.016 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.017 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.017 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.018 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.025 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 289 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 577 KiB/s rd, 7.4 MiB/s wr, 164 op/s
Dec 13 08:23:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:23:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1657012232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.610 248514 DEBUG nova.network.neutron [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Successfully updated port: d462a8a0-34ee-4682-ac5c-f7632b5ad39c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.613 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.644 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.649 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1657012232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.693 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.694 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquired lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.694 248514 DEBUG nova.network.neutron [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.786 248514 DEBUG nova.compute.manager [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-changed-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.786 248514 DEBUG nova.compute.manager [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Refreshing instance network info cache due to event network-changed-d462a8a0-34ee-4682-ac5c-f7632b5ad39c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:23:25 compute-0 nova_compute[248510]: 2025-12-13 08:23:25.787 248514 DEBUG oslo_concurrency.lockutils [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.011 248514 DEBUG nova.network.neutron [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:23:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:23:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1091847818' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.270 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.272 248514 DEBUG nova.virt.libvirt.vif [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:23:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.272 248514 DEBUG nova.network.os_vif_util [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.274 248514 DEBUG nova.network.os_vif_util [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.276 248514 DEBUG nova.objects.instance [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_devices' on Instance uuid d503913e-a05e-47d4-9366-db4426b9aac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.295 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:23:26 compute-0 nova_compute[248510]:   <uuid>d503913e-a05e-47d4-9366-db4426b9aac1</uuid>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   <name>instance-00000020</name>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <nova:name>tempest-tempest.common.compute-instance-152393601</nova:name>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:23:24</nova:creationTime>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:23:26 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:23:26 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:23:26 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:23:26 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:23:26 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:23:26 compute-0 nova_compute[248510]:         <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:23:26 compute-0 nova_compute[248510]:         <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:23:26 compute-0 nova_compute[248510]:         <nova:port uuid="e1eabc5e-9ed7-4b2e-ba64-11149b2f4043">
Dec 13 08:23:26 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <system>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <entry name="serial">d503913e-a05e-47d4-9366-db4426b9aac1</entry>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <entry name="uuid">d503913e-a05e-47d4-9366-db4426b9aac1</entry>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     </system>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   <os>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   </os>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   <features>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   </features>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/d503913e-a05e-47d4-9366-db4426b9aac1_disk">
Dec 13 08:23:26 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       </source>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:23:26 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/d503913e-a05e-47d4-9366-db4426b9aac1_disk.config">
Dec 13 08:23:26 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       </source>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:23:26 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:b3:27:85"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <target dev="tape1eabc5e-9e"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/console.log" append="off"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <video>
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     </video>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:23:26 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:23:26 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:23:26 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:23:26 compute-0 nova_compute[248510]: </domain>
Dec 13 08:23:26 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.296 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Preparing to wait for external event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.297 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.297 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.297 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.298 248514 DEBUG nova.virt.libvirt.vif [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:23:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.299 248514 DEBUG nova.network.os_vif_util [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.299 248514 DEBUG nova.network.os_vif_util [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.300 248514 DEBUG os_vif [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.301 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.301 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.302 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.306 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.307 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1eabc5e-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.307 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape1eabc5e-9e, col_values=(('external_ids', {'iface-id': 'e1eabc5e-9ed7-4b2e-ba64-11149b2f4043', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:27:85', 'vm-uuid': 'd503913e-a05e-47d4-9366-db4426b9aac1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:26 compute-0 NetworkManager[50376]: <info>  [1765614206.3103] manager: (tape1eabc5e-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.312 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.318 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.319 248514 INFO os_vif [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e')
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.411 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.411 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.411 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:b3:27:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.412 248514 INFO nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Using config drive
Dec 13 08:23:26 compute-0 nova_compute[248510]: 2025-12-13 08:23:26.431 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:26 compute-0 ceph-mon[76537]: pgmap v1747: 321 pgs: 321 active+clean; 289 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 577 KiB/s rd, 7.4 MiB/s wr, 164 op/s
Dec 13 08:23:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1091847818' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.057 248514 INFO nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Creating config drive at /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/disk.config
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.065 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_y2gug60 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 289 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 465 KiB/s rd, 5.7 MiB/s wr, 136 op/s
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.178 248514 DEBUG nova.network.neutron [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updating instance_info_cache with network_info: [{"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.210 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_y2gug60" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.233 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.237 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/disk.config d503913e-a05e-47d4-9366-db4426b9aac1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.269 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Releasing lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.270 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Instance network_info: |[{"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.271 248514 DEBUG oslo_concurrency.lockutils [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.272 248514 DEBUG nova.network.neutron [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Refreshing network info cache for port d462a8a0-34ee-4682-ac5c-f7632b5ad39c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.275 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Start _get_guest_xml network_info=[{"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.280 248514 WARNING nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.286 248514 DEBUG nova.virt.libvirt.host [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.286 248514 DEBUG nova.virt.libvirt.host [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.289 248514 DEBUG nova.virt.libvirt.host [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.290 248514 DEBUG nova.virt.libvirt.host [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.290 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.290 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.291 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.291 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.291 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.292 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.292 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.292 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.292 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.293 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.293 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.293 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.297 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.333 248514 DEBUG nova.network.neutron [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updated VIF entry in instance network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.334 248514 DEBUG nova.network.neutron [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.386 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/disk.config d503913e-a05e-47d4-9366-db4426b9aac1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.387 248514 INFO nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Deleting local config drive /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/disk.config because it was imported into RBD.
Dec 13 08:23:27 compute-0 NetworkManager[50376]: <info>  [1765614207.4531] manager: (tape1eabc5e-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Dec 13 08:23:27 compute-0 kernel: tape1eabc5e-9e: entered promiscuous mode
Dec 13 08:23:27 compute-0 ovn_controller[148476]: 2025-12-13T08:23:27Z|00261|binding|INFO|Claiming lport e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 for this chassis.
Dec 13 08:23:27 compute-0 ovn_controller[148476]: 2025-12-13T08:23:27Z|00262|binding|INFO|e1eabc5e-9ed7-4b2e-ba64-11149b2f4043: Claiming fa:16:3e:b3:27:85 10.100.0.3
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.459 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:27 compute-0 ovn_controller[148476]: 2025-12-13T08:23:27Z|00263|binding|INFO|Setting lport e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 ovn-installed in OVS
Dec 13 08:23:27 compute-0 systemd-udevd[285421]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:27 compute-0 systemd-machined[210538]: New machine qemu-37-instance-00000020.
Dec 13 08:23:27 compute-0 NetworkManager[50376]: <info>  [1765614207.4984] device (tape1eabc5e-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:23:27 compute-0 NetworkManager[50376]: <info>  [1765614207.4992] device (tape1eabc5e-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:23:27 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000020.
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.548 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:27:85 10.100.0.3'], port_security=['fa:16:3e:b3:27:85 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd503913e-a05e-47d4-9366-db4426b9aac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd758160-971f-4f0e-a4ca-a13304d3c491', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:23:27 compute-0 ovn_controller[148476]: 2025-12-13T08:23:27Z|00264|binding|INFO|Setting lport e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 up in Southbound
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.550 158419 INFO neutron.agent.ovn.metadata.agent [-] Port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.552 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.562 248514 DEBUG oslo_concurrency.lockutils [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb129a8-dcfa-4790-8345-e005d8718800]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.614 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8f0697-25cf-48ff-982c-46eafbbab80c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.618 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[45363661-f37c-4083-bb19-467e5d6c9229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.654 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[79772222-8646-404f-9c0b-0371c1e9993f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.673 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[437a25d2-ed55-4050-9f72-a6ed11a028d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 19285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285436, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.694 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c65ea9af-9d32-4191-ba16-b04740c2fd96]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675811, 'tstamp': 675811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285437, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675814, 'tstamp': 675814}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285437, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.697 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.700 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.700 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.700 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.701 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.701 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:23:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1932318373' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.902 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.928 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:27 compute-0 nova_compute[248510]: 2025-12-13 08:23:27.933 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.277 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614208.2769887, d503913e-a05e-47d4-9366-db4426b9aac1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.279 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] VM Started (Lifecycle Event)
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.315 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.321 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614208.278722, d503913e-a05e-47d4-9366-db4426b9aac1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.322 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] VM Paused (Lifecycle Event)
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.363 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.368 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.395 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:23:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:23:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666929185' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.488 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.490 248514 DEBUG nova.virt.libvirt.vif [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1547732604',display_name='tempest-FloatingIPsAssociationTestJSON-server-1547732604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1547732604',id=33,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06fbab937d6444558229b2351632e711',ramdisk_id='',reservation_id='r-ath0htte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-609563086',owner_user_n
ame='tempest-FloatingIPsAssociationTestJSON-609563086-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:23:22Z,user_data=None,user_id='79d4b34b8bd3452cb5b8c0954166f397',uuid=4403714d-3521-4409-9c3b-59d655fc999d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.491 248514 DEBUG nova.network.os_vif_util [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converting VIF {"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.491 248514 DEBUG nova.network.os_vif_util [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.493 248514 DEBUG nova.objects.instance [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4403714d-3521-4409-9c3b-59d655fc999d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.501 248514 DEBUG nova.compute.manager [req-bc81d783-28eb-4284-940b-3d2524c406ff req-17e03671-b76d-4a97-8975-26b377d5f3e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.502 248514 DEBUG oslo_concurrency.lockutils [req-bc81d783-28eb-4284-940b-3d2524c406ff req-17e03671-b76d-4a97-8975-26b377d5f3e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.502 248514 DEBUG oslo_concurrency.lockutils [req-bc81d783-28eb-4284-940b-3d2524c406ff req-17e03671-b76d-4a97-8975-26b377d5f3e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.503 248514 DEBUG oslo_concurrency.lockutils [req-bc81d783-28eb-4284-940b-3d2524c406ff req-17e03671-b76d-4a97-8975-26b377d5f3e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.503 248514 DEBUG nova.compute.manager [req-bc81d783-28eb-4284-940b-3d2524c406ff req-17e03671-b76d-4a97-8975-26b377d5f3e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Processing event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.504 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.510 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614208.5084395, d503913e-a05e-47d4-9366-db4426b9aac1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.510 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] VM Resumed (Lifecycle Event)
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.513 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:23:28 compute-0 nova_compute[248510]:   <uuid>4403714d-3521-4409-9c3b-59d655fc999d</uuid>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   <name>instance-00000021</name>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1547732604</nova:name>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:23:27</nova:creationTime>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:23:28 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:23:28 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:23:28 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:23:28 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:23:28 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:23:28 compute-0 nova_compute[248510]:         <nova:user uuid="79d4b34b8bd3452cb5b8c0954166f397">tempest-FloatingIPsAssociationTestJSON-609563086-project-member</nova:user>
Dec 13 08:23:28 compute-0 nova_compute[248510]:         <nova:project uuid="06fbab937d6444558229b2351632e711">tempest-FloatingIPsAssociationTestJSON-609563086</nova:project>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:23:28 compute-0 nova_compute[248510]:         <nova:port uuid="d462a8a0-34ee-4682-ac5c-f7632b5ad39c">
Dec 13 08:23:28 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <system>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <entry name="serial">4403714d-3521-4409-9c3b-59d655fc999d</entry>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <entry name="uuid">4403714d-3521-4409-9c3b-59d655fc999d</entry>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     </system>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   <os>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   </os>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   <features>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   </features>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/4403714d-3521-4409-9c3b-59d655fc999d_disk">
Dec 13 08:23:28 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       </source>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:23:28 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/4403714d-3521-4409-9c3b-59d655fc999d_disk.config">
Dec 13 08:23:28 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       </source>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:23:28 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:9f:23:d9"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <target dev="tapd462a8a0-34"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/console.log" append="off"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <video>
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     </video>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:23:28 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:23:28 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:23:28 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:23:28 compute-0 nova_compute[248510]: </domain>
Dec 13 08:23:28 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.514 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Preparing to wait for external event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.514 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.515 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.515 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.516 248514 DEBUG nova.virt.libvirt.vif [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1547732604',display_name='tempest-FloatingIPsAssociationTestJSON-server-1547732604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1547732604',id=33,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06fbab937d6444558229b2351632e711',ramdisk_id='',reservation_id='r-ath0htte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-609563086',ow
ner_user_name='tempest-FloatingIPsAssociationTestJSON-609563086-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:23:22Z,user_data=None,user_id='79d4b34b8bd3452cb5b8c0954166f397',uuid=4403714d-3521-4409-9c3b-59d655fc999d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.516 248514 DEBUG nova.network.os_vif_util [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converting VIF {"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.517 248514 DEBUG nova.network.os_vif_util [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.517 248514 DEBUG os_vif [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.518 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.519 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.519 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.520 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.524 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd462a8a0-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.524 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd462a8a0-34, col_values=(('external_ids', {'iface-id': 'd462a8a0-34ee-4682-ac5c-f7632b5ad39c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:23:d9', 'vm-uuid': '4403714d-3521-4409-9c3b-59d655fc999d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:28 compute-0 NetworkManager[50376]: <info>  [1765614208.5279] manager: (tapd462a8a0-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.530 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.531 248514 INFO nova.virt.libvirt.driver [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Instance spawned successfully.
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.531 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.538 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.540 248514 INFO os_vif [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34')
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.547 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.557 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.557 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.558 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.558 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.559 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.559 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.565 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.623 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.624 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.624 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] No VIF found with MAC fa:16:3e:9f:23:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.625 248514 INFO nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Using config drive
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.650 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.659 248514 INFO nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Took 10.35 seconds to spawn the instance on the hypervisor.
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.659 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:28 compute-0 ceph-mon[76537]: pgmap v1748: 321 pgs: 321 active+clean; 289 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 465 KiB/s rd, 5.7 MiB/s wr, 136 op/s
Dec 13 08:23:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1932318373' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1666929185' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.858 248514 INFO nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Took 12.07 seconds to build instance.
Dec 13 08:23:28 compute-0 nova_compute[248510]: 2025-12-13 08:23:28.884 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 293 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 5.8 MiB/s wr, 146 op/s
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.419 248514 INFO nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Creating config drive at /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/disk.config
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.425 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8723ao83 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.563 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8723ao83" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.588 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.594 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/disk.config 4403714d-3521-4409-9c3b-59d655fc999d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.627 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.633 248514 DEBUG nova.network.neutron [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updated VIF entry in instance network info cache for port d462a8a0-34ee-4682-ac5c-f7632b5ad39c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.633 248514 DEBUG nova.network.neutron [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updating instance_info_cache with network_info: [{"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.657 248514 DEBUG oslo_concurrency.lockutils [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.736 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/disk.config 4403714d-3521-4409-9c3b-59d655fc999d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.737 248514 INFO nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Deleting local config drive /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/disk.config because it was imported into RBD.
Dec 13 08:23:29 compute-0 kernel: tapd462a8a0-34: entered promiscuous mode
Dec 13 08:23:29 compute-0 NetworkManager[50376]: <info>  [1765614209.7843] manager: (tapd462a8a0-34): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:29 compute-0 ovn_controller[148476]: 2025-12-13T08:23:29Z|00265|binding|INFO|Claiming lport d462a8a0-34ee-4682-ac5c-f7632b5ad39c for this chassis.
Dec 13 08:23:29 compute-0 ovn_controller[148476]: 2025-12-13T08:23:29Z|00266|binding|INFO|d462a8a0-34ee-4682-ac5c-f7632b5ad39c: Claiming fa:16:3e:9f:23:d9 10.100.0.4
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.796 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:23:d9 10.100.0.4'], port_security=['fa:16:3e:9f:23:d9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4403714d-3521-4409-9c3b-59d655fc999d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06fbab937d6444558229b2351632e711', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fa325bd2-c57a-49fb-8dd9-f45405c95b4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f47224ac-d05f-46db-ac07-cb476b38b044, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d462a8a0-34ee-4682-ac5c-f7632b5ad39c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.797 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d462a8a0-34ee-4682-ac5c-f7632b5ad39c in datapath 62193ff6-aaa1-401a-b1e0-512e67752a9e bound to our chassis
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.800 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62193ff6-aaa1-401a-b1e0-512e67752a9e
Dec 13 08:23:29 compute-0 NetworkManager[50376]: <info>  [1765614209.8076] device (tapd462a8a0-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:23:29 compute-0 NetworkManager[50376]: <info>  [1765614209.8085] device (tapd462a8a0-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:23:29 compute-0 ovn_controller[148476]: 2025-12-13T08:23:29Z|00267|binding|INFO|Setting lport d462a8a0-34ee-4682-ac5c-f7632b5ad39c ovn-installed in OVS
Dec 13 08:23:29 compute-0 ovn_controller[148476]: 2025-12-13T08:23:29Z|00268|binding|INFO|Setting lport d462a8a0-34ee-4682-ac5c-f7632b5ad39c up in Southbound
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.816 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.823 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c71dd8e-9c8a-49b2-b5de-60f24ab2a5f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:29 compute-0 systemd-machined[210538]: New machine qemu-38-instance-00000021.
Dec 13 08:23:29 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000021.
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.865 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[363baef5-e07d-49d3-b5e6-57004be0731b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.869 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ec075711-373b-4862-93a1-ba9f4b86b7c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.904 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7a56e34c-f060-4f14-9bdd-7785c697e499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.945 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f97d5196-b21c-4648-8634-1166a1852fda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62193ff6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:33:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676001, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285607, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.968 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[70783894-0fa0-4b06-ab7e-15d15f67b1da]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62193ff6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676016, 'tstamp': 676016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285609, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62193ff6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676019, 'tstamp': 676019}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285609, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.970 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62193ff6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:29 compute-0 nova_compute[248510]: 2025-12-13 08:23:29.972 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.973 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62193ff6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.974 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.974 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62193ff6-a0, col_values=(('external_ids', {'iface-id': '67d122d2-811d-4aa8-bdde-aafc5e939b2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.975 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:23:30 compute-0 nova_compute[248510]: 2025-12-13 08:23:30.443 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614210.4427738, 4403714d-3521-4409-9c3b-59d655fc999d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:30 compute-0 nova_compute[248510]: 2025-12-13 08:23:30.443 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] VM Started (Lifecycle Event)
Dec 13 08:23:30 compute-0 nova_compute[248510]: 2025-12-13 08:23:30.475 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:30 compute-0 nova_compute[248510]: 2025-12-13 08:23:30.481 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614210.4440446, 4403714d-3521-4409-9c3b-59d655fc999d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:30 compute-0 nova_compute[248510]: 2025-12-13 08:23:30.481 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] VM Paused (Lifecycle Event)
Dec 13 08:23:30 compute-0 ceph-mon[76537]: pgmap v1749: 321 pgs: 321 active+clean; 293 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 5.8 MiB/s wr, 146 op/s
Dec 13 08:23:30 compute-0 nova_compute[248510]: 2025-12-13 08:23:30.875 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:30 compute-0 nova_compute[248510]: 2025-12-13 08:23:30.880 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:23:30 compute-0 nova_compute[248510]: 2025-12-13 08:23:30.905 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:23:31 compute-0 nova_compute[248510]: 2025-12-13 08:23:31.032 248514 DEBUG nova.compute.manager [req-9c389711-e0ab-4dff-a8b3-45d3e5298d5f req-d96d0e14-5a2a-4dc1-b4d9-5ba7293a2673 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:31 compute-0 nova_compute[248510]: 2025-12-13 08:23:31.032 248514 DEBUG oslo_concurrency.lockutils [req-9c389711-e0ab-4dff-a8b3-45d3e5298d5f req-d96d0e14-5a2a-4dc1-b4d9-5ba7293a2673 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:31 compute-0 nova_compute[248510]: 2025-12-13 08:23:31.033 248514 DEBUG oslo_concurrency.lockutils [req-9c389711-e0ab-4dff-a8b3-45d3e5298d5f req-d96d0e14-5a2a-4dc1-b4d9-5ba7293a2673 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:31 compute-0 nova_compute[248510]: 2025-12-13 08:23:31.033 248514 DEBUG oslo_concurrency.lockutils [req-9c389711-e0ab-4dff-a8b3-45d3e5298d5f req-d96d0e14-5a2a-4dc1-b4d9-5ba7293a2673 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:31 compute-0 nova_compute[248510]: 2025-12-13 08:23:31.033 248514 DEBUG nova.compute.manager [req-9c389711-e0ab-4dff-a8b3-45d3e5298d5f req-d96d0e14-5a2a-4dc1-b4d9-5ba7293a2673 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:23:31 compute-0 nova_compute[248510]: 2025-12-13 08:23:31.033 248514 WARNING nova.compute.manager [req-9c389711-e0ab-4dff-a8b3-45d3e5298d5f req-d96d0e14-5a2a-4dc1-b4d9-5ba7293a2673 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received unexpected event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 for instance with vm_state active and task_state None.
Dec 13 08:23:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 293 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 2.4 MiB/s wr, 61 op/s
Dec 13 08:23:31 compute-0 ceph-mon[76537]: pgmap v1750: 321 pgs: 321 active+clean; 293 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 2.4 MiB/s wr, 61 op/s
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.021 248514 DEBUG nova.compute.manager [req-befbd2c6-bb2d-4edc-b20b-71d4f37d0e8f req-45c69d29-a00c-4594-a52f-2b420818f92a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.021 248514 DEBUG oslo_concurrency.lockutils [req-befbd2c6-bb2d-4edc-b20b-71d4f37d0e8f req-45c69d29-a00c-4594-a52f-2b420818f92a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.022 248514 DEBUG oslo_concurrency.lockutils [req-befbd2c6-bb2d-4edc-b20b-71d4f37d0e8f req-45c69d29-a00c-4594-a52f-2b420818f92a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.022 248514 DEBUG oslo_concurrency.lockutils [req-befbd2c6-bb2d-4edc-b20b-71d4f37d0e8f req-45c69d29-a00c-4594-a52f-2b420818f92a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.022 248514 DEBUG nova.compute.manager [req-befbd2c6-bb2d-4edc-b20b-71d4f37d0e8f req-45c69d29-a00c-4594-a52f-2b420818f92a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Processing event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.023 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.027 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614213.0268362, 4403714d-3521-4409-9c3b-59d655fc999d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.027 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] VM Resumed (Lifecycle Event)
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.033 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.038 248514 INFO nova.virt.libvirt.driver [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Instance spawned successfully.
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.038 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.054 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.061 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.065 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.065 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.066 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.066 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.066 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.067 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:23:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 293 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 716 KiB/s rd, 2.4 MiB/s wr, 83 op/s
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.090 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.127 248514 INFO nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Took 10.38 seconds to spawn the instance on the hypervisor.
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.128 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.203 248514 INFO nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Took 12.12 seconds to build instance.
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.225 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:33 compute-0 nova_compute[248510]: 2025-12-13 08:23:33.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:34 compute-0 ceph-mon[76537]: pgmap v1751: 321 pgs: 321 active+clean; 293 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 716 KiB/s rd, 2.4 MiB/s wr, 83 op/s
Dec 13 08:23:34 compute-0 nova_compute[248510]: 2025-12-13 08:23:34.550 248514 DEBUG nova.compute.manager [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:34 compute-0 nova_compute[248510]: 2025-12-13 08:23:34.551 248514 DEBUG nova.compute.manager [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing instance network info cache due to event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:23:34 compute-0 nova_compute[248510]: 2025-12-13 08:23:34.551 248514 DEBUG oslo_concurrency.lockutils [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:34 compute-0 nova_compute[248510]: 2025-12-13 08:23:34.551 248514 DEBUG oslo_concurrency.lockutils [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:34 compute-0 nova_compute[248510]: 2025-12-13 08:23:34.552 248514 DEBUG nova.network.neutron [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:23:34 compute-0 nova_compute[248510]: 2025-12-13 08:23:34.596 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:34 compute-0 nova_compute[248510]: 2025-12-13 08:23:34.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:23:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:23:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 293 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Dec 13 08:23:35 compute-0 nova_compute[248510]: 2025-12-13 08:23:35.618 248514 DEBUG nova.compute.manager [req-29ef8ab7-4462-4536-b0f4-81b224b76ef9 req-33ff0052-1b77-4688-8e34-c69250ef8b31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:35 compute-0 nova_compute[248510]: 2025-12-13 08:23:35.618 248514 DEBUG oslo_concurrency.lockutils [req-29ef8ab7-4462-4536-b0f4-81b224b76ef9 req-33ff0052-1b77-4688-8e34-c69250ef8b31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:35 compute-0 nova_compute[248510]: 2025-12-13 08:23:35.618 248514 DEBUG oslo_concurrency.lockutils [req-29ef8ab7-4462-4536-b0f4-81b224b76ef9 req-33ff0052-1b77-4688-8e34-c69250ef8b31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:35 compute-0 nova_compute[248510]: 2025-12-13 08:23:35.619 248514 DEBUG oslo_concurrency.lockutils [req-29ef8ab7-4462-4536-b0f4-81b224b76ef9 req-33ff0052-1b77-4688-8e34-c69250ef8b31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:35 compute-0 nova_compute[248510]: 2025-12-13 08:23:35.619 248514 DEBUG nova.compute.manager [req-29ef8ab7-4462-4536-b0f4-81b224b76ef9 req-33ff0052-1b77-4688-8e34-c69250ef8b31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] No waiting events found dispatching network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:23:35 compute-0 nova_compute[248510]: 2025-12-13 08:23:35.619 248514 WARNING nova.compute.manager [req-29ef8ab7-4462-4536-b0f4-81b224b76ef9 req-33ff0052-1b77-4688-8e34-c69250ef8b31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received unexpected event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c for instance with vm_state active and task_state None.
Dec 13 08:23:36 compute-0 ceph-mon[76537]: pgmap v1752: 321 pgs: 321 active+clean; 293 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Dec 13 08:23:36 compute-0 nova_compute[248510]: 2025-12-13 08:23:36.503 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:36 compute-0 nova_compute[248510]: 2025-12-13 08:23:36.504 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:36 compute-0 nova_compute[248510]: 2025-12-13 08:23:36.521 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:23:36 compute-0 nova_compute[248510]: 2025-12-13 08:23:36.619 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:36 compute-0 nova_compute[248510]: 2025-12-13 08:23:36.621 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:36 compute-0 nova_compute[248510]: 2025-12-13 08:23:36.627 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:23:36 compute-0 nova_compute[248510]: 2025-12-13 08:23:36.627 248514 INFO nova.compute.claims [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:23:36 compute-0 nova_compute[248510]: 2025-12-13 08:23:36.657 248514 DEBUG nova.compute.manager [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:36 compute-0 nova_compute[248510]: 2025-12-13 08:23:36.657 248514 DEBUG nova.compute.manager [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing instance network info cache due to event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:23:36 compute-0 nova_compute[248510]: 2025-12-13 08:23:36.658 248514 DEBUG oslo_concurrency.lockutils [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:36 compute-0 nova_compute[248510]: 2025-12-13 08:23:36.658 248514 DEBUG oslo_concurrency.lockutils [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:36 compute-0 nova_compute[248510]: 2025-12-13 08:23:36.658 248514 DEBUG nova.network.neutron [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:23:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 293 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 99 KiB/s wr, 95 op/s
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.227 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:23:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:23:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1088463483' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.813 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.821 248514 DEBUG nova.compute.provider_tree [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.846 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.848 248514 DEBUG nova.scheduler.client.report [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.884 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.885 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.965 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.966 248514 DEBUG nova.network.neutron [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:23:37 compute-0 nova_compute[248510]: 2025-12-13 08:23:37.990 248514 INFO nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.015 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.125 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.127 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.128 248514 INFO nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Creating image(s)
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.153 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.182 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.206 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.211 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.292 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.293 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.294 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.294 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.319 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.324 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.356 248514 DEBUG nova.network.neutron [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updated VIF entry in instance network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.358 248514 DEBUG nova.network.neutron [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.362 248514 DEBUG nova.policy [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5faa7317a5cd4b748a984970f79ef52b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2be5ed2a3b1a405bb6891ecdc5cba68c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.405 248514 DEBUG oslo_concurrency.lockutils [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:38 compute-0 ceph-mon[76537]: pgmap v1753: 321 pgs: 321 active+clean; 293 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 99 KiB/s wr, 95 op/s
Dec 13 08:23:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1088463483' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.680 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.759 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] resizing rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.942 248514 DEBUG nova.network.neutron [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updated VIF entry in instance network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.944 248514 DEBUG nova.network.neutron [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:38 compute-0 nova_compute[248510]: 2025-12-13 08:23:38.970 248514 DEBUG oslo_concurrency.lockutils [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 319 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 167 op/s
Dec 13 08:23:39 compute-0 nova_compute[248510]: 2025-12-13 08:23:39.358 248514 DEBUG nova.objects.instance [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lazy-loading 'migration_context' on Instance uuid ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:23:39 compute-0 nova_compute[248510]: 2025-12-13 08:23:39.376 248514 DEBUG nova.compute.manager [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:39 compute-0 nova_compute[248510]: 2025-12-13 08:23:39.376 248514 DEBUG nova.compute.manager [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing instance network info cache due to event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:23:39 compute-0 nova_compute[248510]: 2025-12-13 08:23:39.377 248514 DEBUG oslo_concurrency.lockutils [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:39 compute-0 nova_compute[248510]: 2025-12-13 08:23:39.377 248514 DEBUG oslo_concurrency.lockutils [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:39 compute-0 nova_compute[248510]: 2025-12-13 08:23:39.377 248514 DEBUG nova.network.neutron [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:23:39 compute-0 nova_compute[248510]: 2025-12-13 08:23:39.380 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:23:39 compute-0 nova_compute[248510]: 2025-12-13 08:23:39.381 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Ensure instance console log exists: /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:23:39 compute-0 nova_compute[248510]: 2025-12-13 08:23:39.381 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:39 compute-0 nova_compute[248510]: 2025-12-13 08:23:39.382 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:39 compute-0 nova_compute[248510]: 2025-12-13 08:23:39.382 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:39 compute-0 nova_compute[248510]: 2025-12-13 08:23:39.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:23:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:23:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:23:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:23:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:23:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:23:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:23:40 compute-0 nova_compute[248510]: 2025-12-13 08:23:40.277 248514 DEBUG nova.compute.manager [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:40 compute-0 nova_compute[248510]: 2025-12-13 08:23:40.280 248514 DEBUG nova.compute.manager [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing instance network info cache due to event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:23:40 compute-0 nova_compute[248510]: 2025-12-13 08:23:40.280 248514 DEBUG oslo_concurrency.lockutils [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:40 compute-0 nova_compute[248510]: 2025-12-13 08:23:40.281 248514 DEBUG oslo_concurrency.lockutils [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:40 compute-0 nova_compute[248510]: 2025-12-13 08:23:40.281 248514 DEBUG nova.network.neutron [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:23:40 compute-0 ceph-mon[76537]: pgmap v1754: 321 pgs: 321 active+clean; 319 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 167 op/s
Dec 13 08:23:40 compute-0 nova_compute[248510]: 2025-12-13 08:23:40.759 248514 DEBUG oslo_concurrency.lockutils [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-3b43a9c7-85e7-4558-bd2f-e4712882021e-815f5388-ae4c-4748-ae1e-a35179c687ad" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:40 compute-0 nova_compute[248510]: 2025-12-13 08:23:40.760 248514 DEBUG oslo_concurrency.lockutils [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-3b43a9c7-85e7-4558-bd2f-e4712882021e-815f5388-ae4c-4748-ae1e-a35179c687ad" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:40 compute-0 nova_compute[248510]: 2025-12-13 08:23:40.761 248514 DEBUG nova.objects.instance [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid 3b43a9c7-85e7-4558-bd2f-e4712882021e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:23:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 319 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 957 KiB/s wr, 157 op/s
Dec 13 08:23:42 compute-0 nova_compute[248510]: 2025-12-13 08:23:42.277 248514 DEBUG nova.network.neutron [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Successfully created port: 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:23:42 compute-0 nova_compute[248510]: 2025-12-13 08:23:42.396 248514 DEBUG nova.objects.instance [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_requests' on Instance uuid 3b43a9c7-85e7-4558-bd2f-e4712882021e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:23:42 compute-0 ovn_controller[148476]: 2025-12-13T08:23:42Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:27:85 10.100.0.3
Dec 13 08:23:42 compute-0 ovn_controller[148476]: 2025-12-13T08:23:42Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:27:85 10.100.0.3
Dec 13 08:23:42 compute-0 nova_compute[248510]: 2025-12-13 08:23:42.593 248514 DEBUG nova.network.neutron [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:23:42 compute-0 ceph-mon[76537]: pgmap v1755: 321 pgs: 321 active+clean; 319 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 957 KiB/s wr, 157 op/s
Dec 13 08:23:42 compute-0 nova_compute[248510]: 2025-12-13 08:23:42.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:23:42 compute-0 nova_compute[248510]: 2025-12-13 08:23:42.916 248514 DEBUG nova.policy [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:23:42 compute-0 podman[285842]: 2025-12-13 08:23:42.976636002 +0000 UTC m=+0.059613922 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 08:23:42 compute-0 podman[285841]: 2025-12-13 08:23:42.984574248 +0000 UTC m=+0.073145006 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 08:23:43 compute-0 podman[285840]: 2025-12-13 08:23:43.010968889 +0000 UTC m=+0.101071895 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:23:43 compute-0 nova_compute[248510]: 2025-12-13 08:23:43.010 248514 DEBUG nova.network.neutron [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updated VIF entry in instance network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:43 compute-0 nova_compute[248510]: 2025-12-13 08:23:43.010 248514 DEBUG nova.network.neutron [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 346 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Dec 13 08:23:43 compute-0 nova_compute[248510]: 2025-12-13 08:23:43.098 248514 DEBUG oslo_concurrency.lockutils [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:43 compute-0 nova_compute[248510]: 2025-12-13 08:23:43.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:43 compute-0 nova_compute[248510]: 2025-12-13 08:23:43.722 248514 DEBUG nova.network.neutron [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updated VIF entry in instance network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:43 compute-0 nova_compute[248510]: 2025-12-13 08:23:43.723 248514 DEBUG nova.network.neutron [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:43 compute-0 nova_compute[248510]: 2025-12-13 08:23:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:23:43 compute-0 nova_compute[248510]: 2025-12-13 08:23:43.876 248514 DEBUG oslo_concurrency.lockutils [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:44 compute-0 nova_compute[248510]: 2025-12-13 08:23:44.512 248514 DEBUG nova.compute.manager [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:44 compute-0 nova_compute[248510]: 2025-12-13 08:23:44.513 248514 DEBUG nova.compute.manager [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing instance network info cache due to event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:23:44 compute-0 nova_compute[248510]: 2025-12-13 08:23:44.513 248514 DEBUG oslo_concurrency.lockutils [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:44 compute-0 nova_compute[248510]: 2025-12-13 08:23:44.514 248514 DEBUG oslo_concurrency.lockutils [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:44 compute-0 nova_compute[248510]: 2025-12-13 08:23:44.514 248514 DEBUG nova.network.neutron [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:23:44 compute-0 nova_compute[248510]: 2025-12-13 08:23:44.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:44 compute-0 nova_compute[248510]: 2025-12-13 08:23:44.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:23:44 compute-0 nova_compute[248510]: 2025-12-13 08:23:44.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:23:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:23:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 372 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 204 op/s
Dec 13 08:23:45 compute-0 nova_compute[248510]: 2025-12-13 08:23:45.505 248514 DEBUG nova.network.neutron [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Successfully updated port: 815f5388-ae4c-4748-ae1e-a35179c687ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:23:45 compute-0 nova_compute[248510]: 2025-12-13 08:23:45.638 248514 DEBUG oslo_concurrency.lockutils [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:45 compute-0 nova_compute[248510]: 2025-12-13 08:23:45.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:23:45 compute-0 nova_compute[248510]: 2025-12-13 08:23:45.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.140 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.141 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.141 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.141 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.141 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:23:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/650964901' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.719 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.903 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.903 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.907 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.907 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.911 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.911 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.914 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:23:46 compute-0 nova_compute[248510]: 2025-12-13 08:23:46.914 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:23:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 372 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 141 op/s
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.113 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.114 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3560MB free_disk=59.8443395672366GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.115 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.115 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.305 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3b43a9c7-85e7-4558-bd2f-e4712882021e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.306 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance dc64fea4-e9a8-47e7-8a3a-d01897fc81de actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.307 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance d503913e-a05e-47d4-9366-db4426b9aac1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.307 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 4403714d-3521-4409-9c3b-59d655fc999d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.308 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.308 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.309 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.432 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.661 248514 DEBUG nova.network.neutron [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Successfully updated port: 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.800 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.800 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:47 compute-0 nova_compute[248510]: 2025-12-13 08:23:47.800 248514 DEBUG nova.network.neutron [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:23:47 compute-0 ceph-mon[76537]: pgmap v1756: 321 pgs: 321 active+clean; 346 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Dec 13 08:23:48 compute-0 nova_compute[248510]: 2025-12-13 08:23:48.072 248514 DEBUG nova.network.neutron [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:23:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:23:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/974391493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:23:48 compute-0 nova_compute[248510]: 2025-12-13 08:23:48.458 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:48 compute-0 nova_compute[248510]: 2025-12-13 08:23:48.465 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:23:48 compute-0 nova_compute[248510]: 2025-12-13 08:23:48.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:48 compute-0 nova_compute[248510]: 2025-12-13 08:23:48.587 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:23:48 compute-0 nova_compute[248510]: 2025-12-13 08:23:48.694 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:23:48 compute-0 nova_compute[248510]: 2025-12-13 08:23:48.695 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:48 compute-0 nova_compute[248510]: 2025-12-13 08:23:48.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 390 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.6 MiB/s wr, 163 op/s
Dec 13 08:23:49 compute-0 ceph-mon[76537]: pgmap v1757: 321 pgs: 321 active+clean; 372 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 204 op/s
Dec 13 08:23:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/650964901' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:23:49 compute-0 ceph-mon[76537]: pgmap v1758: 321 pgs: 321 active+clean; 372 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 141 op/s
Dec 13 08:23:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/974391493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:23:49 compute-0 nova_compute[248510]: 2025-12-13 08:23:49.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:49 compute-0 nova_compute[248510]: 2025-12-13 08:23:49.786 248514 DEBUG nova.network.neutron [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updated VIF entry in instance network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:49 compute-0 nova_compute[248510]: 2025-12-13 08:23:49.787 248514 DEBUG nova.network.neutron [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:49 compute-0 nova_compute[248510]: 2025-12-13 08:23:49.821 248514 DEBUG oslo_concurrency.lockutils [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:49 compute-0 nova_compute[248510]: 2025-12-13 08:23:49.822 248514 DEBUG oslo_concurrency.lockutils [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:49 compute-0 nova_compute[248510]: 2025-12-13 08:23:49.822 248514 DEBUG nova.network.neutron [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:23:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.132 248514 WARNING nova.network.neutron [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it
Dec 13 08:23:50 compute-0 ceph-mon[76537]: pgmap v1759: 321 pgs: 321 active+clean; 390 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.6 MiB/s wr, 163 op/s
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.690 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:23:50 compute-0 ovn_controller[148476]: 2025-12-13T08:23:50Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:23:d9 10.100.0.4
Dec 13 08:23:50 compute-0 ovn_controller[148476]: 2025-12-13T08:23:50Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:23:d9 10.100.0.4
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.816 248514 DEBUG nova.network.neutron [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.859 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.859 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Instance network_info: |[{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.862 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Start _get_guest_xml network_info=[{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.867 248514 WARNING nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.874 248514 DEBUG nova.virt.libvirt.host [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.875 248514 DEBUG nova.virt.libvirt.host [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.879 248514 DEBUG nova.virt.libvirt.host [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.879 248514 DEBUG nova.virt.libvirt.host [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.880 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.880 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.880 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.880 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.881 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.881 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.881 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.882 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.882 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.882 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.882 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.882 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:23:50 compute-0 nova_compute[248510]: 2025-12-13 08:23:50.886 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 390 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 4.7 MiB/s wr, 91 op/s
Dec 13 08:23:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:23:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019114998' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:51 compute-0 nova_compute[248510]: 2025-12-13 08:23:51.464 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:51 compute-0 nova_compute[248510]: 2025-12-13 08:23:51.487 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:51 compute-0 nova_compute[248510]: 2025-12-13 08:23:51.491 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:23:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/514353425' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.063 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.065 248514 DEBUG nova.virt.libvirt.vif [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1901007496',display_name='tempest-AttachInterfacesUnderV243Test-server-1901007496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1901007496',id=34,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAtqDaZq3IK7Bvm/s6fqCH+TSHLKWsERX0aPeV408BGJSMsRQoO1UjptArZn77j735/fg+c2goyKkkvVN7UQeehgaqDzhHMveiUhv8vzTex1upUSSOpKWfKRhsOR5NuVjA==',key_name='tempest-keypair-862524999',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2be5ed2a3b1a405bb6891ecdc5cba68c',ramdisk_id='',reservation_id='r-l244eynx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1008670327',owner_user_name='tempest-AttachInterfacesUnderV243Test-1008670327-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:23:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5faa7317a5cd4b748a984970f79ef52b',uuid=ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.066 248514 DEBUG nova.network.os_vif_util [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Converting VIF {"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.067 248514 DEBUG nova.network.os_vif_util [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.068 248514 DEBUG nova.objects.instance [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lazy-loading 'pci_devices' on Instance uuid ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.084 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:23:52 compute-0 nova_compute[248510]:   <uuid>ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5</uuid>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   <name>instance-00000022</name>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1901007496</nova:name>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:23:50</nova:creationTime>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:23:52 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:23:52 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:23:52 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:23:52 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:23:52 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:23:52 compute-0 nova_compute[248510]:         <nova:user uuid="5faa7317a5cd4b748a984970f79ef52b">tempest-AttachInterfacesUnderV243Test-1008670327-project-member</nova:user>
Dec 13 08:23:52 compute-0 nova_compute[248510]:         <nova:project uuid="2be5ed2a3b1a405bb6891ecdc5cba68c">tempest-AttachInterfacesUnderV243Test-1008670327</nova:project>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:23:52 compute-0 nova_compute[248510]:         <nova:port uuid="5d8e1c45-4a7d-4fab-9d17-2bb5837e6134">
Dec 13 08:23:52 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <system>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <entry name="serial">ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5</entry>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <entry name="uuid">ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5</entry>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     </system>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   <os>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   </os>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   <features>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   </features>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk">
Dec 13 08:23:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       </source>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:23:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk.config">
Dec 13 08:23:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       </source>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:23:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:83:58:17"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <target dev="tap5d8e1c45-4a"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/console.log" append="off"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <video>
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     </video>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:23:52 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:23:52 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:23:52 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:23:52 compute-0 nova_compute[248510]: </domain>
Dec 13 08:23:52 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.084 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Preparing to wait for external event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.085 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.085 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.085 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.086 248514 DEBUG nova.virt.libvirt.vif [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1901007496',display_name='tempest-AttachInterfacesUnderV243Test-server-1901007496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1901007496',id=34,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAtqDaZq3IK7Bvm/s6fqCH+TSHLKWsERX0aPeV408BGJSMsRQoO1UjptArZn77j735/fg+c2goyKkkvVN7UQeehgaqDzhHMveiUhv8vzTex1upUSSOpKWfKRhsOR5NuVjA==',key_name='tempest-keypair-862524999',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2be5ed2a3b1a405bb6891ecdc5cba68c',ramdisk_id='',reservation_id='r-l244eynx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1008670327',owner_user_name='tempest-AttachInterfacesUnderV243Test-1008670327-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:23:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5faa7317a5cd4b748a984970f79ef52b',uuid=ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.086 248514 DEBUG nova.network.os_vif_util [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Converting VIF {"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.086 248514 DEBUG nova.network.os_vif_util [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.087 248514 DEBUG os_vif [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.087 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.088 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.088 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.092 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d8e1c45-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.092 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5d8e1c45-4a, col_values=(('external_ids', {'iface-id': '5d8e1c45-4a7d-4fab-9d17-2bb5837e6134', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:58:17', 'vm-uuid': 'ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:52 compute-0 NetworkManager[50376]: <info>  [1765614232.0950] manager: (tap5d8e1c45-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.096 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.100 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.103 248514 INFO os_vif [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a')
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.159 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.159 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.159 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] No VIF found with MAC fa:16:3e:83:58:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.160 248514 INFO nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Using config drive
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.182 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:52 compute-0 ceph-mon[76537]: pgmap v1760: 321 pgs: 321 active+clean; 390 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 4.7 MiB/s wr, 91 op/s
Dec 13 08:23:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4019114998' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/514353425' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.769 248514 INFO nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Creating config drive at /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/disk.config
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.775 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk53yhg8s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.917 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk53yhg8s" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.943 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:23:52 compute-0 nova_compute[248510]: 2025-12-13 08:23:52.947 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/disk.config ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 400 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 5.0 MiB/s wr, 100 op/s
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.093 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/disk.config ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.095 248514 INFO nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Deleting local config drive /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/disk.config because it was imported into RBD.
Dec 13 08:23:53 compute-0 kernel: tap5d8e1c45-4a: entered promiscuous mode
Dec 13 08:23:53 compute-0 NetworkManager[50376]: <info>  [1765614233.1541] manager: (tap5d8e1c45-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Dec 13 08:23:53 compute-0 ovn_controller[148476]: 2025-12-13T08:23:53Z|00269|binding|INFO|Claiming lport 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 for this chassis.
Dec 13 08:23:53 compute-0 ovn_controller[148476]: 2025-12-13T08:23:53Z|00270|binding|INFO|5d8e1c45-4a7d-4fab-9d17-2bb5837e6134: Claiming fa:16:3e:83:58:17 10.100.0.7
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.156 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.166 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:58:17 10.100.0.7'], port_security=['fa:16:3e:83:58:17 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2be5ed2a3b1a405bb6891ecdc5cba68c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16148d83-2b30-49dd-9926-d0fb6490d2c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=615ba7d0-57bb-42d2-948a-6426e9af82d9, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.168 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 in datapath e08cb57c-0bd2-4c88-a4f8-e9d9be925301 bound to our chassis
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.170 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e08cb57c-0bd2-4c88-a4f8-e9d9be925301
Dec 13 08:23:53 compute-0 ovn_controller[148476]: 2025-12-13T08:23:53Z|00271|binding|INFO|Setting lport 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 ovn-installed in OVS
Dec 13 08:23:53 compute-0 ovn_controller[148476]: 2025-12-13T08:23:53Z|00272|binding|INFO|Setting lport 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 up in Southbound
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.175 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.179 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.186 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fe8d67-f7c5-4c3c-a171-31838f3af3a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.187 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape08cb57c-01 in ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.189 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape08cb57c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.189 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[211d9576-878b-4df3-8c9c-f2ee3a79fa19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.190 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[54a6236a-fe6c-4b17-8c3b-b89cf7c90cd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 systemd-machined[210538]: New machine qemu-39-instance-00000022.
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.203 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8f6ef6-8c75-445c-b51b-ffbc7a8aa387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000022.
Dec 13 08:23:53 compute-0 systemd-udevd[286081]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.217 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a89e25-4d10-485b-a534-ece57b94ba8a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 NetworkManager[50376]: <info>  [1765614233.2282] device (tap5d8e1c45-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:23:53 compute-0 NetworkManager[50376]: <info>  [1765614233.2291] device (tap5d8e1c45-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.251 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[41de7e64-b6bc-4c62-8b20-334ebaea6200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 systemd-udevd[286084]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:23:53 compute-0 NetworkManager[50376]: <info>  [1765614233.2587] manager: (tape08cb57c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.258 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fc070998-3002-4db3-921f-c7ca0bf76237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.295 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[41ae5c37-40f2-4545-8ba3-993b0662e2ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.298 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[440ac26b-877d-4811-a65c-828f0238631e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 NetworkManager[50376]: <info>  [1765614233.3239] device (tape08cb57c-00): carrier: link connected
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.332 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[68293124-0e2b-49b2-bcd1-73fc931517f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.357 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e38a28d8-4c37-4971-b2ee-215ac18e5980]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape08cb57c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:b8:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681053, 'reachable_time': 20902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286111, 'error': None, 'target': 'ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.378 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2f28d6-fc9d-4797-88b5-0594c0113dbd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:b819'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 681053, 'tstamp': 681053}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286112, 'error': None, 'target': 'ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.400 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[50cb1729-22d8-4d84-b65a-b35a17312ba4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape08cb57c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:b8:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681053, 'reachable_time': 20902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286113, 'error': None, 'target': 'ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.435 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[234ccc18-e6d1-455c-8ad0-fb213b181e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.512 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78f38083-819e-4433-b22e-5125e8c2fc11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.514 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape08cb57c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.514 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.514 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape08cb57c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.516 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 NetworkManager[50376]: <info>  [1765614233.5174] manager: (tape08cb57c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Dec 13 08:23:53 compute-0 kernel: tape08cb57c-00: entered promiscuous mode
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.519 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.523 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape08cb57c-00, col_values=(('external_ids', {'iface-id': '54f0e9c1-d2c9-4d7a-b554-d7af88f55e22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.524 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 ovn_controller[148476]: 2025-12-13T08:23:53Z|00273|binding|INFO|Releasing lport 54f0e9c1-d2c9-4d7a-b554-d7af88f55e22 from this chassis (sb_readonly=0)
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.542 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e08cb57c-0bd2-4c88-a4f8-e9d9be925301.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e08cb57c-0bd2-4c88-a4f8-e9d9be925301.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.543 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[143c0da7-9fa9-4f17-a5d4-22f5b52dbcb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.544 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-e08cb57c-0bd2-4c88-a4f8-e9d9be925301
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/e08cb57c-0bd2-4c88-a4f8-e9d9be925301.pid.haproxy
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID e08cb57c-0bd2-4c88-a4f8-e9d9be925301
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.544 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'env', 'PROCESS_TAG=haproxy-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e08cb57c-0bd2-4c88-a4f8-e9d9be925301.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.625 248514 DEBUG nova.network.neutron [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.648 248514 DEBUG oslo_concurrency.lockutils [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.652 248514 DEBUG nova.virt.libvirt.vif [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.652 248514 DEBUG nova.network.os_vif_util [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.653 248514 DEBUG nova.network.os_vif_util [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.653 248514 DEBUG os_vif [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.654 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.654 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.655 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.658 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap815f5388-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.659 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap815f5388-ae, col_values=(('external_ids', {'iface-id': '815f5388-ae4c-4748-ae1e-a35179c687ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:45:d8', 'vm-uuid': '3b43a9c7-85e7-4558-bd2f-e4712882021e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 NetworkManager[50376]: <info>  [1765614233.6614] manager: (tap815f5388-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.661 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.670 248514 INFO os_vif [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae')
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.672 248514 DEBUG nova.virt.libvirt.vif [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.672 248514 DEBUG nova.network.os_vif_util [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.673 248514 DEBUG nova.network.os_vif_util [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.676 248514 DEBUG nova.virt.libvirt.guest [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] attach device xml: <interface type="ethernet">
Dec 13 08:23:53 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:a8:45:d8"/>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   <target dev="tap815f5388-ae"/>
Dec 13 08:23:53 compute-0 nova_compute[248510]: </interface>
Dec 13 08:23:53 compute-0 nova_compute[248510]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 08:23:53 compute-0 NetworkManager[50376]: <info>  [1765614233.6864] manager: (tap815f5388-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Dec 13 08:23:53 compute-0 kernel: tap815f5388-ae: entered promiscuous mode
Dec 13 08:23:53 compute-0 systemd-udevd[286094]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:23:53 compute-0 ovn_controller[148476]: 2025-12-13T08:23:53Z|00274|binding|INFO|Claiming lport 815f5388-ae4c-4748-ae1e-a35179c687ad for this chassis.
Dec 13 08:23:53 compute-0 ovn_controller[148476]: 2025-12-13T08:23:53Z|00275|binding|INFO|815f5388-ae4c-4748-ae1e-a35179c687ad: Claiming fa:16:3e:a8:45:d8 10.100.0.9
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.689 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.699 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:45:d8 10.100.0.9'], port_security=['fa:16:3e:a8:45:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3b43a9c7-85e7-4558-bd2f-e4712882021e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=815f5388-ae4c-4748-ae1e-a35179c687ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:23:53 compute-0 NetworkManager[50376]: <info>  [1765614233.7019] device (tap815f5388-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:23:53 compute-0 NetworkManager[50376]: <info>  [1765614233.7033] device (tap815f5388-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.709 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 ovn_controller[148476]: 2025-12-13T08:23:53Z|00276|binding|INFO|Setting lport 815f5388-ae4c-4748-ae1e-a35179c687ad ovn-installed in OVS
Dec 13 08:23:53 compute-0 ovn_controller[148476]: 2025-12-13T08:23:53Z|00277|binding|INFO|Setting lport 815f5388-ae4c-4748-ae1e-a35179c687ad up in Southbound
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.712 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.787 248514 DEBUG nova.virt.libvirt.driver [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.787 248514 DEBUG nova.virt.libvirt.driver [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.787 248514 DEBUG nova.virt.libvirt.driver [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:37:48:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.788 248514 DEBUG nova.virt.libvirt.driver [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:a8:45:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.818 248514 DEBUG nova.virt.libvirt.guest [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:23:53 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   <nova:name>tempest-tempest.common.compute-instance-2036693049</nova:name>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:23:53</nova:creationTime>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:23:53 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:23:53 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:23:53 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:23:53 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:23:53 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:23:53 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:23:53 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:23:53 compute-0 nova_compute[248510]:     <nova:port uuid="bc4158d8-4963-4009-a434-0a0106941c9d">
Dec 13 08:23:53 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:23:53 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:23:53 compute-0 nova_compute[248510]:     <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
Dec 13 08:23:53 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:23:53 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:23:53 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:23:53 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:23:53 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.841 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614233.8403728, ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.841 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] VM Started (Lifecycle Event)
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.845 248514 DEBUG oslo_concurrency.lockutils [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-3b43a9c7-85e7-4558-bd2f-e4712882021e-815f5388-ae4c-4748-ae1e-a35179c687ad" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 13.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.872 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.878 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614233.8405333, ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:23:53 compute-0 nova_compute[248510]: 2025-12-13 08:23:53.878 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] VM Paused (Lifecycle Event)
Dec 13 08:23:53 compute-0 podman[286192]: 2025-12-13 08:23:53.98387562 +0000 UTC m=+0.061516399 container create 502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 08:23:54 compute-0 systemd[1]: Started libpod-conmon-502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef.scope.
Dec 13 08:23:54 compute-0 podman[286192]: 2025-12-13 08:23:53.951270855 +0000 UTC m=+0.028911654 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:23:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c545acf84645020cc8882e401df567b891cd88033c1b634e5d68fb95026bcdfb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:54 compute-0 nova_compute[248510]: 2025-12-13 08:23:54.111 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:23:54 compute-0 nova_compute[248510]: 2025-12-13 08:23:54.117 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:23:54 compute-0 nova_compute[248510]: 2025-12-13 08:23:54.143 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:23:54 compute-0 podman[286192]: 2025-12-13 08:23:54.191728658 +0000 UTC m=+0.269369457 container init 502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 08:23:54 compute-0 podman[286192]: 2025-12-13 08:23:54.199161201 +0000 UTC m=+0.276801980 container start 502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 08:23:54 compute-0 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [NOTICE]   (286217) : New worker (286236) forked
Dec 13 08:23:54 compute-0 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [NOTICE]   (286217) : Loading success.
Dec 13 08:23:54 compute-0 ceph-mon[76537]: pgmap v1761: 321 pgs: 321 active+clean; 400 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 5.0 MiB/s wr, 100 op/s
Dec 13 08:23:54 compute-0 sudo[286210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:23:54 compute-0 sudo[286210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:23:54 compute-0 sudo[286210]: pam_unix(sudo:session): session closed for user root
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.289 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 815f5388-ae4c-4748-ae1e-a35179c687ad in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.291 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.311 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[578507cb-408f-457e-9a8f-bed64d6db8f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:54 compute-0 sudo[286247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:23:54 compute-0 sudo[286247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.345 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca8b0d8-9962-43fd-8ee8-7ede6389655a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.349 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[76776379-22b4-4728-b5db-1b088bbfa916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.386 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3cedd981-cf92-49fb-bae4-a7a3b1e57f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.405 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7bfca803-b0f5-4fca-b05b-1f6aff56632e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 19285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286277, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.423 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[03570c02-5af0-441c-9338-ea2e8f92ef7f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675811, 'tstamp': 675811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286278, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675814, 'tstamp': 675814}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286278, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.425 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:54 compute-0 nova_compute[248510]: 2025-12-13 08:23:54.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.428 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.429 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.429 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.429 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:54 compute-0 nova_compute[248510]: 2025-12-13 08:23:54.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:54 compute-0 ovn_controller[148476]: 2025-12-13T08:23:54Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:45:d8 10.100.0.9
Dec 13 08:23:54 compute-0 ovn_controller[148476]: 2025-12-13T08:23:54Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:45:d8 10.100.0.9
Dec 13 08:23:54 compute-0 sudo[286247]: pam_unix(sudo:session): session closed for user root
Dec 13 08:23:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:23:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:23:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:23:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:23:55 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:23:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:23:55 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:23:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:23:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:23:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:23:55 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:23:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:23:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:23:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 568 KiB/s rd, 4.0 MiB/s wr, 124 op/s
Dec 13 08:23:55 compute-0 sudo[286310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:23:55 compute-0 sudo[286310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:23:55 compute-0 sudo[286310]: pam_unix(sudo:session): session closed for user root
Dec 13 08:23:55 compute-0 sudo[286335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:23:55 compute-0 sudo[286335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:23:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:23:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:23:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:23:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:23:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:23:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:23:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:55.405 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:55.406 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:55.408 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:55 compute-0 podman[286372]: 2025-12-13 08:23:55.481905172 +0000 UTC m=+0.050674962 container create 1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sanderson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 08:23:55 compute-0 podman[286372]: 2025-12-13 08:23:55.461854177 +0000 UTC m=+0.030623967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:23:55 compute-0 systemd[1]: Started libpod-conmon-1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e.scope.
Dec 13 08:23:55 compute-0 nova_compute[248510]: 2025-12-13 08:23:55.650 248514 DEBUG nova.compute.manager [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-changed-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:55 compute-0 nova_compute[248510]: 2025-12-13 08:23:55.650 248514 DEBUG nova.compute.manager [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing instance network info cache due to event network-changed-815f5388-ae4c-4748-ae1e-a35179c687ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:23:55 compute-0 nova_compute[248510]: 2025-12-13 08:23:55.651 248514 DEBUG oslo_concurrency.lockutils [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:55 compute-0 nova_compute[248510]: 2025-12-13 08:23:55.651 248514 DEBUG oslo_concurrency.lockutils [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:55 compute-0 nova_compute[248510]: 2025-12-13 08:23:55.651 248514 DEBUG nova.network.neutron [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing network info cache for port 815f5388-ae4c-4748-ae1e-a35179c687ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:23:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:23:55 compute-0 nova_compute[248510]: 2025-12-13 08:23:55.801 248514 DEBUG nova.compute.manager [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:55 compute-0 nova_compute[248510]: 2025-12-13 08:23:55.801 248514 DEBUG nova.compute.manager [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing instance network info cache due to event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:23:55 compute-0 nova_compute[248510]: 2025-12-13 08:23:55.802 248514 DEBUG oslo_concurrency.lockutils [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:55 compute-0 nova_compute[248510]: 2025-12-13 08:23:55.802 248514 DEBUG oslo_concurrency.lockutils [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:55 compute-0 nova_compute[248510]: 2025-12-13 08:23:55.802 248514 DEBUG nova.network.neutron [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:23:55 compute-0 podman[286372]: 2025-12-13 08:23:55.859785065 +0000 UTC m=+0.428554875 container init 1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:23:55 compute-0 podman[286372]: 2025-12-13 08:23:55.868824808 +0000 UTC m=+0.437594588 container start 1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 08:23:55 compute-0 charming_sanderson[286388]: 167 167
Dec 13 08:23:55 compute-0 systemd[1]: libpod-1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e.scope: Deactivated successfully.
Dec 13 08:23:55 compute-0 conmon[286388]: conmon 1cf9167ad3136774cd43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e.scope/container/memory.events
Dec 13 08:23:56 compute-0 podman[286372]: 2025-12-13 08:23:56.021789793 +0000 UTC m=+0.590559603 container attach 1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 13 08:23:56 compute-0 podman[286372]: 2025-12-13 08:23:56.023265939 +0000 UTC m=+0.592035749 container died 1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:23:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3d30f0a0d53418b03052fcd578ff9a7c909308de84dcea6419deb60e029f98b-merged.mount: Deactivated successfully.
Dec 13 08:23:56 compute-0 podman[286372]: 2025-12-13 08:23:56.074816381 +0000 UTC m=+0.643586171 container remove 1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sanderson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:23:56 compute-0 systemd[1]: libpod-conmon-1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e.scope: Deactivated successfully.
Dec 13 08:23:56 compute-0 ceph-mon[76537]: pgmap v1762: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 568 KiB/s rd, 4.0 MiB/s wr, 124 op/s
Dec 13 08:23:56 compute-0 podman[286414]: 2025-12-13 08:23:56.326405999 +0000 UTC m=+0.048589450 container create af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mirzakhani, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 08:23:56 compute-0 systemd[1]: Started libpod-conmon-af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717.scope.
Dec 13 08:23:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:23:56 compute-0 podman[286414]: 2025-12-13 08:23:56.30659397 +0000 UTC m=+0.028777441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895ef3b98cef28d692b7ef4a8ecf0d6054b4378400037e22e98978eb84ef9781/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895ef3b98cef28d692b7ef4a8ecf0d6054b4378400037e22e98978eb84ef9781/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895ef3b98cef28d692b7ef4a8ecf0d6054b4378400037e22e98978eb84ef9781/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895ef3b98cef28d692b7ef4a8ecf0d6054b4378400037e22e98978eb84ef9781/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895ef3b98cef28d692b7ef4a8ecf0d6054b4378400037e22e98978eb84ef9781/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:56 compute-0 podman[286414]: 2025-12-13 08:23:56.424755435 +0000 UTC m=+0.146938906 container init af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 08:23:56 compute-0 podman[286414]: 2025-12-13 08:23:56.437954761 +0000 UTC m=+0.160138212 container start af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 08:23:56 compute-0 podman[286414]: 2025-12-13 08:23:56.442341749 +0000 UTC m=+0.164525380 container attach af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:23:56 compute-0 frosty_mirzakhani[286431]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:23:56 compute-0 frosty_mirzakhani[286431]: --> All data devices are unavailable
Dec 13 08:23:56 compute-0 systemd[1]: libpod-af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717.scope: Deactivated successfully.
Dec 13 08:23:56 compute-0 podman[286414]: 2025-12-13 08:23:56.979533593 +0000 UTC m=+0.701717054 container died af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 08:23:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-895ef3b98cef28d692b7ef4a8ecf0d6054b4378400037e22e98978eb84ef9781-merged.mount: Deactivated successfully.
Dec 13 08:23:57 compute-0 podman[286414]: 2025-12-13 08:23:57.035816072 +0000 UTC m=+0.757999523 container remove af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 08:23:57 compute-0 systemd[1]: libpod-conmon-af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717.scope: Deactivated successfully.
Dec 13 08:23:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Dec 13 08:23:57 compute-0 sudo[286335]: pam_unix(sudo:session): session closed for user root
Dec 13 08:23:57 compute-0 sudo[286463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:23:57 compute-0 sudo[286463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:23:57 compute-0 sudo[286463]: pam_unix(sudo:session): session closed for user root
Dec 13 08:23:57 compute-0 sudo[286488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:23:57 compute-0 sudo[286488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:23:57 compute-0 podman[286525]: 2025-12-13 08:23:57.50791205 +0000 UTC m=+0.047353459 container create ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mclean, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 08:23:57 compute-0 systemd[1]: Started libpod-conmon-ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3.scope.
Dec 13 08:23:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:23:57 compute-0 podman[286525]: 2025-12-13 08:23:57.48888407 +0000 UTC m=+0.028325499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:23:57 compute-0 podman[286525]: 2025-12-13 08:23:57.598253809 +0000 UTC m=+0.137695238 container init ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:23:57 compute-0 podman[286525]: 2025-12-13 08:23:57.607431775 +0000 UTC m=+0.146873214 container start ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 08:23:57 compute-0 podman[286525]: 2025-12-13 08:23:57.611941477 +0000 UTC m=+0.151383006 container attach ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mclean, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:23:57 compute-0 brave_mclean[286542]: 167 167
Dec 13 08:23:57 compute-0 systemd[1]: libpod-ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3.scope: Deactivated successfully.
Dec 13 08:23:57 compute-0 podman[286525]: 2025-12-13 08:23:57.614299795 +0000 UTC m=+0.153741204 container died ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mclean, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:23:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0458e1d31ec1397e896534b1d9b8699d5aefc5606fb58db25b19a86f08b9765-merged.mount: Deactivated successfully.
Dec 13 08:23:57 compute-0 podman[286525]: 2025-12-13 08:23:57.658458345 +0000 UTC m=+0.197899754 container remove ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 08:23:57 compute-0 systemd[1]: libpod-conmon-ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3.scope: Deactivated successfully.
Dec 13 08:23:57 compute-0 podman[286564]: 2025-12-13 08:23:57.876879214 +0000 UTC m=+0.044737885 container create 32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:23:57 compute-0 systemd[1]: Started libpod-conmon-32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368.scope.
Dec 13 08:23:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b923ce63de443480ad6a6428ddcecfac91a9cb2a1135d6d6c5c03f911e3848/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b923ce63de443480ad6a6428ddcecfac91a9cb2a1135d6d6c5c03f911e3848/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b923ce63de443480ad6a6428ddcecfac91a9cb2a1135d6d6c5c03f911e3848/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b923ce63de443480ad6a6428ddcecfac91a9cb2a1135d6d6c5c03f911e3848/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:57 compute-0 podman[286564]: 2025-12-13 08:23:57.858705195 +0000 UTC m=+0.026563886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:23:57 compute-0 podman[286564]: 2025-12-13 08:23:57.961938983 +0000 UTC m=+0.129797654 container init 32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:23:57 compute-0 podman[286564]: 2025-12-13 08:23:57.967865089 +0000 UTC m=+0.135723760 container start 32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:23:57 compute-0 podman[286564]: 2025-12-13 08:23:57.97157856 +0000 UTC m=+0.139437231 container attach 32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.093 248514 DEBUG nova.network.neutron [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updated VIF entry in instance network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.095 248514 DEBUG nova.network.neutron [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.134 248514 DEBUG nova.network.neutron [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updated VIF entry in instance network info cache for port 815f5388-ae4c-4748-ae1e-a35179c687ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.135 248514 DEBUG nova.network.neutron [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.148 248514 DEBUG oslo_concurrency.lockutils [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.156 248514 DEBUG oslo_concurrency.lockutils [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:58 compute-0 ceph-mon[76537]: pgmap v1763: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]: {
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:     "0": [
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:         {
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "devices": [
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "/dev/loop3"
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             ],
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_name": "ceph_lv0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_size": "21470642176",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "name": "ceph_lv0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "tags": {
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.cluster_name": "ceph",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.crush_device_class": "",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.encrypted": "0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.objectstore": "bluestore",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.osd_id": "0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.type": "block",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.vdo": "0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.with_tpm": "0"
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             },
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "type": "block",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "vg_name": "ceph_vg0"
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:         }
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:     ],
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:     "1": [
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:         {
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "devices": [
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "/dev/loop4"
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             ],
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_name": "ceph_lv1",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_size": "21470642176",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "name": "ceph_lv1",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "tags": {
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.cluster_name": "ceph",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.crush_device_class": "",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.encrypted": "0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.objectstore": "bluestore",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.osd_id": "1",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.type": "block",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.vdo": "0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.with_tpm": "0"
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             },
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "type": "block",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "vg_name": "ceph_vg1"
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:         }
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:     ],
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:     "2": [
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:         {
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "devices": [
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "/dev/loop5"
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             ],
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_name": "ceph_lv2",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_size": "21470642176",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "name": "ceph_lv2",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "tags": {
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.cluster_name": "ceph",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.crush_device_class": "",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.encrypted": "0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.objectstore": "bluestore",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.osd_id": "2",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.type": "block",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.vdo": "0",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:                 "ceph.with_tpm": "0"
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             },
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "type": "block",
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:             "vg_name": "ceph_vg2"
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:         }
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]:     ]
Dec 13 08:23:58 compute-0 cool_elbakyan[286580]: }
Dec 13 08:23:58 compute-0 systemd[1]: libpod-32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368.scope: Deactivated successfully.
Dec 13 08:23:58 compute-0 podman[286564]: 2025-12-13 08:23:58.310954364 +0000 UTC m=+0.478813045 container died 32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9b923ce63de443480ad6a6428ddcecfac91a9cb2a1135d6d6c5c03f911e3848-merged.mount: Deactivated successfully.
Dec 13 08:23:58 compute-0 podman[286564]: 2025-12-13 08:23:58.366990517 +0000 UTC m=+0.534849188 container remove 32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 13 08:23:58 compute-0 systemd[1]: libpod-conmon-32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368.scope: Deactivated successfully.
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.378 248514 DEBUG oslo_concurrency.lockutils [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-3b43a9c7-85e7-4558-bd2f-e4712882021e-815f5388-ae4c-4748-ae1e-a35179c687ad" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.379 248514 DEBUG oslo_concurrency.lockutils [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-3b43a9c7-85e7-4558-bd2f-e4712882021e-815f5388-ae4c-4748-ae1e-a35179c687ad" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.415 248514 DEBUG nova.objects.instance [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid 3b43a9c7-85e7-4558-bd2f-e4712882021e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:23:58 compute-0 sudo[286488]: pam_unix(sudo:session): session closed for user root
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.444 248514 DEBUG nova.virt.libvirt.vif [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.444 248514 DEBUG nova.network.os_vif_util [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.445 248514 DEBUG nova.network.os_vif_util [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.450 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.452 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.455 248514 DEBUG nova.virt.libvirt.driver [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Attempting to detach device tap815f5388-ae from instance 3b43a9c7-85e7-4558-bd2f-e4712882021e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.455 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:a8:45:d8"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <target dev="tap815f5388-ae"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]: </interface>
Dec 13 08:23:58 compute-0 nova_compute[248510]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.457 248514 DEBUG nova.compute.manager [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.458 248514 DEBUG oslo_concurrency.lockutils [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.458 248514 DEBUG oslo_concurrency.lockutils [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.458 248514 DEBUG oslo_concurrency.lockutils [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.458 248514 DEBUG nova.compute.manager [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.458 248514 WARNING nova.compute.manager [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received unexpected event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.458 248514 DEBUG nova.compute.manager [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.459 248514 DEBUG oslo_concurrency.lockutils [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.459 248514 DEBUG oslo_concurrency.lockutils [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.459 248514 DEBUG oslo_concurrency.lockutils [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.459 248514 DEBUG nova.compute.manager [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.459 248514 WARNING nova.compute.manager [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received unexpected event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.463 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.466 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface>not found in domain: <domain type='kvm' id='35'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <name>instance-0000001e</name>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <uuid>3b43a9c7-85e7-4558-bd2f-e4712882021e</uuid>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:name>tempest-tempest.common.compute-instance-2036693049</nova:name>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:23:53</nova:creationTime>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:port uuid="bc4158d8-4963-4009-a434-0a0106941c9d">
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:23:58 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <resource>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </resource>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <system>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <entry name='serial'>3b43a9c7-85e7-4558-bd2f-e4712882021e</entry>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <entry name='uuid'>3b43a9c7-85e7-4558-bd2f-e4712882021e</entry>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </system>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <os>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </os>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <features>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </features>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/3b43a9c7-85e7-4558-bd2f-e4712882021e_disk' index='2'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       </source>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config' index='1'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       </source>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:37:48:0d'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target dev='tapbc4158d8-49'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:a8:45:d8'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target dev='tap815f5388-ae'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='net1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <source path='/dev/pts/0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/console.log' append='off'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       </target>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/0'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <source path='/dev/pts/0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/console.log' append='off'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </console>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </input>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </input>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </input>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </graphics>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <video>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </video>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c22,c900</label>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c22,c900</imagelabel>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:23:58 compute-0 nova_compute[248510]: </domain>
Dec 13 08:23:58 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.467 248514 INFO nova.virt.libvirt.driver [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap815f5388-ae from instance 3b43a9c7-85e7-4558-bd2f-e4712882021e from the persistent domain config.
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.467 248514 DEBUG nova.virt.libvirt.driver [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] (1/8): Attempting to detach device tap815f5388-ae with device alias net1 from instance 3b43a9c7-85e7-4558-bd2f-e4712882021e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.467 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:a8:45:d8"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <target dev="tap815f5388-ae"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]: </interface>
Dec 13 08:23:58 compute-0 nova_compute[248510]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 08:23:58 compute-0 sudo[286603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:23:58 compute-0 sudo[286603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:23:58 compute-0 sudo[286603]: pam_unix(sudo:session): session closed for user root
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.537 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.538 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing instance network info cache due to event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.538 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.538 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.539 248514 DEBUG nova.network.neutron [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:23:58 compute-0 sudo[286628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:23:58 compute-0 sudo[286628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:23:58 compute-0 kernel: tap815f5388-ae (unregistering): left promiscuous mode
Dec 13 08:23:58 compute-0 NetworkManager[50376]: <info>  [1765614238.5851] device (tap815f5388-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.593 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:58 compute-0 ovn_controller[148476]: 2025-12-13T08:23:58Z|00278|binding|INFO|Releasing lport 815f5388-ae4c-4748-ae1e-a35179c687ad from this chassis (sb_readonly=0)
Dec 13 08:23:58 compute-0 ovn_controller[148476]: 2025-12-13T08:23:58Z|00279|binding|INFO|Setting lport 815f5388-ae4c-4748-ae1e-a35179c687ad down in Southbound
Dec 13 08:23:58 compute-0 ovn_controller[148476]: 2025-12-13T08:23:58Z|00280|binding|INFO|Removing iface tap815f5388-ae ovn-installed in OVS
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.601 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:45:d8 10.100.0.9'], port_security=['fa:16:3e:a8:45:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3b43a9c7-85e7-4558-bd2f-e4712882021e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=815f5388-ae4c-4748-ae1e-a35179c687ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.603 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 815f5388-ae4c-4748-ae1e-a35179c687ad in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.604 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.616 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Received event <DeviceRemovedEvent: 1765614238.615932, 3b43a9c7-85e7-4558-bd2f-e4712882021e => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.618 248514 DEBUG nova.virt.libvirt.driver [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Start waiting for the detach event from libvirt for device tap815f5388-ae with device alias net1 for instance 3b43a9c7-85e7-4558-bd2f-e4712882021e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.619 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.623 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface>not found in domain: <domain type='kvm' id='35'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <name>instance-0000001e</name>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <uuid>3b43a9c7-85e7-4558-bd2f-e4712882021e</uuid>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:name>tempest-tempest.common.compute-instance-2036693049</nova:name>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:23:53</nova:creationTime>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:port uuid="bc4158d8-4963-4009-a434-0a0106941c9d">
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:23:58 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <resource>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </resource>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <system>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <entry name='serial'>3b43a9c7-85e7-4558-bd2f-e4712882021e</entry>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <entry name='uuid'>3b43a9c7-85e7-4558-bd2f-e4712882021e</entry>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </system>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <os>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </os>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <features>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </features>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/3b43a9c7-85e7-4558-bd2f-e4712882021e_disk' index='2'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       </source>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config' index='1'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       </source>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:37:48:0d'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target dev='tapbc4158d8-49'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <source path='/dev/pts/0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/console.log' append='off'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       </target>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/0'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <source path='/dev/pts/0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/console.log' append='off'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </console>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </input>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </input>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </input>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </graphics>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <video>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </video>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c22,c900</label>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c22,c900</imagelabel>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:23:58 compute-0 nova_compute[248510]: </domain>
Dec 13 08:23:58 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.623 248514 INFO nova.virt.libvirt.driver [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap815f5388-ae from instance 3b43a9c7-85e7-4558-bd2f-e4712882021e from the live domain config.
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.624 248514 DEBUG nova.virt.libvirt.vif [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.624 248514 DEBUG nova.network.os_vif_util [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.625 248514 DEBUG nova.network.os_vif_util [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.625 248514 DEBUG os_vif [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.627 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.627 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap815f5388-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.629 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.631 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c8978279-c36f-4e44-a925-c71c7ab51142]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.635 248514 INFO os_vif [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae')
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.636 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:name>tempest-tempest.common.compute-instance-2036693049</nova:name>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:23:58</nova:creationTime>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     <nova:port uuid="bc4158d8-4963-4009-a434-0a0106941c9d">
Dec 13 08:23:58 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:23:58 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:23:58 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:23:58 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:23:58 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.670 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e6950c41-904e-4268-b1c5-330fbf34ba17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.674 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[70a05c2a-642a-4610-99aa-f03a32f43758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.710 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ec32b7f6-64b2-45bd-800a-c0b749233e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.728 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[050eba57-0e7d-40e9-b55b-9dd969a45e52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 19285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286664, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.744 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8ff381-1c07-4bb0-8eae-7ad302a3f8df]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675811, 'tstamp': 675811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286665, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675814, 'tstamp': 675814}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286665, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.746 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.748 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:58 compute-0 nova_compute[248510]: 2025-12-13 08:23:58.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.752 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:23:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.752 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:23:58 compute-0 podman[286678]: 2025-12-13 08:23:58.892495513 +0000 UTC m=+0.046453357 container create f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:23:58 compute-0 systemd[1]: Started libpod-conmon-f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584.scope.
Dec 13 08:23:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:23:58 compute-0 podman[286678]: 2025-12-13 08:23:58.871964446 +0000 UTC m=+0.025922300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:23:58 compute-0 podman[286678]: 2025-12-13 08:23:58.977207153 +0000 UTC m=+0.131164997 container init f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bardeen, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:23:58 compute-0 podman[286678]: 2025-12-13 08:23:58.986099392 +0000 UTC m=+0.140057216 container start f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 08:23:58 compute-0 podman[286678]: 2025-12-13 08:23:58.990232894 +0000 UTC m=+0.144190738 container attach f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:23:58 compute-0 vigorous_bardeen[286694]: 167 167
Dec 13 08:23:58 compute-0 systemd[1]: libpod-f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584.scope: Deactivated successfully.
Dec 13 08:23:58 compute-0 podman[286678]: 2025-12-13 08:23:58.993110585 +0000 UTC m=+0.147068409 container died f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 08:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-be55dcd73a736c02796cab9f53e50bdfdd46c2db4cf55b5d3d35a52093905b8e-merged.mount: Deactivated successfully.
Dec 13 08:23:59 compute-0 podman[286678]: 2025-12-13 08:23:59.047951879 +0000 UTC m=+0.201909703 container remove f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 08:23:59 compute-0 systemd[1]: libpod-conmon-f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584.scope: Deactivated successfully.
Dec 13 08:23:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 81 op/s
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.088 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.089 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.116 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.212 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.213 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.221 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.222 248514 INFO nova.compute.claims [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:23:59 compute-0 podman[286718]: 2025-12-13 08:23:59.364408517 +0000 UTC m=+0.117885760 container create 90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:23:59 compute-0 podman[286718]: 2025-12-13 08:23:59.272866718 +0000 UTC m=+0.026343971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:23:59 compute-0 systemd[1]: Started libpod-conmon-90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce.scope.
Dec 13 08:23:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa5cf08cd9a68e9caacabf7a3ce09695e916846ea2cd012d72e45a28cbd6015/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa5cf08cd9a68e9caacabf7a3ce09695e916846ea2cd012d72e45a28cbd6015/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa5cf08cd9a68e9caacabf7a3ce09695e916846ea2cd012d72e45a28cbd6015/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa5cf08cd9a68e9caacabf7a3ce09695e916846ea2cd012d72e45a28cbd6015/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:23:59 compute-0 podman[286718]: 2025-12-13 08:23:59.495288086 +0000 UTC m=+0.248765339 container init 90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.496 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:23:59 compute-0 podman[286718]: 2025-12-13 08:23:59.505712053 +0000 UTC m=+0.259189286 container start 90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.610 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:23:59 compute-0 podman[286718]: 2025-12-13 08:23:59.811367135 +0000 UTC m=+0.564844448 container attach 90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.842 248514 DEBUG nova.network.neutron [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updated VIF entry in instance network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.843 248514 DEBUG nova.network.neutron [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.866 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.867 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-changed-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.868 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Refreshing instance network info cache due to event network-changed-d462a8a0-34ee-4682-ac5c-f7632b5ad39c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.868 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.868 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.869 248514 DEBUG nova.network.neutron [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Refreshing network info cache for port d462a8a0-34ee-4682-ac5c-f7632b5ad39c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.948 248514 DEBUG oslo_concurrency.lockutils [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.948 248514 DEBUG oslo_concurrency.lockutils [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:23:59 compute-0 nova_compute[248510]: 2025-12-13 08:23:59.949 248514 DEBUG nova.network.neutron [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:24:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:24:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:24:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3611451730' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.095 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.104 248514 DEBUG nova.compute.provider_tree [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.232 248514 DEBUG nova.scheduler.client.report [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:24:00 compute-0 lvm[286831]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:24:00 compute-0 lvm[286831]: VG ceph_vg0 finished
Dec 13 08:24:00 compute-0 lvm[286833]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:24:00 compute-0 lvm[286833]: VG ceph_vg1 finished
Dec 13 08:24:00 compute-0 lvm[286835]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:24:00 compute-0 lvm[286835]: VG ceph_vg2 finished
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.294 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.295 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.353 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.353 248514 DEBUG nova.network.neutron [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:24:00 compute-0 nostalgic_visvesvaraya[286734]: {}
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.391 248514 INFO nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:24:00 compute-0 systemd[1]: libpod-90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce.scope: Deactivated successfully.
Dec 13 08:24:00 compute-0 systemd[1]: libpod-90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce.scope: Consumed 1.504s CPU time.
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.418 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:24:00 compute-0 podman[286838]: 2025-12-13 08:24:00.456231855 +0000 UTC m=+0.024580987 container died 90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:24:00 compute-0 ceph-mon[76537]: pgmap v1764: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 81 op/s
Dec 13 08:24:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3611451730' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.519 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.521 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.522 248514 INFO nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Creating image(s)
Dec 13 08:24:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fa5cf08cd9a68e9caacabf7a3ce09695e916846ea2cd012d72e45a28cbd6015-merged.mount: Deactivated successfully.
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.552 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.580 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.605 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.610 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:00 compute-0 podman[286838]: 2025-12-13 08:24:00.624384804 +0000 UTC m=+0.192733906 container remove 90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:24:00 compute-0 systemd[1]: libpod-conmon-90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce.scope: Deactivated successfully.
Dec 13 08:24:00 compute-0 sudo[286628]: pam_unix(sudo:session): session closed for user root
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.682 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.683 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.684 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.684 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.721 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:24:00 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:24:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.727 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:00 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:24:00 compute-0 nova_compute[248510]: 2025-12-13 08:24:00.802 248514 DEBUG nova.policy [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6827fc2174b74c2a92803d852e87c70a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8f78f312dfcc4df6ba40b7c8a4e1aa97', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:24:00 compute-0 sudo[286930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:24:00 compute-0 sudo[286930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:24:00 compute-0 sudo[286930]: pam_unix(sudo:session): session closed for user root
Dec 13 08:24:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 507 KiB/s wr, 59 op/s
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.124 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.179 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] resizing rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.261 248514 DEBUG nova.objects.instance [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lazy-loading 'migration_context' on Instance uuid b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.279 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.280 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Ensure instance console log exists: /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.281 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.281 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.281 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.428 248514 DEBUG nova.network.neutron [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updated VIF entry in instance network info cache for port d462a8a0-34ee-4682-ac5c-f7632b5ad39c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.429 248514 DEBUG nova.network.neutron [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updating instance_info_cache with network_info: [{"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.449 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.450 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.450 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.450 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.450 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.450 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Processing event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.451 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.451 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.451 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.451 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.451 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] No waiting events found dispatching network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.452 248514 WARNING nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received unexpected event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 for instance with vm_state building and task_state spawning.
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.452 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.458 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614241.4579537, ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.458 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] VM Resumed (Lifecycle Event)
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.460 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.463 248514 INFO nova.virt.libvirt.driver [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Instance spawned successfully.
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.464 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.479 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.486 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.491 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.491 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.492 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.492 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.492 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.493 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.521 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.554 248514 INFO nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Took 23.43 seconds to spawn the instance on the hypervisor.
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.555 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.633 248514 INFO nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Took 25.04 seconds to build instance.
Dec 13 08:24:01 compute-0 nova_compute[248510]: 2025-12-13 08:24:01.652 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:01 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:24:01 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.052 248514 DEBUG nova.network.neutron [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Successfully created port: b494f789-c137-45c5-9750-2bf0b43681ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.058 248514 DEBUG nova.compute.manager [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-unplugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.059 248514 DEBUG oslo_concurrency.lockutils [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.059 248514 DEBUG oslo_concurrency.lockutils [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.059 248514 DEBUG oslo_concurrency.lockutils [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.060 248514 DEBUG nova.compute.manager [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-unplugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.060 248514 WARNING nova.compute.manager [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received unexpected event network-vif-unplugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.060 248514 DEBUG nova.compute.manager [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.061 248514 DEBUG oslo_concurrency.lockutils [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.061 248514 DEBUG oslo_concurrency.lockutils [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.061 248514 DEBUG oslo_concurrency.lockutils [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.061 248514 DEBUG nova.compute.manager [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.062 248514 WARNING nova.compute.manager [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received unexpected event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.135 248514 DEBUG nova.compute.manager [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-changed-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.135 248514 DEBUG nova.compute.manager [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Refreshing instance network info cache due to event network-changed-d462a8a0-34ee-4682-ac5c-f7632b5ad39c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.136 248514 DEBUG oslo_concurrency.lockutils [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.136 248514 DEBUG oslo_concurrency.lockutils [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.136 248514 DEBUG nova.network.neutron [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Refreshing network info cache for port d462a8a0-34ee-4682-ac5c-f7632b5ad39c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.167 248514 INFO nova.network.neutron [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Port 815f5388-ae4c-4748-ae1e-a35179c687ad from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.169 248514 DEBUG nova.network.neutron [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.205 248514 DEBUG oslo_concurrency.lockutils [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:02 compute-0 nova_compute[248510]: 2025-12-13 08:24:02.225 248514 DEBUG oslo_concurrency.lockutils [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-3b43a9c7-85e7-4558-bd2f-e4712882021e-815f5388-ae4c-4748-ae1e-a35179c687ad" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:02 compute-0 ceph-mon[76537]: pgmap v1765: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 507 KiB/s wr, 59 op/s
Dec 13 08:24:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 415 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 879 KiB/s wr, 113 op/s
Dec 13 08:24:03 compute-0 nova_compute[248510]: 2025-12-13 08:24:03.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.281 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.282 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.282 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.282 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.282 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.283 248514 INFO nova.compute.manager [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Terminating instance
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.284 248514 DEBUG nova.compute.manager [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:24:04 compute-0 kernel: tapd462a8a0-34 (unregistering): left promiscuous mode
Dec 13 08:24:04 compute-0 NetworkManager[50376]: <info>  [1765614244.5142] device (tapd462a8a0-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.527 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:04 compute-0 ovn_controller[148476]: 2025-12-13T08:24:04Z|00281|binding|INFO|Releasing lport d462a8a0-34ee-4682-ac5c-f7632b5ad39c from this chassis (sb_readonly=0)
Dec 13 08:24:04 compute-0 ovn_controller[148476]: 2025-12-13T08:24:04Z|00282|binding|INFO|Setting lport d462a8a0-34ee-4682-ac5c-f7632b5ad39c down in Southbound
Dec 13 08:24:04 compute-0 ovn_controller[148476]: 2025-12-13T08:24:04Z|00283|binding|INFO|Removing iface tapd462a8a0-34 ovn-installed in OVS
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.529 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.550 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:23:d9 10.100.0.4'], port_security=['fa:16:3e:9f:23:d9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4403714d-3521-4409-9c3b-59d655fc999d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06fbab937d6444558229b2351632e711', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fa325bd2-c57a-49fb-8dd9-f45405c95b4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f47224ac-d05f-46db-ac07-cb476b38b044, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d462a8a0-34ee-4682-ac5c-f7632b5ad39c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.553 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.554 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d462a8a0-34ee-4682-ac5c-f7632b5ad39c in datapath 62193ff6-aaa1-401a-b1e0-512e67752a9e unbound from our chassis
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.556 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62193ff6-aaa1-401a-b1e0-512e67752a9e
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.576 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff39a6b-a793-491f-888d-0850a1835b9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:04 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000021.scope: Deactivated successfully.
Dec 13 08:24:04 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000021.scope: Consumed 13.435s CPU time.
Dec 13 08:24:04 compute-0 systemd-machined[210538]: Machine qemu-38-instance-00000021 terminated.
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.608 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e1edcc71-ef76-4dc8-a3ca-2cfbef8ebb3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.613 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[17edef4a-a1f1-4cb2-9236-503f045fc41d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.655 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0ec32e-c9f4-4082-8bd5-4a877ebbdfaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7ad46f-5730-4ae6-a3ac-7c70d79beb20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62193ff6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:33:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676001, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287057, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.694 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b201e56b-e91f-4c75-b621-a41cd6cf48d5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62193ff6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676016, 'tstamp': 676016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287058, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62193ff6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676019, 'tstamp': 676019}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287058, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.696 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62193ff6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.704 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62193ff6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.704 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.705 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62193ff6-a0, col_values=(('external_ids', {'iface-id': '67d122d2-811d-4aa8-bdde-aafc5e939b2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.705 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.727 248514 INFO nova.virt.libvirt.driver [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Instance destroyed successfully.
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.728 248514 DEBUG nova.objects.instance [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lazy-loading 'resources' on Instance uuid 4403714d-3521-4409-9c3b-59d655fc999d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.741 248514 DEBUG nova.virt.libvirt.vif [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1547732604',display_name='tempest-FloatingIPsAssociationTestJSON-server-1547732604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1547732604',id=33,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='06fbab937d6444558229b2351632e711',ramdisk_id='',reservation_id='r-ath0htte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-609563086',owner_user_name='tempest-FloatingIPsAssociationTestJSON-609563086-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:33Z,user_data=None,user_id='79d4b34b8bd3452cb5b8c0954166f397',uuid=4403714d-3521-4409-9c3b-59d655fc999d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.742 248514 DEBUG nova.network.os_vif_util [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converting VIF {"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.743 248514 DEBUG nova.network.os_vif_util [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.743 248514 DEBUG os_vif [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.744 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.745 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd462a8a0-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.746 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:04 compute-0 nova_compute[248510]: 2025-12-13 08:24:04.752 248514 INFO os_vif [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34')
Dec 13 08:24:04 compute-0 ceph-mon[76537]: pgmap v1766: 321 pgs: 321 active+clean; 415 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 879 KiB/s wr, 113 op/s
Dec 13 08:24:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:24:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 451 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Dec 13 08:24:05 compute-0 nova_compute[248510]: 2025-12-13 08:24:05.577 248514 DEBUG nova.network.neutron [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Successfully updated port: b494f789-c137-45c5-9750-2bf0b43681ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:24:05 compute-0 nova_compute[248510]: 2025-12-13 08:24:05.637 248514 DEBUG nova.network.neutron [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updated VIF entry in instance network info cache for port d462a8a0-34ee-4682-ac5c-f7632b5ad39c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:24:05 compute-0 nova_compute[248510]: 2025-12-13 08:24:05.637 248514 DEBUG nova.network.neutron [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updating instance_info_cache with network_info: [{"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:05 compute-0 nova_compute[248510]: 2025-12-13 08:24:05.804 248514 DEBUG oslo_concurrency.lockutils [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:05 compute-0 nova_compute[248510]: 2025-12-13 08:24:05.809 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:05 compute-0 nova_compute[248510]: 2025-12-13 08:24:05.809 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquired lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:05 compute-0 nova_compute[248510]: 2025-12-13 08:24:05.809 248514 DEBUG nova.network.neutron [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:24:05 compute-0 nova_compute[248510]: 2025-12-13 08:24:05.814 248514 DEBUG nova.compute.manager [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-changed-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:05 compute-0 nova_compute[248510]: 2025-12-13 08:24:05.815 248514 DEBUG nova.compute.manager [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Refreshing instance network info cache due to event network-changed-b494f789-c137-45c5-9750-2bf0b43681ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:24:05 compute-0 nova_compute[248510]: 2025-12-13 08:24:05.815 248514 DEBUG oslo_concurrency.lockutils [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:06 compute-0 nova_compute[248510]: 2025-12-13 08:24:06.089 248514 INFO nova.virt.libvirt.driver [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Deleting instance files /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d_del
Dec 13 08:24:06 compute-0 nova_compute[248510]: 2025-12-13 08:24:06.089 248514 INFO nova.virt.libvirt.driver [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Deletion of /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d_del complete
Dec 13 08:24:06 compute-0 nova_compute[248510]: 2025-12-13 08:24:06.181 248514 INFO nova.compute.manager [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Took 1.90 seconds to destroy the instance on the hypervisor.
Dec 13 08:24:06 compute-0 nova_compute[248510]: 2025-12-13 08:24:06.182 248514 DEBUG oslo.service.loopingcall [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:24:06 compute-0 nova_compute[248510]: 2025-12-13 08:24:06.182 248514 DEBUG nova.compute.manager [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:24:06 compute-0 nova_compute[248510]: 2025-12-13 08:24:06.182 248514 DEBUG nova.network.neutron [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:24:06 compute-0 nova_compute[248510]: 2025-12-13 08:24:06.603 248514 DEBUG nova.network.neutron [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:24:07 compute-0 ceph-mon[76537]: pgmap v1767: 321 pgs: 321 active+clean; 451 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Dec 13 08:24:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 451 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Dec 13 08:24:07 compute-0 nova_compute[248510]: 2025-12-13 08:24:07.491 248514 DEBUG nova.compute.manager [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:07 compute-0 nova_compute[248510]: 2025-12-13 08:24:07.492 248514 DEBUG nova.compute.manager [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing instance network info cache due to event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:24:07 compute-0 nova_compute[248510]: 2025-12-13 08:24:07.492 248514 DEBUG oslo_concurrency.lockutils [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:07 compute-0 nova_compute[248510]: 2025-12-13 08:24:07.492 248514 DEBUG oslo_concurrency.lockutils [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:07 compute-0 nova_compute[248510]: 2025-12-13 08:24:07.493 248514 DEBUG nova.network.neutron [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:24:07 compute-0 nova_compute[248510]: 2025-12-13 08:24:07.750 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:07.750 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:24:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:07.752 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:24:07 compute-0 nova_compute[248510]: 2025-12-13 08:24:07.905 248514 DEBUG nova.network.neutron [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:07 compute-0 nova_compute[248510]: 2025-12-13 08:24:07.928 248514 INFO nova.compute.manager [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Took 1.75 seconds to deallocate network for instance.
Dec 13 08:24:07 compute-0 nova_compute[248510]: 2025-12-13 08:24:07.972 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:07 compute-0 nova_compute[248510]: 2025-12-13 08:24:07.973 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:08 compute-0 ceph-mon[76537]: pgmap v1768: 321 pgs: 321 active+clean; 451 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.232 248514 DEBUG oslo_concurrency.processutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.416 248514 DEBUG nova.network.neutron [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Updating instance_info_cache with network_info: [{"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.447 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Releasing lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.449 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance network_info: |[{"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.450 248514 DEBUG oslo_concurrency.lockutils [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.451 248514 DEBUG nova.network.neutron [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Refreshing network info cache for port b494f789-c137-45c5-9750-2bf0b43681ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.456 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Start _get_guest_xml network_info=[{"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.465 248514 WARNING nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.474 248514 DEBUG nova.virt.libvirt.host [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.475 248514 DEBUG nova.virt.libvirt.host [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.479 248514 DEBUG nova.virt.libvirt.host [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.480 248514 DEBUG nova.virt.libvirt.host [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.480 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.481 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.481 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.482 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.482 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.483 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.483 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.483 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.484 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.484 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.484 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.485 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.489 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.554 248514 DEBUG oslo_concurrency.lockutils [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-d503913e-a05e-47d4-9366-db4426b9aac1-815f5388-ae4c-4748-ae1e-a35179c687ad" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.555 248514 DEBUG oslo_concurrency.lockutils [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-d503913e-a05e-47d4-9366-db4426b9aac1-815f5388-ae4c-4748-ae1e-a35179c687ad" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.556 248514 DEBUG nova.objects.instance [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid d503913e-a05e-47d4-9366-db4426b9aac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:24:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3465593196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.801 248514 DEBUG oslo_concurrency.processutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.808 248514 DEBUG nova.compute.provider_tree [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.837 248514 DEBUG nova.scheduler.client.report [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.865 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.892 248514 INFO nova.scheduler.client.report [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Deleted allocations for instance 4403714d-3521-4409-9c3b-59d655fc999d
Dec 13 08:24:08 compute-0 nova_compute[248510]: 2025-12-13 08:24:08.949 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:24:09 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/744464859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.078 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 372 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.106 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.111 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3465593196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/744464859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:24:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:24:09
Dec 13 08:24:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:24:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:24:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['backups', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'vms', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.meta']
Dec 13 08:24:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.419 248514 DEBUG nova.network.neutron [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updated VIF entry in instance network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.420 248514 DEBUG nova.network.neutron [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.448 248514 DEBUG oslo_concurrency.lockutils [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.498 248514 DEBUG nova.objects.instance [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_requests' on Instance uuid d503913e-a05e-47d4-9366-db4426b9aac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.561 248514 DEBUG nova.network.neutron [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.613 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:24:09 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3603289413' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.686 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.688 248514 DEBUG nova.virt.libvirt.vif [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-370636454',display_name='tempest-InstanceActionsTestJSON-server-370636454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-370636454',id=35,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f78f312dfcc4df6ba40b7c8a4e1aa97',ramdisk_id='',reservation_id='r-b8os57ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1859862292',owner_user_name='tempest-InstanceActionsTe
stJSON-1859862292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:24:00Z,user_data=None,user_id='6827fc2174b74c2a92803d852e87c70a',uuid=b3086cd3-fbaf-4f8e-bca2-162a0582d3a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.688 248514 DEBUG nova.network.os_vif_util [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converting VIF {"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.689 248514 DEBUG nova.network.os_vif_util [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.690 248514 DEBUG nova.objects.instance [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.727 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:24:09 compute-0 nova_compute[248510]:   <uuid>b3086cd3-fbaf-4f8e-bca2-162a0582d3a4</uuid>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   <name>instance-00000023</name>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <nova:name>tempest-InstanceActionsTestJSON-server-370636454</nova:name>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:24:08</nova:creationTime>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:24:09 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:24:09 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:24:09 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:24:09 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:24:09 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:24:09 compute-0 nova_compute[248510]:         <nova:user uuid="6827fc2174b74c2a92803d852e87c70a">tempest-InstanceActionsTestJSON-1859862292-project-member</nova:user>
Dec 13 08:24:09 compute-0 nova_compute[248510]:         <nova:project uuid="8f78f312dfcc4df6ba40b7c8a4e1aa97">tempest-InstanceActionsTestJSON-1859862292</nova:project>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:24:09 compute-0 nova_compute[248510]:         <nova:port uuid="b494f789-c137-45c5-9750-2bf0b43681ad">
Dec 13 08:24:09 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <system>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <entry name="serial">b3086cd3-fbaf-4f8e-bca2-162a0582d3a4</entry>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <entry name="uuid">b3086cd3-fbaf-4f8e-bca2-162a0582d3a4</entry>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     </system>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   <os>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   </os>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   <features>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   </features>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk">
Dec 13 08:24:09 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       </source>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:24:09 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config">
Dec 13 08:24:09 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       </source>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:24:09 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:26:55:3f"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <target dev="tapb494f789-c1"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/console.log" append="off"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <video>
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     </video>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:24:09 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:24:09 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:24:09 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:24:09 compute-0 nova_compute[248510]: </domain>
Dec 13 08:24:09 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.735 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Preparing to wait for external event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.735 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.736 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.736 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.737 248514 DEBUG nova.virt.libvirt.vif [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-370636454',display_name='tempest-InstanceActionsTestJSON-server-370636454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-370636454',id=35,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f78f312dfcc4df6ba40b7c8a4e1aa97',ramdisk_id='',reservation_id='r-b8os57ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1859862292',owner_user_name='tempest-InstanceActionsTestJSON-1859862292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:24:00Z,user_data=None,user_id='6827fc2174b74c2a92803d852e87c70a',uuid=b3086cd3-fbaf-4f8e-bca2-162a0582d3a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.737 248514 DEBUG nova.network.os_vif_util [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converting VIF {"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.737 248514 DEBUG nova.network.os_vif_util [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.738 248514 DEBUG os_vif [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.739 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.739 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.743 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb494f789-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.743 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb494f789-c1, col_values=(('external_ids', {'iface-id': 'b494f789-c137-45c5-9750-2bf0b43681ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:55:3f', 'vm-uuid': 'b3086cd3-fbaf-4f8e-bca2-162a0582d3a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:09 compute-0 NetworkManager[50376]: <info>  [1765614249.7462] manager: (tapb494f789-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.747 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.751 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:09 compute-0 nova_compute[248510]: 2025-12-13 08:24:09.752 248514 INFO os_vif [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1')
Dec 13 08:24:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.053 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.055 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.055 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] No VIF found with MAC fa:16:3e:26:55:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.055 248514 INFO nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Using config drive
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.082 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:24:10 compute-0 ceph-mon[76537]: pgmap v1769: 321 pgs: 321 active+clean; 372 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Dec 13 08:24:10 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3603289413' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.383 248514 INFO nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Creating config drive at /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/disk.config
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.389 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8dnsgvwa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.429 248514 DEBUG nova.network.neutron [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Updated VIF entry in instance network info cache for port b494f789-c137-45c5-9750-2bf0b43681ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.430 248514 DEBUG nova.network.neutron [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Updating instance_info_cache with network_info: [{"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.434 248514 DEBUG nova.policy [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.532 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8dnsgvwa" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:24:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.571 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.576 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/disk.config b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.697 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/disk.config b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.698 248514 INFO nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Deleting local config drive /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/disk.config because it was imported into RBD.
Dec 13 08:24:10 compute-0 kernel: tapb494f789-c1: entered promiscuous mode
Dec 13 08:24:10 compute-0 NetworkManager[50376]: <info>  [1765614250.7529] manager: (tapb494f789-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Dec 13 08:24:10 compute-0 ovn_controller[148476]: 2025-12-13T08:24:10Z|00284|binding|INFO|Claiming lport b494f789-c137-45c5-9750-2bf0b43681ad for this chassis.
Dec 13 08:24:10 compute-0 ovn_controller[148476]: 2025-12-13T08:24:10Z|00285|binding|INFO|b494f789-c137-45c5-9750-2bf0b43681ad: Claiming fa:16:3e:26:55:3f 10.100.0.12
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.759 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.773 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:10 compute-0 ovn_controller[148476]: 2025-12-13T08:24:10Z|00286|binding|INFO|Setting lport b494f789-c137-45c5-9750-2bf0b43681ad ovn-installed in OVS
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:10 compute-0 systemd-machined[210538]: New machine qemu-40-instance-00000023.
Dec 13 08:24:10 compute-0 systemd-udevd[287248]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:24:10 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000023.
Dec 13 08:24:10 compute-0 NetworkManager[50376]: <info>  [1765614250.8130] device (tapb494f789-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:24:10 compute-0 NetworkManager[50376]: <info>  [1765614250.8137] device (tapb494f789-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:24:10 compute-0 nova_compute[248510]: 2025-12-13 08:24:10.834 248514 DEBUG oslo_concurrency.lockutils [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.849 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:55:3f 10.100.0.12'], port_security=['fa:16:3e:26:55:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b3086cd3-fbaf-4f8e-bca2-162a0582d3a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f78f312dfcc4df6ba40b7c8a4e1aa97', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9aebbe0-bcf2-4e40-aa62-10b8cea7c801', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c3ff6a3-0731-41dd-8d72-42e18a11ea75, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b494f789-c137-45c5-9750-2bf0b43681ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:24:10 compute-0 ovn_controller[148476]: 2025-12-13T08:24:10Z|00287|binding|INFO|Setting lport b494f789-c137-45c5-9750-2bf0b43681ad up in Southbound
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.851 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b494f789-c137-45c5-9750-2bf0b43681ad in datapath 0740d1ee-47e1-4bdf-bdc4-2dafff999f03 bound to our chassis
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.852 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0740d1ee-47e1-4bdf-bdc4-2dafff999f03
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.868 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf925a6-7f61-4333-8632-f570732c7272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.869 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0740d1ee-41 in ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.871 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0740d1ee-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.871 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f512038-daf6-4a78-8754-a1f1cfb2e7fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.872 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[df5298df-9a3d-4318-8402-6ce29d226f52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.890 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[2e07d5f5-4ee4-4fec-9ebe-126125a96477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.905 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb954c1e-0405-4080-ae6b-b064e8cdd4e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.936 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ef74279f-6020-4c44-8fac-451178b08769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:10 compute-0 NetworkManager[50376]: <info>  [1765614250.9458] manager: (tap0740d1ee-40): new Veth device (/org/freedesktop/NetworkManager/Devices/133)
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.944 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9d0ef7-5e3d-48f0-ad98-732760e55c8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.987 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e62ca239-913d-4e0a-af80-52fa2c489d39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.995 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[584b5b17-80ae-413d-b9f7-da4f258467b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:11 compute-0 NetworkManager[50376]: <info>  [1765614251.0328] device (tap0740d1ee-40): carrier: link connected
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.040 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe81026-989d-414d-9494-cd97d3681264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.064 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[671579bb-1f3c-444e-9cbe-ef44a4d081a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0740d1ee-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:40:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682824, 'reachable_time': 37010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287281, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.082 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2c288e77-14d1-47dc-820f-5a0bc9412a5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:4019'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 682824, 'tstamp': 682824}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287282, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 372 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.102 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cadb37bc-76a2-43e5-8856-f71dfd0deb5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0740d1ee-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:40:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682824, 'reachable_time': 37010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287283, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.148 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[18f9063a-364e-4d83-922f-b9e658e26b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.218 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[278d3ea9-141f-4c50-a2fc-2229e02f1b25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.219 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0740d1ee-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.219 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.220 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0740d1ee-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:11 compute-0 NetworkManager[50376]: <info>  [1765614251.2225] manager: (tap0740d1ee-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Dec 13 08:24:11 compute-0 kernel: tap0740d1ee-40: entered promiscuous mode
Dec 13 08:24:11 compute-0 nova_compute[248510]: 2025-12-13 08:24:11.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.225 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0740d1ee-40, col_values=(('external_ids', {'iface-id': '57853c24-d10c-4ddd-b435-f78af259fd27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:11 compute-0 ovn_controller[148476]: 2025-12-13T08:24:11Z|00288|binding|INFO|Releasing lport 57853c24-d10c-4ddd-b435-f78af259fd27 from this chassis (sb_readonly=0)
Dec 13 08:24:11 compute-0 nova_compute[248510]: 2025-12-13 08:24:11.245 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.249 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.250 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[51f87d70-174f-4ccd-8121-6101de71d1eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.251 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-0740d1ee-47e1-4bdf-bdc4-2dafff999f03
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.pid.haproxy
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 0740d1ee-47e1-4bdf-bdc4-2dafff999f03
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.252 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'env', 'PROCESS_TAG=haproxy-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:24:11 compute-0 nova_compute[248510]: 2025-12-13 08:24:11.425 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614251.4251044, b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:24:11 compute-0 nova_compute[248510]: 2025-12-13 08:24:11.426 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] VM Started (Lifecycle Event)
Dec 13 08:24:11 compute-0 nova_compute[248510]: 2025-12-13 08:24:11.479 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:11 compute-0 nova_compute[248510]: 2025-12-13 08:24:11.484 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614251.4266348, b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:24:11 compute-0 nova_compute[248510]: 2025-12-13 08:24:11.485 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] VM Paused (Lifecycle Event)
Dec 13 08:24:11 compute-0 nova_compute[248510]: 2025-12-13 08:24:11.514 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:11 compute-0 nova_compute[248510]: 2025-12-13 08:24:11.519 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:24:11 compute-0 nova_compute[248510]: 2025-12-13 08:24:11.611 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:24:11 compute-0 podman[287357]: 2025-12-13 08:24:11.643765681 +0000 UTC m=+0.048077957 container create 3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:24:11 compute-0 systemd[1]: Started libpod-conmon-3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9.scope.
Dec 13 08:24:11 compute-0 podman[287357]: 2025-12-13 08:24:11.616838587 +0000 UTC m=+0.021150883 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:24:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4393fad8ce5a73df0ceef6cfed59401542ad7871cb68553053253ed76ee8e4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:24:11 compute-0 podman[287357]: 2025-12-13 08:24:11.737427162 +0000 UTC m=+0.141739458 container init 3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 08:24:11 compute-0 podman[287357]: 2025-12-13 08:24:11.743738788 +0000 UTC m=+0.148051064 container start 3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:24:11 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [NOTICE]   (287376) : New worker (287378) forked
Dec 13 08:24:11 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [NOTICE]   (287376) : Loading success.
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.015 248514 DEBUG nova.compute.manager [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-vif-unplugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.016 248514 DEBUG oslo_concurrency.lockutils [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.016 248514 DEBUG oslo_concurrency.lockutils [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.016 248514 DEBUG oslo_concurrency.lockutils [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.016 248514 DEBUG nova.compute.manager [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] No waiting events found dispatching network-vif-unplugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.018 248514 WARNING nova.compute.manager [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received unexpected event network-vif-unplugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c for instance with vm_state deleted and task_state None.
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.018 248514 DEBUG nova.compute.manager [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.018 248514 DEBUG oslo_concurrency.lockutils [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.018 248514 DEBUG oslo_concurrency.lockutils [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.018 248514 DEBUG oslo_concurrency.lockutils [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.019 248514 DEBUG nova.compute.manager [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] No waiting events found dispatching network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.019 248514 WARNING nova.compute.manager [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received unexpected event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c for instance with vm_state deleted and task_state None.
Dec 13 08:24:12 compute-0 ceph-mon[76537]: pgmap v1770: 321 pgs: 321 active+clean; 372 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.487 248514 DEBUG nova.network.neutron [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Successfully updated port: 815f5388-ae4c-4748-ae1e-a35179c687ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.505 248514 DEBUG oslo_concurrency.lockutils [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.505 248514 DEBUG oslo_concurrency.lockutils [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.506 248514 DEBUG nova.network.neutron [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.671 248514 WARNING nova.network.neutron [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.736 248514 DEBUG nova.compute.manager [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.737 248514 DEBUG nova.compute.manager [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing instance network info cache due to event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:24:12 compute-0 nova_compute[248510]: 2025-12-13 08:24:12.737 248514 DEBUG oslo_concurrency.lockutils [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:12.755 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 376 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Dec 13 08:24:13 compute-0 podman[287388]: 2025-12-13 08:24:13.988140786 +0000 UTC m=+0.069708791 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:24:14 compute-0 podman[287389]: 2025-12-13 08:24:14.003549646 +0000 UTC m=+0.083797259 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:24:14 compute-0 podman[287387]: 2025-12-13 08:24:14.015604933 +0000 UTC m=+0.097074286 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 08:24:14 compute-0 ceph-mon[76537]: pgmap v1771: 321 pgs: 321 active+clean; 376 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Dec 13 08:24:14 compute-0 ovn_controller[148476]: 2025-12-13T08:24:14Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:58:17 10.100.0.7
Dec 13 08:24:14 compute-0 ovn_controller[148476]: 2025-12-13T08:24:14Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:58:17 10.100.0.7
Dec 13 08:24:14 compute-0 nova_compute[248510]: 2025-12-13 08:24:14.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:14 compute-0 nova_compute[248510]: 2025-12-13 08:24:14.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:24:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:24:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1842606236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:24:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:24:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1842606236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:24:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 390 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.2 MiB/s wr, 157 op/s
Dec 13 08:24:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1842606236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:24:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1842606236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:24:15 compute-0 nova_compute[248510]: 2025-12-13 08:24:15.618 248514 DEBUG nova.network.neutron [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.056 248514 DEBUG oslo_concurrency.lockutils [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.057 248514 DEBUG oslo_concurrency.lockutils [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.057 248514 DEBUG nova.network.neutron [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.062 248514 DEBUG nova.virt.libvirt.vif [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.062 248514 DEBUG nova.network.os_vif_util [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.063 248514 DEBUG nova.network.os_vif_util [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.064 248514 DEBUG os_vif [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.065 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.065 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.069 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap815f5388-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.069 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap815f5388-ae, col_values=(('external_ids', {'iface-id': '815f5388-ae4c-4748-ae1e-a35179c687ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:45:d8', 'vm-uuid': 'd503913e-a05e-47d4-9366-db4426b9aac1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.070 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:16 compute-0 NetworkManager[50376]: <info>  [1765614256.0719] manager: (tap815f5388-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.079 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.080 248514 INFO os_vif [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae')
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.081 248514 DEBUG nova.virt.libvirt.vif [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.081 248514 DEBUG nova.network.os_vif_util [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.082 248514 DEBUG nova.network.os_vif_util [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.086 248514 DEBUG nova.virt.libvirt.guest [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] attach device xml: <interface type="ethernet">
Dec 13 08:24:16 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:a8:45:d8"/>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   <target dev="tap815f5388-ae"/>
Dec 13 08:24:16 compute-0 nova_compute[248510]: </interface>
Dec 13 08:24:16 compute-0 nova_compute[248510]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 08:24:16 compute-0 kernel: tap815f5388-ae: entered promiscuous mode
Dec 13 08:24:16 compute-0 NetworkManager[50376]: <info>  [1765614256.1078] manager: (tap815f5388-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Dec 13 08:24:16 compute-0 ovn_controller[148476]: 2025-12-13T08:24:16Z|00289|binding|INFO|Claiming lport 815f5388-ae4c-4748-ae1e-a35179c687ad for this chassis.
Dec 13 08:24:16 compute-0 ovn_controller[148476]: 2025-12-13T08:24:16Z|00290|binding|INFO|815f5388-ae4c-4748-ae1e-a35179c687ad: Claiming fa:16:3e:a8:45:d8 10.100.0.9
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:16 compute-0 systemd-udevd[287453]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:24:16 compute-0 ovn_controller[148476]: 2025-12-13T08:24:16Z|00291|binding|INFO|Setting lport 815f5388-ae4c-4748-ae1e-a35179c687ad ovn-installed in OVS
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:16 compute-0 NetworkManager[50376]: <info>  [1765614256.1651] device (tap815f5388-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:24:16 compute-0 NetworkManager[50376]: <info>  [1765614256.1660] device (tap815f5388-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:24:16 compute-0 ovn_controller[148476]: 2025-12-13T08:24:16Z|00292|binding|INFO|Setting lport 815f5388-ae4c-4748-ae1e-a35179c687ad up in Southbound
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.206 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:45:d8 10.100.0.9'], port_security=['fa:16:3e:a8:45:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd503913e-a05e-47d4-9366-db4426b9aac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=815f5388-ae4c-4748-ae1e-a35179c687ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.208 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 815f5388-ae4c-4748-ae1e-a35179c687ad in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.211 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:24:16 compute-0 ceph-mon[76537]: pgmap v1772: 321 pgs: 321 active+clean; 390 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.2 MiB/s wr, 157 op/s
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.231 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11a577ca-5f95-4e1f-bebb-3ce3e906d039]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.252 248514 DEBUG nova.virt.libvirt.driver [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.252 248514 DEBUG nova.virt.libvirt.driver [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.252 248514 DEBUG nova.virt.libvirt.driver [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:b3:27:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.252 248514 DEBUG nova.virt.libvirt.driver [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:a8:45:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.281 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[243c790f-5734-4d8c-8014-775101b72516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.286 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8e09a60b-2cde-4d23-8b5b-025a043f4100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.309 248514 DEBUG nova.virt.libvirt.guest [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:24:16 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   <nova:name>tempest-tempest.common.compute-instance-152393601</nova:name>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:24:16</nova:creationTime>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:24:16 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:24:16 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:24:16 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:24:16 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:24:16 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:24:16 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:24:16 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:24:16 compute-0 nova_compute[248510]:     <nova:port uuid="e1eabc5e-9ed7-4b2e-ba64-11149b2f4043">
Dec 13 08:24:16 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:24:16 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:24:16 compute-0 nova_compute[248510]:     <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
Dec 13 08:24:16 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:24:16 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:24:16 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:24:16 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:24:16 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.325 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[16faf166-28f7-4f23-aeb9-e7b910fab72d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.330 248514 DEBUG oslo_concurrency.lockutils [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-d503913e-a05e-47d4-9366-db4426b9aac1-815f5388-ae4c-4748-ae1e-a35179c687ad" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.345 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d726848d-d2c0-485a-a7d4-fdf7a59594c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 18986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287461, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.367 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09b8957b-657c-406d-ab32-01b67248d4fe]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675811, 'tstamp': 675811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287462, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675814, 'tstamp': 675814}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287462, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.370 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.371 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:16 compute-0 nova_compute[248510]: 2025-12-13 08:24:16.373 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.373 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.374 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.374 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.375 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.021 248514 DEBUG nova.compute.manager [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-vif-deleted-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.021 248514 DEBUG nova.compute.manager [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.022 248514 DEBUG nova.compute.manager [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing instance network info cache due to event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.022 248514 DEBUG oslo_concurrency.lockutils [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.022 248514 DEBUG oslo_concurrency.lockutils [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.022 248514 DEBUG nova.network.neutron [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:24:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 390 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 171 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.221 248514 DEBUG nova.compute.manager [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.222 248514 DEBUG oslo_concurrency.lockutils [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.222 248514 DEBUG oslo_concurrency.lockutils [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.223 248514 DEBUG oslo_concurrency.lockutils [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.224 248514 DEBUG nova.compute.manager [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Processing event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.224 248514 DEBUG nova.compute.manager [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.224 248514 DEBUG oslo_concurrency.lockutils [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.225 248514 DEBUG oslo_concurrency.lockutils [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.225 248514 DEBUG oslo_concurrency.lockutils [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.226 248514 DEBUG nova.compute.manager [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.226 248514 WARNING nova.compute.manager [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received unexpected event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with vm_state building and task_state spawning.
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.227 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.234 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614257.2332642, b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.235 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] VM Resumed (Lifecycle Event)
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.238 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.247 248514 INFO nova.virt.libvirt.driver [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance spawned successfully.
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.248 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.266 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.273 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.277 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.278 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.278 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.278 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.278 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.279 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.337 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.415 248514 INFO nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Took 16.90 seconds to spawn the instance on the hypervisor.
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.416 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.514 248514 INFO nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Took 18.34 seconds to build instance.
Dec 13 08:24:17 compute-0 nova_compute[248510]: 2025-12-13 08:24:17.598 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:18 compute-0 ovn_controller[148476]: 2025-12-13T08:24:18Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:45:d8 10.100.0.9
Dec 13 08:24:18 compute-0 ovn_controller[148476]: 2025-12-13T08:24:18Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:45:d8 10.100.0.9
Dec 13 08:24:18 compute-0 ceph-mon[76537]: pgmap v1773: 321 pgs: 321 active+clean; 390 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 171 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Dec 13 08:24:18 compute-0 nova_compute[248510]: 2025-12-13 08:24:18.475 248514 DEBUG nova.network.neutron [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updated VIF entry in instance network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:24:18 compute-0 nova_compute[248510]: 2025-12-13 08:24:18.476 248514 DEBUG nova.network.neutron [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:18 compute-0 nova_compute[248510]: 2025-12-13 08:24:18.492 248514 DEBUG oslo_concurrency.lockutils [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:18 compute-0 nova_compute[248510]: 2025-12-13 08:24:18.493 248514 DEBUG nova.compute.manager [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:18 compute-0 nova_compute[248510]: 2025-12-13 08:24:18.493 248514 DEBUG nova.compute.manager [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing instance network info cache due to event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:24:18 compute-0 nova_compute[248510]: 2025-12-13 08:24:18.493 248514 DEBUG oslo_concurrency.lockutils [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:18 compute-0 nova_compute[248510]: 2025-12-13 08:24:18.493 248514 DEBUG oslo_concurrency.lockutils [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:18 compute-0 nova_compute[248510]: 2025-12-13 08:24:18.494 248514 DEBUG nova.network.neutron [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.057 248514 DEBUG oslo_concurrency.lockutils [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-d503913e-a05e-47d4-9366-db4426b9aac1-815f5388-ae4c-4748-ae1e-a35179c687ad" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.058 248514 DEBUG oslo_concurrency.lockutils [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-d503913e-a05e-47d4-9366-db4426b9aac1-815f5388-ae4c-4748-ae1e-a35179c687ad" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.089 248514 DEBUG nova.objects.instance [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid d503913e-a05e-47d4-9366-db4426b9aac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 156 op/s
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.117 248514 DEBUG nova.virt.libvirt.vif [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.117 248514 DEBUG nova.network.os_vif_util [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.118 248514 DEBUG nova.network.os_vif_util [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.121 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.123 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.125 248514 DEBUG nova.virt.libvirt.driver [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Attempting to detach device tap815f5388-ae from instance d503913e-a05e-47d4-9366-db4426b9aac1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.126 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:a8:45:d8"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <target dev="tap815f5388-ae"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]: </interface>
Dec 13 08:24:19 compute-0 nova_compute[248510]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.131 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.135 248514 DEBUG nova.compute.manager [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.135 248514 DEBUG nova.compute.manager [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing instance network info cache due to event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.136 248514 DEBUG oslo_concurrency.lockutils [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.138 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface>not found in domain: <domain type='kvm' id='37'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <name>instance-00000020</name>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <uuid>d503913e-a05e-47d4-9366-db4426b9aac1</uuid>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:name>tempest-tempest.common.compute-instance-152393601</nova:name>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:24:16</nova:creationTime>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:port uuid="e1eabc5e-9ed7-4b2e-ba64-11149b2f4043">
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:24:19 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <resource>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </resource>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <system>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <entry name='serial'>d503913e-a05e-47d4-9366-db4426b9aac1</entry>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <entry name='uuid'>d503913e-a05e-47d4-9366-db4426b9aac1</entry>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </system>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <os>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </os>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <features>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </features>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/d503913e-a05e-47d4-9366-db4426b9aac1_disk' index='2'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       </source>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/d503913e-a05e-47d4-9366-db4426b9aac1_disk.config' index='1'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       </source>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:b3:27:85'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target dev='tape1eabc5e-9e'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:a8:45:d8'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target dev='tap815f5388-ae'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='net1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <source path='/dev/pts/2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/console.log' append='off'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       </target>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/2'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <source path='/dev/pts/2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/console.log' append='off'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </console>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </input>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </input>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </input>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </graphics>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <video>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </video>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c374,c657</label>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c374,c657</imagelabel>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:24:19 compute-0 nova_compute[248510]: </domain>
Dec 13 08:24:19 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.139 248514 INFO nova.virt.libvirt.driver [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap815f5388-ae from instance d503913e-a05e-47d4-9366-db4426b9aac1 from the persistent domain config.
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.139 248514 DEBUG nova.virt.libvirt.driver [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] (1/8): Attempting to detach device tap815f5388-ae with device alias net1 from instance d503913e-a05e-47d4-9366-db4426b9aac1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.139 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:a8:45:d8"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <target dev="tap815f5388-ae"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]: </interface>
Dec 13 08:24:19 compute-0 nova_compute[248510]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 08:24:19 compute-0 kernel: tap815f5388-ae (unregistering): left promiscuous mode
Dec 13 08:24:19 compute-0 NetworkManager[50376]: <info>  [1765614259.2733] device (tap815f5388-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.278 248514 DEBUG nova.network.neutron [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updated VIF entry in instance network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.278 248514 DEBUG nova.network.neutron [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:19 compute-0 ovn_controller[148476]: 2025-12-13T08:24:19Z|00293|binding|INFO|Releasing lport 815f5388-ae4c-4748-ae1e-a35179c687ad from this chassis (sb_readonly=0)
Dec 13 08:24:19 compute-0 ovn_controller[148476]: 2025-12-13T08:24:19Z|00294|binding|INFO|Setting lport 815f5388-ae4c-4748-ae1e-a35179c687ad down in Southbound
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.282 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:19 compute-0 ovn_controller[148476]: 2025-12-13T08:24:19Z|00295|binding|INFO|Removing iface tap815f5388-ae ovn-installed in OVS
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.289 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:45:d8 10.100.0.9'], port_security=['fa:16:3e:a8:45:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd503913e-a05e-47d4-9366-db4426b9aac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '9', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=815f5388-ae4c-4748-ae1e-a35179c687ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.290 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 815f5388-ae4c-4748-ae1e-a35179c687ad in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.292 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.299 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Received event <DeviceRemovedEvent: 1765614259.2940338, d503913e-a05e-47d4-9366-db4426b9aac1 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.300 248514 DEBUG nova.virt.libvirt.driver [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Start waiting for the detach event from libvirt for device tap815f5388-ae with device alias net1 for instance d503913e-a05e-47d4-9366-db4426b9aac1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.300 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.301 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.308 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface>not found in domain: <domain type='kvm' id='37'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <name>instance-00000020</name>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <uuid>d503913e-a05e-47d4-9366-db4426b9aac1</uuid>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:name>tempest-tempest.common.compute-instance-152393601</nova:name>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:24:16</nova:creationTime>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:port uuid="e1eabc5e-9ed7-4b2e-ba64-11149b2f4043">
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:24:19 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <resource>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </resource>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <system>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <entry name='serial'>d503913e-a05e-47d4-9366-db4426b9aac1</entry>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <entry name='uuid'>d503913e-a05e-47d4-9366-db4426b9aac1</entry>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </system>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <os>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </os>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <features>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </features>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/d503913e-a05e-47d4-9366-db4426b9aac1_disk' index='2'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       </source>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/d503913e-a05e-47d4-9366-db4426b9aac1_disk.config' index='1'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       </source>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:b3:27:85'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target dev='tape1eabc5e-9e'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <source path='/dev/pts/2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/console.log' append='off'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       </target>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/2'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <source path='/dev/pts/2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/console.log' append='off'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </console>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </input>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </input>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </input>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </graphics>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <video>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </video>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c374,c657</label>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c374,c657</imagelabel>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:24:19 compute-0 nova_compute[248510]: </domain>
Dec 13 08:24:19 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.309 248514 INFO nova.virt.libvirt.driver [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap815f5388-ae from instance d503913e-a05e-47d4-9366-db4426b9aac1 from the live domain config.
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.310 248514 DEBUG nova.virt.libvirt.vif [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.310 248514 DEBUG nova.network.os_vif_util [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.311 248514 DEBUG nova.network.os_vif_util [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.311 248514 DEBUG os_vif [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.313 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap815f5388-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.315 248514 DEBUG oslo_concurrency.lockutils [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.315 248514 DEBUG nova.compute.manager [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-changed-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.315 248514 DEBUG nova.compute.manager [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing instance network info cache due to event network-changed-815f5388-ae4c-4748-ae1e-a35179c687ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.316 248514 DEBUG oslo_concurrency.lockutils [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.316 248514 DEBUG oslo_concurrency.lockutils [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.316 248514 DEBUG nova.network.neutron [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing network info cache for port 815f5388-ae4c-4748-ae1e-a35179c687ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.318 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.319 248514 DEBUG oslo_concurrency.lockutils [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.319 248514 DEBUG nova.network.neutron [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.321 248514 DEBUG nova.compute.manager [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 DEBUG oslo_concurrency.lockutils [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 DEBUG oslo_concurrency.lockutils [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 DEBUG oslo_concurrency.lockutils [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 DEBUG nova.compute.manager [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 WARNING nova.compute.manager [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received unexpected event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 DEBUG nova.compute.manager [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 DEBUG oslo_concurrency.lockutils [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.323 248514 DEBUG oslo_concurrency.lockutils [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.323 248514 DEBUG oslo_concurrency.lockutils [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.323 248514 DEBUG nova.compute.manager [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.323 248514 WARNING nova.compute.manager [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received unexpected event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.324 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.317 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6067a788-bdce-40fb-b115-302926969b8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.326 248514 INFO os_vif [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae')
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.327 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:name>tempest-tempest.common.compute-instance-152393601</nova:name>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:24:19</nova:creationTime>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     <nova:port uuid="e1eabc5e-9ed7-4b2e-ba64-11149b2f4043">
Dec 13 08:24:19 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:24:19 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:24:19 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:24:19 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:24:19 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.370 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c30207aa-4ab3-4174-a08e-11eb6f8e9b3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.375 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ccae78c6-adb1-49e0-aa7a-f012de4c7745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.410 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb8d5de-7ab5-4480-915c-2fbe7298d8b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.429 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ecef10ab-8044-4579-ac88-5c5630d00709]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 18986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287474, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.448 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d13eab-fc42-4e60-a5d9-e32dff4c535f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675811, 'tstamp': 675811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287475, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675814, 'tstamp': 675814}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287475, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.451 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.453 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.455 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.455 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.456 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.456 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.726 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614244.724721, 4403714d-3521-4409-9c3b-59d655fc999d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.727 248514 INFO nova.compute.manager [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] VM Stopped (Lifecycle Event)
Dec 13 08:24:19 compute-0 nova_compute[248510]: 2025-12-13 08:24:19.749 248514 DEBUG nova.compute.manager [None req-be68987f-44fc-4709-aea9-2322cd833bbc - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:24:20 compute-0 ceph-mon[76537]: pgmap v1774: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 156 op/s
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003389182341954993 of space, bias 1.0, pg target 1.016754702586498 quantized to 32 (current 32)
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670482364989137 of space, bias 1.0, pg target 0.1994474227131752 quantized to 32 (current 32)
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.230337999555883e-07 of space, bias 4.0, pg target 0.0008647484247468835 quantized to 16 (current 32)
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:24:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Dec 13 08:24:20 compute-0 nova_compute[248510]: 2025-12-13 08:24:20.912 248514 DEBUG nova.network.neutron [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updated VIF entry in instance network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:24:20 compute-0 nova_compute[248510]: 2025-12-13 08:24:20.912 248514 DEBUG nova.network.neutron [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:20 compute-0 nova_compute[248510]: 2025-12-13 08:24:20.959 248514 DEBUG oslo_concurrency.lockutils [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.060 248514 DEBUG oslo_concurrency.lockutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.061 248514 DEBUG oslo_concurrency.lockutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.061 248514 INFO nova.compute.manager [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Rebooting instance
Dec 13 08:24:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 128 op/s
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.196 248514 DEBUG oslo_concurrency.lockutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.197 248514 DEBUG oslo_concurrency.lockutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquired lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.197 248514 DEBUG nova.network.neutron [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.480 248514 DEBUG nova.network.neutron [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updated VIF entry in instance network info cache for port 815f5388-ae4c-4748-ae1e-a35179c687ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.481 248514 DEBUG nova.network.neutron [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.500 248514 DEBUG oslo_concurrency.lockutils [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.645 248514 DEBUG nova.network.neutron [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updated VIF entry in instance network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.645 248514 DEBUG nova.network.neutron [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.665 248514 DEBUG oslo_concurrency.lockutils [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.826 248514 DEBUG oslo_concurrency.lockutils [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.826 248514 DEBUG oslo_concurrency.lockutils [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.827 248514 DEBUG nova.network.neutron [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.914 248514 DEBUG nova.compute.manager [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-unplugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.914 248514 DEBUG oslo_concurrency.lockutils [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.915 248514 DEBUG oslo_concurrency.lockutils [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.915 248514 DEBUG oslo_concurrency.lockutils [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.915 248514 DEBUG nova.compute.manager [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-unplugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.916 248514 WARNING nova.compute.manager [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received unexpected event network-vif-unplugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.916 248514 DEBUG nova.compute.manager [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.916 248514 DEBUG oslo_concurrency.lockutils [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.916 248514 DEBUG oslo_concurrency.lockutils [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.916 248514 DEBUG oslo_concurrency.lockutils [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.917 248514 DEBUG nova.compute.manager [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:21 compute-0 nova_compute[248510]: 2025-12-13 08:24:21.917 248514 WARNING nova.compute.manager [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received unexpected event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.
Dec 13 08:24:22 compute-0 ceph-mon[76537]: pgmap v1775: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 128 op/s
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.297 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.297 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.298 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.298 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.298 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.299 248514 INFO nova.compute.manager [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Terminating instance
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.300 248514 DEBUG nova.compute.manager [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:24:22 compute-0 kernel: tape1eabc5e-9e (unregistering): left promiscuous mode
Dec 13 08:24:22 compute-0 NetworkManager[50376]: <info>  [1765614262.3432] device (tape1eabc5e-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:24:22 compute-0 ovn_controller[148476]: 2025-12-13T08:24:22Z|00296|binding|INFO|Releasing lport e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 from this chassis (sb_readonly=0)
Dec 13 08:24:22 compute-0 ovn_controller[148476]: 2025-12-13T08:24:22Z|00297|binding|INFO|Setting lport e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 down in Southbound
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.354 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:22 compute-0 ovn_controller[148476]: 2025-12-13T08:24:22Z|00298|binding|INFO|Removing iface tape1eabc5e-9e ovn-installed in OVS
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.357 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.367 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:27:85 10.100.0.3'], port_security=['fa:16:3e:b3:27:85 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd503913e-a05e-47d4-9366-db4426b9aac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd758160-971f-4f0e-a4ca-a13304d3c491', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.368 158419 INFO neutron.agent.ovn.metadata.agent [-] Port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.375 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.379 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.399 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[754d463a-5f3d-4458-9c24-f389e5e4371b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:22 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Deactivated successfully.
Dec 13 08:24:22 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Consumed 14.674s CPU time.
Dec 13 08:24:22 compute-0 systemd-machined[210538]: Machine qemu-37-instance-00000020 terminated.
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.438 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ae46693a-fabe-4fd0-b75c-5c6e2d4070bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.443 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[22c4103f-6729-4240-9582-e498e3c73766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.485 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3f1acd-4552-44ff-a60c-0919cfdf22c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.506 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[868cf228-a58a-4fc4-b5b1-c363d3ef5cee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 18986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287487, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.527 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4ebeef-be2d-42db-8c21-0b89d71f8b76]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675811, 'tstamp': 675811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287489, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675814, 'tstamp': 675814}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287489, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.530 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.540 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.541 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.541 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.542 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.546 248514 INFO nova.virt.libvirt.driver [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Instance destroyed successfully.
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.547 248514 DEBUG nova.objects.instance [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'resources' on Instance uuid d503913e-a05e-47d4-9366-db4426b9aac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.566 248514 DEBUG nova.virt.libvirt.vif [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.571 248514 DEBUG nova.network.os_vif_util [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.572 248514 DEBUG nova.network.os_vif_util [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.572 248514 DEBUG os_vif [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.575 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1eabc5e-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.579 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.581 248514 INFO os_vif [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e')
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.582 248514 DEBUG nova.virt.libvirt.vif [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.583 248514 DEBUG nova.network.os_vif_util [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.584 248514 DEBUG nova.network.os_vif_util [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.584 248514 DEBUG os_vif [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.586 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap815f5388-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.587 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.588 248514 INFO os_vif [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae')
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.859 248514 INFO nova.virt.libvirt.driver [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Deleting instance files /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1_del
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.860 248514 INFO nova.virt.libvirt.driver [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Deletion of /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1_del complete
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.967 248514 INFO nova.compute.manager [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Took 0.67 seconds to destroy the instance on the hypervisor.
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.968 248514 DEBUG oslo.service.loopingcall [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.969 248514 DEBUG nova.compute.manager [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.969 248514 DEBUG nova.network.neutron [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:24:22 compute-0 nova_compute[248510]: 2025-12-13 08:24:22.978 248514 DEBUG nova.network.neutron [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Updating instance_info_cache with network_info: [{"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.002 248514 DEBUG oslo_concurrency.lockutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Releasing lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.004 248514 DEBUG nova.compute.manager [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 383 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 147 op/s
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.271 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.272 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.272 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.273 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.273 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.275 248514 INFO nova.compute.manager [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Terminating instance
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.276 248514 DEBUG nova.compute.manager [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:24:23 compute-0 kernel: tapb494f789-c1 (unregistering): left promiscuous mode
Dec 13 08:24:23 compute-0 NetworkManager[50376]: <info>  [1765614263.5958] device (tapb494f789-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:24:23 compute-0 kernel: tap627622b8-ef (unregistering): left promiscuous mode
Dec 13 08:24:23 compute-0 NetworkManager[50376]: <info>  [1765614263.6027] device (tap627622b8-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:24:23 compute-0 ovn_controller[148476]: 2025-12-13T08:24:23Z|00299|binding|INFO|Releasing lport b494f789-c137-45c5-9750-2bf0b43681ad from this chassis (sb_readonly=0)
Dec 13 08:24:23 compute-0 ovn_controller[148476]: 2025-12-13T08:24:23Z|00300|binding|INFO|Setting lport b494f789-c137-45c5-9750-2bf0b43681ad down in Southbound
Dec 13 08:24:23 compute-0 ovn_controller[148476]: 2025-12-13T08:24:23Z|00301|binding|INFO|Removing iface tapb494f789-c1 ovn-installed in OVS
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.612 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:55:3f 10.100.0.12'], port_security=['fa:16:3e:26:55:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b3086cd3-fbaf-4f8e-bca2-162a0582d3a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f78f312dfcc4df6ba40b7c8a4e1aa97', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9aebbe0-bcf2-4e40-aa62-10b8cea7c801', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c3ff6a3-0731-41dd-8d72-42e18a11ea75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b494f789-c137-45c5-9750-2bf0b43681ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.613 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b494f789-c137-45c5-9750-2bf0b43681ad in datapath 0740d1ee-47e1-4bdf-bdc4-2dafff999f03 unbound from our chassis
Dec 13 08:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.615 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0740d1ee-47e1-4bdf-bdc4-2dafff999f03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.617 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35c8bae5-3a8a-4490-890d-0025c8002e17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.618 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 namespace which is not needed anymore
Dec 13 08:24:23 compute-0 ovn_controller[148476]: 2025-12-13T08:24:23Z|00302|binding|INFO|Releasing lport 627622b8-ef54-4181-bd8d-e8e82650b143 from this chassis (sb_readonly=0)
Dec 13 08:24:23 compute-0 ovn_controller[148476]: 2025-12-13T08:24:23Z|00303|binding|INFO|Setting lport 627622b8-ef54-4181-bd8d-e8e82650b143 down in Southbound
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:23 compute-0 ovn_controller[148476]: 2025-12-13T08:24:23Z|00304|binding|INFO|Removing iface tap627622b8-ef ovn-installed in OVS
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.637 248514 INFO nova.network.neutron [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Port 815f5388-ae4c-4748-ae1e-a35179c687ad from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 13 08:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.638 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:b2:83 10.100.0.9'], port_security=['fa:16:3e:4a:b2:83 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dc64fea4-e9a8-47e7-8a3a-d01897fc81de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06fbab937d6444558229b2351632e711', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fa325bd2-c57a-49fb-8dd9-f45405c95b4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f47224ac-d05f-46db-ac07-cb476b38b044, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=627622b8-ef54-4181-bd8d-e8e82650b143) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.638 248514 DEBUG nova.network.neutron [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:23 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000023.scope: Deactivated successfully.
Dec 13 08:24:23 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000023.scope: Consumed 6.689s CPU time.
Dec 13 08:24:23 compute-0 systemd-machined[210538]: Machine qemu-40-instance-00000023 terminated.
Dec 13 08:24:23 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Dec 13 08:24:23 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000001f.scope: Consumed 15.210s CPU time.
Dec 13 08:24:23 compute-0 systemd-machined[210538]: Machine qemu-36-instance-0000001f terminated.
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.675 248514 DEBUG oslo_concurrency.lockutils [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.704 248514 DEBUG oslo_concurrency.lockutils [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-d503913e-a05e-47d4-9366-db4426b9aac1-815f5388-ae4c-4748-ae1e-a35179c687ad" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:23 compute-0 NetworkManager[50376]: <info>  [1765614263.7215] manager: (tapb494f789-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.725 248514 INFO nova.virt.libvirt.driver [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Instance destroyed successfully.
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.729 248514 DEBUG nova.objects.instance [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lazy-loading 'resources' on Instance uuid dc64fea4-e9a8-47e7-8a3a-d01897fc81de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.736 248514 INFO nova.virt.libvirt.driver [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance destroyed successfully.
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.736 248514 DEBUG nova.objects.instance [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lazy-loading 'resources' on Instance uuid b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.746 248514 DEBUG nova.virt.libvirt.vif [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-153788010',display_name='tempest-FloatingIPsAssociationTestJSON-server-153788010',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-153788010',id=31,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='06fbab937d6444558229b2351632e711',ramdisk_id='',reservation_id='r-m786w5ky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-609563086',owner_user_name='tempest-FloatingIPsAssociationTestJSON-609563086-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:05Z,user_data=None,user_id='79d4b34b8bd3452cb5b8c0954166f397',uuid=dc64fea4-e9a8-47e7-8a3a-d01897fc81de,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.747 248514 DEBUG nova.network.os_vif_util [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converting VIF {"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.747 248514 DEBUG nova.network.os_vif_util [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.748 248514 DEBUG os_vif [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.750 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap627622b8-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.752 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.754 248514 DEBUG nova.virt.libvirt.vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-370636454',display_name='tempest-InstanceActionsTestJSON-server-370636454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-370636454',id=35,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:24:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f78f312dfcc4df6ba40b7c8a4e1aa97',ramdisk_id='',reservation_id='r-b8os57ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1859862292',owner_user_name='tempest-InstanceActionsTestJSON-1859862292-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:24:23Z,user_data=None,user_id='6827fc2174b74c2a92803d852e87c70a',uuid=b3086cd3-fbaf-4f8e-bca2-162a0582d3a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.754 248514 DEBUG nova.network.os_vif_util [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converting VIF {"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.755 248514 DEBUG nova.network.os_vif_util [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.755 248514 DEBUG os_vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.758 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb494f789-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.759 248514 INFO os_vif [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef')
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.774 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:23 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [NOTICE]   (287376) : haproxy version is 2.8.14-c23fe91
Dec 13 08:24:23 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [NOTICE]   (287376) : path to executable is /usr/sbin/haproxy
Dec 13 08:24:23 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [WARNING]  (287376) : Exiting Master process...
Dec 13 08:24:23 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [WARNING]  (287376) : Exiting Master process...
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:24:23 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [ALERT]    (287376) : Current worker (287378) exited with code 143 (Terminated)
Dec 13 08:24:23 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [WARNING]  (287376) : All workers exited. Exiting... (0)
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.779 248514 INFO os_vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1')
Dec 13 08:24:23 compute-0 systemd[1]: libpod-3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9.scope: Deactivated successfully.
Dec 13 08:24:23 compute-0 podman[287557]: 2025-12-13 08:24:23.786427824 +0000 UTC m=+0.053397529 container died 3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.790 248514 DEBUG nova.virt.libvirt.driver [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Start _get_guest_xml network_info=[{"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.798 248514 WARNING nova.virt.libvirt.driver [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.806 248514 DEBUG nova.virt.libvirt.host [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.808 248514 DEBUG nova.virt.libvirt.host [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.811 248514 DEBUG nova.virt.libvirt.host [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.811 248514 DEBUG nova.virt.libvirt.host [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.812 248514 DEBUG nova.virt.libvirt.driver [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.812 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.813 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.813 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.813 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.813 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.813 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.813 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.814 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.814 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.814 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.814 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.814 248514 DEBUG nova.objects.instance [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9-userdata-shm.mount: Deactivated successfully.
Dec 13 08:24:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4393fad8ce5a73df0ceef6cfed59401542ad7871cb68553053253ed76ee8e4e-merged.mount: Deactivated successfully.
Dec 13 08:24:23 compute-0 podman[287557]: 2025-12-13 08:24:23.837096014 +0000 UTC m=+0.104065709 container cleanup 3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.841 248514 DEBUG oslo_concurrency.processutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:23 compute-0 systemd[1]: libpod-conmon-3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9.scope: Deactivated successfully.
Dec 13 08:24:23 compute-0 podman[287614]: 2025-12-13 08:24:23.94798045 +0000 UTC m=+0.085866340 container remove 3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.956 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1f9e19-1f8e-4821-84a2-d0d7fd8ef4c9]: (4, ('Sat Dec 13 08:24:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 (3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9)\n3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9\nSat Dec 13 08:24:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 (3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9)\n3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.959 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[945dc335-6f26-44b6-a2ea-316d6f6c379e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.960 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0740d1ee-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:23 compute-0 kernel: tap0740d1ee-40: left promiscuous mode
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.962 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:23 compute-0 nova_compute[248510]: 2025-12-13 08:24:23.978 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.982 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[17942e2f-5587-4a13-a3aa-ed896b765434]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.996 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3614aa6f-35ac-4d1b-a92d-9733f8f22899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.998 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[85bcf9c9-03f6-4dad-9d88-fb02281d7a7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.020 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b280c6-b28e-4a63-944a-9c85c0601397]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682814, 'reachable_time': 41738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287646, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.023 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.023 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d40fd7-e361-489c-bcfc-689b105dd531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.023 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 627622b8-ef54-4181-bd8d-e8e82650b143 in datapath 62193ff6-aaa1-401a-b1e0-512e67752a9e unbound from our chassis
Dec 13 08:24:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d0740d1ee\x2d47e1\x2d4bdf\x2dbdc4\x2d2dafff999f03.mount: Deactivated successfully.
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.025 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62193ff6-aaa1-401a-b1e0-512e67752a9e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.025 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[34caeb64-a20d-438c-8f5c-d672a5b3d466]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.026 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e namespace which is not needed anymore
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.177 248514 INFO nova.virt.libvirt.driver [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Deleting instance files /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de_del
Dec 13 08:24:24 compute-0 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [NOTICE]   (284807) : haproxy version is 2.8.14-c23fe91
Dec 13 08:24:24 compute-0 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [NOTICE]   (284807) : path to executable is /usr/sbin/haproxy
Dec 13 08:24:24 compute-0 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [WARNING]  (284807) : Exiting Master process...
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.178 248514 INFO nova.virt.libvirt.driver [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Deletion of /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de_del complete
Dec 13 08:24:24 compute-0 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [WARNING]  (284807) : Exiting Master process...
Dec 13 08:24:24 compute-0 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [ALERT]    (284807) : Current worker (284809) exited with code 143 (Terminated)
Dec 13 08:24:24 compute-0 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [WARNING]  (284807) : All workers exited. Exiting... (0)
Dec 13 08:24:24 compute-0 systemd[1]: libpod-70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08.scope: Deactivated successfully.
Dec 13 08:24:24 compute-0 podman[287667]: 2025-12-13 08:24:24.190865063 +0000 UTC m=+0.051904142 container died 70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 08:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08-userdata-shm.mount: Deactivated successfully.
Dec 13 08:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f58e74a911db8950501b6e1ba22b29e7f74e9f24881f0a836706699e6a74d9c3-merged.mount: Deactivated successfully.
Dec 13 08:24:24 compute-0 podman[287667]: 2025-12-13 08:24:24.227671881 +0000 UTC m=+0.088710950 container cleanup 70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:24:24 compute-0 systemd[1]: libpod-conmon-70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08.scope: Deactivated successfully.
Dec 13 08:24:24 compute-0 podman[287698]: 2025-12-13 08:24:24.291490835 +0000 UTC m=+0.042294204 container remove 70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.298 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4c817ddd-bf43-447f-bf95-01766bfc8416]: (4, ('Sat Dec 13 08:24:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e (70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08)\n70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08\nSat Dec 13 08:24:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e (70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08)\n70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.300 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c09a7a-270e-4ed6-a6a9-3f9ef758a7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.301 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62193ff6-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:24 compute-0 kernel: tap62193ff6-a0: left promiscuous mode
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.320 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.322 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2f880138-bc72-4b35-afc8-be60c2e17c7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.326 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-unplugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.326 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.327 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.327 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.327 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-unplugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.327 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-unplugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.328 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.328 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.328 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.328 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.328 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.328 248514 WARNING nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received unexpected event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 for instance with vm_state active and task_state deleting.
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.329 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-unplugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.329 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.329 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.329 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.329 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-unplugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.329 248514 WARNING nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received unexpected event network-vif-unplugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with vm_state active and task_state reboot_started_hard.
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.344 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e90fc71b-89ee-447a-9f0b-a3f8d7eb3dbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.346 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[518edbc1-bc50-4fb5-8df7-39757b3580e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.367 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[33157f67-3657-4cf3-9b59-e1f6f2fafdc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675992, 'reachable_time': 27996, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287713, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.369 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:24:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.369 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2b82ac-fb40-4925-b540-daf801678ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:24:24 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806289067' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.469 248514 DEBUG oslo_concurrency.processutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.502 248514 DEBUG oslo_concurrency.processutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:24 compute-0 ceph-mon[76537]: pgmap v1776: 321 pgs: 321 active+clean; 383 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 147 op/s
Dec 13 08:24:24 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3806289067' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.746 248514 INFO nova.compute.manager [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Took 1.47 seconds to destroy the instance on the hypervisor.
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.747 248514 DEBUG oslo.service.loopingcall [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.748 248514 DEBUG nova.compute.manager [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:24:24 compute-0 nova_compute[248510]: 2025-12-13 08:24:24.749 248514 DEBUG nova.network.neutron [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:24:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d62193ff6\x2daaa1\x2d401a\x2db1e0\x2d512e67752a9e.mount: Deactivated successfully.
Dec 13 08:24:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:24:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:24:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/176819737' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.096 248514 DEBUG oslo_concurrency.processutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 326 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 157 op/s
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.098 248514 DEBUG nova.virt.libvirt.vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-370636454',display_name='tempest-InstanceActionsTestJSON-server-370636454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-370636454',id=35,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:24:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f78f312dfcc4df6ba40b7c8a4e1aa97',ramdisk_id='',reservation_id='r-b8os57ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1859862292',owner_user_name='tempest-InstanceActionsTestJSON-1859862292-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:24:23Z,user_data=None,user_id='6827fc2174b74c2a92803d852e87c70a',uuid=b3086cd3-fbaf-4f8e-bca2-162a0582d3a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.098 248514 DEBUG nova.network.os_vif_util [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converting VIF {"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.099 248514 DEBUG nova.network.os_vif_util [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.100 248514 DEBUG nova.objects.instance [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.121 248514 DEBUG nova.virt.libvirt.driver [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:24:25 compute-0 nova_compute[248510]:   <uuid>b3086cd3-fbaf-4f8e-bca2-162a0582d3a4</uuid>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   <name>instance-00000023</name>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <nova:name>tempest-InstanceActionsTestJSON-server-370636454</nova:name>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:24:23</nova:creationTime>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:24:25 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:24:25 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:24:25 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:24:25 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:24:25 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:24:25 compute-0 nova_compute[248510]:         <nova:user uuid="6827fc2174b74c2a92803d852e87c70a">tempest-InstanceActionsTestJSON-1859862292-project-member</nova:user>
Dec 13 08:24:25 compute-0 nova_compute[248510]:         <nova:project uuid="8f78f312dfcc4df6ba40b7c8a4e1aa97">tempest-InstanceActionsTestJSON-1859862292</nova:project>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:24:25 compute-0 nova_compute[248510]:         <nova:port uuid="b494f789-c137-45c5-9750-2bf0b43681ad">
Dec 13 08:24:25 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <system>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <entry name="serial">b3086cd3-fbaf-4f8e-bca2-162a0582d3a4</entry>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <entry name="uuid">b3086cd3-fbaf-4f8e-bca2-162a0582d3a4</entry>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     </system>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   <os>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   </os>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   <features>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   </features>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk">
Dec 13 08:24:25 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       </source>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:24:25 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config">
Dec 13 08:24:25 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       </source>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:24:25 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:26:55:3f"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <target dev="tapb494f789-c1"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/console.log" append="off"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <video>
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     </video>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <input type="keyboard" bus="usb"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:24:25 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:24:25 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:24:25 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:24:25 compute-0 nova_compute[248510]: </domain>
Dec 13 08:24:25 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.122 248514 DEBUG nova.virt.libvirt.driver [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.122 248514 DEBUG nova.virt.libvirt.driver [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.122 248514 DEBUG nova.virt.libvirt.vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-370636454',display_name='tempest-InstanceActionsTestJSON-server-370636454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-370636454',id=35,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:24:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='8f78f312dfcc4df6ba40b7c8a4e1aa97',ramdisk_id='',reservation_id='r-b8os57ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1859862292',owner_user_name='tempest-InstanceActionsTestJSON-1859862292-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:24:23Z,user_data=None,user_id='6827fc2174b74c2a92803d852e87c70a',uuid=b3086cd3-fbaf-4f8e-bca2-162a0582d3a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.123 248514 DEBUG nova.network.os_vif_util [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converting VIF {"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.123 248514 DEBUG nova.network.os_vif_util [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.124 248514 DEBUG os_vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.124 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.124 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.125 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.128 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb494f789-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.129 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb494f789-c1, col_values=(('external_ids', {'iface-id': 'b494f789-c137-45c5-9750-2bf0b43681ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:55:3f', 'vm-uuid': 'b3086cd3-fbaf-4f8e-bca2-162a0582d3a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:25 compute-0 NetworkManager[50376]: <info>  [1765614265.1317] manager: (tapb494f789-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.133 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.138 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.140 248514 INFO os_vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1')
Dec 13 08:24:25 compute-0 kernel: tapb494f789-c1: entered promiscuous mode
Dec 13 08:24:25 compute-0 NetworkManager[50376]: <info>  [1765614265.2196] manager: (tapb494f789-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Dec 13 08:24:25 compute-0 systemd-udevd[287563]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:24:25 compute-0 ovn_controller[148476]: 2025-12-13T08:24:25Z|00305|binding|INFO|Claiming lport b494f789-c137-45c5-9750-2bf0b43681ad for this chassis.
Dec 13 08:24:25 compute-0 ovn_controller[148476]: 2025-12-13T08:24:25Z|00306|binding|INFO|b494f789-c137-45c5-9750-2bf0b43681ad: Claiming fa:16:3e:26:55:3f 10.100.0.12
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.220 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:25 compute-0 NetworkManager[50376]: <info>  [1765614265.2348] device (tapb494f789-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:24:25 compute-0 NetworkManager[50376]: <info>  [1765614265.2355] device (tapb494f789-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:24:25 compute-0 ovn_controller[148476]: 2025-12-13T08:24:25Z|00307|binding|INFO|Setting lport b494f789-c137-45c5-9750-2bf0b43681ad ovn-installed in OVS
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.240 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.242 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:25 compute-0 ovn_controller[148476]: 2025-12-13T08:24:25Z|00308|binding|INFO|Setting lport b494f789-c137-45c5-9750-2bf0b43681ad up in Southbound
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.244 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:55:3f 10.100.0.12'], port_security=['fa:16:3e:26:55:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b3086cd3-fbaf-4f8e-bca2-162a0582d3a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f78f312dfcc4df6ba40b7c8a4e1aa97', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a9aebbe0-bcf2-4e40-aa62-10b8cea7c801', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c3ff6a3-0731-41dd-8d72-42e18a11ea75, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b494f789-c137-45c5-9750-2bf0b43681ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.245 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b494f789-c137-45c5-9750-2bf0b43681ad in datapath 0740d1ee-47e1-4bdf-bdc4-2dafff999f03 bound to our chassis
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.247 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0740d1ee-47e1-4bdf-bdc4-2dafff999f03
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.261 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc9a630-fa8f-4085-850f-e1adb846ede0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.262 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0740d1ee-41 in ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.264 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0740d1ee-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.264 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[290f0751-6b94-4338-a0ed-3d53dba42d52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 systemd-machined[210538]: New machine qemu-41-instance-00000023.
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.265 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1cd87c-6166-48a9-a2af-bd9f334e11e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.277 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c3971b62-b3a7-44b9-8719-45e589e81470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000023.
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.304 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6badbc24-3102-4360-b6c2-b1ddd9d831c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.339 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[305efb95-62b9-4f9a-8985-57d5d41c3db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 NetworkManager[50376]: <info>  [1765614265.3462] manager: (tap0740d1ee-40): new Veth device (/org/freedesktop/NetworkManager/Devices/140)
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.346 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[14b4f90a-620b-4308-87ca-6b1cc30acc1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 systemd-udevd[287784]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.378 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[19ac91df-8a33-46b2-81c2-bbf59d52a6ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.382 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[be62647d-188e-4d35-9d8c-a28b89bf6e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 NetworkManager[50376]: <info>  [1765614265.4129] device (tap0740d1ee-40): carrier: link connected
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.417 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7c89cfac-ae01-4e08-9f80-6531e6cacae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.437 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4faae7-33ce-4089-873b-3e8ff5aca6e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0740d1ee-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:40:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684262, 'reachable_time': 32953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287803, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.455 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55f8884f-22ff-4fa9-85ef-d72d633604c2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:4019'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 684262, 'tstamp': 684262}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287804, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.478 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5fdc3e-a336-4ab5-bbd4-cff41115e38f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0740d1ee-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:40:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684262, 'reachable_time': 32953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287805, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.509 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[91ccd67d-d9e5-4c21-83dd-f381bee17da5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.572 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3181ec-8f2b-4e4d-bf23-31b0595e02fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.573 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0740d1ee-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.574 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.574 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0740d1ee-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:25 compute-0 NetworkManager[50376]: <info>  [1765614265.5768] manager: (tap0740d1ee-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/176819737' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:24:25 compute-0 kernel: tap0740d1ee-40: entered promiscuous mode
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.579 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.586 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0740d1ee-40, col_values=(('external_ids', {'iface-id': '57853c24-d10c-4ddd-b435-f78af259fd27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:25 compute-0 ovn_controller[148476]: 2025-12-13T08:24:25Z|00309|binding|INFO|Releasing lport 57853c24-d10c-4ddd-b435-f78af259fd27 from this chassis (sb_readonly=0)
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.589 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.594 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2a7a77-963d-4993-be85-9cde1ca1e740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:25 compute-0 nova_compute[248510]: 2025-12-13 08:24:25.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.604 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-0740d1ee-47e1-4bdf-bdc4-2dafff999f03
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.pid.haproxy
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 0740d1ee-47e1-4bdf-bdc4-2dafff999f03
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.605 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'env', 'PROCESS_TAG=haproxy-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.034 248514 DEBUG nova.network.neutron [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.056 248514 INFO nova.compute.manager [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Took 3.09 seconds to deallocate network for instance.
Dec 13 08:24:26 compute-0 podman[287838]: 2025-12-13 08:24:26.068981002 +0000 UTC m=+0.065655131 container create 49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 08:24:26 compute-0 systemd[1]: Started libpod-conmon-49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01.scope.
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.106 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.108 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:26 compute-0 podman[287838]: 2025-12-13 08:24:26.031556038 +0000 UTC m=+0.028230187 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:24:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d33e02833de22fe4b162634b9e49aaa76311ba51063cfa9b4f10999c837011/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:24:26 compute-0 podman[287838]: 2025-12-13 08:24:26.170293742 +0000 UTC m=+0.166967861 container init 49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 08:24:26 compute-0 podman[287838]: 2025-12-13 08:24:26.175580152 +0000 UTC m=+0.172254281 container start 49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.178 248514 DEBUG nova.network.neutron [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:26 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [NOTICE]   (287857) : New worker (287859) forked
Dec 13 08:24:26 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [NOTICE]   (287857) : Loading success.
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.213 248514 INFO nova.compute.manager [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Took 1.46 seconds to deallocate network for instance.
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.275 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.279 248514 DEBUG oslo_concurrency.processutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.312 248514 DEBUG nova.compute.manager [req-89e79db9-cd12-4da3-915c-090407c17d1e req-4309feb6-2eb2-44b0-8bb6-d9a01adc3bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-vif-deleted-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.456 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.456 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.457 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.457 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.457 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.457 248514 WARNING nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received unexpected event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with vm_state active and task_state reboot_started_hard.
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.457 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-vif-unplugged-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.458 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.458 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.458 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.458 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] No waiting events found dispatching network-vif-unplugged-627622b8-ef54-4181-bd8d-e8e82650b143 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.458 248514 WARNING nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received unexpected event network-vif-unplugged-627622b8-ef54-4181-bd8d-e8e82650b143 for instance with vm_state deleted and task_state None.
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.458 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] No waiting events found dispatching network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 WARNING nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received unexpected event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 for instance with vm_state deleted and task_state None.
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.460 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.460 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.460 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.460 248514 WARNING nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received unexpected event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with vm_state active and task_state reboot_started_hard.
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.460 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.460 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.461 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.461 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.461 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.461 248514 WARNING nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received unexpected event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with vm_state active and task_state reboot_started_hard.
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.461 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-deleted-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.573 248514 DEBUG nova.compute.manager [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.574 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.574 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614266.5731237, b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.575 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] VM Resumed (Lifecycle Event)
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.580 248514 INFO nova.virt.libvirt.driver [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance rebooted successfully.
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.580 248514 DEBUG nova.compute.manager [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:26 compute-0 ceph-mon[76537]: pgmap v1777: 321 pgs: 321 active+clean; 326 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 157 op/s
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.738 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.741 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.745 248514 DEBUG oslo_concurrency.lockutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.771 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614266.5742145, b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.772 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] VM Started (Lifecycle Event)
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.791 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.796 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:24:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:24:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4110202748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.881 248514 DEBUG oslo_concurrency.processutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.888 248514 DEBUG nova.compute.provider_tree [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.908 248514 DEBUG nova.scheduler.client.report [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.937 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.940 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:26 compute-0 nova_compute[248510]: 2025-12-13 08:24:26.968 248514 INFO nova.scheduler.client.report [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Deleted allocations for instance d503913e-a05e-47d4-9366-db4426b9aac1
Dec 13 08:24:27 compute-0 nova_compute[248510]: 2025-12-13 08:24:27.034 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:27 compute-0 nova_compute[248510]: 2025-12-13 08:24:27.048 248514 DEBUG oslo_concurrency.processutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 326 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 437 KiB/s wr, 131 op/s
Dec 13 08:24:27 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4110202748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:24:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307190209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:27 compute-0 nova_compute[248510]: 2025-12-13 08:24:27.661 248514 DEBUG oslo_concurrency.processutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:27 compute-0 nova_compute[248510]: 2025-12-13 08:24:27.667 248514 DEBUG nova.compute.provider_tree [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.037 248514 DEBUG nova.scheduler.client.report [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.082 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.088 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.088 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.089 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.089 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.089 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.091 248514 INFO nova.compute.manager [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Terminating instance
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.092 248514 DEBUG nova.compute.manager [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.093 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.093 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.094 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.094 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.094 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.096 248514 INFO nova.compute.manager [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Terminating instance
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.097 248514 DEBUG nova.compute.manager [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.124 248514 INFO nova.scheduler.client.report [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Deleted allocations for instance dc64fea4-e9a8-47e7-8a3a-d01897fc81de
Dec 13 08:24:28 compute-0 kernel: tapb494f789-c1 (unregistering): left promiscuous mode
Dec 13 08:24:28 compute-0 NetworkManager[50376]: <info>  [1765614268.1332] device (tapb494f789-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:24:28 compute-0 ovn_controller[148476]: 2025-12-13T08:24:28Z|00310|binding|INFO|Releasing lport b494f789-c137-45c5-9750-2bf0b43681ad from this chassis (sb_readonly=0)
Dec 13 08:24:28 compute-0 ovn_controller[148476]: 2025-12-13T08:24:28Z|00311|binding|INFO|Setting lport b494f789-c137-45c5-9750-2bf0b43681ad down in Southbound
Dec 13 08:24:28 compute-0 ovn_controller[148476]: 2025-12-13T08:24:28Z|00312|binding|INFO|Removing iface tapb494f789-c1 ovn-installed in OVS
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.143 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 kernel: tapbc4158d8-49 (unregistering): left promiscuous mode
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.151 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:55:3f 10.100.0.12'], port_security=['fa:16:3e:26:55:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b3086cd3-fbaf-4f8e-bca2-162a0582d3a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f78f312dfcc4df6ba40b7c8a4e1aa97', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a9aebbe0-bcf2-4e40-aa62-10b8cea7c801', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c3ff6a3-0731-41dd-8d72-42e18a11ea75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b494f789-c137-45c5-9750-2bf0b43681ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.152 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b494f789-c137-45c5-9750-2bf0b43681ad in datapath 0740d1ee-47e1-4bdf-bdc4-2dafff999f03 unbound from our chassis
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.154 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0740d1ee-47e1-4bdf-bdc4-2dafff999f03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:24:28 compute-0 NetworkManager[50376]: <info>  [1765614268.1556] device (tapbc4158d8-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.158 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7472297c-b04d-4db7-8ebf-9c200166b59b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.158 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 namespace which is not needed anymore
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.159 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 ovn_controller[148476]: 2025-12-13T08:24:28Z|00313|binding|INFO|Releasing lport bc4158d8-4963-4009-a434-0a0106941c9d from this chassis (sb_readonly=0)
Dec 13 08:24:28 compute-0 ovn_controller[148476]: 2025-12-13T08:24:28Z|00314|binding|INFO|Setting lport bc4158d8-4963-4009-a434-0a0106941c9d down in Southbound
Dec 13 08:24:28 compute-0 ovn_controller[148476]: 2025-12-13T08:24:28Z|00315|binding|INFO|Removing iface tapbc4158d8-49 ovn-installed in OVS
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.175 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:48:0d 10.100.0.6'], port_security=['fa:16:3e:37:48:0d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3b43a9c7-85e7-4558-bd2f-e4712882021e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd758160-971f-4f0e-a4ca-a13304d3c491', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bc4158d8-4963-4009-a434-0a0106941c9d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.180 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000023.scope: Deactivated successfully.
Dec 13 08:24:28 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000023.scope: Consumed 2.894s CPU time.
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.196 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:28 compute-0 systemd-machined[210538]: Machine qemu-41-instance-00000023 terminated.
Dec 13 08:24:28 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Dec 13 08:24:28 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Consumed 16.494s CPU time.
Dec 13 08:24:28 compute-0 systemd-machined[210538]: Machine qemu-35-instance-0000001e terminated.
Dec 13 08:24:28 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [NOTICE]   (287857) : haproxy version is 2.8.14-c23fe91
Dec 13 08:24:28 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [NOTICE]   (287857) : path to executable is /usr/sbin/haproxy
Dec 13 08:24:28 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [WARNING]  (287857) : Exiting Master process...
Dec 13 08:24:28 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [WARNING]  (287857) : Exiting Master process...
Dec 13 08:24:28 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [ALERT]    (287857) : Current worker (287859) exited with code 143 (Terminated)
Dec 13 08:24:28 compute-0 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [WARNING]  (287857) : All workers exited. Exiting... (0)
Dec 13 08:24:28 compute-0 systemd[1]: libpod-49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01.scope: Deactivated successfully.
Dec 13 08:24:28 compute-0 conmon[287853]: conmon 49118c3b6b0e483ae4f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01.scope/container/memory.events
Dec 13 08:24:28 compute-0 podman[287977]: 2025-12-13 08:24:28.304939371 +0000 UTC m=+0.050559438 container died 49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 08:24:28 compute-0 kernel: tapbc4158d8-49: entered promiscuous mode
Dec 13 08:24:28 compute-0 NetworkManager[50376]: <info>  [1765614268.3239] manager: (tapbc4158d8-49): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Dec 13 08:24:28 compute-0 systemd-udevd[287791]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:24:28 compute-0 kernel: tapbc4158d8-49 (unregistering): left promiscuous mode
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.336 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 NetworkManager[50376]: <info>  [1765614268.3375] manager: (tapb494f789-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Dec 13 08:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01-userdata-shm.mount: Deactivated successfully.
Dec 13 08:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-87d33e02833de22fe4b162634b9e49aaa76311ba51063cfa9b4f10999c837011-merged.mount: Deactivated successfully.
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.356 248514 INFO nova.virt.libvirt.driver [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Instance destroyed successfully.
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.356 248514 DEBUG nova.objects.instance [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'resources' on Instance uuid 3b43a9c7-85e7-4558-bd2f-e4712882021e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:28 compute-0 podman[287977]: 2025-12-13 08:24:28.357769785 +0000 UTC m=+0.103389832 container cleanup 49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.359 248514 INFO nova.virt.libvirt.driver [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance destroyed successfully.
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.359 248514 DEBUG nova.objects.instance [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lazy-loading 'resources' on Instance uuid b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:28 compute-0 systemd[1]: libpod-conmon-49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01.scope: Deactivated successfully.
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.404 248514 DEBUG nova.virt.libvirt.vif [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-370636454',display_name='tempest-InstanceActionsTestJSON-server-370636454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-370636454',id=35,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:24:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f78f312dfcc4df6ba40b7c8a4e1aa97',ramdisk_id='',reservation_id='r-b8os57ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1859862292',owner_user_name='tempest-InstanceActionsTestJSON-1859862292-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:24:26Z,user_data=None,user_id='6827fc2174b74c2a92803d852e87c70a',uuid=b3086cd3-fbaf-4f8e-bca2-162a0582d3a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.405 248514 DEBUG nova.network.os_vif_util [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converting VIF {"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.405 248514 DEBUG nova.network.os_vif_util [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.405 248514 DEBUG os_vif [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.407 248514 DEBUG nova.virt.libvirt.vif [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.407 248514 DEBUG nova.network.os_vif_util [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.408 248514 DEBUG nova.network.os_vif_util [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.408 248514 DEBUG os_vif [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.409 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.409 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb494f789-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.416 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.416 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc4158d8-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.418 248514 INFO os_vif [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1')
Dec 13 08:24:28 compute-0 podman[288019]: 2025-12-13 08:24:28.429285619 +0000 UTC m=+0.044701484 container remove 49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.436 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[34258883-0646-4d48-a27c-753451abee99]: (4, ('Sat Dec 13 08:24:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 (49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01)\n49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01\nSat Dec 13 08:24:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 (49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01)\n49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.438 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eef3711f-13df-4524-b38c-1a0c913a52a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.439 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0740d1ee-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.440 248514 INFO os_vif [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49')
Dec 13 08:24:28 compute-0 kernel: tap0740d1ee-40: left promiscuous mode
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.459 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.465 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cad9cc-eeb8-414c-a1ba-502c5659bacb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.477 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9de6e19-467f-460a-8238-49fc93db755c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.479 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[305fbe83-b991-4f1a-ab40-c597e5131744]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.497 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b503935f-3d7f-4eed-b19a-60ba7033b349]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684254, 'reachable_time': 36671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288072, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d0740d1ee\x2d47e1\x2d4bdf\x2dbdc4\x2d2dafff999f03.mount: Deactivated successfully.
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.501 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.501 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5e2aa0-298a-40d7-8167-7503c9d4ce32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.503 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bc4158d8-4963-4009-a434-0a0106941c9d in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.505 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ca92864-3b70-4794-9db1-fa08128cef92, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.506 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af6f24d7-034e-4dda-b75c-7b632e5bad05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.507 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 namespace which is not needed anymore
Dec 13 08:24:28 compute-0 ceph-mon[76537]: pgmap v1778: 321 pgs: 321 active+clean; 326 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 437 KiB/s wr, 131 op/s
Dec 13 08:24:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/307190209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:28 compute-0 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [NOTICE]   (284593) : haproxy version is 2.8.14-c23fe91
Dec 13 08:24:28 compute-0 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [NOTICE]   (284593) : path to executable is /usr/sbin/haproxy
Dec 13 08:24:28 compute-0 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [WARNING]  (284593) : Exiting Master process...
Dec 13 08:24:28 compute-0 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [ALERT]    (284593) : Current worker (284595) exited with code 143 (Terminated)
Dec 13 08:24:28 compute-0 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [WARNING]  (284593) : All workers exited. Exiting... (0)
Dec 13 08:24:28 compute-0 systemd[1]: libpod-e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b.scope: Deactivated successfully.
Dec 13 08:24:28 compute-0 podman[288089]: 2025-12-13 08:24:28.66924186 +0000 UTC m=+0.058127135 container died e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 08:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b-userdata-shm.mount: Deactivated successfully.
Dec 13 08:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b4067aa9b3f2b0ef0dd9859d3a57acc4d914a6c704c9a2c14d2ad026817e68f-merged.mount: Deactivated successfully.
Dec 13 08:24:28 compute-0 podman[288089]: 2025-12-13 08:24:28.710898738 +0000 UTC m=+0.099784013 container cleanup e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.733 248514 INFO nova.virt.libvirt.driver [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Deleting instance files /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_del
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.734 248514 INFO nova.virt.libvirt.driver [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Deletion of /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_del complete
Dec 13 08:24:28 compute-0 systemd[1]: libpod-conmon-e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b.scope: Deactivated successfully.
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.748 248514 INFO nova.virt.libvirt.driver [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Deleting instance files /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e_del
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.750 248514 INFO nova.virt.libvirt.driver [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Deletion of /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e_del complete
Dec 13 08:24:28 compute-0 podman[288119]: 2025-12-13 08:24:28.777262275 +0000 UTC m=+0.041354761 container remove e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.784 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[31577268-1040-4406-8704-e43c3565d478]: (4, ('Sat Dec 13 08:24:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 (e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b)\ne5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b\nSat Dec 13 08:24:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 (e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b)\ne5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.787 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c955946-729a-45b8-b8b4-03aab6a3433a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.788 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.791 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 kernel: tap1ca92864-30: left promiscuous mode
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.805 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.809 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5e954ec9-9fbd-4820-8f7c-399b4c518511]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.833 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8d419911-ab76-498c-952c-fc13b17677da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.835 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8942a49b-592f-4ab1-bd48-e5584f5b7c7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.843 248514 INFO nova.compute.manager [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Took 0.75 seconds to destroy the instance on the hypervisor.
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.843 248514 DEBUG oslo.service.loopingcall [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.843 248514 DEBUG nova.compute.manager [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.844 248514 DEBUG nova.network.neutron [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.851 248514 DEBUG nova.compute.manager [req-890a5f9e-be97-4574-871e-1d38c6d1c929 req-9245e84f-373f-4a93-8ddb-1c09adba64f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-unplugged-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.851 248514 DEBUG oslo_concurrency.lockutils [req-890a5f9e-be97-4574-871e-1d38c6d1c929 req-9245e84f-373f-4a93-8ddb-1c09adba64f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.851 248514 DEBUG oslo_concurrency.lockutils [req-890a5f9e-be97-4574-871e-1d38c6d1c929 req-9245e84f-373f-4a93-8ddb-1c09adba64f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.852 248514 DEBUG oslo_concurrency.lockutils [req-890a5f9e-be97-4574-871e-1d38c6d1c929 req-9245e84f-373f-4a93-8ddb-1c09adba64f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.852 248514 DEBUG nova.compute.manager [req-890a5f9e-be97-4574-871e-1d38c6d1c929 req-9245e84f-373f-4a93-8ddb-1c09adba64f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-unplugged-bc4158d8-4963-4009-a434-0a0106941c9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.852 248514 DEBUG nova.compute.manager [req-890a5f9e-be97-4574-871e-1d38c6d1c929 req-9245e84f-373f-4a93-8ddb-1c09adba64f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-unplugged-bc4158d8-4963-4009-a434-0a0106941c9d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.855 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86047300-5490-4e64-b1fd-9b761187f19e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675787, 'reachable_time': 19581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288134, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.858 248514 INFO nova.compute.manager [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Took 0.76 seconds to destroy the instance on the hypervisor.
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.858 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:24:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.858 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b54a160f-3235-4e04-bb7b-05ec7ed00208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.858 248514 DEBUG oslo.service.loopingcall [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.858 248514 DEBUG nova.compute.manager [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.859 248514 DEBUG nova.network.neutron [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.942 248514 DEBUG nova.compute.manager [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-unplugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.943 248514 DEBUG oslo_concurrency.lockutils [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.943 248514 DEBUG oslo_concurrency.lockutils [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.943 248514 DEBUG oslo_concurrency.lockutils [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.944 248514 DEBUG nova.compute.manager [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-unplugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.944 248514 DEBUG nova.compute.manager [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-unplugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.944 248514 DEBUG nova.compute.manager [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.944 248514 DEBUG oslo_concurrency.lockutils [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.945 248514 DEBUG oslo_concurrency.lockutils [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.945 248514 DEBUG oslo_concurrency.lockutils [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.946 248514 DEBUG nova.compute.manager [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:28 compute-0 nova_compute[248510]: 2025-12-13 08:24:28.946 248514 WARNING nova.compute.manager [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received unexpected event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with vm_state active and task_state deleting.
Dec 13 08:24:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 150 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 440 KiB/s wr, 268 op/s
Dec 13 08:24:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d1ca92864\x2d3b70\x2d4794\x2d9db1\x2dfa08128cef92.mount: Deactivated successfully.
Dec 13 08:24:29 compute-0 nova_compute[248510]: 2025-12-13 08:24:29.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.043687) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614270043886, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1220, "num_deletes": 259, "total_data_size": 1804466, "memory_usage": 1832064, "flush_reason": "Manual Compaction"}
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614270058963, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1761195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32820, "largest_seqno": 34039, "table_properties": {"data_size": 1755397, "index_size": 3065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12854, "raw_average_key_size": 19, "raw_value_size": 1743488, "raw_average_value_size": 2690, "num_data_blocks": 137, "num_entries": 648, "num_filter_entries": 648, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614163, "oldest_key_time": 1765614163, "file_creation_time": 1765614270, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 15316 microseconds, and 7295 cpu microseconds.
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.059025) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1761195 bytes OK
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.059060) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.060392) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.060409) EVENT_LOG_v1 {"time_micros": 1765614270060404, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.060438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1798811, prev total WAL file size 1798811, number of live WAL files 2.
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.061480) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323539' seq:0, type:0; will stop at (end)
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1719KB)], [71(9305KB)]
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614270061597, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11289796, "oldest_snapshot_seqno": -1}
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5873 keys, 11185935 bytes, temperature: kUnknown
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614270169815, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 11185935, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11143298, "index_size": 26847, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 148389, "raw_average_key_size": 25, "raw_value_size": 11034340, "raw_average_value_size": 1878, "num_data_blocks": 1100, "num_entries": 5873, "num_filter_entries": 5873, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614270, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.170325) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 11185935 bytes
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.171940) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.2 rd, 103.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.1 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(12.8) write-amplify(6.4) OK, records in: 6408, records dropped: 535 output_compression: NoCompression
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.171962) EVENT_LOG_v1 {"time_micros": 1765614270171953, "job": 40, "event": "compaction_finished", "compaction_time_micros": 108399, "compaction_time_cpu_micros": 27554, "output_level": 6, "num_output_files": 1, "total_output_size": 11185935, "num_input_records": 6408, "num_output_records": 5873, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614270172449, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614270174190, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.061344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.174297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.174306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.174308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.174309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:24:30 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.174311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:24:30 compute-0 ceph-mon[76537]: pgmap v1779: 321 pgs: 321 active+clean; 150 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 440 KiB/s wr, 268 op/s
Dec 13 08:24:30 compute-0 nova_compute[248510]: 2025-12-13 08:24:30.823 248514 DEBUG nova.network.neutron [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:30 compute-0 nova_compute[248510]: 2025-12-13 08:24:30.839 248514 INFO nova.compute.manager [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Took 1.98 seconds to deallocate network for instance.
Dec 13 08:24:30 compute-0 nova_compute[248510]: 2025-12-13 08:24:30.996 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:30 compute-0 nova_compute[248510]: 2025-12-13 08:24:30.996 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.000 248514 DEBUG nova.compute.manager [req-5690185b-7dd9-4bfb-9728-b383a2070ada req-1dabc641-c9f6-4882-8dd3-501858dce121 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.001 248514 DEBUG oslo_concurrency.lockutils [req-5690185b-7dd9-4bfb-9728-b383a2070ada req-1dabc641-c9f6-4882-8dd3-501858dce121 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.001 248514 DEBUG oslo_concurrency.lockutils [req-5690185b-7dd9-4bfb-9728-b383a2070ada req-1dabc641-c9f6-4882-8dd3-501858dce121 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.002 248514 DEBUG oslo_concurrency.lockutils [req-5690185b-7dd9-4bfb-9728-b383a2070ada req-1dabc641-c9f6-4882-8dd3-501858dce121 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.002 248514 DEBUG nova.compute.manager [req-5690185b-7dd9-4bfb-9728-b383a2070ada req-1dabc641-c9f6-4882-8dd3-501858dce121 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.002 248514 WARNING nova.compute.manager [req-5690185b-7dd9-4bfb-9728-b383a2070ada req-1dabc641-c9f6-4882-8dd3-501858dce121 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received unexpected event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d for instance with vm_state active and task_state deleting.
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.031 248514 DEBUG nova.objects.instance [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lazy-loading 'flavor' on Instance uuid ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.076 248514 DEBUG oslo_concurrency.lockutils [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.076 248514 DEBUG oslo_concurrency.lockutils [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 150 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 177 op/s
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.105 248514 DEBUG nova.compute.manager [req-0a54e2d4-f715-4d40-b60c-775936225170 req-f8680226-b608-426d-981c-d7a2eb2662ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-deleted-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.114 248514 DEBUG nova.network.neutron [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.121 248514 DEBUG oslo_concurrency.processutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.195 248514 INFO nova.compute.manager [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Took 2.35 seconds to deallocate network for instance.
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.307 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:24:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1653095254' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.674 248514 DEBUG oslo_concurrency.processutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.681 248514 DEBUG nova.compute.provider_tree [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.712 248514 DEBUG nova.scheduler.client.report [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.782 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.785 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.835 248514 INFO nova.scheduler.client.report [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Deleted allocations for instance b3086cd3-fbaf-4f8e-bca2-162a0582d3a4
Dec 13 08:24:31 compute-0 nova_compute[248510]: 2025-12-13 08:24:31.900 248514 DEBUG oslo_concurrency.processutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:32 compute-0 nova_compute[248510]: 2025-12-13 08:24:32.024 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:24:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2648067625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:32 compute-0 nova_compute[248510]: 2025-12-13 08:24:32.468 248514 DEBUG oslo_concurrency.processutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:32 compute-0 nova_compute[248510]: 2025-12-13 08:24:32.476 248514 DEBUG nova.compute.provider_tree [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:24:32 compute-0 nova_compute[248510]: 2025-12-13 08:24:32.527 248514 DEBUG nova.scheduler.client.report [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:24:32 compute-0 ceph-mon[76537]: pgmap v1780: 321 pgs: 321 active+clean; 150 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 177 op/s
Dec 13 08:24:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1653095254' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2648067625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:32 compute-0 nova_compute[248510]: 2025-12-13 08:24:32.697 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:32 compute-0 nova_compute[248510]: 2025-12-13 08:24:32.751 248514 INFO nova.scheduler.client.report [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Deleted allocations for instance 3b43a9c7-85e7-4558-bd2f-e4712882021e
Dec 13 08:24:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 121 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 193 op/s
Dec 13 08:24:33 compute-0 nova_compute[248510]: 2025-12-13 08:24:33.180 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:33 compute-0 nova_compute[248510]: 2025-12-13 08:24:33.209 248514 DEBUG nova.network.neutron [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:24:33 compute-0 nova_compute[248510]: 2025-12-13 08:24:33.344 248514 DEBUG nova.compute.manager [req-077b5d46-edde-4534-9528-d243472b1ea9 req-9361c0a8-3e44-40db-812c-314932911455 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-deleted-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:33 compute-0 nova_compute[248510]: 2025-12-13 08:24:33.419 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:33 compute-0 nova_compute[248510]: 2025-12-13 08:24:33.428 248514 DEBUG nova.compute.manager [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:33 compute-0 nova_compute[248510]: 2025-12-13 08:24:33.429 248514 DEBUG nova.compute.manager [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing instance network info cache due to event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:24:33 compute-0 nova_compute[248510]: 2025-12-13 08:24:33.429 248514 DEBUG oslo_concurrency.lockutils [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:34 compute-0 ceph-mon[76537]: pgmap v1781: 321 pgs: 321 active+clean; 121 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 193 op/s
Dec 13 08:24:34 compute-0 nova_compute[248510]: 2025-12-13 08:24:34.626 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:34 compute-0 nova_compute[248510]: 2025-12-13 08:24:34.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:24:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:24:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 174 op/s
Dec 13 08:24:35 compute-0 nova_compute[248510]: 2025-12-13 08:24:35.480 248514 DEBUG nova.network.neutron [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:35 compute-0 nova_compute[248510]: 2025-12-13 08:24:35.728 248514 DEBUG oslo_concurrency.lockutils [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:35 compute-0 nova_compute[248510]: 2025-12-13 08:24:35.729 248514 DEBUG nova.compute.manager [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Dec 13 08:24:35 compute-0 nova_compute[248510]: 2025-12-13 08:24:35.729 248514 DEBUG nova.compute.manager [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] network_info to inject: |[{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Dec 13 08:24:35 compute-0 nova_compute[248510]: 2025-12-13 08:24:35.732 248514 DEBUG oslo_concurrency.lockutils [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:35 compute-0 nova_compute[248510]: 2025-12-13 08:24:35.732 248514 DEBUG nova.network.neutron [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:24:36 compute-0 ceph-mon[76537]: pgmap v1782: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 174 op/s
Dec 13 08:24:36 compute-0 ovn_controller[148476]: 2025-12-13T08:24:36Z|00316|binding|INFO|Releasing lport 54f0e9c1-d2c9-4d7a-b554-d7af88f55e22 from this chassis (sb_readonly=0)
Dec 13 08:24:36 compute-0 nova_compute[248510]: 2025-12-13 08:24:36.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:36 compute-0 nova_compute[248510]: 2025-12-13 08:24:36.730 248514 DEBUG nova.objects.instance [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lazy-loading 'flavor' on Instance uuid ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:36 compute-0 nova_compute[248510]: 2025-12-13 08:24:36.757 248514 DEBUG oslo_concurrency.lockutils [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.5 KiB/s wr, 153 op/s
Dec 13 08:24:37 compute-0 nova_compute[248510]: 2025-12-13 08:24:37.541 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614262.5397716, d503913e-a05e-47d4-9366-db4426b9aac1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:24:37 compute-0 nova_compute[248510]: 2025-12-13 08:24:37.542 248514 INFO nova.compute.manager [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] VM Stopped (Lifecycle Event)
Dec 13 08:24:37 compute-0 nova_compute[248510]: 2025-12-13 08:24:37.662 248514 DEBUG nova.compute.manager [None req-90b003fe-38c0-41ce-acee-a3e89a666ad9 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:38 compute-0 ceph-mon[76537]: pgmap v1783: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.5 KiB/s wr, 153 op/s
Dec 13 08:24:38 compute-0 nova_compute[248510]: 2025-12-13 08:24:38.422 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:38 compute-0 nova_compute[248510]: 2025-12-13 08:24:38.722 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614263.7212303, dc64fea4-e9a8-47e7-8a3a-d01897fc81de => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:24:38 compute-0 nova_compute[248510]: 2025-12-13 08:24:38.722 248514 INFO nova.compute.manager [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] VM Stopped (Lifecycle Event)
Dec 13 08:24:38 compute-0 nova_compute[248510]: 2025-12-13 08:24:38.760 248514 DEBUG nova.compute.manager [None req-7e0db656-f419-441d-97f9-d1c11f6737c5 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.5 KiB/s wr, 153 op/s
Dec 13 08:24:39 compute-0 nova_compute[248510]: 2025-12-13 08:24:39.337 248514 DEBUG nova.network.neutron [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updated VIF entry in instance network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:24:39 compute-0 nova_compute[248510]: 2025-12-13 08:24:39.337 248514 DEBUG nova.network.neutron [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:39 compute-0 nova_compute[248510]: 2025-12-13 08:24:39.366 248514 DEBUG oslo_concurrency.lockutils [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:39 compute-0 nova_compute[248510]: 2025-12-13 08:24:39.369 248514 DEBUG oslo_concurrency.lockutils [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:39 compute-0 nova_compute[248510]: 2025-12-13 08:24:39.665 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:39 compute-0 nova_compute[248510]: 2025-12-13 08:24:39.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:24:39 compute-0 nova_compute[248510]: 2025-12-13 08:24:39.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:24:39 compute-0 nova_compute[248510]: 2025-12-13 08:24:39.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:24:39 compute-0 nova_compute[248510]: 2025-12-13 08:24:39.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:24:39 compute-0 ovn_controller[148476]: 2025-12-13T08:24:39Z|00317|binding|INFO|Releasing lport 54f0e9c1-d2c9-4d7a-b554-d7af88f55e22 from this chassis (sb_readonly=0)
Dec 13 08:24:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:24:40 compute-0 nova_compute[248510]: 2025-12-13 08:24:40.047 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:24:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:24:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:24:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:24:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:24:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:24:40 compute-0 ceph-mon[76537]: pgmap v1784: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.5 KiB/s wr, 153 op/s
Dec 13 08:24:40 compute-0 nova_compute[248510]: 2025-12-13 08:24:40.294 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 3.0 KiB/s wr, 15 op/s
Dec 13 08:24:41 compute-0 nova_compute[248510]: 2025-12-13 08:24:41.645 248514 DEBUG nova.network.neutron [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:24:41 compute-0 nova_compute[248510]: 2025-12-13 08:24:41.947 248514 DEBUG nova.compute.manager [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:41 compute-0 nova_compute[248510]: 2025-12-13 08:24:41.947 248514 DEBUG nova.compute.manager [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing instance network info cache due to event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:24:41 compute-0 nova_compute[248510]: 2025-12-13 08:24:41.948 248514 DEBUG oslo_concurrency.lockutils [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:24:42 compute-0 ceph-mon[76537]: pgmap v1785: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 3.0 KiB/s wr, 15 op/s
Dec 13 08:24:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 3.0 KiB/s wr, 15 op/s
Dec 13 08:24:43 compute-0 nova_compute[248510]: 2025-12-13 08:24:43.350 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614268.348804, 3b43a9c7-85e7-4558-bd2f-e4712882021e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:24:43 compute-0 nova_compute[248510]: 2025-12-13 08:24:43.351 248514 INFO nova.compute.manager [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] VM Stopped (Lifecycle Event)
Dec 13 08:24:43 compute-0 nova_compute[248510]: 2025-12-13 08:24:43.356 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614268.3554, b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:24:43 compute-0 nova_compute[248510]: 2025-12-13 08:24:43.356 248514 INFO nova.compute.manager [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] VM Stopped (Lifecycle Event)
Dec 13 08:24:43 compute-0 nova_compute[248510]: 2025-12-13 08:24:43.424 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:43 compute-0 nova_compute[248510]: 2025-12-13 08:24:43.433 248514 DEBUG nova.compute.manager [None req-591bbb0f-b706-43e3-8e1a-dcc3cdec21b7 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:43 compute-0 nova_compute[248510]: 2025-12-13 08:24:43.436 248514 DEBUG nova.compute.manager [None req-36d827d1-7b3a-4920-adb2-6ae593d450d6 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:24:43 compute-0 ovn_controller[148476]: 2025-12-13T08:24:43Z|00318|binding|INFO|Releasing lport 54f0e9c1-d2c9-4d7a-b554-d7af88f55e22 from this chassis (sb_readonly=0)
Dec 13 08:24:43 compute-0 nova_compute[248510]: 2025-12-13 08:24:43.794 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:44 compute-0 ceph-mon[76537]: pgmap v1786: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 3.0 KiB/s wr, 15 op/s
Dec 13 08:24:44 compute-0 nova_compute[248510]: 2025-12-13 08:24:44.669 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:44 compute-0 nova_compute[248510]: 2025-12-13 08:24:44.879 248514 DEBUG nova.network.neutron [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:44 compute-0 podman[288181]: 2025-12-13 08:24:44.972139299 +0000 UTC m=+0.052725982 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 08:24:44 compute-0 podman[288180]: 2025-12-13 08:24:44.981444018 +0000 UTC m=+0.066735707 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 13 08:24:45 compute-0 podman[288179]: 2025-12-13 08:24:45.010128746 +0000 UTC m=+0.097782844 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:24:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:24:45 compute-0 nova_compute[248510]: 2025-12-13 08:24:45.082 248514 DEBUG oslo_concurrency.lockutils [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:45 compute-0 nova_compute[248510]: 2025-12-13 08:24:45.083 248514 DEBUG nova.compute.manager [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Dec 13 08:24:45 compute-0 nova_compute[248510]: 2025-12-13 08:24:45.083 248514 DEBUG nova.compute.manager [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] network_info to inject: |[{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Dec 13 08:24:45 compute-0 nova_compute[248510]: 2025-12-13 08:24:45.086 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:45 compute-0 nova_compute[248510]: 2025-12-13 08:24:45.087 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:24:45 compute-0 nova_compute[248510]: 2025-12-13 08:24:45.087 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:24:46 compute-0 ceph-mon[76537]: pgmap v1787: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.308 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.309 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.309 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.309 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.309 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.310 248514 INFO nova.compute.manager [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Terminating instance
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.311 248514 DEBUG nova.compute.manager [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:24:46 compute-0 kernel: tap5d8e1c45-4a (unregistering): left promiscuous mode
Dec 13 08:24:46 compute-0 NetworkManager[50376]: <info>  [1765614286.3672] device (tap5d8e1c45-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:24:46 compute-0 ovn_controller[148476]: 2025-12-13T08:24:46Z|00319|binding|INFO|Releasing lport 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 from this chassis (sb_readonly=0)
Dec 13 08:24:46 compute-0 ovn_controller[148476]: 2025-12-13T08:24:46Z|00320|binding|INFO|Setting lport 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 down in Southbound
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.374 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:46 compute-0 ovn_controller[148476]: 2025-12-13T08:24:46Z|00321|binding|INFO|Removing iface tap5d8e1c45-4a ovn-installed in OVS
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.378 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.420 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:58:17 10.100.0.7'], port_security=['fa:16:3e:83:58:17 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2be5ed2a3b1a405bb6891ecdc5cba68c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '16148d83-2b30-49dd-9926-d0fb6490d2c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=615ba7d0-57bb-42d2-948a-6426e9af82d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.422 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 in datapath e08cb57c-0bd2-4c88-a4f8-e9d9be925301 unbound from our chassis
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.425 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e08cb57c-0bd2-4c88-a4f8-e9d9be925301, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.426 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53b865f3-f7bf-4ae1-aa02-47c7cb79d4ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.427 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301 namespace which is not needed anymore
Dec 13 08:24:46 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000022.scope: Deactivated successfully.
Dec 13 08:24:46 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000022.scope: Consumed 14.306s CPU time.
Dec 13 08:24:46 compute-0 systemd-machined[210538]: Machine qemu-39-instance-00000022 terminated.
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.548 248514 INFO nova.virt.libvirt.driver [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Instance destroyed successfully.
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.549 248514 DEBUG nova.objects.instance [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lazy-loading 'resources' on Instance uuid ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:24:46 compute-0 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [NOTICE]   (286217) : haproxy version is 2.8.14-c23fe91
Dec 13 08:24:46 compute-0 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [NOTICE]   (286217) : path to executable is /usr/sbin/haproxy
Dec 13 08:24:46 compute-0 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [WARNING]  (286217) : Exiting Master process...
Dec 13 08:24:46 compute-0 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [ALERT]    (286217) : Current worker (286236) exited with code 143 (Terminated)
Dec 13 08:24:46 compute-0 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [WARNING]  (286217) : All workers exited. Exiting... (0)
Dec 13 08:24:46 compute-0 systemd[1]: libpod-502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef.scope: Deactivated successfully.
Dec 13 08:24:46 compute-0 podman[288267]: 2025-12-13 08:24:46.595607246 +0000 UTC m=+0.059727495 container died 502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef-userdata-shm.mount: Deactivated successfully.
Dec 13 08:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c545acf84645020cc8882e401df567b891cd88033c1b634e5d68fb95026bcdfb-merged.mount: Deactivated successfully.
Dec 13 08:24:46 compute-0 podman[288267]: 2025-12-13 08:24:46.641534299 +0000 UTC m=+0.105654528 container cleanup 502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:24:46 compute-0 systemd[1]: libpod-conmon-502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef.scope: Deactivated successfully.
Dec 13 08:24:46 compute-0 podman[288306]: 2025-12-13 08:24:46.722143047 +0000 UTC m=+0.055095010 container remove 502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.728 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c25c7faf-c8fe-4b02-b5ea-db6a5fff5418]: (4, ('Sat Dec 13 08:24:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301 (502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef)\n502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef\nSat Dec 13 08:24:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301 (502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef)\n502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.730 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1e40a0b8-cad8-4983-af68-d84c3f7d320a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.731 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape08cb57c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:46 compute-0 kernel: tape08cb57c-00: left promiscuous mode
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.751 248514 DEBUG nova.virt.libvirt.vif [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1901007496',display_name='tempest-AttachInterfacesUnderV243Test-server-1901007496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1901007496',id=34,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAtqDaZq3IK7Bvm/s6fqCH+TSHLKWsERX0aPeV408BGJSMsRQoO1UjptArZn77j735/fg+c2goyKkkvVN7UQeehgaqDzhHMveiUhv8vzTex1upUSSOpKWfKRhsOR5NuVjA==',key_name='tempest-keypair-862524999',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:24:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2be5ed2a3b1a405bb6891ecdc5cba68c',ramdisk_id='',reservation_id='r-l244eynx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1008670327',owner_user_name='tempest-AttachInterfacesUnderV243Test-1008670327-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:24:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5faa7317a5cd4b748a984970f79ef52b',uuid=ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.752 248514 DEBUG nova.network.os_vif_util [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Converting VIF {"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.753 248514 DEBUG nova.network.os_vif_util [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.753 248514 DEBUG os_vif [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.757 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d8e1c45-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.756 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef03529b-126c-4ae4-8920-c14e9f3ef375]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.759 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:46 compute-0 nova_compute[248510]: 2025-12-13 08:24:46.762 248514 INFO os_vif [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a')
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.776 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[059664b4-edd9-49c5-981f-940b2055884d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.778 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbb16e1-3117-4cfb-8325-c30eb34befb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.796 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9b51e1-bed5-43a8-aead-3b657f18c88a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681046, 'reachable_time': 23671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288338, 'error': None, 'target': 'ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:46 compute-0 systemd[1]: run-netns-ovnmeta\x2de08cb57c\x2d0bd2\x2d4c88\x2da4f8\x2de9d9be925301.mount: Deactivated successfully.
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.800 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:24:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.800 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[8872fc67-e88b-451f-91f5-36b2deb76094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:24:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:24:47 compute-0 nova_compute[248510]: 2025-12-13 08:24:47.209 248514 DEBUG nova.compute.manager [req-ecc15ad2-fc58-4fce-a92e-6bf21d5963b1 req-32cb73f3-f3eb-486c-acc7-4f22a08ed00c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-vif-unplugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:47 compute-0 nova_compute[248510]: 2025-12-13 08:24:47.210 248514 DEBUG oslo_concurrency.lockutils [req-ecc15ad2-fc58-4fce-a92e-6bf21d5963b1 req-32cb73f3-f3eb-486c-acc7-4f22a08ed00c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:47 compute-0 nova_compute[248510]: 2025-12-13 08:24:47.210 248514 DEBUG oslo_concurrency.lockutils [req-ecc15ad2-fc58-4fce-a92e-6bf21d5963b1 req-32cb73f3-f3eb-486c-acc7-4f22a08ed00c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:47 compute-0 nova_compute[248510]: 2025-12-13 08:24:47.211 248514 DEBUG oslo_concurrency.lockutils [req-ecc15ad2-fc58-4fce-a92e-6bf21d5963b1 req-32cb73f3-f3eb-486c-acc7-4f22a08ed00c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:47 compute-0 nova_compute[248510]: 2025-12-13 08:24:47.211 248514 DEBUG nova.compute.manager [req-ecc15ad2-fc58-4fce-a92e-6bf21d5963b1 req-32cb73f3-f3eb-486c-acc7-4f22a08ed00c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] No waiting events found dispatching network-vif-unplugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:47 compute-0 nova_compute[248510]: 2025-12-13 08:24:47.211 248514 DEBUG nova.compute.manager [req-ecc15ad2-fc58-4fce-a92e-6bf21d5963b1 req-32cb73f3-f3eb-486c-acc7-4f22a08ed00c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-vif-unplugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:24:47 compute-0 nova_compute[248510]: 2025-12-13 08:24:47.311 248514 INFO nova.virt.libvirt.driver [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Deleting instance files /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_del
Dec 13 08:24:47 compute-0 nova_compute[248510]: 2025-12-13 08:24:47.313 248514 INFO nova.virt.libvirt.driver [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Deletion of /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_del complete
Dec 13 08:24:47 compute-0 nova_compute[248510]: 2025-12-13 08:24:47.457 248514 INFO nova.compute.manager [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Took 1.15 seconds to destroy the instance on the hypervisor.
Dec 13 08:24:47 compute-0 nova_compute[248510]: 2025-12-13 08:24:47.458 248514 DEBUG oslo.service.loopingcall [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:24:47 compute-0 nova_compute[248510]: 2025-12-13 08:24:47.459 248514 DEBUG nova.compute.manager [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:24:47 compute-0 nova_compute[248510]: 2025-12-13 08:24:47.459 248514 DEBUG nova.network.neutron [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.237 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.268 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.269 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.269 248514 DEBUG oslo_concurrency.lockutils [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.269 248514 DEBUG nova.network.neutron [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.270 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.271 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.271 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.271 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.272 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.272 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:24:48 compute-0 ceph-mon[76537]: pgmap v1788: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.337 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.337 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.337 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.338 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.338 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:24:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3307034094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:48 compute-0 nova_compute[248510]: 2025-12-13 08:24:48.900 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.085 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.086 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4396MB free_disk=59.94239760842174GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.086 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.087 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.225 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.226 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.226 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.264 248514 DEBUG nova.network.neutron [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3307034094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.291 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.391 248514 INFO nova.compute.manager [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Took 1.93 seconds to deallocate network for instance.
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.439 248514 DEBUG nova.compute.manager [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.439 248514 DEBUG oslo_concurrency.lockutils [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.440 248514 DEBUG oslo_concurrency.lockutils [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.440 248514 DEBUG oslo_concurrency.lockutils [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.440 248514 DEBUG nova.compute.manager [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] No waiting events found dispatching network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.440 248514 WARNING nova.compute.manager [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received unexpected event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 for instance with vm_state active and task_state deleting.
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.440 248514 DEBUG nova.compute.manager [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-vif-deleted-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.468 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.671 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:24:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1994078775' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.882 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.888 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:24:49 compute-0 nova_compute[248510]: 2025-12-13 08:24:49.951 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:24:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:24:50 compute-0 ceph-mon[76537]: pgmap v1789: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 08:24:50 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1994078775' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:50 compute-0 nova_compute[248510]: 2025-12-13 08:24:50.328 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:24:50 compute-0 nova_compute[248510]: 2025-12-13 08:24:50.329 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:50 compute-0 nova_compute[248510]: 2025-12-13 08:24:50.330 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:50 compute-0 nova_compute[248510]: 2025-12-13 08:24:50.330 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:24:50 compute-0 nova_compute[248510]: 2025-12-13 08:24:50.330 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 08:24:50 compute-0 nova_compute[248510]: 2025-12-13 08:24:50.378 248514 DEBUG oslo_concurrency.processutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:24:50 compute-0 nova_compute[248510]: 2025-12-13 08:24:50.525 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:24:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:24:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/861745803' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:50 compute-0 nova_compute[248510]: 2025-12-13 08:24:50.997 248514 DEBUG oslo_concurrency.processutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:24:51 compute-0 nova_compute[248510]: 2025-12-13 08:24:51.004 248514 DEBUG nova.compute.provider_tree [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:24:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 08:24:51 compute-0 nova_compute[248510]: 2025-12-13 08:24:51.191 248514 DEBUG nova.scheduler.client.report [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:24:51 compute-0 nova_compute[248510]: 2025-12-13 08:24:51.254 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:51 compute-0 nova_compute[248510]: 2025-12-13 08:24:51.376 248514 INFO nova.scheduler.client.report [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Deleted allocations for instance ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5
Dec 13 08:24:51 compute-0 nova_compute[248510]: 2025-12-13 08:24:51.380 248514 DEBUG nova.network.neutron [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updated VIF entry in instance network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:24:51 compute-0 nova_compute[248510]: 2025-12-13 08:24:51.380 248514 DEBUG nova.network.neutron [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:24:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/861745803' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:24:51 compute-0 nova_compute[248510]: 2025-12-13 08:24:51.665 248514 DEBUG oslo_concurrency.lockutils [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:24:51 compute-0 nova_compute[248510]: 2025-12-13 08:24:51.740 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:51 compute-0 nova_compute[248510]: 2025-12-13 08:24:51.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:52 compute-0 ceph-mon[76537]: pgmap v1790: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 08:24:52 compute-0 nova_compute[248510]: 2025-12-13 08:24:52.787 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 08:24:54 compute-0 nova_compute[248510]: 2025-12-13 08:24:54.673 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:54 compute-0 ceph-mon[76537]: pgmap v1791: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 08:24:55 compute-0 nova_compute[248510]: 2025-12-13 08:24:55.008 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:24:55 compute-0 nova_compute[248510]: 2025-12-13 08:24:55.009 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:24:55 compute-0 nova_compute[248510]: 2025-12-13 08:24:55.009 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 08:24:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:24:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 08:24:55 compute-0 nova_compute[248510]: 2025-12-13 08:24:55.388 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 08:24:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:55.406 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:24:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:55.407 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:24:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:24:55.407 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:24:56 compute-0 nova_compute[248510]: 2025-12-13 08:24:56.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:56 compute-0 ceph-mon[76537]: pgmap v1792: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 08:24:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 08:24:58 compute-0 ceph-mon[76537]: pgmap v1793: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 08:24:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 08:24:59 compute-0 nova_compute[248510]: 2025-12-13 08:24:59.676 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:24:59 compute-0 nova_compute[248510]: 2025-12-13 08:24:59.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:25:00 compute-0 nova_compute[248510]: 2025-12-13 08:25:00.082 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:00 compute-0 ceph-mon[76537]: pgmap v1794: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 08:25:00 compute-0 sudo[288410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:25:00 compute-0 sudo[288410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:25:00 compute-0 sudo[288410]: pam_unix(sudo:session): session closed for user root
Dec 13 08:25:00 compute-0 sudo[288435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 08:25:01 compute-0 sudo[288435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:25:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:25:01 compute-0 sudo[288435]: pam_unix(sudo:session): session closed for user root
Dec 13 08:25:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:25:01 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:25:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:25:01 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:25:01 compute-0 sudo[288481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:25:01 compute-0 sudo[288481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:25:01 compute-0 sudo[288481]: pam_unix(sudo:session): session closed for user root
Dec 13 08:25:01 compute-0 sudo[288506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:25:01 compute-0 sudo[288506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:25:01 compute-0 nova_compute[248510]: 2025-12-13 08:25:01.546 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614286.5454469, ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:01 compute-0 nova_compute[248510]: 2025-12-13 08:25:01.548 248514 INFO nova.compute.manager [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] VM Stopped (Lifecycle Event)
Dec 13 08:25:01 compute-0 nova_compute[248510]: 2025-12-13 08:25:01.742 248514 DEBUG nova.compute.manager [None req-b50378d2-355a-4c00-8e3e-37a5bb860d86 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:01 compute-0 nova_compute[248510]: 2025-12-13 08:25:01.763 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:02 compute-0 sudo[288506]: pam_unix(sudo:session): session closed for user root
Dec 13 08:25:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:25:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:25:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:25:02 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:25:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:25:02 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:25:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:25:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:25:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:25:02 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:25:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:25:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:25:02 compute-0 sudo[288563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:25:02 compute-0 sudo[288563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:25:02 compute-0 sudo[288563]: pam_unix(sudo:session): session closed for user root
Dec 13 08:25:02 compute-0 sudo[288588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:25:02 compute-0 sudo[288588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:25:02 compute-0 ceph-mon[76537]: pgmap v1795: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:25:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:25:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:25:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:25:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:25:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:25:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:25:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:25:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:25:02 compute-0 podman[288626]: 2025-12-13 08:25:02.603047785 +0000 UTC m=+0.043331040 container create 56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:25:02 compute-0 systemd[1]: Started libpod-conmon-56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a.scope.
Dec 13 08:25:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:25:02 compute-0 podman[288626]: 2025-12-13 08:25:02.585733378 +0000 UTC m=+0.026016663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:25:02 compute-0 podman[288626]: 2025-12-13 08:25:02.701289159 +0000 UTC m=+0.141572444 container init 56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:25:02 compute-0 podman[288626]: 2025-12-13 08:25:02.712015014 +0000 UTC m=+0.152298269 container start 56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:25:02 compute-0 podman[288626]: 2025-12-13 08:25:02.716054313 +0000 UTC m=+0.156337588 container attach 56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mendeleev, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 08:25:02 compute-0 flamboyant_mendeleev[288642]: 167 167
Dec 13 08:25:02 compute-0 systemd[1]: libpod-56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a.scope: Deactivated successfully.
Dec 13 08:25:02 compute-0 podman[288626]: 2025-12-13 08:25:02.722721988 +0000 UTC m=+0.163005243 container died 56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mendeleev, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 08:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6905200a4619d4e756e2479197ee978a9e57a950c1a6af50faa91271e4e290d-merged.mount: Deactivated successfully.
Dec 13 08:25:02 compute-0 podman[288626]: 2025-12-13 08:25:02.760747876 +0000 UTC m=+0.201031131 container remove 56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 08:25:02 compute-0 systemd[1]: libpod-conmon-56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a.scope: Deactivated successfully.
Dec 13 08:25:02 compute-0 podman[288666]: 2025-12-13 08:25:02.958443054 +0000 UTC m=+0.059918779 container create 00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:25:03 compute-0 systemd[1]: Started libpod-conmon-00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5.scope.
Dec 13 08:25:03 compute-0 podman[288666]: 2025-12-13 08:25:02.928386102 +0000 UTC m=+0.029861857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:25:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf54cf20a6985ff9a77b5c88ee43cc827baa0c95ea56439f9b6b2dba1262f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf54cf20a6985ff9a77b5c88ee43cc827baa0c95ea56439f9b6b2dba1262f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf54cf20a6985ff9a77b5c88ee43cc827baa0c95ea56439f9b6b2dba1262f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf54cf20a6985ff9a77b5c88ee43cc827baa0c95ea56439f9b6b2dba1262f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf54cf20a6985ff9a77b5c88ee43cc827baa0c95ea56439f9b6b2dba1262f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:03 compute-0 podman[288666]: 2025-12-13 08:25:03.051824658 +0000 UTC m=+0.153300403 container init 00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 08:25:03 compute-0 podman[288666]: 2025-12-13 08:25:03.058711888 +0000 UTC m=+0.160187613 container start 00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:25:03 compute-0 podman[288666]: 2025-12-13 08:25:03.062324747 +0000 UTC m=+0.163800492 container attach 00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_babbage, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:25:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:25:03 compute-0 crazy_babbage[288682]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:25:03 compute-0 crazy_babbage[288682]: --> All data devices are unavailable
Dec 13 08:25:03 compute-0 systemd[1]: libpod-00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5.scope: Deactivated successfully.
Dec 13 08:25:03 compute-0 podman[288666]: 2025-12-13 08:25:03.561586846 +0000 UTC m=+0.663062571 container died 00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 08:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4caf54cf20a6985ff9a77b5c88ee43cc827baa0c95ea56439f9b6b2dba1262f4-merged.mount: Deactivated successfully.
Dec 13 08:25:03 compute-0 podman[288666]: 2025-12-13 08:25:03.619640818 +0000 UTC m=+0.721116543 container remove 00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 08:25:03 compute-0 systemd[1]: libpod-conmon-00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5.scope: Deactivated successfully.
Dec 13 08:25:03 compute-0 sudo[288588]: pam_unix(sudo:session): session closed for user root
Dec 13 08:25:03 compute-0 sudo[288715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:25:03 compute-0 sudo[288715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:25:03 compute-0 sudo[288715]: pam_unix(sudo:session): session closed for user root
Dec 13 08:25:03 compute-0 sudo[288740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:25:03 compute-0 sudo[288740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:25:03 compute-0 nova_compute[248510]: 2025-12-13 08:25:03.880 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:04 compute-0 podman[288778]: 2025-12-13 08:25:04.122739732 +0000 UTC m=+0.041330801 container create 7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_herschel, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Dec 13 08:25:04 compute-0 systemd[1]: Started libpod-conmon-7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75.scope.
Dec 13 08:25:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:25:04 compute-0 podman[288778]: 2025-12-13 08:25:04.104935612 +0000 UTC m=+0.023526701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:25:04 compute-0 podman[288778]: 2025-12-13 08:25:04.205551945 +0000 UTC m=+0.124143034 container init 7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:25:04 compute-0 podman[288778]: 2025-12-13 08:25:04.21304421 +0000 UTC m=+0.131635279 container start 7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 08:25:04 compute-0 podman[288778]: 2025-12-13 08:25:04.217203172 +0000 UTC m=+0.135794241 container attach 7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_herschel, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 08:25:04 compute-0 mystifying_herschel[288794]: 167 167
Dec 13 08:25:04 compute-0 systemd[1]: libpod-7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75.scope: Deactivated successfully.
Dec 13 08:25:04 compute-0 podman[288778]: 2025-12-13 08:25:04.219163501 +0000 UTC m=+0.137754570 container died 7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Dec 13 08:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c53e0bbc3582b6d948973d4f90a43d0da731473d94ba746485414f4ceb245e1-merged.mount: Deactivated successfully.
Dec 13 08:25:04 compute-0 podman[288778]: 2025-12-13 08:25:04.263912635 +0000 UTC m=+0.182503704 container remove 7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:25:04 compute-0 systemd[1]: libpod-conmon-7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75.scope: Deactivated successfully.
Dec 13 08:25:04 compute-0 ceph-mon[76537]: pgmap v1796: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:25:04 compute-0 podman[288818]: 2025-12-13 08:25:04.468556204 +0000 UTC m=+0.058634018 container create 25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_noyce, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:25:04 compute-0 systemd[1]: Started libpod-conmon-25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9.scope.
Dec 13 08:25:04 compute-0 podman[288818]: 2025-12-13 08:25:04.439947068 +0000 UTC m=+0.030024902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:25:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0dfcc7b78fa6621e3fd668c235e6f786845bf2f9794a434ac073a3cad5e8541/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0dfcc7b78fa6621e3fd668c235e6f786845bf2f9794a434ac073a3cad5e8541/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0dfcc7b78fa6621e3fd668c235e6f786845bf2f9794a434ac073a3cad5e8541/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0dfcc7b78fa6621e3fd668c235e6f786845bf2f9794a434ac073a3cad5e8541/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:04 compute-0 podman[288818]: 2025-12-13 08:25:04.560019911 +0000 UTC m=+0.150097745 container init 25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:25:04 compute-0 podman[288818]: 2025-12-13 08:25:04.566107781 +0000 UTC m=+0.156185595 container start 25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 08:25:04 compute-0 podman[288818]: 2025-12-13 08:25:04.570187982 +0000 UTC m=+0.160265816 container attach 25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 08:25:04 compute-0 nova_compute[248510]: 2025-12-13 08:25:04.678 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:04 compute-0 loving_noyce[288834]: {
Dec 13 08:25:04 compute-0 loving_noyce[288834]:     "0": [
Dec 13 08:25:04 compute-0 loving_noyce[288834]:         {
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "devices": [
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "/dev/loop3"
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             ],
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_name": "ceph_lv0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_size": "21470642176",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "name": "ceph_lv0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "tags": {
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.cluster_name": "ceph",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.crush_device_class": "",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.encrypted": "0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.objectstore": "bluestore",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.osd_id": "0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.type": "block",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.vdo": "0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.with_tpm": "0"
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             },
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "type": "block",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "vg_name": "ceph_vg0"
Dec 13 08:25:04 compute-0 loving_noyce[288834]:         }
Dec 13 08:25:04 compute-0 loving_noyce[288834]:     ],
Dec 13 08:25:04 compute-0 loving_noyce[288834]:     "1": [
Dec 13 08:25:04 compute-0 loving_noyce[288834]:         {
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "devices": [
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "/dev/loop4"
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             ],
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_name": "ceph_lv1",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_size": "21470642176",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "name": "ceph_lv1",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "tags": {
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.cluster_name": "ceph",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.crush_device_class": "",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.encrypted": "0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.objectstore": "bluestore",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.osd_id": "1",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.type": "block",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.vdo": "0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.with_tpm": "0"
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             },
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "type": "block",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "vg_name": "ceph_vg1"
Dec 13 08:25:04 compute-0 loving_noyce[288834]:         }
Dec 13 08:25:04 compute-0 loving_noyce[288834]:     ],
Dec 13 08:25:04 compute-0 loving_noyce[288834]:     "2": [
Dec 13 08:25:04 compute-0 loving_noyce[288834]:         {
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "devices": [
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "/dev/loop5"
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             ],
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_name": "ceph_lv2",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_size": "21470642176",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "name": "ceph_lv2",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "tags": {
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.cluster_name": "ceph",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.crush_device_class": "",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.encrypted": "0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.objectstore": "bluestore",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.osd_id": "2",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.type": "block",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.vdo": "0",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:                 "ceph.with_tpm": "0"
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             },
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "type": "block",
Dec 13 08:25:04 compute-0 loving_noyce[288834]:             "vg_name": "ceph_vg2"
Dec 13 08:25:04 compute-0 loving_noyce[288834]:         }
Dec 13 08:25:04 compute-0 loving_noyce[288834]:     ]
Dec 13 08:25:04 compute-0 loving_noyce[288834]: }
Dec 13 08:25:04 compute-0 systemd[1]: libpod-25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9.scope: Deactivated successfully.
Dec 13 08:25:04 compute-0 podman[288843]: 2025-12-13 08:25:04.964659244 +0000 UTC m=+0.029391916 container died 25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:25:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:25:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0dfcc7b78fa6621e3fd668c235e6f786845bf2f9794a434ac073a3cad5e8541-merged.mount: Deactivated successfully.
Dec 13 08:25:05 compute-0 podman[288843]: 2025-12-13 08:25:05.279742958 +0000 UTC m=+0.344475570 container remove 25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:25:05 compute-0 systemd[1]: libpod-conmon-25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9.scope: Deactivated successfully.
Dec 13 08:25:05 compute-0 sudo[288740]: pam_unix(sudo:session): session closed for user root
Dec 13 08:25:05 compute-0 sudo[288858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:25:05 compute-0 sudo[288858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:25:05 compute-0 sudo[288858]: pam_unix(sudo:session): session closed for user root
Dec 13 08:25:05 compute-0 sudo[288883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:25:05 compute-0 sudo[288883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:25:05 compute-0 podman[288920]: 2025-12-13 08:25:05.771534872 +0000 UTC m=+0.025659824 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:25:06 compute-0 podman[288920]: 2025-12-13 08:25:06.381327658 +0000 UTC m=+0.635452610 container create aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 08:25:06 compute-0 systemd[1]: Started libpod-conmon-aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9.scope.
Dec 13 08:25:06 compute-0 nova_compute[248510]: 2025-12-13 08:25:06.766 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:25:06 compute-0 ceph-mon[76537]: pgmap v1797: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:25:07 compute-0 nova_compute[248510]: 2025-12-13 08:25:07.067 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:07 compute-0 nova_compute[248510]: 2025-12-13 08:25:07.067 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:07 compute-0 podman[288920]: 2025-12-13 08:25:07.089192614 +0000 UTC m=+1.343317526 container init aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_albattani, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:25:07 compute-0 podman[288920]: 2025-12-13 08:25:07.099820066 +0000 UTC m=+1.353944988 container start aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_albattani, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 08:25:07 compute-0 heuristic_albattani[288937]: 167 167
Dec 13 08:25:07 compute-0 systemd[1]: libpod-aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9.scope: Deactivated successfully.
Dec 13 08:25:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:25:07 compute-0 podman[288920]: 2025-12-13 08:25:07.140617223 +0000 UTC m=+1.394742175 container attach aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:25:07 compute-0 podman[288920]: 2025-12-13 08:25:07.141776431 +0000 UTC m=+1.395901353 container died aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:25:07 compute-0 nova_compute[248510]: 2025-12-13 08:25:07.311 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:25:07 compute-0 nova_compute[248510]: 2025-12-13 08:25:07.530 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:07 compute-0 nova_compute[248510]: 2025-12-13 08:25:07.531 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:07 compute-0 nova_compute[248510]: 2025-12-13 08:25:07.539 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:25:07 compute-0 nova_compute[248510]: 2025-12-13 08:25:07.539 248514 INFO nova.compute.claims [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:25:07 compute-0 nova_compute[248510]: 2025-12-13 08:25:07.562 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1070d96e041fb6b74a7562d91750286d093a3beeb0f66ec604ccc81252720243-merged.mount: Deactivated successfully.
Dec 13 08:25:07 compute-0 podman[288920]: 2025-12-13 08:25:07.678503185 +0000 UTC m=+1.932628107 container remove aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:25:07 compute-0 systemd[1]: libpod-conmon-aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9.scope: Deactivated successfully.
Dec 13 08:25:07 compute-0 nova_compute[248510]: 2025-12-13 08:25:07.715 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:07 compute-0 nova_compute[248510]: 2025-12-13 08:25:07.747 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:07 compute-0 podman[288963]: 2025-12-13 08:25:07.866419281 +0000 UTC m=+0.063361484 container create 32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_driscoll, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:25:07 compute-0 systemd[1]: Started libpod-conmon-32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437.scope.
Dec 13 08:25:07 compute-0 podman[288963]: 2025-12-13 08:25:07.825739317 +0000 UTC m=+0.022681530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:25:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe7d351de8403d9a81f8ceae9b19f077632b37cd324623fa0e2bfaf4a7251e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:07 compute-0 ceph-mon[76537]: pgmap v1798: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe7d351de8403d9a81f8ceae9b19f077632b37cd324623fa0e2bfaf4a7251e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe7d351de8403d9a81f8ceae9b19f077632b37cd324623fa0e2bfaf4a7251e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe7d351de8403d9a81f8ceae9b19f077632b37cd324623fa0e2bfaf4a7251e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:07 compute-0 podman[288963]: 2025-12-13 08:25:07.991107778 +0000 UTC m=+0.188050001 container init 32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_driscoll, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 08:25:08 compute-0 podman[288963]: 2025-12-13 08:25:08.000176171 +0000 UTC m=+0.197118374 container start 32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_driscoll, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 08:25:08 compute-0 podman[288963]: 2025-12-13 08:25:08.054211015 +0000 UTC m=+0.251153228 container attach 32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_driscoll, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 08:25:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:25:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/288930945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:08 compute-0 nova_compute[248510]: 2025-12-13 08:25:08.310 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:08 compute-0 nova_compute[248510]: 2025-12-13 08:25:08.319 248514 DEBUG nova.compute.provider_tree [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:25:08 compute-0 nova_compute[248510]: 2025-12-13 08:25:08.409 248514 DEBUG nova.scheduler.client.report [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:25:08 compute-0 nova_compute[248510]: 2025-12-13 08:25:08.452 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:08 compute-0 nova_compute[248510]: 2025-12-13 08:25:08.453 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:25:08 compute-0 nova_compute[248510]: 2025-12-13 08:25:08.537 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:25:08 compute-0 nova_compute[248510]: 2025-12-13 08:25:08.538 248514 DEBUG nova.network.neutron [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:25:08 compute-0 nova_compute[248510]: 2025-12-13 08:25:08.561 248514 INFO nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:25:08 compute-0 nova_compute[248510]: 2025-12-13 08:25:08.582 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:25:08 compute-0 lvm[289078]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:25:08 compute-0 lvm[289078]: VG ceph_vg0 finished
Dec 13 08:25:08 compute-0 lvm[289079]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:25:08 compute-0 lvm[289079]: VG ceph_vg1 finished
Dec 13 08:25:08 compute-0 lvm[289081]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:25:08 compute-0 lvm[289081]: VG ceph_vg2 finished
Dec 13 08:25:08 compute-0 focused_driscoll[288998]: {}
Dec 13 08:25:08 compute-0 podman[288963]: 2025-12-13 08:25:08.885891464 +0000 UTC m=+1.082833677 container died 32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_driscoll, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:25:08 compute-0 systemd[1]: libpod-32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437.scope: Deactivated successfully.
Dec 13 08:25:08 compute-0 systemd[1]: libpod-32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437.scope: Consumed 1.467s CPU time.
Dec 13 08:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ebe7d351de8403d9a81f8ceae9b19f077632b37cd324623fa0e2bfaf4a7251e-merged.mount: Deactivated successfully.
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.025 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.026 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.027 248514 INFO nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Creating image(s)
Dec 13 08:25:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/288930945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:09 compute-0 podman[288963]: 2025-12-13 08:25:09.081157731 +0000 UTC m=+1.278099964 container remove 32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_driscoll, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.089 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:09 compute-0 systemd[1]: libpod-conmon-32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437.scope: Deactivated successfully.
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.114 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:25:09 compute-0 sudo[288883]: pam_unix(sudo:session): session closed for user root
Dec 13 08:25:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.140 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.144 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:09 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:25:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.175 248514 DEBUG nova.policy [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b988c7ac9354c59aac9a9f41f83c20f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52e1055963294dbdb16cd95b466cd4d9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:25:09 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:25:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:25:09
Dec 13 08:25:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:25:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:25:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'volumes', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.control', 'default.rgw.log']
Dec 13 08:25:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.214 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.216 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.217 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.217 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.240 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:09 compute-0 sudo[289152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.245 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2c76f149-c467-4e59-afee-77940e515f8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:09 compute-0 sudo[289152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:25:09 compute-0 sudo[289152]: pam_unix(sudo:session): session closed for user root
Dec 13 08:25:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:09.281 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:25:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:09.282 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.558 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2c76f149-c467-4e59-afee-77940e515f8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.633 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] resizing rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.765 248514 DEBUG nova.objects.instance [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c76f149-c467-4e59-afee-77940e515f8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.888 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.889 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Ensure instance console log exists: /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.889 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.890 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:09 compute-0 nova_compute[248510]: 2025-12-13 08:25:09.890 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:25:10 compute-0 ceph-mon[76537]: pgmap v1799: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:25:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:25:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:25:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:25:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:25:11 compute-0 nova_compute[248510]: 2025-12-13 08:25:11.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:12 compute-0 nova_compute[248510]: 2025-12-13 08:25:12.728 248514 DEBUG nova.network.neutron [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Successfully created port: a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:25:12 compute-0 ceph-mon[76537]: pgmap v1800: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 08:25:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 67 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 1.2 MiB/s wr, 2 op/s
Dec 13 08:25:14 compute-0 ceph-mon[76537]: pgmap v1801: 321 pgs: 321 active+clean; 67 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 1.2 MiB/s wr, 2 op/s
Dec 13 08:25:14 compute-0 nova_compute[248510]: 2025-12-13 08:25:14.732 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:25:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254337099' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:25:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:25:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254337099' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:25:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/4254337099' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:25:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/4254337099' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:25:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:25:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:25:15 compute-0 podman[289289]: 2025-12-13 08:25:15.972886926 +0000 UTC m=+0.061394236 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 13 08:25:15 compute-0 podman[289288]: 2025-12-13 08:25:15.997121334 +0000 UTC m=+0.085616424 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 08:25:15 compute-0 podman[289290]: 2025-12-13 08:25:15.997469482 +0000 UTC m=+0.085773107 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 08:25:16 compute-0 ceph-mon[76537]: pgmap v1802: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:25:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:16.284 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:16 compute-0 nova_compute[248510]: 2025-12-13 08:25:16.643 248514 DEBUG nova.network.neutron [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Successfully updated port: a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:25:16 compute-0 nova_compute[248510]: 2025-12-13 08:25:16.729 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "refresh_cache-2c76f149-c467-4e59-afee-77940e515f8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:25:16 compute-0 nova_compute[248510]: 2025-12-13 08:25:16.730 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquired lock "refresh_cache-2c76f149-c467-4e59-afee-77940e515f8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:25:16 compute-0 nova_compute[248510]: 2025-12-13 08:25:16.730 248514 DEBUG nova.network.neutron [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:25:16 compute-0 nova_compute[248510]: 2025-12-13 08:25:16.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:16 compute-0 nova_compute[248510]: 2025-12-13 08:25:16.966 248514 DEBUG nova.network.neutron [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:25:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:25:17 compute-0 nova_compute[248510]: 2025-12-13 08:25:17.687 248514 DEBUG nova.compute.manager [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received event network-changed-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:17 compute-0 nova_compute[248510]: 2025-12-13 08:25:17.688 248514 DEBUG nova.compute.manager [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Refreshing instance network info cache due to event network-changed-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:25:17 compute-0 nova_compute[248510]: 2025-12-13 08:25:17.688 248514 DEBUG oslo_concurrency.lockutils [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2c76f149-c467-4e59-afee-77940e515f8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:25:18 compute-0 ceph-mon[76537]: pgmap v1803: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.359 248514 DEBUG nova.network.neutron [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Updating instance_info_cache with network_info: [{"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.391 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Releasing lock "refresh_cache-2c76f149-c467-4e59-afee-77940e515f8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.392 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Instance network_info: |[{"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.392 248514 DEBUG oslo_concurrency.lockutils [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2c76f149-c467-4e59-afee-77940e515f8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.393 248514 DEBUG nova.network.neutron [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Refreshing network info cache for port a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.396 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Start _get_guest_xml network_info=[{"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.403 248514 WARNING nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.412 248514 DEBUG nova.virt.libvirt.host [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.413 248514 DEBUG nova.virt.libvirt.host [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.417 248514 DEBUG nova.virt.libvirt.host [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.418 248514 DEBUG nova.virt.libvirt.host [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.418 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.419 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.419 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.419 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.420 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.420 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.420 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.420 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.421 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.421 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.421 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.422 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.425 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.503 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.504 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.532 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.633 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.634 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.640 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.640 248514 INFO nova.compute.claims [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:25:18 compute-0 nova_compute[248510]: 2025-12-13 08:25:18.794 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:25:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1341543332' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.028 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.053 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.058 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:25:19 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1341543332' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:25:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886910405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.351 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.359 248514 DEBUG nova.compute.provider_tree [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.389 248514 DEBUG nova.scheduler.client.report [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.419 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.420 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.467 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.467 248514 DEBUG nova.network.neutron [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.490 248514 INFO nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.521 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:25:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:25:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2823282569' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.621 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.622 248514 DEBUG nova.virt.libvirt.vif [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1811238498',display_name='tempest-ImagesTestJSON-server-1811238498',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1811238498',id=36,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-2ad42grg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=Ta
gList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:08Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=2c76f149-c467-4e59-afee-77940e515f8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.622 248514 DEBUG nova.network.os_vif_util [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.623 248514 DEBUG nova.network.os_vif_util [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:0b:3c,bridge_name='br-int',has_traffic_filtering=True,id=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5b00a7b-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.624 248514 DEBUG nova.objects.instance [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c76f149-c467-4e59-afee-77940e515f8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.642 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.644 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.644 248514 INFO nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Creating image(s)
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.669 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.698 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.721 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.724 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.751 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:25:19 compute-0 nova_compute[248510]:   <uuid>2c76f149-c467-4e59-afee-77940e515f8c</uuid>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   <name>instance-00000024</name>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <nova:name>tempest-ImagesTestJSON-server-1811238498</nova:name>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:25:18</nova:creationTime>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:25:19 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:25:19 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:25:19 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:25:19 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:25:19 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:25:19 compute-0 nova_compute[248510]:         <nova:user uuid="3b988c7ac9354c59aac9a9f41f83c20f">tempest-ImagesTestJSON-1234382421-project-member</nova:user>
Dec 13 08:25:19 compute-0 nova_compute[248510]:         <nova:project uuid="52e1055963294dbdb16cd95b466cd4d9">tempest-ImagesTestJSON-1234382421</nova:project>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:25:19 compute-0 nova_compute[248510]:         <nova:port uuid="a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1">
Dec 13 08:25:19 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <system>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <entry name="serial">2c76f149-c467-4e59-afee-77940e515f8c</entry>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <entry name="uuid">2c76f149-c467-4e59-afee-77940e515f8c</entry>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     </system>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   <os>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   </os>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   <features>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   </features>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2c76f149-c467-4e59-afee-77940e515f8c_disk">
Dec 13 08:25:19 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       </source>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:25:19 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2c76f149-c467-4e59-afee-77940e515f8c_disk.config">
Dec 13 08:25:19 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       </source>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:25:19 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:e8:0b:3c"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <target dev="tapa5b00a7b-ee"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/console.log" append="off"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <video>
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     </video>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:25:19 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:25:19 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:25:19 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:25:19 compute-0 nova_compute[248510]: </domain>
Dec 13 08:25:19 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.753 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Preparing to wait for external event network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.754 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.754 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.755 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.756 248514 DEBUG nova.virt.libvirt.vif [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1811238498',display_name='tempest-ImagesTestJSON-server-1811238498',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1811238498',id=36,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-2ad42grg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:08Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=2c76f149-c467-4e59-afee-77940e515f8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.756 248514 DEBUG nova.network.os_vif_util [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.757 248514 DEBUG nova.network.os_vif_util [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:0b:3c,bridge_name='br-int',has_traffic_filtering=True,id=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5b00a7b-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.758 248514 DEBUG os_vif [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:0b:3c,bridge_name='br-int',has_traffic_filtering=True,id=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5b00a7b-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.760 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.761 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.764 248514 DEBUG nova.policy [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ca2c7fce813a4f919271d56491477e18', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '59a0c2f158ce417f80e49cc5eb2db59d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.778 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5b00a7b-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.778 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5b00a7b-ee, col_values=(('external_ids', {'iface-id': 'a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:0b:3c', 'vm-uuid': '2c76f149-c467-4e59-afee-77940e515f8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.780 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:19 compute-0 NetworkManager[50376]: <info>  [1765614319.7813] manager: (tapa5b00a7b-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.781 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.789 248514 INFO os_vif [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:0b:3c,bridge_name='br-int',has_traffic_filtering=True,id=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5b00a7b-ee')
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.796 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.797 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.797 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.798 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.822 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.826 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 84309355-d1f4-4a59-9f19-b212232e2428_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.901 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.902 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.902 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No VIF found with MAC fa:16:3e:e8:0b:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.902 248514 INFO nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Using config drive
Dec 13 08:25:19 compute-0 nova_compute[248510]: 2025-12-13 08:25:19.929 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.124 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 84309355-d1f4-4a59-9f19-b212232e2428_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.193 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] resizing rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:25:20 compute-0 ceph-mon[76537]: pgmap v1804: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:25:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3886910405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2823282569' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.272 248514 DEBUG nova.objects.instance [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lazy-loading 'migration_context' on Instance uuid 84309355-d1f4-4a59-9f19-b212232e2428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.288 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.288 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Ensure instance console log exists: /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.289 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.289 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.289 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.477 248514 INFO nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Creating config drive at /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/disk.config
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.483 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzx5ua25_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.621 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzx5ua25_" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.646 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.651 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/disk.config 2c76f149-c467-4e59-afee-77940e515f8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.680 248514 DEBUG nova.network.neutron [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Successfully created port: 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.792 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/disk.config 2c76f149-c467-4e59-afee-77940e515f8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.793 248514 INFO nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Deleting local config drive /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/disk.config because it was imported into RBD.
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.811 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "29375ff9-300a-43de-a53d-942e7afbb439" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.812 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.840 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:25:20 compute-0 kernel: tapa5b00a7b-ee: entered promiscuous mode
Dec 13 08:25:20 compute-0 NetworkManager[50376]: <info>  [1765614320.8537] manager: (tapa5b00a7b-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:25:20 compute-0 ovn_controller[148476]: 2025-12-13T08:25:20Z|00322|binding|INFO|Claiming lport a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 for this chassis.
Dec 13 08:25:20 compute-0 ovn_controller[148476]: 2025-12-13T08:25:20Z|00323|binding|INFO|a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1: Claiming fa:16:3e:e8:0b:3c 10.100.0.13
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.854 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003510625938221278 of space, bias 1.0, pg target 0.10531877814663833 quantized to 32 (current 32)
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669903900688993 of space, bias 1.0, pg target 0.2000971170206698 quantized to 32 (current 32)
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.168237698949889e-07 of space, bias 4.0, pg target 0.0008601885238739866 quantized to 16 (current 32)
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:25:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.867 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:0b:3c 10.100.0.13'], port_security=['fa:16:3e:e8:0b:3c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2c76f149-c467-4e59-afee-77940e515f8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.868 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc bound to our chassis
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.869 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:25:20 compute-0 systemd-udevd[289676]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.884 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[478ad446-a8cb-4471-87b1-7093b7684388]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.886 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87bd91d0-e1 in ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.888 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87bd91d0-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.888 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[499c498c-cb77-4494-b83d-a87b34f4234c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.889 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f941c4-3a47-470f-a407-f45c1cf41d4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:20 compute-0 systemd-machined[210538]: New machine qemu-42-instance-00000024.
Dec 13 08:25:20 compute-0 NetworkManager[50376]: <info>  [1765614320.9015] device (tapa5b00a7b-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:25:20 compute-0 NetworkManager[50376]: <info>  [1765614320.9023] device (tapa5b00a7b-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.903 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7e20b3-749c-4345-9c0f-ebed75a3176c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.924 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a34b1eb6-082d-41e4-a719-7ecaa01f64d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:20 compute-0 ovn_controller[148476]: 2025-12-13T08:25:20Z|00324|binding|INFO|Setting lport a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 ovn-installed in OVS
Dec 13 08:25:20 compute-0 ovn_controller[148476]: 2025-12-13T08:25:20Z|00325|binding|INFO|Setting lport a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 up in Southbound
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:20 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000024.
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.958 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1e4b9b-fae8-4e85-b017-9d472aa58dcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.960 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.960 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.963 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[65764f27-88cd-42c4-a944-4fb69f102ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:20 compute-0 NetworkManager[50376]: <info>  [1765614320.9647] manager: (tap87bd91d0-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.974 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:25:20 compute-0 nova_compute[248510]: 2025-12-13 08:25:20.974 248514 INFO nova.compute.claims [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:25:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.996 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3b3603b6-1ae6-4dfd-9f73-58ef4bec3db6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.001 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc6d89e-4ee3-475e-8f7b-4a6cd116587f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:21 compute-0 NetworkManager[50376]: <info>  [1765614321.0296] device (tap87bd91d0-e0): carrier: link connected
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.034 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7be3fde5-c86e-433c-b56f-7da4451019f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.058 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4c986e23-4d71-449c-ba2c-b55963a95a80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689824, 'reachable_time': 42343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289709, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.076 248514 DEBUG nova.network.neutron [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Updated VIF entry in instance network info cache for port a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.078 248514 DEBUG nova.network.neutron [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Updating instance_info_cache with network_info: [{"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.077 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[85cebd47-e297-4843-b1a9-3382385eb1f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:ec7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689824, 'tstamp': 689824}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289710, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6380ea-8742-4982-9569-68840c4f2c28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689824, 'reachable_time': 42343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289711, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.114 248514 DEBUG oslo_concurrency.lockutils [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2c76f149-c467-4e59-afee-77940e515f8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:25:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.139 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d737cf2-25ab-4a57-9ddc-0a3194d6fa10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.209 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5884599b-2f97-483a-9daa-f70efe33feb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.211 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.211 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.212 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.212 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87bd91d0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:21 compute-0 NetworkManager[50376]: <info>  [1765614321.2153] manager: (tap87bd91d0-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Dec 13 08:25:21 compute-0 kernel: tap87bd91d0-e0: entered promiscuous mode
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.218 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87bd91d0-e0, col_values=(('external_ids', {'iface-id': '3e59c36d-12c7-4d92-aa2f-8b6a795f3f98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:21 compute-0 ovn_controller[148476]: 2025-12-13T08:25:21Z|00326|binding|INFO|Releasing lport 3e59c36d-12c7-4d92-aa2f-8b6a795f3f98 from this chassis (sb_readonly=0)
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.239 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.240 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[69bc8d90-39ad-4ef6-ac11-b2bf9d8fe658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.241 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:25:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.243 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'env', 'PROCESS_TAG=haproxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87bd91d0-eead-49b6-8f92-f8d0dba555dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.459 248514 DEBUG nova.compute.manager [req-bda4f866-cc03-4112-866a-c70a296b04a9 req-bd99bb70-fc80-48a4-9bbb-64292fb12d8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received event network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.460 248514 DEBUG oslo_concurrency.lockutils [req-bda4f866-cc03-4112-866a-c70a296b04a9 req-bd99bb70-fc80-48a4-9bbb-64292fb12d8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.461 248514 DEBUG oslo_concurrency.lockutils [req-bda4f866-cc03-4112-866a-c70a296b04a9 req-bd99bb70-fc80-48a4-9bbb-64292fb12d8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.461 248514 DEBUG oslo_concurrency.lockutils [req-bda4f866-cc03-4112-866a-c70a296b04a9 req-bd99bb70-fc80-48a4-9bbb-64292fb12d8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.461 248514 DEBUG nova.compute.manager [req-bda4f866-cc03-4112-866a-c70a296b04a9 req-bd99bb70-fc80-48a4-9bbb-64292fb12d8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Processing event network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:25:21 compute-0 podman[289779]: 2025-12-13 08:25:21.657747462 +0000 UTC m=+0.053520852 container create 4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 08:25:21 compute-0 systemd[1]: Started libpod-conmon-4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4.scope.
Dec 13 08:25:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:25:21 compute-0 podman[289779]: 2025-12-13 08:25:21.627594118 +0000 UTC m=+0.023367518 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27ffc37c60de5edd5330ae1bfe43f83df8db2401855afba2c6dc6e244c01b6f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.731 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.732 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614321.731839, 2c76f149-c467-4e59-afee-77940e515f8c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.733 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] VM Started (Lifecycle Event)
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.741 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.745 248514 INFO nova.virt.libvirt.driver [-] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Instance spawned successfully.
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.746 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:25:21 compute-0 podman[289779]: 2025-12-13 08:25:21.749989548 +0000 UTC m=+0.145762928 container init 4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 08:25:21 compute-0 podman[289779]: 2025-12-13 08:25:21.755994976 +0000 UTC m=+0.151768346 container start 4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.768 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:25:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2126761090' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.779 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.784 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.784 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.785 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.785 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.785 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.786 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:21 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[289819]: [NOTICE]   (289823) : New worker (289827) forked
Dec 13 08:25:21 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[289819]: [NOTICE]   (289823) : Loading success.
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.813 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.821 248514 DEBUG nova.compute.provider_tree [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.826 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.826 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614321.7319286, 2c76f149-c467-4e59-afee-77940e515f8c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.826 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] VM Paused (Lifecycle Event)
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.848 248514 DEBUG nova.scheduler.client.report [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.853 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.856 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614321.7413564, 2c76f149-c467-4e59-afee-77940e515f8c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.856 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] VM Resumed (Lifecycle Event)
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.861 248514 INFO nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Took 12.84 seconds to spawn the instance on the hypervisor.
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.862 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.889 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.890 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.896 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.900 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.960 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.964 248514 INFO nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Took 14.48 seconds to build instance.
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.972 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.973 248514 DEBUG nova.network.neutron [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:25:21 compute-0 nova_compute[248510]: 2025-12-13 08:25:21.992 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.002 248514 INFO nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.020 248514 DEBUG nova.network.neutron [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Successfully updated port: 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.024 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.036 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "refresh_cache-84309355-d1f4-4a59-9f19-b212232e2428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.037 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquired lock "refresh_cache-84309355-d1f4-4a59-9f19-b212232e2428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.037 248514 DEBUG nova.network.neutron [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:25:22 compute-0 ceph-mon[76537]: pgmap v1805: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:25:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2126761090' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.283 248514 DEBUG nova.policy [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f6fadd0581d041428cc88161ae6e6e02', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e370bdecda394d32b21d4eee440a61fa', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.306 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.307 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.308 248514 INFO nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Creating image(s)
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.337 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.367 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.392 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.399 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.431 248514 DEBUG nova.network.neutron [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.479 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.480 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.481 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.482 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.505 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.509 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 29375ff9-300a-43de-a53d-942e7afbb439_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.907 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 29375ff9-300a-43de-a53d-942e7afbb439_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:22 compute-0 nova_compute[248510]: 2025-12-13 08:25:22.985 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] resizing rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.082 248514 DEBUG nova.objects.instance [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lazy-loading 'migration_context' on Instance uuid 29375ff9-300a-43de-a53d-942e7afbb439 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.109 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.110 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Ensure instance console log exists: /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.110 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.110 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.111 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 99 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 797 KiB/s rd, 2.3 MiB/s wr, 54 op/s
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.386 248514 DEBUG nova.network.neutron [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Updating instance_info_cache with network_info: [{"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.423 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Releasing lock "refresh_cache-84309355-d1f4-4a59-9f19-b212232e2428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.424 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Instance network_info: |[{"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.427 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Start _get_guest_xml network_info=[{"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.432 248514 WARNING nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.438 248514 DEBUG nova.virt.libvirt.host [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.439 248514 DEBUG nova.virt.libvirt.host [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.442 248514 DEBUG nova.virt.libvirt.host [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.443 248514 DEBUG nova.virt.libvirt.host [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.444 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.444 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.445 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.445 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.446 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.446 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.446 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.447 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.447 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.448 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.448 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.448 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.452 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.934 248514 DEBUG nova.compute.manager [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received event network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.935 248514 DEBUG oslo_concurrency.lockutils [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.936 248514 DEBUG oslo_concurrency.lockutils [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.936 248514 DEBUG oslo_concurrency.lockutils [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.936 248514 DEBUG nova.compute.manager [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] No waiting events found dispatching network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.937 248514 WARNING nova.compute.manager [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received unexpected event network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 for instance with vm_state active and task_state None.
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.937 248514 DEBUG nova.compute.manager [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-changed-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.937 248514 DEBUG nova.compute.manager [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Refreshing instance network info cache due to event network-changed-2ad09377-22a9-4666-8a3b-f24c9cba7ac9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.938 248514 DEBUG oslo_concurrency.lockutils [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-84309355-d1f4-4a59-9f19-b212232e2428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.938 248514 DEBUG oslo_concurrency.lockutils [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-84309355-d1f4-4a59-9f19-b212232e2428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:25:23 compute-0 nova_compute[248510]: 2025-12-13 08:25:23.938 248514 DEBUG nova.network.neutron [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Refreshing network info cache for port 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:25:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:25:24 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1594273136' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.087 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.111 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.116 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:24 compute-0 ceph-mon[76537]: pgmap v1806: 321 pgs: 321 active+clean; 99 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 797 KiB/s rd, 2.3 MiB/s wr, 54 op/s
Dec 13 08:25:24 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1594273136' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:25:24 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2589049922' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.652 248514 INFO nova.compute.manager [None req-43aaf044-9b00-476d-8390-fcd36e56b111 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Pausing
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.654 248514 DEBUG nova.objects.instance [None req-43aaf044-9b00-476d-8390-fcd36e56b111 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'flavor' on Instance uuid 2c76f149-c467-4e59-afee-77940e515f8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.656 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.657 248514 DEBUG nova.virt.libvirt.vif [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1068678446',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1068678446',id=37,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='59a0c2f158ce417f80e49cc5eb2db59d',ramdisk_id='',reservation_id='r-xk1seud9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-328435411',owner_user_name='tempest-InstanceActionsV221TestJSON-328435411-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:19Z,user_data=None,user_id='ca2c7fce813a4f919271d56491477e18',uuid=84309355-d1f4-4a59-9f19-b212232e2428,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.658 248514 DEBUG nova.network.os_vif_util [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Converting VIF {"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.659 248514 DEBUG nova.network.os_vif_util [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.660 248514 DEBUG nova.objects.instance [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lazy-loading 'pci_devices' on Instance uuid 84309355-d1f4-4a59-9f19-b212232e2428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.693 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:25:24 compute-0 nova_compute[248510]:   <uuid>84309355-d1f4-4a59-9f19-b212232e2428</uuid>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   <name>instance-00000025</name>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-1068678446</nova:name>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:25:23</nova:creationTime>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:25:24 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:25:24 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:25:24 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:25:24 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:25:24 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:25:24 compute-0 nova_compute[248510]:         <nova:user uuid="ca2c7fce813a4f919271d56491477e18">tempest-InstanceActionsV221TestJSON-328435411-project-member</nova:user>
Dec 13 08:25:24 compute-0 nova_compute[248510]:         <nova:project uuid="59a0c2f158ce417f80e49cc5eb2db59d">tempest-InstanceActionsV221TestJSON-328435411</nova:project>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:25:24 compute-0 nova_compute[248510]:         <nova:port uuid="2ad09377-22a9-4666-8a3b-f24c9cba7ac9">
Dec 13 08:25:24 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <system>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <entry name="serial">84309355-d1f4-4a59-9f19-b212232e2428</entry>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <entry name="uuid">84309355-d1f4-4a59-9f19-b212232e2428</entry>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     </system>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   <os>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   </os>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   <features>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   </features>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/84309355-d1f4-4a59-9f19-b212232e2428_disk">
Dec 13 08:25:24 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       </source>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:25:24 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/84309355-d1f4-4a59-9f19-b212232e2428_disk.config">
Dec 13 08:25:24 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       </source>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:25:24 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:38:17:ba"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <target dev="tap2ad09377-22"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/console.log" append="off"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <video>
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     </video>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:25:24 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:25:24 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:25:24 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:25:24 compute-0 nova_compute[248510]: </domain>
Dec 13 08:25:24 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.698 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Preparing to wait for external event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.699 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.699 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.699 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.700 248514 DEBUG nova.virt.libvirt.vif [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1068678446',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1068678446',id=37,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='59a0c2f158ce417f80e49cc5eb2db59d',ramdisk_id='',reservation_id='r-xk1seud9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-328435411',owner_user_name='tempest-InstanceActionsV221TestJSON-328435411-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:19Z,user_data=None,user_id='ca2c7fce813a4f919271d56491477e18',uuid=84309355-d1f4-4a59-9f19-b212232e2428,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.701 248514 DEBUG nova.network.os_vif_util [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Converting VIF {"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.702 248514 DEBUG nova.network.os_vif_util [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.703 248514 DEBUG os_vif [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.705 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.706 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.711 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.713 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ad09377-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.714 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ad09377-22, col_values=(('external_ids', {'iface-id': '2ad09377-22a9-4666-8a3b-f24c9cba7ac9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:17:ba', 'vm-uuid': '84309355-d1f4-4a59-9f19-b212232e2428'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.716 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614324.7155876, 2c76f149-c467-4e59-afee-77940e515f8c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.717 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] VM Paused (Lifecycle Event)
Dec 13 08:25:24 compute-0 NetworkManager[50376]: <info>  [1765614324.7172] manager: (tap2ad09377-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.723 248514 DEBUG nova.compute.manager [None req-43aaf044-9b00-476d-8390-fcd36e56b111 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.724 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.724 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.725 248514 INFO os_vif [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22')
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.748 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.752 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.807 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] During sync_power_state the instance has a pending task (pausing). Skip.
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.827 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.828 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.828 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] No VIF found with MAC fa:16:3e:38:17:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.829 248514 INFO nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Using config drive
Dec 13 08:25:24 compute-0 nova_compute[248510]: 2025-12-13 08:25:24.856 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:25:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 171 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Dec 13 08:25:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2589049922' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:25 compute-0 nova_compute[248510]: 2025-12-13 08:25:25.276 248514 DEBUG nova.network.neutron [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Successfully created port: 66179e56-6ff7-4353-872a-ee206fe0b050 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:25:25 compute-0 nova_compute[248510]: 2025-12-13 08:25:25.820 248514 DEBUG nova.network.neutron [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Updated VIF entry in instance network info cache for port 2ad09377-22a9-4666-8a3b-f24c9cba7ac9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:25:25 compute-0 nova_compute[248510]: 2025-12-13 08:25:25.820 248514 DEBUG nova.network.neutron [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Updating instance_info_cache with network_info: [{"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:25 compute-0 nova_compute[248510]: 2025-12-13 08:25:25.842 248514 DEBUG oslo_concurrency.lockutils [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-84309355-d1f4-4a59-9f19-b212232e2428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:25:25 compute-0 nova_compute[248510]: 2025-12-13 08:25:25.850 248514 INFO nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Creating config drive at /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/disk.config
Dec 13 08:25:25 compute-0 nova_compute[248510]: 2025-12-13 08:25:25.855 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkufyj9qh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:25 compute-0 nova_compute[248510]: 2025-12-13 08:25:25.991 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkufyj9qh" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.019 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.024 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/disk.config 84309355-d1f4-4a59-9f19-b212232e2428_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.181 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/disk.config 84309355-d1f4-4a59-9f19-b212232e2428_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.182 248514 INFO nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Deleting local config drive /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/disk.config because it was imported into RBD.
Dec 13 08:25:26 compute-0 kernel: tap2ad09377-22: entered promiscuous mode
Dec 13 08:25:26 compute-0 NetworkManager[50376]: <info>  [1765614326.2374] manager: (tap2ad09377-22): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Dec 13 08:25:26 compute-0 ceph-mon[76537]: pgmap v1807: 321 pgs: 321 active+clean; 171 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Dec 13 08:25:26 compute-0 systemd-udevd[290135]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:26 compute-0 ovn_controller[148476]: 2025-12-13T08:25:26Z|00327|binding|INFO|Claiming lport 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 for this chassis.
Dec 13 08:25:26 compute-0 ovn_controller[148476]: 2025-12-13T08:25:26Z|00328|binding|INFO|2ad09377-22a9-4666-8a3b-f24c9cba7ac9: Claiming fa:16:3e:38:17:ba 10.100.0.4
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.279 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:26 compute-0 NetworkManager[50376]: <info>  [1765614326.2908] device (tap2ad09377-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:25:26 compute-0 NetworkManager[50376]: <info>  [1765614326.2917] device (tap2ad09377-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.289 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:17:ba 10.100.0.4'], port_security=['fa:16:3e:38:17:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '84309355-d1f4-4a59-9f19-b212232e2428', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '59a0c2f158ce417f80e49cc5eb2db59d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f0954fb5-841d-4580-afb8-2b944d8752df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bbe5302-d9e0-4808-ac4c-176cf71b9a80, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2ad09377-22a9-4666-8a3b-f24c9cba7ac9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.290 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 in datapath 9f51a29b-e726-472d-a3c5-2b60fcdbe425 bound to our chassis
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.292 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f51a29b-e726-472d-a3c5-2b60fcdbe425
Dec 13 08:25:26 compute-0 systemd-machined[210538]: New machine qemu-43-instance-00000025.
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.307 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79729fef-4831-4145-a672-0f94c973322d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.308 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f51a29b-e1 in ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.310 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f51a29b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.310 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da3f9b3b-cc05-4a22-b28c-95706c6fe6e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.311 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68e86e1d-074c-4d33-94d7-8f3bf190ce0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.324 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5a4ade-f907-4c44-ab2e-086f05ad274f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000025.
Dec 13 08:25:26 compute-0 ovn_controller[148476]: 2025-12-13T08:25:26Z|00329|binding|INFO|Setting lport 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 ovn-installed in OVS
Dec 13 08:25:26 compute-0 ovn_controller[148476]: 2025-12-13T08:25:26Z|00330|binding|INFO|Setting lport 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 up in Southbound
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.340 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce539abb-c18c-4dc5-a390-e802f9e48278]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.374 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[68384225-a606-4306-954b-9a209b1cd536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 systemd-udevd[290139]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:25:26 compute-0 NetworkManager[50376]: <info>  [1765614326.3837] manager: (tap9f51a29b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/150)
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.382 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a07518d6-3487-494c-bb0f-ad8d76d07e5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.424 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aa09c4c0-0571-4d2b-ae54-1896de7e9938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.429 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8f731663-c76c-4489-b58e-b1f8302e07df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 NetworkManager[50376]: <info>  [1765614326.4585] device (tap9f51a29b-e0): carrier: link connected
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.465 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[874b093b-1b4a-445d-98f9-c4777d61b649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.486 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2a3396fb-f7dd-46ac-bb1d-730d54af1443]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f51a29b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:6d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690367, 'reachable_time': 20778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290171, 'error': None, 'target': 'ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.529 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e684cafb-fa55-43e1-a668-3b0dbb1ebe83]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:6d00'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 690367, 'tstamp': 690367}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290172, 'error': None, 'target': 'ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.549 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[67b3c2df-abcb-4a73-b3d3-4099b00f72e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f51a29b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:6d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690367, 'reachable_time': 20778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290173, 'error': None, 'target': 'ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.593 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[efeaf166-7639-43c2-bc8e-cffcd4bc42a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.626 248514 DEBUG nova.compute.manager [req-35866012-d063-4564-8e6f-9e422c125cee req-a0e984a5-268f-463b-9509-b3c1ac3e2e6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.626 248514 DEBUG oslo_concurrency.lockutils [req-35866012-d063-4564-8e6f-9e422c125cee req-a0e984a5-268f-463b-9509-b3c1ac3e2e6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.628 248514 DEBUG oslo_concurrency.lockutils [req-35866012-d063-4564-8e6f-9e422c125cee req-a0e984a5-268f-463b-9509-b3c1ac3e2e6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.628 248514 DEBUG oslo_concurrency.lockutils [req-35866012-d063-4564-8e6f-9e422c125cee req-a0e984a5-268f-463b-9509-b3c1ac3e2e6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.628 248514 DEBUG nova.compute.manager [req-35866012-d063-4564-8e6f-9e422c125cee req-a0e984a5-268f-463b-9509-b3c1ac3e2e6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Processing event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.673 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0cb55e-ca73-4981-ae41-044d74af9efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.675 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f51a29b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.675 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.676 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f51a29b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.678 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:26 compute-0 NetworkManager[50376]: <info>  [1765614326.6785] manager: (tap9f51a29b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Dec 13 08:25:26 compute-0 kernel: tap9f51a29b-e0: entered promiscuous mode
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.684 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f51a29b-e0, col_values=(('external_ids', {'iface-id': 'eb9a13a8-7cca-49d7-94a2-673238e93c02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.686 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:26 compute-0 ovn_controller[148476]: 2025-12-13T08:25:26Z|00331|binding|INFO|Releasing lport eb9a13a8-7cca-49d7-94a2-673238e93c02 from this chassis (sb_readonly=0)
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.690 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f51a29b-e726-472d-a3c5-2b60fcdbe425.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f51a29b-e726-472d-a3c5-2b60fcdbe425.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.691 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3a49df17-06c1-4769-a4cb-2aa8cb941bd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.692 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-9f51a29b-e726-472d-a3c5-2b60fcdbe425
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/9f51a29b-e726-472d-a3c5-2b60fcdbe425.pid.haproxy
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 9f51a29b-e726-472d-a3c5-2b60fcdbe425
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:25:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.692 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'env', 'PROCESS_TAG=haproxy-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f51a29b-e726-472d-a3c5-2b60fcdbe425.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.701 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.926 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.927 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614326.925932, 84309355-d1f4-4a59-9f19-b212232e2428 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.927 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] VM Started (Lifecycle Event)
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.932 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.937 248514 INFO nova.virt.libvirt.driver [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Instance spawned successfully.
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.937 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.950 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.956 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.984 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.985 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614326.926225, 84309355-d1f4-4a59-9f19-b212232e2428 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.985 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] VM Paused (Lifecycle Event)
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.990 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.991 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.992 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.992 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.993 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:26 compute-0 nova_compute[248510]: 2025-12-13 08:25:26.993 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:27 compute-0 nova_compute[248510]: 2025-12-13 08:25:27.023 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:27 compute-0 nova_compute[248510]: 2025-12-13 08:25:27.028 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614326.9322202, 84309355-d1f4-4a59-9f19-b212232e2428 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:27 compute-0 nova_compute[248510]: 2025-12-13 08:25:27.028 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] VM Resumed (Lifecycle Event)
Dec 13 08:25:27 compute-0 nova_compute[248510]: 2025-12-13 08:25:27.060 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:27 compute-0 nova_compute[248510]: 2025-12-13 08:25:27.064 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:25:27 compute-0 nova_compute[248510]: 2025-12-13 08:25:27.076 248514 INFO nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Took 7.43 seconds to spawn the instance on the hypervisor.
Dec 13 08:25:27 compute-0 nova_compute[248510]: 2025-12-13 08:25:27.076 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:27 compute-0 nova_compute[248510]: 2025-12-13 08:25:27.104 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:25:27 compute-0 podman[290245]: 2025-12-13 08:25:27.105854635 +0000 UTC m=+0.058337700 container create 95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 08:25:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 171 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.1 MiB/s wr, 111 op/s
Dec 13 08:25:27 compute-0 systemd[1]: Started libpod-conmon-95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b.scope.
Dec 13 08:25:27 compute-0 nova_compute[248510]: 2025-12-13 08:25:27.163 248514 INFO nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Took 8.57 seconds to build instance.
Dec 13 08:25:27 compute-0 podman[290245]: 2025-12-13 08:25:27.074982064 +0000 UTC m=+0.027465129 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:25:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b051f3cea02d979437a5f7518015f0b37be7af8acf55c365f6681705a374d9be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:27 compute-0 podman[290245]: 2025-12-13 08:25:27.20452821 +0000 UTC m=+0.157011285 container init 95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 08:25:27 compute-0 nova_compute[248510]: 2025-12-13 08:25:27.206 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:27 compute-0 podman[290245]: 2025-12-13 08:25:27.213214904 +0000 UTC m=+0.165697959 container start 95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 08:25:27 compute-0 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [NOTICE]   (290265) : New worker (290267) forked
Dec 13 08:25:27 compute-0 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [NOTICE]   (290265) : Loading success.
Dec 13 08:25:28 compute-0 ceph-mon[76537]: pgmap v1808: 321 pgs: 321 active+clean; 171 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.1 MiB/s wr, 111 op/s
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.361 248514 DEBUG nova.network.neutron [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Successfully updated port: 66179e56-6ff7-4353-872a-ee206fe0b050 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.378 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.378 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquired lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.378 248514 DEBUG nova.network.neutron [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.673 248514 DEBUG nova.network.neutron [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.711 248514 DEBUG nova.compute.manager [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.750 248514 DEBUG nova.compute.manager [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Received event network-changed-66179e56-6ff7-4353-872a-ee206fe0b050 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.751 248514 DEBUG nova.compute.manager [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Refreshing instance network info cache due to event network-changed-66179e56-6ff7-4353-872a-ee206fe0b050. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.751 248514 DEBUG oslo_concurrency.lockutils [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.792 248514 INFO nova.compute.manager [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] instance snapshotting
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.793 248514 WARNING nova.compute.manager [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] trying to snapshot a non-running instance: (state: 3 expected: 1)
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.914 248514 DEBUG nova.compute.manager [req-131e7279-1a0a-4046-97b7-546e080ad1c8 req-7e626969-d7a1-4a2c-b0b0-5e93f4ab9f78 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.914 248514 DEBUG oslo_concurrency.lockutils [req-131e7279-1a0a-4046-97b7-546e080ad1c8 req-7e626969-d7a1-4a2c-b0b0-5e93f4ab9f78 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.915 248514 DEBUG oslo_concurrency.lockutils [req-131e7279-1a0a-4046-97b7-546e080ad1c8 req-7e626969-d7a1-4a2c-b0b0-5e93f4ab9f78 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.915 248514 DEBUG oslo_concurrency.lockutils [req-131e7279-1a0a-4046-97b7-546e080ad1c8 req-7e626969-d7a1-4a2c-b0b0-5e93f4ab9f78 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.915 248514 DEBUG nova.compute.manager [req-131e7279-1a0a-4046-97b7-546e080ad1c8 req-7e626969-d7a1-4a2c-b0b0-5e93f4ab9f78 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] No waiting events found dispatching network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:25:28 compute-0 nova_compute[248510]: 2025-12-13 08:25:28.915 248514 WARNING nova.compute.manager [req-131e7279-1a0a-4046-97b7-546e080ad1c8 req-7e626969-d7a1-4a2c-b0b0-5e93f4ab9f78 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received unexpected event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 for instance with vm_state active and task_state None.
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.113 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.115 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.115 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.116 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.116 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.117 248514 INFO nova.compute.manager [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Terminating instance
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.118 248514 DEBUG nova.compute.manager [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:25:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 181 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.194 248514 INFO nova.virt.libvirt.driver [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Beginning live snapshot process
Dec 13 08:25:29 compute-0 kernel: tap2ad09377-22 (unregistering): left promiscuous mode
Dec 13 08:25:29 compute-0 NetworkManager[50376]: <info>  [1765614329.2927] device (tap2ad09377-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:25:29 compute-0 ovn_controller[148476]: 2025-12-13T08:25:29Z|00332|binding|INFO|Releasing lport 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 from this chassis (sb_readonly=0)
Dec 13 08:25:29 compute-0 ovn_controller[148476]: 2025-12-13T08:25:29Z|00333|binding|INFO|Setting lport 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 down in Southbound
Dec 13 08:25:29 compute-0 ovn_controller[148476]: 2025-12-13T08:25:29Z|00334|binding|INFO|Removing iface tap2ad09377-22 ovn-installed in OVS
Dec 13 08:25:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:29.316 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:17:ba 10.100.0.4'], port_security=['fa:16:3e:38:17:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '84309355-d1f4-4a59-9f19-b212232e2428', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '59a0c2f158ce417f80e49cc5eb2db59d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f0954fb5-841d-4580-afb8-2b944d8752df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bbe5302-d9e0-4808-ac4c-176cf71b9a80, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2ad09377-22a9-4666-8a3b-f24c9cba7ac9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:25:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:29.318 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 in datapath 9f51a29b-e726-472d-a3c5-2b60fcdbe425 unbound from our chassis
Dec 13 08:25:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:29.320 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f51a29b-e726-472d-a3c5-2b60fcdbe425, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:25:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:29.321 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3fb91a-d10a-4229-b66e-6e3413e549aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:29.322 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425 namespace which is not needed anymore
Dec 13 08:25:29 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000025.scope: Deactivated successfully.
Dec 13 08:25:29 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000025.scope: Consumed 2.854s CPU time.
Dec 13 08:25:29 compute-0 systemd-machined[210538]: Machine qemu-43-instance-00000025 terminated.
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.371 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.380 248514 DEBUG nova.virt.libvirt.imagebackend [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.569 248514 INFO nova.virt.libvirt.driver [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Instance destroyed successfully.
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.570 248514 DEBUG nova.objects.instance [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lazy-loading 'resources' on Instance uuid 84309355-d1f4-4a59-9f19-b212232e2428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:29 compute-0 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [NOTICE]   (290265) : haproxy version is 2.8.14-c23fe91
Dec 13 08:25:29 compute-0 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [NOTICE]   (290265) : path to executable is /usr/sbin/haproxy
Dec 13 08:25:29 compute-0 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [WARNING]  (290265) : Exiting Master process...
Dec 13 08:25:29 compute-0 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [ALERT]    (290265) : Current worker (290267) exited with code 143 (Terminated)
Dec 13 08:25:29 compute-0 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [WARNING]  (290265) : All workers exited. Exiting... (0)
Dec 13 08:25:29 compute-0 systemd[1]: libpod-95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b.scope: Deactivated successfully.
Dec 13 08:25:29 compute-0 podman[290332]: 2025-12-13 08:25:29.652851648 +0000 UTC m=+0.227508033 container died 95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.761 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.774 248514 DEBUG nova.virt.libvirt.vif [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1068678446',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1068678446',id=37,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:25:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='59a0c2f158ce417f80e49cc5eb2db59d',ramdisk_id='',reservation_id='r-xk1seud9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-328435411',owner_user_name='tempest-InstanceActionsV221TestJSON-328435411-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:25:27Z,user_data=None,user_id='ca2c7fce813a4f919271d56491477e18',uuid=84309355-d1f4-4a59-9f19-b212232e2428,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.775 248514 DEBUG nova.network.os_vif_util [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Converting VIF {"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.776 248514 DEBUG nova.network.os_vif_util [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.777 248514 DEBUG os_vif [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.780 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.782 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ad09377-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:25:29 compute-0 nova_compute[248510]: 2025-12-13 08:25:29.788 248514 INFO os_vif [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22')
Dec 13 08:25:30 compute-0 nova_compute[248510]: 2025-12-13 08:25:30.033 248514 DEBUG nova.storage.rbd_utils [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(5e9462edf74042da85c0ddaea983ddb2) on rbd image(2c76f149-c467-4e59-afee-77940e515f8c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:25:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:25:30 compute-0 ceph-mon[76537]: pgmap v1809: 321 pgs: 321 active+clean; 181 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Dec 13 08:25:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b-userdata-shm.mount: Deactivated successfully.
Dec 13 08:25:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b051f3cea02d979437a5f7518015f0b37be7af8acf55c365f6681705a374d9be-merged.mount: Deactivated successfully.
Dec 13 08:25:30 compute-0 podman[290332]: 2025-12-13 08:25:30.494786212 +0000 UTC m=+1.069442577 container cleanup 95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:25:30 compute-0 systemd[1]: libpod-conmon-95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b.scope: Deactivated successfully.
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.047 248514 DEBUG nova.compute.manager [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-vif-unplugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.048 248514 DEBUG oslo_concurrency.lockutils [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.049 248514 DEBUG oslo_concurrency.lockutils [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.049 248514 DEBUG oslo_concurrency.lockutils [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.049 248514 DEBUG nova.compute.manager [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] No waiting events found dispatching network-vif-unplugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.049 248514 DEBUG nova.compute.manager [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-vif-unplugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.050 248514 DEBUG nova.compute.manager [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.050 248514 DEBUG oslo_concurrency.lockutils [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.050 248514 DEBUG oslo_concurrency.lockutils [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.050 248514 DEBUG oslo_concurrency.lockutils [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.050 248514 DEBUG nova.compute.manager [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] No waiting events found dispatching network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.051 248514 WARNING nova.compute.manager [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received unexpected event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 for instance with vm_state active and task_state deleting.
Dec 13 08:25:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 181 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.204 248514 DEBUG nova.network.neutron [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Updating instance_info_cache with network_info: [{"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.230 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Releasing lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.231 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Instance network_info: |[{"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.231 248514 DEBUG oslo_concurrency.lockutils [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.232 248514 DEBUG nova.network.neutron [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Refreshing network info cache for port 66179e56-6ff7-4353-872a-ee206fe0b050 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.235 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Start _get_guest_xml network_info=[{"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.240 248514 WARNING nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.246 248514 DEBUG nova.virt.libvirt.host [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.247 248514 DEBUG nova.virt.libvirt.host [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.253 248514 DEBUG nova.virt.libvirt.host [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.254 248514 DEBUG nova.virt.libvirt.host [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.254 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.254 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.255 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.255 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.255 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.255 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.255 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.256 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.256 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.256 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.256 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.257 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.260 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:31 compute-0 podman[290409]: 2025-12-13 08:25:31.292038093 +0000 UTC m=+0.769451746 container remove 95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 08:25:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.301 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f961c2-35dc-4c67-b66c-19bd9a9e0c15]: (4, ('Sat Dec 13 08:25:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425 (95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b)\n95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b\nSat Dec 13 08:25:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425 (95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b)\n95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.304 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a802e16-1511-488f-913b-4057ab6d4fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.306 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f51a29b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:31 compute-0 kernel: tap9f51a29b-e0: left promiscuous mode
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.324 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.328 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[21c60805-5157-46b5-9982-382c4af15304]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.348 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[83f6c95f-d2ac-4d66-adbb-b38da087b1fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.350 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2f02d8-28d6-4414-a7bf-41c1472ac551]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.370 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d313267-b7af-49bb-8068-34297ac96138]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690358, 'reachable_time': 19203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290428, 'error': None, 'target': 'ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.374 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:25:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f51a29b\x2de726\x2d472d\x2da3c5\x2d2b60fcdbe425.mount: Deactivated successfully.
Dec 13 08:25:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.374 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae1c4ac-f9f1-4afe-8775-e4625451081c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Dec 13 08:25:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Dec 13 08:25:31 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Dec 13 08:25:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:25:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3232377830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.836 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.872 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.880 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:31 compute-0 nova_compute[248510]: 2025-12-13 08:25:31.964 248514 DEBUG nova.storage.rbd_utils [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] cloning vms/2c76f149-c467-4e59-afee-77940e515f8c_disk@5e9462edf74042da85c0ddaea983ddb2 to images/385a36c8-745e-468c-aebe-757bcf15df75 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.040 248514 INFO nova.virt.libvirt.driver [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Deleting instance files /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428_del
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.042 248514 INFO nova.virt.libvirt.driver [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Deletion of /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428_del complete
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.082 248514 DEBUG nova.storage.rbd_utils [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] flattening images/385a36c8-745e-468c-aebe-757bcf15df75 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.308 248514 INFO nova.compute.manager [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Took 3.19 seconds to destroy the instance on the hypervisor.
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.309 248514 DEBUG oslo.service.loopingcall [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.309 248514 DEBUG nova.compute.manager [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.310 248514 DEBUG nova.network.neutron [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:25:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:25:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2283608750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.499 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.501 248514 DEBUG nova.virt.libvirt.vif [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1257374937',display_name='tempest-ServersTestJSON-server-1257374937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1257374937',id=38,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQM1EROcyTGYUNd3w+nL2c9tIiIpr7CpaZ/uVd5bqgtT8dSelOLMXPhJ/HVb4yRy7qvGCJbUXgeaaHZuyNIHWqDvsU3xORtagVm9kIjwgQBLgTK/GBqyOzmv5WpZhagyQ==',key_name='tempest-keypair-1165619389',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e370bdecda394d32b21d4eee440a61fa',ramdisk_id='',reservation_id='r-tuwuy5hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1483541269',owner_user_name='tempest-ServersTestJSON-1483541269-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f6fadd0581d041428cc88161ae6e6e02',uuid=29375ff9-300a-43de-a53d-942e7afbb439,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.502 248514 DEBUG nova.network.os_vif_util [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Converting VIF {"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.503 248514 DEBUG nova.network.os_vif_util [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ee:5f,bridge_name='br-int',has_traffic_filtering=True,id=66179e56-6ff7-4353-872a-ee206fe0b050,network=Network(f9453e13-be77-4aff-899d-cbb572239200),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66179e56-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.504 248514 DEBUG nova.objects.instance [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 29375ff9-300a-43de-a53d-942e7afbb439 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.661 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:25:32 compute-0 nova_compute[248510]:   <uuid>29375ff9-300a-43de-a53d-942e7afbb439</uuid>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   <name>instance-00000026</name>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersTestJSON-server-1257374937</nova:name>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:25:31</nova:creationTime>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:25:32 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:25:32 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:25:32 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:25:32 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:25:32 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:25:32 compute-0 nova_compute[248510]:         <nova:user uuid="f6fadd0581d041428cc88161ae6e6e02">tempest-ServersTestJSON-1483541269-project-member</nova:user>
Dec 13 08:25:32 compute-0 nova_compute[248510]:         <nova:project uuid="e370bdecda394d32b21d4eee440a61fa">tempest-ServersTestJSON-1483541269</nova:project>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:25:32 compute-0 nova_compute[248510]:         <nova:port uuid="66179e56-6ff7-4353-872a-ee206fe0b050">
Dec 13 08:25:32 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <system>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <entry name="serial">29375ff9-300a-43de-a53d-942e7afbb439</entry>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <entry name="uuid">29375ff9-300a-43de-a53d-942e7afbb439</entry>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     </system>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   <os>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   </os>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   <features>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   </features>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/29375ff9-300a-43de-a53d-942e7afbb439_disk">
Dec 13 08:25:32 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       </source>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:25:32 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/29375ff9-300a-43de-a53d-942e7afbb439_disk.config">
Dec 13 08:25:32 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       </source>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:25:32 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:c2:ee:5f"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <target dev="tap66179e56-6f"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/console.log" append="off"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <video>
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     </video>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:25:32 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:25:32 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:25:32 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:25:32 compute-0 nova_compute[248510]: </domain>
Dec 13 08:25:32 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.664 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Preparing to wait for external event network-vif-plugged-66179e56-6ff7-4353-872a-ee206fe0b050 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.665 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "29375ff9-300a-43de-a53d-942e7afbb439-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.665 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.665 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.666 248514 DEBUG nova.virt.libvirt.vif [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1257374937',display_name='tempest-ServersTestJSON-server-1257374937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1257374937',id=38,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQM1EROcyTGYUNd3w+nL2c9tIiIpr7CpaZ/uVd5bqgtT8dSelOLMXPhJ/HVb4yRy7qvGCJbUXgeaaHZuyNIHWqDvsU3xORtagVm9kIjwgQBLgTK/GBqyOzmv5WpZhagyQ==',key_name='tempest-keypair-1165619389',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e370bdecda394d32b21d4eee440a61fa',ramdisk_id='',reservation_id='r-tuwuy5hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1483541269',owner_user_name='tempest-ServersTestJSON-1483541269-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f6fadd0581d041428cc88161ae6e6e02',uuid=29375ff9-300a-43de-a53d-942e7afbb439,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.667 248514 DEBUG nova.network.os_vif_util [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Converting VIF {"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.667 248514 DEBUG nova.network.os_vif_util [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ee:5f,bridge_name='br-int',has_traffic_filtering=True,id=66179e56-6ff7-4353-872a-ee206fe0b050,network=Network(f9453e13-be77-4aff-899d-cbb572239200),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66179e56-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.668 248514 DEBUG os_vif [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ee:5f,bridge_name='br-int',has_traffic_filtering=True,id=66179e56-6ff7-4353-872a-ee206fe0b050,network=Network(f9453e13-be77-4aff-899d-cbb572239200),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66179e56-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.669 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.669 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.676 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66179e56-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.676 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66179e56-6f, col_values=(('external_ids', {'iface-id': '66179e56-6ff7-4353-872a-ee206fe0b050', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:ee:5f', 'vm-uuid': '29375ff9-300a-43de-a53d-942e7afbb439'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:32 compute-0 NetworkManager[50376]: <info>  [1765614332.6807] manager: (tap66179e56-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.686 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:32 compute-0 nova_compute[248510]: 2025-12-13 08:25:32.688 248514 INFO os_vif [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ee:5f,bridge_name='br-int',has_traffic_filtering=True,id=66179e56-6ff7-4353-872a-ee206fe0b050,network=Network(f9453e13-be77-4aff-899d-cbb572239200),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66179e56-6f')
Dec 13 08:25:32 compute-0 ceph-mon[76537]: pgmap v1810: 321 pgs: 321 active+clean; 181 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Dec 13 08:25:32 compute-0 ceph-mon[76537]: osdmap e190: 3 total, 3 up, 3 in
Dec 13 08:25:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3232377830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2283608750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 181 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.7 MiB/s wr, 214 op/s
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.163 248514 DEBUG nova.network.neutron [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Updated VIF entry in instance network info cache for port 66179e56-6ff7-4353-872a-ee206fe0b050. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.163 248514 DEBUG nova.network.neutron [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Updating instance_info_cache with network_info: [{"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.290 248514 DEBUG oslo_concurrency.lockutils [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.296 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.297 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.297 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] No VIF found with MAC fa:16:3e:c2:ee:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.298 248514 INFO nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Using config drive
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.322 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.372 248514 DEBUG nova.network.neutron [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.392 248514 INFO nova.compute.manager [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Took 1.08 seconds to deallocate network for instance.
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.444 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.444 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.595 248514 DEBUG oslo_concurrency.processutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.647 248514 INFO nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Creating config drive at /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/disk.config
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.653 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoba82dx4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.759 248514 DEBUG nova.storage.rbd_utils [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] removing snapshot(5e9462edf74042da85c0ddaea983ddb2) on rbd image(2c76f149-c467-4e59-afee-77940e515f8c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.795 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoba82dx4" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.820 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.824 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/disk.config 29375ff9-300a-43de-a53d-942e7afbb439_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Dec 13 08:25:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Dec 13 08:25:33 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Dec 13 08:25:33 compute-0 nova_compute[248510]: 2025-12-13 08:25:33.881 248514 DEBUG nova.storage.rbd_utils [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(snap) on rbd image(385a36c8-745e-468c-aebe-757bcf15df75) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:25:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:25:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3004742559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:34 compute-0 nova_compute[248510]: 2025-12-13 08:25:34.178 248514 DEBUG oslo_concurrency.processutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:34 compute-0 nova_compute[248510]: 2025-12-13 08:25:34.187 248514 DEBUG nova.compute.provider_tree [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:25:34 compute-0 nova_compute[248510]: 2025-12-13 08:25:34.229 248514 DEBUG nova.scheduler.client.report [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:25:34 compute-0 nova_compute[248510]: 2025-12-13 08:25:34.257 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:34 compute-0 nova_compute[248510]: 2025-12-13 08:25:34.296 248514 INFO nova.scheduler.client.report [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Deleted allocations for instance 84309355-d1f4-4a59-9f19-b212232e2428
Dec 13 08:25:34 compute-0 nova_compute[248510]: 2025-12-13 08:25:34.382 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:34 compute-0 nova_compute[248510]: 2025-12-13 08:25:34.417 248514 DEBUG nova.compute.manager [req-b1f8b38c-7cda-4b0b-8173-507027b3515d req-040e502f-f43e-48d4-82c0-0802102b1029 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-vif-deleted-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:34 compute-0 nova_compute[248510]: 2025-12-13 08:25:34.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Dec 13 08:25:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Dec 13 08:25:34 compute-0 nova_compute[248510]: 2025-12-13 08:25:34.882 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/disk.config 29375ff9-300a-43de-a53d-942e7afbb439_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:34 compute-0 nova_compute[248510]: 2025-12-13 08:25:34.882 248514 INFO nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Deleting local config drive /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/disk.config because it was imported into RBD.
Dec 13 08:25:34 compute-0 kernel: tap66179e56-6f: entered promiscuous mode
Dec 13 08:25:34 compute-0 NetworkManager[50376]: <info>  [1765614334.9472] manager: (tap66179e56-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Dec 13 08:25:34 compute-0 ovn_controller[148476]: 2025-12-13T08:25:34Z|00335|binding|INFO|Claiming lport 66179e56-6ff7-4353-872a-ee206fe0b050 for this chassis.
Dec 13 08:25:34 compute-0 ovn_controller[148476]: 2025-12-13T08:25:34Z|00336|binding|INFO|66179e56-6ff7-4353-872a-ee206fe0b050: Claiming fa:16:3e:c2:ee:5f 10.100.0.13
Dec 13 08:25:34 compute-0 nova_compute[248510]: 2025-12-13 08:25:34.948 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:34 compute-0 nova_compute[248510]: 2025-12-13 08:25:34.952 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.960 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:ee:5f 10.100.0.13'], port_security=['fa:16:3e:c2:ee:5f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '29375ff9-300a-43de-a53d-942e7afbb439', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9453e13-be77-4aff-899d-cbb572239200', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e370bdecda394d32b21d4eee440a61fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '06ce8d20-248e-4763-9b89-5c8df9c9f100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=490cf977-f165-4aa4-8179-937f6e939091, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=66179e56-6ff7-4353-872a-ee206fe0b050) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:25:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.962 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 66179e56-6ff7-4353-872a-ee206fe0b050 in datapath f9453e13-be77-4aff-899d-cbb572239200 bound to our chassis
Dec 13 08:25:34 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Dec 13 08:25:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.963 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f9453e13-be77-4aff-899d-cbb572239200
Dec 13 08:25:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.978 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2cefc7ea-fa33-438b-8bc7-bbb0191f57c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.979 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf9453e13-b1 in ovnmeta-f9453e13-be77-4aff-899d-cbb572239200 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:25:34 compute-0 systemd-udevd[290676]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:25:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.982 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf9453e13-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:25:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.982 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0d7fa7-485d-47ee-829e-bb5efc26811c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.983 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec714b9-bdef-4bb0-a39f-d2f3dc695b5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:34 compute-0 NetworkManager[50376]: <info>  [1765614334.9941] device (tap66179e56-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:25:34 compute-0 NetworkManager[50376]: <info>  [1765614334.9949] device (tap66179e56-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:25:35 compute-0 systemd-machined[210538]: New machine qemu-44-instance-00000026.
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.998 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[5414ec3b-dcfc-4c59-9b39-ae2b93683fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 ceph-mon[76537]: pgmap v1812: 321 pgs: 321 active+clean; 181 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.7 MiB/s wr, 214 op/s
Dec 13 08:25:35 compute-0 ceph-mon[76537]: osdmap e191: 3 total, 3 up, 3 in
Dec 13 08:25:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3004742559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:35 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000026.
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.017 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:35 compute-0 ovn_controller[148476]: 2025-12-13T08:25:35Z|00337|binding|INFO|Setting lport 66179e56-6ff7-4353-872a-ee206fe0b050 ovn-installed in OVS
Dec 13 08:25:35 compute-0 ovn_controller[148476]: 2025-12-13T08:25:35Z|00338|binding|INFO|Setting lport 66179e56-6ff7-4353-872a-ee206fe0b050 up in Southbound
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.022 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.028 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0884279e-0b33-4816-9a6d-f24e6ac8bc24]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.065 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3885fa91-2d97-4f35-88f9-39f9962a990e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.071 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[986f1eec-b1b3-4068-996c-c5459468b360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 NetworkManager[50376]: <info>  [1765614335.0723] manager: (tapf9453e13-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/154)
Dec 13 08:25:35 compute-0 systemd-udevd[290680]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.108 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7c6763-9dbb-4955-8b81-121af71e5425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.112 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1e5c58-512a-422c-8d3d-89462b4f2c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 176 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 157 op/s
Dec 13 08:25:35 compute-0 NetworkManager[50376]: <info>  [1765614335.1409] device (tapf9453e13-b0): carrier: link connected
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.147 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce64d7b-d2e7-4ccc-93f1-db81e67435f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.168 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d2d15d-195f-4bae-be8d-8111c7f04bea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf9453e13-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:a3:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691235, 'reachable_time': 42879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290709, 'error': None, 'target': 'ovnmeta-f9453e13-be77-4aff-899d-cbb572239200', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.190 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d59c396-199d-4f1f-ad5a-3abd55322047]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:a3cd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691235, 'tstamp': 691235}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290710, 'error': None, 'target': 'ovnmeta-f9453e13-be77-4aff-899d-cbb572239200', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.213 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b72931bf-faa9-43f6-b66e-f44a0be0a4ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf9453e13-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:a3:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691235, 'reachable_time': 42879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290711, 'error': None, 'target': 'ovnmeta-f9453e13-be77-4aff-899d-cbb572239200', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.250 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0def6b13-9e46-463b-9f6f-b7369ca202db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.323 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2bf6fb-71e1-4f9e-974f-47610e883b8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.325 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf9453e13-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.326 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.326 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf9453e13-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:35 compute-0 NetworkManager[50376]: <info>  [1765614335.3295] manager: (tapf9453e13-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:35 compute-0 kernel: tapf9453e13-b0: entered promiscuous mode
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.332 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf9453e13-b0, col_values=(('external_ids', {'iface-id': 'b4b5208f-f540-413f-b711-c27b19daefc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:35 compute-0 ovn_controller[148476]: 2025-12-13T08:25:35Z|00339|binding|INFO|Releasing lport b4b5208f-f540-413f-b711-c27b19daefc1 from this chassis (sb_readonly=0)
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.334 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.338 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f9453e13-be77-4aff-899d-cbb572239200.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f9453e13-be77-4aff-899d-cbb572239200.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.339 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea01f35-01b1-4c1f-a792-da00f692e478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.340 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-f9453e13-be77-4aff-899d-cbb572239200
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/f9453e13-be77-4aff-899d-cbb572239200.pid.haproxy
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID f9453e13-be77-4aff-899d-cbb572239200
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:25:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.341 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f9453e13-be77-4aff-899d-cbb572239200', 'env', 'PROCESS_TAG=haproxy-f9453e13-be77-4aff-899d-cbb572239200', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f9453e13-be77-4aff-899d-cbb572239200.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.348 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.539 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614335.5391817, 29375ff9-300a-43de-a53d-942e7afbb439 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.540 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] VM Started (Lifecycle Event)
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.570 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.574 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614335.5414612, 29375ff9-300a-43de-a53d-942e7afbb439 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.575 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] VM Paused (Lifecycle Event)
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.603 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.609 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:25:35 compute-0 nova_compute[248510]: 2025-12-13 08:25:35.633 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:25:35 compute-0 podman[290785]: 2025-12-13 08:25:35.734877004 +0000 UTC m=+0.028903274 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:25:36 compute-0 podman[290785]: 2025-12-13 08:25:36.005351408 +0000 UTC m=+0.299377648 container create 1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:25:36 compute-0 ceph-mon[76537]: osdmap e192: 3 total, 3 up, 3 in
Dec 13 08:25:36 compute-0 ceph-mon[76537]: pgmap v1815: 321 pgs: 321 active+clean; 176 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 157 op/s
Dec 13 08:25:36 compute-0 systemd[1]: Started libpod-conmon-1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04.scope.
Dec 13 08:25:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d31eec13ac5f88b15ff754f3542405a83e8a8c72b6ef49186db1b491f558445/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:36 compute-0 podman[290785]: 2025-12-13 08:25:36.10190212 +0000 UTC m=+0.395928380 container init 1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 08:25:36 compute-0 podman[290785]: 2025-12-13 08:25:36.108761339 +0000 UTC m=+0.402787579 container start 1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Dec 13 08:25:36 compute-0 neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200[290800]: [NOTICE]   (290804) : New worker (290806) forked
Dec 13 08:25:36 compute-0 neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200[290800]: [NOTICE]   (290804) : Loading success.
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.437 248514 DEBUG nova.compute.manager [req-dc83a1d9-a998-4051-8d76-612d4adcaf83 req-ada1e826-14ae-4c84-b1d5-40ec945964ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Received event network-vif-plugged-66179e56-6ff7-4353-872a-ee206fe0b050 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.437 248514 DEBUG oslo_concurrency.lockutils [req-dc83a1d9-a998-4051-8d76-612d4adcaf83 req-ada1e826-14ae-4c84-b1d5-40ec945964ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "29375ff9-300a-43de-a53d-942e7afbb439-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.438 248514 DEBUG oslo_concurrency.lockutils [req-dc83a1d9-a998-4051-8d76-612d4adcaf83 req-ada1e826-14ae-4c84-b1d5-40ec945964ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.438 248514 DEBUG oslo_concurrency.lockutils [req-dc83a1d9-a998-4051-8d76-612d4adcaf83 req-ada1e826-14ae-4c84-b1d5-40ec945964ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.438 248514 DEBUG nova.compute.manager [req-dc83a1d9-a998-4051-8d76-612d4adcaf83 req-ada1e826-14ae-4c84-b1d5-40ec945964ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Processing event network-vif-plugged-66179e56-6ff7-4353-872a-ee206fe0b050 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.438 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.444 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614336.443807, 29375ff9-300a-43de-a53d-942e7afbb439 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.444 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] VM Resumed (Lifecycle Event)
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.447 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.451 248514 INFO nova.virt.libvirt.driver [-] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Instance spawned successfully.
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.451 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.471 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.478 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.483 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.484 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.484 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.485 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.485 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.486 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.512 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.575 248514 INFO nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Took 14.27 seconds to spawn the instance on the hypervisor.
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.576 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.648 248514 INFO nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Took 15.74 seconds to build instance.
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.671 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.812 248514 INFO nova.virt.libvirt.driver [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Snapshot image upload complete
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.812 248514 INFO nova.compute.manager [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Took 8.02 seconds to snapshot the instance on the hypervisor.
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.895 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "50753e78-0239-4dc0-aeb1-4ee82b493132" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.895 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.914 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:25:36 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.999 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:37 compute-0 nova_compute[248510]: 2025-12-13 08:25:36.999 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:37 compute-0 nova_compute[248510]: 2025-12-13 08:25:37.008 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:25:37 compute-0 nova_compute[248510]: 2025-12-13 08:25:37.008 248514 INFO nova.compute.claims [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:25:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 176 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 157 op/s
Dec 13 08:25:37 compute-0 nova_compute[248510]: 2025-12-13 08:25:37.151 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:25:37 compute-0 nova_compute[248510]: 2025-12-13 08:25:37.344 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:37 compute-0 nova_compute[248510]: 2025-12-13 08:25:37.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:25:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2198550678' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:37 compute-0 nova_compute[248510]: 2025-12-13 08:25:37.901 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:37 compute-0 nova_compute[248510]: 2025-12-13 08:25:37.907 248514 DEBUG nova.compute.provider_tree [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:25:37 compute-0 nova_compute[248510]: 2025-12-13 08:25:37.978 248514 DEBUG nova.scheduler.client.report [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.020 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.021 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:25:38 compute-0 ceph-mon[76537]: pgmap v1816: 321 pgs: 321 active+clean; 176 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 157 op/s
Dec 13 08:25:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2198550678' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.447 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.448 248514 DEBUG nova.network.neutron [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.532 248514 INFO nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.652 248514 DEBUG nova.policy [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b6f7967ce16c45d394188c1302b02907', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.696 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.753 248514 DEBUG nova.compute.manager [req-f6402574-004a-4eb2-a552-7ca697adb40c req-3ecbde43-5b72-4985-aebb-11891b53eedb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Received event network-vif-plugged-66179e56-6ff7-4353-872a-ee206fe0b050 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.754 248514 DEBUG oslo_concurrency.lockutils [req-f6402574-004a-4eb2-a552-7ca697adb40c req-3ecbde43-5b72-4985-aebb-11891b53eedb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "29375ff9-300a-43de-a53d-942e7afbb439-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.755 248514 DEBUG oslo_concurrency.lockutils [req-f6402574-004a-4eb2-a552-7ca697adb40c req-3ecbde43-5b72-4985-aebb-11891b53eedb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.755 248514 DEBUG oslo_concurrency.lockutils [req-f6402574-004a-4eb2-a552-7ca697adb40c req-3ecbde43-5b72-4985-aebb-11891b53eedb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.756 248514 DEBUG nova.compute.manager [req-f6402574-004a-4eb2-a552-7ca697adb40c req-3ecbde43-5b72-4985-aebb-11891b53eedb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] No waiting events found dispatching network-vif-plugged-66179e56-6ff7-4353-872a-ee206fe0b050 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.756 248514 WARNING nova.compute.manager [req-f6402574-004a-4eb2-a552-7ca697adb40c req-3ecbde43-5b72-4985-aebb-11891b53eedb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Received unexpected event network-vif-plugged-66179e56-6ff7-4353-872a-ee206fe0b050 for instance with vm_state active and task_state None.
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.946 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.948 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.949 248514 INFO nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Creating image(s)
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.970 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:38 compute-0 nova_compute[248510]: 2025-12-13 08:25:38.996 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.026 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.032 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.105 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.106 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.107 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.108 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.130 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 181 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 2.9 MiB/s wr, 279 op/s
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.136 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 50753e78-0239-4dc0-aeb1-4ee82b493132_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.406 248514 DEBUG nova.network.neutron [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Successfully created port: 18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.742 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 50753e78-0239-4dc0-aeb1-4ee82b493132_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.773 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.804 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] resizing rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.913 248514 DEBUG nova.objects.instance [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'migration_context' on Instance uuid 50753e78-0239-4dc0-aeb1-4ee82b493132 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.982 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.983 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Ensure instance console log exists: /var/lib/nova/instances/50753e78-0239-4dc0-aeb1-4ee82b493132/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.983 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.984 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:39 compute-0 nova_compute[248510]: 2025-12-13 08:25:39.984 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:25:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Dec 13 08:25:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Dec 13 08:25:40 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Dec 13 08:25:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:25:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:25:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:25:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:25:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:25:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:25:40 compute-0 ceph-mon[76537]: pgmap v1817: 321 pgs: 321 active+clean; 181 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 2.9 MiB/s wr, 279 op/s
Dec 13 08:25:40 compute-0 ceph-mon[76537]: osdmap e193: 3 total, 3 up, 3 in
Dec 13 08:25:40 compute-0 NetworkManager[50376]: <info>  [1765614340.4286] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Dec 13 08:25:40 compute-0 NetworkManager[50376]: <info>  [1765614340.4292] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Dec 13 08:25:40 compute-0 nova_compute[248510]: 2025-12-13 08:25:40.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:40 compute-0 nova_compute[248510]: 2025-12-13 08:25:40.512 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:40 compute-0 ovn_controller[148476]: 2025-12-13T08:25:40Z|00340|binding|INFO|Releasing lport b4b5208f-f540-413f-b711-c27b19daefc1 from this chassis (sb_readonly=0)
Dec 13 08:25:40 compute-0 ovn_controller[148476]: 2025-12-13T08:25:40Z|00341|binding|INFO|Releasing lport 3e59c36d-12c7-4d92-aa2f-8b6a795f3f98 from this chassis (sb_readonly=0)
Dec 13 08:25:40 compute-0 nova_compute[248510]: 2025-12-13 08:25:40.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:40 compute-0 nova_compute[248510]: 2025-12-13 08:25:40.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 181 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 180 op/s
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.149 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.150 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.150 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.151 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.151 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.153 248514 INFO nova.compute.manager [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Terminating instance
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.154 248514 DEBUG nova.compute.manager [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:25:41 compute-0 kernel: tapa5b00a7b-ee (unregistering): left promiscuous mode
Dec 13 08:25:41 compute-0 NetworkManager[50376]: <info>  [1765614341.1883] device (tapa5b00a7b-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.191 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:41 compute-0 ovn_controller[148476]: 2025-12-13T08:25:41Z|00342|binding|INFO|Releasing lport a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 from this chassis (sb_readonly=0)
Dec 13 08:25:41 compute-0 ovn_controller[148476]: 2025-12-13T08:25:41Z|00343|binding|INFO|Setting lport a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 down in Southbound
Dec 13 08:25:41 compute-0 ovn_controller[148476]: 2025-12-13T08:25:41Z|00344|binding|INFO|Removing iface tapa5b00a7b-ee ovn-installed in OVS
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.213 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:41 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000024.scope: Deactivated successfully.
Dec 13 08:25:41 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000024.scope: Consumed 3.865s CPU time.
Dec 13 08:25:41 compute-0 systemd-machined[210538]: Machine qemu-42-instance-00000024 terminated.
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.238 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:0b:3c 10.100.0.13'], port_security=['fa:16:3e:e8:0b:3c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2c76f149-c467-4e59-afee-77940e515f8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.239 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc unbound from our chassis
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.241 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.242 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e8cb48-c537-4271-959a-303cc48640de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.242 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace which is not needed anymore
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.377 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:41 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[289819]: [NOTICE]   (289823) : haproxy version is 2.8.14-c23fe91
Dec 13 08:25:41 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[289819]: [NOTICE]   (289823) : path to executable is /usr/sbin/haproxy
Dec 13 08:25:41 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[289819]: [WARNING]  (289823) : Exiting Master process...
Dec 13 08:25:41 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[289819]: [ALERT]    (289823) : Current worker (289827) exited with code 143 (Terminated)
Dec 13 08:25:41 compute-0 rsyslogd[1002]: imjournal from <np0005558241:nova_compute>: begin to drop messages due to rate-limiting
Dec 13 08:25:41 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[289819]: [WARNING]  (289823) : All workers exited. Exiting... (0)
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.389 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:41 compute-0 systemd[1]: libpod-4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4.scope: Deactivated successfully.
Dec 13 08:25:41 compute-0 podman[291028]: 2025-12-13 08:25:41.395482201 +0000 UTC m=+0.059074999 container died 4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.402 248514 INFO nova.virt.libvirt.driver [-] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Instance destroyed successfully.
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.403 248514 DEBUG nova.objects.instance [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'resources' on Instance uuid 2c76f149-c467-4e59-afee-77940e515f8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4-userdata-shm.mount: Deactivated successfully.
Dec 13 08:25:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-27ffc37c60de5edd5330ae1bfe43f83df8db2401855afba2c6dc6e244c01b6f7-merged.mount: Deactivated successfully.
Dec 13 08:25:41 compute-0 podman[291028]: 2025-12-13 08:25:41.439362353 +0000 UTC m=+0.102955151 container cleanup 4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:25:41 compute-0 systemd[1]: libpod-conmon-4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4.scope: Deactivated successfully.
Dec 13 08:25:41 compute-0 podman[291065]: 2025-12-13 08:25:41.511394471 +0000 UTC m=+0.046520079 container remove 4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.518 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[30a51d9c-ecb7-4eaa-8bb1-d3469671542e]: (4, ('Sat Dec 13 08:25:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4)\n4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4\nSat Dec 13 08:25:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4)\n4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.520 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc2af51-4147-4558-8820-f784f0e8a729]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.521 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:41 compute-0 kernel: tap87bd91d0-e0: left promiscuous mode
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.543 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a69f040c-7ee3-479b-82e3-b151dc607d95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.572 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e11d6f93-ac07-4804-b925-33e7b8086c54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[325a369f-a301-4c55-b2c1-21c0a1be71cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.592 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5a22ce33-b5dc-4fac-906b-9017111ed03d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689816, 'reachable_time': 16215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291085, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d87bd91d0\x2deead\x2d49b6\x2d8f92\x2df8d0dba555dc.mount: Deactivated successfully.
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.595 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:25:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:41.596 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[831ab42e-7a18-46f3-a96d-cff7c4767e47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.667 248514 DEBUG nova.virt.libvirt.vif [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:25:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1811238498',display_name='tempest-ImagesTestJSON-server-1811238498',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1811238498',id=36,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:25:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-2ad42grg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:25:36Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=2c76f149-c467-4e59-afee-77940e515f8c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.668 248514 DEBUG nova.network.os_vif_util [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.669 248514 DEBUG nova.network.os_vif_util [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:0b:3c,bridge_name='br-int',has_traffic_filtering=True,id=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5b00a7b-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.669 248514 DEBUG os_vif [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:0b:3c,bridge_name='br-int',has_traffic_filtering=True,id=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5b00a7b-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.671 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.671 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5b00a7b-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.675 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.677 248514 INFO os_vif [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:0b:3c,bridge_name='br-int',has_traffic_filtering=True,id=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5b00a7b-ee')
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.800 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.800 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.927 248514 DEBUG nova.network.neutron [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Successfully updated port: 18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.934 248514 INFO nova.virt.libvirt.driver [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Deleting instance files /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c_del
Dec 13 08:25:41 compute-0 nova_compute[248510]: 2025-12-13 08:25:41.934 248514 INFO nova.virt.libvirt.driver [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Deletion of /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c_del complete
Dec 13 08:25:42 compute-0 nova_compute[248510]: 2025-12-13 08:25:42.063 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "refresh_cache-50753e78-0239-4dc0-aeb1-4ee82b493132" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:25:42 compute-0 nova_compute[248510]: 2025-12-13 08:25:42.063 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquired lock "refresh_cache-50753e78-0239-4dc0-aeb1-4ee82b493132" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:25:42 compute-0 nova_compute[248510]: 2025-12-13 08:25:42.064 248514 DEBUG nova.network.neutron [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:25:42 compute-0 ceph-mon[76537]: pgmap v1819: 321 pgs: 321 active+clean; 181 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 180 op/s
Dec 13 08:25:42 compute-0 nova_compute[248510]: 2025-12-13 08:25:42.334 248514 INFO nova.compute.manager [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Took 1.18 seconds to destroy the instance on the hypervisor.
Dec 13 08:25:42 compute-0 nova_compute[248510]: 2025-12-13 08:25:42.334 248514 DEBUG oslo.service.loopingcall [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:25:42 compute-0 nova_compute[248510]: 2025-12-13 08:25:42.334 248514 DEBUG nova.compute.manager [-] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:25:42 compute-0 nova_compute[248510]: 2025-12-13 08:25:42.335 248514 DEBUG nova.network.neutron [-] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:25:42 compute-0 nova_compute[248510]: 2025-12-13 08:25:42.436 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:25:42 compute-0 nova_compute[248510]: 2025-12-13 08:25:42.436 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:25:42 compute-0 nova_compute[248510]: 2025-12-13 08:25:42.437 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:25:42 compute-0 nova_compute[248510]: 2025-12-13 08:25:42.437 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 29375ff9-300a-43de-a53d-942e7afbb439 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:42 compute-0 nova_compute[248510]: 2025-12-13 08:25:42.639 248514 DEBUG nova.network.neutron [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:25:43 compute-0 ovn_controller[148476]: 2025-12-13T08:25:43Z|00345|binding|INFO|Releasing lport b4b5208f-f540-413f-b711-c27b19daefc1 from this chassis (sb_readonly=0)
Dec 13 08:25:43 compute-0 nova_compute[248510]: 2025-12-13 08:25:43.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 163 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 995 KiB/s wr, 159 op/s
Dec 13 08:25:43 compute-0 nova_compute[248510]: 2025-12-13 08:25:43.913 248514 DEBUG nova.network.neutron [-] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.076 248514 INFO nova.compute.manager [-] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Took 1.74 seconds to deallocate network for instance.
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.150 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.150 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:44 compute-0 ceph-mon[76537]: pgmap v1820: 321 pgs: 321 active+clean; 163 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 995 KiB/s wr, 159 op/s
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.570 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614329.5658126, 84309355-d1f4-4a59-9f19-b212232e2428 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.571 248514 INFO nova.compute.manager [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] VM Stopped (Lifecycle Event)
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.641 248514 DEBUG nova.compute.manager [None req-bd3a21cd-ef1b-4b3d-a894-93b6511d6652 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.656 248514 DEBUG oslo_concurrency.processutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.694 248514 DEBUG nova.compute.manager [req-20c1d668-5e42-4d20-96e3-ea51dcd83083 req-0780b85f-c3f6-4f7c-af62-85c444bce926 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Received event network-changed-66179e56-6ff7-4353-872a-ee206fe0b050 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.694 248514 DEBUG nova.compute.manager [req-20c1d668-5e42-4d20-96e3-ea51dcd83083 req-0780b85f-c3f6-4f7c-af62-85c444bce926 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Refreshing instance network info cache due to event network-changed-66179e56-6ff7-4353-872a-ee206fe0b050. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.695 248514 DEBUG oslo_concurrency.lockutils [req-20c1d668-5e42-4d20-96e3-ea51dcd83083 req-0780b85f-c3f6-4f7c-af62-85c444bce926 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.788 248514 DEBUG nova.compute.manager [req-226f94b8-f0c8-45c0-977f-de4ae2722700 req-c99df782-826d-4ab3-8a7b-e36a38cbe050 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Received event network-changed-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.789 248514 DEBUG nova.compute.manager [req-226f94b8-f0c8-45c0-977f-de4ae2722700 req-c99df782-826d-4ab3-8a7b-e36a38cbe050 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Refreshing instance network info cache due to event network-changed-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.789 248514 DEBUG oslo_concurrency.lockutils [req-226f94b8-f0c8-45c0-977f-de4ae2722700 req-c99df782-826d-4ab3-8a7b-e36a38cbe050 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-50753e78-0239-4dc0-aeb1-4ee82b493132" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:25:44 compute-0 nova_compute[248510]: 2025-12-13 08:25:44.893 248514 DEBUG nova.network.neutron [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Updating instance_info_cache with network_info: [{"id": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "address": "fa:16:3e:ab:86:f4", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ef0b77-20", "ovs_interfaceid": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:25:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 134 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 200 op/s
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.213 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Releasing lock "refresh_cache-50753e78-0239-4dc0-aeb1-4ee82b493132" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.214 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Instance network_info: |[{"id": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "address": "fa:16:3e:ab:86:f4", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ef0b77-20", "ovs_interfaceid": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.215 248514 DEBUG oslo_concurrency.lockutils [req-226f94b8-f0c8-45c0-977f-de4ae2722700 req-c99df782-826d-4ab3-8a7b-e36a38cbe050 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-50753e78-0239-4dc0-aeb1-4ee82b493132" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.215 248514 DEBUG nova.network.neutron [req-226f94b8-f0c8-45c0-977f-de4ae2722700 req-c99df782-826d-4ab3-8a7b-e36a38cbe050 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Refreshing network info cache for port 18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.218 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Start _get_guest_xml network_info=[{"id": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "address": "fa:16:3e:ab:86:f4", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ef0b77-20", "ovs_interfaceid": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.224 248514 WARNING nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.229 248514 DEBUG nova.virt.libvirt.host [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.230 248514 DEBUG nova.virt.libvirt.host [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.234 248514 DEBUG nova.virt.libvirt.host [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.234 248514 DEBUG nova.virt.libvirt.host [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.235 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.235 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.236 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.236 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.237 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.237 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.237 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.237 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.238 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.238 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.238 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.239 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.242 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:25:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1722237704' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.282 248514 DEBUG oslo_concurrency.processutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.289 248514 DEBUG nova.compute.provider_tree [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:25:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1722237704' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.441 248514 DEBUG nova.scheduler.client.report [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.555 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Updating instance_info_cache with network_info: [{"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.613 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.660 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.661 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.661 248514 DEBUG oslo_concurrency.lockutils [req-20c1d668-5e42-4d20-96e3-ea51dcd83083 req-0780b85f-c3f6-4f7c-af62-85c444bce926 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.662 248514 DEBUG nova.network.neutron [req-20c1d668-5e42-4d20-96e3-ea51dcd83083 req-0780b85f-c3f6-4f7c-af62-85c444bce926 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Refreshing network info cache for port 66179e56-6ff7-4353-872a-ee206fe0b050 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.663 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.672 248514 INFO nova.scheduler.client.report [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Deleted allocations for instance 2c76f149-c467-4e59-afee-77940e515f8c
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:25:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:25:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4189576968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.837 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.863 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.868 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:45 compute-0 nova_compute[248510]: 2025-12-13 08:25:45.972 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:46 compute-0 ceph-mon[76537]: pgmap v1821: 321 pgs: 321 active+clean; 134 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 200 op/s
Dec 13 08:25:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4189576968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:25:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099581009' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.455 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.456 248514 DEBUG nova.virt.libvirt.vif [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-444361975',display_name='tempest-DeleteServersTestJSON-server-444361975',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-444361975',id=39,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-msln1k5o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:38Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=50753e78-0239-4dc0-aeb1-4ee82b493132,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "address": "fa:16:3e:ab:86:f4", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ef0b77-20", "ovs_interfaceid": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.456 248514 DEBUG nova.network.os_vif_util [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "address": "fa:16:3e:ab:86:f4", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ef0b77-20", "ovs_interfaceid": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.457 248514 DEBUG nova.network.os_vif_util [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:86:f4,bridge_name='br-int',has_traffic_filtering=True,id=18ef0b77-20ed-4ee2-ac70-d33b3d31ea43,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ef0b77-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.458 248514 DEBUG nova.objects.instance [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 50753e78-0239-4dc0-aeb1-4ee82b493132 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.481 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:25:46 compute-0 nova_compute[248510]:   <uuid>50753e78-0239-4dc0-aeb1-4ee82b493132</uuid>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   <name>instance-00000027</name>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <nova:name>tempest-DeleteServersTestJSON-server-444361975</nova:name>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:25:45</nova:creationTime>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:25:46 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:25:46 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:25:46 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:25:46 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:25:46 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:25:46 compute-0 nova_compute[248510]:         <nova:user uuid="b6f7967ce16c45d394188c1302b02907">tempest-DeleteServersTestJSON-991966373-project-member</nova:user>
Dec 13 08:25:46 compute-0 nova_compute[248510]:         <nova:project uuid="6f6beadbb0244529b8dfc1abff8e8e10">tempest-DeleteServersTestJSON-991966373</nova:project>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:25:46 compute-0 nova_compute[248510]:         <nova:port uuid="18ef0b77-20ed-4ee2-ac70-d33b3d31ea43">
Dec 13 08:25:46 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <system>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <entry name="serial">50753e78-0239-4dc0-aeb1-4ee82b493132</entry>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <entry name="uuid">50753e78-0239-4dc0-aeb1-4ee82b493132</entry>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     </system>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   <os>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   </os>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   <features>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   </features>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/50753e78-0239-4dc0-aeb1-4ee82b493132_disk">
Dec 13 08:25:46 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       </source>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:25:46 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/50753e78-0239-4dc0-aeb1-4ee82b493132_disk.config">
Dec 13 08:25:46 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       </source>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:25:46 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:ab:86:f4"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <target dev="tap18ef0b77-20"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/50753e78-0239-4dc0-aeb1-4ee82b493132/console.log" append="off"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <video>
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     </video>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:25:46 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:25:46 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:25:46 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:25:46 compute-0 nova_compute[248510]: </domain>
Dec 13 08:25:46 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.482 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Preparing to wait for external event network-vif-plugged-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.482 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.483 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.483 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.484 248514 DEBUG nova.virt.libvirt.vif [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-444361975',display_name='tempest-DeleteServersTestJSON-server-444361975',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-444361975',id=39,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-msln1k5o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:38Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=50753e78-0239-4dc0-aeb1-4ee82b493132,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "address": "fa:16:3e:ab:86:f4", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ef0b77-20", "ovs_interfaceid": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.484 248514 DEBUG nova.network.os_vif_util [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "address": "fa:16:3e:ab:86:f4", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ef0b77-20", "ovs_interfaceid": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.485 248514 DEBUG nova.network.os_vif_util [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:86:f4,bridge_name='br-int',has_traffic_filtering=True,id=18ef0b77-20ed-4ee2-ac70-d33b3d31ea43,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ef0b77-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.485 248514 DEBUG os_vif [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:86:f4,bridge_name='br-int',has_traffic_filtering=True,id=18ef0b77-20ed-4ee2-ac70-d33b3d31ea43,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ef0b77-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.486 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.486 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.487 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.489 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18ef0b77-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.490 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18ef0b77-20, col_values=(('external_ids', {'iface-id': '18ef0b77-20ed-4ee2-ac70-d33b3d31ea43', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:86:f4', 'vm-uuid': '50753e78-0239-4dc0-aeb1-4ee82b493132'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:46 compute-0 NetworkManager[50376]: <info>  [1765614346.4925] manager: (tap18ef0b77-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.497 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.498 248514 INFO os_vif [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:86:f4,bridge_name='br-int',has_traffic_filtering=True,id=18ef0b77-20ed-4ee2-ac70-d33b3d31ea43,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ef0b77-20')
Dec 13 08:25:46 compute-0 podman[291193]: 2025-12-13 08:25:46.609146 +0000 UTC m=+0.069299500 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202)
Dec 13 08:25:46 compute-0 podman[291194]: 2025-12-13 08:25:46.624733035 +0000 UTC m=+0.080323273 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Dec 13 08:25:46 compute-0 podman[291192]: 2025-12-13 08:25:46.663092382 +0000 UTC m=+0.121121560 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:25:46 compute-0 nova_compute[248510]: 2025-12-13 08:25:46.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:25:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 134 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 200 op/s
Dec 13 08:25:47 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1099581009' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.605 248514 DEBUG nova.compute.manager [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received event network-vif-unplugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.606 248514 DEBUG oslo_concurrency.lockutils [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.606 248514 DEBUG oslo_concurrency.lockutils [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.606 248514 DEBUG oslo_concurrency.lockutils [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.607 248514 DEBUG nova.compute.manager [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] No waiting events found dispatching network-vif-unplugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.607 248514 WARNING nova.compute.manager [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received unexpected event network-vif-unplugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 for instance with vm_state deleted and task_state None.
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.607 248514 DEBUG nova.compute.manager [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received event network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.607 248514 DEBUG oslo_concurrency.lockutils [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.607 248514 DEBUG oslo_concurrency.lockutils [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.608 248514 DEBUG oslo_concurrency.lockutils [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.608 248514 DEBUG nova.compute.manager [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] No waiting events found dispatching network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.608 248514 WARNING nova.compute.manager [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received unexpected event network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 for instance with vm_state deleted and task_state None.
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.608 248514 DEBUG nova.compute.manager [req-71a6ed74-59ee-45a1-aa2c-5be1c0535b13 req-31a7a9f7-1f7d-47da-878c-0076f5038c62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received event network-vif-deleted-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.737 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.738 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.738 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No VIF found with MAC fa:16:3e:ab:86:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.739 248514 INFO nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Using config drive
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.760 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.768 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.769 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.769 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.769 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:25:47 compute-0 nova_compute[248510]: 2025-12-13 08:25:47.770 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.136 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.136 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.155 248514 DEBUG nova.compute.manager [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.222 248514 DEBUG nova.network.neutron [req-226f94b8-f0c8-45c0-977f-de4ae2722700 req-c99df782-826d-4ab3-8a7b-e36a38cbe050 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Updated VIF entry in instance network info cache for port 18ef0b77-20ed-4ee2-ac70-d33b3d31ea43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.223 248514 DEBUG nova.network.neutron [req-226f94b8-f0c8-45c0-977f-de4ae2722700 req-c99df782-826d-4ab3-8a7b-e36a38cbe050 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Updating instance_info_cache with network_info: [{"id": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "address": "fa:16:3e:ab:86:f4", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ef0b77-20", "ovs_interfaceid": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.246 248514 DEBUG oslo_concurrency.lockutils [req-226f94b8-f0c8-45c0-977f-de4ae2722700 req-c99df782-826d-4ab3-8a7b-e36a38cbe050 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-50753e78-0239-4dc0-aeb1-4ee82b493132" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.264 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.265 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.272 248514 DEBUG nova.virt.hardware [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.272 248514 INFO nova.compute.claims [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:25:48 compute-0 ceph-mon[76537]: pgmap v1822: 321 pgs: 321 active+clean; 134 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 200 op/s
Dec 13 08:25:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:25:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1430799626' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.379 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.408 248514 INFO nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Creating config drive at /var/lib/nova/instances/50753e78-0239-4dc0-aeb1-4ee82b493132/disk.config
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.413 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50753e78-0239-4dc0-aeb1-4ee82b493132/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5tvd0hca execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.439 248514 DEBUG nova.network.neutron [req-20c1d668-5e42-4d20-96e3-ea51dcd83083 req-0780b85f-c3f6-4f7c-af62-85c444bce926 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Updated VIF entry in instance network info cache for port 66179e56-6ff7-4353-872a-ee206fe0b050. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.440 248514 DEBUG nova.network.neutron [req-20c1d668-5e42-4d20-96e3-ea51dcd83083 req-0780b85f-c3f6-4f7c-af62-85c444bce926 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Updating instance_info_cache with network_info: [{"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.453 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.482 248514 DEBUG oslo_concurrency.lockutils [req-20c1d668-5e42-4d20-96e3-ea51dcd83083 req-0780b85f-c3f6-4f7c-af62-85c444bce926 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.487 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.487 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.489 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.490 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.544 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50753e78-0239-4dc0-aeb1-4ee82b493132/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5tvd0hca" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.567 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.571 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50753e78-0239-4dc0-aeb1-4ee82b493132/disk.config 50753e78-0239-4dc0-aeb1-4ee82b493132_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.711 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.713 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4074MB free_disk=59.946276866830885GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.713 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.726 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50753e78-0239-4dc0-aeb1-4ee82b493132/disk.config 50753e78-0239-4dc0-aeb1-4ee82b493132_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.727 248514 INFO nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Deleting local config drive /var/lib/nova/instances/50753e78-0239-4dc0-aeb1-4ee82b493132/disk.config because it was imported into RBD.
Dec 13 08:25:48 compute-0 kernel: tap18ef0b77-20: entered promiscuous mode
Dec 13 08:25:48 compute-0 NetworkManager[50376]: <info>  [1765614348.7826] manager: (tap18ef0b77-20): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:48 compute-0 ovn_controller[148476]: 2025-12-13T08:25:48Z|00346|binding|INFO|Claiming lport 18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 for this chassis.
Dec 13 08:25:48 compute-0 ovn_controller[148476]: 2025-12-13T08:25:48Z|00347|binding|INFO|18ef0b77-20ed-4ee2-ac70-d33b3d31ea43: Claiming fa:16:3e:ab:86:f4 10.100.0.6
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.793 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:86:f4 10.100.0.6'], port_security=['fa:16:3e:ab:86:f4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '50753e78-0239-4dc0-aeb1-4ee82b493132', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=18ef0b77-20ed-4ee2-ac70-d33b3d31ea43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.794 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 in datapath 85372fca-ab50-48b6-8c21-507f630c205a bound to our chassis
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.796 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:25:48 compute-0 ovn_controller[148476]: 2025-12-13T08:25:48Z|00348|binding|INFO|Setting lport 18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 ovn-installed in OVS
Dec 13 08:25:48 compute-0 ovn_controller[148476]: 2025-12-13T08:25:48Z|00349|binding|INFO|Setting lport 18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 up in Southbound
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:48 compute-0 nova_compute[248510]: 2025-12-13 08:25:48.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.823 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d901d2-aeee-4105-b1c9-5a51623c0533]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.826 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85372fca-a1 in ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.829 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85372fca-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.829 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[acba171f-fad5-42bb-ae67-25c1fb855b0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:48 compute-0 systemd-machined[210538]: New machine qemu-45-instance-00000027.
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.836 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[99d76a57-1460-4966-8eef-ce398eee6a09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:48 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000027.
Dec 13 08:25:48 compute-0 systemd-udevd[291370]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.855 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9bf761-d8ea-4b75-b0a5-51f514075292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:48 compute-0 NetworkManager[50376]: <info>  [1765614348.8722] device (tap18ef0b77-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:25:48 compute-0 NetworkManager[50376]: <info>  [1765614348.8748] device (tap18ef0b77-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.880 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e52cc243-cfff-4152-a839-b0c2d1d0664e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.911 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6232d47c-8b37-4fa2-8d98-cbc74d83f938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f00a68c4-00b0-4b76-8f4b-42b3db144cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:48 compute-0 NetworkManager[50376]: <info>  [1765614348.9201] manager: (tap85372fca-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/160)
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.954 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6556cf-67e8-46de-80d2-be2973dce46a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.959 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5431f5-f7be-4afd-8224-2e78753457c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:48 compute-0 NetworkManager[50376]: <info>  [1765614348.9849] device (tap85372fca-a0): carrier: link connected
Dec 13 08:25:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:48.992 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a5dda52a-ab34-423f-b663-acb6a6622bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.020 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f2c689-a1b7-4bb2-970a-5ddc12c23742]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692619, 'reachable_time': 39745, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291401, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:49 compute-0 ovn_controller[148476]: 2025-12-13T08:25:49Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c2:ee:5f 10.100.0.13
Dec 13 08:25:49 compute-0 ovn_controller[148476]: 2025-12-13T08:25:49Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c2:ee:5f 10.100.0.13
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.057 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f77382cb-aa9c-4a70-ae23-d2ebf238bd21]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:30d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692619, 'tstamp': 692619}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291402, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.085 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf6c4ad-77de-4ab9-86b3-36169bcf1201]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692619, 'reachable_time': 39745, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291403, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:25:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3667354196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.117 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.122 248514 DEBUG nova.compute.provider_tree [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.134 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[24821365-d12f-4549-b57b-e08edbb73ce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 160 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 395 KiB/s rd, 4.6 MiB/s wr, 154 op/s
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.146 248514 DEBUG nova.scheduler.client.report [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.182 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.183 248514 DEBUG nova.compute.manager [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.187 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.225 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b07228-42c9-41bf-bc28-4fa562b3404b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.228 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.228 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.229 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85372fca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:49 compute-0 kernel: tap85372fca-a0: entered promiscuous mode
Dec 13 08:25:49 compute-0 NetworkManager[50376]: <info>  [1765614349.2316] manager: (tap85372fca-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.230 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.235 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85372fca-a0, col_values=(('external_ids', {'iface-id': '2c0f4981-0ad0-478e-b1ad-551d231022ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:49 compute-0 ovn_controller[148476]: 2025-12-13T08:25:49Z|00350|binding|INFO|Releasing lport 2c0f4981-0ad0-478e-b1ad-551d231022ad from this chassis (sb_readonly=0)
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.238 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.239 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c81f521c-6d05-4215-a9bd-608bf9c6d8a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.240 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:25:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:49.240 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'env', 'PROCESS_TAG=haproxy-85372fca-ab50-48b6-8c21-507f630c205a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85372fca-ab50-48b6-8c21-507f630c205a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.242 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.255 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.258 248514 DEBUG nova.compute.manager [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.258 248514 DEBUG nova.network.neutron [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.279 248514 INFO nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.295 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 29375ff9-300a-43de-a53d-942e7afbb439 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.296 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 50753e78-0239-4dc0-aeb1-4ee82b493132 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.296 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance ad40734f-2fce-406e-ba77-dd8eeb4743bc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.296 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.296 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.308 248514 DEBUG nova.compute.manager [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:25:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1430799626' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3667354196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.405 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.439 248514 DEBUG nova.compute.manager [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.441 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.442 248514 INFO nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Creating image(s)
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.465 248514 DEBUG nova.storage.rbd_utils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.490 248514 DEBUG nova.storage.rbd_utils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.525 248514 DEBUG nova.storage.rbd_utils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.531 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.568 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614349.4452834, 50753e78-0239-4dc0-aeb1-4ee82b493132 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.570 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] VM Started (Lifecycle Event)
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.627 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.629 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.629 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.630 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.652 248514 DEBUG nova.storage.rbd_utils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.656 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:49 compute-0 podman[291554]: 2025-12-13 08:25:49.656743875 +0000 UTC m=+0.061182020 container create 403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:25:49 compute-0 systemd[1]: Started libpod-conmon-403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6.scope.
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.701 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.707 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614349.445508, 50753e78-0239-4dc0-aeb1-4ee82b493132 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.707 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] VM Paused (Lifecycle Event)
Dec 13 08:25:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975b6977a3825c58ea7256bbfa98239ac319ed501e4e9a06196f995b4b3af87b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:49 compute-0 podman[291554]: 2025-12-13 08:25:49.628734754 +0000 UTC m=+0.033172929 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.720 248514 DEBUG nova.policy [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b988c7ac9354c59aac9a9f41f83c20f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52e1055963294dbdb16cd95b466cd4d9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:25:49 compute-0 podman[291554]: 2025-12-13 08:25:49.727917621 +0000 UTC m=+0.132355776 container init 403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:25:49 compute-0 podman[291554]: 2025-12-13 08:25:49.735248732 +0000 UTC m=+0.139686877 container start 403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 08:25:49 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[291590]: [NOTICE]   (291595) : New worker (291611) forked
Dec 13 08:25:49 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[291590]: [NOTICE]   (291595) : Loading success.
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.920 248514 DEBUG nova.compute.manager [req-59a408d6-f472-48e0-93f5-98b0fc60f1b0 req-b41f61bc-89db-4d83-a52e-38f7d91052ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Received event network-vif-plugged-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.922 248514 DEBUG oslo_concurrency.lockutils [req-59a408d6-f472-48e0-93f5-98b0fc60f1b0 req-b41f61bc-89db-4d83-a52e-38f7d91052ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.923 248514 DEBUG oslo_concurrency.lockutils [req-59a408d6-f472-48e0-93f5-98b0fc60f1b0 req-b41f61bc-89db-4d83-a52e-38f7d91052ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.923 248514 DEBUG oslo_concurrency.lockutils [req-59a408d6-f472-48e0-93f5-98b0fc60f1b0 req-b41f61bc-89db-4d83-a52e-38f7d91052ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.924 248514 DEBUG nova.compute.manager [req-59a408d6-f472-48e0-93f5-98b0fc60f1b0 req-b41f61bc-89db-4d83-a52e-38f7d91052ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Processing event network-vif-plugged-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.925 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.933 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.936 248514 INFO nova.virt.libvirt.driver [-] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Instance spawned successfully.
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.936 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.949 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.960 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614349.929329, 50753e78-0239-4dc0-aeb1-4ee82b493132 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.961 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] VM Resumed (Lifecycle Event)
Dec 13 08:25:49 compute-0 nova_compute[248510]: 2025-12-13 08:25:49.964 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:25:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/752068994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.037 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.040 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.040 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.041 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.041 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.042 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.042 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.053 248514 DEBUG nova.storage.rbd_utils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] resizing rbd image ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:25:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:25:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Dec 13 08:25:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Dec 13 08:25:50 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.072335) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614350072370, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1001, "num_deletes": 254, "total_data_size": 1412495, "memory_usage": 1443448, "flush_reason": "Manual Compaction"}
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614350081907, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 908151, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34040, "largest_seqno": 35040, "table_properties": {"data_size": 904049, "index_size": 1690, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11022, "raw_average_key_size": 21, "raw_value_size": 895151, "raw_average_value_size": 1714, "num_data_blocks": 76, "num_entries": 522, "num_filter_entries": 522, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614270, "oldest_key_time": 1765614270, "file_creation_time": 1765614350, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 9629 microseconds, and 3382 cpu microseconds.
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.081959) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 908151 bytes OK
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.081984) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.083780) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.083795) EVENT_LOG_v1 {"time_micros": 1765614350083791, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.083821) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1407694, prev total WAL file size 1407694, number of live WAL files 2.
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.084538) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323537' seq:72057594037927935, type:22 .. '6D6772737461740031353130' seq:0, type:0; will stop at (end)
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(886KB)], [74(10MB)]
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614350084616, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12094086, "oldest_snapshot_seqno": -1}
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.112 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.116 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.121 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.137 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.148 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5905 keys, 9075429 bytes, temperature: kUnknown
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614350160843, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9075429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9036269, "index_size": 23306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 149322, "raw_average_key_size": 25, "raw_value_size": 8930383, "raw_average_value_size": 1512, "num_data_blocks": 955, "num_entries": 5905, "num_filter_entries": 5905, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614350, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.161279) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9075429 bytes
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.163655) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.5 rd, 118.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(23.3) write-amplify(10.0) OK, records in: 6395, records dropped: 490 output_compression: NoCompression
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.163673) EVENT_LOG_v1 {"time_micros": 1765614350163665, "job": 42, "event": "compaction_finished", "compaction_time_micros": 76297, "compaction_time_cpu_micros": 23958, "output_level": 6, "num_output_files": 1, "total_output_size": 9075429, "num_input_records": 6395, "num_output_records": 5905, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614350164148, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614350165801, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.084470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.165892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.165907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.165909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.165911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:25:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:25:50.165912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.173 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.174 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.219 248514 DEBUG nova.objects.instance [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'migration_context' on Instance uuid ad40734f-2fce-406e-ba77-dd8eeb4743bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.335 248514 INFO nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Took 11.39 seconds to spawn the instance on the hypervisor.
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.336 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.341 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.341 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Ensure instance console log exists: /var/lib/nova/instances/ad40734f-2fce-406e-ba77-dd8eeb4743bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.341 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.342 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.342 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:50 compute-0 ceph-mon[76537]: pgmap v1823: 321 pgs: 321 active+clean; 160 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 395 KiB/s rd, 4.6 MiB/s wr, 154 op/s
Dec 13 08:25:50 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/752068994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:50 compute-0 ceph-mon[76537]: osdmap e194: 3 total, 3 up, 3 in
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.408 248514 INFO nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Took 13.43 seconds to build instance.
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.427 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:50 compute-0 nova_compute[248510]: 2025-12-13 08:25:50.897 248514 DEBUG nova.network.neutron [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Successfully created port: 5b45c191-4599-4de6-9ca8-2a6c38ccf703 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:25:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 160 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 395 KiB/s rd, 4.6 MiB/s wr, 154 op/s
Dec 13 08:25:51 compute-0 nova_compute[248510]: 2025-12-13 08:25:51.174 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:25:51 compute-0 nova_compute[248510]: 2025-12-13 08:25:51.174 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:25:51 compute-0 nova_compute[248510]: 2025-12-13 08:25:51.174 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:25:51 compute-0 nova_compute[248510]: 2025-12-13 08:25:51.493 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:51 compute-0 nova_compute[248510]: 2025-12-13 08:25:51.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.077 248514 DEBUG nova.compute.manager [req-00c4d61d-bda1-45a7-8b48-cd56661d1bcc req-39852a4b-8667-4db1-a217-3d42c05b919e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Received event network-vif-plugged-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.077 248514 DEBUG oslo_concurrency.lockutils [req-00c4d61d-bda1-45a7-8b48-cd56661d1bcc req-39852a4b-8667-4db1-a217-3d42c05b919e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.078 248514 DEBUG oslo_concurrency.lockutils [req-00c4d61d-bda1-45a7-8b48-cd56661d1bcc req-39852a4b-8667-4db1-a217-3d42c05b919e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.078 248514 DEBUG oslo_concurrency.lockutils [req-00c4d61d-bda1-45a7-8b48-cd56661d1bcc req-39852a4b-8667-4db1-a217-3d42c05b919e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.079 248514 DEBUG nova.compute.manager [req-00c4d61d-bda1-45a7-8b48-cd56661d1bcc req-39852a4b-8667-4db1-a217-3d42c05b919e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] No waiting events found dispatching network-vif-plugged-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.079 248514 WARNING nova.compute.manager [req-00c4d61d-bda1-45a7-8b48-cd56661d1bcc req-39852a4b-8667-4db1-a217-3d42c05b919e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Received unexpected event network-vif-plugged-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 for instance with vm_state active and task_state None.
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.161 248514 DEBUG oslo_concurrency.lockutils [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "50753e78-0239-4dc0-aeb1-4ee82b493132" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.162 248514 DEBUG oslo_concurrency.lockutils [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.162 248514 DEBUG oslo_concurrency.lockutils [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.162 248514 DEBUG oslo_concurrency.lockutils [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.163 248514 DEBUG oslo_concurrency.lockutils [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.165 248514 INFO nova.compute.manager [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Terminating instance
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.166 248514 DEBUG nova.compute.manager [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:25:52 compute-0 kernel: tap18ef0b77-20 (unregistering): left promiscuous mode
Dec 13 08:25:52 compute-0 NetworkManager[50376]: <info>  [1765614352.2163] device (tap18ef0b77-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:25:52 compute-0 ovn_controller[148476]: 2025-12-13T08:25:52Z|00351|binding|INFO|Releasing lport 18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 from this chassis (sb_readonly=0)
Dec 13 08:25:52 compute-0 ovn_controller[148476]: 2025-12-13T08:25:52Z|00352|binding|INFO|Setting lport 18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 down in Southbound
Dec 13 08:25:52 compute-0 ovn_controller[148476]: 2025-12-13T08:25:52Z|00353|binding|INFO|Removing iface tap18ef0b77-20 ovn-installed in OVS
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.233 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:52 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000027.scope: Deactivated successfully.
Dec 13 08:25:52 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000027.scope: Consumed 2.803s CPU time.
Dec 13 08:25:52 compute-0 systemd-machined[210538]: Machine qemu-45-instance-00000027 terminated.
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.348 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:86:f4 10.100.0.6'], port_security=['fa:16:3e:ab:86:f4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '50753e78-0239-4dc0-aeb1-4ee82b493132', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=18ef0b77-20ed-4ee2-ac70-d33b3d31ea43) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.349 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.351 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.352 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0fcb979-7671-4c25-ac43-52608c39f7dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.352 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace which is not needed anymore
Dec 13 08:25:52 compute-0 ceph-mon[76537]: pgmap v1825: 321 pgs: 321 active+clean; 160 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 395 KiB/s rd, 4.6 MiB/s wr, 154 op/s
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.416 248514 INFO nova.virt.libvirt.driver [-] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Instance destroyed successfully.
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.417 248514 DEBUG nova.objects.instance [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'resources' on Instance uuid 50753e78-0239-4dc0-aeb1-4ee82b493132 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.471 248514 DEBUG nova.virt.libvirt.vif [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-444361975',display_name='tempest-DeleteServersTestJSON-server-444361975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-444361975',id=39,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:25:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-msln1k5o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:25:50Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=50753e78-0239-4dc0-aeb1-4ee82b493132,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "address": "fa:16:3e:ab:86:f4", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ef0b77-20", "ovs_interfaceid": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.472 248514 DEBUG nova.network.os_vif_util [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "address": "fa:16:3e:ab:86:f4", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ef0b77-20", "ovs_interfaceid": "18ef0b77-20ed-4ee2-ac70-d33b3d31ea43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.473 248514 DEBUG nova.network.os_vif_util [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:86:f4,bridge_name='br-int',has_traffic_filtering=True,id=18ef0b77-20ed-4ee2-ac70-d33b3d31ea43,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ef0b77-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.473 248514 DEBUG os_vif [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:86:f4,bridge_name='br-int',has_traffic_filtering=True,id=18ef0b77-20ed-4ee2-ac70-d33b3d31ea43,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ef0b77-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.477 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ef0b77-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.479 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.482 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.485 248514 INFO os_vif [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:86:f4,bridge_name='br-int',has_traffic_filtering=True,id=18ef0b77-20ed-4ee2-ac70-d33b3d31ea43,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ef0b77-20')
Dec 13 08:25:52 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[291590]: [NOTICE]   (291595) : haproxy version is 2.8.14-c23fe91
Dec 13 08:25:52 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[291590]: [NOTICE]   (291595) : path to executable is /usr/sbin/haproxy
Dec 13 08:25:52 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[291590]: [WARNING]  (291595) : Exiting Master process...
Dec 13 08:25:52 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[291590]: [WARNING]  (291595) : Exiting Master process...
Dec 13 08:25:52 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[291590]: [ALERT]    (291595) : Current worker (291611) exited with code 143 (Terminated)
Dec 13 08:25:52 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[291590]: [WARNING]  (291595) : All workers exited. Exiting... (0)
Dec 13 08:25:52 compute-0 systemd[1]: libpod-403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6.scope: Deactivated successfully.
Dec 13 08:25:52 compute-0 podman[291730]: 2025-12-13 08:25:52.517435089 +0000 UTC m=+0.050461517 container died 403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.529 248514 DEBUG nova.network.neutron [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Successfully updated port: 5b45c191-4599-4de6-9ca8-2a6c38ccf703 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-975b6977a3825c58ea7256bbfa98239ac319ed501e4e9a06196f995b4b3af87b-merged.mount: Deactivated successfully.
Dec 13 08:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6-userdata-shm.mount: Deactivated successfully.
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.552 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "refresh_cache-ad40734f-2fce-406e-ba77-dd8eeb4743bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.553 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquired lock "refresh_cache-ad40734f-2fce-406e-ba77-dd8eeb4743bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.553 248514 DEBUG nova.network.neutron [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:25:52 compute-0 podman[291730]: 2025-12-13 08:25:52.558665566 +0000 UTC m=+0.091691994 container cleanup 403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:25:52 compute-0 systemd[1]: libpod-conmon-403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6.scope: Deactivated successfully.
Dec 13 08:25:52 compute-0 podman[291779]: 2025-12-13 08:25:52.642803202 +0000 UTC m=+0.059925180 container remove 403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.651 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aa573334-171c-4341-9cc5-3bd6caf3045f]: (4, ('Sat Dec 13 08:25:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6)\n403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6\nSat Dec 13 08:25:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6)\n403883aef82ceee5a52774fe10758d474ca83150d234e52dd447054d660eb8a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.653 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd1e147-1b54-4fc2-91ad-2bac2be78c8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.654 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:52 compute-0 kernel: tap85372fca-a0: left promiscuous mode
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.656 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.672 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.675 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[181dfd82-2de0-4b25-a903-6c1d5a2898ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.695 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[973201b2-8535-4dc0-a556-9bea1db8edce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.698 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e16d2e41-0abb-4bff-b8c3-0d2d1fc51edc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.718 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0e0fd5-51bc-445f-9f09-1503184b9ffb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692612, 'reachable_time': 36142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291795, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d85372fca\x2dab50\x2d48b6\x2d8c21\x2d507f630c205a.mount: Deactivated successfully.
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.723 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:25:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:52.724 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[9f62ad07-4049-4679-8650-7f8aa68801e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.775 248514 DEBUG nova.network.neutron [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.795 248514 INFO nova.virt.libvirt.driver [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Deleting instance files /var/lib/nova/instances/50753e78-0239-4dc0-aeb1-4ee82b493132_del
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.796 248514 INFO nova.virt.libvirt.driver [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Deletion of /var/lib/nova/instances/50753e78-0239-4dc0-aeb1-4ee82b493132_del complete
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.858 248514 DEBUG nova.compute.manager [req-d624947a-607c-4e22-bc03-1defede33c8f req-40401b14-ca2c-4e0f-9377-a66a8b0e3c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Received event network-changed-5b45c191-4599-4de6-9ca8-2a6c38ccf703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.858 248514 DEBUG nova.compute.manager [req-d624947a-607c-4e22-bc03-1defede33c8f req-40401b14-ca2c-4e0f-9377-a66a8b0e3c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Refreshing instance network info cache due to event network-changed-5b45c191-4599-4de6-9ca8-2a6c38ccf703. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.858 248514 DEBUG oslo_concurrency.lockutils [req-d624947a-607c-4e22-bc03-1defede33c8f req-40401b14-ca2c-4e0f-9377-a66a8b0e3c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ad40734f-2fce-406e-ba77-dd8eeb4743bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.868 248514 INFO nova.compute.manager [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Took 0.70 seconds to destroy the instance on the hypervisor.
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.868 248514 DEBUG oslo.service.loopingcall [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.869 248514 DEBUG nova.compute.manager [-] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:25:52 compute-0 nova_compute[248510]: 2025-12-13 08:25:52.869 248514 DEBUG nova.network.neutron [-] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:25:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 157 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.4 MiB/s wr, 204 op/s
Dec 13 08:25:53 compute-0 nova_compute[248510]: 2025-12-13 08:25:53.489 248514 DEBUG nova.network.neutron [-] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:53 compute-0 nova_compute[248510]: 2025-12-13 08:25:53.519 248514 INFO nova.compute.manager [-] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Took 0.65 seconds to deallocate network for instance.
Dec 13 08:25:53 compute-0 nova_compute[248510]: 2025-12-13 08:25:53.593 248514 DEBUG oslo_concurrency.lockutils [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:53 compute-0 nova_compute[248510]: 2025-12-13 08:25:53.594 248514 DEBUG oslo_concurrency.lockutils [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:53 compute-0 nova_compute[248510]: 2025-12-13 08:25:53.670 248514 DEBUG oslo_concurrency.processutils [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:53 compute-0 nova_compute[248510]: 2025-12-13 08:25:53.733 248514 DEBUG nova.network.neutron [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Updating instance_info_cache with network_info: [{"id": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "address": "fa:16:3e:b0:4d:11", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b45c191-45", "ovs_interfaceid": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:53 compute-0 nova_compute[248510]: 2025-12-13 08:25:53.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:25:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:25:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1502432347' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.245 248514 DEBUG oslo_concurrency.processutils [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.253 248514 DEBUG nova.compute.provider_tree [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.257 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Releasing lock "refresh_cache-ad40734f-2fce-406e-ba77-dd8eeb4743bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.257 248514 DEBUG nova.compute.manager [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Instance network_info: |[{"id": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "address": "fa:16:3e:b0:4d:11", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b45c191-45", "ovs_interfaceid": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.258 248514 DEBUG oslo_concurrency.lockutils [req-d624947a-607c-4e22-bc03-1defede33c8f req-40401b14-ca2c-4e0f-9377-a66a8b0e3c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ad40734f-2fce-406e-ba77-dd8eeb4743bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.259 248514 DEBUG nova.network.neutron [req-d624947a-607c-4e22-bc03-1defede33c8f req-40401b14-ca2c-4e0f-9377-a66a8b0e3c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Refreshing network info cache for port 5b45c191-4599-4de6-9ca8-2a6c38ccf703 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.262 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Start _get_guest_xml network_info=[{"id": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "address": "fa:16:3e:b0:4d:11", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b45c191-45", "ovs_interfaceid": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.268 248514 WARNING nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.276 248514 DEBUG nova.virt.libvirt.host [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.277 248514 DEBUG nova.virt.libvirt.host [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.282 248514 DEBUG nova.compute.manager [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Received event network-vif-unplugged-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.282 248514 DEBUG oslo_concurrency.lockutils [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.283 248514 DEBUG oslo_concurrency.lockutils [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.284 248514 DEBUG oslo_concurrency.lockutils [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.284 248514 DEBUG nova.compute.manager [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] No waiting events found dispatching network-vif-unplugged-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.284 248514 WARNING nova.compute.manager [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Received unexpected event network-vif-unplugged-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 for instance with vm_state deleted and task_state None.
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.284 248514 DEBUG nova.compute.manager [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Received event network-vif-plugged-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.285 248514 DEBUG oslo_concurrency.lockutils [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.285 248514 DEBUG oslo_concurrency.lockutils [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.285 248514 DEBUG oslo_concurrency.lockutils [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.285 248514 DEBUG nova.compute.manager [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] No waiting events found dispatching network-vif-plugged-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.285 248514 WARNING nova.compute.manager [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Received unexpected event network-vif-plugged-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 for instance with vm_state deleted and task_state None.
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.285 248514 DEBUG nova.compute.manager [req-fe4552a8-c76b-4544-a32e-0205b046ef72 req-ee7a1332-793b-4db5-ba3e-a198faf2e968 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Received event network-vif-deleted-18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.286 248514 DEBUG nova.virt.libvirt.host [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.287 248514 DEBUG nova.virt.libvirt.host [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.287 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.288 248514 DEBUG nova.virt.hardware [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.288 248514 DEBUG nova.virt.hardware [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.288 248514 DEBUG nova.virt.hardware [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.288 248514 DEBUG nova.virt.hardware [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.288 248514 DEBUG nova.virt.hardware [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.289 248514 DEBUG nova.virt.hardware [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.289 248514 DEBUG nova.virt.hardware [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.289 248514 DEBUG nova.virt.hardware [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.289 248514 DEBUG nova.virt.hardware [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.289 248514 DEBUG nova.virt.hardware [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.290 248514 DEBUG nova.virt.hardware [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.293 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.382 248514 DEBUG nova.scheduler.client.report [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:25:54 compute-0 ceph-mon[76537]: pgmap v1826: 321 pgs: 321 active+clean; 157 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.4 MiB/s wr, 204 op/s
Dec 13 08:25:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1502432347' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.570 248514 DEBUG oslo_concurrency.lockutils [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:25:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2065810930' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.899 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.924 248514 DEBUG nova.storage.rbd_utils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.929 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:54 compute-0 nova_compute[248510]: 2025-12-13 08:25:54.959 248514 INFO nova.scheduler.client.report [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Deleted allocations for instance 50753e78-0239-4dc0-aeb1-4ee82b493132
Dec 13 08:25:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:25:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 167 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 230 op/s
Dec 13 08:25:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2065810930' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:55.407 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:55.408 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:55.408 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:25:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203637889' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.527 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.529 248514 DEBUG nova.virt.libvirt.vif [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-797160490',display_name='tempest-ImagesTestJSON-server-797160490',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-797160490',id=40,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-7bbfmcda',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:49Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=ad40734f-2fce-406e-ba77-dd8eeb4743bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "address": "fa:16:3e:b0:4d:11", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b45c191-45", "ovs_interfaceid": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.530 248514 DEBUG nova.network.os_vif_util [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "address": "fa:16:3e:b0:4d:11", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b45c191-45", "ovs_interfaceid": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.531 248514 DEBUG nova.network.os_vif_util [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:4d:11,bridge_name='br-int',has_traffic_filtering=True,id=5b45c191-4599-4de6-9ca8-2a6c38ccf703,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b45c191-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.533 248514 DEBUG nova.objects.instance [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid ad40734f-2fce-406e-ba77-dd8eeb4743bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.902 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:25:55 compute-0 nova_compute[248510]:   <uuid>ad40734f-2fce-406e-ba77-dd8eeb4743bc</uuid>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   <name>instance-00000028</name>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <nova:name>tempest-ImagesTestJSON-server-797160490</nova:name>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:25:54</nova:creationTime>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:25:55 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:25:55 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:25:55 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:25:55 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:25:55 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:25:55 compute-0 nova_compute[248510]:         <nova:user uuid="3b988c7ac9354c59aac9a9f41f83c20f">tempest-ImagesTestJSON-1234382421-project-member</nova:user>
Dec 13 08:25:55 compute-0 nova_compute[248510]:         <nova:project uuid="52e1055963294dbdb16cd95b466cd4d9">tempest-ImagesTestJSON-1234382421</nova:project>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:25:55 compute-0 nova_compute[248510]:         <nova:port uuid="5b45c191-4599-4de6-9ca8-2a6c38ccf703">
Dec 13 08:25:55 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <system>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <entry name="serial">ad40734f-2fce-406e-ba77-dd8eeb4743bc</entry>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <entry name="uuid">ad40734f-2fce-406e-ba77-dd8eeb4743bc</entry>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     </system>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   <os>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   </os>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   <features>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   </features>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk">
Dec 13 08:25:55 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       </source>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:25:55 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk.config">
Dec 13 08:25:55 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       </source>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:25:55 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:b0:4d:11"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <target dev="tap5b45c191-45"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/ad40734f-2fce-406e-ba77-dd8eeb4743bc/console.log" append="off"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <video>
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     </video>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:25:55 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:25:55 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:25:55 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:25:55 compute-0 nova_compute[248510]: </domain>
Dec 13 08:25:55 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.903 248514 DEBUG nova.compute.manager [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Preparing to wait for external event network-vif-plugged-5b45c191-4599-4de6-9ca8-2a6c38ccf703 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.903 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.903 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.904 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.904 248514 DEBUG nova.virt.libvirt.vif [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-797160490',display_name='tempest-ImagesTestJSON-server-797160490',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-797160490',id=40,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-7bbfmcda',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:49Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=ad40734f-2fce-406e-ba77-dd8eeb4743bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "address": "fa:16:3e:b0:4d:11", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b45c191-45", "ovs_interfaceid": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.905 248514 DEBUG nova.network.os_vif_util [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "address": "fa:16:3e:b0:4d:11", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b45c191-45", "ovs_interfaceid": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.905 248514 DEBUG nova.network.os_vif_util [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:4d:11,bridge_name='br-int',has_traffic_filtering=True,id=5b45c191-4599-4de6-9ca8-2a6c38ccf703,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b45c191-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.906 248514 DEBUG os_vif [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:4d:11,bridge_name='br-int',has_traffic_filtering=True,id=5b45c191-4599-4de6-9ca8-2a6c38ccf703,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b45c191-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.908 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.909 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.913 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.913 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b45c191-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.913 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b45c191-45, col_values=(('external_ids', {'iface-id': '5b45c191-4599-4de6-9ca8-2a6c38ccf703', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:4d:11', 'vm-uuid': 'ad40734f-2fce-406e-ba77-dd8eeb4743bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.915 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:55 compute-0 NetworkManager[50376]: <info>  [1765614355.9163] manager: (tap5b45c191-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.920 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:55 compute-0 nova_compute[248510]: 2025-12-13 08:25:55.921 248514 INFO os_vif [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:4d:11,bridge_name='br-int',has_traffic_filtering=True,id=5b45c191-4599-4de6-9ca8-2a6c38ccf703,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b45c191-45')
Dec 13 08:25:56 compute-0 nova_compute[248510]: 2025-12-13 08:25:56.007 248514 DEBUG oslo_concurrency.lockutils [None req-6433b372-abd6-450f-bf88-3739f33353db b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:56 compute-0 nova_compute[248510]: 2025-12-13 08:25:56.011 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:25:56 compute-0 nova_compute[248510]: 2025-12-13 08:25:56.012 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:25:56 compute-0 nova_compute[248510]: 2025-12-13 08:25:56.012 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No VIF found with MAC fa:16:3e:b0:4d:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:25:56 compute-0 nova_compute[248510]: 2025-12-13 08:25:56.012 248514 INFO nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Using config drive
Dec 13 08:25:56 compute-0 nova_compute[248510]: 2025-12-13 08:25:56.037 248514 DEBUG nova.storage.rbd_utils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:56 compute-0 nova_compute[248510]: 2025-12-13 08:25:56.399 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614341.3922899, 2c76f149-c467-4e59-afee-77940e515f8c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:56 compute-0 nova_compute[248510]: 2025-12-13 08:25:56.400 248514 INFO nova.compute.manager [-] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] VM Stopped (Lifecycle Event)
Dec 13 08:25:56 compute-0 ceph-mon[76537]: pgmap v1827: 321 pgs: 321 active+clean; 167 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 230 op/s
Dec 13 08:25:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2203637889' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:25:56 compute-0 nova_compute[248510]: 2025-12-13 08:25:56.459 248514 DEBUG nova.compute.manager [None req-544ef506-11fb-47ec-a708-ca687ae693bb - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:56 compute-0 nova_compute[248510]: 2025-12-13 08:25:56.854 248514 INFO nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Creating config drive at /var/lib/nova/instances/ad40734f-2fce-406e-ba77-dd8eeb4743bc/disk.config
Dec 13 08:25:56 compute-0 nova_compute[248510]: 2025-12-13 08:25:56.859 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ad40734f-2fce-406e-ba77-dd8eeb4743bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2e_b036v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:56 compute-0 nova_compute[248510]: 2025-12-13 08:25:56.994 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ad40734f-2fce-406e-ba77-dd8eeb4743bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2e_b036v" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.020 248514 DEBUG nova.storage.rbd_utils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.026 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ad40734f-2fce-406e-ba77-dd8eeb4743bc/disk.config ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:25:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 167 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 230 op/s
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.163 248514 DEBUG oslo_concurrency.processutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ad40734f-2fce-406e-ba77-dd8eeb4743bc/disk.config ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.164 248514 INFO nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Deleting local config drive /var/lib/nova/instances/ad40734f-2fce-406e-ba77-dd8eeb4743bc/disk.config because it was imported into RBD.
Dec 13 08:25:57 compute-0 kernel: tap5b45c191-45: entered promiscuous mode
Dec 13 08:25:57 compute-0 NetworkManager[50376]: <info>  [1765614357.2161] manager: (tap5b45c191-45): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Dec 13 08:25:57 compute-0 ovn_controller[148476]: 2025-12-13T08:25:57Z|00354|binding|INFO|Claiming lport 5b45c191-4599-4de6-9ca8-2a6c38ccf703 for this chassis.
Dec 13 08:25:57 compute-0 ovn_controller[148476]: 2025-12-13T08:25:57Z|00355|binding|INFO|5b45c191-4599-4de6-9ca8-2a6c38ccf703: Claiming fa:16:3e:b0:4d:11 10.100.0.5
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:57 compute-0 ovn_controller[148476]: 2025-12-13T08:25:57Z|00356|binding|INFO|Setting lport 5b45c191-4599-4de6-9ca8-2a6c38ccf703 ovn-installed in OVS
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:57 compute-0 systemd-udevd[291954]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:25:57 compute-0 systemd-machined[210538]: New machine qemu-46-instance-00000028.
Dec 13 08:25:57 compute-0 NetworkManager[50376]: <info>  [1765614357.2563] device (tap5b45c191-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:25:57 compute-0 NetworkManager[50376]: <info>  [1765614357.2574] device (tap5b45c191-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:25:57 compute-0 ovn_controller[148476]: 2025-12-13T08:25:57Z|00357|binding|INFO|Setting lport 5b45c191-4599-4de6-9ca8-2a6c38ccf703 up in Southbound
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.268 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:4d:11 10.100.0.5'], port_security=['fa:16:3e:b0:4d:11 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ad40734f-2fce-406e-ba77-dd8eeb4743bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5b45c191-4599-4de6-9ca8-2a6c38ccf703) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.270 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5b45c191-4599-4de6-9ca8-2a6c38ccf703 in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc bound to our chassis
Dec 13 08:25:57 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-00000028.
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.271 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.289 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0083aea0-f61d-4ae4-8161-ecb8f5e1f499]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.290 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87bd91d0-e1 in ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.293 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87bd91d0-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.293 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da92b594-1ca9-4171-a252-c7c80f75119d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.295 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0569df-c842-48a0-9556-01d2f892db79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.309 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[cb70251d-c11a-4f4d-bde3-41549f65d3ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.335 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa478ba-9b09-4965-825d-8991d00cc09f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.369 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[76f0184a-aaec-409e-a148-2bce3d4012de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 systemd-udevd[291956]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.376 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e1dac72a-0126-4fea-b714-e20e7990c0de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 NetworkManager[50376]: <info>  [1765614357.3774] manager: (tap87bd91d0-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.413 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d60782-75f6-43ae-93e7-3e19bd2b9d83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.416 248514 DEBUG nova.network.neutron [req-d624947a-607c-4e22-bc03-1defede33c8f req-40401b14-ca2c-4e0f-9377-a66a8b0e3c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Updated VIF entry in instance network info cache for port 5b45c191-4599-4de6-9ca8-2a6c38ccf703. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.417 248514 DEBUG nova.network.neutron [req-d624947a-607c-4e22-bc03-1defede33c8f req-40401b14-ca2c-4e0f-9377-a66a8b0e3c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Updating instance_info_cache with network_info: [{"id": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "address": "fa:16:3e:b0:4d:11", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b45c191-45", "ovs_interfaceid": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.417 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4bece630-d59d-401f-aa74-deb46606e3fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.442 248514 DEBUG oslo_concurrency.lockutils [req-d624947a-607c-4e22-bc03-1defede33c8f req-40401b14-ca2c-4e0f-9377-a66a8b0e3c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ad40734f-2fce-406e-ba77-dd8eeb4743bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:25:57 compute-0 NetworkManager[50376]: <info>  [1765614357.4468] device (tap87bd91d0-e0): carrier: link connected
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.453 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1116e243-26a0-4551-b8ae-2b7408c3d3b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.475 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[14488864-748a-47d6-8409-b916e33b687d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693466, 'reachable_time': 38412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291987, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.494 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[075d64fb-f4fd-4444-b6ef-568513938c47]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:ec7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693466, 'tstamp': 693466}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291988, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.516 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b98a606a-096d-48c0-bab1-8f2eec35113e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693466, 'reachable_time': 38412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291989, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.557 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[812a036e-7a90-4a2b-8dc9-0f50e9c22222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.632 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c05faf0-d80f-4767-b64b-160fd93b5a84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.634 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.634 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.634 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87bd91d0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:57 compute-0 NetworkManager[50376]: <info>  [1765614357.6382] manager: (tap87bd91d0-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Dec 13 08:25:57 compute-0 kernel: tap87bd91d0-e0: entered promiscuous mode
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.642 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87bd91d0-e0, col_values=(('external_ids', {'iface-id': '3e59c36d-12c7-4d92-aa2f-8b6a795f3f98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:57 compute-0 ovn_controller[148476]: 2025-12-13T08:25:57Z|00358|binding|INFO|Releasing lport 3e59c36d-12c7-4d92-aa2f-8b6a795f3f98 from this chassis (sb_readonly=0)
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.646 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.646 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.648 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b22ef48-2b05-4b7c-b8a8-e8f5b407da5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.648 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:25:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:25:57.649 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'env', 'PROCESS_TAG=haproxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87bd91d0-eead-49b6-8f92-f8d0dba555dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.913 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614357.912695, ad40734f-2fce-406e-ba77-dd8eeb4743bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:57 compute-0 nova_compute[248510]: 2025-12-13 08:25:57.913 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] VM Started (Lifecycle Event)
Dec 13 08:25:58 compute-0 podman[292064]: 2025-12-13 08:25:58.039954209 +0000 UTC m=+0.047582695 container create 08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.068 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.072 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614357.9154894, ad40734f-2fce-406e-ba77-dd8eeb4743bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.073 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] VM Paused (Lifecycle Event)
Dec 13 08:25:58 compute-0 systemd[1]: Started libpod-conmon-08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31.scope.
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.094 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.100 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:25:58 compute-0 podman[292064]: 2025-12-13 08:25:58.014906761 +0000 UTC m=+0.022535217 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.117 248514 DEBUG nova.compute.manager [req-55dff29e-902e-4d02-bf6f-e789b51d4185 req-5e0540b1-c2b7-4a31-957a-5b40ea33f44d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Received event network-vif-plugged-5b45c191-4599-4de6-9ca8-2a6c38ccf703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.117 248514 DEBUG oslo_concurrency.lockutils [req-55dff29e-902e-4d02-bf6f-e789b51d4185 req-5e0540b1-c2b7-4a31-957a-5b40ea33f44d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.117 248514 DEBUG oslo_concurrency.lockutils [req-55dff29e-902e-4d02-bf6f-e789b51d4185 req-5e0540b1-c2b7-4a31-957a-5b40ea33f44d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.118 248514 DEBUG oslo_concurrency.lockutils [req-55dff29e-902e-4d02-bf6f-e789b51d4185 req-5e0540b1-c2b7-4a31-957a-5b40ea33f44d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.118 248514 DEBUG nova.compute.manager [req-55dff29e-902e-4d02-bf6f-e789b51d4185 req-5e0540b1-c2b7-4a31-957a-5b40ea33f44d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Processing event network-vif-plugged-5b45c191-4599-4de6-9ca8-2a6c38ccf703 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.118 248514 DEBUG nova.compute.manager [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.124 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.126 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.126 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614358.1226556, ad40734f-2fce-406e-ba77-dd8eeb4743bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.126 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] VM Resumed (Lifecycle Event)
Dec 13 08:25:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.132 248514 INFO nova.virt.libvirt.driver [-] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Instance spawned successfully.
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.132 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d160f445aba41155c240855f1df7fa557239cb4c94d7230a43cf607fa8902cc5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:25:58 compute-0 podman[292064]: 2025-12-13 08:25:58.152982508 +0000 UTC m=+0.160610964 container init 08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 08:25:58 compute-0 podman[292064]: 2025-12-13 08:25:58.163770434 +0000 UTC m=+0.171398890 container start 08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.170 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.175 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:25:58 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[292079]: [NOTICE]   (292083) : New worker (292085) forked
Dec 13 08:25:58 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[292079]: [NOTICE]   (292083) : Loading success.
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.200 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.201 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.202 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.202 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.202 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.203 248514 DEBUG nova.virt.libvirt.driver [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.254 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.373 248514 INFO nova.compute.manager [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Took 8.93 seconds to spawn the instance on the hypervisor.
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.373 248514 DEBUG nova.compute.manager [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:25:58 compute-0 ceph-mon[76537]: pgmap v1828: 321 pgs: 321 active+clean; 167 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 230 op/s
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.486 248514 INFO nova.compute.manager [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Took 10.26 seconds to build instance.
Dec 13 08:25:58 compute-0 nova_compute[248510]: 2025-12-13 08:25:58.582 248514 DEBUG oslo_concurrency.lockutils [None req-4c3786e4-2099-43ed-987f-fb5807e5b44d 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:25:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 167 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 185 op/s
Dec 13 08:25:59 compute-0 nova_compute[248510]: 2025-12-13 08:25:59.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:26:00 compute-0 ceph-mon[76537]: pgmap v1829: 321 pgs: 321 active+clean; 167 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 185 op/s
Dec 13 08:26:00 compute-0 nova_compute[248510]: 2025-12-13 08:26:00.611 248514 DEBUG nova.compute.manager [req-7b2eaf3a-6140-43a4-bc3f-f1742a585b2a req-6c5bbbe7-0ffb-4bd3-832f-dac441376f41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Received event network-vif-plugged-5b45c191-4599-4de6-9ca8-2a6c38ccf703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:00 compute-0 nova_compute[248510]: 2025-12-13 08:26:00.612 248514 DEBUG oslo_concurrency.lockutils [req-7b2eaf3a-6140-43a4-bc3f-f1742a585b2a req-6c5bbbe7-0ffb-4bd3-832f-dac441376f41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:00 compute-0 nova_compute[248510]: 2025-12-13 08:26:00.612 248514 DEBUG oslo_concurrency.lockutils [req-7b2eaf3a-6140-43a4-bc3f-f1742a585b2a req-6c5bbbe7-0ffb-4bd3-832f-dac441376f41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:00 compute-0 nova_compute[248510]: 2025-12-13 08:26:00.612 248514 DEBUG oslo_concurrency.lockutils [req-7b2eaf3a-6140-43a4-bc3f-f1742a585b2a req-6c5bbbe7-0ffb-4bd3-832f-dac441376f41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:00 compute-0 nova_compute[248510]: 2025-12-13 08:26:00.612 248514 DEBUG nova.compute.manager [req-7b2eaf3a-6140-43a4-bc3f-f1742a585b2a req-6c5bbbe7-0ffb-4bd3-832f-dac441376f41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] No waiting events found dispatching network-vif-plugged-5b45c191-4599-4de6-9ca8-2a6c38ccf703 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:26:00 compute-0 nova_compute[248510]: 2025-12-13 08:26:00.613 248514 WARNING nova.compute.manager [req-7b2eaf3a-6140-43a4-bc3f-f1742a585b2a req-6c5bbbe7-0ffb-4bd3-832f-dac441376f41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Received unexpected event network-vif-plugged-5b45c191-4599-4de6-9ca8-2a6c38ccf703 for instance with vm_state active and task_state None.
Dec 13 08:26:00 compute-0 nova_compute[248510]: 2025-12-13 08:26:00.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:00 compute-0 nova_compute[248510]: 2025-12-13 08:26:00.950 248514 DEBUG oslo_concurrency.lockutils [None req-c2a7355a-facf-4d82-b20f-eae404ec872a 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:00 compute-0 nova_compute[248510]: 2025-12-13 08:26:00.951 248514 DEBUG oslo_concurrency.lockutils [None req-c2a7355a-facf-4d82-b20f-eae404ec872a 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:00 compute-0 nova_compute[248510]: 2025-12-13 08:26:00.951 248514 DEBUG nova.compute.manager [None req-c2a7355a-facf-4d82-b20f-eae404ec872a 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:00 compute-0 nova_compute[248510]: 2025-12-13 08:26:00.955 248514 DEBUG nova.compute.manager [None req-c2a7355a-facf-4d82-b20f-eae404ec872a 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 13 08:26:00 compute-0 nova_compute[248510]: 2025-12-13 08:26:00.956 248514 DEBUG nova.objects.instance [None req-c2a7355a-facf-4d82-b20f-eae404ec872a 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'flavor' on Instance uuid ad40734f-2fce-406e-ba77-dd8eeb4743bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:01 compute-0 nova_compute[248510]: 2025-12-13 08:26:01.002 248514 DEBUG nova.virt.libvirt.driver [None req-c2a7355a-facf-4d82-b20f-eae404ec872a 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:26:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 167 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 167 op/s
Dec 13 08:26:02 compute-0 nova_compute[248510]: 2025-12-13 08:26:02.270 248514 DEBUG oslo_concurrency.lockutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "0709dd6e-7ff5-4f0f-b093-69b920b4fe77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:02 compute-0 nova_compute[248510]: 2025-12-13 08:26:02.271 248514 DEBUG oslo_concurrency.lockutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "0709dd6e-7ff5-4f0f-b093-69b920b4fe77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:02 compute-0 nova_compute[248510]: 2025-12-13 08:26:02.298 248514 DEBUG nova.compute.manager [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:26:02 compute-0 nova_compute[248510]: 2025-12-13 08:26:02.382 248514 DEBUG oslo_concurrency.lockutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:02 compute-0 nova_compute[248510]: 2025-12-13 08:26:02.384 248514 DEBUG oslo_concurrency.lockutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:02 compute-0 nova_compute[248510]: 2025-12-13 08:26:02.395 248514 DEBUG nova.virt.hardware [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:26:02 compute-0 nova_compute[248510]: 2025-12-13 08:26:02.395 248514 INFO nova.compute.claims [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:26:02 compute-0 nova_compute[248510]: 2025-12-13 08:26:02.566 248514 DEBUG oslo_concurrency.processutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:02 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 08:26:02 compute-0 ceph-mon[76537]: pgmap v1830: 321 pgs: 321 active+clean; 167 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 167 op/s
Dec 13 08:26:02 compute-0 nova_compute[248510]: 2025-12-13 08:26:02.670 248514 DEBUG oslo_concurrency.lockutils [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "0709dd6e-7ff5-4f0f-b093-69b920b4fe77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.074 248514 DEBUG oslo_concurrency.lockutils [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "29375ff9-300a-43de-a53d-942e7afbb439" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.075 248514 DEBUG oslo_concurrency.lockutils [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.075 248514 DEBUG oslo_concurrency.lockutils [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "29375ff9-300a-43de-a53d-942e7afbb439-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.075 248514 DEBUG oslo_concurrency.lockutils [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.076 248514 DEBUG oslo_concurrency.lockutils [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.077 248514 INFO nova.compute.manager [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Terminating instance
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.078 248514 DEBUG nova.compute.manager [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:26:03 compute-0 kernel: tap66179e56-6f (unregistering): left promiscuous mode
Dec 13 08:26:03 compute-0 NetworkManager[50376]: <info>  [1765614363.1413] device (tap66179e56-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:26:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 167 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 173 op/s
Dec 13 08:26:03 compute-0 ovn_controller[148476]: 2025-12-13T08:26:03Z|00359|binding|INFO|Releasing lport 66179e56-6ff7-4353-872a-ee206fe0b050 from this chassis (sb_readonly=0)
Dec 13 08:26:03 compute-0 ovn_controller[148476]: 2025-12-13T08:26:03Z|00360|binding|INFO|Setting lport 66179e56-6ff7-4353-872a-ee206fe0b050 down in Southbound
Dec 13 08:26:03 compute-0 ovn_controller[148476]: 2025-12-13T08:26:03Z|00361|binding|INFO|Removing iface tap66179e56-6f ovn-installed in OVS
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.168 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:ee:5f 10.100.0.13'], port_security=['fa:16:3e:c2:ee:5f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '29375ff9-300a-43de-a53d-942e7afbb439', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9453e13-be77-4aff-899d-cbb572239200', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e370bdecda394d32b21d4eee440a61fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '06ce8d20-248e-4763-9b89-5c8df9c9f100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=490cf977-f165-4aa4-8179-937f6e939091, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=66179e56-6ff7-4353-872a-ee206fe0b050) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.169 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 66179e56-6ff7-4353-872a-ee206fe0b050 in datapath f9453e13-be77-4aff-899d-cbb572239200 unbound from our chassis
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.171 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f9453e13-be77-4aff-899d-cbb572239200, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.172 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ae28fb9e-5993-4221-b36f-5d98bffeae1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.174 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f9453e13-be77-4aff-899d-cbb572239200 namespace which is not needed anymore
Dec 13 08:26:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:03 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/231895203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.194 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:03 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000026.scope: Deactivated successfully.
Dec 13 08:26:03 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000026.scope: Consumed 13.398s CPU time.
Dec 13 08:26:03 compute-0 systemd-machined[210538]: Machine qemu-44-instance-00000026 terminated.
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.231 248514 DEBUG oslo_concurrency.processutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.238 248514 DEBUG nova.compute.provider_tree [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.264 248514 DEBUG nova.scheduler.client.report [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.307 248514 DEBUG oslo_concurrency.lockutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.308 248514 DEBUG nova.compute.manager [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.322 248514 INFO nova.virt.libvirt.driver [-] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Instance destroyed successfully.
Dec 13 08:26:03 compute-0 neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200[290800]: [NOTICE]   (290804) : haproxy version is 2.8.14-c23fe91
Dec 13 08:26:03 compute-0 neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200[290800]: [NOTICE]   (290804) : path to executable is /usr/sbin/haproxy
Dec 13 08:26:03 compute-0 neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200[290800]: [WARNING]  (290804) : Exiting Master process...
Dec 13 08:26:03 compute-0 neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200[290800]: [ALERT]    (290804) : Current worker (290806) exited with code 143 (Terminated)
Dec 13 08:26:03 compute-0 neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200[290800]: [WARNING]  (290804) : All workers exited. Exiting... (0)
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.323 248514 DEBUG nova.objects.instance [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lazy-loading 'resources' on Instance uuid 29375ff9-300a-43de-a53d-942e7afbb439 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:03 compute-0 systemd[1]: libpod-1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04.scope: Deactivated successfully.
Dec 13 08:26:03 compute-0 podman[292139]: 2025-12-13 08:26:03.333895009 +0000 UTC m=+0.053357808 container died 1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.346 248514 DEBUG nova.virt.libvirt.vif [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:25:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1257374937',display_name='tempest-ServersTestJSON-server-1257374937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1257374937',id=38,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQM1EROcyTGYUNd3w+nL2c9tIiIpr7CpaZ/uVd5bqgtT8dSelOLMXPhJ/HVb4yRy7qvGCJbUXgeaaHZuyNIHWqDvsU3xORtagVm9kIjwgQBLgTK/GBqyOzmv5WpZhagyQ==',key_name='tempest-keypair-1165619389',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:25:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e370bdecda394d32b21d4eee440a61fa',ramdisk_id='',reservation_id='r-tuwuy5hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1483541269',owner_user_name='tempest-ServersTestJSON-1483541269-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:25:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f6fadd0581d041428cc88161ae6e6e02',uuid=29375ff9-300a-43de-a53d-942e7afbb439,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.347 248514 DEBUG nova.network.os_vif_util [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Converting VIF {"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.348 248514 DEBUG nova.network.os_vif_util [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:ee:5f,bridge_name='br-int',has_traffic_filtering=True,id=66179e56-6ff7-4353-872a-ee206fe0b050,network=Network(f9453e13-be77-4aff-899d-cbb572239200),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66179e56-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.349 248514 DEBUG os_vif [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:ee:5f,bridge_name='br-int',has_traffic_filtering=True,id=66179e56-6ff7-4353-872a-ee206fe0b050,network=Network(f9453e13-be77-4aff-899d-cbb572239200),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66179e56-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.351 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.352 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66179e56-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.364 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.366 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:26:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d31eec13ac5f88b15ff754f3542405a83e8a8c72b6ef49186db1b491f558445-merged.mount: Deactivated successfully.
Dec 13 08:26:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04-userdata-shm.mount: Deactivated successfully.
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.374 248514 DEBUG nova.compute.claims [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Aborting claim: <nova.compute.claims.Claim object at 0x7f2a2023de80> abort /usr/lib/python3.9/site-packages/nova/compute/claims.py:85
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.376 248514 DEBUG oslo_concurrency.lockutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.377 248514 DEBUG oslo_concurrency.lockutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.381 248514 INFO os_vif [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:ee:5f,bridge_name='br-int',has_traffic_filtering=True,id=66179e56-6ff7-4353-872a-ee206fe0b050,network=Network(f9453e13-be77-4aff-899d-cbb572239200),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66179e56-6f')
Dec 13 08:26:03 compute-0 podman[292139]: 2025-12-13 08:26:03.383644436 +0000 UTC m=+0.103107205 container cleanup 1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:26:03 compute-0 systemd[1]: libpod-conmon-1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04.scope: Deactivated successfully.
Dec 13 08:26:03 compute-0 podman[292191]: 2025-12-13 08:26:03.458926124 +0000 UTC m=+0.049255767 container remove 1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.467 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[be1af105-1f64-47df-872b-0775d2de56d6]: (4, ('Sat Dec 13 08:26:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200 (1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04)\n1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04\nSat Dec 13 08:26:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200 (1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04)\n1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.469 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2fce6260-d580-43f7-afab-9102a92b10af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.472 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf9453e13-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:03 compute-0 kernel: tapf9453e13-b0: left promiscuous mode
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.495 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f13f91-0ebe-4eb8-916a-401a28544537]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.512 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[69d6f360-72e3-4446-90b5-5604091af206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.514 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dbd9e79c-9844-49b7-a627-eedb008240e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.530 248514 DEBUG oslo_concurrency.processutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.540 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4d4410-4d37-45f9-863e-b1f897326eeb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691227, 'reachable_time': 34919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292210, 'error': None, 'target': 'ovnmeta-f9453e13-be77-4aff-899d-cbb572239200', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:03 compute-0 systemd[1]: run-netns-ovnmeta\x2df9453e13\x2dbe77\x2d4aff\x2d899d\x2dcbb572239200.mount: Deactivated successfully.
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.543 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f9453e13-be77-4aff-899d-cbb572239200 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:26:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:03.544 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ec570d5c-7cc0-437a-a1b1-ad47e3b4a482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:03 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/231895203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.722 248514 INFO nova.virt.libvirt.driver [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Deleting instance files /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439_del
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.724 248514 INFO nova.virt.libvirt.driver [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Deletion of /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439_del complete
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.781 248514 INFO nova.compute.manager [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Took 0.70 seconds to destroy the instance on the hypervisor.
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.782 248514 DEBUG oslo.service.loopingcall [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.783 248514 DEBUG nova.compute.manager [-] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:26:03 compute-0 nova_compute[248510]: 2025-12-13 08:26:03.783 248514 DEBUG nova.network.neutron [-] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:26:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570483464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:04 compute-0 nova_compute[248510]: 2025-12-13 08:26:04.093 248514 DEBUG oslo_concurrency.processutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:04 compute-0 nova_compute[248510]: 2025-12-13 08:26:04.103 248514 DEBUG nova.compute.provider_tree [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:26:04 compute-0 nova_compute[248510]: 2025-12-13 08:26:04.122 248514 DEBUG nova.scheduler.client.report [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:26:04 compute-0 nova_compute[248510]: 2025-12-13 08:26:04.163 248514 DEBUG oslo_concurrency.lockutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:04 compute-0 nova_compute[248510]: 2025-12-13 08:26:04.165 248514 DEBUG nova.compute.utils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Conflict updating instance 0709dd6e-7ff5-4f0f-b093-69b920b4fe77. Expected: {'task_state': [None]}. Actual: {'task_state': 'deleting'} notify_about_instance_usage /usr/lib/python3.9/site-packages/nova/compute/utils.py:430
Dec 13 08:26:04 compute-0 nova_compute[248510]: 2025-12-13 08:26:04.166 248514 DEBUG nova.compute.manager [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Instance disappeared during build. _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2483
Dec 13 08:26:04 compute-0 nova_compute[248510]: 2025-12-13 08:26:04.166 248514 DEBUG nova.compute.manager [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Unplugging VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:2976
Dec 13 08:26:04 compute-0 nova_compute[248510]: 2025-12-13 08:26:04.167 248514 DEBUG oslo_concurrency.lockutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "refresh_cache-0709dd6e-7ff5-4f0f-b093-69b920b4fe77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:26:04 compute-0 nova_compute[248510]: 2025-12-13 08:26:04.167 248514 DEBUG oslo_concurrency.lockutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquired lock "refresh_cache-0709dd6e-7ff5-4f0f-b093-69b920b4fe77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:26:04 compute-0 nova_compute[248510]: 2025-12-13 08:26:04.167 248514 DEBUG nova.network.neutron [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:26:04 compute-0 ceph-mon[76537]: pgmap v1831: 321 pgs: 321 active+clean; 167 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 173 op/s
Dec 13 08:26:04 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3570483464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:04 compute-0 nova_compute[248510]: 2025-12-13 08:26:04.775 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:04 compute-0 nova_compute[248510]: 2025-12-13 08:26:04.979 248514 DEBUG nova.network.neutron [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:26:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:26:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 113 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.5 MiB/s wr, 179 op/s
Dec 13 08:26:05 compute-0 nova_compute[248510]: 2025-12-13 08:26:05.502 248514 DEBUG nova.network.neutron [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:06 compute-0 nova_compute[248510]: 2025-12-13 08:26:06.233 248514 DEBUG oslo_concurrency.lockutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Releasing lock "refresh_cache-0709dd6e-7ff5-4f0f-b093-69b920b4fe77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:26:06 compute-0 nova_compute[248510]: 2025-12-13 08:26:06.233 248514 DEBUG nova.compute.manager [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Unplugged VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:3012
Dec 13 08:26:06 compute-0 nova_compute[248510]: 2025-12-13 08:26:06.234 248514 DEBUG nova.compute.manager [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:26:06 compute-0 nova_compute[248510]: 2025-12-13 08:26:06.234 248514 DEBUG nova.network.neutron [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:26:06 compute-0 nova_compute[248510]: 2025-12-13 08:26:06.422 248514 DEBUG nova.network.neutron [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:26:06 compute-0 nova_compute[248510]: 2025-12-13 08:26:06.694 248514 DEBUG nova.network.neutron [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:06 compute-0 nova_compute[248510]: 2025-12-13 08:26:06.762 248514 INFO nova.compute.manager [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Took 0.53 seconds to deallocate network for instance.
Dec 13 08:26:06 compute-0 ceph-mon[76537]: pgmap v1832: 321 pgs: 321 active+clean; 113 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.5 MiB/s wr, 179 op/s
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.098 248514 DEBUG nova.network.neutron [-] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 113 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 99 op/s
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.326 248514 INFO nova.compute.manager [-] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Took 3.54 seconds to deallocate network for instance.
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.410 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614352.4096103, 50753e78-0239-4dc0-aeb1-4ee82b493132 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.411 248514 INFO nova.compute.manager [-] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] VM Stopped (Lifecycle Event)
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:07.870570) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614367870608, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 401, "num_deletes": 251, "total_data_size": 288535, "memory_usage": 297288, "flush_reason": "Manual Compaction"}
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614367875017, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 283789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35041, "largest_seqno": 35441, "table_properties": {"data_size": 281375, "index_size": 514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5898, "raw_average_key_size": 18, "raw_value_size": 276663, "raw_average_value_size": 878, "num_data_blocks": 23, "num_entries": 315, "num_filter_entries": 315, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614350, "oldest_key_time": 1765614350, "file_creation_time": 1765614367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 4505 microseconds, and 1838 cpu microseconds.
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:07.875059) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 283789 bytes OK
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:07.875093) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:07.876567) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:07.876583) EVENT_LOG_v1 {"time_micros": 1765614367876577, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:07.876607) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 285981, prev total WAL file size 285981, number of live WAL files 2.
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:07.877036) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(277KB)], [77(8862KB)]
Dec 13 08:26:07 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614367877157, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 9359218, "oldest_snapshot_seqno": -1}
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.891 248514 INFO nova.scheduler.client.report [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Deleted allocations for instance 0709dd6e-7ff5-4f0f-b093-69b920b4fe77
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.892 248514 DEBUG oslo_concurrency.lockutils [None req-107a6278-cd6c-4bb8-92f4-d78874ae3d8b b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "0709dd6e-7ff5-4f0f-b093-69b920b4fe77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.892 248514 DEBUG oslo_concurrency.lockutils [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "0709dd6e-7ff5-4f0f-b093-69b920b4fe77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 5.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.892 248514 DEBUG oslo_concurrency.lockutils [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "0709dd6e-7ff5-4f0f-b093-69b920b4fe77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.893 248514 DEBUG oslo_concurrency.lockutils [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "0709dd6e-7ff5-4f0f-b093-69b920b4fe77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.893 248514 DEBUG oslo_concurrency.lockutils [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "0709dd6e-7ff5-4f0f-b093-69b920b4fe77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.894 248514 DEBUG nova.objects.instance [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'flavor' on Instance uuid 0709dd6e-7ff5-4f0f-b093-69b920b4fe77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.926 248514 DEBUG nova.compute.manager [None req-05bef3c7-21e3-4a75-b945-beeeb29df6ca - - - - - -] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.929 248514 DEBUG nova.objects.instance [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'metadata' on Instance uuid 0709dd6e-7ff5-4f0f-b093-69b920b4fe77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.991 248514 DEBUG oslo_concurrency.lockutils [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:07 compute-0 nova_compute[248510]: 2025-12-13 08:26:07.992 248514 DEBUG oslo_concurrency.lockutils [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:08 compute-0 nova_compute[248510]: 2025-12-13 08:26:08.060 248514 DEBUG oslo_concurrency.processutils [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5711 keys, 7577958 bytes, temperature: kUnknown
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614368099048, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 7577958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7541485, "index_size": 21067, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 146001, "raw_average_key_size": 25, "raw_value_size": 7440378, "raw_average_value_size": 1302, "num_data_blocks": 852, "num_entries": 5711, "num_filter_entries": 5711, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:08.099424) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 7577958 bytes
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:08.134390) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 42.1 rd, 34.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(59.7) write-amplify(26.7) OK, records in: 6220, records dropped: 509 output_compression: NoCompression
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:08.134436) EVENT_LOG_v1 {"time_micros": 1765614368134420, "job": 44, "event": "compaction_finished", "compaction_time_micros": 222063, "compaction_time_cpu_micros": 21310, "output_level": 6, "num_output_files": 1, "total_output_size": 7577958, "num_input_records": 6220, "num_output_records": 5711, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614368134886, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614368136508, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:07.876916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:08.136675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:08.136684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:08.136687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:08.136689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:26:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:26:08.136691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:26:08 compute-0 nova_compute[248510]: 2025-12-13 08:26:08.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:08 compute-0 nova_compute[248510]: 2025-12-13 08:26:08.480 248514 INFO nova.compute.manager [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Terminating instance
Dec 13 08:26:08 compute-0 nova_compute[248510]: 2025-12-13 08:26:08.482 248514 DEBUG oslo_concurrency.lockutils [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "refresh_cache-0709dd6e-7ff5-4f0f-b093-69b920b4fe77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:26:08 compute-0 nova_compute[248510]: 2025-12-13 08:26:08.482 248514 DEBUG oslo_concurrency.lockutils [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquired lock "refresh_cache-0709dd6e-7ff5-4f0f-b093-69b920b4fe77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:26:08 compute-0 nova_compute[248510]: 2025-12-13 08:26:08.483 248514 DEBUG nova.network.neutron [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:26:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1933693603' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:08 compute-0 nova_compute[248510]: 2025-12-13 08:26:08.653 248514 DEBUG oslo_concurrency.processutils [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:08 compute-0 nova_compute[248510]: 2025-12-13 08:26:08.659 248514 DEBUG nova.compute.provider_tree [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:26:08 compute-0 nova_compute[248510]: 2025-12-13 08:26:08.776 248514 DEBUG nova.network.neutron [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:26:08 compute-0 ceph-mon[76537]: pgmap v1833: 321 pgs: 321 active+clean; 113 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 99 op/s
Dec 13 08:26:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1933693603' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 102 op/s
Dec 13 08:26:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:26:09
Dec 13 08:26:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:26:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:26:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'volumes', '.mgr', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.control', '.rgw.root']
Dec 13 08:26:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:26:09 compute-0 nova_compute[248510]: 2025-12-13 08:26:09.243 248514 DEBUG nova.compute.manager [req-055a78e2-8ed1-4d76-bafe-59c2e948776e req-bf826fa6-0cda-4367-97c6-0104665e5712 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Received event network-vif-deleted-66179e56-6ff7-4353-872a-ee206fe0b050 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:09 compute-0 nova_compute[248510]: 2025-12-13 08:26:09.250 248514 DEBUG nova.scheduler.client.report [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:26:09 compute-0 sudo[292256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:26:09 compute-0 sudo[292256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:26:09 compute-0 sudo[292256]: pam_unix(sudo:session): session closed for user root
Dec 13 08:26:09 compute-0 sudo[292281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:26:09 compute-0 sudo[292281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:26:09 compute-0 nova_compute[248510]: 2025-12-13 08:26:09.502 248514 DEBUG oslo_concurrency.lockutils [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:09 compute-0 nova_compute[248510]: 2025-12-13 08:26:09.673 248514 INFO nova.scheduler.client.report [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Deleted allocations for instance 29375ff9-300a-43de-a53d-942e7afbb439
Dec 13 08:26:09 compute-0 nova_compute[248510]: 2025-12-13 08:26:09.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:10 compute-0 sudo[292281]: pam_unix(sudo:session): session closed for user root
Dec 13 08:26:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:26:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 08:26:10 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 08:26:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:26:10 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:26:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:26:10 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:26:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:26:10 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:26:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:26:10 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:26:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:26:10 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:26:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:26:10 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:26:10 compute-0 sudo[292337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:26:10 compute-0 sudo[292337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:26:10 compute-0 sudo[292337]: pam_unix(sudo:session): session closed for user root
Dec 13 08:26:10 compute-0 sudo[292362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:26:10 compute-0 sudo[292362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:26:10 compute-0 podman[292399]: 2025-12-13 08:26:10.529255033 +0000 UTC m=+0.046743334 container create 7a6af832a3e3f1b62fe178e3c8db5be00e0fbd49888c39a68e1f85468bf8bc18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hugle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 08:26:10 compute-0 systemd[1]: Started libpod-conmon-7a6af832a3e3f1b62fe178e3c8db5be00e0fbd49888c39a68e1f85468bf8bc18.scope.
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:26:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:26:10 compute-0 podman[292399]: 2025-12-13 08:26:10.508542063 +0000 UTC m=+0.026030384 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:26:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:26:10 compute-0 podman[292399]: 2025-12-13 08:26:10.63285894 +0000 UTC m=+0.150347271 container init 7a6af832a3e3f1b62fe178e3c8db5be00e0fbd49888c39a68e1f85468bf8bc18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:26:10 compute-0 podman[292399]: 2025-12-13 08:26:10.645896441 +0000 UTC m=+0.163384742 container start 7a6af832a3e3f1b62fe178e3c8db5be00e0fbd49888c39a68e1f85468bf8bc18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hugle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 08:26:10 compute-0 podman[292399]: 2025-12-13 08:26:10.649212963 +0000 UTC m=+0.166701284 container attach 7a6af832a3e3f1b62fe178e3c8db5be00e0fbd49888c39a68e1f85468bf8bc18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 08:26:10 compute-0 tender_hugle[292416]: 167 167
Dec 13 08:26:10 compute-0 systemd[1]: libpod-7a6af832a3e3f1b62fe178e3c8db5be00e0fbd49888c39a68e1f85468bf8bc18.scope: Deactivated successfully.
Dec 13 08:26:10 compute-0 podman[292399]: 2025-12-13 08:26:10.656301868 +0000 UTC m=+0.173790169 container died 7a6af832a3e3f1b62fe178e3c8db5be00e0fbd49888c39a68e1f85468bf8bc18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hugle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 08:26:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-071b431a277bb660ec99e95e4cf9c932c8a2e9daf13f87977c23fb6ef863b316-merged.mount: Deactivated successfully.
Dec 13 08:26:10 compute-0 podman[292399]: 2025-12-13 08:26:10.697362531 +0000 UTC m=+0.214850832 container remove 7a6af832a3e3f1b62fe178e3c8db5be00e0fbd49888c39a68e1f85468bf8bc18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 08:26:10 compute-0 systemd[1]: libpod-conmon-7a6af832a3e3f1b62fe178e3c8db5be00e0fbd49888c39a68e1f85468bf8bc18.scope: Deactivated successfully.
Dec 13 08:26:10 compute-0 nova_compute[248510]: 2025-12-13 08:26:10.760 248514 DEBUG nova.network.neutron [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:10 compute-0 nova_compute[248510]: 2025-12-13 08:26:10.781 248514 DEBUG oslo_concurrency.lockutils [None req-48e9d3b7-ed34-4983-a53c-01040272a38a f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:10 compute-0 nova_compute[248510]: 2025-12-13 08:26:10.849 248514 DEBUG oslo_concurrency.lockutils [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Releasing lock "refresh_cache-0709dd6e-7ff5-4f0f-b093-69b920b4fe77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:26:10 compute-0 nova_compute[248510]: 2025-12-13 08:26:10.850 248514 DEBUG nova.compute.manager [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:26:10 compute-0 nova_compute[248510]: 2025-12-13 08:26:10.855 248514 DEBUG nova.virt.libvirt.driver [-] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] During wait destroy, instance disappeared. _wait_for_destroy /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1527
Dec 13 08:26:10 compute-0 nova_compute[248510]: 2025-12-13 08:26:10.856 248514 INFO nova.virt.libvirt.driver [-] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Instance destroyed successfully.
Dec 13 08:26:10 compute-0 nova_compute[248510]: 2025-12-13 08:26:10.856 248514 DEBUG nova.objects.instance [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0709dd6e-7ff5-4f0f-b093-69b920b4fe77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:10 compute-0 ceph-mon[76537]: pgmap v1834: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 102 op/s
Dec 13 08:26:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 08:26:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:26:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:26:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:26:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:26:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:26:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:26:10 compute-0 podman[292440]: 2025-12-13 08:26:10.909923096 +0000 UTC m=+0.056569977 container create 7a067a40ace598c1b1dccc99872900d3ba3cedb91e7c169cf35badb92a910a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lamarr, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:26:10 compute-0 systemd[1]: Started libpod-conmon-7a067a40ace598c1b1dccc99872900d3ba3cedb91e7c169cf35badb92a910a70.scope.
Dec 13 08:26:10 compute-0 nova_compute[248510]: 2025-12-13 08:26:10.960 248514 DEBUG nova.objects.instance [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'resources' on Instance uuid 0709dd6e-7ff5-4f0f-b093-69b920b4fe77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:10 compute-0 podman[292440]: 2025-12-13 08:26:10.888887797 +0000 UTC m=+0.035534698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:26:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac25f2040f8e4b48fdd6cfacab938fc5177ae18414433648cbdb330f99bc098/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac25f2040f8e4b48fdd6cfacab938fc5177ae18414433648cbdb330f99bc098/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac25f2040f8e4b48fdd6cfacab938fc5177ae18414433648cbdb330f99bc098/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac25f2040f8e4b48fdd6cfacab938fc5177ae18414433648cbdb330f99bc098/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac25f2040f8e4b48fdd6cfacab938fc5177ae18414433648cbdb330f99bc098/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:11 compute-0 podman[292440]: 2025-12-13 08:26:11.005141736 +0000 UTC m=+0.151788617 container init 7a067a40ace598c1b1dccc99872900d3ba3cedb91e7c169cf35badb92a910a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:26:11 compute-0 podman[292440]: 2025-12-13 08:26:11.012162889 +0000 UTC m=+0.158809770 container start 7a067a40ace598c1b1dccc99872900d3ba3cedb91e7c169cf35badb92a910a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 08:26:11 compute-0 podman[292440]: 2025-12-13 08:26:11.015996493 +0000 UTC m=+0.162643374 container attach 7a067a40ace598c1b1dccc99872900d3ba3cedb91e7c169cf35badb92a910a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 08:26:11 compute-0 nova_compute[248510]: 2025-12-13 08:26:11.056 248514 DEBUG nova.virt.libvirt.driver [None req-c2a7355a-facf-4d82-b20f-eae404ec872a 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:26:11 compute-0 nova_compute[248510]: 2025-12-13 08:26:11.102 248514 INFO nova.virt.libvirt.driver [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Deletion of /var/lib/nova/instances/0709dd6e-7ff5-4f0f-b093-69b920b4fe77_del complete
Dec 13 08:26:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 KiB/s wr, 86 op/s
Dec 13 08:26:11 compute-0 nova_compute[248510]: 2025-12-13 08:26:11.449 248514 INFO nova.compute.manager [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Took 0.60 seconds to destroy the instance on the hypervisor.
Dec 13 08:26:11 compute-0 nova_compute[248510]: 2025-12-13 08:26:11.451 248514 DEBUG oslo.service.loopingcall [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:26:11 compute-0 nova_compute[248510]: 2025-12-13 08:26:11.452 248514 DEBUG nova.compute.manager [-] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:26:11 compute-0 nova_compute[248510]: 2025-12-13 08:26:11.452 248514 DEBUG nova.network.neutron [-] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:26:11 compute-0 youthful_lamarr[292457]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:26:11 compute-0 youthful_lamarr[292457]: --> All data devices are unavailable
Dec 13 08:26:11 compute-0 systemd[1]: libpod-7a067a40ace598c1b1dccc99872900d3ba3cedb91e7c169cf35badb92a910a70.scope: Deactivated successfully.
Dec 13 08:26:11 compute-0 podman[292440]: 2025-12-13 08:26:11.574056573 +0000 UTC m=+0.720703464 container died 7a067a40ace598c1b1dccc99872900d3ba3cedb91e7c169cf35badb92a910a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lamarr, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 13 08:26:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ac25f2040f8e4b48fdd6cfacab938fc5177ae18414433648cbdb330f99bc098-merged.mount: Deactivated successfully.
Dec 13 08:26:11 compute-0 podman[292440]: 2025-12-13 08:26:11.823538068 +0000 UTC m=+0.970184949 container remove 7a067a40ace598c1b1dccc99872900d3ba3cedb91e7c169cf35badb92a910a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lamarr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 08:26:11 compute-0 systemd[1]: libpod-conmon-7a067a40ace598c1b1dccc99872900d3ba3cedb91e7c169cf35badb92a910a70.scope: Deactivated successfully.
Dec 13 08:26:11 compute-0 nova_compute[248510]: 2025-12-13 08:26:11.887 248514 DEBUG nova.network.neutron [-] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:26:11 compute-0 sudo[292362]: pam_unix(sudo:session): session closed for user root
Dec 13 08:26:11 compute-0 nova_compute[248510]: 2025-12-13 08:26:11.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:11 compute-0 nova_compute[248510]: 2025-12-13 08:26:11.917 248514 DEBUG nova.network.neutron [-] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:11 compute-0 sudo[292504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:26:11 compute-0 sudo[292504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:26:11 compute-0 sudo[292504]: pam_unix(sudo:session): session closed for user root
Dec 13 08:26:12 compute-0 sudo[292529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:26:12 compute-0 sudo[292529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:26:12 compute-0 ceph-mon[76537]: pgmap v1835: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 KiB/s wr, 86 op/s
Dec 13 08:26:12 compute-0 nova_compute[248510]: 2025-12-13 08:26:12.397 248514 INFO nova.compute.manager [-] [instance: 0709dd6e-7ff5-4f0f-b093-69b920b4fe77] Took 0.95 seconds to deallocate network for instance.
Dec 13 08:26:12 compute-0 podman[292567]: 2025-12-13 08:26:12.388921569 +0000 UTC m=+0.025864650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:26:12 compute-0 podman[292567]: 2025-12-13 08:26:12.587310753 +0000 UTC m=+0.224253814 container create 717fd528bad92e04480ebace9a0e41ac3c2a8657b009ee4d84ca76ffe0e28666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 08:26:12 compute-0 systemd[1]: Started libpod-conmon-717fd528bad92e04480ebace9a0e41ac3c2a8657b009ee4d84ca76ffe0e28666.scope.
Dec 13 08:26:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:26:12 compute-0 podman[292567]: 2025-12-13 08:26:12.676887073 +0000 UTC m=+0.313830164 container init 717fd528bad92e04480ebace9a0e41ac3c2a8657b009ee4d84ca76ffe0e28666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 08:26:12 compute-0 podman[292567]: 2025-12-13 08:26:12.686642413 +0000 UTC m=+0.323585474 container start 717fd528bad92e04480ebace9a0e41ac3c2a8657b009ee4d84ca76ffe0e28666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_burnell, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 08:26:12 compute-0 podman[292567]: 2025-12-13 08:26:12.690360815 +0000 UTC m=+0.327303906 container attach 717fd528bad92e04480ebace9a0e41ac3c2a8657b009ee4d84ca76ffe0e28666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 08:26:12 compute-0 systemd[1]: libpod-717fd528bad92e04480ebace9a0e41ac3c2a8657b009ee4d84ca76ffe0e28666.scope: Deactivated successfully.
Dec 13 08:26:12 compute-0 determined_burnell[292583]: 167 167
Dec 13 08:26:12 compute-0 conmon[292583]: conmon 717fd528bad92e04480e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-717fd528bad92e04480ebace9a0e41ac3c2a8657b009ee4d84ca76ffe0e28666.scope/container/memory.events
Dec 13 08:26:12 compute-0 podman[292567]: 2025-12-13 08:26:12.694802165 +0000 UTC m=+0.331745246 container died 717fd528bad92e04480ebace9a0e41ac3c2a8657b009ee4d84ca76ffe0e28666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_burnell, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:26:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-14966cbd6c488a020bb3274038f5c758bf41be8ee49772c59451aad532f20a54-merged.mount: Deactivated successfully.
Dec 13 08:26:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 103 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 619 KiB/s wr, 114 op/s
Dec 13 08:26:13 compute-0 nova_compute[248510]: 2025-12-13 08:26:13.293 248514 DEBUG oslo_concurrency.lockutils [None req-61cb8e96-fed5-443e-a5b9-3c4827fd23d5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "0709dd6e-7ff5-4f0f-b093-69b920b4fe77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:13 compute-0 podman[292567]: 2025-12-13 08:26:13.345766057 +0000 UTC m=+0.982709118 container remove 717fd528bad92e04480ebace9a0e41ac3c2a8657b009ee4d84ca76ffe0e28666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:26:13 compute-0 nova_compute[248510]: 2025-12-13 08:26:13.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:13 compute-0 systemd[1]: libpod-conmon-717fd528bad92e04480ebace9a0e41ac3c2a8657b009ee4d84ca76ffe0e28666.scope: Deactivated successfully.
Dec 13 08:26:13 compute-0 kernel: tap5b45c191-45 (unregistering): left promiscuous mode
Dec 13 08:26:13 compute-0 NetworkManager[50376]: <info>  [1765614373.4081] device (tap5b45c191-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:26:13 compute-0 ovn_controller[148476]: 2025-12-13T08:26:13Z|00362|binding|INFO|Releasing lport 5b45c191-4599-4de6-9ca8-2a6c38ccf703 from this chassis (sb_readonly=0)
Dec 13 08:26:13 compute-0 ovn_controller[148476]: 2025-12-13T08:26:13Z|00363|binding|INFO|Setting lport 5b45c191-4599-4de6-9ca8-2a6c38ccf703 down in Southbound
Dec 13 08:26:13 compute-0 ovn_controller[148476]: 2025-12-13T08:26:13Z|00364|binding|INFO|Removing iface tap5b45c191-45 ovn-installed in OVS
Dec 13 08:26:13 compute-0 nova_compute[248510]: 2025-12-13 08:26:13.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:13 compute-0 nova_compute[248510]: 2025-12-13 08:26:13.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:13 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Deactivated successfully.
Dec 13 08:26:13 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Consumed 13.078s CPU time.
Dec 13 08:26:13 compute-0 systemd-machined[210538]: Machine qemu-46-instance-00000028 terminated.
Dec 13 08:26:13 compute-0 podman[292615]: 2025-12-13 08:26:13.51086771 +0000 UTC m=+0.023114971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:26:13 compute-0 nova_compute[248510]: 2025-12-13 08:26:13.634 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:13 compute-0 nova_compute[248510]: 2025-12-13 08:26:13.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:13 compute-0 podman[292615]: 2025-12-13 08:26:13.677731437 +0000 UTC m=+0.189978678 container create 1875f3c4e853b421bddc214a2676c0d0fc1e61ae4a4ea8b19b46eeac1dd27888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_stonebraker, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:26:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:13.714 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:4d:11 10.100.0.5'], port_security=['fa:16:3e:b0:4d:11 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ad40734f-2fce-406e-ba77-dd8eeb4743bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5b45c191-4599-4de6-9ca8-2a6c38ccf703) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:13.715 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5b45c191-4599-4de6-9ca8-2a6c38ccf703 in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc unbound from our chassis
Dec 13 08:26:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:13.717 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:26:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:13.718 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6db6bda0-d27d-40f7-aee2-e92c68286a36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:13.719 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace which is not needed anymore
Dec 13 08:26:13 compute-0 systemd[1]: Started libpod-conmon-1875f3c4e853b421bddc214a2676c0d0fc1e61ae4a4ea8b19b46eeac1dd27888.scope.
Dec 13 08:26:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cf1d04b4aca7b5d3980e3ec160c8c7f2e1e4dacb0f168fabd1516d1de95152/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cf1d04b4aca7b5d3980e3ec160c8c7f2e1e4dacb0f168fabd1516d1de95152/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cf1d04b4aca7b5d3980e3ec160c8c7f2e1e4dacb0f168fabd1516d1de95152/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cf1d04b4aca7b5d3980e3ec160c8c7f2e1e4dacb0f168fabd1516d1de95152/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:13 compute-0 podman[292615]: 2025-12-13 08:26:13.888462477 +0000 UTC m=+0.400709718 container init 1875f3c4e853b421bddc214a2676c0d0fc1e61ae4a4ea8b19b46eeac1dd27888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:26:13 compute-0 podman[292615]: 2025-12-13 08:26:13.898403982 +0000 UTC m=+0.410651223 container start 1875f3c4e853b421bddc214a2676c0d0fc1e61ae4a4ea8b19b46eeac1dd27888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_stonebraker, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:26:13 compute-0 podman[292615]: 2025-12-13 08:26:13.948523689 +0000 UTC m=+0.460770930 container attach 1875f3c4e853b421bddc214a2676c0d0fc1e61ae4a4ea8b19b46eeac1dd27888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_stonebraker, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:26:14 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[292079]: [NOTICE]   (292083) : haproxy version is 2.8.14-c23fe91
Dec 13 08:26:14 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[292079]: [NOTICE]   (292083) : path to executable is /usr/sbin/haproxy
Dec 13 08:26:14 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[292079]: [WARNING]  (292083) : Exiting Master process...
Dec 13 08:26:14 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[292079]: [ALERT]    (292083) : Current worker (292085) exited with code 143 (Terminated)
Dec 13 08:26:14 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[292079]: [WARNING]  (292083) : All workers exited. Exiting... (0)
Dec 13 08:26:14 compute-0 systemd[1]: libpod-08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31.scope: Deactivated successfully.
Dec 13 08:26:14 compute-0 podman[292664]: 2025-12-13 08:26:14.057943059 +0000 UTC m=+0.231670608 container died 08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.075 248514 INFO nova.virt.libvirt.driver [None req-c2a7355a-facf-4d82-b20f-eae404ec872a 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Instance shutdown successfully after 13 seconds.
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.085 248514 INFO nova.virt.libvirt.driver [-] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Instance destroyed successfully.
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.085 248514 DEBUG nova.objects.instance [None req-c2a7355a-facf-4d82-b20f-eae404ec872a 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'numa_topology' on Instance uuid ad40734f-2fce-406e-ba77-dd8eeb4743bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.089 248514 DEBUG nova.compute.manager [req-43130e9d-0d1c-412d-af90-4eb3c06dfcf7 req-475d5a26-079d-4248-a4bd-b366d509c17c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Received event network-vif-unplugged-5b45c191-4599-4de6-9ca8-2a6c38ccf703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.089 248514 DEBUG oslo_concurrency.lockutils [req-43130e9d-0d1c-412d-af90-4eb3c06dfcf7 req-475d5a26-079d-4248-a4bd-b366d509c17c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.090 248514 DEBUG oslo_concurrency.lockutils [req-43130e9d-0d1c-412d-af90-4eb3c06dfcf7 req-475d5a26-079d-4248-a4bd-b366d509c17c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.090 248514 DEBUG oslo_concurrency.lockutils [req-43130e9d-0d1c-412d-af90-4eb3c06dfcf7 req-475d5a26-079d-4248-a4bd-b366d509c17c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.090 248514 DEBUG nova.compute.manager [req-43130e9d-0d1c-412d-af90-4eb3c06dfcf7 req-475d5a26-079d-4248-a4bd-b366d509c17c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] No waiting events found dispatching network-vif-unplugged-5b45c191-4599-4de6-9ca8-2a6c38ccf703 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.090 248514 WARNING nova.compute.manager [req-43130e9d-0d1c-412d-af90-4eb3c06dfcf7 req-475d5a26-079d-4248-a4bd-b366d509c17c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Received unexpected event network-vif-unplugged-5b45c191-4599-4de6-9ca8-2a6c38ccf703 for instance with vm_state active and task_state powering-off.
Dec 13 08:26:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31-userdata-shm.mount: Deactivated successfully.
Dec 13 08:26:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d160f445aba41155c240855f1df7fa557239cb4c94d7230a43cf607fa8902cc5-merged.mount: Deactivated successfully.
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.145 248514 DEBUG nova.compute.manager [None req-c2a7355a-facf-4d82-b20f-eae404ec872a 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]: {
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:     "0": [
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:         {
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "devices": [
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "/dev/loop3"
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             ],
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_name": "ceph_lv0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_size": "21470642176",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "name": "ceph_lv0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "tags": {
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.cluster_name": "ceph",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.crush_device_class": "",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.encrypted": "0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.objectstore": "bluestore",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.osd_id": "0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.type": "block",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.vdo": "0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.with_tpm": "0"
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             },
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "type": "block",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "vg_name": "ceph_vg0"
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:         }
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:     ],
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:     "1": [
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:         {
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "devices": [
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "/dev/loop4"
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             ],
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_name": "ceph_lv1",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_size": "21470642176",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "name": "ceph_lv1",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "tags": {
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.cluster_name": "ceph",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.crush_device_class": "",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.encrypted": "0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.objectstore": "bluestore",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.osd_id": "1",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.type": "block",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.vdo": "0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.with_tpm": "0"
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             },
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "type": "block",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "vg_name": "ceph_vg1"
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:         }
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:     ],
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:     "2": [
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:         {
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "devices": [
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "/dev/loop5"
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             ],
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_name": "ceph_lv2",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_size": "21470642176",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "name": "ceph_lv2",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "tags": {
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.cluster_name": "ceph",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.crush_device_class": "",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.encrypted": "0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.objectstore": "bluestore",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.osd_id": "2",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.type": "block",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.vdo": "0",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:                 "ceph.with_tpm": "0"
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             },
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "type": "block",
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:             "vg_name": "ceph_vg2"
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:         }
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]:     ]
Dec 13 08:26:14 compute-0 serene_stonebraker[292657]: }
Dec 13 08:26:14 compute-0 ceph-mon[76537]: pgmap v1836: 321 pgs: 321 active+clean; 103 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 619 KiB/s wr, 114 op/s
Dec 13 08:26:14 compute-0 podman[292664]: 2025-12-13 08:26:14.242290607 +0000 UTC m=+0.416018156 container cleanup 08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 08:26:14 compute-0 systemd[1]: libpod-conmon-08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31.scope: Deactivated successfully.
Dec 13 08:26:14 compute-0 systemd[1]: libpod-1875f3c4e853b421bddc214a2676c0d0fc1e61ae4a4ea8b19b46eeac1dd27888.scope: Deactivated successfully.
Dec 13 08:26:14 compute-0 podman[292615]: 2025-12-13 08:26:14.258342483 +0000 UTC m=+0.770589724 container died 1875f3c4e853b421bddc214a2676c0d0fc1e61ae4a4ea8b19b46eeac1dd27888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_stonebraker, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:26:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-30cf1d04b4aca7b5d3980e3ec160c8c7f2e1e4dacb0f168fabd1516d1de95152-merged.mount: Deactivated successfully.
Dec 13 08:26:14 compute-0 podman[292615]: 2025-12-13 08:26:14.317004561 +0000 UTC m=+0.829251802 container remove 1875f3c4e853b421bddc214a2676c0d0fc1e61ae4a4ea8b19b46eeac1dd27888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_stonebraker, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 08:26:14 compute-0 podman[292697]: 2025-12-13 08:26:14.32712413 +0000 UTC m=+0.053890690 container remove 08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:26:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:14.333 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9616c7da-f499-41dd-9bdc-e45dd2984e92]: (4, ('Sat Dec 13 08:26:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31)\n08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31\nSat Dec 13 08:26:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31)\n08ce9933be0030feb4c2b88afffb547a7499420044149cf3206eb0d07b6f9f31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:14.334 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[678a2afb-c140-4e64-b5be-0cc45d23ce74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:14.335 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:14 compute-0 systemd[1]: libpod-conmon-1875f3c4e853b421bddc214a2676c0d0fc1e61ae4a4ea8b19b46eeac1dd27888.scope: Deactivated successfully.
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.338 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:14 compute-0 kernel: tap87bd91d0-e0: left promiscuous mode
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.361 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:14.365 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c830e152-7960-44b4-81c8-ec0f5e40d78d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:14.378 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[21315384-5c87-434f-a4f9-1dbbaab0fcae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:14.380 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[74ac5e12-ba1a-4bd1-8026-a9f8ca3f0e05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:14 compute-0 sudo[292529]: pam_unix(sudo:session): session closed for user root
Dec 13 08:26:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:14.399 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ed57a4e7-0cd6-4514-9631-8dcca4d8479c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693457, 'reachable_time': 20859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292728, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d87bd91d0\x2deead\x2d49b6\x2d8f92\x2df8d0dba555dc.mount: Deactivated successfully.
Dec 13 08:26:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:14.402 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:26:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:14.403 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c0b6e9-1cd5-464d-85fa-6a571c64f02b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:14 compute-0 sudo[292729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:26:14 compute-0 sudo[292729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:26:14 compute-0 sudo[292729]: pam_unix(sudo:session): session closed for user root
Dec 13 08:26:14 compute-0 sudo[292754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:26:14 compute-0 sudo[292754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.696 248514 DEBUG oslo_concurrency.lockutils [None req-c2a7355a-facf-4d82-b20f-eae404ec872a 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:14 compute-0 nova_compute[248510]: 2025-12-13 08:26:14.794 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:14 compute-0 podman[292792]: 2025-12-13 08:26:14.843001019 +0000 UTC m=+0.042198302 container create 4b60d60021f3b06dd503c93299178328fcc46a4621ccc5992f91660f8f5d3494 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 08:26:14 compute-0 systemd[1]: Started libpod-conmon-4b60d60021f3b06dd503c93299178328fcc46a4621ccc5992f91660f8f5d3494.scope.
Dec 13 08:26:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:26:14 compute-0 podman[292792]: 2025-12-13 08:26:14.823295803 +0000 UTC m=+0.022493106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:26:14 compute-0 podman[292792]: 2025-12-13 08:26:14.938198168 +0000 UTC m=+0.137395481 container init 4b60d60021f3b06dd503c93299178328fcc46a4621ccc5992f91660f8f5d3494 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:26:14 compute-0 podman[292792]: 2025-12-13 08:26:14.94839857 +0000 UTC m=+0.147595843 container start 4b60d60021f3b06dd503c93299178328fcc46a4621ccc5992f91660f8f5d3494 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:26:14 compute-0 podman[292792]: 2025-12-13 08:26:14.955115505 +0000 UTC m=+0.154312988 container attach 4b60d60021f3b06dd503c93299178328fcc46a4621ccc5992f91660f8f5d3494 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 08:26:14 compute-0 mystifying_bohr[292808]: 167 167
Dec 13 08:26:14 compute-0 systemd[1]: libpod-4b60d60021f3b06dd503c93299178328fcc46a4621ccc5992f91660f8f5d3494.scope: Deactivated successfully.
Dec 13 08:26:14 compute-0 podman[292792]: 2025-12-13 08:26:14.957983066 +0000 UTC m=+0.157180349 container died 4b60d60021f3b06dd503c93299178328fcc46a4621ccc5992f91660f8f5d3494 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bohr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 08:26:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:26:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2429084912' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:26:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:26:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2429084912' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:26:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ad9b45c9287e2a83e78e47563342d0aae2184d4c7b9a1b068939ad4d847b595-merged.mount: Deactivated successfully.
Dec 13 08:26:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 121 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 132 op/s
Dec 13 08:26:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:15.186 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:15 compute-0 nova_compute[248510]: 2025-12-13 08:26:15.187 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:15.190 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:26:15 compute-0 podman[292792]: 2025-12-13 08:26:15.682568224 +0000 UTC m=+0.881765517 container remove 4b60d60021f3b06dd503c93299178328fcc46a4621ccc5992f91660f8f5d3494 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bohr, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:26:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2429084912' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:26:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2429084912' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:26:15 compute-0 systemd[1]: libpod-conmon-4b60d60021f3b06dd503c93299178328fcc46a4621ccc5992f91660f8f5d3494.scope: Deactivated successfully.
Dec 13 08:26:15 compute-0 podman[292832]: 2025-12-13 08:26:15.881027531 +0000 UTC m=+0.052575308 container create 2d7a6df9b6f33f478f4ba3112777bf7ae23967eee90c4cc3c5ef69fab44074e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 08:26:15 compute-0 systemd[1]: Started libpod-conmon-2d7a6df9b6f33f478f4ba3112777bf7ae23967eee90c4cc3c5ef69fab44074e2.scope.
Dec 13 08:26:15 compute-0 podman[292832]: 2025-12-13 08:26:15.855674196 +0000 UTC m=+0.027222013 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:26:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8832d40087904ed9c41fdff3995e4f6a4a752459845601b5477acd71605eec27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8832d40087904ed9c41fdff3995e4f6a4a752459845601b5477acd71605eec27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8832d40087904ed9c41fdff3995e4f6a4a752459845601b5477acd71605eec27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8832d40087904ed9c41fdff3995e4f6a4a752459845601b5477acd71605eec27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:15 compute-0 podman[292832]: 2025-12-13 08:26:15.991396364 +0000 UTC m=+0.162944121 container init 2d7a6df9b6f33f478f4ba3112777bf7ae23967eee90c4cc3c5ef69fab44074e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_poincare, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 08:26:15 compute-0 podman[292832]: 2025-12-13 08:26:15.998954631 +0000 UTC m=+0.170502368 container start 2d7a6df9b6f33f478f4ba3112777bf7ae23967eee90c4cc3c5ef69fab44074e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 08:26:16 compute-0 podman[292832]: 2025-12-13 08:26:16.003130814 +0000 UTC m=+0.174678551 container attach 2d7a6df9b6f33f478f4ba3112777bf7ae23967eee90c4cc3c5ef69fab44074e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:26:16 compute-0 nova_compute[248510]: 2025-12-13 08:26:16.538 248514 DEBUG nova.compute.manager [req-9c2e0ef9-0785-4f78-af30-e4df21feeb67 req-0d7ff726-f4bd-4da1-8ed6-3da1d944cd92 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Received event network-vif-plugged-5b45c191-4599-4de6-9ca8-2a6c38ccf703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:16 compute-0 nova_compute[248510]: 2025-12-13 08:26:16.539 248514 DEBUG oslo_concurrency.lockutils [req-9c2e0ef9-0785-4f78-af30-e4df21feeb67 req-0d7ff726-f4bd-4da1-8ed6-3da1d944cd92 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:16 compute-0 nova_compute[248510]: 2025-12-13 08:26:16.539 248514 DEBUG oslo_concurrency.lockutils [req-9c2e0ef9-0785-4f78-af30-e4df21feeb67 req-0d7ff726-f4bd-4da1-8ed6-3da1d944cd92 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:16 compute-0 nova_compute[248510]: 2025-12-13 08:26:16.539 248514 DEBUG oslo_concurrency.lockutils [req-9c2e0ef9-0785-4f78-af30-e4df21feeb67 req-0d7ff726-f4bd-4da1-8ed6-3da1d944cd92 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:16 compute-0 nova_compute[248510]: 2025-12-13 08:26:16.539 248514 DEBUG nova.compute.manager [req-9c2e0ef9-0785-4f78-af30-e4df21feeb67 req-0d7ff726-f4bd-4da1-8ed6-3da1d944cd92 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] No waiting events found dispatching network-vif-plugged-5b45c191-4599-4de6-9ca8-2a6c38ccf703 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:26:16 compute-0 nova_compute[248510]: 2025-12-13 08:26:16.539 248514 WARNING nova.compute.manager [req-9c2e0ef9-0785-4f78-af30-e4df21feeb67 req-0d7ff726-f4bd-4da1-8ed6-3da1d944cd92 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Received unexpected event network-vif-plugged-5b45c191-4599-4de6-9ca8-2a6c38ccf703 for instance with vm_state stopped and task_state None.
Dec 13 08:26:16 compute-0 ceph-mon[76537]: pgmap v1837: 321 pgs: 321 active+clean; 121 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 132 op/s
Dec 13 08:26:16 compute-0 lvm[292954]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:26:16 compute-0 lvm[292955]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:26:16 compute-0 lvm[292954]: VG ceph_vg1 finished
Dec 13 08:26:16 compute-0 lvm[292955]: VG ceph_vg2 finished
Dec 13 08:26:16 compute-0 lvm[292950]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:26:16 compute-0 lvm[292950]: VG ceph_vg0 finished
Dec 13 08:26:16 compute-0 podman[292927]: 2025-12-13 08:26:16.807238973 +0000 UTC m=+0.068200774 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:26:16 compute-0 fervent_poincare[292849]: {}
Dec 13 08:26:16 compute-0 podman[292926]: 2025-12-13 08:26:16.841259053 +0000 UTC m=+0.108033227 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:26:16 compute-0 podman[292924]: 2025-12-13 08:26:16.848564603 +0000 UTC m=+0.115383198 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:26:16 compute-0 systemd[1]: libpod-2d7a6df9b6f33f478f4ba3112777bf7ae23967eee90c4cc3c5ef69fab44074e2.scope: Deactivated successfully.
Dec 13 08:26:16 compute-0 systemd[1]: libpod-2d7a6df9b6f33f478f4ba3112777bf7ae23967eee90c4cc3c5ef69fab44074e2.scope: Consumed 1.466s CPU time.
Dec 13 08:26:16 compute-0 podman[292832]: 2025-12-13 08:26:16.861448641 +0000 UTC m=+1.032996378 container died 2d7a6df9b6f33f478f4ba3112777bf7ae23967eee90c4cc3c5ef69fab44074e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_poincare, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8832d40087904ed9c41fdff3995e4f6a4a752459845601b5477acd71605eec27-merged.mount: Deactivated successfully.
Dec 13 08:26:16 compute-0 podman[292832]: 2025-12-13 08:26:16.909316422 +0000 UTC m=+1.080864159 container remove 2d7a6df9b6f33f478f4ba3112777bf7ae23967eee90c4cc3c5ef69fab44074e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_poincare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 08:26:16 compute-0 systemd[1]: libpod-conmon-2d7a6df9b6f33f478f4ba3112777bf7ae23967eee90c4cc3c5ef69fab44074e2.scope: Deactivated successfully.
Dec 13 08:26:16 compute-0 sudo[292754]: pam_unix(sudo:session): session closed for user root
Dec 13 08:26:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:26:16 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:26:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:26:16 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:26:17 compute-0 sudo[293006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:26:17 compute-0 sudo[293006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:26:17 compute-0 sudo[293006]: pam_unix(sudo:session): session closed for user root
Dec 13 08:26:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 121 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 13 08:26:17 compute-0 nova_compute[248510]: 2025-12-13 08:26:17.950 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Acquiring lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:17 compute-0 nova_compute[248510]: 2025-12-13 08:26:17.950 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:26:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:26:17 compute-0 ceph-mon[76537]: pgmap v1838: 321 pgs: 321 active+clean; 121 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.152 248514 DEBUG nova.compute.manager [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.316 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614363.3153963, 29375ff9-300a-43de-a53d-942e7afbb439 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.317 248514 INFO nova.compute.manager [-] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] VM Stopped (Lifecycle Event)
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.358 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "5ad89163-9711-4d02-94be-db41412cd173" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.359 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.360 248514 DEBUG nova.compute.manager [None req-a83941b3-dd10-4760-bf13-f83127b4c9fa - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.371 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.381 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.382 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.392 248514 DEBUG nova.virt.hardware [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.393 248514 INFO nova.compute.claims [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.493 248514 DEBUG nova.compute.manager [None req-a89aabf9-0670-4d04-a9a1-8b9fa5f11fd2 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.500 248514 DEBUG nova.compute.manager [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.672 248514 INFO nova.compute.manager [None req-a89aabf9-0670-4d04-a9a1-8b9fa5f11fd2 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] instance snapshotting
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.673 248514 WARNING nova.compute.manager [None req-a89aabf9-0670-4d04-a9a1-8b9fa5f11fd2 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] trying to snapshot a non-running instance: (state: 4 expected: 1)
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.755 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.840 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:18 compute-0 nova_compute[248510]: 2025-12-13 08:26:18.951 248514 INFO nova.virt.libvirt.driver [None req-a89aabf9-0670-4d04-a9a1-8b9fa5f11fd2 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Beginning cold snapshot process
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.124 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 121 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.296 248514 DEBUG nova.virt.libvirt.imagebackend [None req-a89aabf9-0670-4d04-a9a1-8b9fa5f11fd2 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:26:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007182182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.322 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.330 248514 DEBUG nova.compute.provider_tree [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.377 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.401 248514 DEBUG nova.scheduler.client.report [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.472 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.473 248514 DEBUG nova.compute.manager [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.476 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.484 248514 DEBUG nova.virt.hardware [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.485 248514 INFO nova.compute.claims [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.556 248514 DEBUG nova.storage.rbd_utils [None req-a89aabf9-0670-4d04-a9a1-8b9fa5f11fd2 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(787e154fd1ca4e39ad9cd362bf3d7e59) on rbd image(ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.674 248514 DEBUG nova.compute.manager [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.675 248514 DEBUG nova.network.neutron [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:26:19 compute-0 nova_compute[248510]: 2025-12-13 08:26:19.796 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.183 248514 INFO nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:26:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Dec 13 08:26:20 compute-0 ceph-mon[76537]: pgmap v1839: 321 pgs: 321 active+clean; 121 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Dec 13 08:26:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3007182182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Dec 13 08:26:20 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.278 248514 DEBUG nova.compute.manager [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.290 248514 DEBUG nova.storage.rbd_utils [None req-a89aabf9-0670-4d04-a9a1-8b9fa5f11fd2 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] cloning vms/ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk@787e154fd1ca4e39ad9cd362bf3d7e59 to images/b37c7a16-9072-455e-ac1a-604badc22904 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.318 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.406 248514 DEBUG nova.storage.rbd_utils [None req-a89aabf9-0670-4d04-a9a1-8b9fa5f11fd2 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] flattening images/b37c7a16-9072-455e-ac1a-604badc22904 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.686 248514 DEBUG nova.compute.manager [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.693 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.694 248514 INFO nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Creating image(s)
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.718 248514 DEBUG nova.storage.rbd_utils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] rbd image 4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.758 248514 DEBUG nova.storage.rbd_utils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] rbd image 4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.791 248514 DEBUG nova.storage.rbd_utils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] rbd image 4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.795 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007641368400711345 of space, bias 1.0, pg target 0.22924105202134035 quantized to 32 (current 32)
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670581259717853 of space, bias 1.0, pg target 0.2001174377915356 quantized to 32 (current 32)
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.122283476501453e-07 of space, bias 4.0, pg target 0.0008546740171801743 quantized to 16 (current 32)
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:26:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.873 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.874 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.875 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.875 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.900 248514 DEBUG nova.storage.rbd_utils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] rbd image 4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.904 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/632868350' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.937 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.945 248514 DEBUG nova.compute.provider_tree [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.984 248514 DEBUG nova.scheduler.client.report [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:26:20 compute-0 nova_compute[248510]: 2025-12-13 08:26:20.993 248514 DEBUG nova.policy [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'edadf89429754e2b9dc21f5ba73894dc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b248a356f82d4070b9d1ff0500ca574c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.106 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.107 248514 DEBUG nova.compute.manager [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:26:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 121 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.6 MiB/s wr, 79 op/s
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.208 248514 DEBUG nova.compute.manager [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.209 248514 DEBUG nova.network.neutron [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.376 248514 INFO nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.461 248514 DEBUG nova.compute.manager [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:26:21 compute-0 ceph-mon[76537]: osdmap e195: 3 total, 3 up, 3 in
Dec 13 08:26:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/632868350' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.753 248514 DEBUG nova.compute.manager [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.755 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.755 248514 INFO nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Creating image(s)
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.783 248514 DEBUG nova.storage.rbd_utils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 5ad89163-9711-4d02-94be-db41412cd173_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.809 248514 DEBUG nova.storage.rbd_utils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 5ad89163-9711-4d02-94be-db41412cd173_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.830 248514 DEBUG nova.storage.rbd_utils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 5ad89163-9711-4d02-94be-db41412cd173_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.834 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.906 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.908 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.909 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.910 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.933 248514 DEBUG nova.storage.rbd_utils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 5ad89163-9711-4d02-94be-db41412cd173_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.937 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5ad89163-9711-4d02-94be-db41412cd173_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:21 compute-0 nova_compute[248510]: 2025-12-13 08:26:21.973 248514 DEBUG nova.storage.rbd_utils [None req-a89aabf9-0670-4d04-a9a1-8b9fa5f11fd2 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] removing snapshot(787e154fd1ca4e39ad9cd362bf3d7e59) on rbd image(ad40734f-2fce-406e-ba77-dd8eeb4743bc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.028 248514 DEBUG nova.policy [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b6f7967ce16c45d394188c1302b02907', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:26:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:22.193 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.402 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.479 248514 DEBUG nova.storage.rbd_utils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] resizing rbd image 4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:26:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Dec 13 08:26:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Dec 13 08:26:22 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Dec 13 08:26:22 compute-0 ceph-mon[76537]: pgmap v1841: 321 pgs: 321 active+clean; 121 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.6 MiB/s wr, 79 op/s
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.648 248514 DEBUG nova.storage.rbd_utils [None req-a89aabf9-0670-4d04-a9a1-8b9fa5f11fd2 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(snap) on rbd image(b37c7a16-9072-455e-ac1a-604badc22904) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.681 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5ad89163-9711-4d02-94be-db41412cd173_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.737 248514 DEBUG nova.objects.instance [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lazy-loading 'migration_context' on Instance uuid 4a2083e9-7387-4fa8-a7d5-f56fabb0d263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.772 248514 DEBUG nova.storage.rbd_utils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] resizing rbd image 5ad89163-9711-4d02-94be-db41412cd173_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.830 248514 DEBUG nova.network.neutron [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Successfully created port: 223b45c0-417b-4d31-85cf-1b87f4f68c12 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.869 248514 DEBUG nova.objects.instance [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'migration_context' on Instance uuid 5ad89163-9711-4d02-94be-db41412cd173 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.875 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.875 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Ensure instance console log exists: /var/lib/nova/instances/4a2083e9-7387-4fa8-a7d5-f56fabb0d263/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.875 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.876 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.876 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.889 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.890 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Ensure instance console log exists: /var/lib/nova/instances/5ad89163-9711-4d02-94be-db41412cd173/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.890 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.890 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:22 compute-0 nova_compute[248510]: 2025-12-13 08:26:22.891 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 202 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 54 op/s
Dec 13 08:26:23 compute-0 nova_compute[248510]: 2025-12-13 08:26:23.297 248514 DEBUG nova.network.neutron [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Successfully created port: 218fbd96-4977-4565-bc40-c0dd1a19d059 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:26:23 compute-0 nova_compute[248510]: 2025-12-13 08:26:23.372 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Dec 13 08:26:23 compute-0 ceph-mon[76537]: osdmap e196: 3 total, 3 up, 3 in
Dec 13 08:26:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Dec 13 08:26:23 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.333 248514 DEBUG nova.network.neutron [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Successfully updated port: 223b45c0-417b-4d31-85cf-1b87f4f68c12 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.527 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Acquiring lock "refresh_cache-4a2083e9-7387-4fa8-a7d5-f56fabb0d263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.528 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Acquired lock "refresh_cache-4a2083e9-7387-4fa8-a7d5-f56fabb0d263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.528 248514 DEBUG nova.network.neutron [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.599 248514 DEBUG nova.compute.manager [req-d41fb7c3-a7a7-47e8-ba62-6cce4e375d7a req-1f417ff0-91a5-4366-a2bc-e5e52e6ca13f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Received event network-changed-223b45c0-417b-4d31-85cf-1b87f4f68c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.600 248514 DEBUG nova.compute.manager [req-d41fb7c3-a7a7-47e8-ba62-6cce4e375d7a req-1f417ff0-91a5-4366-a2bc-e5e52e6ca13f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Refreshing instance network info cache due to event network-changed-223b45c0-417b-4d31-85cf-1b87f4f68c12. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.600 248514 DEBUG oslo_concurrency.lockutils [req-d41fb7c3-a7a7-47e8-ba62-6cce4e375d7a req-1f417ff0-91a5-4366-a2bc-e5e52e6ca13f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4a2083e9-7387-4fa8-a7d5-f56fabb0d263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.747 248514 DEBUG nova.network.neutron [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Successfully updated port: 218fbd96-4977-4565-bc40-c0dd1a19d059 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.788 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "refresh_cache-5ad89163-9711-4d02-94be-db41412cd173" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.789 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquired lock "refresh_cache-5ad89163-9711-4d02-94be-db41412cd173" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.789 248514 DEBUG nova.network.neutron [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.798 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:24 compute-0 ceph-mon[76537]: pgmap v1843: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 202 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 54 op/s
Dec 13 08:26:24 compute-0 ceph-mon[76537]: osdmap e197: 3 total, 3 up, 3 in
Dec 13 08:26:24 compute-0 nova_compute[248510]: 2025-12-13 08:26:24.984 248514 DEBUG nova.network.neutron [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:26:25 compute-0 nova_compute[248510]: 2025-12-13 08:26:25.010 248514 DEBUG nova.network.neutron [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:26:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:26:25 compute-0 nova_compute[248510]: 2025-12-13 08:26:25.073 248514 DEBUG nova.compute.manager [req-f60625ba-f834-428b-a385-9df20c309a5d req-90a8fde1-4ae3-4679-b8e1-b49efd1618d3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Received event network-changed-218fbd96-4977-4565-bc40-c0dd1a19d059 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:25 compute-0 nova_compute[248510]: 2025-12-13 08:26:25.074 248514 DEBUG nova.compute.manager [req-f60625ba-f834-428b-a385-9df20c309a5d req-90a8fde1-4ae3-4679-b8e1-b49efd1618d3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Refreshing instance network info cache due to event network-changed-218fbd96-4977-4565-bc40-c0dd1a19d059. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:26:25 compute-0 nova_compute[248510]: 2025-12-13 08:26:25.074 248514 DEBUG oslo_concurrency.lockutils [req-f60625ba-f834-428b-a385-9df20c309a5d req-90a8fde1-4ae3-4679-b8e1-b49efd1618d3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5ad89163-9711-4d02-94be-db41412cd173" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:26:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 292 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 15 MiB/s wr, 253 op/s
Dec 13 08:26:25 compute-0 nova_compute[248510]: 2025-12-13 08:26:25.969 248514 INFO nova.virt.libvirt.driver [None req-a89aabf9-0670-4d04-a9a1-8b9fa5f11fd2 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Snapshot image upload complete
Dec 13 08:26:25 compute-0 nova_compute[248510]: 2025-12-13 08:26:25.970 248514 INFO nova.compute.manager [None req-a89aabf9-0670-4d04-a9a1-8b9fa5f11fd2 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Took 7.30 seconds to snapshot the instance on the hypervisor.
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.308 248514 DEBUG nova.network.neutron [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Updating instance_info_cache with network_info: [{"id": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "address": "fa:16:3e:e1:c4:08", "network": {"id": "87e36b0f-d803-4f8a-a615-bb4ace6f6438", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2124742053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b248a356f82d4070b9d1ff0500ca574c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223b45c0-41", "ovs_interfaceid": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.332 248514 DEBUG nova.network.neutron [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Updating instance_info_cache with network_info: [{"id": "218fbd96-4977-4565-bc40-c0dd1a19d059", "address": "fa:16:3e:95:29:99", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap218fbd96-49", "ovs_interfaceid": "218fbd96-4977-4565-bc40-c0dd1a19d059", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.775 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Releasing lock "refresh_cache-4a2083e9-7387-4fa8-a7d5-f56fabb0d263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.776 248514 DEBUG nova.compute.manager [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Instance network_info: |[{"id": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "address": "fa:16:3e:e1:c4:08", "network": {"id": "87e36b0f-d803-4f8a-a615-bb4ace6f6438", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2124742053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b248a356f82d4070b9d1ff0500ca574c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223b45c0-41", "ovs_interfaceid": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.777 248514 DEBUG oslo_concurrency.lockutils [req-d41fb7c3-a7a7-47e8-ba62-6cce4e375d7a req-1f417ff0-91a5-4366-a2bc-e5e52e6ca13f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4a2083e9-7387-4fa8-a7d5-f56fabb0d263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.777 248514 DEBUG nova.network.neutron [req-d41fb7c3-a7a7-47e8-ba62-6cce4e375d7a req-1f417ff0-91a5-4366-a2bc-e5e52e6ca13f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Refreshing network info cache for port 223b45c0-417b-4d31-85cf-1b87f4f68c12 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.779 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Start _get_guest_xml network_info=[{"id": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "address": "fa:16:3e:e1:c4:08", "network": {"id": "87e36b0f-d803-4f8a-a615-bb4ace6f6438", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2124742053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b248a356f82d4070b9d1ff0500ca574c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223b45c0-41", "ovs_interfaceid": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.784 248514 WARNING nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.786 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Releasing lock "refresh_cache-5ad89163-9711-4d02-94be-db41412cd173" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.786 248514 DEBUG nova.compute.manager [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Instance network_info: |[{"id": "218fbd96-4977-4565-bc40-c0dd1a19d059", "address": "fa:16:3e:95:29:99", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap218fbd96-49", "ovs_interfaceid": "218fbd96-4977-4565-bc40-c0dd1a19d059", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.787 248514 DEBUG oslo_concurrency.lockutils [req-f60625ba-f834-428b-a385-9df20c309a5d req-90a8fde1-4ae3-4679-b8e1-b49efd1618d3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5ad89163-9711-4d02-94be-db41412cd173" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.787 248514 DEBUG nova.network.neutron [req-f60625ba-f834-428b-a385-9df20c309a5d req-90a8fde1-4ae3-4679-b8e1-b49efd1618d3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Refreshing network info cache for port 218fbd96-4977-4565-bc40-c0dd1a19d059 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.789 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Start _get_guest_xml network_info=[{"id": "218fbd96-4977-4565-bc40-c0dd1a19d059", "address": "fa:16:3e:95:29:99", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap218fbd96-49", "ovs_interfaceid": "218fbd96-4977-4565-bc40-c0dd1a19d059", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.799 248514 DEBUG nova.virt.libvirt.host [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.800 248514 DEBUG nova.virt.libvirt.host [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.804 248514 WARNING nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.807 248514 DEBUG nova.virt.libvirt.host [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.808 248514 DEBUG nova.virt.libvirt.host [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.808 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.808 248514 DEBUG nova.virt.hardware [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.809 248514 DEBUG nova.virt.hardware [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.809 248514 DEBUG nova.virt.hardware [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.810 248514 DEBUG nova.virt.hardware [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.810 248514 DEBUG nova.virt.hardware [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.810 248514 DEBUG nova.virt.hardware [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.811 248514 DEBUG nova.virt.hardware [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.811 248514 DEBUG nova.virt.hardware [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.812 248514 DEBUG nova.virt.hardware [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.812 248514 DEBUG nova.virt.hardware [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.812 248514 DEBUG nova.virt.hardware [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.816 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.859 248514 DEBUG nova.virt.libvirt.host [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.860 248514 DEBUG nova.virt.libvirt.host [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.865 248514 DEBUG nova.virt.libvirt.host [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.866 248514 DEBUG nova.virt.libvirt.host [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.866 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.866 248514 DEBUG nova.virt.hardware [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.867 248514 DEBUG nova.virt.hardware [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.867 248514 DEBUG nova.virt.hardware [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.868 248514 DEBUG nova.virt.hardware [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.868 248514 DEBUG nova.virt.hardware [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.868 248514 DEBUG nova.virt.hardware [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.868 248514 DEBUG nova.virt.hardware [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.869 248514 DEBUG nova.virt.hardware [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.869 248514 DEBUG nova.virt.hardware [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.869 248514 DEBUG nova.virt.hardware [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.869 248514 DEBUG nova.virt.hardware [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:26:26 compute-0 nova_compute[248510]: 2025-12-13 08:26:26.873 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:26 compute-0 ceph-mon[76537]: pgmap v1845: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 292 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 15 MiB/s wr, 253 op/s
Dec 13 08:26:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 292 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 13 MiB/s wr, 219 op/s
Dec 13 08:26:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:26:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012813358' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:27 compute-0 nova_compute[248510]: 2025-12-13 08:26:27.412 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:26:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2409895113' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:27 compute-0 nova_compute[248510]: 2025-12-13 08:26:27.441 248514 DEBUG nova.storage.rbd_utils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] rbd image 4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:27 compute-0 nova_compute[248510]: 2025-12-13 08:26:27.446 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:27 compute-0 nova_compute[248510]: 2025-12-13 08:26:27.487 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:27 compute-0 nova_compute[248510]: 2025-12-13 08:26:27.511 248514 DEBUG nova.storage.rbd_utils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 5ad89163-9711-4d02-94be-db41412cd173_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:27 compute-0 nova_compute[248510]: 2025-12-13 08:26:27.515 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Dec 13 08:26:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:26:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/301493284' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:26:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156703116' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Dec 13 08:26:28 compute-0 ceph-mon[76537]: pgmap v1846: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 292 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 13 MiB/s wr, 219 op/s
Dec 13 08:26:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4012813358' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2409895113' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:28 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.065 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.069 248514 DEBUG nova.virt.libvirt.vif [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1690898624',display_name='tempest-DeleteServersTestJSON-server-1690898624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1690898624',id=43,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-uvfukfl4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:26:21Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=5ad89163-9711-4d02-94be-db41412cd173,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "218fbd96-4977-4565-bc40-c0dd1a19d059", "address": "fa:16:3e:95:29:99", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap218fbd96-49", "ovs_interfaceid": "218fbd96-4977-4565-bc40-c0dd1a19d059", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.070 248514 DEBUG nova.network.os_vif_util [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "218fbd96-4977-4565-bc40-c0dd1a19d059", "address": "fa:16:3e:95:29:99", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap218fbd96-49", "ovs_interfaceid": "218fbd96-4977-4565-bc40-c0dd1a19d059", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.071 248514 DEBUG nova.network.os_vif_util [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:29:99,bridge_name='br-int',has_traffic_filtering=True,id=218fbd96-4977-4565-bc40-c0dd1a19d059,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap218fbd96-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.073 248514 DEBUG nova.objects.instance [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ad89163-9711-4d02-94be-db41412cd173 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.080 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.082 248514 DEBUG nova.virt.libvirt.vif [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:26:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-13540936',display_name='tempest-InstanceActionsNegativeTestJSON-server-13540936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-13540936',id=42,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b248a356f82d4070b9d1ff0500ca574c',ramdisk_id='',reservation_id='r-hiyy0a7l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-720592495',owner_user_name='tempest-InstanceActionsNegativeTestJSON-720592495-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:26:20Z,user_data=None,user_id='edadf89429754e2b9dc21f5ba73894dc',uuid=4a2083e9-7387-4fa8-a7d5-f56fabb0d263,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "address": "fa:16:3e:e1:c4:08", "network": {"id": "87e36b0f-d803-4f8a-a615-bb4ace6f6438", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2124742053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b248a356f82d4070b9d1ff0500ca574c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223b45c0-41", "ovs_interfaceid": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.082 248514 DEBUG nova.network.os_vif_util [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Converting VIF {"id": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "address": "fa:16:3e:e1:c4:08", "network": {"id": "87e36b0f-d803-4f8a-a615-bb4ace6f6438", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2124742053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b248a356f82d4070b9d1ff0500ca574c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223b45c0-41", "ovs_interfaceid": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.083 248514 DEBUG nova.network.os_vif_util [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c4:08,bridge_name='br-int',has_traffic_filtering=True,id=223b45c0-417b-4d31-85cf-1b87f4f68c12,network=Network(87e36b0f-d803-4f8a-a615-bb4ace6f6438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223b45c0-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.084 248514 DEBUG nova.objects.instance [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a2083e9-7387-4fa8-a7d5-f56fabb0d263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.217 248514 DEBUG nova.network.neutron [req-d41fb7c3-a7a7-47e8-ba62-6cce4e375d7a req-1f417ff0-91a5-4366-a2bc-e5e52e6ca13f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Updated VIF entry in instance network info cache for port 223b45c0-417b-4d31-85cf-1b87f4f68c12. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.218 248514 DEBUG nova.network.neutron [req-d41fb7c3-a7a7-47e8-ba62-6cce4e375d7a req-1f417ff0-91a5-4366-a2bc-e5e52e6ca13f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Updating instance_info_cache with network_info: [{"id": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "address": "fa:16:3e:e1:c4:08", "network": {"id": "87e36b0f-d803-4f8a-a615-bb4ace6f6438", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2124742053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b248a356f82d4070b9d1ff0500ca574c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223b45c0-41", "ovs_interfaceid": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.252 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <uuid>5ad89163-9711-4d02-94be-db41412cd173</uuid>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <name>instance-0000002b</name>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:name>tempest-DeleteServersTestJSON-server-1690898624</nova:name>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:26:26</nova:creationTime>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:user uuid="b6f7967ce16c45d394188c1302b02907">tempest-DeleteServersTestJSON-991966373-project-member</nova:user>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:project uuid="6f6beadbb0244529b8dfc1abff8e8e10">tempest-DeleteServersTestJSON-991966373</nova:project>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:port uuid="218fbd96-4977-4565-bc40-c0dd1a19d059">
Dec 13 08:26:28 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <system>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <entry name="serial">5ad89163-9711-4d02-94be-db41412cd173</entry>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <entry name="uuid">5ad89163-9711-4d02-94be-db41412cd173</entry>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </system>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <os>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </os>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <features>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </features>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5ad89163-9711-4d02-94be-db41412cd173_disk">
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </source>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5ad89163-9711-4d02-94be-db41412cd173_disk.config">
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </source>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:95:29:99"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <target dev="tap218fbd96-49"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/5ad89163-9711-4d02-94be-db41412cd173/console.log" append="off"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <video>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </video>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:26:28 compute-0 nova_compute[248510]: </domain>
Dec 13 08:26:28 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.253 248514 DEBUG nova.compute.manager [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Preparing to wait for external event network-vif-plugged-218fbd96-4977-4565-bc40-c0dd1a19d059 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.254 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "5ad89163-9711-4d02-94be-db41412cd173-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.255 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.256 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.257 248514 DEBUG nova.virt.libvirt.vif [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1690898624',display_name='tempest-DeleteServersTestJSON-server-1690898624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1690898624',id=43,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-uvfukfl4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServers
TestJSON-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:26:21Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=5ad89163-9711-4d02-94be-db41412cd173,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "218fbd96-4977-4565-bc40-c0dd1a19d059", "address": "fa:16:3e:95:29:99", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap218fbd96-49", "ovs_interfaceid": "218fbd96-4977-4565-bc40-c0dd1a19d059", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.257 248514 DEBUG nova.network.os_vif_util [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "218fbd96-4977-4565-bc40-c0dd1a19d059", "address": "fa:16:3e:95:29:99", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap218fbd96-49", "ovs_interfaceid": "218fbd96-4977-4565-bc40-c0dd1a19d059", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.258 248514 DEBUG nova.network.os_vif_util [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:29:99,bridge_name='br-int',has_traffic_filtering=True,id=218fbd96-4977-4565-bc40-c0dd1a19d059,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap218fbd96-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.259 248514 DEBUG os_vif [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:29:99,bridge_name='br-int',has_traffic_filtering=True,id=218fbd96-4977-4565-bc40-c0dd1a19d059,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap218fbd96-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.259 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.260 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.260 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.265 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.266 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap218fbd96-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.267 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap218fbd96-49, col_values=(('external_ids', {'iface-id': '218fbd96-4977-4565-bc40-c0dd1a19d059', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:29:99', 'vm-uuid': '5ad89163-9711-4d02-94be-db41412cd173'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.270 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:28 compute-0 NetworkManager[50376]: <info>  [1765614388.2709] manager: (tap218fbd96-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.272 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.279 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.280 248514 INFO os_vif [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:29:99,bridge_name='br-int',has_traffic_filtering=True,id=218fbd96-4977-4565-bc40-c0dd1a19d059,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap218fbd96-49')
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.649 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614373.648371, ad40734f-2fce-406e-ba77-dd8eeb4743bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.650 248514 INFO nova.compute.manager [-] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] VM Stopped (Lifecycle Event)
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.679 248514 DEBUG nova.network.neutron [req-f60625ba-f834-428b-a385-9df20c309a5d req-90a8fde1-4ae3-4679-b8e1-b49efd1618d3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Updated VIF entry in instance network info cache for port 218fbd96-4977-4565-bc40-c0dd1a19d059. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.680 248514 DEBUG nova.network.neutron [req-f60625ba-f834-428b-a385-9df20c309a5d req-90a8fde1-4ae3-4679-b8e1-b49efd1618d3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Updating instance_info_cache with network_info: [{"id": "218fbd96-4977-4565-bc40-c0dd1a19d059", "address": "fa:16:3e:95:29:99", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap218fbd96-49", "ovs_interfaceid": "218fbd96-4977-4565-bc40-c0dd1a19d059", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.755 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <uuid>4a2083e9-7387-4fa8-a7d5-f56fabb0d263</uuid>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <name>instance-0000002a</name>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-13540936</nova:name>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:26:26</nova:creationTime>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:user uuid="edadf89429754e2b9dc21f5ba73894dc">tempest-InstanceActionsNegativeTestJSON-720592495-project-member</nova:user>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:project uuid="b248a356f82d4070b9d1ff0500ca574c">tempest-InstanceActionsNegativeTestJSON-720592495</nova:project>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <nova:port uuid="223b45c0-417b-4d31-85cf-1b87f4f68c12">
Dec 13 08:26:28 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <system>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <entry name="serial">4a2083e9-7387-4fa8-a7d5-f56fabb0d263</entry>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <entry name="uuid">4a2083e9-7387-4fa8-a7d5-f56fabb0d263</entry>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </system>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <os>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </os>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <features>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </features>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk">
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </source>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk.config">
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </source>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:26:28 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:e1:c4:08"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <target dev="tap223b45c0-41"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/4a2083e9-7387-4fa8-a7d5-f56fabb0d263/console.log" append="off"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <video>
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </video>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:26:28 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:26:28 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:26:28 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:26:28 compute-0 nova_compute[248510]: </domain>
Dec 13 08:26:28 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.755 248514 DEBUG nova.compute.manager [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Preparing to wait for external event network-vif-plugged-223b45c0-417b-4d31-85cf-1b87f4f68c12 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.755 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Acquiring lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.756 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.756 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.756 248514 DEBUG nova.virt.libvirt.vif [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:26:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-13540936',display_name='tempest-InstanceActionsNegativeTestJSON-server-13540936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-13540936',id=42,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b248a356f82d4070b9d1ff0500ca574c',ramdisk_id='',reservation_id='r-hiyy0a7l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-720592495',owner_user_name='tempest-InstanceActionsNegativeTestJSON-720592495-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:26:20Z,user_data=None,user_id='edadf89429754e2b9dc21f5ba73894dc',uuid=4a2083e9-7387-4fa8-a7d5-f56fabb0d263,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "address": "fa:16:3e:e1:c4:08", "network": {"id": "87e36b0f-d803-4f8a-a615-bb4ace6f6438", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2124742053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b248a356f82d4070b9d1ff0500ca574c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223b45c0-41", "ovs_interfaceid": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.757 248514 DEBUG nova.network.os_vif_util [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Converting VIF {"id": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "address": "fa:16:3e:e1:c4:08", "network": {"id": "87e36b0f-d803-4f8a-a615-bb4ace6f6438", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2124742053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b248a356f82d4070b9d1ff0500ca574c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223b45c0-41", "ovs_interfaceid": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.757 248514 DEBUG nova.network.os_vif_util [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c4:08,bridge_name='br-int',has_traffic_filtering=True,id=223b45c0-417b-4d31-85cf-1b87f4f68c12,network=Network(87e36b0f-d803-4f8a-a615-bb4ace6f6438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223b45c0-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.758 248514 DEBUG os_vif [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c4:08,bridge_name='br-int',has_traffic_filtering=True,id=223b45c0-417b-4d31-85cf-1b87f4f68c12,network=Network(87e36b0f-d803-4f8a-a615-bb4ace6f6438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223b45c0-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.760 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.760 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.763 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.763 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap223b45c0-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.763 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap223b45c0-41, col_values=(('external_ids', {'iface-id': '223b45c0-417b-4d31-85cf-1b87f4f68c12', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:c4:08', 'vm-uuid': '4a2083e9-7387-4fa8-a7d5-f56fabb0d263'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.765 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:28 compute-0 NetworkManager[50376]: <info>  [1765614388.7670] manager: (tap223b45c0-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.767 248514 DEBUG oslo_concurrency.lockutils [req-d41fb7c3-a7a7-47e8-ba62-6cce4e375d7a req-1f417ff0-91a5-4366-a2bc-e5e52e6ca13f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4a2083e9-7387-4fa8-a7d5-f56fabb0d263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.768 248514 DEBUG oslo_concurrency.lockutils [req-f60625ba-f834-428b-a385-9df20c309a5d req-90a8fde1-4ae3-4679-b8e1-b49efd1618d3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5ad89163-9711-4d02-94be-db41412cd173" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.769 248514 DEBUG nova.compute.manager [None req-ef932fb2-55eb-488b-b5e5-77bc1926d21e - - - - - -] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.772 248514 DEBUG nova.compute.manager [None req-ef932fb2-55eb-488b-b5e5-77bc1926d21e - - - - - -] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.778 248514 INFO os_vif [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c4:08,bridge_name='br-int',has_traffic_filtering=True,id=223b45c0-417b-4d31-85cf-1b87f4f68c12,network=Network(87e36b0f-d803-4f8a-a615-bb4ace6f6438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223b45c0-41')
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.924 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.925 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.925 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No VIF found with MAC fa:16:3e:95:29:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.927 248514 INFO nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Using config drive
Dec 13 08:26:28 compute-0 nova_compute[248510]: 2025-12-13 08:26:28.957 248514 DEBUG nova.storage.rbd_utils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 5ad89163-9711-4d02-94be-db41412cd173_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.023 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.024 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.025 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] No VIF found with MAC fa:16:3e:e1:c4:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.025 248514 INFO nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Using config drive
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.050 248514 DEBUG nova.storage.rbd_utils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] rbd image 4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:29 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/301493284' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:29 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2156703116' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:29 compute-0 ceph-mon[76537]: osdmap e198: 3 total, 3 up, 3 in
Dec 13 08:26:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 256 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 13 MiB/s wr, 293 op/s
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.456 248514 DEBUG oslo_concurrency.lockutils [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.456 248514 DEBUG oslo_concurrency.lockutils [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.457 248514 DEBUG oslo_concurrency.lockutils [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.457 248514 DEBUG oslo_concurrency.lockutils [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.458 248514 DEBUG oslo_concurrency.lockutils [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.459 248514 INFO nova.compute.manager [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Terminating instance
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.460 248514 DEBUG nova.compute.manager [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.466 248514 INFO nova.virt.libvirt.driver [-] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Instance destroyed successfully.
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.466 248514 DEBUG nova.objects.instance [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'resources' on Instance uuid ad40734f-2fce-406e-ba77-dd8eeb4743bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.800 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.920 248514 DEBUG nova.virt.libvirt.vif [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-797160490',display_name='tempest-ImagesTestJSON-server-797160490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-797160490',id=40,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:25:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-7bbfmcda',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:26:26Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=ad40734f-2fce-406e-ba77-dd8eeb4743bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "address": "fa:16:3e:b0:4d:11", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b45c191-45", "ovs_interfaceid": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.921 248514 DEBUG nova.network.os_vif_util [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "address": "fa:16:3e:b0:4d:11", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b45c191-45", "ovs_interfaceid": "5b45c191-4599-4de6-9ca8-2a6c38ccf703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.922 248514 DEBUG nova.network.os_vif_util [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:4d:11,bridge_name='br-int',has_traffic_filtering=True,id=5b45c191-4599-4de6-9ca8-2a6c38ccf703,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b45c191-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.922 248514 DEBUG os_vif [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:4d:11,bridge_name='br-int',has_traffic_filtering=True,id=5b45c191-4599-4de6-9ca8-2a6c38ccf703,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b45c191-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.924 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b45c191-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.926 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.932 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.935 248514 INFO os_vif [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:4d:11,bridge_name='br-int',has_traffic_filtering=True,id=5b45c191-4599-4de6-9ca8-2a6c38ccf703,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b45c191-45')
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.957 248514 INFO nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Creating config drive at /var/lib/nova/instances/4a2083e9-7387-4fa8-a7d5-f56fabb0d263/disk.config
Dec 13 08:26:29 compute-0 nova_compute[248510]: 2025-12-13 08:26:29.963 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a2083e9-7387-4fa8-a7d5-f56fabb0d263/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9nd68oc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.001 248514 INFO nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Creating config drive at /var/lib/nova/instances/5ad89163-9711-4d02-94be-db41412cd173/disk.config
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.006 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ad89163-9711-4d02-94be-db41412cd173/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe1qyrqb6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.105 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a2083e9-7387-4fa8-a7d5-f56fabb0d263/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9nd68oc" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:30 compute-0 ceph-mon[76537]: pgmap v1848: 321 pgs: 321 active+clean; 256 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 13 MiB/s wr, 293 op/s
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.141 248514 DEBUG nova.storage.rbd_utils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] rbd image 4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.145 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a2083e9-7387-4fa8-a7d5-f56fabb0d263/disk.config 4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.193 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ad89163-9711-4d02-94be-db41412cd173/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe1qyrqb6" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.220 248514 DEBUG nova.storage.rbd_utils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 5ad89163-9711-4d02-94be-db41412cd173_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.228 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ad89163-9711-4d02-94be-db41412cd173/disk.config 5ad89163-9711-4d02-94be-db41412cd173_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.317 248514 DEBUG oslo_concurrency.processutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a2083e9-7387-4fa8-a7d5-f56fabb0d263/disk.config 4a2083e9-7387-4fa8-a7d5-f56fabb0d263_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.318 248514 INFO nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Deleting local config drive /var/lib/nova/instances/4a2083e9-7387-4fa8-a7d5-f56fabb0d263/disk.config because it was imported into RBD.
Dec 13 08:26:30 compute-0 NetworkManager[50376]: <info>  [1765614390.3857] manager: (tap223b45c0-41): new Tun device (/org/freedesktop/NetworkManager/Devices/168)
Dec 13 08:26:30 compute-0 kernel: tap223b45c0-41: entered promiscuous mode
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.395 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:30 compute-0 ovn_controller[148476]: 2025-12-13T08:26:30Z|00365|binding|INFO|Claiming lport 223b45c0-417b-4d31-85cf-1b87f4f68c12 for this chassis.
Dec 13 08:26:30 compute-0 ovn_controller[148476]: 2025-12-13T08:26:30Z|00366|binding|INFO|223b45c0-417b-4d31-85cf-1b87f4f68c12: Claiming fa:16:3e:e1:c4:08 10.100.0.12
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.407 248514 DEBUG oslo_concurrency.processutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ad89163-9711-4d02-94be-db41412cd173/disk.config 5ad89163-9711-4d02-94be-db41412cd173_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.408 248514 INFO nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Deleting local config drive /var/lib/nova/instances/5ad89163-9711-4d02-94be-db41412cd173/disk.config because it was imported into RBD.
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.418 248514 INFO nova.virt.libvirt.driver [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Deleting instance files /var/lib/nova/instances/ad40734f-2fce-406e-ba77-dd8eeb4743bc_del
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.419 248514 INFO nova.virt.libvirt.driver [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Deletion of /var/lib/nova/instances/ad40734f-2fce-406e-ba77-dd8eeb4743bc_del complete
Dec 13 08:26:30 compute-0 systemd-udevd[293832]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:26:30 compute-0 systemd-machined[210538]: New machine qemu-47-instance-0000002a.
Dec 13 08:26:30 compute-0 NetworkManager[50376]: <info>  [1765614390.4435] device (tap223b45c0-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:26:30 compute-0 NetworkManager[50376]: <info>  [1765614390.4442] device (tap223b45c0-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.463 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:c4:08 10.100.0.12'], port_security=['fa:16:3e:e1:c4:08 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4a2083e9-7387-4fa8-a7d5-f56fabb0d263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87e36b0f-d803-4f8a-a615-bb4ace6f6438', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b248a356f82d4070b9d1ff0500ca574c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c49ee4f7-c546-4f3e-87e0-d64442c2add1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864f1a11-3481-4951-8fd3-78a864369b75, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=223b45c0-417b-4d31-85cf-1b87f4f68c12) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.465 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 223b45c0-417b-4d31-85cf-1b87f4f68c12 in datapath 87e36b0f-d803-4f8a-a615-bb4ace6f6438 bound to our chassis
Dec 13 08:26:30 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-0000002a.
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.466 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87e36b0f-d803-4f8a-a615-bb4ace6f6438
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:30 compute-0 ovn_controller[148476]: 2025-12-13T08:26:30Z|00367|binding|INFO|Setting lport 223b45c0-417b-4d31-85cf-1b87f4f68c12 ovn-installed in OVS
Dec 13 08:26:30 compute-0 ovn_controller[148476]: 2025-12-13T08:26:30Z|00368|binding|INFO|Setting lport 223b45c0-417b-4d31-85cf-1b87f4f68c12 up in Southbound
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.479 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:30 compute-0 kernel: tap218fbd96-49: entered promiscuous mode
Dec 13 08:26:30 compute-0 NetworkManager[50376]: <info>  [1765614390.4821] manager: (tap218fbd96-49): new Tun device (/org/freedesktop/NetworkManager/Devices/169)
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.480 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[83ca519b-ccac-4198-ad45-c87dc7fb9b82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.481 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87e36b0f-d1 in ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.483 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87e36b0f-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.483 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c28a2e71-f8b1-491a-af85-a2e1c5002e18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.484 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.484 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f42727a8-d33c-478f-90f5-bb7cfe122f98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 ovn_controller[148476]: 2025-12-13T08:26:30Z|00369|if_status|INFO|Dropped 2 log messages in last 257 seconds (most recently, 257 seconds ago) due to excessive rate
Dec 13 08:26:30 compute-0 ovn_controller[148476]: 2025-12-13T08:26:30Z|00370|if_status|INFO|Not updating pb chassis for 218fbd96-4977-4565-bc40-c0dd1a19d059 now as sb is readonly
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.492 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:30 compute-0 NetworkManager[50376]: <info>  [1765614390.4950] device (tap218fbd96-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:26:30 compute-0 NetworkManager[50376]: <info>  [1765614390.4964] device (tap218fbd96-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.499 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7d79fc96-c1cf-4416-8d09-15750b70e338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.517 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f1bb7c35-b577-485b-aaeb-b89782a84e23]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 systemd-machined[210538]: New machine qemu-48-instance-0000002b.
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.545 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f713fb26-48c3-44d2-a286-e9a7d3099004]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.550 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db37fe33-c012-439a-a1b5-20ae6df4cf4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 NetworkManager[50376]: <info>  [1765614390.5516] manager: (tap87e36b0f-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/170)
Dec 13 08:26:30 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-0000002b.
Dec 13 08:26:30 compute-0 ovn_controller[148476]: 2025-12-13T08:26:30Z|00371|binding|INFO|Claiming lport 218fbd96-4977-4565-bc40-c0dd1a19d059 for this chassis.
Dec 13 08:26:30 compute-0 ovn_controller[148476]: 2025-12-13T08:26:30Z|00372|binding|INFO|218fbd96-4977-4565-bc40-c0dd1a19d059: Claiming fa:16:3e:95:29:99 10.100.0.14
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.583 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:29:99 10.100.0.14'], port_security=['fa:16:3e:95:29:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5ad89163-9711-4d02-94be-db41412cd173', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=218fbd96-4977-4565-bc40-c0dd1a19d059) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.585 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f156ff82-2e36-429b-a906-b738e2c02c56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.587 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b3079f4b-f619-4f7c-aa5c-92f3cd945632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 NetworkManager[50376]: <info>  [1765614390.6058] device (tap87e36b0f-d0): carrier: link connected
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.609 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6e3e19-f7c2-4680-9736-8d4ba589f979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.626 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6a228e67-e3bc-4663-ba8c-b7171374c4b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87e36b0f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:72:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696782, 'reachable_time': 26892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293881, 'error': None, 'target': 'ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 ovn_controller[148476]: 2025-12-13T08:26:30Z|00373|binding|INFO|Setting lport 218fbd96-4977-4565-bc40-c0dd1a19d059 ovn-installed in OVS
Dec 13 08:26:30 compute-0 ovn_controller[148476]: 2025-12-13T08:26:30Z|00374|binding|INFO|Setting lport 218fbd96-4977-4565-bc40-c0dd1a19d059 up in Southbound
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0198c019-7443-4f79-970d-e18b3b3c1d37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:7264'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696782, 'tstamp': 696782}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293885, 'error': None, 'target': 'ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.656 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[05f253ea-4ec2-4b68-afb3-94b0e7949506]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87e36b0f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:72:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696782, 'reachable_time': 26892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293888, 'error': None, 'target': 'ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.691 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[03442143-d439-4e6f-902d-e6e11766d36f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.750 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec1d433-a20a-4b0b-8d91-fd4a43222ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.752 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87e36b0f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.752 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.753 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87e36b0f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:30 compute-0 NetworkManager[50376]: <info>  [1765614390.7551] manager: (tap87e36b0f-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Dec 13 08:26:30 compute-0 kernel: tap87e36b0f-d0: entered promiscuous mode
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.757 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87e36b0f-d0, col_values=(('external_ids', {'iface-id': '5de664be-471d-4e0d-b1a2-191c5a9a3d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:30 compute-0 ovn_controller[148476]: 2025-12-13T08:26:30Z|00375|binding|INFO|Releasing lport 5de664be-471d-4e0d-b1a2-191c5a9a3d7e from this chassis (sb_readonly=1)
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.759 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.760 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87e36b0f-d803-4f8a-a615-bb4ace6f6438.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87e36b0f-d803-4f8a-a615-bb4ace6f6438.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.760 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4b8611-7c5f-4eb9-bf97-6548c45cc0f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.761 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-87e36b0f-d803-4f8a-a615-bb4ace6f6438
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/87e36b0f-d803-4f8a-a615-bb4ace6f6438.pid.haproxy
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 87e36b0f-d803-4f8a-a615-bb4ace6f6438
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:26:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:30.762 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438', 'env', 'PROCESS_TAG=haproxy-87e36b0f-d803-4f8a-a615-bb4ace6f6438', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87e36b0f-d803-4f8a-a615-bb4ace6f6438.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.773 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.818 248514 INFO nova.compute.manager [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Took 1.36 seconds to destroy the instance on the hypervisor.
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.819 248514 DEBUG oslo.service.loopingcall [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.819 248514 DEBUG nova.compute.manager [-] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:26:30 compute-0 nova_compute[248510]: 2025-12-13 08:26:30.819 248514 DEBUG nova.network.neutron [-] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.137 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614391.137169, 5ad89163-9711-4d02-94be-db41412cd173 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.138 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] VM Started (Lifecycle Event)
Dec 13 08:26:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 256 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.4 MiB/s wr, 188 op/s
Dec 13 08:26:31 compute-0 podman[293962]: 2025-12-13 08:26:31.168871415 +0000 UTC m=+0.055268335 container create 00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:26:31 compute-0 systemd[1]: Started libpod-conmon-00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15.scope.
Dec 13 08:26:31 compute-0 podman[293962]: 2025-12-13 08:26:31.142443053 +0000 UTC m=+0.028839993 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:26:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929f88439cbc002a64576b634ec7bff8c5c8d87e57ee4c4855038fb8be980ab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:31 compute-0 podman[293962]: 2025-12-13 08:26:31.270700827 +0000 UTC m=+0.157097747 container init 00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:26:31 compute-0 podman[293962]: 2025-12-13 08:26:31.277538416 +0000 UTC m=+0.163935326 container start 00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 08:26:31 compute-0 neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438[294001]: [NOTICE]   (294022) : New worker (294025) forked
Dec 13 08:26:31 compute-0 neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438[294001]: [NOTICE]   (294022) : Loading success.
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.345 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 218fbd96-4977-4565-bc40-c0dd1a19d059 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.346 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.358 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7e1c0c41-d6cd-4b42-a607-5f437259f82e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.359 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85372fca-a1 in ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.361 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85372fca-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.361 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7218e7-40f5-475a-9e20-23f2d9e4a5e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.362 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[125d5a88-946a-4d4d-b14d-de6974cc04ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.363 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.367 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614391.1382093, 5ad89163-9711-4d02-94be-db41412cd173 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.368 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] VM Paused (Lifecycle Event)
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.376 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[a8404f47-faf6-476f-9b3f-774a34355515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.393 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1b79905a-1e8e-4ce3-b3e9-3f48fd5e79a0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.422 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.426 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.427 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed11720-ca68-4732-859c-193cf248b9e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.433 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c8787942-1ed5-4254-b650-668e2b351567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 NetworkManager[50376]: <info>  [1765614391.4347] manager: (tap85372fca-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/172)
Dec 13 08:26:31 compute-0 systemd-udevd[293877]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.473 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc155a8-f34e-4544-8d02-510697268047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.477 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[15bd1681-af57-40ec-afb9-7b53dd0b4f81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 NetworkManager[50376]: <info>  [1765614391.5017] device (tap85372fca-a0): carrier: link connected
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.508 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec187c1-9c93-4741-8692-9b69b18508f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.531 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f5e88bc1-57fc-4381-be19-4adec21fade8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696871, 'reachable_time': 42813, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294044, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.547 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46e6c9ca-fe59-40e2-a256-c6bb0ae63beb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:30d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696871, 'tstamp': 696871}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294045, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.565 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9061226d-73ea-4bbd-abc9-12a3f2228154]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696871, 'reachable_time': 42813, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294046, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.610 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3e85cb46-6f10-4801-8b50-8386a157f71a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.677 248514 DEBUG nova.compute.manager [req-288d4ddf-fe0e-480e-8a86-4a570cec72d5 req-f19f2dd5-c9fb-410b-a7a1-21b93d613735 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Received event network-vif-plugged-223b45c0-417b-4d31-85cf-1b87f4f68c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.678 248514 DEBUG oslo_concurrency.lockutils [req-288d4ddf-fe0e-480e-8a86-4a570cec72d5 req-f19f2dd5-c9fb-410b-a7a1-21b93d613735 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.678 248514 DEBUG oslo_concurrency.lockutils [req-288d4ddf-fe0e-480e-8a86-4a570cec72d5 req-f19f2dd5-c9fb-410b-a7a1-21b93d613735 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.679 248514 DEBUG oslo_concurrency.lockutils [req-288d4ddf-fe0e-480e-8a86-4a570cec72d5 req-f19f2dd5-c9fb-410b-a7a1-21b93d613735 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.679 248514 DEBUG nova.compute.manager [req-288d4ddf-fe0e-480e-8a86-4a570cec72d5 req-f19f2dd5-c9fb-410b-a7a1-21b93d613735 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Processing event network-vif-plugged-223b45c0-417b-4d31-85cf-1b87f4f68c12 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.680 248514 DEBUG nova.compute.manager [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.685 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.690 248514 INFO nova.virt.libvirt.driver [-] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Instance spawned successfully.
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.691 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.694 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[25b43d82-514e-4bab-9b03-3839a1088a65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.696 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.697 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.698 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85372fca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.700 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:31 compute-0 NetworkManager[50376]: <info>  [1765614391.7013] manager: (tap85372fca-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Dec 13 08:26:31 compute-0 kernel: tap85372fca-a0: entered promiscuous mode
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.705 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85372fca-a0, col_values=(('external_ids', {'iface-id': '2c0f4981-0ad0-478e-b1ad-551d231022ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:31 compute-0 ovn_controller[148476]: 2025-12-13T08:26:31Z|00376|binding|INFO|Releasing lport 2c0f4981-0ad0-478e-b1ad-551d231022ad from this chassis (sb_readonly=0)
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.708 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.709 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.709 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614391.3373568, 4a2083e9-7387-4fa8-a7d5-f56fabb0d263 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.709 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] VM Started (Lifecycle Event)
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.710 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a3bd9fb1-ec67-4da7-8470-04bec77ac850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.711 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:26:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:31.711 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'env', 'PROCESS_TAG=haproxy-85372fca-ab50-48b6-8c21-507f630c205a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85372fca-ab50-48b6-8c21-507f630c205a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.721 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.765 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.770 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.770 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.770 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.771 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.772 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.772 248514 DEBUG nova.virt.libvirt.driver [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:31 compute-0 nova_compute[248510]: 2025-12-13 08:26:31.776 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:26:32 compute-0 podman[294078]: 2025-12-13 08:26:32.112947409 +0000 UTC m=+0.070010579 container create 166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 08:26:32 compute-0 systemd[1]: Started libpod-conmon-166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31.scope.
Dec 13 08:26:32 compute-0 podman[294078]: 2025-12-13 08:26:32.076840808 +0000 UTC m=+0.033903988 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:26:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db6410990b1c787616716b1257736de1bcab5283e8453831f423bfe4a69d731/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:32 compute-0 podman[294078]: 2025-12-13 08:26:32.2013556 +0000 UTC m=+0.158418760 container init 166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 08:26:32 compute-0 podman[294078]: 2025-12-13 08:26:32.207349298 +0000 UTC m=+0.164412438 container start 166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 08:26:32 compute-0 ceph-mon[76537]: pgmap v1849: 321 pgs: 321 active+clean; 256 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.4 MiB/s wr, 188 op/s
Dec 13 08:26:32 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[294093]: [NOTICE]   (294097) : New worker (294099) forked
Dec 13 08:26:32 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[294093]: [NOTICE]   (294097) : Loading success.
Dec 13 08:26:32 compute-0 nova_compute[248510]: 2025-12-13 08:26:32.433 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:26:32 compute-0 nova_compute[248510]: 2025-12-13 08:26:32.434 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614391.3377001, 4a2083e9-7387-4fa8-a7d5-f56fabb0d263 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:32 compute-0 nova_compute[248510]: 2025-12-13 08:26:32.434 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] VM Paused (Lifecycle Event)
Dec 13 08:26:32 compute-0 nova_compute[248510]: 2025-12-13 08:26:32.510 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:32 compute-0 nova_compute[248510]: 2025-12-13 08:26:32.513 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614391.6839373, 4a2083e9-7387-4fa8-a7d5-f56fabb0d263 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:32 compute-0 nova_compute[248510]: 2025-12-13 08:26:32.513 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] VM Resumed (Lifecycle Event)
Dec 13 08:26:32 compute-0 nova_compute[248510]: 2025-12-13 08:26:32.748 248514 INFO nova.compute.manager [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Took 12.06 seconds to spawn the instance on the hypervisor.
Dec 13 08:26:32 compute-0 nova_compute[248510]: 2025-12-13 08:26:32.749 248514 DEBUG nova.compute.manager [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 185 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.2 KiB/s wr, 124 op/s
Dec 13 08:26:33 compute-0 nova_compute[248510]: 2025-12-13 08:26:33.478 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:33 compute-0 nova_compute[248510]: 2025-12-13 08:26:33.485 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:26:33 compute-0 nova_compute[248510]: 2025-12-13 08:26:33.602 248514 INFO nova.compute.manager [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Took 15.26 seconds to build instance.
Dec 13 08:26:33 compute-0 nova_compute[248510]: 2025-12-13 08:26:33.982 248514 DEBUG oslo_concurrency.lockutils [None req-23843c8f-fb36-48cf-8fc9-3ed2ce82d82d edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:34 compute-0 ceph-mon[76537]: pgmap v1850: 321 pgs: 321 active+clean; 185 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.2 KiB/s wr, 124 op/s
Dec 13 08:26:34 compute-0 nova_compute[248510]: 2025-12-13 08:26:34.265 248514 DEBUG nova.network.neutron [-] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:34 compute-0 nova_compute[248510]: 2025-12-13 08:26:34.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:34 compute-0 nova_compute[248510]: 2025-12-13 08:26:34.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:26:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Dec 13 08:26:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Dec 13 08:26:35 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Dec 13 08:26:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 134 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 42 KiB/s wr, 221 op/s
Dec 13 08:26:35 compute-0 nova_compute[248510]: 2025-12-13 08:26:35.657 248514 INFO nova.compute.manager [-] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Took 4.84 seconds to deallocate network for instance.
Dec 13 08:26:35 compute-0 nova_compute[248510]: 2025-12-13 08:26:35.859 248514 DEBUG oslo_concurrency.lockutils [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:35 compute-0 nova_compute[248510]: 2025-12-13 08:26:35.860 248514 DEBUG oslo_concurrency.lockutils [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:35 compute-0 nova_compute[248510]: 2025-12-13 08:26:35.981 248514 DEBUG oslo_concurrency.processutils [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:36 compute-0 ceph-mon[76537]: osdmap e199: 3 total, 3 up, 3 in
Dec 13 08:26:36 compute-0 ceph-mon[76537]: pgmap v1852: 321 pgs: 321 active+clean; 134 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 42 KiB/s wr, 221 op/s
Dec 13 08:26:36 compute-0 nova_compute[248510]: 2025-12-13 08:26:36.471 248514 DEBUG nova.compute.manager [req-ed737bc2-874e-4fec-980b-46abc090d15a req-08e481cb-d3fa-4a3b-b831-01f538bd3101 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ad40734f-2fce-406e-ba77-dd8eeb4743bc] Received event network-vif-deleted-5b45c191-4599-4de6-9ca8-2a6c38ccf703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2720921584' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:36 compute-0 nova_compute[248510]: 2025-12-13 08:26:36.596 248514 DEBUG oslo_concurrency.processutils [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:36 compute-0 nova_compute[248510]: 2025-12-13 08:26:36.604 248514 DEBUG nova.compute.provider_tree [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:26:36 compute-0 nova_compute[248510]: 2025-12-13 08:26:36.700 248514 DEBUG nova.scheduler.client.report [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:26:36 compute-0 nova_compute[248510]: 2025-12-13 08:26:36.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:26:36 compute-0 nova_compute[248510]: 2025-12-13 08:26:36.777 248514 DEBUG oslo_concurrency.lockutils [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:36 compute-0 nova_compute[248510]: 2025-12-13 08:26:36.836 248514 INFO nova.scheduler.client.report [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Deleted allocations for instance ad40734f-2fce-406e-ba77-dd8eeb4743bc
Dec 13 08:26:37 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2720921584' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 134 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 36 KiB/s wr, 192 op/s
Dec 13 08:26:37 compute-0 nova_compute[248510]: 2025-12-13 08:26:37.683 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "9a28a763-a06b-4a67-ad17-93fae3cc602f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:37 compute-0 nova_compute[248510]: 2025-12-13 08:26:37.683 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:37 compute-0 nova_compute[248510]: 2025-12-13 08:26:37.688 248514 DEBUG oslo_concurrency.lockutils [None req-22951a4b-f07b-41c3-8835-bd051c6be37b 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "ad40734f-2fce-406e-ba77-dd8eeb4743bc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:37 compute-0 nova_compute[248510]: 2025-12-13 08:26:37.952 248514 DEBUG nova.compute.manager [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:26:38 compute-0 ceph-mon[76537]: pgmap v1853: 321 pgs: 321 active+clean; 134 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 36 KiB/s wr, 192 op/s
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.380 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.381 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.388 248514 DEBUG nova.virt.hardware [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.389 248514 INFO nova.compute.claims [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.759 248514 DEBUG nova.compute.manager [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Received event network-vif-plugged-223b45c0-417b-4d31-85cf-1b87f4f68c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.760 248514 DEBUG oslo_concurrency.lockutils [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.760 248514 DEBUG oslo_concurrency.lockutils [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.760 248514 DEBUG oslo_concurrency.lockutils [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.761 248514 DEBUG nova.compute.manager [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] No waiting events found dispatching network-vif-plugged-223b45c0-417b-4d31-85cf-1b87f4f68c12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.761 248514 WARNING nova.compute.manager [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Received unexpected event network-vif-plugged-223b45c0-417b-4d31-85cf-1b87f4f68c12 for instance with vm_state active and task_state None.
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.761 248514 DEBUG nova.compute.manager [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Received event network-vif-plugged-218fbd96-4977-4565-bc40-c0dd1a19d059 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.762 248514 DEBUG oslo_concurrency.lockutils [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5ad89163-9711-4d02-94be-db41412cd173-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.762 248514 DEBUG oslo_concurrency.lockutils [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.762 248514 DEBUG oslo_concurrency.lockutils [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.762 248514 DEBUG nova.compute.manager [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Processing event network-vif-plugged-218fbd96-4977-4565-bc40-c0dd1a19d059 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.763 248514 DEBUG nova.compute.manager [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Received event network-vif-plugged-218fbd96-4977-4565-bc40-c0dd1a19d059 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.763 248514 DEBUG oslo_concurrency.lockutils [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5ad89163-9711-4d02-94be-db41412cd173-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.763 248514 DEBUG oslo_concurrency.lockutils [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.763 248514 DEBUG oslo_concurrency.lockutils [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.764 248514 DEBUG nova.compute.manager [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] No waiting events found dispatching network-vif-plugged-218fbd96-4977-4565-bc40-c0dd1a19d059 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.764 248514 WARNING nova.compute.manager [req-f5a3a2c6-15ee-4e39-8d22-7515571f2585 req-aacf0df5-9290-43b4-bb39-cca1e7930168 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Received unexpected event network-vif-plugged-218fbd96-4977-4565-bc40-c0dd1a19d059 for instance with vm_state building and task_state spawning.
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.765 248514 DEBUG nova.compute.manager [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.768 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614398.7678943, 5ad89163-9711-4d02-94be-db41412cd173 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.768 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] VM Resumed (Lifecycle Event)
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.770 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.773 248514 INFO nova.virt.libvirt.driver [-] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Instance spawned successfully.
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.774 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.876 248514 DEBUG nova.scheduler.client.report [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.898 248514 DEBUG nova.scheduler.client.report [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.899 248514 DEBUG nova.compute.provider_tree [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.905 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.910 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.911 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.912 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.912 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.913 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.913 248514 DEBUG nova.virt.libvirt.driver [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.917 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.921 248514 DEBUG nova.scheduler.client.report [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.950 248514 DEBUG nova.scheduler.client.report [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 08:26:38 compute-0 nova_compute[248510]: 2025-12-13 08:26:38.962 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.025 248514 INFO nova.compute.manager [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Took 17.27 seconds to spawn the instance on the hypervisor.
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.026 248514 DEBUG nova.compute.manager [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.044 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 134 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 31 KiB/s wr, 135 op/s
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.285 248514 INFO nova.compute.manager [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Took 20.47 seconds to build instance.
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.427 248514 DEBUG oslo_concurrency.lockutils [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Acquiring lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.428 248514 DEBUG oslo_concurrency.lockutils [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.428 248514 DEBUG oslo_concurrency.lockutils [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Acquiring lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.429 248514 DEBUG oslo_concurrency.lockutils [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.429 248514 DEBUG oslo_concurrency.lockutils [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.430 248514 INFO nova.compute.manager [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Terminating instance
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.431 248514 DEBUG nova.compute.manager [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:26:39 compute-0 kernel: tap223b45c0-41 (unregistering): left promiscuous mode
Dec 13 08:26:39 compute-0 NetworkManager[50376]: <info>  [1765614399.4753] device (tap223b45c0-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:26:39 compute-0 ovn_controller[148476]: 2025-12-13T08:26:39Z|00377|binding|INFO|Releasing lport 223b45c0-417b-4d31-85cf-1b87f4f68c12 from this chassis (sb_readonly=0)
Dec 13 08:26:39 compute-0 ovn_controller[148476]: 2025-12-13T08:26:39Z|00378|binding|INFO|Setting lport 223b45c0-417b-4d31-85cf-1b87f4f68c12 down in Southbound
Dec 13 08:26:39 compute-0 ovn_controller[148476]: 2025-12-13T08:26:39Z|00379|binding|INFO|Removing iface tap223b45c0-41 ovn-installed in OVS
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.485 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.503 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:39 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Dec 13 08:26:39 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Consumed 8.741s CPU time.
Dec 13 08:26:39 compute-0 systemd-machined[210538]: Machine qemu-47-instance-0000002a terminated.
Dec 13 08:26:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:39 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3277595281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.640 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.648 248514 DEBUG nova.compute.provider_tree [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.670 248514 INFO nova.virt.libvirt.driver [-] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Instance destroyed successfully.
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.671 248514 DEBUG nova.objects.instance [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lazy-loading 'resources' on Instance uuid 4a2083e9-7387-4fa8-a7d5-f56fabb0d263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.806 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.927 248514 DEBUG oslo_concurrency.lockutils [None req-8632c37e-5a4b-461a-98bf-1094a1dcf695 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:39 compute-0 nova_compute[248510]: 2025-12-13 08:26:39.931 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.016 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:c4:08 10.100.0.12'], port_security=['fa:16:3e:e1:c4:08 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4a2083e9-7387-4fa8-a7d5-f56fabb0d263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87e36b0f-d803-4f8a-a615-bb4ace6f6438', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b248a356f82d4070b9d1ff0500ca574c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c49ee4f7-c546-4f3e-87e0-d64442c2add1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=864f1a11-3481-4951-8fd3-78a864369b75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=223b45c0-417b-4d31-85cf-1b87f4f68c12) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.017 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 223b45c0-417b-4d31-85cf-1b87f4f68c12 in datapath 87e36b0f-d803-4f8a-a615-bb4ace6f6438 unbound from our chassis
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.018 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87e36b0f-d803-4f8a-a615-bb4ace6f6438, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.020 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[904f6bdc-7428-48de-992e-5ebffbcb7414]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.021 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438 namespace which is not needed anymore
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.031 248514 DEBUG nova.virt.libvirt.vif [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-13540936',display_name='tempest-InstanceActionsNegativeTestJSON-server-13540936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-13540936',id=42,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:26:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b248a356f82d4070b9d1ff0500ca574c',ramdisk_id='',reservation_id='r-hiyy0a7l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-720592495',owner_user_name='tempest-InstanceActionsNegativeTestJSON-720592495-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:26:33Z,user_data=None,user_id='edadf89429754e2b9dc21f5ba73894dc',uuid=4a2083e9-7387-4fa8-a7d5-f56fabb0d263,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "address": "fa:16:3e:e1:c4:08", "network": {"id": "87e36b0f-d803-4f8a-a615-bb4ace6f6438", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2124742053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b248a356f82d4070b9d1ff0500ca574c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223b45c0-41", "ovs_interfaceid": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.032 248514 DEBUG nova.network.os_vif_util [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Converting VIF {"id": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "address": "fa:16:3e:e1:c4:08", "network": {"id": "87e36b0f-d803-4f8a-a615-bb4ace6f6438", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2124742053-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b248a356f82d4070b9d1ff0500ca574c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap223b45c0-41", "ovs_interfaceid": "223b45c0-417b-4d31-85cf-1b87f4f68c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.033 248514 DEBUG nova.network.os_vif_util [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c4:08,bridge_name='br-int',has_traffic_filtering=True,id=223b45c0-417b-4d31-85cf-1b87f4f68c12,network=Network(87e36b0f-d803-4f8a-a615-bb4ace6f6438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223b45c0-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.034 248514 DEBUG os_vif [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c4:08,bridge_name='br-int',has_traffic_filtering=True,id=223b45c0-417b-4d31-85cf-1b87f4f68c12,network=Network(87e36b0f-d803-4f8a-a615-bb4ace6f6438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223b45c0-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.036 248514 DEBUG nova.scheduler.client.report [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.039 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.040 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap223b45c0-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.042 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.045 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.047 248514 INFO os_vif [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c4:08,bridge_name='br-int',has_traffic_filtering=True,id=223b45c0-417b-4d31-85cf-1b87f4f68c12,network=Network(87e36b0f-d803-4f8a-a615-bb4ace6f6438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap223b45c0-41')
Dec 13 08:26:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:26:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:26:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:26:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:26:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:26:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:26:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:26:40 compute-0 neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438[294001]: [NOTICE]   (294022) : haproxy version is 2.8.14-c23fe91
Dec 13 08:26:40 compute-0 neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438[294001]: [NOTICE]   (294022) : path to executable is /usr/sbin/haproxy
Dec 13 08:26:40 compute-0 neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438[294001]: [WARNING]  (294022) : Exiting Master process...
Dec 13 08:26:40 compute-0 neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438[294001]: [ALERT]    (294022) : Current worker (294025) exited with code 143 (Terminated)
Dec 13 08:26:40 compute-0 neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438[294001]: [WARNING]  (294022) : All workers exited. Exiting... (0)
Dec 13 08:26:40 compute-0 systemd[1]: libpod-00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15.scope: Deactivated successfully.
Dec 13 08:26:40 compute-0 podman[294204]: 2025-12-13 08:26:40.175787638 +0000 UTC m=+0.058812803 container died 00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:26:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15-userdata-shm.mount: Deactivated successfully.
Dec 13 08:26:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9929f88439cbc002a64576b634ec7bff8c5c8d87e57ee4c4855038fb8be980ab-merged.mount: Deactivated successfully.
Dec 13 08:26:40 compute-0 ceph-mon[76537]: pgmap v1854: 321 pgs: 321 active+clean; 134 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 31 KiB/s wr, 135 op/s
Dec 13 08:26:40 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3277595281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.222 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.224 248514 DEBUG nova.compute.manager [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:26:40 compute-0 podman[294204]: 2025-12-13 08:26:40.225171246 +0000 UTC m=+0.108196411 container cleanup 00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:26:40 compute-0 systemd[1]: libpod-conmon-00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15.scope: Deactivated successfully.
Dec 13 08:26:40 compute-0 podman[294235]: 2025-12-13 08:26:40.304014751 +0000 UTC m=+0.055174602 container remove 00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.311 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c3982a12-793e-4717-b9cc-a261afac288e]: (4, ('Sat Dec 13 08:26:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438 (00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15)\n00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15\nSat Dec 13 08:26:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438 (00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15)\n00271247af0a61d19c23639d8c5ddaae98babbe08718b7a7456f548fe47bdc15\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.314 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f72f490-415a-494b-bf5f-17a6dd41d33f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.315 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87e36b0f-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.317 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:40 compute-0 kernel: tap87e36b0f-d0: left promiscuous mode
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.338 248514 INFO nova.virt.libvirt.driver [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Deleting instance files /var/lib/nova/instances/4a2083e9-7387-4fa8-a7d5-f56fabb0d263_del
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.339 248514 INFO nova.virt.libvirt.driver [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Deletion of /var/lib/nova/instances/4a2083e9-7387-4fa8-a7d5-f56fabb0d263_del complete
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.341 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bb08ffbe-b7cf-4644-81fe-3d1d06011516]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.343 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.361 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7f4f35-e088-45f4-8d1a-f946200452e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.363 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6625eae7-fc0a-42eb-bd6f-c0104abb07a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.379 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9cfd4c8c-a403-4a2e-9459-f391ae757777]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696775, 'reachable_time': 20545, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294250, 'error': None, 'target': 'ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d87e36b0f\x2dd803\x2d4f8a\x2da615\x2dbb4ace6f6438.mount: Deactivated successfully.
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.384 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87e36b0f-d803-4f8a-a615-bb4ace6f6438 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:26:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:40.385 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[11b546fc-0d33-45df-ad28-f0953000def1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.389 248514 DEBUG nova.compute.manager [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.390 248514 DEBUG nova.network.neutron [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.420 248514 INFO nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.428 248514 INFO nova.compute.manager [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Took 1.00 seconds to destroy the instance on the hypervisor.
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.429 248514 DEBUG oslo.service.loopingcall [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.429 248514 DEBUG nova.compute.manager [-] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.429 248514 DEBUG nova.network.neutron [-] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.656 248514 DEBUG nova.compute.manager [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.728 248514 INFO nova.compute.manager [None req-0ebddb72-0470-4351-9c31-57518af3d9d3 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Pausing
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.730 248514 DEBUG nova.objects.instance [None req-0ebddb72-0470-4351-9c31-57518af3d9d3 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'flavor' on Instance uuid 5ad89163-9711-4d02-94be-db41412cd173 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.807 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614400.8074434, 5ad89163-9711-4d02-94be-db41412cd173 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.809 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] VM Paused (Lifecycle Event)
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.811 248514 DEBUG nova.compute.manager [None req-0ebddb72-0470-4351-9c31-57518af3d9d3 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.858 248514 DEBUG nova.compute.manager [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.860 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.860 248514 INFO nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Creating image(s)
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.886 248514 DEBUG nova.storage.rbd_utils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9a28a763-a06b-4a67-ad17-93fae3cc602f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.909 248514 DEBUG nova.storage.rbd_utils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9a28a763-a06b-4a67-ad17-93fae3cc602f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.935 248514 DEBUG nova.storage.rbd_utils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9a28a763-a06b-4a67-ad17-93fae3cc602f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.939 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.977 248514 DEBUG nova.policy [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b988c7ac9354c59aac9a9f41f83c20f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52e1055963294dbdb16cd95b466cd4d9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.982 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:40 compute-0 nova_compute[248510]: 2025-12-13 08:26:40.988 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.012 248514 DEBUG nova.compute.manager [req-a06d4e95-7e27-447e-be48-922606a6cb1a req-73d2193d-ca90-4b7b-a4a5-3b328ccbf01a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Received event network-vif-unplugged-223b45c0-417b-4d31-85cf-1b87f4f68c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.013 248514 DEBUG oslo_concurrency.lockutils [req-a06d4e95-7e27-447e-be48-922606a6cb1a req-73d2193d-ca90-4b7b-a4a5-3b328ccbf01a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.013 248514 DEBUG oslo_concurrency.lockutils [req-a06d4e95-7e27-447e-be48-922606a6cb1a req-73d2193d-ca90-4b7b-a4a5-3b328ccbf01a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.014 248514 DEBUG oslo_concurrency.lockutils [req-a06d4e95-7e27-447e-be48-922606a6cb1a req-73d2193d-ca90-4b7b-a4a5-3b328ccbf01a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.014 248514 DEBUG nova.compute.manager [req-a06d4e95-7e27-447e-be48-922606a6cb1a req-73d2193d-ca90-4b7b-a4a5-3b328ccbf01a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] No waiting events found dispatching network-vif-unplugged-223b45c0-417b-4d31-85cf-1b87f4f68c12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.014 248514 DEBUG nova.compute.manager [req-a06d4e95-7e27-447e-be48-922606a6cb1a req-73d2193d-ca90-4b7b-a4a5-3b328ccbf01a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Received event network-vif-unplugged-223b45c0-417b-4d31-85cf-1b87f4f68c12 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.029 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.030 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.031 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.031 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.054 248514 DEBUG nova.storage.rbd_utils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9a28a763-a06b-4a67-ad17-93fae3cc602f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.058 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9a28a763-a06b-4a67-ad17-93fae3cc602f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 134 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 31 KiB/s wr, 135 op/s
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.303 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9a28a763-a06b-4a67-ad17-93fae3cc602f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.363 248514 DEBUG nova.storage.rbd_utils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] resizing rbd image 9a28a763-a06b-4a67-ad17-93fae3cc602f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.436 248514 DEBUG nova.objects.instance [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a28a763-a06b-4a67-ad17-93fae3cc602f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.465 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.466 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Ensure instance console log exists: /var/lib/nova/instances/9a28a763-a06b-4a67-ad17-93fae3cc602f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.467 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.467 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.468 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.573 248514 DEBUG nova.network.neutron [-] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.630 248514 INFO nova.compute.manager [-] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Took 1.20 seconds to deallocate network for instance.
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.710 248514 DEBUG nova.compute.manager [req-0779b076-6d16-498e-87f6-587601f02cc3 req-9960b4ab-677b-43bf-bf6a-1014a91c9424 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Received event network-vif-deleted-223b45c0-417b-4d31-85cf-1b87f4f68c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.713 248514 DEBUG oslo_concurrency.lockutils [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.714 248514 DEBUG oslo_concurrency.lockutils [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.812 248514 DEBUG oslo_concurrency.processutils [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:41 compute-0 nova_compute[248510]: 2025-12-13 08:26:41.850 248514 DEBUG nova.network.neutron [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Successfully created port: b22a4b91-ffcc-486e-8d60-13b257ea1fef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:26:42 compute-0 ceph-mon[76537]: pgmap v1855: 321 pgs: 321 active+clean; 134 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 31 KiB/s wr, 135 op/s
Dec 13 08:26:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2186375639' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:42 compute-0 nova_compute[248510]: 2025-12-13 08:26:42.377 248514 DEBUG oslo_concurrency.processutils [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:42 compute-0 nova_compute[248510]: 2025-12-13 08:26:42.385 248514 DEBUG nova.compute.provider_tree [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:26:42 compute-0 nova_compute[248510]: 2025-12-13 08:26:42.413 248514 DEBUG nova.scheduler.client.report [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:26:42 compute-0 nova_compute[248510]: 2025-12-13 08:26:42.445 248514 DEBUG oslo_concurrency.lockutils [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:42 compute-0 nova_compute[248510]: 2025-12-13 08:26:42.505 248514 INFO nova.scheduler.client.report [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Deleted allocations for instance 4a2083e9-7387-4fa8-a7d5-f56fabb0d263
Dec 13 08:26:42 compute-0 nova_compute[248510]: 2025-12-13 08:26:42.599 248514 DEBUG oslo_concurrency.lockutils [None req-90ef4fb3-e3c9-46b2-9e20-532f1fad6927 edadf89429754e2b9dc21f5ba73894dc b248a356f82d4070b9d1ff0500ca574c - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:42 compute-0 nova_compute[248510]: 2025-12-13 08:26:42.988 248514 DEBUG nova.network.neutron [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Successfully updated port: b22a4b91-ffcc-486e-8d60-13b257ea1fef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.064 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "refresh_cache-9a28a763-a06b-4a67-ad17-93fae3cc602f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.064 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquired lock "refresh_cache-9a28a763-a06b-4a67-ad17-93fae3cc602f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.065 248514 DEBUG nova.network.neutron [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.141 248514 DEBUG oslo_concurrency.lockutils [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "5ad89163-9711-4d02-94be-db41412cd173" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.142 248514 DEBUG oslo_concurrency.lockutils [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.143 248514 DEBUG oslo_concurrency.lockutils [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "5ad89163-9711-4d02-94be-db41412cd173-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.143 248514 DEBUG oslo_concurrency.lockutils [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.143 248514 DEBUG oslo_concurrency.lockutils [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.145 248514 INFO nova.compute.manager [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Terminating instance
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.146 248514 DEBUG nova.compute.manager [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.162 248514 DEBUG nova.compute.manager [req-ef01037e-bdda-4db0-a36f-a69a4839f32c req-029b37cb-9e86-41e5-be16-a7e18945ae36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Received event network-vif-plugged-223b45c0-417b-4d31-85cf-1b87f4f68c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.162 248514 DEBUG oslo_concurrency.lockutils [req-ef01037e-bdda-4db0-a36f-a69a4839f32c req-029b37cb-9e86-41e5-be16-a7e18945ae36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.163 248514 DEBUG oslo_concurrency.lockutils [req-ef01037e-bdda-4db0-a36f-a69a4839f32c req-029b37cb-9e86-41e5-be16-a7e18945ae36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.163 248514 DEBUG oslo_concurrency.lockutils [req-ef01037e-bdda-4db0-a36f-a69a4839f32c req-029b37cb-9e86-41e5-be16-a7e18945ae36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4a2083e9-7387-4fa8-a7d5-f56fabb0d263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.163 248514 DEBUG nova.compute.manager [req-ef01037e-bdda-4db0-a36f-a69a4839f32c req-029b37cb-9e86-41e5-be16-a7e18945ae36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] No waiting events found dispatching network-vif-plugged-223b45c0-417b-4d31-85cf-1b87f4f68c12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:26:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 128 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 444 KiB/s wr, 99 op/s
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.163 248514 WARNING nova.compute.manager [req-ef01037e-bdda-4db0-a36f-a69a4839f32c req-029b37cb-9e86-41e5-be16-a7e18945ae36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Received unexpected event network-vif-plugged-223b45c0-417b-4d31-85cf-1b87f4f68c12 for instance with vm_state deleted and task_state None.
Dec 13 08:26:43 compute-0 kernel: tap218fbd96-49 (unregistering): left promiscuous mode
Dec 13 08:26:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2186375639' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:43 compute-0 NetworkManager[50376]: <info>  [1765614403.2407] device (tap218fbd96-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:26:43 compute-0 ovn_controller[148476]: 2025-12-13T08:26:43Z|00380|binding|INFO|Releasing lport 218fbd96-4977-4565-bc40-c0dd1a19d059 from this chassis (sb_readonly=0)
Dec 13 08:26:43 compute-0 ovn_controller[148476]: 2025-12-13T08:26:43Z|00381|binding|INFO|Setting lport 218fbd96-4977-4565-bc40-c0dd1a19d059 down in Southbound
Dec 13 08:26:43 compute-0 ovn_controller[148476]: 2025-12-13T08:26:43Z|00382|binding|INFO|Removing iface tap218fbd96-49 ovn-installed in OVS
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.245 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.250 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:29:99 10.100.0.14'], port_security=['fa:16:3e:95:29:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5ad89163-9711-4d02-94be-db41412cd173', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=218fbd96-4977-4565-bc40-c0dd1a19d059) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.252 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 218fbd96-4977-4565-bc40-c0dd1a19d059 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.254 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.256 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd506774-3129-4a31-9fc0-7dd045ececff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.256 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace which is not needed anymore
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.260 248514 DEBUG nova.network.neutron [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:43 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Dec 13 08:26:43 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Consumed 2.634s CPU time.
Dec 13 08:26:43 compute-0 systemd-machined[210538]: Machine qemu-48-instance-0000002b terminated.
Dec 13 08:26:43 compute-0 kernel: tap218fbd96-49: entered promiscuous mode
Dec 13 08:26:43 compute-0 NetworkManager[50376]: <info>  [1765614403.3684] manager: (tap218fbd96-49): new Tun device (/org/freedesktop/NetworkManager/Devices/174)
Dec 13 08:26:43 compute-0 systemd-udevd[294443]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:26:43 compute-0 kernel: tap218fbd96-49 (unregistering): left promiscuous mode
Dec 13 08:26:43 compute-0 ovn_controller[148476]: 2025-12-13T08:26:43Z|00383|binding|INFO|Claiming lport 218fbd96-4977-4565-bc40-c0dd1a19d059 for this chassis.
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.371 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:43 compute-0 ovn_controller[148476]: 2025-12-13T08:26:43Z|00384|binding|INFO|218fbd96-4977-4565-bc40-c0dd1a19d059: Claiming fa:16:3e:95:29:99 10.100.0.14
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.395 248514 INFO nova.virt.libvirt.driver [-] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Instance destroyed successfully.
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.396 248514 DEBUG nova.objects.instance [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'resources' on Instance uuid 5ad89163-9711-4d02-94be-db41412cd173 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:43 compute-0 ovn_controller[148476]: 2025-12-13T08:26:43Z|00385|if_status|INFO|Dropped 1 log messages in last 316 seconds (most recently, 316 seconds ago) due to excessive rate
Dec 13 08:26:43 compute-0 ovn_controller[148476]: 2025-12-13T08:26:43Z|00386|if_status|INFO|Not setting lport 218fbd96-4977-4565-bc40-c0dd1a19d059 down as sb is readonly
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.397 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:43 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[294093]: [NOTICE]   (294097) : haproxy version is 2.8.14-c23fe91
Dec 13 08:26:43 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[294093]: [NOTICE]   (294097) : path to executable is /usr/sbin/haproxy
Dec 13 08:26:43 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[294093]: [WARNING]  (294097) : Exiting Master process...
Dec 13 08:26:43 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[294093]: [ALERT]    (294097) : Current worker (294099) exited with code 143 (Terminated)
Dec 13 08:26:43 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[294093]: [WARNING]  (294097) : All workers exited. Exiting... (0)
Dec 13 08:26:43 compute-0 systemd[1]: libpod-166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31.scope: Deactivated successfully.
Dec 13 08:26:43 compute-0 podman[294463]: 2025-12-13 08:26:43.425029528 +0000 UTC m=+0.073138776 container died 166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 08:26:43 compute-0 ovn_controller[148476]: 2025-12-13T08:26:43Z|00387|binding|INFO|Releasing lport 218fbd96-4977-4565-bc40-c0dd1a19d059 from this chassis (sb_readonly=0)
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.448 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:29:99 10.100.0.14'], port_security=['fa:16:3e:95:29:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5ad89163-9711-4d02-94be-db41412cd173', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=218fbd96-4977-4565-bc40-c0dd1a19d059) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31-userdata-shm.mount: Deactivated successfully.
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.458 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:29:99 10.100.0.14'], port_security=['fa:16:3e:95:29:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5ad89163-9711-4d02-94be-db41412cd173', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=218fbd96-4977-4565-bc40-c0dd1a19d059) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5db6410990b1c787616716b1257736de1bcab5283e8453831f423bfe4a69d731-merged.mount: Deactivated successfully.
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.458 248514 DEBUG nova.virt.libvirt.vif [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1690898624',display_name='tempest-DeleteServersTestJSON-server-1690898624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1690898624',id=43,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:26:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-uvfukfl4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:26:40Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=5ad89163-9711-4d02-94be-db41412cd173,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "218fbd96-4977-4565-bc40-c0dd1a19d059", "address": "fa:16:3e:95:29:99", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap218fbd96-49", "ovs_interfaceid": "218fbd96-4977-4565-bc40-c0dd1a19d059", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.460 248514 DEBUG nova.network.os_vif_util [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "218fbd96-4977-4565-bc40-c0dd1a19d059", "address": "fa:16:3e:95:29:99", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap218fbd96-49", "ovs_interfaceid": "218fbd96-4977-4565-bc40-c0dd1a19d059", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.461 248514 DEBUG nova.network.os_vif_util [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:29:99,bridge_name='br-int',has_traffic_filtering=True,id=218fbd96-4977-4565-bc40-c0dd1a19d059,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap218fbd96-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.461 248514 DEBUG os_vif [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:29:99,bridge_name='br-int',has_traffic_filtering=True,id=218fbd96-4977-4565-bc40-c0dd1a19d059,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap218fbd96-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.464 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap218fbd96-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.466 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.467 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.470 248514 INFO os_vif [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:29:99,bridge_name='br-int',has_traffic_filtering=True,id=218fbd96-4977-4565-bc40-c0dd1a19d059,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap218fbd96-49')
Dec 13 08:26:43 compute-0 podman[294463]: 2025-12-13 08:26:43.47296154 +0000 UTC m=+0.121070788 container cleanup 166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:26:43 compute-0 systemd[1]: libpod-conmon-166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31.scope: Deactivated successfully.
Dec 13 08:26:43 compute-0 podman[294503]: 2025-12-13 08:26:43.539556104 +0000 UTC m=+0.044128750 container remove 166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.547 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d378c88e-ce0a-4c07-a862-276869006a06]: (4, ('Sat Dec 13 08:26:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31)\n166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31\nSat Dec 13 08:26:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31)\n166aebdb3ff2cdc60193dfaaaae048975ca17860b4bbb9d66d9be9cb51b18e31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.549 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3a7890-01a9-460e-867a-eead9fbdad66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.549 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:43 compute-0 kernel: tap85372fca-a0: left promiscuous mode
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.566 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.567 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.569 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[43b08bb2-fc93-4eda-8d26-863d9c86345e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.582 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2288a1a7-6e39-4946-8b87-bbd1131a3657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.585 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b70fe64-e418-4f73-84f1-b9fbf92e2fc5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[be98c3dd-8933-434c-afef-259fed138f1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696863, 'reachable_time': 33789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294529, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.606 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.606 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7f19f9-5525-4c42-b540-672f17c41063]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d85372fca\x2dab50\x2d48b6\x2d8c21\x2d507f630c205a.mount: Deactivated successfully.
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.607 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 218fbd96-4977-4565-bc40-c0dd1a19d059 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.609 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.610 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f6337d5a-8987-432e-b302-6308c9a38b06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.611 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 218fbd96-4977-4565-bc40-c0dd1a19d059 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.612 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:26:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:43.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8dca82a-1b47-4f50-b382-d89167d8b7c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.722 248514 INFO nova.virt.libvirt.driver [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Deleting instance files /var/lib/nova/instances/5ad89163-9711-4d02-94be-db41412cd173_del
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.723 248514 INFO nova.virt.libvirt.driver [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Deletion of /var/lib/nova/instances/5ad89163-9711-4d02-94be-db41412cd173_del complete
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.793 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.793 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.794 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.831 248514 DEBUG nova.compute.manager [req-b8415f5d-0e28-43f2-86e8-a4d97a0ed894 req-d5e3e740-b834-4e8a-a45d-2d0a30adeda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Received event network-changed-b22a4b91-ffcc-486e-8d60-13b257ea1fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.831 248514 DEBUG nova.compute.manager [req-b8415f5d-0e28-43f2-86e8-a4d97a0ed894 req-d5e3e740-b834-4e8a-a45d-2d0a30adeda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Refreshing instance network info cache due to event network-changed-b22a4b91-ffcc-486e-8d60-13b257ea1fef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.832 248514 DEBUG oslo_concurrency.lockutils [req-b8415f5d-0e28-43f2-86e8-a4d97a0ed894 req-d5e3e740-b834-4e8a-a45d-2d0a30adeda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9a28a763-a06b-4a67-ad17-93fae3cc602f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.837 248514 INFO nova.compute.manager [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Took 0.69 seconds to destroy the instance on the hypervisor.
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.839 248514 DEBUG oslo.service.loopingcall [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.839 248514 DEBUG nova.compute.manager [-] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:26:43 compute-0 nova_compute[248510]: 2025-12-13 08:26:43.840 248514 DEBUG nova.network.neutron [-] [instance: 5ad89163-9711-4d02-94be-db41412cd173] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:26:44 compute-0 ceph-mon[76537]: pgmap v1856: 321 pgs: 321 active+clean; 128 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 444 KiB/s wr, 99 op/s
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.618 248514 DEBUG nova.network.neutron [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Updating instance_info_cache with network_info: [{"id": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "address": "fa:16:3e:51:d5:bd", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb22a4b91-ff", "ovs_interfaceid": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.642 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Releasing lock "refresh_cache-9a28a763-a06b-4a67-ad17-93fae3cc602f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.643 248514 DEBUG nova.compute.manager [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Instance network_info: |[{"id": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "address": "fa:16:3e:51:d5:bd", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb22a4b91-ff", "ovs_interfaceid": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.643 248514 DEBUG oslo_concurrency.lockutils [req-b8415f5d-0e28-43f2-86e8-a4d97a0ed894 req-d5e3e740-b834-4e8a-a45d-2d0a30adeda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9a28a763-a06b-4a67-ad17-93fae3cc602f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.644 248514 DEBUG nova.network.neutron [req-b8415f5d-0e28-43f2-86e8-a4d97a0ed894 req-d5e3e740-b834-4e8a-a45d-2d0a30adeda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Refreshing network info cache for port b22a4b91-ffcc-486e-8d60-13b257ea1fef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.647 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Start _get_guest_xml network_info=[{"id": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "address": "fa:16:3e:51:d5:bd", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb22a4b91-ff", "ovs_interfaceid": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.652 248514 WARNING nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.663 248514 DEBUG nova.virt.libvirt.host [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.664 248514 DEBUG nova.virt.libvirt.host [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.667 248514 DEBUG nova.virt.libvirt.host [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.668 248514 DEBUG nova.virt.libvirt.host [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.668 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.668 248514 DEBUG nova.virt.hardware [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.669 248514 DEBUG nova.virt.hardware [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.669 248514 DEBUG nova.virt.hardware [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.669 248514 DEBUG nova.virt.hardware [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.669 248514 DEBUG nova.virt.hardware [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.670 248514 DEBUG nova.virt.hardware [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.670 248514 DEBUG nova.virt.hardware [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.670 248514 DEBUG nova.virt.hardware [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.670 248514 DEBUG nova.virt.hardware [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.670 248514 DEBUG nova.virt.hardware [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.671 248514 DEBUG nova.virt.hardware [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.673 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:26:44 compute-0 nova_compute[248510]: 2025-12-13 08:26:44.807 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.079 248514 DEBUG nova.network.neutron [-] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.098 248514 INFO nova.compute.manager [-] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Took 1.26 seconds to deallocate network for instance.
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.155 248514 DEBUG oslo_concurrency.lockutils [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.156 248514 DEBUG oslo_concurrency.lockutils [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 107 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 164 op/s
Dec 13 08:26:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:26:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283966826' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.238 248514 DEBUG oslo_concurrency.processutils [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4283966826' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.275 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.298 248514 DEBUG nova.storage.rbd_utils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9a28a763-a06b-4a67-ad17-93fae3cc602f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.303 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.336 248514 DEBUG nova.compute.manager [req-8ee6148f-abe3-458e-8d0e-783fade30cc5 req-8e0169db-b38a-49fe-b2e0-f85650744e5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Received event network-vif-unplugged-218fbd96-4977-4565-bc40-c0dd1a19d059 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.338 248514 DEBUG oslo_concurrency.lockutils [req-8ee6148f-abe3-458e-8d0e-783fade30cc5 req-8e0169db-b38a-49fe-b2e0-f85650744e5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5ad89163-9711-4d02-94be-db41412cd173-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.338 248514 DEBUG oslo_concurrency.lockutils [req-8ee6148f-abe3-458e-8d0e-783fade30cc5 req-8e0169db-b38a-49fe-b2e0-f85650744e5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.338 248514 DEBUG oslo_concurrency.lockutils [req-8ee6148f-abe3-458e-8d0e-783fade30cc5 req-8e0169db-b38a-49fe-b2e0-f85650744e5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.338 248514 DEBUG nova.compute.manager [req-8ee6148f-abe3-458e-8d0e-783fade30cc5 req-8e0169db-b38a-49fe-b2e0-f85650744e5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] No waiting events found dispatching network-vif-unplugged-218fbd96-4977-4565-bc40-c0dd1a19d059 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.339 248514 WARNING nova.compute.manager [req-8ee6148f-abe3-458e-8d0e-783fade30cc5 req-8e0169db-b38a-49fe-b2e0-f85650744e5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Received unexpected event network-vif-unplugged-218fbd96-4977-4565-bc40-c0dd1a19d059 for instance with vm_state deleted and task_state None.
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.339 248514 DEBUG nova.compute.manager [req-8ee6148f-abe3-458e-8d0e-783fade30cc5 req-8e0169db-b38a-49fe-b2e0-f85650744e5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Received event network-vif-plugged-218fbd96-4977-4565-bc40-c0dd1a19d059 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.339 248514 DEBUG oslo_concurrency.lockutils [req-8ee6148f-abe3-458e-8d0e-783fade30cc5 req-8e0169db-b38a-49fe-b2e0-f85650744e5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5ad89163-9711-4d02-94be-db41412cd173-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.340 248514 DEBUG oslo_concurrency.lockutils [req-8ee6148f-abe3-458e-8d0e-783fade30cc5 req-8e0169db-b38a-49fe-b2e0-f85650744e5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.340 248514 DEBUG oslo_concurrency.lockutils [req-8ee6148f-abe3-458e-8d0e-783fade30cc5 req-8e0169db-b38a-49fe-b2e0-f85650744e5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.340 248514 DEBUG nova.compute.manager [req-8ee6148f-abe3-458e-8d0e-783fade30cc5 req-8e0169db-b38a-49fe-b2e0-f85650744e5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] No waiting events found dispatching network-vif-plugged-218fbd96-4977-4565-bc40-c0dd1a19d059 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.340 248514 WARNING nova.compute.manager [req-8ee6148f-abe3-458e-8d0e-783fade30cc5 req-8e0169db-b38a-49fe-b2e0-f85650744e5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Received unexpected event network-vif-plugged-218fbd96-4977-4565-bc40-c0dd1a19d059 for instance with vm_state deleted and task_state None.
Dec 13 08:26:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976878084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.823 248514 DEBUG oslo_concurrency.processutils [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.830 248514 DEBUG nova.compute.provider_tree [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.862 248514 DEBUG nova.scheduler.client.report [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:26:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:26:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/487490859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.884 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.885 248514 DEBUG nova.virt.libvirt.vif [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:26:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1563022373',display_name='tempest-ImagesTestJSON-server-1563022373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1563022373',id=44,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-h9h6w1n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:26:40Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=9a28a763-a06b-4a67-ad17-93fae3cc602f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "address": "fa:16:3e:51:d5:bd", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb22a4b91-ff", "ovs_interfaceid": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.885 248514 DEBUG nova.network.os_vif_util [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "address": "fa:16:3e:51:d5:bd", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb22a4b91-ff", "ovs_interfaceid": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.886 248514 DEBUG nova.network.os_vif_util [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:d5:bd,bridge_name='br-int',has_traffic_filtering=True,id=b22a4b91-ffcc-486e-8d60-13b257ea1fef,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb22a4b91-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.887 248514 DEBUG nova.objects.instance [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a28a763-a06b-4a67-ad17-93fae3cc602f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.905 248514 DEBUG oslo_concurrency.lockutils [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.919 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:26:45 compute-0 nova_compute[248510]:   <uuid>9a28a763-a06b-4a67-ad17-93fae3cc602f</uuid>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   <name>instance-0000002c</name>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <nova:name>tempest-ImagesTestJSON-server-1563022373</nova:name>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:26:44</nova:creationTime>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:26:45 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:26:45 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:26:45 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:26:45 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:26:45 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:26:45 compute-0 nova_compute[248510]:         <nova:user uuid="3b988c7ac9354c59aac9a9f41f83c20f">tempest-ImagesTestJSON-1234382421-project-member</nova:user>
Dec 13 08:26:45 compute-0 nova_compute[248510]:         <nova:project uuid="52e1055963294dbdb16cd95b466cd4d9">tempest-ImagesTestJSON-1234382421</nova:project>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:26:45 compute-0 nova_compute[248510]:         <nova:port uuid="b22a4b91-ffcc-486e-8d60-13b257ea1fef">
Dec 13 08:26:45 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <system>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <entry name="serial">9a28a763-a06b-4a67-ad17-93fae3cc602f</entry>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <entry name="uuid">9a28a763-a06b-4a67-ad17-93fae3cc602f</entry>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     </system>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   <os>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   </os>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   <features>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   </features>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9a28a763-a06b-4a67-ad17-93fae3cc602f_disk">
Dec 13 08:26:45 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       </source>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:26:45 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9a28a763-a06b-4a67-ad17-93fae3cc602f_disk.config">
Dec 13 08:26:45 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       </source>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:26:45 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:51:d5:bd"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <target dev="tapb22a4b91-ff"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/9a28a763-a06b-4a67-ad17-93fae3cc602f/console.log" append="off"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <video>
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     </video>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:26:45 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:26:45 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:26:45 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:26:45 compute-0 nova_compute[248510]: </domain>
Dec 13 08:26:45 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.921 248514 DEBUG nova.compute.manager [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Preparing to wait for external event network-vif-plugged-b22a4b91-ffcc-486e-8d60-13b257ea1fef prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.922 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.922 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.922 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.923 248514 DEBUG nova.virt.libvirt.vif [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:26:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1563022373',display_name='tempest-ImagesTestJSON-server-1563022373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1563022373',id=44,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-h9h6w1n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:26:40Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=9a28a763-a06b-4a67-ad17-93fae3cc602f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "address": "fa:16:3e:51:d5:bd", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb22a4b91-ff", "ovs_interfaceid": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.924 248514 DEBUG nova.network.os_vif_util [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "address": "fa:16:3e:51:d5:bd", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb22a4b91-ff", "ovs_interfaceid": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.924 248514 DEBUG nova.network.os_vif_util [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:d5:bd,bridge_name='br-int',has_traffic_filtering=True,id=b22a4b91-ffcc-486e-8d60-13b257ea1fef,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb22a4b91-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.925 248514 DEBUG os_vif [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:d5:bd,bridge_name='br-int',has_traffic_filtering=True,id=b22a4b91-ffcc-486e-8d60-13b257ea1fef,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb22a4b91-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.929 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.929 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.930 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.936 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb22a4b91-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.938 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb22a4b91-ff, col_values=(('external_ids', {'iface-id': 'b22a4b91-ffcc-486e-8d60-13b257ea1fef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:d5:bd', 'vm-uuid': '9a28a763-a06b-4a67-ad17-93fae3cc602f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.941 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:45 compute-0 NetworkManager[50376]: <info>  [1765614405.9420] manager: (tapb22a4b91-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.945 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.947 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.948 248514 INFO os_vif [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:d5:bd,bridge_name='br-int',has_traffic_filtering=True,id=b22a4b91-ffcc-486e-8d60-13b257ea1fef,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb22a4b91-ff')
Dec 13 08:26:45 compute-0 nova_compute[248510]: 2025-12-13 08:26:45.987 248514 INFO nova.scheduler.client.report [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Deleted allocations for instance 5ad89163-9711-4d02-94be-db41412cd173
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.044 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.045 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.045 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No VIF found with MAC fa:16:3e:51:d5:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.048 248514 INFO nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Using config drive
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.072 248514 DEBUG nova.storage.rbd_utils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9a28a763-a06b-4a67-ad17-93fae3cc602f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.085 248514 DEBUG oslo_concurrency.lockutils [None req-d73f955e-348c-4988-9730-bac6c9815805 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "5ad89163-9711-4d02-94be-db41412cd173" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.113 248514 DEBUG nova.compute.manager [req-a8dfcbf9-5c39-4f49-9d92-bf7d1d648fef req-5bcd025e-7da7-4a7a-9bc6-0a059e5f88cb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Received event network-vif-deleted-218fbd96-4977-4565-bc40-c0dd1a19d059 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:46 compute-0 ceph-mon[76537]: pgmap v1857: 321 pgs: 321 active+clean; 107 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 164 op/s
Dec 13 08:26:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/976878084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/487490859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.675 248514 INFO nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Creating config drive at /var/lib/nova/instances/9a28a763-a06b-4a67-ad17-93fae3cc602f/disk.config
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.685 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a28a763-a06b-4a67-ad17-93fae3cc602f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpih9p5aoe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.806 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.806 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.807 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.844 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a28a763-a06b-4a67-ad17-93fae3cc602f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpih9p5aoe" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.875 248514 DEBUG nova.storage.rbd_utils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9a28a763-a06b-4a67-ad17-93fae3cc602f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:46 compute-0 nova_compute[248510]: 2025-12-13 08:26:46.879 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a28a763-a06b-4a67-ad17-93fae3cc602f/disk.config 9a28a763-a06b-4a67-ad17-93fae3cc602f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:46 compute-0 podman[294660]: 2025-12-13 08:26:46.995451982 +0000 UTC m=+0.070123571 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 13 08:26:47 compute-0 podman[294661]: 2025-12-13 08:26:47.017974618 +0000 UTC m=+0.088386752 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:26:47 compute-0 podman[294658]: 2025-12-13 08:26:47.051434444 +0000 UTC m=+0.125629221 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.057 248514 DEBUG nova.network.neutron [req-b8415f5d-0e28-43f2-86e8-a4d97a0ed894 req-d5e3e740-b834-4e8a-a45d-2d0a30adeda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Updated VIF entry in instance network info cache for port b22a4b91-ffcc-486e-8d60-13b257ea1fef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.058 248514 DEBUG nova.network.neutron [req-b8415f5d-0e28-43f2-86e8-a4d97a0ed894 req-d5e3e740-b834-4e8a-a45d-2d0a30adeda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Updating instance_info_cache with network_info: [{"id": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "address": "fa:16:3e:51:d5:bd", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb22a4b91-ff", "ovs_interfaceid": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.084 248514 DEBUG oslo_concurrency.lockutils [req-b8415f5d-0e28-43f2-86e8-a4d97a0ed894 req-d5e3e740-b834-4e8a-a45d-2d0a30adeda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9a28a763-a06b-4a67-ad17-93fae3cc602f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.102 248514 DEBUG oslo_concurrency.processutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a28a763-a06b-4a67-ad17-93fae3cc602f/disk.config 9a28a763-a06b-4a67-ad17-93fae3cc602f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.103 248514 INFO nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Deleting local config drive /var/lib/nova/instances/9a28a763-a06b-4a67-ad17-93fae3cc602f/disk.config because it was imported into RBD.
Dec 13 08:26:47 compute-0 kernel: tapb22a4b91-ff: entered promiscuous mode
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.164 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:47 compute-0 NetworkManager[50376]: <info>  [1765614407.1652] manager: (tapb22a4b91-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Dec 13 08:26:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 107 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Dec 13 08:26:47 compute-0 ovn_controller[148476]: 2025-12-13T08:26:47Z|00388|binding|INFO|Claiming lport b22a4b91-ffcc-486e-8d60-13b257ea1fef for this chassis.
Dec 13 08:26:47 compute-0 ovn_controller[148476]: 2025-12-13T08:26:47Z|00389|binding|INFO|b22a4b91-ffcc-486e-8d60-13b257ea1fef: Claiming fa:16:3e:51:d5:bd 10.100.0.13
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.182 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:d5:bd 10.100.0.13'], port_security=['fa:16:3e:51:d5:bd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a28a763-a06b-4a67-ad17-93fae3cc602f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b22a4b91-ffcc-486e-8d60-13b257ea1fef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.184 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b22a4b91-ffcc-486e-8d60-13b257ea1fef in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc bound to our chassis
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.186 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.201 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8a5574-c691-4051-86c9-2dacb1683276]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.202 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87bd91d0-e1 in ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.204 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87bd91d0-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.204 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0865ffd0-2586-4ea3-81cc-19d19047cf1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.205 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2207ee-2e8f-4aa2-b6d4-99f5c65d6c99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 systemd-udevd[294772]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:26:47 compute-0 systemd-machined[210538]: New machine qemu-49-instance-0000002c.
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.224 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0b35ebbf-d834-478b-a04c-f7bccc0ab112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000002c.
Dec 13 08:26:47 compute-0 NetworkManager[50376]: <info>  [1765614407.2311] device (tapb22a4b91-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:26:47 compute-0 NetworkManager[50376]: <info>  [1765614407.2320] device (tapb22a4b91-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.256 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[04d84acc-bf16-406f-9cfd-4a69efead669]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.291 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.299 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a47e1a7c-b6dc-44f9-8984-36f3e6f0f04e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.310 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71d7befd-30bf-42d3-b419-9a9024fa8be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 NetworkManager[50376]: <info>  [1765614407.3133] manager: (tap87bd91d0-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/177)
Dec 13 08:26:47 compute-0 systemd-udevd[294776]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.349 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ec87c5-372c-4f22-ad27-efc0f20ab23f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.355 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd43fb7-9ede-41c7-b645-059916db7382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 NetworkManager[50376]: <info>  [1765614407.3760] device (tap87bd91d0-e0): carrier: link connected
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.382 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5c044a44-8b36-4310-9da1-6334e7b622d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3747359182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.398 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7fadb730-666b-4d4f-8d6e-7b10beccca1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698459, 'reachable_time': 35380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294804, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 ovn_controller[148476]: 2025-12-13T08:26:47Z|00390|binding|INFO|Setting lport b22a4b91-ffcc-486e-8d60-13b257ea1fef ovn-installed in OVS
Dec 13 08:26:47 compute-0 ovn_controller[148476]: 2025-12-13T08:26:47Z|00391|binding|INFO|Setting lport b22a4b91-ffcc-486e-8d60-13b257ea1fef up in Southbound
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.403 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.415 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d12a7b-e0b3-4274-8fcb-bad3cc2d9855]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:ec7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698459, 'tstamp': 698459}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294807, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.422 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.433 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d06a46-0ecf-4884-b08c-c9c2700644a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698459, 'reachable_time': 35380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294808, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.469 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[92de669e-1ce8-470e-83bb-e60824f08b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.493 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.494 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.548 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ee8e95-f58f-4d85-b924-3ea174e0fd00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.550 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.551 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.552 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87bd91d0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.554 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:47 compute-0 NetworkManager[50376]: <info>  [1765614407.5545] manager: (tap87bd91d0-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Dec 13 08:26:47 compute-0 kernel: tap87bd91d0-e0: entered promiscuous mode
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.557 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87bd91d0-e0, col_values=(('external_ids', {'iface-id': '3e59c36d-12c7-4d92-aa2f-8b6a795f3f98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:47 compute-0 ovn_controller[148476]: 2025-12-13T08:26:47Z|00392|binding|INFO|Releasing lport 3e59c36d-12c7-4d92-aa2f-8b6a795f3f98 from this chassis (sb_readonly=0)
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.559 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.578 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.579 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3dfe7873-3f73-4c12-af12-b7253dd7da39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.580 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:26:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:47.580 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'env', 'PROCESS_TAG=haproxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87bd91d0-eead-49b6-8f92-f8d0dba555dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.668 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.669 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4166MB free_disk=59.95945811457932GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.669 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.670 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.691 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614407.6910272, 9a28a763-a06b-4a67-ad17-93fae3cc602f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.692 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] VM Started (Lifecycle Event)
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.720 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.723 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614407.691977, 9a28a763-a06b-4a67-ad17-93fae3cc602f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.723 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] VM Paused (Lifecycle Event)
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.748 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9a28a763-a06b-4a67-ad17-93fae3cc602f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.748 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.748 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.754 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.757 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.789 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:26:47 compute-0 nova_compute[248510]: 2025-12-13 08:26:47.805 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:47 compute-0 podman[294883]: 2025-12-13 08:26:47.955747267 +0000 UTC m=+0.064723518 container create 073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:26:47 compute-0 systemd[1]: Started libpod-conmon-073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684.scope.
Dec 13 08:26:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:26:48 compute-0 podman[294883]: 2025-12-13 08:26:47.927292465 +0000 UTC m=+0.036268736 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f325cb887c014836ab95cd0dedb32785391e7b8c70230ccce073c9eb9ab282ff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:26:48 compute-0 podman[294883]: 2025-12-13 08:26:48.250610812 +0000 UTC m=+0.359587113 container init 073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:26:48 compute-0 podman[294883]: 2025-12-13 08:26:48.256384965 +0000 UTC m=+0.365361226 container start 073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 08:26:48 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[294917]: [NOTICE]   (294921) : New worker (294923) forked
Dec 13 08:26:48 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[294917]: [NOTICE]   (294921) : Loading success.
Dec 13 08:26:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/517658367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:48 compute-0 ceph-mon[76537]: pgmap v1858: 321 pgs: 321 active+clean; 107 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Dec 13 08:26:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3747359182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:48 compute-0 nova_compute[248510]: 2025-12-13 08:26:48.338 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:48 compute-0 nova_compute[248510]: 2025-12-13 08:26:48.345 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:26:48 compute-0 nova_compute[248510]: 2025-12-13 08:26:48.372 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:26:48 compute-0 nova_compute[248510]: 2025-12-13 08:26:48.409 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:26:48 compute-0 nova_compute[248510]: 2025-12-13 08:26:48.410 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 88 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Dec 13 08:26:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/517658367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:49 compute-0 nova_compute[248510]: 2025-12-13 08:26:49.410 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:26:49 compute-0 nova_compute[248510]: 2025-12-13 08:26:49.411 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:26:49 compute-0 nova_compute[248510]: 2025-12-13 08:26:49.411 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:26:49 compute-0 nova_compute[248510]: 2025-12-13 08:26:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:26:49 compute-0 nova_compute[248510]: 2025-12-13 08:26:49.810 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.025 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "2e52d555-08dd-49fb-a73a-eded391e154c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.025 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.047 248514 DEBUG nova.compute.manager [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:26:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.129 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.130 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.138 248514 DEBUG nova.virt.hardware [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.138 248514 INFO nova.compute.claims [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.281 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.432 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.433 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:50 compute-0 ceph-mon[76537]: pgmap v1859: 321 pgs: 321 active+clean; 88 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.470 248514 DEBUG nova.compute.manager [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.553 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1811246408' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.855 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.861 248514 DEBUG nova.compute.provider_tree [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.897 248514 DEBUG nova.scheduler.client.report [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:26:50 compute-0 nova_compute[248510]: 2025-12-13 08:26:50.942 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.050 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.051 248514 DEBUG nova.compute.manager [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.054 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.063 248514 DEBUG nova.virt.hardware [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.063 248514 INFO nova.compute.claims [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:26:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 88 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.210 248514 DEBUG nova.compute.manager [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.211 248514 DEBUG nova.network.neutron [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.296 248514 INFO nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.319 248514 DEBUG nova.compute.manager [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.388 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.426 248514 DEBUG nova.compute.manager [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.428 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.428 248514 INFO nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Creating image(s)
Dec 13 08:26:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1811246408' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.633 248514 DEBUG nova.storage.rbd_utils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 2e52d555-08dd-49fb-a73a-eded391e154c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.666 248514 DEBUG nova.storage.rbd_utils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 2e52d555-08dd-49fb-a73a-eded391e154c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.691 248514 DEBUG nova.storage.rbd_utils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 2e52d555-08dd-49fb-a73a-eded391e154c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.696 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.731 248514 DEBUG nova.policy [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b6f7967ce16c45d394188c1302b02907', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.735 248514 DEBUG nova.compute.manager [req-5a4aacc7-ff14-40bb-8184-4aa469f0ff9b req-6a2b1be4-caea-4d8e-ac0a-4cbab5fa1ca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Received event network-vif-plugged-b22a4b91-ffcc-486e-8d60-13b257ea1fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.736 248514 DEBUG oslo_concurrency.lockutils [req-5a4aacc7-ff14-40bb-8184-4aa469f0ff9b req-6a2b1be4-caea-4d8e-ac0a-4cbab5fa1ca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.736 248514 DEBUG oslo_concurrency.lockutils [req-5a4aacc7-ff14-40bb-8184-4aa469f0ff9b req-6a2b1be4-caea-4d8e-ac0a-4cbab5fa1ca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.736 248514 DEBUG oslo_concurrency.lockutils [req-5a4aacc7-ff14-40bb-8184-4aa469f0ff9b req-6a2b1be4-caea-4d8e-ac0a-4cbab5fa1ca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.736 248514 DEBUG nova.compute.manager [req-5a4aacc7-ff14-40bb-8184-4aa469f0ff9b req-6a2b1be4-caea-4d8e-ac0a-4cbab5fa1ca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Processing event network-vif-plugged-b22a4b91-ffcc-486e-8d60-13b257ea1fef _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.737 248514 DEBUG nova.compute.manager [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.743 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614411.7431388, 9a28a763-a06b-4a67-ad17-93fae3cc602f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.743 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] VM Resumed (Lifecycle Event)
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.746 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.764 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.767 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.768 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.768 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.769 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.791 248514 DEBUG nova.storage.rbd_utils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 2e52d555-08dd-49fb-a73a-eded391e154c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.796 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2e52d555-08dd-49fb-a73a-eded391e154c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.834 248514 INFO nova.virt.libvirt.driver [-] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Instance spawned successfully.
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.835 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.837 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.970 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.976 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.976 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.977 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.977 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.978 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:51 compute-0 nova_compute[248510]: 2025-12-13 08:26:51.979 248514 DEBUG nova.virt.libvirt.driver [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.036 248514 INFO nova.compute.manager [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Took 11.18 seconds to spawn the instance on the hypervisor.
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.037 248514 DEBUG nova.compute.manager [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:26:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/626772575' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.075 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.081 248514 DEBUG nova.compute.provider_tree [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.097 248514 INFO nova.compute.manager [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Took 13.73 seconds to build instance.
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.101 248514 DEBUG nova.scheduler.client.report [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.167 248514 DEBUG oslo_concurrency.lockutils [None req-90129bb0-a6e6-4f20-9979-e27d8524b875 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.172 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.173 248514 DEBUG nova.compute.manager [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.219 248514 DEBUG nova.compute.manager [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.219 248514 DEBUG nova.network.neutron [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.239 248514 INFO nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.273 248514 DEBUG nova.compute.manager [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.368 248514 DEBUG nova.compute.manager [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.370 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.370 248514 INFO nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Creating image(s)
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.411 248514 DEBUG nova.storage.rbd_utils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.448 248514 DEBUG nova.storage.rbd_utils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.481 248514 DEBUG nova.storage.rbd_utils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.486 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.526 248514 DEBUG nova.policy [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5bc32e49dbd4372a006913090b9ef0f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.529 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2e52d555-08dd-49fb-a73a-eded391e154c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.734s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.604 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.605 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.606 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.607 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.632 248514 DEBUG nova.storage.rbd_utils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.644 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.679 248514 DEBUG nova.storage.rbd_utils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] resizing rbd image 2e52d555-08dd-49fb-a73a-eded391e154c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:26:52 compute-0 ceph-mon[76537]: pgmap v1860: 321 pgs: 321 active+clean; 88 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Dec 13 08:26:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/626772575' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.759 248514 DEBUG nova.network.neutron [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Successfully created port: b1a08ea3-3044-4caa-a944-744bd324adc9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.775 248514 DEBUG nova.objects.instance [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'migration_context' on Instance uuid 2e52d555-08dd-49fb-a73a-eded391e154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.793 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.794 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Ensure instance console log exists: /var/lib/nova/instances/2e52d555-08dd-49fb-a73a-eded391e154c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.795 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.795 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.796 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:52 compute-0 nova_compute[248510]: 2025-12-13 08:26:52.953 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.014 248514 DEBUG nova.storage.rbd_utils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] resizing rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.097 248514 DEBUG nova.objects.instance [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'migration_context' on Instance uuid 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.112 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.112 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Ensure instance console log exists: /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.113 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.113 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.113 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 124 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.320 248514 DEBUG nova.network.neutron [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Successfully created port: 57241dd9-27dd-49bc-befb-1ef45674d6be _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.967 248514 DEBUG nova.compute.manager [req-9c292b14-cd40-470f-a291-1fdb47dd2747 req-9ad5e5ce-1fc3-4d85-8fb9-98b594a4ffe4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Received event network-vif-plugged-b22a4b91-ffcc-486e-8d60-13b257ea1fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.968 248514 DEBUG oslo_concurrency.lockutils [req-9c292b14-cd40-470f-a291-1fdb47dd2747 req-9ad5e5ce-1fc3-4d85-8fb9-98b594a4ffe4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.968 248514 DEBUG oslo_concurrency.lockutils [req-9c292b14-cd40-470f-a291-1fdb47dd2747 req-9ad5e5ce-1fc3-4d85-8fb9-98b594a4ffe4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.969 248514 DEBUG oslo_concurrency.lockutils [req-9c292b14-cd40-470f-a291-1fdb47dd2747 req-9ad5e5ce-1fc3-4d85-8fb9-98b594a4ffe4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.969 248514 DEBUG nova.compute.manager [req-9c292b14-cd40-470f-a291-1fdb47dd2747 req-9ad5e5ce-1fc3-4d85-8fb9-98b594a4ffe4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] No waiting events found dispatching network-vif-plugged-b22a4b91-ffcc-486e-8d60-13b257ea1fef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:26:53 compute-0 nova_compute[248510]: 2025-12-13 08:26:53.969 248514 WARNING nova.compute.manager [req-9c292b14-cd40-470f-a291-1fdb47dd2747 req-9ad5e5ce-1fc3-4d85-8fb9-98b594a4ffe4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Received unexpected event network-vif-plugged-b22a4b91-ffcc-486e-8d60-13b257ea1fef for instance with vm_state active and task_state None.
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.158 248514 DEBUG nova.objects.instance [None req-209d608a-96ca-40b1-850d-3fbcc5b1cb60 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a28a763-a06b-4a67-ad17-93fae3cc602f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.200 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614414.1998522, 9a28a763-a06b-4a67-ad17-93fae3cc602f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.201 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] VM Paused (Lifecycle Event)
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.228 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.235 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.260 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] During sync_power_state the instance has a pending task (suspending). Skip.
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.665 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614399.6647465, 4a2083e9-7387-4fa8-a7d5-f56fabb0d263 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.666 248514 INFO nova.compute.manager [-] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] VM Stopped (Lifecycle Event)
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.697 248514 DEBUG nova.compute.manager [None req-1f249dcd-af50-4d35-a2d2-f4e19b02ff9d - - - - - -] [instance: 4a2083e9-7387-4fa8-a7d5-f56fabb0d263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.856 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:54 compute-0 ceph-mon[76537]: pgmap v1861: 321 pgs: 321 active+clean; 124 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Dec 13 08:26:54 compute-0 kernel: tapb22a4b91-ff (unregistering): left promiscuous mode
Dec 13 08:26:54 compute-0 NetworkManager[50376]: <info>  [1765614414.9238] device (tapb22a4b91-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:26:54 compute-0 ovn_controller[148476]: 2025-12-13T08:26:54Z|00393|binding|INFO|Releasing lport b22a4b91-ffcc-486e-8d60-13b257ea1fef from this chassis (sb_readonly=0)
Dec 13 08:26:54 compute-0 ovn_controller[148476]: 2025-12-13T08:26:54Z|00394|binding|INFO|Setting lport b22a4b91-ffcc-486e-8d60-13b257ea1fef down in Southbound
Dec 13 08:26:54 compute-0 ovn_controller[148476]: 2025-12-13T08:26:54Z|00395|binding|INFO|Removing iface tapb22a4b91-ff ovn-installed in OVS
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.929 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.933 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:54.942 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:d5:bd 10.100.0.13'], port_security=['fa:16:3e:51:d5:bd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a28a763-a06b-4a67-ad17-93fae3cc602f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b22a4b91-ffcc-486e-8d60-13b257ea1fef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:54.943 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b22a4b91-ffcc-486e-8d60-13b257ea1fef in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc unbound from our chassis
Dec 13 08:26:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:54.945 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:26:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:54.947 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c09a621b-3155-4bc9-9741-fff90dce982a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:54.947 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace which is not needed anymore
Dec 13 08:26:54 compute-0 nova_compute[248510]: 2025-12-13 08:26:54.960 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:54 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Dec 13 08:26:54 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Consumed 2.933s CPU time.
Dec 13 08:26:54 compute-0 systemd-machined[210538]: Machine qemu-49-instance-0000002c terminated.
Dec 13 08:26:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:26:55 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[294917]: [NOTICE]   (294921) : haproxy version is 2.8.14-c23fe91
Dec 13 08:26:55 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[294917]: [NOTICE]   (294921) : path to executable is /usr/sbin/haproxy
Dec 13 08:26:55 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[294917]: [WARNING]  (294921) : Exiting Master process...
Dec 13 08:26:55 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[294917]: [ALERT]    (294921) : Current worker (294923) exited with code 143 (Terminated)
Dec 13 08:26:55 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[294917]: [WARNING]  (294921) : All workers exited. Exiting... (0)
Dec 13 08:26:55 compute-0 systemd[1]: libpod-073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684.scope: Deactivated successfully.
Dec 13 08:26:55 compute-0 conmon[294917]: conmon 073dbf4880f1e4dc8078 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684.scope/container/memory.events
Dec 13 08:26:55 compute-0 podman[295338]: 2025-12-13 08:26:55.113383901 +0000 UTC m=+0.056612328 container died 073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.117 248514 DEBUG nova.compute.manager [None req-209d608a-96ca-40b1-850d-3fbcc5b1cb60 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684-userdata-shm.mount: Deactivated successfully.
Dec 13 08:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f325cb887c014836ab95cd0dedb32785391e7b8c70230ccce073c9eb9ab282ff-merged.mount: Deactivated successfully.
Dec 13 08:26:55 compute-0 podman[295338]: 2025-12-13 08:26:55.156559176 +0000 UTC m=+0.099787603 container cleanup 073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:26:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 180 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.0 MiB/s wr, 230 op/s
Dec 13 08:26:55 compute-0 systemd[1]: libpod-conmon-073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684.scope: Deactivated successfully.
Dec 13 08:26:55 compute-0 podman[295378]: 2025-12-13 08:26:55.22685502 +0000 UTC m=+0.046256192 container remove 073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 08:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:55.233 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[90a2f2b8-7511-47f9-aea4-d9d860317781]: (4, ('Sat Dec 13 08:26:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684)\n073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684\nSat Dec 13 08:26:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684)\n073dbf4880f1e4dc8078e3b73f90b467d99b7b7a79575e0c9b2d2a41c9ebf684\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:55.235 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9072b767-dddf-49c2-92d7-ed31929061ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:55.236 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.238 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:55 compute-0 kernel: tap87bd91d0-e0: left promiscuous mode
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.256 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:55.260 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f7581f3-f423-4377-8cdc-a64418c0841f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:55.275 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[60d6f7fa-f401-4986-b0cb-91440398d428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:55.276 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea375b3c-ab02-486e-bc0d-542e33ad9cae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:55.290 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[90b61145-fe47-4245-9851-da0ceefb64c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698450, 'reachable_time': 39485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295397, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:55.294 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:55.294 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0fceb6e9-2be2-4fee-98aa-b18e307d3b02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d87bd91d0\x2deead\x2d49b6\x2d8f92\x2df8d0dba555dc.mount: Deactivated successfully.
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.309 248514 DEBUG nova.network.neutron [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Successfully updated port: 57241dd9-27dd-49bc-befb-1ef45674d6be _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.333 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "refresh_cache-05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.334 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquired lock "refresh_cache-05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.334 248514 DEBUG nova.network.neutron [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:55.408 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:55.408 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:55.409 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.494 248514 DEBUG nova.network.neutron [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Successfully updated port: b1a08ea3-3044-4caa-a944-744bd324adc9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.516 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.517 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquired lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.517 248514 DEBUG nova.network.neutron [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.596 248514 DEBUG nova.network.neutron [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.640 248514 DEBUG nova.compute.manager [req-10d2e5f3-106b-47f6-8dfb-203efce0344c req-ac6542a3-e683-4ba2-96ac-ab3a98f14dea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Received event network-changed-b1a08ea3-3044-4caa-a944-744bd324adc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.641 248514 DEBUG nova.compute.manager [req-10d2e5f3-106b-47f6-8dfb-203efce0344c req-ac6542a3-e683-4ba2-96ac-ab3a98f14dea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Refreshing instance network info cache due to event network-changed-b1a08ea3-3044-4caa-a944-744bd324adc9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.641 248514 DEBUG oslo_concurrency.lockutils [req-10d2e5f3-106b-47f6-8dfb-203efce0344c req-ac6542a3-e683-4ba2-96ac-ab3a98f14dea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:26:55 compute-0 nova_compute[248510]: 2025-12-13 08:26:55.717 248514 DEBUG nova.network.neutron [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.087 248514 DEBUG nova.compute.manager [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Received event network-vif-unplugged-b22a4b91-ffcc-486e-8d60-13b257ea1fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.088 248514 DEBUG oslo_concurrency.lockutils [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.088 248514 DEBUG oslo_concurrency.lockutils [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.088 248514 DEBUG oslo_concurrency.lockutils [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.089 248514 DEBUG nova.compute.manager [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] No waiting events found dispatching network-vif-unplugged-b22a4b91-ffcc-486e-8d60-13b257ea1fef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.089 248514 WARNING nova.compute.manager [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Received unexpected event network-vif-unplugged-b22a4b91-ffcc-486e-8d60-13b257ea1fef for instance with vm_state suspended and task_state None.
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.090 248514 DEBUG nova.compute.manager [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-changed-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.090 248514 DEBUG nova.compute.manager [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Refreshing instance network info cache due to event network-changed-57241dd9-27dd-49bc-befb-1ef45674d6be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.091 248514 DEBUG oslo_concurrency.lockutils [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.479 248514 DEBUG nova.network.neutron [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Updating instance_info_cache with network_info: [{"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.591 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Releasing lock "refresh_cache-05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.591 248514 DEBUG nova.compute.manager [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Instance network_info: |[{"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.593 248514 DEBUG oslo_concurrency.lockutils [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.594 248514 DEBUG nova.network.neutron [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Refreshing network info cache for port 57241dd9-27dd-49bc-befb-1ef45674d6be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.599 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Start _get_guest_xml network_info=[{"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.606 248514 DEBUG nova.network.neutron [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Updating instance_info_cache with network_info: [{"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": "b1a08ea3-3044-4caa-a944-744bd324adc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.611 248514 WARNING nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.617 248514 DEBUG nova.virt.libvirt.host [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.618 248514 DEBUG nova.virt.libvirt.host [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.622 248514 DEBUG nova.virt.libvirt.host [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.623 248514 DEBUG nova.virt.libvirt.host [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.623 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.624 248514 DEBUG nova.virt.hardware [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.625 248514 DEBUG nova.virt.hardware [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.625 248514 DEBUG nova.virt.hardware [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.626 248514 DEBUG nova.virt.hardware [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.626 248514 DEBUG nova.virt.hardware [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.627 248514 DEBUG nova.virt.hardware [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.627 248514 DEBUG nova.virt.hardware [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.628 248514 DEBUG nova.virt.hardware [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.628 248514 DEBUG nova.virt.hardware [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.629 248514 DEBUG nova.virt.hardware [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.629 248514 DEBUG nova.virt.hardware [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.635 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.712 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Releasing lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.712 248514 DEBUG nova.compute.manager [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Instance network_info: |[{"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": "b1a08ea3-3044-4caa-a944-744bd324adc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.713 248514 DEBUG oslo_concurrency.lockutils [req-10d2e5f3-106b-47f6-8dfb-203efce0344c req-ac6542a3-e683-4ba2-96ac-ab3a98f14dea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.714 248514 DEBUG nova.network.neutron [req-10d2e5f3-106b-47f6-8dfb-203efce0344c req-ac6542a3-e683-4ba2-96ac-ab3a98f14dea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Refreshing network info cache for port b1a08ea3-3044-4caa-a944-744bd324adc9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.717 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Start _get_guest_xml network_info=[{"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": "b1a08ea3-3044-4caa-a944-744bd324adc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.722 248514 WARNING nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.727 248514 DEBUG nova.virt.libvirt.host [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.727 248514 DEBUG nova.virt.libvirt.host [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.730 248514 DEBUG nova.virt.libvirt.host [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.730 248514 DEBUG nova.virt.libvirt.host [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.730 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.731 248514 DEBUG nova.virt.hardware [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.731 248514 DEBUG nova.virt.hardware [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.731 248514 DEBUG nova.virt.hardware [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.731 248514 DEBUG nova.virt.hardware [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.731 248514 DEBUG nova.virt.hardware [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.732 248514 DEBUG nova.virt.hardware [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.732 248514 DEBUG nova.virt.hardware [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.732 248514 DEBUG nova.virt.hardware [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.732 248514 DEBUG nova.virt.hardware [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.733 248514 DEBUG nova.virt.hardware [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.733 248514 DEBUG nova.virt.hardware [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:26:56 compute-0 nova_compute[248510]: 2025-12-13 08:26:56.736 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:56 compute-0 ceph-mon[76537]: pgmap v1862: 321 pgs: 321 active+clean; 180 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.0 MiB/s wr, 230 op/s
Dec 13 08:26:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 180 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 123 op/s
Dec 13 08:26:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:26:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1440384431' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.220 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.246 248514 DEBUG nova.storage.rbd_utils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.250 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:26:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3923789039' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.305 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.327 248514 DEBUG nova.storage.rbd_utils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 2e52d555-08dd-49fb-a73a-eded391e154c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.331 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.431 248514 DEBUG nova.compute.manager [None req-db369bdf-4011-4fec-ac0d-d7fbf8c21aa4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.482 248514 INFO nova.compute.manager [None req-db369bdf-4011-4fec-ac0d-d7fbf8c21aa4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] instance snapshotting
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.483 248514 WARNING nova.compute.manager [None req-db369bdf-4011-4fec-ac0d-d7fbf8c21aa4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] trying to snapshot a non-running instance: (state: 4 expected: 1)
Dec 13 08:26:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:26:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148673449' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.783 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.784 248514 DEBUG nova.virt.libvirt.vif [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:26:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-571322410',display_name='tempest-ServerDiskConfigTestJSON-server-571322410',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-571322410',id=46,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-br4xsr3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConf
igTestJSON-167971983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:26:52Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=05e06a6b-e157-4cd9-88c0-889fa4cfd9fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.785 248514 DEBUG nova.network.os_vif_util [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.786 248514 DEBUG nova.network.os_vif_util [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.787 248514 DEBUG nova.objects.instance [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'pci_devices' on Instance uuid 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.806 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <uuid>05e06a6b-e157-4cd9-88c0-889fa4cfd9fa</uuid>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <name>instance-0000002e</name>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-571322410</nova:name>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:26:56</nova:creationTime>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:user uuid="a5bc32e49dbd4372a006913090b9ef0f">tempest-ServerDiskConfigTestJSON-167971983-project-member</nova:user>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:project uuid="9aea752cb9b648a7aa9b3f634ced797e">tempest-ServerDiskConfigTestJSON-167971983</nova:project>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:port uuid="57241dd9-27dd-49bc-befb-1ef45674d6be">
Dec 13 08:26:57 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <system>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <entry name="serial">05e06a6b-e157-4cd9-88c0-889fa4cfd9fa</entry>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <entry name="uuid">05e06a6b-e157-4cd9-88c0-889fa4cfd9fa</entry>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </system>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <os>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </os>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <features>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </features>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk">
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </source>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk.config">
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </source>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:1b:6e:ff"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <target dev="tap57241dd9-27"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/console.log" append="off"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <video>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </video>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:26:57 compute-0 nova_compute[248510]: </domain>
Dec 13 08:26:57 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.812 248514 DEBUG nova.compute.manager [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Preparing to wait for external event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.813 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.813 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.813 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.814 248514 DEBUG nova.virt.libvirt.vif [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:26:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-571322410',display_name='tempest-ServerDiskConfigTestJSON-server-571322410',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-571322410',id=46,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-br4xsr3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:26:52Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=05e06a6b-e157-4cd9-88c0-889fa4cfd9fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.814 248514 DEBUG nova.network.os_vif_util [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.815 248514 DEBUG nova.network.os_vif_util [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.816 248514 DEBUG os_vif [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.816 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.817 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.818 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.821 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57241dd9-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.821 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57241dd9-27, col_values=(('external_ids', {'iface-id': '57241dd9-27dd-49bc-befb-1ef45674d6be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:6e:ff', 'vm-uuid': '05e06a6b-e157-4cd9-88c0-889fa4cfd9fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.823 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:57 compute-0 NetworkManager[50376]: <info>  [1765614417.8248] manager: (tap57241dd9-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.826 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.831 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.832 248514 INFO os_vif [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27')
Dec 13 08:26:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:26:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2003873112' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.891 248514 INFO nova.virt.libvirt.driver [None req-db369bdf-4011-4fec-ac0d-d7fbf8c21aa4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Beginning cold snapshot process
Dec 13 08:26:57 compute-0 ceph-mon[76537]: pgmap v1863: 321 pgs: 321 active+clean; 180 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 123 op/s
Dec 13 08:26:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1440384431' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3923789039' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2148673449' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2003873112' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.900 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.902 248514 DEBUG nova.virt.libvirt.vif [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:26:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1441755634',display_name='tempest-DeleteServersTestJSON-server-1441755634',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1441755634',id=45,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-a2jlts09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:26:51Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=2e52d555-08dd-49fb-a73a-eded391e154c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": "b1a08ea3-3044-4caa-a944-744bd324adc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.902 248514 DEBUG nova.network.os_vif_util [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": "b1a08ea3-3044-4caa-a944-744bd324adc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.903 248514 DEBUG nova.network.os_vif_util [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:8e:7b,bridge_name='br-int',has_traffic_filtering=True,id=b1a08ea3-3044-4caa-a944-744bd324adc9,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1a08ea3-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.904 248514 DEBUG nova.objects.instance [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e52d555-08dd-49fb-a73a-eded391e154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.931 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.932 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.932 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No VIF found with MAC fa:16:3e:1b:6e:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.933 248514 INFO nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Using config drive
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.950 248514 DEBUG nova.storage.rbd_utils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.957 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <uuid>2e52d555-08dd-49fb-a73a-eded391e154c</uuid>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <name>instance-0000002d</name>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:name>tempest-DeleteServersTestJSON-server-1441755634</nova:name>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:26:56</nova:creationTime>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:user uuid="b6f7967ce16c45d394188c1302b02907">tempest-DeleteServersTestJSON-991966373-project-member</nova:user>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:project uuid="6f6beadbb0244529b8dfc1abff8e8e10">tempest-DeleteServersTestJSON-991966373</nova:project>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <nova:port uuid="b1a08ea3-3044-4caa-a944-744bd324adc9">
Dec 13 08:26:57 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <system>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <entry name="serial">2e52d555-08dd-49fb-a73a-eded391e154c</entry>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <entry name="uuid">2e52d555-08dd-49fb-a73a-eded391e154c</entry>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </system>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <os>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </os>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <features>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </features>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2e52d555-08dd-49fb-a73a-eded391e154c_disk">
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </source>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2e52d555-08dd-49fb-a73a-eded391e154c_disk.config">
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </source>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:26:57 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:e9:8e:7b"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <target dev="tapb1a08ea3-30"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/2e52d555-08dd-49fb-a73a-eded391e154c/console.log" append="off"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <video>
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </video>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:26:57 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:26:57 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:26:57 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:26:57 compute-0 nova_compute[248510]: </domain>
Dec 13 08:26:57 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.958 248514 DEBUG nova.compute.manager [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Preparing to wait for external event network-vif-plugged-b1a08ea3-3044-4caa-a944-744bd324adc9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.959 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.959 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.959 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.960 248514 DEBUG nova.virt.libvirt.vif [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:26:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1441755634',display_name='tempest-DeleteServersTestJSON-server-1441755634',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1441755634',id=45,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-a2jlts09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:26:51Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=2e52d555-08dd-49fb-a73a-eded391e154c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": "b1a08ea3-3044-4caa-a944-744bd324adc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.960 248514 DEBUG nova.network.os_vif_util [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": "b1a08ea3-3044-4caa-a944-744bd324adc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.960 248514 DEBUG nova.network.os_vif_util [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:8e:7b,bridge_name='br-int',has_traffic_filtering=True,id=b1a08ea3-3044-4caa-a944-744bd324adc9,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1a08ea3-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.961 248514 DEBUG os_vif [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:8e:7b,bridge_name='br-int',has_traffic_filtering=True,id=b1a08ea3-3044-4caa-a944-744bd324adc9,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1a08ea3-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.961 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.961 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.962 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.965 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.965 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1a08ea3-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.966 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1a08ea3-30, col_values=(('external_ids', {'iface-id': 'b1a08ea3-3044-4caa-a944-744bd324adc9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:8e:7b', 'vm-uuid': '2e52d555-08dd-49fb-a73a-eded391e154c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.967 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:57 compute-0 NetworkManager[50376]: <info>  [1765614417.9689] manager: (tapb1a08ea3-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.969 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.976 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:57 compute-0 nova_compute[248510]: 2025-12-13 08:26:57.977 248514 INFO os_vif [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:8e:7b,bridge_name='br-int',has_traffic_filtering=True,id=b1a08ea3-3044-4caa-a944-744bd324adc9,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1a08ea3-30')
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.067 248514 DEBUG nova.virt.libvirt.imagebackend [None req-db369bdf-4011-4fec-ac0d-d7fbf8c21aa4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.086 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.086 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.087 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No VIF found with MAC fa:16:3e:e9:8e:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.087 248514 INFO nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Using config drive
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.112 248514 DEBUG nova.storage.rbd_utils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 2e52d555-08dd-49fb-a73a-eded391e154c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.394 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614403.3929067, 5ad89163-9711-4d02-94be-db41412cd173 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.395 248514 INFO nova.compute.manager [-] [instance: 5ad89163-9711-4d02-94be-db41412cd173] VM Stopped (Lifecycle Event)
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.514 248514 DEBUG nova.storage.rbd_utils [None req-db369bdf-4011-4fec-ac0d-d7fbf8c21aa4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(61dbb0d3db9245c4b227d018a780975f) on rbd image(9a28a763-a06b-4a67-ad17-93fae3cc602f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.553 248514 DEBUG nova.compute.manager [None req-abb22223-8ade-443e-a6a7-39f6cc83c9d8 - - - - - -] [instance: 5ad89163-9711-4d02-94be-db41412cd173] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.604 248514 DEBUG nova.network.neutron [req-10d2e5f3-106b-47f6-8dfb-203efce0344c req-ac6542a3-e683-4ba2-96ac-ab3a98f14dea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Updated VIF entry in instance network info cache for port b1a08ea3-3044-4caa-a944-744bd324adc9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.605 248514 DEBUG nova.network.neutron [req-10d2e5f3-106b-47f6-8dfb-203efce0344c req-ac6542a3-e683-4ba2-96ac-ab3a98f14dea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Updating instance_info_cache with network_info: [{"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": "b1a08ea3-3044-4caa-a944-744bd324adc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.626 248514 DEBUG oslo_concurrency.lockutils [req-10d2e5f3-106b-47f6-8dfb-203efce0344c req-ac6542a3-e683-4ba2-96ac-ab3a98f14dea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.689 248514 DEBUG nova.network.neutron [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Updated VIF entry in instance network info cache for port 57241dd9-27dd-49bc-befb-1ef45674d6be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.689 248514 DEBUG nova.network.neutron [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Updating instance_info_cache with network_info: [{"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.711 248514 DEBUG oslo_concurrency.lockutils [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.711 248514 DEBUG nova.compute.manager [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Received event network-vif-plugged-b22a4b91-ffcc-486e-8d60-13b257ea1fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.712 248514 DEBUG oslo_concurrency.lockutils [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.712 248514 DEBUG oslo_concurrency.lockutils [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.712 248514 DEBUG oslo_concurrency.lockutils [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.712 248514 DEBUG nova.compute.manager [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] No waiting events found dispatching network-vif-plugged-b22a4b91-ffcc-486e-8d60-13b257ea1fef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.712 248514 WARNING nova.compute.manager [req-b960c2b1-3cb5-45ea-8a68-90ef7a92bf8e req-138e6a39-c563-4547-a917-c6ed9c5fae65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Received unexpected event network-vif-plugged-b22a4b91-ffcc-486e-8d60-13b257ea1fef for instance with vm_state suspended and task_state None.
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.905 248514 INFO nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Creating config drive at /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/disk.config
Dec 13 08:26:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.910 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5bbk3qm4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Dec 13 08:26:58 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.950 248514 INFO nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Creating config drive at /var/lib/nova/instances/2e52d555-08dd-49fb-a73a-eded391e154c/disk.config
Dec 13 08:26:58 compute-0 nova_compute[248510]: 2025-12-13 08:26:58.956 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e52d555-08dd-49fb-a73a-eded391e154c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplcs6atbx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.014 248514 DEBUG nova.storage.rbd_utils [None req-db369bdf-4011-4fec-ac0d-d7fbf8c21aa4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] cloning vms/9a28a763-a06b-4a67-ad17-93fae3cc602f_disk@61dbb0d3db9245c4b227d018a780975f to images/12b16a85-c3b5-4d27-b424-146af2cff83d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.054 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5bbk3qm4" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.077 248514 DEBUG nova.storage.rbd_utils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.081 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/disk.config 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.112 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e52d555-08dd-49fb-a73a-eded391e154c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplcs6atbx" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.122 248514 DEBUG nova.storage.rbd_utils [None req-db369bdf-4011-4fec-ac0d-d7fbf8c21aa4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] flattening images/12b16a85-c3b5-4d27-b424-146af2cff83d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:26:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 180 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 147 op/s
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.196 248514 DEBUG nova.storage.rbd_utils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 2e52d555-08dd-49fb-a73a-eded391e154c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.208 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2e52d555-08dd-49fb-a73a-eded391e154c/disk.config 2e52d555-08dd-49fb-a73a-eded391e154c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.362 248514 DEBUG oslo_concurrency.processutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/disk.config 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.363 248514 INFO nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Deleting local config drive /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/disk.config because it was imported into RBD.
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.373 248514 DEBUG nova.storage.rbd_utils [None req-db369bdf-4011-4fec-ac0d-d7fbf8c21aa4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] removing snapshot(61dbb0d3db9245c4b227d018a780975f) on rbd image(9a28a763-a06b-4a67-ad17-93fae3cc602f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.381 248514 DEBUG oslo_concurrency.processutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2e52d555-08dd-49fb-a73a-eded391e154c/disk.config 2e52d555-08dd-49fb-a73a-eded391e154c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.382 248514 INFO nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Deleting local config drive /var/lib/nova/instances/2e52d555-08dd-49fb-a73a-eded391e154c/disk.config because it was imported into RBD.
Dec 13 08:26:59 compute-0 NetworkManager[50376]: <info>  [1765614419.4537] manager: (tap57241dd9-27): new Tun device (/org/freedesktop/NetworkManager/Devices/181)
Dec 13 08:26:59 compute-0 kernel: tap57241dd9-27: entered promiscuous mode
Dec 13 08:26:59 compute-0 NetworkManager[50376]: <info>  [1765614419.4620] manager: (tapb1a08ea3-30): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Dec 13 08:26:59 compute-0 kernel: tapb1a08ea3-30: entered promiscuous mode
Dec 13 08:26:59 compute-0 ovn_controller[148476]: 2025-12-13T08:26:59Z|00396|binding|INFO|Claiming lport 57241dd9-27dd-49bc-befb-1ef45674d6be for this chassis.
Dec 13 08:26:59 compute-0 ovn_controller[148476]: 2025-12-13T08:26:59Z|00397|binding|INFO|57241dd9-27dd-49bc-befb-1ef45674d6be: Claiming fa:16:3e:1b:6e:ff 10.100.0.12
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.504 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.518 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:6e:ff 10.100.0.12'], port_security=['fa:16:3e:1b:6e:ff 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '05e06a6b-e157-4cd9-88c0-889fa4cfd9fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=57241dd9-27dd-49bc-befb-1ef45674d6be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.519 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 57241dd9-27dd-49bc-befb-1ef45674d6be in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 bound to our chassis
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.521 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:26:59 compute-0 systemd-udevd[295795]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:26:59 compute-0 systemd-udevd[295796]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.533 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc94927d-16ec-4ab2-ac4a-564f910ca5b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.534 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c63049d-61 in ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.536 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c63049d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.536 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1621d3bb-d366-4ca7-8441-7535b50dfd76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.537 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[abee2e6b-308b-4b15-bfe2-495c61c8991b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 NetworkManager[50376]: <info>  [1765614419.5452] device (tap57241dd9-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:26:59 compute-0 NetworkManager[50376]: <info>  [1765614419.5459] device (tap57241dd9-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:26:59 compute-0 systemd-machined[210538]: New machine qemu-51-instance-0000002d.
Dec 13 08:26:59 compute-0 NetworkManager[50376]: <info>  [1765614419.5507] device (tapb1a08ea3-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:26:59 compute-0 NetworkManager[50376]: <info>  [1765614419.5514] device (tapb1a08ea3-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:26:59 compute-0 systemd-machined[210538]: New machine qemu-50-instance-0000002e.
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.550 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2b6386-27bd-4a8d-8a9b-25044be69742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.577 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a8ea4c1-ec76-4c36-b31f-6948dd85a52c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000002e.
Dec 13 08:26:59 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000002d.
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.613 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[53f00f6d-72ae-435d-bb32-735f510294c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 NetworkManager[50376]: <info>  [1765614419.6222] manager: (tap6c63049d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.620 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[44ea4c4a-bab0-43dc-913e-d8af4a97fae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 systemd-udevd[295802]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.628 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:59 compute-0 ovn_controller[148476]: 2025-12-13T08:26:59Z|00398|binding|INFO|Claiming lport b1a08ea3-3044-4caa-a944-744bd324adc9 for this chassis.
Dec 13 08:26:59 compute-0 ovn_controller[148476]: 2025-12-13T08:26:59Z|00399|binding|INFO|b1a08ea3-3044-4caa-a944-744bd324adc9: Claiming fa:16:3e:e9:8e:7b 10.100.0.4
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.637 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:8e:7b 10.100.0.4'], port_security=['fa:16:3e:e9:8e:7b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2e52d555-08dd-49fb-a73a-eded391e154c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b1a08ea3-3044-4caa-a944-744bd324adc9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:26:59 compute-0 ovn_controller[148476]: 2025-12-13T08:26:59Z|00400|binding|INFO|Setting lport 57241dd9-27dd-49bc-befb-1ef45674d6be ovn-installed in OVS
Dec 13 08:26:59 compute-0 ovn_controller[148476]: 2025-12-13T08:26:59Z|00401|binding|INFO|Setting lport 57241dd9-27dd-49bc-befb-1ef45674d6be up in Southbound
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.661 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2444e1ea-cba0-4dd4-9a81-3d4f38f920ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.667 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0325fe22-6b1c-41ee-8dd8-e06c3b8b1093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 NetworkManager[50376]: <info>  [1765614419.6984] device (tap6c63049d-60): carrier: link connected
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.706 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3e42b76c-bbef-4863-b9fb-6131960d645c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.726 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d5ccb7-ed86-48e8-9141-a86aaa1e9d00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699691, 'reachable_time': 37224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295837, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.743 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[50b0ae77-62e3-43c9-a284-ef4eef6495b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:c2f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699691, 'tstamp': 699691}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295838, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.759 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[66ab13d1-8cb3-4fb9-abd9-845190b68a19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699691, 'reachable_time': 37224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295839, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 ovn_controller[148476]: 2025-12-13T08:26:59Z|00402|binding|INFO|Setting lport b1a08ea3-3044-4caa-a944-744bd324adc9 ovn-installed in OVS
Dec 13 08:26:59 compute-0 ovn_controller[148476]: 2025-12-13T08:26:59Z|00403|binding|INFO|Setting lport b1a08ea3-3044-4caa-a944-744bd324adc9 up in Southbound
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.790 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.792 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e0dc1dbd-81cd-4468-824d-93261103a960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.840 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5169fdd9-7742-4c80-91ae-ebfc86767438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.842 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.842 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.842 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c63049d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:59 compute-0 NetworkManager[50376]: <info>  [1765614419.8457] manager: (tap6c63049d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:59 compute-0 kernel: tap6c63049d-60: entered promiscuous mode
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.852 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.853 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c63049d-60, col_values=(('external_ids', {'iface-id': 'b410790c-12b7-4a29-87e5-13a29af9c319'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:26:59 compute-0 ovn_controller[148476]: 2025-12-13T08:26:59Z|00404|binding|INFO|Releasing lport b410790c-12b7-4a29-87e5-13a29af9c319 from this chassis (sb_readonly=0)
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.854 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.876 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.877 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3541d4e1-c234-474f-a062-dbedce98ceb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.878 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:26:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:26:59.879 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'env', 'PROCESS_TAG=haproxy-6c63049d-63e9-47af-99e2-ce1403a42891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c63049d-63e9-47af-99e2-ce1403a42891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:26:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Dec 13 08:26:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Dec 13 08:26:59 compute-0 ceph-mon[76537]: osdmap e200: 3 total, 3 up, 3 in
Dec 13 08:26:59 compute-0 ceph-mon[76537]: pgmap v1865: 321 pgs: 321 active+clean; 180 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 147 op/s
Dec 13 08:26:59 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.956 248514 DEBUG nova.storage.rbd_utils [None req-db369bdf-4011-4fec-ac0d-d7fbf8c21aa4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(snap) on rbd image(12b16a85-c3b5-4d27-b424-146af2cff83d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.995 248514 DEBUG nova.compute.manager [req-88ccadbb-d5a1-4f8f-a968-86f84416a445 req-ba4de6c8-052d-47e6-b950-71283d88aa6e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.996 248514 DEBUG oslo_concurrency.lockutils [req-88ccadbb-d5a1-4f8f-a968-86f84416a445 req-ba4de6c8-052d-47e6-b950-71283d88aa6e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.996 248514 DEBUG oslo_concurrency.lockutils [req-88ccadbb-d5a1-4f8f-a968-86f84416a445 req-ba4de6c8-052d-47e6-b950-71283d88aa6e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.997 248514 DEBUG oslo_concurrency.lockutils [req-88ccadbb-d5a1-4f8f-a968-86f84416a445 req-ba4de6c8-052d-47e6-b950-71283d88aa6e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:26:59 compute-0 nova_compute[248510]: 2025-12-13 08:26:59.998 248514 DEBUG nova.compute.manager [req-88ccadbb-d5a1-4f8f-a968-86f84416a445 req-ba4de6c8-052d-47e6-b950-71283d88aa6e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Processing event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.019 248514 DEBUG nova.compute.manager [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.020 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614420.0194128, 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.021 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] VM Started (Lifecycle Event)
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.024 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.032 248514 INFO nova.virt.libvirt.driver [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Instance spawned successfully.
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.032 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:27:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:27:00 compute-0 podman[295937]: 2025-12-13 08:27:00.297782258 +0000 UTC m=+0.056229199 container create adaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.322 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.331 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.332 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.332 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.333 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.333 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.334 248514 DEBUG nova.virt.libvirt.driver [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.339 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:00 compute-0 podman[295937]: 2025-12-13 08:27:00.269606922 +0000 UTC m=+0.028053883 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:27:00 compute-0 systemd[1]: Started libpod-conmon-adaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428.scope.
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.386 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.387 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614420.0204537, 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.387 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] VM Paused (Lifecycle Event)
Dec 13 08:27:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8924b4bd683d0d1a96ce2be5709406ddd9f1ab4a0c8f2faf0563e9edc4e4ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.415 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:00 compute-0 podman[295937]: 2025-12-13 08:27:00.41542945 +0000 UTC m=+0.173876391 container init adaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.421 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614420.0238974, 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.421 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] VM Resumed (Lifecycle Event)
Dec 13 08:27:00 compute-0 podman[295937]: 2025-12-13 08:27:00.423330905 +0000 UTC m=+0.181777846 container start adaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.427 248514 INFO nova.compute.manager [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Took 8.06 seconds to spawn the instance on the hypervisor.
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.428 248514 DEBUG nova.compute.manager [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:00 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[295990]: [NOTICE]   (296001) : New worker (296003) forked
Dec 13 08:27:00 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[295990]: [NOTICE]   (296001) : Loading success.
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.448 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.452 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.492 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b1a08ea3-3044-4caa-a944-744bd324adc9 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.494 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.503 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[773627a4-1a86-447e-a6bb-cc59f5b8e2f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.504 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85372fca-a1 in ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.506 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85372fca-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.506 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d0407629-90e9-4202-a45c-508c0a9d90a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.507 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[574b0f39-4579-435c-9bac-866219715a81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.517 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4876e470-b21e-45a7-97f2-9fa96bcb4ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.529 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[03112fea-6d33-4da1-a598-456e9621ec13]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.558 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6a345265-1eb9-4e37-81ab-e15e7b7b5d6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 systemd-udevd[295825]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:27:00 compute-0 NetworkManager[50376]: <info>  [1765614420.5644] manager: (tap85372fca-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/185)
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.562 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[152fd326-e8c4-4ed7-9086-1565b1e22c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.599 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6892adab-6ae6-4694-8e31-3f94029575e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.604 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[17b1db65-43d1-4c4c-b139-e744f1f91ac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 NetworkManager[50376]: <info>  [1765614420.6310] device (tap85372fca-a0): carrier: link connected
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.636 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb62b37-9f01-4575-b1fa-b632aba06d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.659 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3255c610-9b46-4e60-b9e2-9f746c2c8a7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699784, 'reachable_time': 28422, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296025, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[153c5510-f9be-4b74-8ac5-bb5613277353]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:30d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699784, 'tstamp': 699784}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296026, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.701 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9156ad3c-a9c3-41ed-be60-9f0cf6a72bd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699784, 'reachable_time': 28422, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296027, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b4078024-b7c9-4842-b219-c08a11743421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.787 248514 INFO nova.compute.manager [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Took 10.25 seconds to build instance.
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.799 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614420.462759, 2e52d555-08dd-49fb-a73a-eded391e154c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.800 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] VM Started (Lifecycle Event)
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.821 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ca78548a-6497-47bb-9f7a-abdfcd9a27a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.823 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.823 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.824 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85372fca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.826 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:00 compute-0 NetworkManager[50376]: <info>  [1765614420.8274] manager: (tap85372fca-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Dec 13 08:27:00 compute-0 kernel: tap85372fca-a0: entered promiscuous mode
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.833 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85372fca-a0, col_values=(('external_ids', {'iface-id': '2c0f4981-0ad0-478e-b1ad-551d231022ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:00 compute-0 ovn_controller[148476]: 2025-12-13T08:27:00Z|00405|binding|INFO|Releasing lport 2c0f4981-0ad0-478e-b1ad-551d231022ad from this chassis (sb_readonly=0)
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.851 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.858 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.859 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ab4076-3587-4446-8139-b0c754edeea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.860 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:27:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:00.862 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'env', 'PROCESS_TAG=haproxy-85372fca-ab50-48b6-8c21-507f630c205a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85372fca-ab50-48b6-8c21-507f630c205a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.892 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.896 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614420.4628413, 2e52d555-08dd-49fb-a73a-eded391e154c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:00 compute-0 nova_compute[248510]: 2025-12-13 08:27:00.896 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] VM Paused (Lifecycle Event)
Dec 13 08:27:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Dec 13 08:27:00 compute-0 ceph-mon[76537]: osdmap e201: 3 total, 3 up, 3 in
Dec 13 08:27:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Dec 13 08:27:00 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Dec 13 08:27:01 compute-0 nova_compute[248510]: 2025-12-13 08:27:01.054 248514 DEBUG oslo_concurrency.lockutils [None req-7f2a5c02-1d01-4e23-9a4e-cb3ef7b79f95 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:01 compute-0 nova_compute[248510]: 2025-12-13 08:27:01.091 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:01 compute-0 nova_compute[248510]: 2025-12-13 08:27:01.095 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 180 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 170 B/s wr, 29 op/s
Dec 13 08:27:01 compute-0 nova_compute[248510]: 2025-12-13 08:27:01.195 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:01 compute-0 nova_compute[248510]: 2025-12-13 08:27:01.196 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:01 compute-0 nova_compute[248510]: 2025-12-13 08:27:01.241 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:01 compute-0 nova_compute[248510]: 2025-12-13 08:27:01.242 248514 DEBUG nova.compute.manager [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:27:01 compute-0 podman[296062]: 2025-12-13 08:27:01.25596669 +0000 UTC m=+0.063551059 container create 66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:27:01 compute-0 systemd[1]: Started libpod-conmon-66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa.scope.
Dec 13 08:27:01 compute-0 podman[296062]: 2025-12-13 08:27:01.225515078 +0000 UTC m=+0.033099497 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:27:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac85f2c891ca9ad238d2609a122f5ad4c4b069e530674acfa08d9e4f0f7ce65c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:01 compute-0 podman[296062]: 2025-12-13 08:27:01.350153974 +0000 UTC m=+0.157738353 container init 66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 08:27:01 compute-0 podman[296062]: 2025-12-13 08:27:01.359754361 +0000 UTC m=+0.167338730 container start 66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:27:01 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[296077]: [NOTICE]   (296081) : New worker (296083) forked
Dec 13 08:27:01 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[296077]: [NOTICE]   (296081) : Loading success.
Dec 13 08:27:01 compute-0 nova_compute[248510]: 2025-12-13 08:27:01.523 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:01 compute-0 nova_compute[248510]: 2025-12-13 08:27:01.524 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:01 compute-0 nova_compute[248510]: 2025-12-13 08:27:01.531 248514 DEBUG nova.virt.hardware [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:27:01 compute-0 nova_compute[248510]: 2025-12-13 08:27:01.532 248514 INFO nova.compute.claims [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:27:01 compute-0 nova_compute[248510]: 2025-12-13 08:27:01.752 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:01 compute-0 ceph-mon[76537]: osdmap e202: 3 total, 3 up, 3 in
Dec 13 08:27:01 compute-0 ceph-mon[76537]: pgmap v1868: 321 pgs: 321 active+clean; 180 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 170 B/s wr, 29 op/s
Dec 13 08:27:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:27:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3752788989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.310 248514 DEBUG nova.compute.manager [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.311 248514 DEBUG oslo_concurrency.lockutils [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.311 248514 DEBUG oslo_concurrency.lockutils [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.312 248514 DEBUG oslo_concurrency.lockutils [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.312 248514 DEBUG nova.compute.manager [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] No waiting events found dispatching network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.312 248514 WARNING nova.compute.manager [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received unexpected event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be for instance with vm_state active and task_state None.
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.312 248514 DEBUG nova.compute.manager [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Received event network-vif-plugged-b1a08ea3-3044-4caa-a944-744bd324adc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.313 248514 DEBUG oslo_concurrency.lockutils [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.313 248514 DEBUG oslo_concurrency.lockutils [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.313 248514 DEBUG oslo_concurrency.lockutils [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.313 248514 DEBUG nova.compute.manager [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Processing event network-vif-plugged-b1a08ea3-3044-4caa-a944-744bd324adc9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.313 248514 DEBUG nova.compute.manager [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Received event network-vif-plugged-b1a08ea3-3044-4caa-a944-744bd324adc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.314 248514 DEBUG oslo_concurrency.lockutils [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.314 248514 DEBUG oslo_concurrency.lockutils [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.314 248514 DEBUG oslo_concurrency.lockutils [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.314 248514 DEBUG nova.compute.manager [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] No waiting events found dispatching network-vif-plugged-b1a08ea3-3044-4caa-a944-744bd324adc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.314 248514 WARNING nova.compute.manager [req-26fb2086-195b-4fb4-aafc-06e6833839b5 req-fc7e007c-6193-4e50-8499-a76d524ca2b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Received unexpected event network-vif-plugged-b1a08ea3-3044-4caa-a944-744bd324adc9 for instance with vm_state building and task_state spawning.
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.315 248514 DEBUG nova.compute.manager [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.321 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.321 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614422.32087, 2e52d555-08dd-49fb-a73a-eded391e154c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.322 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] VM Resumed (Lifecycle Event)
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.327 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Instance spawned successfully.
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.327 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.348 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.357 248514 DEBUG nova.compute.provider_tree [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.389 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.393 248514 DEBUG nova.scheduler.client.report [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.399 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.399 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.400 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.400 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.400 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.401 248514 DEBUG nova.virt.libvirt.driver [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.420 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.524 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.636 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.637 248514 DEBUG nova.compute.manager [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.683 248514 INFO nova.virt.libvirt.driver [None req-db369bdf-4011-4fec-ac0d-d7fbf8c21aa4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Snapshot image upload complete
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.684 248514 INFO nova.compute.manager [None req-db369bdf-4011-4fec-ac0d-d7fbf8c21aa4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Took 5.20 seconds to snapshot the instance on the hypervisor.
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.718 248514 INFO nova.compute.manager [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Took 11.29 seconds to spawn the instance on the hypervisor.
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.719 248514 DEBUG nova.compute.manager [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.830 248514 DEBUG nova.compute.manager [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.830 248514 DEBUG nova.network.neutron [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.842 248514 INFO nova.compute.manager [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Took 12.74 seconds to build instance.
Dec 13 08:27:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3752788989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.967 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:02 compute-0 nova_compute[248510]: 2025-12-13 08:27:02.974 248514 INFO nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.010 248514 DEBUG nova.compute.manager [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.015 248514 DEBUG oslo_concurrency.lockutils [None req-1f7f69d0-e2e5-4b19-a4f2-dd173576c6eb b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 201 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 636 KiB/s wr, 189 op/s
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.197 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "41602b99-e7f2-450c-885e-51d07a1236d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.198 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.254 248514 DEBUG nova.policy [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '65a6b617130a42ac9c3d9b4abf6a1cfb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3490ad817e664ff6b12c4ea88192b667', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.419 248514 DEBUG nova.compute.manager [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.421 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.421 248514 INFO nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Creating image(s)
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.450 248514 DEBUG nova.storage.rbd_utils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 98240df6-1cba-40e1-833c-24611270ed83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.488 248514 DEBUG nova.storage.rbd_utils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 98240df6-1cba-40e1-833c-24611270ed83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.520 248514 DEBUG nova.storage.rbd_utils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 98240df6-1cba-40e1-833c-24611270ed83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.527 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.570 248514 DEBUG nova.compute.manager [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.608 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.610 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.611 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.612 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.638 248514 DEBUG nova.storage.rbd_utils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 98240df6-1cba-40e1-833c-24611270ed83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.643 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 98240df6-1cba-40e1-833c-24611270ed83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.856 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.856 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.867 248514 DEBUG nova.virt.hardware [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.867 248514 INFO nova.compute.claims [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:27:03 compute-0 nova_compute[248510]: 2025-12-13 08:27:03.934 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 98240df6-1cba-40e1-833c-24611270ed83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:03 compute-0 ceph-mon[76537]: pgmap v1869: 321 pgs: 321 active+clean; 201 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 636 KiB/s wr, 189 op/s
Dec 13 08:27:04 compute-0 nova_compute[248510]: 2025-12-13 08:27:04.001 248514 DEBUG nova.storage.rbd_utils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] resizing rbd image 98240df6-1cba-40e1-833c-24611270ed83_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:27:04 compute-0 nova_compute[248510]: 2025-12-13 08:27:04.087 248514 DEBUG nova.objects.instance [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'migration_context' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:04 compute-0 nova_compute[248510]: 2025-12-13 08:27:04.233 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:27:04 compute-0 nova_compute[248510]: 2025-12-13 08:27:04.234 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Ensure instance console log exists: /var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:27:04 compute-0 nova_compute[248510]: 2025-12-13 08:27:04.234 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:04 compute-0 nova_compute[248510]: 2025-12-13 08:27:04.235 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:04 compute-0 nova_compute[248510]: 2025-12-13 08:27:04.235 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:04 compute-0 nova_compute[248510]: 2025-12-13 08:27:04.878 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:04 compute-0 nova_compute[248510]: 2025-12-13 08:27:04.967 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:27:05 compute-0 nova_compute[248510]: 2025-12-13 08:27:05.101 248514 INFO nova.compute.manager [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Rebuilding instance
Dec 13 08:27:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 237 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 8.9 MiB/s rd, 3.7 MiB/s wr, 336 op/s
Dec 13 08:27:05 compute-0 nova_compute[248510]: 2025-12-13 08:27:05.336 248514 DEBUG nova.objects.instance [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:27:05 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2994635498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:05 compute-0 nova_compute[248510]: 2025-12-13 08:27:05.535 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:05 compute-0 nova_compute[248510]: 2025-12-13 08:27:05.542 248514 DEBUG nova.compute.provider_tree [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:27:05 compute-0 nova_compute[248510]: 2025-12-13 08:27:05.942 248514 DEBUG nova.compute.manager [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:06 compute-0 nova_compute[248510]: 2025-12-13 08:27:06.120 248514 DEBUG nova.scheduler.client.report [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:27:06 compute-0 ceph-mon[76537]: pgmap v1870: 321 pgs: 321 active+clean; 237 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 8.9 MiB/s rd, 3.7 MiB/s wr, 336 op/s
Dec 13 08:27:06 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2994635498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:06 compute-0 nova_compute[248510]: 2025-12-13 08:27:06.254 248514 DEBUG nova.network.neutron [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Successfully created port: bdc94f2e-b14e-4e39-bea0-978ff56ff722 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:27:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 237 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.9 MiB/s wr, 263 op/s
Dec 13 08:27:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Dec 13 08:27:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Dec 13 08:27:07 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Dec 13 08:27:07 compute-0 nova_compute[248510]: 2025-12-13 08:27:07.389 248514 DEBUG nova.objects.instance [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'pci_requests' on Instance uuid 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:07 compute-0 nova_compute[248510]: 2025-12-13 08:27:07.455 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:07 compute-0 nova_compute[248510]: 2025-12-13 08:27:07.456 248514 DEBUG nova.compute.manager [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:27:07 compute-0 nova_compute[248510]: 2025-12-13 08:27:07.465 248514 DEBUG nova.objects.instance [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'pci_devices' on Instance uuid 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:07 compute-0 nova_compute[248510]: 2025-12-13 08:27:07.887 248514 DEBUG nova.objects.instance [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'resources' on Instance uuid 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:07 compute-0 nova_compute[248510]: 2025-12-13 08:27:07.969 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:08 compute-0 ceph-mon[76537]: pgmap v1871: 321 pgs: 321 active+clean; 237 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.9 MiB/s wr, 263 op/s
Dec 13 08:27:08 compute-0 ceph-mon[76537]: osdmap e203: 3 total, 3 up, 3 in
Dec 13 08:27:08 compute-0 nova_compute[248510]: 2025-12-13 08:27:08.444 248514 DEBUG nova.compute.manager [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:27:08 compute-0 nova_compute[248510]: 2025-12-13 08:27:08.445 248514 DEBUG nova.network.neutron [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:27:08 compute-0 nova_compute[248510]: 2025-12-13 08:27:08.456 248514 DEBUG nova.objects.instance [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'migration_context' on Instance uuid 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.151 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.153 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.158 248514 DEBUG nova.objects.instance [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.159 248514 INFO nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.170 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:27:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 227 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 8.3 MiB/s rd, 5.2 MiB/s wr, 374 op/s
Dec 13 08:27:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:27:09
Dec 13 08:27:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:27:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:27:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'volumes', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data']
Dec 13 08:27:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.223 248514 DEBUG nova.policy [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '65a6b617130a42ac9c3d9b4abf6a1cfb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3490ad817e664ff6b12c4ea88192b667', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.879 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.913 248514 DEBUG nova.compute.manager [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.916 248514 DEBUG oslo_concurrency.lockutils [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "9a28a763-a06b-4a67-ad17-93fae3cc602f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.916 248514 DEBUG oslo_concurrency.lockutils [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.917 248514 DEBUG oslo_concurrency.lockutils [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.917 248514 DEBUG oslo_concurrency.lockutils [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.917 248514 DEBUG oslo_concurrency.lockutils [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.918 248514 INFO nova.compute.manager [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Terminating instance
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.921 248514 DEBUG nova.compute.manager [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.922 248514 DEBUG nova.compute.manager [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.928 248514 DEBUG oslo_concurrency.lockutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "2e52d555-08dd-49fb-a73a-eded391e154c" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.929 248514 DEBUG oslo_concurrency.lockutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.930 248514 INFO nova.compute.manager [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Shelving
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.949 248514 INFO nova.virt.libvirt.driver [-] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Instance destroyed successfully.
Dec 13 08:27:09 compute-0 nova_compute[248510]: 2025-12-13 08:27:09.951 248514 DEBUG nova.objects.instance [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'resources' on Instance uuid 9a28a763-a06b-4a67-ad17-93fae3cc602f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.055 248514 DEBUG nova.virt.libvirt.vif [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1563022373',display_name='tempest-ImagesTestJSON-server-1563022373',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1563022373',id=44,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:26:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-h9h6w1n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:02Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=9a28a763-a06b-4a67-ad17-93fae3cc602f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "address": "fa:16:3e:51:d5:bd", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb22a4b91-ff", "ovs_interfaceid": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.055 248514 DEBUG nova.network.os_vif_util [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "address": "fa:16:3e:51:d5:bd", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb22a4b91-ff", "ovs_interfaceid": "b22a4b91-ffcc-486e-8d60-13b257ea1fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.056 248514 DEBUG nova.network.os_vif_util [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:d5:bd,bridge_name='br-int',has_traffic_filtering=True,id=b22a4b91-ffcc-486e-8d60-13b257ea1fef,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb22a4b91-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.057 248514 DEBUG os_vif [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:d5:bd,bridge_name='br-int',has_traffic_filtering=True,id=b22a4b91-ffcc-486e-8d60-13b257ea1fef,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb22a4b91-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.063 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.064 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb22a4b91-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.078 248514 INFO os_vif [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:d5:bd,bridge_name='br-int',has_traffic_filtering=True,id=b22a4b91-ffcc-486e-8d60-13b257ea1fef,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb22a4b91-ff')
Dec 13 08:27:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:27:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Dec 13 08:27:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Dec 13 08:27:10 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.107 248514 DEBUG nova.virt.libvirt.driver [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.115 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614415.1154637, 9a28a763-a06b-4a67-ad17-93fae3cc602f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.116 248514 INFO nova.compute.manager [-] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] VM Stopped (Lifecycle Event)
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.207 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.208 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.220 248514 DEBUG nova.virt.hardware [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.221 248514 INFO nova.compute.claims [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.242 248514 DEBUG nova.compute.manager [None req-5fb70610-4512-4955-85d1-f74689a4f9fd - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.246 248514 DEBUG nova.compute.manager [None req-5fb70610-4512-4955-85d1-f74689a4f9fd - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: deleting, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:10 compute-0 ceph-mon[76537]: pgmap v1873: 321 pgs: 321 active+clean; 227 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 8.3 MiB/s rd, 5.2 MiB/s wr, 374 op/s
Dec 13 08:27:10 compute-0 ceph-mon[76537]: osdmap e204: 3 total, 3 up, 3 in
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.370 248514 INFO nova.virt.libvirt.driver [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Deleting instance files /var/lib/nova/instances/9a28a763-a06b-4a67-ad17-93fae3cc602f_del
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.371 248514 INFO nova.virt.libvirt.driver [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Deletion of /var/lib/nova/instances/9a28a763-a06b-4a67-ad17-93fae3cc602f_del complete
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.491 248514 INFO nova.compute.manager [None req-5fb70610-4512-4955-85d1-f74689a4f9fd - - - - - -] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] During sync_power_state the instance has a pending task (deleting). Skip.
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.581 248514 DEBUG nova.compute.manager [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.583 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.583 248514 INFO nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Creating image(s)
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:27:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.612 248514 DEBUG nova.storage.rbd_utils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 41602b99-e7f2-450c-885e-51d07a1236d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.641 248514 DEBUG nova.storage.rbd_utils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 41602b99-e7f2-450c-885e-51d07a1236d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.666 248514 DEBUG nova.storage.rbd_utils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 41602b99-e7f2-450c-885e-51d07a1236d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.671 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.724 248514 INFO nova.compute.manager [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Took 0.80 seconds to destroy the instance on the hypervisor.
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.725 248514 DEBUG oslo.service.loopingcall [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.725 248514 DEBUG nova.compute.manager [-] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.726 248514 DEBUG nova.network.neutron [-] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.749 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.750 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.750 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.751 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.769 248514 DEBUG nova.storage.rbd_utils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 41602b99-e7f2-450c-885e-51d07a1236d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:10 compute-0 nova_compute[248510]: 2025-12-13 08:27:10.773 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 41602b99-e7f2-450c-885e-51d07a1236d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.041 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.086 248514 DEBUG nova.network.neutron [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Successfully updated port: bdc94f2e-b14e-4e39-bea0-978ff56ff722 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.094 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 41602b99-e7f2-450c-885e-51d07a1236d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.163 248514 DEBUG nova.storage.rbd_utils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] resizing rbd image 41602b99-e7f2-450c-885e-51d07a1236d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:27:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 227 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 4.9 MiB/s wr, 265 op/s
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.259 248514 DEBUG nova.objects.instance [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'migration_context' on Instance uuid 41602b99-e7f2-450c-885e-51d07a1236d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.457 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.458 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquired lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.461 248514 DEBUG nova.network.neutron [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.462 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.463 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Ensure instance console log exists: /var/lib/nova/instances/41602b99-e7f2-450c-885e-51d07a1236d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.463 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.464 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.464 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:27:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3358362214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.683 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.691 248514 DEBUG nova.compute.provider_tree [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.737 248514 DEBUG nova.compute.manager [req-1196649d-a15a-42d5-a4e2-72dd1e763dc6 req-0021b9f9-b94f-480b-a5b3-79456f4d3167 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-changed-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.738 248514 DEBUG nova.compute.manager [req-1196649d-a15a-42d5-a4e2-72dd1e763dc6 req-0021b9f9-b94f-480b-a5b3-79456f4d3167 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Refreshing instance network info cache due to event network-changed-bdc94f2e-b14e-4e39-bea0-978ff56ff722. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.738 248514 DEBUG oslo_concurrency.lockutils [req-1196649d-a15a-42d5-a4e2-72dd1e763dc6 req-0021b9f9-b94f-480b-a5b3-79456f4d3167 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.748 248514 DEBUG nova.scheduler.client.report [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:27:11 compute-0 nova_compute[248510]: 2025-12-13 08:27:11.990 248514 DEBUG nova.network.neutron [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:27:12 compute-0 nova_compute[248510]: 2025-12-13 08:27:12.071 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:12 compute-0 nova_compute[248510]: 2025-12-13 08:27:12.072 248514 DEBUG nova.compute.manager [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:27:12 compute-0 ovn_controller[148476]: 2025-12-13T08:27:12Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:6e:ff 10.100.0.12
Dec 13 08:27:12 compute-0 ovn_controller[148476]: 2025-12-13T08:27:12Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:6e:ff 10.100.0.12
Dec 13 08:27:12 compute-0 nova_compute[248510]: 2025-12-13 08:27:12.198 248514 DEBUG nova.compute.manager [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:27:12 compute-0 nova_compute[248510]: 2025-12-13 08:27:12.199 248514 DEBUG nova.network.neutron [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:27:12 compute-0 nova_compute[248510]: 2025-12-13 08:27:12.251 248514 INFO nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:27:12 compute-0 ceph-mon[76537]: pgmap v1875: 321 pgs: 321 active+clean; 227 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 4.9 MiB/s wr, 265 op/s
Dec 13 08:27:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3358362214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:12 compute-0 nova_compute[248510]: 2025-12-13 08:27:12.552 248514 DEBUG nova.network.neutron [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Successfully created port: c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:27:12 compute-0 nova_compute[248510]: 2025-12-13 08:27:12.576 248514 DEBUG nova.compute.manager [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:27:12 compute-0 nova_compute[248510]: 2025-12-13 08:27:12.680 248514 DEBUG nova.policy [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '65a6b617130a42ac9c3d9b4abf6a1cfb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3490ad817e664ff6b12c4ea88192b667', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:27:12 compute-0 nova_compute[248510]: 2025-12-13 08:27:12.773 248514 DEBUG nova.network.neutron [-] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.139 248514 INFO nova.compute.manager [-] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Took 2.41 seconds to deallocate network for instance.
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.146 248514 DEBUG nova.compute.manager [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.147 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.148 248514 INFO nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Creating image(s)
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.171 248514 DEBUG nova.storage.rbd_utils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 238 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.200 248514 DEBUG nova.storage.rbd_utils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.229 248514 DEBUG nova.storage.rbd_utils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.233 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.316 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.317 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.318 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.318 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.347 248514 DEBUG nova.storage.rbd_utils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.352 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.412 248514 DEBUG oslo_concurrency.lockutils [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.413 248514 DEBUG oslo_concurrency.lockutils [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.551 248514 DEBUG oslo_concurrency.processutils [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:13 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.771 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.859 248514 DEBUG nova.storage.rbd_utils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] resizing rbd image e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:27:13 compute-0 nova_compute[248510]: 2025-12-13 08:27:13.938 248514 DEBUG nova.objects.instance [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'migration_context' on Instance uuid e6e0fdaf-f934-4e56-8e59-4c4475bacd26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.041 248514 DEBUG nova.network.neutron [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Updating instance_info_cache with network_info: [{"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:27:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701585601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.146 248514 DEBUG oslo_concurrency.processutils [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.151 248514 DEBUG nova.compute.provider_tree [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:27:14 compute-0 ceph-mon[76537]: pgmap v1876: 321 pgs: 321 active+clean; 238 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Dec 13 08:27:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2701585601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.312 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.313 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Ensure instance console log exists: /var/lib/nova/instances/e6e0fdaf-f934-4e56-8e59-4c4475bacd26/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.313 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.314 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.314 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.324 248514 DEBUG nova.compute.manager [req-edbef303-0790-4d46-8348-982c80e1dc67 req-4acb01d6-cd4c-4a2e-adda-7b0deb1032aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a28a763-a06b-4a67-ad17-93fae3cc602f] Received event network-vif-deleted-b22a4b91-ffcc-486e-8d60-13b257ea1fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.338 248514 DEBUG nova.scheduler.client.report [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.399 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Releasing lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.400 248514 DEBUG nova.compute.manager [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance network_info: |[{"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.400 248514 DEBUG oslo_concurrency.lockutils [req-1196649d-a15a-42d5-a4e2-72dd1e763dc6 req-0021b9f9-b94f-480b-a5b3-79456f4d3167 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.400 248514 DEBUG nova.network.neutron [req-1196649d-a15a-42d5-a4e2-72dd1e763dc6 req-0021b9f9-b94f-480b-a5b3-79456f4d3167 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Refreshing network info cache for port bdc94f2e-b14e-4e39-bea0-978ff56ff722 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.403 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Start _get_guest_xml network_info=[{"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.407 248514 WARNING nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.411 248514 DEBUG nova.virt.libvirt.host [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.412 248514 DEBUG nova.virt.libvirt.host [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.414 248514 DEBUG nova.virt.libvirt.host [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.415 248514 DEBUG nova.virt.libvirt.host [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.415 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.415 248514 DEBUG nova.virt.hardware [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.416 248514 DEBUG nova.virt.hardware [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.416 248514 DEBUG nova.virt.hardware [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.416 248514 DEBUG nova.virt.hardware [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.416 248514 DEBUG nova.virt.hardware [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.416 248514 DEBUG nova.virt.hardware [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.417 248514 DEBUG nova.virt.hardware [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.417 248514 DEBUG nova.virt.hardware [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.417 248514 DEBUG nova.virt.hardware [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.417 248514 DEBUG nova.virt.hardware [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.417 248514 DEBUG nova.virt.hardware [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.420 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.535 248514 DEBUG oslo_concurrency.lockutils [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.597 248514 DEBUG nova.network.neutron [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Successfully created port: b03c2424-77e3-49e1-b55f-f317911025b6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.726 248514 INFO nova.scheduler.client.report [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Deleted allocations for instance 9a28a763-a06b-4a67-ad17-93fae3cc602f
Dec 13 08:27:14 compute-0 nova_compute[248510]: 2025-12-13 08:27:14.881 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:27:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1752879647' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:27:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:27:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/273406201' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:27:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1752879647' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.010 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.034 248514 DEBUG nova.storage.rbd_utils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 98240df6-1cba-40e1-833c-24611270ed83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.038 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.125 248514 DEBUG oslo_concurrency.lockutils [None req-0b0ed078-377c-4bf1-a545-3116601861aa 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9a28a763-a06b-4a67-ad17-93fae3cc602f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 297 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 11 MiB/s wr, 349 op/s
Dec 13 08:27:15 compute-0 ovn_controller[148476]: 2025-12-13T08:27:15Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e9:8e:7b 10.100.0.4
Dec 13 08:27:15 compute-0 ovn_controller[148476]: 2025-12-13T08:27:15Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e9:8e:7b 10.100.0.4
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.228 248514 DEBUG nova.network.neutron [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Successfully updated port: c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:27:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1752879647' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:27:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/273406201' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1752879647' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:27:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:27:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766941941' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.580 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.582 248514 DEBUG nova.virt.libvirt.vif [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-612802196',display_name='tempest-ListServerFiltersTestJSON-instance-612802196',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-612802196',id=47,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-wpyl99bg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:03Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=98240df6-1cba-40e1-833c-24611270ed83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.582 248514 DEBUG nova.network.os_vif_util [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.583 248514 DEBUG nova.network.os_vif_util [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.584 248514 DEBUG nova.objects.instance [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.780 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "refresh_cache-41602b99-e7f2-450c-885e-51d07a1236d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.781 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquired lock "refresh_cache-41602b99-e7f2-450c-885e-51d07a1236d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.781 248514 DEBUG nova.network.neutron [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.863 248514 DEBUG nova.compute.manager [req-b3e2a2a1-0b82-41a7-ad00-e97065cae549 req-9239ed4a-f912-4bcd-a7d8-86ba58919edb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received event network-changed-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.864 248514 DEBUG nova.compute.manager [req-b3e2a2a1-0b82-41a7-ad00-e97065cae549 req-9239ed4a-f912-4bcd-a7d8-86ba58919edb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Refreshing instance network info cache due to event network-changed-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.864 248514 DEBUG oslo_concurrency.lockutils [req-b3e2a2a1-0b82-41a7-ad00-e97065cae549 req-9239ed4a-f912-4bcd-a7d8-86ba58919edb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-41602b99-e7f2-450c-885e-51d07a1236d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.886 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:27:15 compute-0 nova_compute[248510]:   <uuid>98240df6-1cba-40e1-833c-24611270ed83</uuid>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   <name>instance-0000002f</name>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-612802196</nova:name>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:27:14</nova:creationTime>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:27:15 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:27:15 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:27:15 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:27:15 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:27:15 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:27:15 compute-0 nova_compute[248510]:         <nova:user uuid="65a6b617130a42ac9c3d9b4abf6a1cfb">tempest-ListServerFiltersTestJSON-1229542462-project-member</nova:user>
Dec 13 08:27:15 compute-0 nova_compute[248510]:         <nova:project uuid="3490ad817e664ff6b12c4ea88192b667">tempest-ListServerFiltersTestJSON-1229542462</nova:project>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:27:15 compute-0 nova_compute[248510]:         <nova:port uuid="bdc94f2e-b14e-4e39-bea0-978ff56ff722">
Dec 13 08:27:15 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <system>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <entry name="serial">98240df6-1cba-40e1-833c-24611270ed83</entry>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <entry name="uuid">98240df6-1cba-40e1-833c-24611270ed83</entry>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     </system>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   <os>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   </os>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   <features>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   </features>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/98240df6-1cba-40e1-833c-24611270ed83_disk">
Dec 13 08:27:15 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       </source>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:27:15 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/98240df6-1cba-40e1-833c-24611270ed83_disk.config">
Dec 13 08:27:15 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       </source>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:27:15 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:d6:ce:b2"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <target dev="tapbdc94f2e-b1"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83/console.log" append="off"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <video>
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     </video>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:27:15 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:27:15 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:27:15 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:27:15 compute-0 nova_compute[248510]: </domain>
Dec 13 08:27:15 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.887 248514 DEBUG nova.compute.manager [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Preparing to wait for external event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.888 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.888 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.888 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.889 248514 DEBUG nova.virt.libvirt.vif [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-612802196',display_name='tempest-ListServerFiltersTestJSON-instance-612802196',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-612802196',id=47,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-wpyl99bg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:03Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=98240df6-1cba-40e1-833c-24611270ed83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.890 248514 DEBUG nova.network.os_vif_util [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.890 248514 DEBUG nova.network.os_vif_util [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.891 248514 DEBUG os_vif [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.892 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.892 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.892 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.898 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbdc94f2e-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.899 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbdc94f2e-b1, col_values=(('external_ids', {'iface-id': 'bdc94f2e-b14e-4e39-bea0-978ff56ff722', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:ce:b2', 'vm-uuid': '98240df6-1cba-40e1-833c-24611270ed83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:15 compute-0 NetworkManager[50376]: <info>  [1765614435.9018] manager: (tapbdc94f2e-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.909 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.911 248514 INFO os_vif [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1')
Dec 13 08:27:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:15.982 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:27:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:15.984 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.986 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.988 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:15 compute-0 nova_compute[248510]: 2025-12-13 08:27:15.989 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:16 compute-0 ceph-mon[76537]: pgmap v1877: 321 pgs: 321 active+clean; 297 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 11 MiB/s wr, 349 op/s
Dec 13 08:27:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2766941941' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:16 compute-0 nova_compute[248510]: 2025-12-13 08:27:16.350 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:27:16 compute-0 nova_compute[248510]: 2025-12-13 08:27:16.350 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:27:16 compute-0 nova_compute[248510]: 2025-12-13 08:27:16.351 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] No VIF found with MAC fa:16:3e:d6:ce:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:27:16 compute-0 nova_compute[248510]: 2025-12-13 08:27:16.352 248514 INFO nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Using config drive
Dec 13 08:27:16 compute-0 nova_compute[248510]: 2025-12-13 08:27:16.373 248514 DEBUG nova.storage.rbd_utils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 98240df6-1cba-40e1-833c-24611270ed83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:16 compute-0 nova_compute[248510]: 2025-12-13 08:27:16.700 248514 DEBUG nova.network.neutron [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:27:16 compute-0 nova_compute[248510]: 2025-12-13 08:27:16.750 248514 DEBUG nova.network.neutron [req-1196649d-a15a-42d5-a4e2-72dd1e763dc6 req-0021b9f9-b94f-480b-a5b3-79456f4d3167 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Updated VIF entry in instance network info cache for port bdc94f2e-b14e-4e39-bea0-978ff56ff722. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:27:16 compute-0 nova_compute[248510]: 2025-12-13 08:27:16.751 248514 DEBUG nova.network.neutron [req-1196649d-a15a-42d5-a4e2-72dd1e763dc6 req-0021b9f9-b94f-480b-a5b3-79456f4d3167 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Updating instance_info_cache with network_info: [{"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:16 compute-0 nova_compute[248510]: 2025-12-13 08:27:16.909 248514 DEBUG nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:27:16 compute-0 nova_compute[248510]: 2025-12-13 08:27:16.922 248514 DEBUG oslo_concurrency.lockutils [req-1196649d-a15a-42d5-a4e2-72dd1e763dc6 req-0021b9f9-b94f-480b-a5b3-79456f4d3167 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:17 compute-0 nova_compute[248510]: 2025-12-13 08:27:17.010 248514 DEBUG nova.network.neutron [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Successfully updated port: b03c2424-77e3-49e1-b55f-f317911025b6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:27:17 compute-0 sudo[296780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:27:17 compute-0 sudo[296780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:27:17 compute-0 sudo[296780]: pam_unix(sudo:session): session closed for user root
Dec 13 08:27:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 297 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 9.2 MiB/s wr, 281 op/s
Dec 13 08:27:17 compute-0 sudo[296823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 08:27:17 compute-0 podman[296806]: 2025-12-13 08:27:17.220732236 +0000 UTC m=+0.065104848 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 08:27:17 compute-0 sudo[296823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:27:17 compute-0 podman[296805]: 2025-12-13 08:27:17.231308897 +0000 UTC m=+0.078947469 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:27:17 compute-0 podman[296804]: 2025-12-13 08:27:17.290432775 +0000 UTC m=+0.139126043 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:27:17 compute-0 nova_compute[248510]: 2025-12-13 08:27:17.550 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "refresh_cache-e6e0fdaf-f934-4e56-8e59-4c4475bacd26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:17 compute-0 nova_compute[248510]: 2025-12-13 08:27:17.551 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquired lock "refresh_cache-e6e0fdaf-f934-4e56-8e59-4c4475bacd26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:17 compute-0 nova_compute[248510]: 2025-12-13 08:27:17.551 248514 DEBUG nova.network.neutron [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:27:17 compute-0 nova_compute[248510]: 2025-12-13 08:27:17.566 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:17 compute-0 nova_compute[248510]: 2025-12-13 08:27:17.567 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:17 compute-0 nova_compute[248510]: 2025-12-13 08:27:17.576 248514 DEBUG nova.virt.hardware [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:27:17 compute-0 nova_compute[248510]: 2025-12-13 08:27:17.577 248514 INFO nova.compute.claims [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:27:17 compute-0 podman[296942]: 2025-12-13 08:27:17.725513831 +0000 UTC m=+0.055476760 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:27:17 compute-0 podman[296942]: 2025-12-13 08:27:17.826627466 +0000 UTC m=+0.156590415 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:27:18 compute-0 ceph-mon[76537]: pgmap v1878: 321 pgs: 321 active+clean; 297 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 9.2 MiB/s wr, 281 op/s
Dec 13 08:27:18 compute-0 sudo[296823]: pam_unix(sudo:session): session closed for user root
Dec 13 08:27:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:27:18 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:27:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:27:18 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:27:18 compute-0 sudo[297131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:27:18 compute-0 sudo[297131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:27:18 compute-0 sudo[297131]: pam_unix(sudo:session): session closed for user root
Dec 13 08:27:18 compute-0 nova_compute[248510]: 2025-12-13 08:27:18.705 248514 DEBUG nova.network.neutron [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:27:18 compute-0 nova_compute[248510]: 2025-12-13 08:27:18.726 248514 INFO nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Creating config drive at /var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83/disk.config
Dec 13 08:27:18 compute-0 nova_compute[248510]: 2025-12-13 08:27:18.731 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjgqz0096 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:18 compute-0 sudo[297156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:27:18 compute-0 sudo[297156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:27:18 compute-0 nova_compute[248510]: 2025-12-13 08:27:18.769 248514 DEBUG nova.compute.manager [req-5bbd93ad-b8fd-427f-bf47-eba6b77919a5 req-3a10cc6b-93cd-4e90-b485-fd59ef52b195 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-changed-b03c2424-77e3-49e1-b55f-f317911025b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:18 compute-0 nova_compute[248510]: 2025-12-13 08:27:18.770 248514 DEBUG nova.compute.manager [req-5bbd93ad-b8fd-427f-bf47-eba6b77919a5 req-3a10cc6b-93cd-4e90-b485-fd59ef52b195 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Refreshing instance network info cache due to event network-changed-b03c2424-77e3-49e1-b55f-f317911025b6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:27:18 compute-0 nova_compute[248510]: 2025-12-13 08:27:18.770 248514 DEBUG oslo_concurrency.lockutils [req-5bbd93ad-b8fd-427f-bf47-eba6b77919a5 req-3a10cc6b-93cd-4e90-b485-fd59ef52b195 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-e6e0fdaf-f934-4e56-8e59-4c4475bacd26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:18 compute-0 nova_compute[248510]: 2025-12-13 08:27:18.874 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjgqz0096" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:18 compute-0 nova_compute[248510]: 2025-12-13 08:27:18.898 248514 DEBUG nova.storage.rbd_utils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 98240df6-1cba-40e1-833c-24611270ed83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:18 compute-0 nova_compute[248510]: 2025-12-13 08:27:18.901 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83/disk.config 98240df6-1cba-40e1-833c-24611270ed83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:19 compute-0 nova_compute[248510]: 2025-12-13 08:27:19.037 248514 DEBUG oslo_concurrency.processutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83/disk.config 98240df6-1cba-40e1-833c-24611270ed83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:19 compute-0 nova_compute[248510]: 2025-12-13 08:27:19.038 248514 INFO nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Deleting local config drive /var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83/disk.config because it was imported into RBD.
Dec 13 08:27:19 compute-0 kernel: tapbdc94f2e-b1: entered promiscuous mode
Dec 13 08:27:19 compute-0 NetworkManager[50376]: <info>  [1765614439.1064] manager: (tapbdc94f2e-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Dec 13 08:27:19 compute-0 ovn_controller[148476]: 2025-12-13T08:27:19Z|00406|binding|INFO|Claiming lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 for this chassis.
Dec 13 08:27:19 compute-0 ovn_controller[148476]: 2025-12-13T08:27:19Z|00407|binding|INFO|bdc94f2e-b14e-4e39-bea0-978ff56ff722: Claiming fa:16:3e:d6:ce:b2 10.100.0.10
Dec 13 08:27:19 compute-0 nova_compute[248510]: 2025-12-13 08:27:19.107 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:19 compute-0 systemd-udevd[297245]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:27:19 compute-0 NetworkManager[50376]: <info>  [1765614439.1515] device (tapbdc94f2e-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:27:19 compute-0 NetworkManager[50376]: <info>  [1765614439.1521] device (tapbdc94f2e-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:27:19 compute-0 nova_compute[248510]: 2025-12-13 08:27:19.161 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:19 compute-0 ovn_controller[148476]: 2025-12-13T08:27:19Z|00408|binding|INFO|Setting lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 ovn-installed in OVS
Dec 13 08:27:19 compute-0 nova_compute[248510]: 2025-12-13 08:27:19.170 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 339 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 841 KiB/s rd, 9.4 MiB/s wr, 247 op/s
Dec 13 08:27:19 compute-0 systemd-machined[210538]: New machine qemu-52-instance-0000002f.
Dec 13 08:27:19 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-0000002f.
Dec 13 08:27:19 compute-0 nova_compute[248510]: 2025-12-13 08:27:19.313 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:27:19 compute-0 sudo[297156]: pam_unix(sudo:session): session closed for user root
Dec 13 08:27:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:27:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:27:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:27:19 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:27:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:27:19 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:27:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:27:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:27:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:27:19 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:27:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:27:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:27:19 compute-0 sudo[297273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:27:19 compute-0 sudo[297273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:27:19 compute-0 sudo[297273]: pam_unix(sudo:session): session closed for user root
Dec 13 08:27:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:27:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:27:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:27:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:27:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:27:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:27:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:27:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:27:19 compute-0 sudo[297298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:27:19 compute-0 sudo[297298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:27:19 compute-0 nova_compute[248510]: 2025-12-13 08:27:19.794 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614439.7933996, 98240df6-1cba-40e1-833c-24611270ed83 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:19 compute-0 nova_compute[248510]: 2025-12-13 08:27:19.795 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] VM Started (Lifecycle Event)
Dec 13 08:27:19 compute-0 nova_compute[248510]: 2025-12-13 08:27:19.915 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:19 compute-0 podman[297379]: 2025-12-13 08:27:19.979096275 +0000 UTC m=+0.045557105 container create bda9b6dfec11481631553fdc7271a212ba9c99dc34a652a53ebc30a01ba7fcbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 08:27:20 compute-0 systemd[1]: Started libpod-conmon-bda9b6dfec11481631553fdc7271a212ba9c99dc34a652a53ebc30a01ba7fcbe.scope.
Dec 13 08:27:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:27:20 compute-0 podman[297379]: 2025-12-13 08:27:19.959374819 +0000 UTC m=+0.025835679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:27:20 compute-0 podman[297379]: 2025-12-13 08:27:20.070387048 +0000 UTC m=+0.136847908 container init bda9b6dfec11481631553fdc7271a212ba9c99dc34a652a53ebc30a01ba7fcbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:27:20 compute-0 podman[297379]: 2025-12-13 08:27:20.078728043 +0000 UTC m=+0.145188873 container start bda9b6dfec11481631553fdc7271a212ba9c99dc34a652a53ebc30a01ba7fcbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:27:20 compute-0 podman[297379]: 2025-12-13 08:27:20.081721777 +0000 UTC m=+0.148182607 container attach bda9b6dfec11481631553fdc7271a212ba9c99dc34a652a53ebc30a01ba7fcbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ardinghelli, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 08:27:20 compute-0 quizzical_ardinghelli[297395]: 167 167
Dec 13 08:27:20 compute-0 systemd[1]: libpod-bda9b6dfec11481631553fdc7271a212ba9c99dc34a652a53ebc30a01ba7fcbe.scope: Deactivated successfully.
Dec 13 08:27:20 compute-0 podman[297379]: 2025-12-13 08:27:20.085825979 +0000 UTC m=+0.152286809 container died bda9b6dfec11481631553fdc7271a212ba9c99dc34a652a53ebc30a01ba7fcbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 08:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f826b62578c8e2e358981c57406c96399364484384ec4becfe10353e0e7445aa-merged.mount: Deactivated successfully.
Dec 13 08:27:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:27:20 compute-0 podman[297379]: 2025-12-13 08:27:20.133122176 +0000 UTC m=+0.199583006 container remove bda9b6dfec11481631553fdc7271a212ba9c99dc34a652a53ebc30a01ba7fcbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ardinghelli, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 13 08:27:20 compute-0 systemd[1]: libpod-conmon-bda9b6dfec11481631553fdc7271a212ba9c99dc34a652a53ebc30a01ba7fcbe.scope: Deactivated successfully.
Dec 13 08:27:20 compute-0 nova_compute[248510]: 2025-12-13 08:27:20.319 248514 DEBUG nova.virt.libvirt.driver [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:27:20 compute-0 podman[297419]: 2025-12-13 08:27:20.322477008 +0000 UTC m=+0.044055218 container create a4637afe63cf257616723fa2cbba4de3eada484a5557f9b0e5f669512dbbc8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cohen, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 08:27:20 compute-0 systemd[1]: Started libpod-conmon-a4637afe63cf257616723fa2cbba4de3eada484a5557f9b0e5f669512dbbc8b1.scope.
Dec 13 08:27:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9368394a41a030cba58c06def4c1b74158d70f7ace0da95a215936a102969f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9368394a41a030cba58c06def4c1b74158d70f7ace0da95a215936a102969f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9368394a41a030cba58c06def4c1b74158d70f7ace0da95a215936a102969f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9368394a41a030cba58c06def4c1b74158d70f7ace0da95a215936a102969f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9368394a41a030cba58c06def4c1b74158d70f7ace0da95a215936a102969f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:20 compute-0 podman[297419]: 2025-12-13 08:27:20.303976801 +0000 UTC m=+0.025555021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:27:20 compute-0 podman[297419]: 2025-12-13 08:27:20.407871485 +0000 UTC m=+0.129449705 container init a4637afe63cf257616723fa2cbba4de3eada484a5557f9b0e5f669512dbbc8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cohen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:27:20 compute-0 podman[297419]: 2025-12-13 08:27:20.415282828 +0000 UTC m=+0.136861028 container start a4637afe63cf257616723fa2cbba4de3eada484a5557f9b0e5f669512dbbc8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:27:20 compute-0 podman[297419]: 2025-12-13 08:27:20.418995439 +0000 UTC m=+0.140573659 container attach a4637afe63cf257616723fa2cbba4de3eada484a5557f9b0e5f669512dbbc8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cohen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:27:20 compute-0 ovn_controller[148476]: 2025-12-13T08:27:20Z|00409|binding|INFO|Setting lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 up in Southbound
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.474 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:b2 10.100.0.10'], port_security=['fa:16:3e:d6:ce:b2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '98240df6-1cba-40e1-833c-24611270ed83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bdc94f2e-b14e-4e39-bea0-978ff56ff722) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.477 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bdc94f2e-b14e-4e39-bea0-978ff56ff722 in datapath 7576f079-0439-46aa-98af-04f80cd254ca bound to our chassis
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.480 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[10dad7d8-7522-4da7-ad48-f7849683af9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.494 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7576f079-01 in ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.498 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7576f079-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.498 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a294913f-3d6e-472f-acd7-230431d269da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.501 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42c5abe0-aeec-4a54-8975-a52271108fc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.515 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e05ee4-ce54-40c1-b2c5-f247cf4191d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 nova_compute[248510]: 2025-12-13 08:27:20.529 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:20 compute-0 nova_compute[248510]: 2025-12-13 08:27:20.534 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614439.7942185, 98240df6-1cba-40e1-833c-24611270ed83 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:20 compute-0 nova_compute[248510]: 2025-12-13 08:27:20.535 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] VM Paused (Lifecycle Event)
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.535 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[45cd2b39-0853-410b-9656-bcd06172e4aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.567 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d903a622-6414-4f07-9487-76e5425d6608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 NetworkManager[50376]: <info>  [1765614440.5797] manager: (tap7576f079-00): new Veth device (/org/freedesktop/NetworkManager/Devices/189)
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.585 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7a818a6d-8b8b-4bcd-a946-5708abe8f051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 ceph-mon[76537]: pgmap v1879: 321 pgs: 321 active+clean; 339 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 841 KiB/s rd, 9.4 MiB/s wr, 247 op/s
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.637 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f8bf3421-3096-4ba9-864f-6d69673e1ac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.641 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[54c01c91-c9aa-4cd7-aa49-e17e7b62ce33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 NetworkManager[50376]: <info>  [1765614440.6779] device (tap7576f079-00): carrier: link connected
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.685 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d5626251-cbe0-41a9-acb0-7af2b3331f74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.707 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aa47c95e-b61b-41c8-ab28-e54d5009d1a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297469, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.730 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[97885e80-d762-46ed-b24b-0709b4fc09f5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2c:f19b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701789, 'tstamp': 701789}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297471, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.750 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffc7dd8-3aa4-4e54-83d6-cf80fc95793f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297474, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.798 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[84713bfb-f981-4d82-8924-7b1fadc22bb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 nova_compute[248510]: 2025-12-13 08:27:20.820 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.869 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[88cb5499-eb2f-4f2e-96e1-7bb079a98b5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.873 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.874 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:27:20 compute-0 NetworkManager[50376]: <info>  [1765614440.8793] manager: (tap7576f079-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Dec 13 08:27:20 compute-0 kernel: tap7576f079-00: entered promiscuous mode
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0025579316887591914 of space, bias 1.0, pg target 0.7673795066277574 quantized to 32 (current 32)
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669256194553672 of space, bias 1.0, pg target 0.20007768583661015 quantized to 32 (current 32)
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.111260673143889e-07 of space, bias 4.0, pg target 0.0008533512807772667 quantized to 16 (current 32)
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:27:20 compute-0 nova_compute[248510]: 2025-12-13 08:27:20.879 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:27:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.884 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:20 compute-0 ovn_controller[148476]: 2025-12-13T08:27:20Z|00410|binding|INFO|Releasing lport 5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8 from this chassis (sb_readonly=0)
Dec 13 08:27:20 compute-0 nova_compute[248510]: 2025-12-13 08:27:20.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.887 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7576f079-0439-46aa-98af-04f80cd254ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7576f079-0439-46aa-98af-04f80cd254ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.887 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f52d18-087c-4ca3-8f00-fa4d76a0b409]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.888 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-7576f079-0439-46aa-98af-04f80cd254ca
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/7576f079-0439-46aa-98af-04f80cd254ca.pid.haproxy
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 7576f079-0439-46aa-98af-04f80cd254ca
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:27:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:20.889 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'env', 'PROCESS_TAG=haproxy-7576f079-0439-46aa-98af-04f80cd254ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7576f079-0439-46aa-98af-04f80cd254ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:27:20 compute-0 nova_compute[248510]: 2025-12-13 08:27:20.901 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:20 compute-0 nova_compute[248510]: 2025-12-13 08:27:20.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:20 compute-0 funny_cohen[297436]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:27:20 compute-0 funny_cohen[297436]: --> All data devices are unavailable
Dec 13 08:27:20 compute-0 systemd[1]: libpod-a4637afe63cf257616723fa2cbba4de3eada484a5557f9b0e5f669512dbbc8b1.scope: Deactivated successfully.
Dec 13 08:27:20 compute-0 podman[297419]: 2025-12-13 08:27:20.968449545 +0000 UTC m=+0.690027745 container died a4637afe63cf257616723fa2cbba4de3eada484a5557f9b0e5f669512dbbc8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Dec 13 08:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9368394a41a030cba58c06def4c1b74158d70f7ace0da95a215936a102969f9-merged.mount: Deactivated successfully.
Dec 13 08:27:21 compute-0 podman[297419]: 2025-12-13 08:27:21.035260734 +0000 UTC m=+0.756838924 container remove a4637afe63cf257616723fa2cbba4de3eada484a5557f9b0e5f669512dbbc8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cohen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Dec 13 08:27:21 compute-0 systemd[1]: libpod-conmon-a4637afe63cf257616723fa2cbba4de3eada484a5557f9b0e5f669512dbbc8b1.scope: Deactivated successfully.
Dec 13 08:27:21 compute-0 sudo[297298]: pam_unix(sudo:session): session closed for user root
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.112 248514 DEBUG nova.network.neutron [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Updating instance_info_cache with network_info: [{"id": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "address": "fa:16:3e:75:73:cb", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76cbcb4-3f", "ovs_interfaceid": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:21 compute-0 sudo[297524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:27:21 compute-0 sudo[297524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:27:21 compute-0 sudo[297524]: pam_unix(sudo:session): session closed for user root
Dec 13 08:27:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 339 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 759 KiB/s rd, 8.4 MiB/s wr, 223 op/s
Dec 13 08:27:21 compute-0 sudo[297559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:27:21 compute-0 sudo[297559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:27:21 compute-0 podman[297595]: 2025-12-13 08:27:21.280820703 +0000 UTC m=+0.051292987 container create 8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:27:21 compute-0 systemd[1]: Started libpod-conmon-8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77.scope.
Dec 13 08:27:21 compute-0 podman[297595]: 2025-12-13 08:27:21.253785256 +0000 UTC m=+0.024257540 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:27:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4cf732e89cfd948592bd93195ade27ca38a2f5cf5bd9d4f7b2379dcc373df76/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:21 compute-0 podman[297595]: 2025-12-13 08:27:21.379697522 +0000 UTC m=+0.150169806 container init 8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:27:21 compute-0 podman[297595]: 2025-12-13 08:27:21.385118856 +0000 UTC m=+0.155591140 container start 8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:27:21 compute-0 neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca[297611]: [NOTICE]   (297615) : New worker (297618) forked
Dec 13 08:27:21 compute-0 neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca[297611]: [NOTICE]   (297615) : Loading success.
Dec 13 08:27:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:27:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1072239009' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.472 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.479 248514 DEBUG nova.compute.provider_tree [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:27:21 compute-0 kernel: tap57241dd9-27 (unregistering): left promiscuous mode
Dec 13 08:27:21 compute-0 NetworkManager[50376]: <info>  [1765614441.5386] device (tap57241dd9-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:27:21 compute-0 ovn_controller[148476]: 2025-12-13T08:27:21Z|00411|binding|INFO|Releasing lport 57241dd9-27dd-49bc-befb-1ef45674d6be from this chassis (sb_readonly=0)
Dec 13 08:27:21 compute-0 ovn_controller[148476]: 2025-12-13T08:27:21Z|00412|binding|INFO|Setting lport 57241dd9-27dd-49bc-befb-1ef45674d6be down in Southbound
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.555 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:21 compute-0 ovn_controller[148476]: 2025-12-13T08:27:21Z|00413|binding|INFO|Removing iface tap57241dd9-27 ovn-installed in OVS
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:21 compute-0 podman[297639]: 2025-12-13 08:27:21.560498923 +0000 UTC m=+0.052664300 container create a2ee02507c9d3b09ca72886e36a267aac6873c0af325d5f7144aab1adaaef0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:21 compute-0 systemd[1]: Started libpod-conmon-a2ee02507c9d3b09ca72886e36a267aac6873c0af325d5f7144aab1adaaef0e3.scope.
Dec 13 08:27:21 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Dec 13 08:27:21 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002e.scope: Consumed 12.974s CPU time.
Dec 13 08:27:21 compute-0 systemd-machined[210538]: Machine qemu-50-instance-0000002e terminated.
Dec 13 08:27:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:27:21 compute-0 podman[297639]: 2025-12-13 08:27:21.538617934 +0000 UTC m=+0.030783341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:27:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1072239009' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:21 compute-0 podman[297639]: 2025-12-13 08:27:21.644462375 +0000 UTC m=+0.136627802 container init a2ee02507c9d3b09ca72886e36a267aac6873c0af325d5f7144aab1adaaef0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:27:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:21.648 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:6e:ff 10.100.0.12'], port_security=['fa:16:3e:1b:6e:ff 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '05e06a6b-e157-4cd9-88c0-889fa4cfd9fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=57241dd9-27dd-49bc-befb-1ef45674d6be) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:27:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:21.650 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 57241dd9-27dd-49bc-befb-1ef45674d6be in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 unbound from our chassis
Dec 13 08:27:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:21.651 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c63049d-63e9-47af-99e2-ce1403a42891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:27:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:21.652 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[692edec5-8ead-4224-8c78-43c8a3db3745]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:21.653 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace which is not needed anymore
Dec 13 08:27:21 compute-0 podman[297639]: 2025-12-13 08:27:21.654431631 +0000 UTC m=+0.146597008 container start a2ee02507c9d3b09ca72886e36a267aac6873c0af325d5f7144aab1adaaef0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 08:27:21 compute-0 podman[297639]: 2025-12-13 08:27:21.658274226 +0000 UTC m=+0.150439663 container attach a2ee02507c9d3b09ca72886e36a267aac6873c0af325d5f7144aab1adaaef0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 08:27:21 compute-0 charming_thompson[297658]: 167 167
Dec 13 08:27:21 compute-0 systemd[1]: libpod-a2ee02507c9d3b09ca72886e36a267aac6873c0af325d5f7144aab1adaaef0e3.scope: Deactivated successfully.
Dec 13 08:27:21 compute-0 podman[297639]: 2025-12-13 08:27:21.662266454 +0000 UTC m=+0.154431851 container died a2ee02507c9d3b09ca72886e36a267aac6873c0af325d5f7144aab1adaaef0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_thompson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.690 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e70dd68a9438efec3fb467f10fdd5ccd0ff11cb7b287da0d928c38a011558d4-merged.mount: Deactivated successfully.
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.698 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:21 compute-0 podman[297639]: 2025-12-13 08:27:21.714568005 +0000 UTC m=+0.206733382 container remove a2ee02507c9d3b09ca72886e36a267aac6873c0af325d5f7144aab1adaaef0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_thompson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 08:27:21 compute-0 systemd[1]: libpod-conmon-a2ee02507c9d3b09ca72886e36a267aac6873c0af325d5f7144aab1adaaef0e3.scope: Deactivated successfully.
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.733 248514 DEBUG nova.scheduler.client.report [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.740 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Releasing lock "refresh_cache-41602b99-e7f2-450c-885e-51d07a1236d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.742 248514 DEBUG nova.compute.manager [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Instance network_info: |[{"id": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "address": "fa:16:3e:75:73:cb", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76cbcb4-3f", "ovs_interfaceid": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.742 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.743 248514 DEBUG oslo_concurrency.lockutils [req-b3e2a2a1-0b82-41a7-ad00-e97065cae549 req-9239ed4a-f912-4bcd-a7d8-86ba58919edb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-41602b99-e7f2-450c-885e-51d07a1236d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.743 248514 DEBUG nova.network.neutron [req-b3e2a2a1-0b82-41a7-ad00-e97065cae549 req-9239ed4a-f912-4bcd-a7d8-86ba58919edb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Refreshing network info cache for port c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.745 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Start _get_guest_xml network_info=[{"id": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "address": "fa:16:3e:75:73:cb", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76cbcb4-3f", "ovs_interfaceid": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': 'c10f1898-20b3-4bc9-8a36-2ee01b39c9ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.752 248514 WARNING nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.760 248514 DEBUG nova.virt.libvirt.host [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.761 248514 DEBUG nova.virt.libvirt.host [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.767 248514 DEBUG nova.virt.libvirt.host [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.768 248514 DEBUG nova.virt.libvirt.host [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.769 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.769 248514 DEBUG nova.virt.hardware [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.769 248514 DEBUG nova.virt.hardware [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.770 248514 DEBUG nova.virt.hardware [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.770 248514 DEBUG nova.virt.hardware [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.770 248514 DEBUG nova.virt.hardware [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.770 248514 DEBUG nova.virt.hardware [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.770 248514 DEBUG nova.virt.hardware [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.770 248514 DEBUG nova.virt.hardware [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.771 248514 DEBUG nova.virt.hardware [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.771 248514 DEBUG nova.virt.hardware [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.771 248514 DEBUG nova.virt.hardware [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.775 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.807 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:21 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[295990]: [NOTICE]   (296001) : haproxy version is 2.8.14-c23fe91
Dec 13 08:27:21 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[295990]: [NOTICE]   (296001) : path to executable is /usr/sbin/haproxy
Dec 13 08:27:21 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[295990]: [WARNING]  (296001) : Exiting Master process...
Dec 13 08:27:21 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[295990]: [ALERT]    (296001) : Current worker (296003) exited with code 143 (Terminated)
Dec 13 08:27:21 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[295990]: [WARNING]  (296001) : All workers exited. Exiting... (0)
Dec 13 08:27:21 compute-0 systemd[1]: libpod-adaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428.scope: Deactivated successfully.
Dec 13 08:27:21 compute-0 podman[297689]: 2025-12-13 08:27:21.840771019 +0000 UTC m=+0.061513449 container died adaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:27:21 compute-0 podman[297689]: 2025-12-13 08:27:21.908045549 +0000 UTC m=+0.128787979 container cleanup adaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 08:27:21 compute-0 systemd[1]: libpod-conmon-adaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428.scope: Deactivated successfully.
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.926 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:21 compute-0 nova_compute[248510]: 2025-12-13 08:27:21.927 248514 DEBUG nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:27:21 compute-0 podman[297733]: 2025-12-13 08:27:21.960648697 +0000 UTC m=+0.044685894 container create e9f5ba4843fbb5957e805680a7557bfeca45e279353b191633319d9eb3953940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:27:21 compute-0 podman[297742]: 2025-12-13 08:27:21.980608339 +0000 UTC m=+0.046861497 container remove adaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:27:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:21.989 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[936668be-73f6-479a-be78-3033835853d4]: (4, ('Sat Dec 13 08:27:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (adaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428)\nadaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428\nSat Dec 13 08:27:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (adaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428)\nadaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:21.994 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d164b841-8eae-4795-a58d-229555275777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:21.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:21 compute-0 kernel: tap6c63049d-60: left promiscuous mode
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-db8924b4bd683d0d1a96ce2be5709406ddd9f1ab4a0c8f2faf0563e9edc4e4ea-merged.mount: Deactivated successfully.
Dec 13 08:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-adaef06104dfe3f8245b381c2567dc49a52a2b4914025c385f0642ab8a053428-userdata-shm.mount: Deactivated successfully.
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.022 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:22.025 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86160928-3947-4e85-9132-8dc590d6adaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:22 compute-0 systemd[1]: Started libpod-conmon-e9f5ba4843fbb5957e805680a7557bfeca45e279353b191633319d9eb3953940.scope.
Dec 13 08:27:22 compute-0 podman[297733]: 2025-12-13 08:27:21.943622046 +0000 UTC m=+0.027659263 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:27:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:22.049 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1aac0deb-199c-4eca-8101-e53c1e17b13e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:22.055 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e03f71-4321-4dc6-bca6-8ad1129fb31c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.069 248514 DEBUG nova.network.neutron [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Updating instance_info_cache with network_info: [{"id": "b03c2424-77e3-49e1-b55f-f317911025b6", "address": "fa:16:3e:33:bc:28", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03c2424-77", "ovs_interfaceid": "b03c2424-77e3-49e1-b55f-f317911025b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c05e167f8b5dfb120c7b7623754b45349c983d73b3d18b712525eb276f5601/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c05e167f8b5dfb120c7b7623754b45349c983d73b3d18b712525eb276f5601/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c05e167f8b5dfb120c7b7623754b45349c983d73b3d18b712525eb276f5601/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c05e167f8b5dfb120c7b7623754b45349c983d73b3d18b712525eb276f5601/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:22.083 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2fdfb69f-bc9a-42bc-98ad-87ccbf5af78a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699682, 'reachable_time': 38476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297792, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c63049d\x2d63e9\x2d47af\x2d99e2\x2dce1403a42891.mount: Deactivated successfully.
Dec 13 08:27:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:22.089 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:27:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:22.090 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[90843482-56be-4a48-bc6e-7c2429e8573d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:22 compute-0 podman[297733]: 2025-12-13 08:27:22.095369931 +0000 UTC m=+0.179407158 container init e9f5ba4843fbb5957e805680a7557bfeca45e279353b191633319d9eb3953940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jennings, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:27:22 compute-0 podman[297733]: 2025-12-13 08:27:22.104516456 +0000 UTC m=+0.188553673 container start e9f5ba4843fbb5957e805680a7557bfeca45e279353b191633319d9eb3953940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 08:27:22 compute-0 podman[297733]: 2025-12-13 08:27:22.108601457 +0000 UTC m=+0.192638654 container attach e9f5ba4843fbb5957e805680a7557bfeca45e279353b191633319d9eb3953940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.335 248514 INFO nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Instance shutdown successfully after 13 seconds.
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.346 248514 INFO nova.virt.libvirt.driver [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Instance destroyed successfully.
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.351 248514 INFO nova.virt.libvirt.driver [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Instance destroyed successfully.
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.352 248514 DEBUG nova.virt.libvirt.vif [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-571322410',display_name='tempest-ServerDiskConfigTestJSON-server-571322410',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-571322410',id=46,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-br4xsr3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:03Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=05e06a6b-e157-4cd9-88c0-889fa4cfd9fa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.352 248514 DEBUG nova.network.os_vif_util [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.353 248514 DEBUG nova.network.os_vif_util [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.353 248514 DEBUG os_vif [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.356 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.356 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57241dd9-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.359 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.361 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.364 248514 INFO os_vif [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27')
Dec 13 08:27:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:27:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840017750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:22 compute-0 kind_jennings[297786]: {
Dec 13 08:27:22 compute-0 kind_jennings[297786]:     "0": [
Dec 13 08:27:22 compute-0 kind_jennings[297786]:         {
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "devices": [
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "/dev/loop3"
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             ],
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_name": "ceph_lv0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_size": "21470642176",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "name": "ceph_lv0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "tags": {
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.cluster_name": "ceph",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.crush_device_class": "",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.encrypted": "0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.objectstore": "bluestore",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.osd_id": "0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.type": "block",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.vdo": "0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.with_tpm": "0"
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             },
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "type": "block",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "vg_name": "ceph_vg0"
Dec 13 08:27:22 compute-0 kind_jennings[297786]:         }
Dec 13 08:27:22 compute-0 kind_jennings[297786]:     ],
Dec 13 08:27:22 compute-0 kind_jennings[297786]:     "1": [
Dec 13 08:27:22 compute-0 kind_jennings[297786]:         {
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "devices": [
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "/dev/loop4"
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             ],
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_name": "ceph_lv1",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_size": "21470642176",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "name": "ceph_lv1",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "tags": {
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.cluster_name": "ceph",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.crush_device_class": "",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.encrypted": "0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.objectstore": "bluestore",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.osd_id": "1",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.type": "block",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.vdo": "0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.with_tpm": "0"
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             },
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "type": "block",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "vg_name": "ceph_vg1"
Dec 13 08:27:22 compute-0 kind_jennings[297786]:         }
Dec 13 08:27:22 compute-0 kind_jennings[297786]:     ],
Dec 13 08:27:22 compute-0 kind_jennings[297786]:     "2": [
Dec 13 08:27:22 compute-0 kind_jennings[297786]:         {
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "devices": [
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "/dev/loop5"
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             ],
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_name": "ceph_lv2",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_size": "21470642176",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "name": "ceph_lv2",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "tags": {
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.cluster_name": "ceph",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.crush_device_class": "",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.encrypted": "0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.objectstore": "bluestore",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.osd_id": "2",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.type": "block",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.vdo": "0",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:                 "ceph.with_tpm": "0"
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             },
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "type": "block",
Dec 13 08:27:22 compute-0 kind_jennings[297786]:             "vg_name": "ceph_vg2"
Dec 13 08:27:22 compute-0 kind_jennings[297786]:         }
Dec 13 08:27:22 compute-0 kind_jennings[297786]:     ]
Dec 13 08:27:22 compute-0 kind_jennings[297786]: }
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.401 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:22 compute-0 systemd[1]: libpod-e9f5ba4843fbb5957e805680a7557bfeca45e279353b191633319d9eb3953940.scope: Deactivated successfully.
Dec 13 08:27:22 compute-0 podman[297733]: 2025-12-13 08:27:22.420769839 +0000 UTC m=+0.504807056 container died e9f5ba4843fbb5957e805680a7557bfeca45e279353b191633319d9eb3953940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.428 248514 DEBUG nova.storage.rbd_utils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 41602b99-e7f2-450c-885e-51d07a1236d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.438 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4c05e167f8b5dfb120c7b7623754b45349c983d73b3d18b712525eb276f5601-merged.mount: Deactivated successfully.
Dec 13 08:27:22 compute-0 podman[297733]: 2025-12-13 08:27:22.470174369 +0000 UTC m=+0.554211566 container remove e9f5ba4843fbb5957e805680a7557bfeca45e279353b191633319d9eb3953940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jennings, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 08:27:22 compute-0 systemd[1]: libpod-conmon-e9f5ba4843fbb5957e805680a7557bfeca45e279353b191633319d9eb3953940.scope: Deactivated successfully.
Dec 13 08:27:22 compute-0 sudo[297559]: pam_unix(sudo:session): session closed for user root
Dec 13 08:27:22 compute-0 kernel: tapb1a08ea3-30 (unregistering): left promiscuous mode
Dec 13 08:27:22 compute-0 NetworkManager[50376]: <info>  [1765614442.5838] device (tapb1a08ea3-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.599 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:22 compute-0 ovn_controller[148476]: 2025-12-13T08:27:22Z|00414|binding|INFO|Releasing lport b1a08ea3-3044-4caa-a944-744bd324adc9 from this chassis (sb_readonly=0)
Dec 13 08:27:22 compute-0 ovn_controller[148476]: 2025-12-13T08:27:22Z|00415|binding|INFO|Setting lport b1a08ea3-3044-4caa-a944-744bd324adc9 down in Southbound
Dec 13 08:27:22 compute-0 ovn_controller[148476]: 2025-12-13T08:27:22Z|00416|binding|INFO|Removing iface tapb1a08ea3-30 ovn-installed in OVS
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.605 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:22 compute-0 sudo[297851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:27:22 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Dec 13 08:27:22 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002d.scope: Consumed 13.267s CPU time.
Dec 13 08:27:22 compute-0 sudo[297851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:27:22 compute-0 ceph-mon[76537]: pgmap v1880: 321 pgs: 321 active+clean; 339 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 759 KiB/s rd, 8.4 MiB/s wr, 223 op/s
Dec 13 08:27:22 compute-0 systemd-machined[210538]: Machine qemu-51-instance-0000002d terminated.
Dec 13 08:27:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/840017750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:22 compute-0 sudo[297851]: pam_unix(sudo:session): session closed for user root
Dec 13 08:27:22 compute-0 sudo[297901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:27:22 compute-0 sudo[297901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.731 248514 INFO nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Deleting instance files /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_del
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.732 248514 INFO nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Deletion of /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_del complete
Dec 13 08:27:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:22.902 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:8e:7b 10.100.0.4'], port_security=['fa:16:3e:e9:8e:7b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2e52d555-08dd-49fb-a73a-eded391e154c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b1a08ea3-3044-4caa-a944-744bd324adc9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:27:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:22.904 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b1a08ea3-3044-4caa-a944-744bd324adc9 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis
Dec 13 08:27:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:22.907 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.907 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Releasing lock "refresh_cache-e6e0fdaf-f934-4e56-8e59-4c4475bacd26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.907 248514 DEBUG nova.compute.manager [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Instance network_info: |[{"id": "b03c2424-77e3-49e1-b55f-f317911025b6", "address": "fa:16:3e:33:bc:28", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03c2424-77", "ovs_interfaceid": "b03c2424-77e3-49e1-b55f-f317911025b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.908 248514 DEBUG oslo_concurrency.lockutils [req-5bbd93ad-b8fd-427f-bf47-eba6b77919a5 req-3a10cc6b-93cd-4e90-b485-fd59ef52b195 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-e6e0fdaf-f934-4e56-8e59-4c4475bacd26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.908 248514 DEBUG nova.network.neutron [req-5bbd93ad-b8fd-427f-bf47-eba6b77919a5 req-3a10cc6b-93cd-4e90-b485-fd59ef52b195 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Refreshing network info cache for port b03c2424-77e3-49e1-b55f-f317911025b6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:27:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:22.908 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f824cf22-6b2e-44d8-b0a5-92e0f32e3aeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:22.908 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace which is not needed anymore
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.916 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Start _get_guest_xml network_info=[{"id": "b03c2424-77e3-49e1-b55f-f317911025b6", "address": "fa:16:3e:33:bc:28", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03c2424-77", "ovs_interfaceid": "b03c2424-77e3-49e1-b55f-f317911025b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.926 248514 WARNING nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.931 248514 DEBUG nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.931 248514 DEBUG nova.network.neutron [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.945 248514 DEBUG nova.virt.libvirt.host [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.946 248514 DEBUG nova.virt.libvirt.host [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.953 248514 DEBUG nova.virt.libvirt.host [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.954 248514 DEBUG nova.virt.libvirt.host [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.954 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.955 248514 DEBUG nova.virt.hardware [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b94f71ea-8c32-460c-8f2a-ebdb7978bb9c',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.956 248514 DEBUG nova.virt.hardware [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.956 248514 DEBUG nova.virt.hardware [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.956 248514 DEBUG nova.virt.hardware [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.956 248514 DEBUG nova.virt.hardware [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.956 248514 DEBUG nova.virt.hardware [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.957 248514 DEBUG nova.virt.hardware [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.957 248514 DEBUG nova.virt.hardware [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.957 248514 DEBUG nova.virt.hardware [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.957 248514 DEBUG nova.virt.hardware [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.957 248514 DEBUG nova.virt.hardware [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:27:22 compute-0 nova_compute[248510]: 2025-12-13 08:27:22.960 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:27:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893235527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:23 compute-0 podman[297959]: 2025-12-13 08:27:23.028183917 +0000 UTC m=+0.046878118 container create 87d263d2545ebffe9dd3bf85d3b488022a0ba308a3448eb0e833941919c031af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:27:23 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[296077]: [NOTICE]   (296081) : haproxy version is 2.8.14-c23fe91
Dec 13 08:27:23 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[296077]: [NOTICE]   (296081) : path to executable is /usr/sbin/haproxy
Dec 13 08:27:23 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[296077]: [WARNING]  (296081) : Exiting Master process...
Dec 13 08:27:23 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[296077]: [ALERT]    (296081) : Current worker (296083) exited with code 143 (Terminated)
Dec 13 08:27:23 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[296077]: [WARNING]  (296081) : All workers exited. Exiting... (0)
Dec 13 08:27:23 compute-0 systemd[1]: libpod-66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa.scope: Deactivated successfully.
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.045 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.048 248514 DEBUG nova.virt.libvirt.vif [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-254704430',display_name='tempest-ListServerFiltersTestJSON-instance-254704430',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-254704430',id=48,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-5xaoa0nb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:10Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=41602b99-e7f2-450c-885e-51d07a1236d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "address": "fa:16:3e:75:73:cb", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76cbcb4-3f", "ovs_interfaceid": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.048 248514 DEBUG nova.network.os_vif_util [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "address": "fa:16:3e:75:73:cb", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76cbcb4-3f", "ovs_interfaceid": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.049 248514 DEBUG nova.network.os_vif_util [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76cbcb4-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.051 248514 DEBUG nova.objects.instance [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'pci_devices' on Instance uuid 41602b99-e7f2-450c-885e-51d07a1236d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:23 compute-0 podman[297973]: 2025-12-13 08:27:23.052197029 +0000 UTC m=+0.049028070 container died 66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 13 08:27:23 compute-0 systemd[1]: Started libpod-conmon-87d263d2545ebffe9dd3bf85d3b488022a0ba308a3448eb0e833941919c031af.scope.
Dec 13 08:27:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:27:23 compute-0 podman[297959]: 2025-12-13 08:27:23.00968757 +0000 UTC m=+0.028381771 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.137 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:27:23 compute-0 nova_compute[248510]:   <uuid>41602b99-e7f2-450c-885e-51d07a1236d3</uuid>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   <name>instance-00000030</name>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-254704430</nova:name>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:27:21</nova:creationTime>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:27:23 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:27:23 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:27:23 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:27:23 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:27:23 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:27:23 compute-0 nova_compute[248510]:         <nova:user uuid="65a6b617130a42ac9c3d9b4abf6a1cfb">tempest-ListServerFiltersTestJSON-1229542462-project-member</nova:user>
Dec 13 08:27:23 compute-0 nova_compute[248510]:         <nova:project uuid="3490ad817e664ff6b12c4ea88192b667">tempest-ListServerFiltersTestJSON-1229542462</nova:project>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:27:23 compute-0 nova_compute[248510]:         <nova:port uuid="c76cbcb4-3f53-4cff-a0c1-7d8be5000c32">
Dec 13 08:27:23 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <system>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <entry name="serial">41602b99-e7f2-450c-885e-51d07a1236d3</entry>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <entry name="uuid">41602b99-e7f2-450c-885e-51d07a1236d3</entry>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     </system>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   <os>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   </os>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   <features>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   </features>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/41602b99-e7f2-450c-885e-51d07a1236d3_disk">
Dec 13 08:27:23 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       </source>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:27:23 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/41602b99-e7f2-450c-885e-51d07a1236d3_disk.config">
Dec 13 08:27:23 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       </source>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:27:23 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:75:73:cb"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <target dev="tapc76cbcb4-3f"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/41602b99-e7f2-450c-885e-51d07a1236d3/console.log" append="off"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <video>
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     </video>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:27:23 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:27:23 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:27:23 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:27:23 compute-0 nova_compute[248510]: </domain>
Dec 13 08:27:23 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.138 248514 DEBUG nova.compute.manager [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Preparing to wait for external event network-vif-plugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.138 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.138 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.139 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.139 248514 DEBUG nova.virt.libvirt.vif [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-254704430',display_name='tempest-ListServerFiltersTestJSON-instance-254704430',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-254704430',id=48,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-5xaoa0nb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:10Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=41602b99-e7f2-450c-885e-51d07a1236d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "address": "fa:16:3e:75:73:cb", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76cbcb4-3f", "ovs_interfaceid": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.139 248514 DEBUG nova.network.os_vif_util [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "address": "fa:16:3e:75:73:cb", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76cbcb4-3f", "ovs_interfaceid": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.140 248514 DEBUG nova.network.os_vif_util [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76cbcb4-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.140 248514 DEBUG os_vif [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76cbcb4-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.141 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.141 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.142 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.145 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.145 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc76cbcb4-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.146 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc76cbcb4-3f, col_values=(('external_ids', {'iface-id': 'c76cbcb4-3f53-4cff-a0c1-7d8be5000c32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:73:cb', 'vm-uuid': '41602b99-e7f2-450c-885e-51d07a1236d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:23 compute-0 NetworkManager[50376]: <info>  [1765614443.1489] manager: (tapc76cbcb4-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.149 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.157 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.172 248514 INFO os_vif [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76cbcb4-3f')
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.360 248514 INFO nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:27:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 311 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 698 KiB/s rd, 7.7 MiB/s wr, 218 op/s
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.423 248514 DEBUG nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.427 248514 INFO nova.virt.libvirt.driver [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Instance shutdown successfully after 13 seconds.
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.438 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Instance destroyed successfully.
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.439 248514 DEBUG nova.objects.instance [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2e52d555-08dd-49fb-a73a-eded391e154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:23 compute-0 podman[297959]: 2025-12-13 08:27:23.471080015 +0000 UTC m=+0.489774236 container init 87d263d2545ebffe9dd3bf85d3b488022a0ba308a3448eb0e833941919c031af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 08:27:23 compute-0 podman[297959]: 2025-12-13 08:27:23.47820096 +0000 UTC m=+0.496895161 container start 87d263d2545ebffe9dd3bf85d3b488022a0ba308a3448eb0e833941919c031af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa-userdata-shm.mount: Deactivated successfully.
Dec 13 08:27:23 compute-0 podman[297959]: 2025-12-13 08:27:23.48306366 +0000 UTC m=+0.501757861 container attach 87d263d2545ebffe9dd3bf85d3b488022a0ba308a3448eb0e833941919c031af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac85f2c891ca9ad238d2609a122f5ad4c4b069e530674acfa08d9e4f0f7ce65c-merged.mount: Deactivated successfully.
Dec 13 08:27:23 compute-0 mystifying_blackwell[298008]: 167 167
Dec 13 08:27:23 compute-0 systemd[1]: libpod-87d263d2545ebffe9dd3bf85d3b488022a0ba308a3448eb0e833941919c031af.scope: Deactivated successfully.
Dec 13 08:27:23 compute-0 podman[297973]: 2025-12-13 08:27:23.493390145 +0000 UTC m=+0.490221166 container cleanup 66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 08:27:23 compute-0 podman[297959]: 2025-12-13 08:27:23.494360369 +0000 UTC m=+0.513054570 container died 87d263d2545ebffe9dd3bf85d3b488022a0ba308a3448eb0e833941919c031af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 08:27:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:27:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602147627' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:23 compute-0 systemd[1]: libpod-conmon-66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa.scope: Deactivated successfully.
Dec 13 08:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e7e4763e4a78a92455efba98d543edd860ddb5ee85f1c8e96c8107a27038fc-merged.mount: Deactivated successfully.
Dec 13 08:27:23 compute-0 podman[297959]: 2025-12-13 08:27:23.535301009 +0000 UTC m=+0.553995210 container remove 87d263d2545ebffe9dd3bf85d3b488022a0ba308a3448eb0e833941919c031af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:27:23 compute-0 systemd[1]: libpod-conmon-87d263d2545ebffe9dd3bf85d3b488022a0ba308a3448eb0e833941919c031af.scope: Deactivated successfully.
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.549 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:23 compute-0 podman[298046]: 2025-12-13 08:27:23.576239879 +0000 UTC m=+0.045883343 container remove 66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.576 248514 DEBUG nova.storage.rbd_utils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.584 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:23.583 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1d1f95-e9da-4b66-9ed1-ec19be099773]: (4, ('Sat Dec 13 08:27:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa)\n66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa\nSat Dec 13 08:27:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa)\n66a0b8b190f9dfb1a90ef000774d750bb657fc25147fe12a4ef6d7b7542e97fa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:23.585 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f192082-a5d9-4d0b-84cd-da92ad641cad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:23.586 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:23 compute-0 kernel: tap85372fca-a0: left promiscuous mode
Dec 13 08:27:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:23.619 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d58ccb2f-46c4-439b-bb63-4d69f58b4048]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:23.631 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d41fbbc1-d8cc-4141-bab2-bb344729a2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.633 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:23.634 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b7b488-86a9-48e4-b523-f20091b1a925]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:23 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3893235527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:23 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1602147627' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:23.652 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78cb5d36-325a-4c9a-ac87-d92f9849e540]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699776, 'reachable_time': 38680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298092, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:23.654 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:27:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:23.654 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[3393cd92-6c1f-475a-b44c-1d62d435e248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d85372fca\x2dab50\x2d48b6\x2d8c21\x2d507f630c205a.mount: Deactivated successfully.
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.679 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.679 248514 INFO nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Creating image(s)
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.698 248514 DEBUG nova.storage.rbd_utils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.722 248514 DEBUG nova.storage.rbd_utils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:23 compute-0 podman[298098]: 2025-12-13 08:27:23.729744177 +0000 UTC m=+0.044075109 container create 27f04ce934d1bb2d199ab492c7475d270a7ef6d2945f3dca1f991b7fd06871fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.755 248514 DEBUG nova.storage.rbd_utils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.764 248514 DEBUG oslo_concurrency.processutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:23 compute-0 systemd[1]: Started libpod-conmon-27f04ce934d1bb2d199ab492c7475d270a7ef6d2945f3dca1f991b7fd06871fb.scope.
Dec 13 08:27:23 compute-0 podman[298098]: 2025-12-13 08:27:23.712671786 +0000 UTC m=+0.027002738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.808 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.809 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.809 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] No VIF found with MAC fa:16:3e:75:73:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.810 248514 INFO nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Using config drive
Dec 13 08:27:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62b8aedf3bab31fa3f34ab30ec59e1796c0304b4f24b5f09dd44c6e365d22f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62b8aedf3bab31fa3f34ab30ec59e1796c0304b4f24b5f09dd44c6e365d22f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62b8aedf3bab31fa3f34ab30ec59e1796c0304b4f24b5f09dd44c6e365d22f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62b8aedf3bab31fa3f34ab30ec59e1796c0304b4f24b5f09dd44c6e365d22f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:23 compute-0 podman[298098]: 2025-12-13 08:27:23.830032321 +0000 UTC m=+0.144363273 container init 27f04ce934d1bb2d199ab492c7475d270a7ef6d2945f3dca1f991b7fd06871fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lovelace, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.833 248514 DEBUG nova.storage.rbd_utils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 41602b99-e7f2-450c-885e-51d07a1236d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:23 compute-0 podman[298098]: 2025-12-13 08:27:23.839503165 +0000 UTC m=+0.153834097 container start 27f04ce934d1bb2d199ab492c7475d270a7ef6d2945f3dca1f991b7fd06871fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lovelace, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.840 248514 DEBUG oslo_concurrency.processutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.841 248514 DEBUG oslo_concurrency.lockutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.842 248514 DEBUG oslo_concurrency.lockutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.842 248514 DEBUG oslo_concurrency.lockutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:23 compute-0 podman[298098]: 2025-12-13 08:27:23.84294884 +0000 UTC m=+0.157279842 container attach 27f04ce934d1bb2d199ab492c7475d270a7ef6d2945f3dca1f991b7fd06871fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.859 248514 DEBUG nova.storage.rbd_utils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.862 248514 DEBUG oslo_concurrency.processutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:23 compute-0 nova_compute[248510]: 2025-12-13 08:27:23.964 248514 INFO nova.virt.libvirt.driver [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Beginning cold snapshot process
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.049 248514 DEBUG nova.policy [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b988c7ac9354c59aac9a9f41f83c20f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52e1055963294dbdb16cd95b466cd4d9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.164 248514 DEBUG oslo_concurrency.processutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:27:24 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/70838975' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.232 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.234 248514 DEBUG nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.235 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.236 248514 INFO nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Creating image(s)
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.262 248514 DEBUG nova.storage.rbd_utils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.297 248514 DEBUG nova.storage.rbd_utils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.328 248514 DEBUG nova.storage.rbd_utils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.334 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.366 248514 DEBUG nova.virt.libvirt.vif [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1305127951',display_name='tempest-ListServerFiltersTestJSON-instance-1305127951',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1305127951',id=49,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-xlshtj64',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:12Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=e6e0fdaf-f934-4e56-8e59-4c4475bacd26,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b03c2424-77e3-49e1-b55f-f317911025b6", "address": "fa:16:3e:33:bc:28", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03c2424-77", "ovs_interfaceid": "b03c2424-77e3-49e1-b55f-f317911025b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.367 248514 DEBUG nova.network.os_vif_util [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "b03c2424-77e3-49e1-b55f-f317911025b6", "address": "fa:16:3e:33:bc:28", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03c2424-77", "ovs_interfaceid": "b03c2424-77e3-49e1-b55f-f317911025b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.368 248514 DEBUG nova.network.os_vif_util [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:bc:28,bridge_name='br-int',has_traffic_filtering=True,id=b03c2424-77e3-49e1-b55f-f317911025b6,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03c2424-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.369 248514 DEBUG nova.objects.instance [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'pci_devices' on Instance uuid e6e0fdaf-f934-4e56-8e59-4c4475bacd26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.392 248514 DEBUG nova.storage.rbd_utils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] resizing rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.442 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.443 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.443 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.443 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.469 248514 DEBUG nova.storage.rbd_utils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.475 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.561 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:27:24 compute-0 nova_compute[248510]:   <uuid>e6e0fdaf-f934-4e56-8e59-4c4475bacd26</uuid>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   <name>instance-00000031</name>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   <memory>196608</memory>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1305127951</nova:name>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:27:22</nova:creationTime>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <nova:flavor name="m1.micro">
Dec 13 08:27:24 compute-0 nova_compute[248510]:         <nova:memory>192</nova:memory>
Dec 13 08:27:24 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:27:24 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:27:24 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:27:24 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:27:24 compute-0 nova_compute[248510]:         <nova:user uuid="65a6b617130a42ac9c3d9b4abf6a1cfb">tempest-ListServerFiltersTestJSON-1229542462-project-member</nova:user>
Dec 13 08:27:24 compute-0 nova_compute[248510]:         <nova:project uuid="3490ad817e664ff6b12c4ea88192b667">tempest-ListServerFiltersTestJSON-1229542462</nova:project>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:27:24 compute-0 nova_compute[248510]:         <nova:port uuid="b03c2424-77e3-49e1-b55f-f317911025b6">
Dec 13 08:27:24 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <system>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <entry name="serial">e6e0fdaf-f934-4e56-8e59-4c4475bacd26</entry>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <entry name="uuid">e6e0fdaf-f934-4e56-8e59-4c4475bacd26</entry>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     </system>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   <os>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   </os>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   <features>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   </features>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk">
Dec 13 08:27:24 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       </source>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:27:24 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk.config">
Dec 13 08:27:24 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       </source>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:27:24 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:33:bc:28"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <target dev="tapb03c2424-77"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/e6e0fdaf-f934-4e56-8e59-4c4475bacd26/console.log" append="off"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <video>
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     </video>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:27:24 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:27:24 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:27:24 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:27:24 compute-0 nova_compute[248510]: </domain>
Dec 13 08:27:24 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.562 248514 DEBUG nova.compute.manager [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Preparing to wait for external event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.562 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.562 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.563 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.563 248514 DEBUG nova.virt.libvirt.vif [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1305127951',display_name='tempest-ListServerFiltersTestJSON-instance-1305127951',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1305127951',id=49,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-xlshtj64',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:12Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=e6e0fdaf-f934-4e56-8e59-4c4475bacd26,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b03c2424-77e3-49e1-b55f-f317911025b6", "address": "fa:16:3e:33:bc:28", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03c2424-77", "ovs_interfaceid": "b03c2424-77e3-49e1-b55f-f317911025b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.564 248514 DEBUG nova.network.os_vif_util [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "b03c2424-77e3-49e1-b55f-f317911025b6", "address": "fa:16:3e:33:bc:28", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03c2424-77", "ovs_interfaceid": "b03c2424-77e3-49e1-b55f-f317911025b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.564 248514 DEBUG nova.network.os_vif_util [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:bc:28,bridge_name='br-int',has_traffic_filtering=True,id=b03c2424-77e3-49e1-b55f-f317911025b6,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03c2424-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.565 248514 DEBUG os_vif [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:bc:28,bridge_name='br-int',has_traffic_filtering=True,id=b03c2424-77e3-49e1-b55f-f317911025b6,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03c2424-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.567 248514 DEBUG nova.compute.manager [req-38a9c882-e958-4ace-8e13-70e7f8c76353 req-ae66cd03-12fb-4930-bbc7-f6e1a1a55b40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.567 248514 DEBUG oslo_concurrency.lockutils [req-38a9c882-e958-4ace-8e13-70e7f8c76353 req-ae66cd03-12fb-4930-bbc7-f6e1a1a55b40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.568 248514 DEBUG oslo_concurrency.lockutils [req-38a9c882-e958-4ace-8e13-70e7f8c76353 req-ae66cd03-12fb-4930-bbc7-f6e1a1a55b40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.568 248514 DEBUG oslo_concurrency.lockutils [req-38a9c882-e958-4ace-8e13-70e7f8c76353 req-ae66cd03-12fb-4930-bbc7-f6e1a1a55b40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.568 248514 DEBUG nova.compute.manager [req-38a9c882-e958-4ace-8e13-70e7f8c76353 req-ae66cd03-12fb-4930-bbc7-f6e1a1a55b40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Processing event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.569 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.570 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.570 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.571 248514 DEBUG nova.compute.manager [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.578 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.578 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Ensure instance console log exists: /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.579 248514 DEBUG oslo_concurrency.lockutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.579 248514 DEBUG oslo_concurrency.lockutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.579 248514 DEBUG oslo_concurrency.lockutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.581 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Start _get_guest_xml network_info=[{"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.583 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614444.5768852, 98240df6-1cba-40e1-833c-24611270ed83 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.583 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] VM Resumed (Lifecycle Event)
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.586 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.587 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.587 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb03c2424-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.589 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb03c2424-77, col_values=(('external_ids', {'iface-id': 'b03c2424-77e3-49e1-b55f-f317911025b6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:bc:28', 'vm-uuid': 'e6e0fdaf-f934-4e56-8e59-4c4475bacd26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:24 compute-0 lvm[298486]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:27:24 compute-0 lvm[298486]: VG ceph_vg0 finished
Dec 13 08:27:24 compute-0 NetworkManager[50376]: <info>  [1765614444.6212] manager: (tapb03c2424-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Dec 13 08:27:24 compute-0 lvm[298510]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:27:24 compute-0 lvm[298510]: VG ceph_vg0 finished
Dec 13 08:27:24 compute-0 lvm[298509]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:27:24 compute-0 lvm[298509]: VG ceph_vg1 finished
Dec 13 08:27:24 compute-0 ceph-mon[76537]: pgmap v1881: 321 pgs: 321 active+clean; 311 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 698 KiB/s rd, 7.7 MiB/s wr, 218 op/s
Dec 13 08:27:24 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/70838975' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:24 compute-0 lvm[298525]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:27:24 compute-0 lvm[298525]: VG ceph_vg2 finished
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.742 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.744 248514 INFO os_vif [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:bc:28,bridge_name='br-int',has_traffic_filtering=True,id=b03c2424-77e3-49e1-b55f-f317911025b6,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03c2424-77')
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.766 248514 DEBUG nova.virt.libvirt.imagebackend [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.772 248514 INFO nova.virt.libvirt.driver [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance spawned successfully.
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.772 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:24 compute-0 clever_lovelace[298186]: {}
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.774 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.784 248514 WARNING nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.790 248514 DEBUG nova.virt.libvirt.host [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.791 248514 DEBUG nova.virt.libvirt.host [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.795 248514 DEBUG nova.virt.libvirt.host [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.795 248514 DEBUG nova.virt.libvirt.host [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.795 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.796 248514 DEBUG nova.virt.hardware [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.796 248514 DEBUG nova.virt.hardware [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.796 248514 DEBUG nova.virt.hardware [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.797 248514 DEBUG nova.virt.hardware [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.797 248514 DEBUG nova.virt.hardware [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.797 248514 DEBUG nova.virt.hardware [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.797 248514 DEBUG nova.virt.hardware [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.797 248514 DEBUG nova.virt.hardware [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.798 248514 DEBUG nova.virt.hardware [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.798 248514 DEBUG nova.virt.hardware [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.798 248514 DEBUG nova.virt.hardware [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.798 248514 DEBUG nova.objects.instance [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:24 compute-0 systemd[1]: libpod-27f04ce934d1bb2d199ab492c7475d270a7ef6d2945f3dca1f991b7fd06871fb.scope: Deactivated successfully.
Dec 13 08:27:24 compute-0 systemd[1]: libpod-27f04ce934d1bb2d199ab492c7475d270a7ef6d2945f3dca1f991b7fd06871fb.scope: Consumed 1.511s CPU time.
Dec 13 08:27:24 compute-0 podman[298098]: 2025-12-13 08:27:24.809865907 +0000 UTC m=+1.124196839 container died 27f04ce934d1bb2d199ab492c7475d270a7ef6d2945f3dca1f991b7fd06871fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 08:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f62b8aedf3bab31fa3f34ab30ec59e1796c0304b4f24b5f09dd44c6e365d22f5-merged.mount: Deactivated successfully.
Dec 13 08:27:24 compute-0 podman[298098]: 2025-12-13 08:27:24.867909859 +0000 UTC m=+1.182240791 container remove 27f04ce934d1bb2d199ab492c7475d270a7ef6d2945f3dca1f991b7fd06871fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.868 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:24 compute-0 systemd[1]: libpod-conmon-27f04ce934d1bb2d199ab492c7475d270a7ef6d2945f3dca1f991b7fd06871fb.scope: Deactivated successfully.
Dec 13 08:27:24 compute-0 sudo[297901]: pam_unix(sudo:session): session closed for user root
Dec 13 08:27:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:24 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:27:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:27:24 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.950 248514 DEBUG nova.storage.rbd_utils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] resizing rbd image df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:27:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:24.987 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:24 compute-0 nova_compute[248510]: 2025-12-13 08:27:24.988 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:25 compute-0 sudo[298582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:27:25 compute-0 sudo[298582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:27:25 compute-0 sudo[298582]: pam_unix(sudo:session): session closed for user root
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.037 248514 DEBUG nova.storage.rbd_utils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] creating snapshot(654c91bcd2d34d718e94d174fe448267) on rbd image(2e52d555-08dd-49fb-a73a-eded391e154c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.066 248514 INFO nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Creating config drive at /var/lib/nova/instances/41602b99-e7f2-450c-885e-51d07a1236d3/disk.config
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.071 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/41602b99-e7f2-450c-885e-51d07a1236d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu2voyh68 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.108 248514 DEBUG nova.objects.instance [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'migration_context' on Instance uuid df25cd40-72b5-4e0f-90ec-8677c699d1d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.175 248514 DEBUG oslo_concurrency.processutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.215 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/41602b99-e7f2-450c-885e-51d07a1236d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu2voyh68" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.241 248514 DEBUG nova.storage.rbd_utils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image 41602b99-e7f2-450c-885e-51d07a1236d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.245 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/41602b99-e7f2-450c-885e-51d07a1236d3/disk.config 41602b99-e7f2-450c-885e-51d07a1236d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.280 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.280 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.281 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.281 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.282 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.282 248514 DEBUG nova.virt.libvirt.driver [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.397 248514 DEBUG oslo_concurrency.processutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/41602b99-e7f2-450c-885e-51d07a1236d3/disk.config 41602b99-e7f2-450c-885e-51d07a1236d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.397 248514 INFO nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Deleting local config drive /var/lib/nova/instances/41602b99-e7f2-450c-885e-51d07a1236d3/disk.config because it was imported into RBD.
Dec 13 08:27:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 260 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 670 KiB/s rd, 7.0 MiB/s wr, 224 op/s
Dec 13 08:27:25 compute-0 NetworkManager[50376]: <info>  [1765614445.4680] manager: (tapc76cbcb4-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/193)
Dec 13 08:27:25 compute-0 kernel: tapc76cbcb4-3f: entered promiscuous mode
Dec 13 08:27:25 compute-0 systemd-udevd[297894]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:27:25 compute-0 ovn_controller[148476]: 2025-12-13T08:27:25Z|00417|binding|INFO|Claiming lport c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 for this chassis.
Dec 13 08:27:25 compute-0 ovn_controller[148476]: 2025-12-13T08:27:25Z|00418|binding|INFO|c76cbcb4-3f53-4cff-a0c1-7d8be5000c32: Claiming fa:16:3e:75:73:cb 10.100.0.8
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:25 compute-0 NetworkManager[50376]: <info>  [1765614445.4911] device (tapc76cbcb4-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:27:25 compute-0 NetworkManager[50376]: <info>  [1765614445.4917] device (tapc76cbcb4-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:27:25 compute-0 ovn_controller[148476]: 2025-12-13T08:27:25Z|00419|binding|INFO|Setting lport c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 ovn-installed in OVS
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.512 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:25 compute-0 systemd-machined[210538]: New machine qemu-53-instance-00000030.
Dec 13 08:27:25 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-00000030.
Dec 13 08:27:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:27:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3189385481' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.809 248514 DEBUG oslo_concurrency.processutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.835 248514 DEBUG nova.storage.rbd_utils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.842 248514 DEBUG oslo_concurrency.processutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:25 compute-0 ovn_controller[148476]: 2025-12-13T08:27:25Z|00420|binding|INFO|Setting lport c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 up in Southbound
Dec 13 08:27:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:25.900 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:73:cb 10.100.0.8'], port_security=['fa:16:3e:75:73:cb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '41602b99-e7f2-450c-885e-51d07a1236d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:27:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:25.902 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 in datapath 7576f079-0439-46aa-98af-04f80cd254ca bound to our chassis
Dec 13 08:27:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:25.903 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.913 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.914 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Ensure instance console log exists: /var/lib/nova/instances/df25cd40-72b5-4e0f-90ec-8677c699d1d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.914 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.914 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.915 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:25.926 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0771db72-eaba-4921-a359-3720cee5e892]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:25 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:27:25 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:27:25 compute-0 ceph-mon[76537]: pgmap v1882: 321 pgs: 321 active+clean; 260 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 670 KiB/s rd, 7.0 MiB/s wr, 224 op/s
Dec 13 08:27:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3189385481' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Dec 13 08:27:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Dec 13 08:27:25 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 13 08:27:25 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Dec 13 08:27:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:25.962 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[40f416ee-26b4-44cd-b973-d253afec5601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:25.966 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ab116f40-e7c0-4063-a69c-6bf39dbd3d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.987 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614445.9860203, 41602b99-e7f2-450c-885e-51d07a1236d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:25 compute-0 nova_compute[248510]: 2025-12-13 08:27:25.987 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] VM Started (Lifecycle Event)
Dec 13 08:27:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:26.005 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[69f2ee32-caf9-4f89-be6c-3053b39d3e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.023 248514 DEBUG nova.storage.rbd_utils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] cloning vms/2e52d555-08dd-49fb-a73a-eded391e154c_disk@654c91bcd2d34d718e94d174fe448267 to images/4cef902b-321f-4d4c-9540-06884bb7f860 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:27:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:26.027 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b6796a6a-b563-48c5-abe6-cbc3dfee4fc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298838, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:26.041 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1b7db4-1ad7-408d-9d67-c4b4c70c944e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298840, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298840, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:26.043 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.052 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:26.053 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:26.054 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:26.054 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.055 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:26.055 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.064 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.065 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.066 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] No VIF found with MAC fa:16:3e:33:bc:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.067 248514 INFO nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Using config drive
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.094 248514 DEBUG nova.storage.rbd_utils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.105 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614445.989602, 41602b99-e7f2-450c-885e-51d07a1236d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.105 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] VM Paused (Lifecycle Event)
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.125 248514 DEBUG nova.storage.rbd_utils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] flattening images/4cef902b-321f-4d4c-9540-06884bb7f860 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.369 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.374 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.379 248514 INFO nova.compute.manager [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Took 22.96 seconds to spawn the instance on the hypervisor.
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.380 248514 DEBUG nova.compute.manager [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.415 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:27:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2602899339' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.471 248514 DEBUG oslo_concurrency.processutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.474 248514 DEBUG nova.virt.libvirt.vif [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:26:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-571322410',display_name='tempest-ServerDiskConfigTestJSON-server-571322410',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-571322410',id=46,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-br4xsr3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:23Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=05e06a6b-e157-4cd9-88c0-889fa4cfd9fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.474 248514 DEBUG nova.network.os_vif_util [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.475 248514 DEBUG nova.network.os_vif_util [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.478 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:27:26 compute-0 nova_compute[248510]:   <uuid>05e06a6b-e157-4cd9-88c0-889fa4cfd9fa</uuid>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   <name>instance-0000002e</name>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-571322410</nova:name>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:27:24</nova:creationTime>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:27:26 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:27:26 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:27:26 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:27:26 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:27:26 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:27:26 compute-0 nova_compute[248510]:         <nova:user uuid="a5bc32e49dbd4372a006913090b9ef0f">tempest-ServerDiskConfigTestJSON-167971983-project-member</nova:user>
Dec 13 08:27:26 compute-0 nova_compute[248510]:         <nova:project uuid="9aea752cb9b648a7aa9b3f634ced797e">tempest-ServerDiskConfigTestJSON-167971983</nova:project>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:27:26 compute-0 nova_compute[248510]:         <nova:port uuid="57241dd9-27dd-49bc-befb-1ef45674d6be">
Dec 13 08:27:26 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <system>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <entry name="serial">05e06a6b-e157-4cd9-88c0-889fa4cfd9fa</entry>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <entry name="uuid">05e06a6b-e157-4cd9-88c0-889fa4cfd9fa</entry>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     </system>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   <os>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   </os>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   <features>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   </features>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk">
Dec 13 08:27:26 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       </source>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:27:26 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk.config">
Dec 13 08:27:26 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       </source>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:27:26 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:1b:6e:ff"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <target dev="tap57241dd9-27"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/console.log" append="off"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <video>
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     </video>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:27:26 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:27:26 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:27:26 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:27:26 compute-0 nova_compute[248510]: </domain>
Dec 13 08:27:26 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.484 248514 DEBUG nova.compute.manager [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Preparing to wait for external event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.484 248514 DEBUG oslo_concurrency.lockutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.484 248514 DEBUG oslo_concurrency.lockutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.485 248514 DEBUG oslo_concurrency.lockutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.485 248514 DEBUG nova.virt.libvirt.vif [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:26:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-571322410',display_name='tempest-ServerDiskConfigTestJSON-server-571322410',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-571322410',id=46,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-br4xsr3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:23Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=05e06a6b-e157-4cd9-88c0-889fa4cfd9fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.486 248514 DEBUG nova.network.os_vif_util [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.487 248514 DEBUG nova.network.os_vif_util [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.487 248514 DEBUG os_vif [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.488 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.488 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.489 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.496 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.497 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57241dd9-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.498 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57241dd9-27, col_values=(('external_ids', {'iface-id': '57241dd9-27dd-49bc-befb-1ef45674d6be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:6e:ff', 'vm-uuid': '05e06a6b-e157-4cd9-88c0-889fa4cfd9fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.500 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:26 compute-0 NetworkManager[50376]: <info>  [1765614446.5013] manager: (tap57241dd9-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.505 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.515 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.518 248514 INFO os_vif [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27')
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.521 248514 INFO nova.compute.manager [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Took 25.04 seconds to build instance.
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.617 248514 DEBUG nova.storage.rbd_utils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] removing snapshot(654c91bcd2d34d718e94d174fe448267) on rbd image(2e52d555-08dd-49fb-a73a-eded391e154c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.666 248514 DEBUG nova.network.neutron [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Successfully created port: eca7f353-3478-46ea-a63f-617a11a8f7ff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.914 248514 DEBUG oslo_concurrency.lockutils [None req-8c73b82d-845b-4922-9993-45a400b00e12 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.927 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.927 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.928 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No VIF found with MAC fa:16:3e:1b:6e:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.928 248514 INFO nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Using config drive
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.948 248514 DEBUG nova.storage.rbd_utils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Dec 13 08:27:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Dec 13 08:27:26 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Dec 13 08:27:26 compute-0 ceph-mon[76537]: osdmap e205: 3 total, 3 up, 3 in
Dec 13 08:27:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2602899339' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:26 compute-0 nova_compute[248510]: 2025-12-13 08:27:26.996 248514 DEBUG nova.storage.rbd_utils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] creating snapshot(snap) on rbd image(4cef902b-321f-4d4c-9540-06884bb7f860) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.406 248514 INFO nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Creating config drive at /var/lib/nova/instances/e6e0fdaf-f934-4e56-8e59-4c4475bacd26/disk.config
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.411 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e6e0fdaf-f934-4e56-8e59-4c4475bacd26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1wdy2yau execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 260 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 81 KiB/s wr, 64 op/s
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.443 248514 DEBUG nova.network.neutron [req-b3e2a2a1-0b82-41a7-ad00-e97065cae549 req-9239ed4a-f912-4bcd-a7d8-86ba58919edb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Updated VIF entry in instance network info cache for port c76cbcb4-3f53-4cff-a0c1-7d8be5000c32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.444 248514 DEBUG nova.network.neutron [req-b3e2a2a1-0b82-41a7-ad00-e97065cae549 req-9239ed4a-f912-4bcd-a7d8-86ba58919edb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Updating instance_info_cache with network_info: [{"id": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "address": "fa:16:3e:75:73:cb", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76cbcb4-3f", "ovs_interfaceid": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.448 248514 DEBUG nova.network.neutron [req-5bbd93ad-b8fd-427f-bf47-eba6b77919a5 req-3a10cc6b-93cd-4e90-b485-fd59ef52b195 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Updated VIF entry in instance network info cache for port b03c2424-77e3-49e1-b55f-f317911025b6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.448 248514 DEBUG nova.network.neutron [req-5bbd93ad-b8fd-427f-bf47-eba6b77919a5 req-3a10cc6b-93cd-4e90-b485-fd59ef52b195 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Updating instance_info_cache with network_info: [{"id": "b03c2424-77e3-49e1-b55f-f317911025b6", "address": "fa:16:3e:33:bc:28", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03c2424-77", "ovs_interfaceid": "b03c2424-77e3-49e1-b55f-f317911025b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.489 248514 DEBUG nova.objects.instance [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.550 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e6e0fdaf-f934-4e56-8e59-4c4475bacd26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1wdy2yau" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.575 248514 DEBUG nova.storage.rbd_utils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] rbd image e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.579 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e6e0fdaf-f934-4e56-8e59-4c4475bacd26/disk.config e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.851 248514 DEBUG oslo_concurrency.lockutils [req-5bbd93ad-b8fd-427f-bf47-eba6b77919a5 req-3a10cc6b-93cd-4e90-b485-fd59ef52b195 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-e6e0fdaf-f934-4e56-8e59-4c4475bacd26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.854 248514 DEBUG oslo_concurrency.lockutils [req-b3e2a2a1-0b82-41a7-ad00-e97065cae549 req-9239ed4a-f912-4bcd-a7d8-86ba58919edb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-41602b99-e7f2-450c-885e-51d07a1236d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.855 248514 DEBUG nova.compute.manager [req-a5545c7f-9ffe-4af9-874b-4b48bcfbcf81 req-42f3d2fa-0655-43f5-a520-47bfed0db24a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.856 248514 DEBUG oslo_concurrency.lockutils [req-a5545c7f-9ffe-4af9-874b-4b48bcfbcf81 req-42f3d2fa-0655-43f5-a520-47bfed0db24a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.856 248514 DEBUG oslo_concurrency.lockutils [req-a5545c7f-9ffe-4af9-874b-4b48bcfbcf81 req-42f3d2fa-0655-43f5-a520-47bfed0db24a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.856 248514 DEBUG oslo_concurrency.lockutils [req-a5545c7f-9ffe-4af9-874b-4b48bcfbcf81 req-42f3d2fa-0655-43f5-a520-47bfed0db24a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.856 248514 DEBUG nova.compute.manager [req-a5545c7f-9ffe-4af9-874b-4b48bcfbcf81 req-42f3d2fa-0655-43f5-a520-47bfed0db24a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.857 248514 WARNING nova.compute.manager [req-a5545c7f-9ffe-4af9-874b-4b48bcfbcf81 req-42f3d2fa-0655-43f5-a520-47bfed0db24a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received unexpected event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with vm_state active and task_state None.
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.861 248514 DEBUG nova.objects.instance [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'keypairs' on Instance uuid 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.978 248514 DEBUG oslo_concurrency.processutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e6e0fdaf-f934-4e56-8e59-4c4475bacd26/disk.config e6e0fdaf-f934-4e56-8e59-4c4475bacd26_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Dec 13 08:27:27 compute-0 nova_compute[248510]: 2025-12-13 08:27:27.979 248514 INFO nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Deleting local config drive /var/lib/nova/instances/e6e0fdaf-f934-4e56-8e59-4c4475bacd26/disk.config because it was imported into RBD.
Dec 13 08:27:27 compute-0 ceph-mon[76537]: osdmap e206: 3 total, 3 up, 3 in
Dec 13 08:27:27 compute-0 ceph-mon[76537]: pgmap v1885: 321 pgs: 321 active+clean; 260 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 81 KiB/s wr, 64 op/s
Dec 13 08:27:27 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Dec 13 08:27:28 compute-0 NetworkManager[50376]: <info>  [1765614448.0254] manager: (tapb03c2424-77): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Dec 13 08:27:28 compute-0 systemd-udevd[298813]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:27:28 compute-0 kernel: tapb03c2424-77: entered promiscuous mode
Dec 13 08:27:28 compute-0 nova_compute[248510]: 2025-12-13 08:27:28.037 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:28 compute-0 ovn_controller[148476]: 2025-12-13T08:27:28Z|00421|binding|INFO|Claiming lport b03c2424-77e3-49e1-b55f-f317911025b6 for this chassis.
Dec 13 08:27:28 compute-0 ovn_controller[148476]: 2025-12-13T08:27:28Z|00422|binding|INFO|b03c2424-77e3-49e1-b55f-f317911025b6: Claiming fa:16:3e:33:bc:28 10.100.0.4
Dec 13 08:27:28 compute-0 NetworkManager[50376]: <info>  [1765614448.0409] device (tapb03c2424-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:27:28 compute-0 NetworkManager[50376]: <info>  [1765614448.0433] device (tapb03c2424-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:27:28 compute-0 systemd-machined[210538]: New machine qemu-54-instance-00000031.
Dec 13 08:27:28 compute-0 ovn_controller[148476]: 2025-12-13T08:27:28Z|00423|binding|INFO|Setting lport b03c2424-77e3-49e1-b55f-f317911025b6 ovn-installed in OVS
Dec 13 08:27:28 compute-0 nova_compute[248510]: 2025-12-13 08:27:28.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:28 compute-0 nova_compute[248510]: 2025-12-13 08:27:28.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:28 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-00000031.
Dec 13 08:27:28 compute-0 ovn_controller[148476]: 2025-12-13T08:27:28Z|00424|binding|INFO|Setting lport b03c2424-77e3-49e1-b55f-f317911025b6 up in Southbound
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.332 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:bc:28 10.100.0.4'], port_security=['fa:16:3e:33:bc:28 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e6e0fdaf-f934-4e56-8e59-4c4475bacd26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b03c2424-77e3-49e1-b55f-f317911025b6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.334 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b03c2424-77e3-49e1-b55f-f317911025b6 in datapath 7576f079-0439-46aa-98af-04f80cd254ca bound to our chassis
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.336 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.358 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[751cb302-c8e9-48ac-8440-1f9691f4675f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.399 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[334ae4b5-ba28-4fe4-baa6-efd002ffd688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.402 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[27f03c8d-5463-4a02-b2bf-2d0b8f1376a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.437 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[818b9d74-1119-4eb9-9c86-eec668cff5a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.456 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1290b411-3e42-4325-9c49-25ed099c3aa1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299064, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.471 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b95d45a8-3766-4e5c-9f52-4e5dac867d33]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299080, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299080, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.473 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:28 compute-0 nova_compute[248510]: 2025-12-13 08:27:28.477 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:28 compute-0 nova_compute[248510]: 2025-12-13 08:27:28.483 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.484 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.484 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.485 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:28.485 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:28 compute-0 nova_compute[248510]: 2025-12-13 08:27:28.599 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614448.5983343, e6e0fdaf-f934-4e56-8e59-4c4475bacd26 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:28 compute-0 nova_compute[248510]: 2025-12-13 08:27:28.599 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] VM Started (Lifecycle Event)
Dec 13 08:27:28 compute-0 ceph-mon[76537]: osdmap e207: 3 total, 3 up, 3 in
Dec 13 08:27:28 compute-0 nova_compute[248510]: 2025-12-13 08:27:28.997 248514 INFO nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Creating config drive at /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/disk.config
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.003 248514 DEBUG oslo_concurrency.processutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps4ut4l9a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.039 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.048 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614448.598508, e6e0fdaf-f934-4e56-8e59-4c4475bacd26 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.048 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] VM Paused (Lifecycle Event)
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.146 248514 DEBUG oslo_concurrency.processutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps4ut4l9a" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.171 248514 DEBUG nova.storage.rbd_utils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.175 248514 DEBUG oslo_concurrency.processutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/disk.config 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.265 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.270 248514 DEBUG nova.compute.manager [req-c8d3c67b-d7ff-4bda-9c94-1f1055eb7519 req-081b2149-9dd9-439d-bbe5-c5474b29eb7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received event network-vif-plugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.270 248514 DEBUG oslo_concurrency.lockutils [req-c8d3c67b-d7ff-4bda-9c94-1f1055eb7519 req-081b2149-9dd9-439d-bbe5-c5474b29eb7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.271 248514 DEBUG oslo_concurrency.lockutils [req-c8d3c67b-d7ff-4bda-9c94-1f1055eb7519 req-081b2149-9dd9-439d-bbe5-c5474b29eb7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.271 248514 DEBUG oslo_concurrency.lockutils [req-c8d3c67b-d7ff-4bda-9c94-1f1055eb7519 req-081b2149-9dd9-439d-bbe5-c5474b29eb7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.271 248514 DEBUG nova.compute.manager [req-c8d3c67b-d7ff-4bda-9c94-1f1055eb7519 req-081b2149-9dd9-439d-bbe5-c5474b29eb7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Processing event network-vif-plugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.273 248514 DEBUG nova.compute.manager [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.277 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.280 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.294 248514 INFO nova.virt.libvirt.driver [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Instance spawned successfully.
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.294 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.327 248514 DEBUG oslo_concurrency.processutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/disk.config 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.327 248514 INFO nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Deleting local config drive /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa/disk.config because it was imported into RBD.
Dec 13 08:27:29 compute-0 kernel: tap57241dd9-27: entered promiscuous mode
Dec 13 08:27:29 compute-0 NetworkManager[50376]: <info>  [1765614449.4056] manager: (tap57241dd9-27): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Dec 13 08:27:29 compute-0 ovn_controller[148476]: 2025-12-13T08:27:29Z|00425|binding|INFO|Claiming lport 57241dd9-27dd-49bc-befb-1ef45674d6be for this chassis.
Dec 13 08:27:29 compute-0 ovn_controller[148476]: 2025-12-13T08:27:29Z|00426|binding|INFO|57241dd9-27dd-49bc-befb-1ef45674d6be: Claiming fa:16:3e:1b:6e:ff 10.100.0.12
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.414 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.415 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614449.2763784, 41602b99-e7f2-450c-885e-51d07a1236d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.415 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] VM Resumed (Lifecycle Event)
Dec 13 08:27:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 432 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 15 MiB/s wr, 478 op/s
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.421 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.435 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.436 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.437 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.438 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.438 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.439 248514 DEBUG nova.virt.libvirt.driver [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:29 compute-0 ovn_controller[148476]: 2025-12-13T08:27:29Z|00427|binding|INFO|Setting lport 57241dd9-27dd-49bc-befb-1ef45674d6be ovn-installed in OVS
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:29 compute-0 systemd-udevd[299151]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:27:29 compute-0 systemd-machined[210538]: New machine qemu-55-instance-0000002e.
Dec 13 08:27:29 compute-0 NetworkManager[50376]: <info>  [1765614449.4657] device (tap57241dd9-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:27:29 compute-0 NetworkManager[50376]: <info>  [1765614449.4667] device (tap57241dd9-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:27:29 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-0000002e.
Dec 13 08:27:29 compute-0 ovn_controller[148476]: 2025-12-13T08:27:29Z|00428|binding|INFO|Releasing lport 5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8 from this chassis (sb_readonly=0)
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.679 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:6e:ff 10.100.0.12'], port_security=['fa:16:3e:1b:6e:ff 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '05e06a6b-e157-4cd9-88c0-889fa4cfd9fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=57241dd9-27dd-49bc-befb-1ef45674d6be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.680 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 57241dd9-27dd-49bc-befb-1ef45674d6be in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 bound to our chassis
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.682 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:27:29 compute-0 ovn_controller[148476]: 2025-12-13T08:27:29Z|00429|binding|INFO|Setting lport 57241dd9-27dd-49bc-befb-1ef45674d6be up in Southbound
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.705 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.718 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.721 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[768b755a-e02f-43be-aff7-5f7c1c566328]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.723 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c63049d-61 in ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.725 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c63049d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.725 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2854aa03-e311-4fa0-a0a8-ce2de89116e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.726 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e2afddc8-756a-4303-af5a-7610cbb5fd16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.751 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ed64efc8-9a98-4494-803a-bde851dff342]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.780 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca9d214-1a21-48cf-8186-646ded71d23f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.824 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[06c31243-41a5-4ee0-bb9b-7a00e231b5db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:29 compute-0 NetworkManager[50376]: <info>  [1765614449.8333] manager: (tap6c63049d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/197)
Dec 13 08:27:29 compute-0 systemd-udevd[299154]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.835 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a68099c4-f54c-4828-81c5-d4efae32bbbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.880 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e34406-f066-4c00-bc29-b47cbb446051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.883 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cbcdbda9-7c19-45da-bb1d-f8ec94e296a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:29 compute-0 NetworkManager[50376]: <info>  [1765614449.9177] device (tap6c63049d-60): carrier: link connected
Dec 13 08:27:29 compute-0 nova_compute[248510]: 2025-12-13 08:27:29.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.926 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6b90f36c-de38-43a9-93ac-1960a848ba7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.956 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2bda84f3-32e1-4e5b-a275-eaff73d0b3e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702713, 'reachable_time': 21015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299188, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:29.979 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62066317-ae40-4aa4-8ccf-63ed786e5e5a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:c2f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702713, 'tstamp': 702713}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299189, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:29 compute-0 ceph-mon[76537]: pgmap v1887: 321 pgs: 321 active+clean; 432 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 15 MiB/s wr, 478 op/s
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:30.008 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f78c3fe2-77ee-458a-b4ca-f78c3858dd68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702713, 'reachable_time': 21015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299190, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:30.054 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[be281d97-8746-4c32-8b5c-270312586b71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.083 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:27:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Dec 13 08:27:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Dec 13 08:27:30 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:30.139 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b253408a-831c-4263-bdbf-94c819d0919b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:30.140 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:30.140 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:30.141 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c63049d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.154 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:30 compute-0 NetworkManager[50376]: <info>  [1765614450.1555] manager: (tap6c63049d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Dec 13 08:27:30 compute-0 kernel: tap6c63049d-60: entered promiscuous mode
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.162 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:30.164 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c63049d-60, col_values=(('external_ids', {'iface-id': 'b410790c-12b7-4a29-87e5-13a29af9c319'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:30 compute-0 ovn_controller[148476]: 2025-12-13T08:27:30Z|00430|binding|INFO|Releasing lport b410790c-12b7-4a29-87e5-13a29af9c319 from this chassis (sb_readonly=0)
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.165 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.192 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.197 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.197 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614450.196961, 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.197 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] VM Started (Lifecycle Event)
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.201 248514 INFO nova.virt.libvirt.driver [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Snapshot image upload complete
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.202 248514 DEBUG nova.compute.manager [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.202 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.205 248514 DEBUG nova.compute.manager [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-unplugged-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.205 248514 DEBUG oslo_concurrency.lockutils [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.205 248514 DEBUG oslo_concurrency.lockutils [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.206 248514 DEBUG oslo_concurrency.lockutils [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.206 248514 DEBUG nova.compute.manager [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] No event matching network-vif-unplugged-57241dd9-27dd-49bc-befb-1ef45674d6be in dict_keys([('network-vif-plugged', '57241dd9-27dd-49bc-befb-1ef45674d6be')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.206 248514 WARNING nova.compute.manager [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received unexpected event network-vif-unplugged-57241dd9-27dd-49bc-befb-1ef45674d6be for instance with vm_state active and task_state rebuild_spawning.
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.206 248514 DEBUG nova.compute.manager [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.206 248514 DEBUG oslo_concurrency.lockutils [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.206 248514 DEBUG oslo_concurrency.lockutils [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.206 248514 DEBUG oslo_concurrency.lockutils [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.207 248514 DEBUG nova.compute.manager [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Processing event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.207 248514 DEBUG nova.compute.manager [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Received event network-vif-unplugged-b1a08ea3-3044-4caa-a944-744bd324adc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.207 248514 DEBUG oslo_concurrency.lockutils [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:30.213 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.207 248514 DEBUG oslo_concurrency.lockutils [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.210 248514 DEBUG oslo_concurrency.lockutils [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.211 248514 DEBUG nova.compute.manager [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] No waiting events found dispatching network-vif-unplugged-b1a08ea3-3044-4caa-a944-744bd324adc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.211 248514 WARNING nova.compute.manager [req-bca35dc0-fdee-4aa2-89a6-99fda2c95f23 req-dcca01c3-9e56-4725-bdc9-922b404ce550 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Received unexpected event network-vif-unplugged-b1a08ea3-3044-4caa-a944-744bd324adc9 for instance with vm_state active and task_state shelving_image_uploading.
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.215 248514 DEBUG nova.compute.manager [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:30.214 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39d4303e-1dbb-4f01-ac69-c2d051a1f5db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:30.217 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:27:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:30.218 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'env', 'PROCESS_TAG=haproxy-6c63049d-63e9-47af-99e2-ce1403a42891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c63049d-63e9-47af-99e2-ce1403a42891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.225 248514 INFO nova.compute.manager [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Took 19.64 seconds to spawn the instance on the hypervisor.
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.226 248514 DEBUG nova.compute.manager [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.246 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.250 248514 INFO nova.virt.libvirt.driver [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Instance spawned successfully.
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.250 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.281 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.285 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.320 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.321 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.321 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.321 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.322 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.322 248514 DEBUG nova.virt.libvirt.driver [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.474 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.475 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614450.1970575, 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:30 compute-0 nova_compute[248510]: 2025-12-13 08:27:30.476 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] VM Paused (Lifecycle Event)
Dec 13 08:27:30 compute-0 podman[299267]: 2025-12-13 08:27:30.707115383 +0000 UTC m=+0.062304768 container create 7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:27:30 compute-0 systemd[1]: Started libpod-conmon-7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea.scope.
Dec 13 08:27:30 compute-0 podman[299267]: 2025-12-13 08:27:30.676275422 +0000 UTC m=+0.031464807 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:27:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21bb84f9a0a64f541c7ebac5b1a61f63bfb533a2f6ce624b6609e8215dc210f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:30 compute-0 podman[299267]: 2025-12-13 08:27:30.797447552 +0000 UTC m=+0.152636957 container init 7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Dec 13 08:27:30 compute-0 podman[299267]: 2025-12-13 08:27:30.80343607 +0000 UTC m=+0.158625455 container start 7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 08:27:30 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[299282]: [NOTICE]   (299286) : New worker (299288) forked
Dec 13 08:27:30 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[299282]: [NOTICE]   (299286) : Loading success.
Dec 13 08:27:31 compute-0 ceph-mon[76537]: osdmap e208: 3 total, 3 up, 3 in
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.274 248514 DEBUG nova.compute.manager [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.280 248514 INFO nova.compute.manager [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Shelve offloading
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.288 248514 INFO nova.compute.manager [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Took 27.45 seconds to build instance.
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.299 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Instance destroyed successfully.
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.300 248514 DEBUG nova.compute.manager [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.302 248514 DEBUG oslo_concurrency.lockutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.303 248514 DEBUG oslo_concurrency.lockutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquired lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.303 248514 DEBUG nova.network.neutron [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.308 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.311 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614450.2310352, 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.311 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] VM Resumed (Lifecycle Event)
Dec 13 08:27:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 432 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 16 MiB/s wr, 466 op/s
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.501 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.567 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:31 compute-0 nova_compute[248510]: 2025-12-13 08:27:31.571 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:32 compute-0 nova_compute[248510]: 2025-12-13 08:27:32.069 248514 DEBUG oslo_concurrency.lockutils [None req-67899401-24ef-4f50-ba88-022c6f5ea266 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 28.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:32 compute-0 nova_compute[248510]: 2025-12-13 08:27:32.087 248514 DEBUG oslo_concurrency.lockutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:32 compute-0 nova_compute[248510]: 2025-12-13 08:27:32.087 248514 DEBUG oslo_concurrency.lockutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:32 compute-0 nova_compute[248510]: 2025-12-13 08:27:32.087 248514 DEBUG nova.objects.instance [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:27:32 compute-0 ceph-mon[76537]: pgmap v1889: 321 pgs: 321 active+clean; 432 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 16 MiB/s wr, 466 op/s
Dec 13 08:27:32 compute-0 nova_compute[248510]: 2025-12-13 08:27:32.169 248514 DEBUG nova.network.neutron [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Successfully updated port: eca7f353-3478-46ea-a63f-617a11a8f7ff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:27:32 compute-0 nova_compute[248510]: 2025-12-13 08:27:32.908 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "refresh_cache-df25cd40-72b5-4e0f-90ec-8677c699d1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:32 compute-0 nova_compute[248510]: 2025-12-13 08:27:32.909 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquired lock "refresh_cache-df25cd40-72b5-4e0f-90ec-8677c699d1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:32 compute-0 nova_compute[248510]: 2025-12-13 08:27:32.909 248514 DEBUG nova.network.neutron [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:27:33 compute-0 nova_compute[248510]: 2025-12-13 08:27:33.043 248514 DEBUG oslo_concurrency.lockutils [None req-8033856e-e815-448a-8297-14ba68d18072 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:33 compute-0 nova_compute[248510]: 2025-12-13 08:27:33.306 248514 DEBUG nova.compute.manager [req-beef45d2-05b9-4cd3-a028-3ace019cd204 req-433cfda5-ba8a-4009-a4bf-39789212eedf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received event network-vif-plugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:33 compute-0 nova_compute[248510]: 2025-12-13 08:27:33.306 248514 DEBUG oslo_concurrency.lockutils [req-beef45d2-05b9-4cd3-a028-3ace019cd204 req-433cfda5-ba8a-4009-a4bf-39789212eedf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:33 compute-0 nova_compute[248510]: 2025-12-13 08:27:33.309 248514 DEBUG oslo_concurrency.lockutils [req-beef45d2-05b9-4cd3-a028-3ace019cd204 req-433cfda5-ba8a-4009-a4bf-39789212eedf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:33 compute-0 nova_compute[248510]: 2025-12-13 08:27:33.309 248514 DEBUG oslo_concurrency.lockutils [req-beef45d2-05b9-4cd3-a028-3ace019cd204 req-433cfda5-ba8a-4009-a4bf-39789212eedf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:33 compute-0 nova_compute[248510]: 2025-12-13 08:27:33.309 248514 DEBUG nova.compute.manager [req-beef45d2-05b9-4cd3-a028-3ace019cd204 req-433cfda5-ba8a-4009-a4bf-39789212eedf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] No waiting events found dispatching network-vif-plugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:33 compute-0 nova_compute[248510]: 2025-12-13 08:27:33.310 248514 WARNING nova.compute.manager [req-beef45d2-05b9-4cd3-a028-3ace019cd204 req-433cfda5-ba8a-4009-a4bf-39789212eedf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received unexpected event network-vif-plugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 for instance with vm_state active and task_state None.
Dec 13 08:27:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 432 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 14 MiB/s wr, 485 op/s
Dec 13 08:27:34 compute-0 nova_compute[248510]: 2025-12-13 08:27:34.101 248514 DEBUG nova.network.neutron [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:27:34 compute-0 ceph-mon[76537]: pgmap v1890: 321 pgs: 321 active+clean; 432 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 14 MiB/s wr, 485 op/s
Dec 13 08:27:34 compute-0 nova_compute[248510]: 2025-12-13 08:27:34.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.228 248514 DEBUG nova.compute.manager [req-28339ddc-a027-4aa4-b190-d0eb9f67b847 req-6ff99a84-18d9-4186-b76f-bc7e82c5225c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Received event network-vif-plugged-b1a08ea3-3044-4caa-a944-744bd324adc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.229 248514 DEBUG oslo_concurrency.lockutils [req-28339ddc-a027-4aa4-b190-d0eb9f67b847 req-6ff99a84-18d9-4186-b76f-bc7e82c5225c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.229 248514 DEBUG oslo_concurrency.lockutils [req-28339ddc-a027-4aa4-b190-d0eb9f67b847 req-6ff99a84-18d9-4186-b76f-bc7e82c5225c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.230 248514 DEBUG oslo_concurrency.lockutils [req-28339ddc-a027-4aa4-b190-d0eb9f67b847 req-6ff99a84-18d9-4186-b76f-bc7e82c5225c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.230 248514 DEBUG nova.compute.manager [req-28339ddc-a027-4aa4-b190-d0eb9f67b847 req-6ff99a84-18d9-4186-b76f-bc7e82c5225c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] No waiting events found dispatching network-vif-plugged-b1a08ea3-3044-4caa-a944-744bd324adc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.230 248514 WARNING nova.compute.manager [req-28339ddc-a027-4aa4-b190-d0eb9f67b847 req-6ff99a84-18d9-4186-b76f-bc7e82c5225c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Received unexpected event network-vif-plugged-b1a08ea3-3044-4caa-a944-744bd324adc9 for instance with vm_state shelved and task_state shelving_offloading.
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.231 248514 DEBUG nova.compute.manager [req-28339ddc-a027-4aa4-b190-d0eb9f67b847 req-6ff99a84-18d9-4186-b76f-bc7e82c5225c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.231 248514 DEBUG oslo_concurrency.lockutils [req-28339ddc-a027-4aa4-b190-d0eb9f67b847 req-6ff99a84-18d9-4186-b76f-bc7e82c5225c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.231 248514 DEBUG oslo_concurrency.lockutils [req-28339ddc-a027-4aa4-b190-d0eb9f67b847 req-6ff99a84-18d9-4186-b76f-bc7e82c5225c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.231 248514 DEBUG oslo_concurrency.lockutils [req-28339ddc-a027-4aa4-b190-d0eb9f67b847 req-6ff99a84-18d9-4186-b76f-bc7e82c5225c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.232 248514 DEBUG nova.compute.manager [req-28339ddc-a027-4aa4-b190-d0eb9f67b847 req-6ff99a84-18d9-4186-b76f-bc7e82c5225c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Processing event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.232 248514 DEBUG nova.compute.manager [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.236 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614455.2363746, e6e0fdaf-f934-4e56-8e59-4c4475bacd26 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.236 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] VM Resumed (Lifecycle Event)
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.244 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.248 248514 INFO nova.virt.libvirt.driver [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Instance spawned successfully.
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.248 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.265 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.268 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.277 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.277 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.278 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.278 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.278 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.279 248514 DEBUG nova.virt.libvirt.driver [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.322 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 432 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 11 MiB/s wr, 542 op/s
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.607 248514 INFO nova.compute.manager [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Took 22.46 seconds to spawn the instance on the hypervisor.
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.608 248514 DEBUG nova.compute.manager [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:35 compute-0 nova_compute[248510]: 2025-12-13 08:27:35.969 248514 INFO nova.compute.manager [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Took 25.78 seconds to build instance.
Dec 13 08:27:36 compute-0 nova_compute[248510]: 2025-12-13 08:27:36.170 248514 DEBUG nova.network.neutron [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Updating instance_info_cache with network_info: [{"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": "b1a08ea3-3044-4caa-a944-744bd324adc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:36 compute-0 nova_compute[248510]: 2025-12-13 08:27:36.391 248514 DEBUG oslo_concurrency.lockutils [None req-3ff0b608-1036-4acc-bbee-a7f2f9793d88 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:36 compute-0 nova_compute[248510]: 2025-12-13 08:27:36.428 248514 DEBUG oslo_concurrency.lockutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Releasing lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:36 compute-0 nova_compute[248510]: 2025-12-13 08:27:36.436 248514 DEBUG nova.network.neutron [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Updating instance_info_cache with network_info: [{"id": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "address": "fa:16:3e:1c:88:26", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeca7f353-34", "ovs_interfaceid": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:36 compute-0 nova_compute[248510]: 2025-12-13 08:27:36.440 248514 DEBUG nova.compute.manager [req-6c31e6f3-dda1-48da-b384-452cb1966f32 req-36887c58-b73a-478e-a3b8-3e1db0321e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-changed-eca7f353-3478-46ea-a63f-617a11a8f7ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:36 compute-0 nova_compute[248510]: 2025-12-13 08:27:36.440 248514 DEBUG nova.compute.manager [req-6c31e6f3-dda1-48da-b384-452cb1966f32 req-36887c58-b73a-478e-a3b8-3e1db0321e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Refreshing instance network info cache due to event network-changed-eca7f353-3478-46ea-a63f-617a11a8f7ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:27:36 compute-0 nova_compute[248510]: 2025-12-13 08:27:36.440 248514 DEBUG oslo_concurrency.lockutils [req-6c31e6f3-dda1-48da-b384-452cb1966f32 req-36887c58-b73a-478e-a3b8-3e1db0321e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-df25cd40-72b5-4e0f-90ec-8677c699d1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:36 compute-0 ceph-mon[76537]: pgmap v1891: 321 pgs: 321 active+clean; 432 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 11 MiB/s wr, 542 op/s
Dec 13 08:27:36 compute-0 nova_compute[248510]: 2025-12-13 08:27:36.503 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.281 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Releasing lock "refresh_cache-df25cd40-72b5-4e0f-90ec-8677c699d1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.281 248514 DEBUG nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Instance network_info: |[{"id": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "address": "fa:16:3e:1c:88:26", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeca7f353-34", "ovs_interfaceid": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.281 248514 DEBUG oslo_concurrency.lockutils [req-6c31e6f3-dda1-48da-b384-452cb1966f32 req-36887c58-b73a-478e-a3b8-3e1db0321e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-df25cd40-72b5-4e0f-90ec-8677c699d1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.282 248514 DEBUG nova.network.neutron [req-6c31e6f3-dda1-48da-b384-452cb1966f32 req-36887c58-b73a-478e-a3b8-3e1db0321e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Refreshing network info cache for port eca7f353-3478-46ea-a63f-617a11a8f7ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.285 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Start _get_guest_xml network_info=[{"id": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "address": "fa:16:3e:1c:88:26", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeca7f353-34", "ovs_interfaceid": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.290 248514 WARNING nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.302 248514 DEBUG nova.virt.libvirt.host [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.303 248514 DEBUG nova.virt.libvirt.host [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.309 248514 DEBUG nova.virt.libvirt.host [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.310 248514 DEBUG nova.virt.libvirt.host [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.311 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.311 248514 DEBUG nova.virt.hardware [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.312 248514 DEBUG nova.virt.hardware [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.312 248514 DEBUG nova.virt.hardware [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.312 248514 DEBUG nova.virt.hardware [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.313 248514 DEBUG nova.virt.hardware [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.313 248514 DEBUG nova.virt.hardware [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.314 248514 DEBUG nova.virt.hardware [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.314 248514 DEBUG nova.virt.hardware [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.314 248514 DEBUG nova.virt.hardware [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.315 248514 DEBUG nova.virt.hardware [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.315 248514 DEBUG nova.virt.hardware [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.318 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 432 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 7.0 MiB/s wr, 366 op/s
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.831 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614442.8283207, 2e52d555-08dd-49fb-a73a-eded391e154c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.832 248514 INFO nova.compute.manager [-] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] VM Stopped (Lifecycle Event)
Dec 13 08:27:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:27:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2365335499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.894 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:37 compute-0 ovn_controller[148476]: 2025-12-13T08:27:37Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:ce:b2 10.100.0.10
Dec 13 08:27:37 compute-0 ovn_controller[148476]: 2025-12-13T08:27:37Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:ce:b2 10.100.0.10
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.927 248514 DEBUG nova.storage.rbd_utils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:37 compute-0 nova_compute[248510]: 2025-12-13 08:27:37.933 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.042 248514 DEBUG nova.compute.manager [None req-4c025f5e-1857-4434-b5e6-1b038771d9e8 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.054 248514 DEBUG nova.compute.manager [None req-4c025f5e-1857-4434-b5e6-1b038771d9e8 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.328 248514 INFO nova.compute.manager [None req-4c025f5e-1857-4434-b5e6-1b038771d9e8 - - - - - -] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Dec 13 08:27:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:27:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1627262298' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:38 compute-0 ceph-mon[76537]: pgmap v1892: 321 pgs: 321 active+clean; 432 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 7.0 MiB/s wr, 366 op/s
Dec 13 08:27:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2365335499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1627262298' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.519 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.521 248514 DEBUG nova.virt.libvirt.vif [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-533202396',display_name='tempest-ImagesTestJSON-server-533202396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-533202396',id=50,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-e880c0lz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:23Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=df25cd40-72b5-4e0f-90ec-8677c699d1d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "address": "fa:16:3e:1c:88:26", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeca7f353-34", "ovs_interfaceid": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.521 248514 DEBUG nova.network.os_vif_util [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "address": "fa:16:3e:1c:88:26", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeca7f353-34", "ovs_interfaceid": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.523 248514 DEBUG nova.network.os_vif_util [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:88:26,bridge_name='br-int',has_traffic_filtering=True,id=eca7f353-3478-46ea-a63f-617a11a8f7ff,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeca7f353-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.524 248514 DEBUG nova.objects.instance [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid df25cd40-72b5-4e0f-90ec-8677c699d1d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.597 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:27:38 compute-0 nova_compute[248510]:   <uuid>df25cd40-72b5-4e0f-90ec-8677c699d1d3</uuid>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   <name>instance-00000032</name>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <nova:name>tempest-ImagesTestJSON-server-533202396</nova:name>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:27:37</nova:creationTime>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:27:38 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:27:38 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:27:38 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:27:38 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:27:38 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:27:38 compute-0 nova_compute[248510]:         <nova:user uuid="3b988c7ac9354c59aac9a9f41f83c20f">tempest-ImagesTestJSON-1234382421-project-member</nova:user>
Dec 13 08:27:38 compute-0 nova_compute[248510]:         <nova:project uuid="52e1055963294dbdb16cd95b466cd4d9">tempest-ImagesTestJSON-1234382421</nova:project>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:27:38 compute-0 nova_compute[248510]:         <nova:port uuid="eca7f353-3478-46ea-a63f-617a11a8f7ff">
Dec 13 08:27:38 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <system>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <entry name="serial">df25cd40-72b5-4e0f-90ec-8677c699d1d3</entry>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <entry name="uuid">df25cd40-72b5-4e0f-90ec-8677c699d1d3</entry>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     </system>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   <os>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   </os>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   <features>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   </features>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk">
Dec 13 08:27:38 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       </source>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:27:38 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk.config">
Dec 13 08:27:38 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       </source>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:27:38 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:1c:88:26"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <target dev="tapeca7f353-34"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/df25cd40-72b5-4e0f-90ec-8677c699d1d3/console.log" append="off"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <video>
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     </video>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:27:38 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:27:38 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:27:38 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:27:38 compute-0 nova_compute[248510]: </domain>
Dec 13 08:27:38 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.605 248514 DEBUG nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Preparing to wait for external event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.605 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.606 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.606 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.607 248514 DEBUG nova.virt.libvirt.vif [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-533202396',display_name='tempest-ImagesTestJSON-server-533202396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-533202396',id=50,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-e880c0lz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:23Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=df25cd40-72b5-4e0f-90ec-8677c699d1d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "address": "fa:16:3e:1c:88:26", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeca7f353-34", "ovs_interfaceid": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.607 248514 DEBUG nova.network.os_vif_util [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "address": "fa:16:3e:1c:88:26", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeca7f353-34", "ovs_interfaceid": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.608 248514 DEBUG nova.network.os_vif_util [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:88:26,bridge_name='br-int',has_traffic_filtering=True,id=eca7f353-3478-46ea-a63f-617a11a8f7ff,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeca7f353-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.609 248514 DEBUG os_vif [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:88:26,bridge_name='br-int',has_traffic_filtering=True,id=eca7f353-3478-46ea-a63f-617a11a8f7ff,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeca7f353-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.612 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.613 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.615 248514 DEBUG nova.compute.manager [req-02144d3d-1ed1-4259-af60-dcd368449c40 req-9973f01a-8f12-4275-8e99-09d018b8599c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.616 248514 DEBUG oslo_concurrency.lockutils [req-02144d3d-1ed1-4259-af60-dcd368449c40 req-9973f01a-8f12-4275-8e99-09d018b8599c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.616 248514 DEBUG oslo_concurrency.lockutils [req-02144d3d-1ed1-4259-af60-dcd368449c40 req-9973f01a-8f12-4275-8e99-09d018b8599c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.616 248514 DEBUG oslo_concurrency.lockutils [req-02144d3d-1ed1-4259-af60-dcd368449c40 req-9973f01a-8f12-4275-8e99-09d018b8599c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.617 248514 DEBUG nova.compute.manager [req-02144d3d-1ed1-4259-af60-dcd368449c40 req-9973f01a-8f12-4275-8e99-09d018b8599c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] No waiting events found dispatching network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.617 248514 WARNING nova.compute.manager [req-02144d3d-1ed1-4259-af60-dcd368449c40 req-9973f01a-8f12-4275-8e99-09d018b8599c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received unexpected event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 for instance with vm_state active and task_state None.
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.617 248514 DEBUG nova.compute.manager [req-02144d3d-1ed1-4259-af60-dcd368449c40 req-9973f01a-8f12-4275-8e99-09d018b8599c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.618 248514 DEBUG oslo_concurrency.lockutils [req-02144d3d-1ed1-4259-af60-dcd368449c40 req-9973f01a-8f12-4275-8e99-09d018b8599c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.618 248514 DEBUG oslo_concurrency.lockutils [req-02144d3d-1ed1-4259-af60-dcd368449c40 req-9973f01a-8f12-4275-8e99-09d018b8599c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.618 248514 DEBUG oslo_concurrency.lockutils [req-02144d3d-1ed1-4259-af60-dcd368449c40 req-9973f01a-8f12-4275-8e99-09d018b8599c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.619 248514 DEBUG nova.compute.manager [req-02144d3d-1ed1-4259-af60-dcd368449c40 req-9973f01a-8f12-4275-8e99-09d018b8599c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] No waiting events found dispatching network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.619 248514 WARNING nova.compute.manager [req-02144d3d-1ed1-4259-af60-dcd368449c40 req-9973f01a-8f12-4275-8e99-09d018b8599c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received unexpected event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be for instance with vm_state active and task_state None.
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.623 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.623 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeca7f353-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.624 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeca7f353-34, col_values=(('external_ids', {'iface-id': 'eca7f353-3478-46ea-a63f-617a11a8f7ff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1c:88:26', 'vm-uuid': 'df25cd40-72b5-4e0f-90ec-8677c699d1d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:38 compute-0 NetworkManager[50376]: <info>  [1765614458.6266] manager: (tapeca7f353-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.635 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.637 248514 INFO os_vif [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:88:26,bridge_name='br-int',has_traffic_filtering=True,id=eca7f353-3478-46ea-a63f-617a11a8f7ff,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeca7f353-34')
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.962 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.963 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.963 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No VIF found with MAC fa:16:3e:1c:88:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.963 248514 INFO nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Using config drive
Dec 13 08:27:38 compute-0 nova_compute[248510]: 2025-12-13 08:27:38.987 248514 DEBUG nova.storage.rbd_utils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.002 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.003 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.003 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.004 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.004 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.006 248514 INFO nova.compute.manager [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Terminating instance
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.007 248514 DEBUG nova.compute.manager [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:27:39 compute-0 kernel: tap57241dd9-27 (unregistering): left promiscuous mode
Dec 13 08:27:39 compute-0 NetworkManager[50376]: <info>  [1765614459.0636] device (tap57241dd9-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:27:39 compute-0 ovn_controller[148476]: 2025-12-13T08:27:39Z|00431|binding|INFO|Releasing lport 57241dd9-27dd-49bc-befb-1ef45674d6be from this chassis (sb_readonly=0)
Dec 13 08:27:39 compute-0 ovn_controller[148476]: 2025-12-13T08:27:39Z|00432|binding|INFO|Setting lport 57241dd9-27dd-49bc-befb-1ef45674d6be down in Southbound
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:39 compute-0 ovn_controller[148476]: 2025-12-13T08:27:39Z|00433|binding|INFO|Removing iface tap57241dd9-27 ovn-installed in OVS
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.086 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:6e:ff 10.100.0.12'], port_security=['fa:16:3e:1b:6e:ff 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '05e06a6b-e157-4cd9-88c0-889fa4cfd9fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=57241dd9-27dd-49bc-befb-1ef45674d6be) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.088 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 57241dd9-27dd-49bc-befb-1ef45674d6be in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 unbound from our chassis
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.091 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c63049d-63e9-47af-99e2-ce1403a42891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.095 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[70abf1b4-ba15-4c3c-a531-f37eb6b31d8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.097 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace which is not needed anymore
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.102 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:39 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Dec 13 08:27:39 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000002e.scope: Consumed 9.596s CPU time.
Dec 13 08:27:39 compute-0 systemd-machined[210538]: Machine qemu-55-instance-0000002e terminated.
Dec 13 08:27:39 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[299282]: [NOTICE]   (299286) : haproxy version is 2.8.14-c23fe91
Dec 13 08:27:39 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[299282]: [NOTICE]   (299286) : path to executable is /usr/sbin/haproxy
Dec 13 08:27:39 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[299282]: [WARNING]  (299286) : Exiting Master process...
Dec 13 08:27:39 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[299282]: [WARNING]  (299286) : Exiting Master process...
Dec 13 08:27:39 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[299282]: [ALERT]    (299286) : Current worker (299288) exited with code 143 (Terminated)
Dec 13 08:27:39 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[299282]: [WARNING]  (299286) : All workers exited. Exiting... (0)
Dec 13 08:27:39 compute-0 systemd[1]: libpod-7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea.scope: Deactivated successfully.
Dec 13 08:27:39 compute-0 podman[299409]: 2025-12-13 08:27:39.254104547 +0000 UTC m=+0.054157237 container died 7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.259 248514 INFO nova.virt.libvirt.driver [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Instance destroyed successfully.
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.260 248514 DEBUG nova.objects.instance [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'resources' on Instance uuid 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.278 248514 DEBUG nova.virt.libvirt.vif [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:26:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-571322410',display_name='tempest-ServerDiskConfigTestJSON-server-571322410',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-571322410',id=46,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-br4xsr3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:32Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=05e06a6b-e157-4cd9-88c0-889fa4cfd9fa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.278 248514 DEBUG nova.network.os_vif_util [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "57241dd9-27dd-49bc-befb-1ef45674d6be", "address": "fa:16:3e:1b:6e:ff", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57241dd9-27", "ovs_interfaceid": "57241dd9-27dd-49bc-befb-1ef45674d6be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.279 248514 DEBUG nova.network.os_vif_util [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.279 248514 DEBUG os_vif [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.286 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.287 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57241dd9-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.289 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.291 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:27:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea-userdata-shm.mount: Deactivated successfully.
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.297 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d21bb84f9a0a64f541c7ebac5b1a61f63bfb533a2f6ce624b6609e8215dc210f-merged.mount: Deactivated successfully.
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.302 248514 INFO os_vif [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:6e:ff,bridge_name='br-int',has_traffic_filtering=True,id=57241dd9-27dd-49bc-befb-1ef45674d6be,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57241dd9-27')
Dec 13 08:27:39 compute-0 podman[299409]: 2025-12-13 08:27:39.318240249 +0000 UTC m=+0.118292939 container cleanup 7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:27:39 compute-0 systemd[1]: libpod-conmon-7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea.scope: Deactivated successfully.
Dec 13 08:27:39 compute-0 podman[299472]: 2025-12-13 08:27:39.398441638 +0000 UTC m=+0.053420139 container remove 7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.407 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c244e42d-4bdd-496d-97dd-cf1ff1a2b981]: (4, ('Sat Dec 13 08:27:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea)\n7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea\nSat Dec 13 08:27:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea)\n7c652b35fc645dbfb8202400af2f00dee4d5b60f7087ab15c9213d1278da0dea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.410 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f59b8a8-c0c0-4051-b9fe-30b34817617d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.411 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.414 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:39 compute-0 kernel: tap6c63049d-60: left promiscuous mode
Dec 13 08:27:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 465 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 2.6 MiB/s wr, 332 op/s
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.437 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.440 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[564f0eb5-45e7-43f8-8419-3d4ba04182b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.451 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[440aab69-ec71-4cad-81f1-feaf0b761415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.452 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b7b8f2-a5eb-4cea-a7fb-bbf8b45b412b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.468 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f5406ea9-00c1-4450-8723-66eb5462819b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702703, 'reachable_time': 38998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299493, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c63049d\x2d63e9\x2d47af\x2d99e2\x2dce1403a42891.mount: Deactivated successfully.
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.470 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:27:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:39.470 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1174954f-1191-4bdd-8d71-b03434000b52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.836 248514 INFO nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Creating config drive at /var/lib/nova/instances/df25cd40-72b5-4e0f-90ec-8677c699d1d3/disk.config
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.841 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df25cd40-72b5-4e0f-90ec-8677c699d1d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp9d3bgey execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.899 248514 DEBUG nova.network.neutron [req-6c31e6f3-dda1-48da-b384-452cb1966f32 req-36887c58-b73a-478e-a3b8-3e1db0321e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Updated VIF entry in instance network info cache for port eca7f353-3478-46ea-a63f-617a11a8f7ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.900 248514 DEBUG nova.network.neutron [req-6c31e6f3-dda1-48da-b384-452cb1966f32 req-36887c58-b73a-478e-a3b8-3e1db0321e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Updating instance_info_cache with network_info: [{"id": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "address": "fa:16:3e:1c:88:26", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeca7f353-34", "ovs_interfaceid": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.916 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Instance destroyed successfully.
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.917 248514 DEBUG nova.objects.instance [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'resources' on Instance uuid 2e52d555-08dd-49fb-a73a-eded391e154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.924 248514 DEBUG oslo_concurrency.lockutils [req-6c31e6f3-dda1-48da-b384-452cb1966f32 req-36887c58-b73a-478e-a3b8-3e1db0321e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-df25cd40-72b5-4e0f-90ec-8677c699d1d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.931 248514 DEBUG nova.virt.libvirt.vif [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1441755634',display_name='tempest-DeleteServersTestJSON-server-1441755634',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1441755634',id=45,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-a2jlts09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member',shelved_at='2025-12-13T08:27:30.201992',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4cef902b-321f-4d4c-9540-06884bb7f860'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:24Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=2e52d555-08dd-49fb-a73a-eded391e154c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": "b1a08ea3-3044-4caa-a944-744bd324adc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.931 248514 DEBUG nova.network.os_vif_util [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": "b1a08ea3-3044-4caa-a944-744bd324adc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.933 248514 DEBUG nova.network.os_vif_util [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:8e:7b,bridge_name='br-int',has_traffic_filtering=True,id=b1a08ea3-3044-4caa-a944-744bd324adc9,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1a08ea3-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.934 248514 DEBUG os_vif [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:8e:7b,bridge_name='br-int',has_traffic_filtering=True,id=b1a08ea3-3044-4caa-a944-744bd324adc9,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1a08ea3-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.938 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1a08ea3-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.941 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.943 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.943 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.947 248514 INFO os_vif [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:8e:7b,bridge_name='br-int',has_traffic_filtering=True,id=b1a08ea3-3044-4caa-a944-744bd324adc9,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1a08ea3-30')
Dec 13 08:27:39 compute-0 nova_compute[248510]: 2025-12-13 08:27:39.985 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df25cd40-72b5-4e0f-90ec-8677c699d1d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp9d3bgey" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.010 248514 DEBUG nova.storage.rbd_utils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.013 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df25cd40-72b5-4e0f-90ec-8677c699d1d3/disk.config df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:27:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:27:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:27:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:27:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:27:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:27:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.352 248514 DEBUG oslo_concurrency.processutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df25cd40-72b5-4e0f-90ec-8677c699d1d3/disk.config df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.354 248514 INFO nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Deleting local config drive /var/lib/nova/instances/df25cd40-72b5-4e0f-90ec-8677c699d1d3/disk.config because it was imported into RBD.
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.402 248514 INFO nova.virt.libvirt.driver [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Deleting instance files /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_del
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.403 248514 INFO nova.virt.libvirt.driver [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Deletion of /var/lib/nova/instances/05e06a6b-e157-4cd9-88c0-889fa4cfd9fa_del complete
Dec 13 08:27:40 compute-0 NetworkManager[50376]: <info>  [1765614460.4327] manager: (tapeca7f353-34): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Dec 13 08:27:40 compute-0 systemd-udevd[299389]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:27:40 compute-0 kernel: tapeca7f353-34: entered promiscuous mode
Dec 13 08:27:40 compute-0 ovn_controller[148476]: 2025-12-13T08:27:40Z|00434|binding|INFO|Claiming lport eca7f353-3478-46ea-a63f-617a11a8f7ff for this chassis.
Dec 13 08:27:40 compute-0 ovn_controller[148476]: 2025-12-13T08:27:40Z|00435|binding|INFO|eca7f353-3478-46ea-a63f-617a11a8f7ff: Claiming fa:16:3e:1c:88:26 10.100.0.12
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.444 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:40 compute-0 NetworkManager[50376]: <info>  [1765614460.4521] device (tapeca7f353-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:27:40 compute-0 NetworkManager[50376]: <info>  [1765614460.4527] device (tapeca7f353-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.450 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:88:26 10.100.0.12'], port_security=['fa:16:3e:1c:88:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'df25cd40-72b5-4e0f-90ec-8677c699d1d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=eca7f353-3478-46ea-a63f-617a11a8f7ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.454 158419 INFO neutron.agent.ovn.metadata.agent [-] Port eca7f353-3478-46ea-a63f-617a11a8f7ff in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc bound to our chassis
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.455 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.464 248514 DEBUG nova.compute.manager [req-4c93ef52-9460-4440-a364-c82d5bd04072 req-7ff15e75-db56-4a8b-9665-f08f9985a045 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Received event network-changed-b1a08ea3-3044-4caa-a944-744bd324adc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.464 248514 DEBUG nova.compute.manager [req-4c93ef52-9460-4440-a364-c82d5bd04072 req-7ff15e75-db56-4a8b-9665-f08f9985a045 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Refreshing instance network info cache due to event network-changed-b1a08ea3-3044-4caa-a944-744bd324adc9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.465 248514 DEBUG oslo_concurrency.lockutils [req-4c93ef52-9460-4440-a364-c82d5bd04072 req-7ff15e75-db56-4a8b-9665-f08f9985a045 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.465 248514 DEBUG oslo_concurrency.lockutils [req-4c93ef52-9460-4440-a364-c82d5bd04072 req-7ff15e75-db56-4a8b-9665-f08f9985a045 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.465 248514 DEBUG nova.network.neutron [req-4c93ef52-9460-4440-a364-c82d5bd04072 req-7ff15e75-db56-4a8b-9665-f08f9985a045 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Refreshing network info cache for port b1a08ea3-3044-4caa-a944-744bd324adc9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.471 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5b508dcf-5c03-4e65-8c7e-47545e2c3a98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.472 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87bd91d0-e1 in ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.475 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87bd91d0-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.476 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75bf02d2-0ff1-4139-9150-fc1f50967b54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.477 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f3029034-c4b1-43cf-bb32-ed9980e5c5c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.480 248514 INFO nova.compute.manager [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Took 1.47 seconds to destroy the instance on the hypervisor.
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.480 248514 DEBUG oslo.service.loopingcall [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.481 248514 DEBUG nova.compute.manager [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.481 248514 DEBUG nova.network.neutron [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:27:40 compute-0 systemd-machined[210538]: New machine qemu-56-instance-00000032.
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.490 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[59b52a5b-9b09-42ba-8dd1-78e775a9370d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000032.
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.521 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f489e85-24db-4621-be46-766ab00c53f3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 ovn_controller[148476]: 2025-12-13T08:27:40Z|00436|binding|INFO|Setting lport eca7f353-3478-46ea-a63f-617a11a8f7ff ovn-installed in OVS
Dec 13 08:27:40 compute-0 ovn_controller[148476]: 2025-12-13T08:27:40Z|00437|binding|INFO|Setting lport eca7f353-3478-46ea-a63f-617a11a8f7ff up in Southbound
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.557 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[910733f2-5b37-491b-842a-e1994815105a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 NetworkManager[50376]: <info>  [1765614460.5652] manager: (tap87bd91d0-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/201)
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.564 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62cc267d-8597-4f31-9724-b0f641a0068a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.601 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[58c75f42-ee9a-47a2-a730-1e6b0c4b47ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.604 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[35cc5cde-76df-466b-b42b-d84ea0e89c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 NetworkManager[50376]: <info>  [1765614460.6345] device (tap87bd91d0-e0): carrier: link connected
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.644 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b039167a-3725-4f87-a33e-4286942aaf9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.677 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[094268ae-15ee-488b-848c-33957b644683]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703784, 'reachable_time': 44592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299603, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.696 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35160fdf-7602-4cb6-bbe1-9993a397b367]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:ec7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703784, 'tstamp': 703784}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299604, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.720 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8040b7b1-7c38-4989-b790-b992077a3490]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703784, 'reachable_time': 44592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299605, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 ceph-mon[76537]: pgmap v1893: 321 pgs: 321 active+clean; 465 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 2.6 MiB/s wr, 332 op/s
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.774 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0d05cb-ac8c-437e-96ec-c47dce05ab0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.842 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[324a32d3-6cb8-495f-abec-262e8fd12cdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.844 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.844 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.845 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87bd91d0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:40 compute-0 NetworkManager[50376]: <info>  [1765614460.8475] manager: (tap87bd91d0-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Dec 13 08:27:40 compute-0 kernel: tap87bd91d0-e0: entered promiscuous mode
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.847 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.851 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87bd91d0-e0, col_values=(('external_ids', {'iface-id': '3e59c36d-12c7-4d92-aa2f-8b6a795f3f98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:40 compute-0 ovn_controller[148476]: 2025-12-13T08:27:40Z|00438|binding|INFO|Releasing lport 3e59c36d-12c7-4d92-aa2f-8b6a795f3f98 from this chassis (sb_readonly=0)
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.852 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.853 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.854 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.855 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[97f8d586-bf5b-4c69-bc3c-a9fce426c2bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.856 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:27:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:40.859 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'env', 'PROCESS_TAG=haproxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87bd91d0-eead-49b6-8f92-f8d0dba555dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.875 248514 INFO nova.virt.libvirt.driver [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Deleting instance files /var/lib/nova/instances/2e52d555-08dd-49fb-a73a-eded391e154c_del
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.875 248514 INFO nova.virt.libvirt.driver [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Deletion of /var/lib/nova/instances/2e52d555-08dd-49fb-a73a-eded391e154c_del complete
Dec 13 08:27:40 compute-0 nova_compute[248510]: 2025-12-13 08:27:40.878 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.021 248514 INFO nova.scheduler.client.report [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Deleted allocations for instance 2e52d555-08dd-49fb-a73a-eded391e154c
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.095 248514 DEBUG oslo_concurrency.lockutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.095 248514 DEBUG oslo_concurrency.lockutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.152 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614461.1515229, df25cd40-72b5-4e0f-90ec-8677c699d1d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.152 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] VM Started (Lifecycle Event)
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.179 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.183 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614461.154637, df25cd40-72b5-4e0f-90ec-8677c699d1d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.183 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] VM Paused (Lifecycle Event)
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.212 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.218 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.236 248514 DEBUG oslo_concurrency.processutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.274 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:41 compute-0 podman[299678]: 2025-12-13 08:27:41.322291307 +0000 UTC m=+0.048694733 container create e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:27:41 compute-0 systemd[1]: Started libpod-conmon-e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d.scope.
Dec 13 08:27:41 compute-0 podman[299678]: 2025-12-13 08:27:41.296748747 +0000 UTC m=+0.023152203 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:27:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8666a1b8044e354b810eaa4281d1b07b7db2628215dd5a45bddcfe0eef16ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:41 compute-0 podman[299678]: 2025-12-13 08:27:41.413700112 +0000 UTC m=+0.140103568 container init e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 08:27:41 compute-0 podman[299678]: 2025-12-13 08:27:41.419229069 +0000 UTC m=+0.145632495 container start e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:27:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 465 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 2.3 MiB/s wr, 294 op/s
Dec 13 08:27:41 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [NOTICE]   (299716) : New worker (299718) forked
Dec 13 08:27:41 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [NOTICE]   (299716) : Loading success.
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:27:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:27:41 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658466818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.842 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.843 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.843 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.843 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.844 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] No waiting events found dispatching network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.844 248514 WARNING nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received unexpected event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be for instance with vm_state active and task_state deleting.
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.844 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-unplugged-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.845 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.845 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.845 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.845 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] No waiting events found dispatching network-vif-unplugged-57241dd9-27dd-49bc-befb-1ef45674d6be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.846 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-unplugged-57241dd9-27dd-49bc-befb-1ef45674d6be for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.846 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.846 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.847 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.849 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.850 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] No waiting events found dispatching network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.850 248514 WARNING nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received unexpected event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be for instance with vm_state active and task_state deleting.
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.851 248514 DEBUG oslo_concurrency.processutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.859 248514 DEBUG nova.compute.provider_tree [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.885 248514 DEBUG nova.scheduler.client.report [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:27:41 compute-0 nova_compute[248510]: 2025-12-13 08:27:41.926 248514 DEBUG oslo_concurrency.lockutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.006 248514 DEBUG oslo_concurrency.lockutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 32.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.092 248514 DEBUG nova.network.neutron [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:42 compute-0 rsyslogd[1002]: imjournal: 6433 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.133 248514 INFO nova.compute.manager [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Took 1.65 seconds to deallocate network for instance.
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.208 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.208 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.360 248514 DEBUG oslo_concurrency.processutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.606 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.606 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.629 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.725 248514 DEBUG nova.network.neutron [req-4c93ef52-9460-4440-a364-c82d5bd04072 req-7ff15e75-db56-4a8b-9665-f08f9985a045 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Updated VIF entry in instance network info cache for port b1a08ea3-3044-4caa-a944-744bd324adc9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.726 248514 DEBUG nova.network.neutron [req-4c93ef52-9460-4440-a364-c82d5bd04072 req-7ff15e75-db56-4a8b-9665-f08f9985a045 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Updating instance_info_cache with network_info: [{"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": null, "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.731 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.754 248514 DEBUG oslo_concurrency.lockutils [req-4c93ef52-9460-4440-a364-c82d5bd04072 req-7ff15e75-db56-4a8b-9665-f08f9985a045 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:42 compute-0 ceph-mon[76537]: pgmap v1894: 321 pgs: 321 active+clean; 465 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 2.3 MiB/s wr, 294 op/s
Dec 13 08:27:42 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1658466818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:42 compute-0 ovn_controller[148476]: 2025-12-13T08:27:42Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:73:cb 10.100.0.8
Dec 13 08:27:42 compute-0 ovn_controller[148476]: 2025-12-13T08:27:42Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:73:cb 10.100.0.8
Dec 13 08:27:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:27:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237591976' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.954 248514 DEBUG oslo_concurrency.processutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.961 248514 DEBUG nova.compute.provider_tree [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:27:42 compute-0 nova_compute[248510]: 2025-12-13 08:27:42.983 248514 DEBUG nova.scheduler.client.report [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.012 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.017 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.029 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.029 248514 INFO nova.compute.claims [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.057 248514 INFO nova.scheduler.client.report [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Deleted allocations for instance 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.140 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.270 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 439 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 3.3 MiB/s wr, 320 op/s
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:27:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3237591976' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:27:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4212434085' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.825 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.832 248514 DEBUG nova.compute.provider_tree [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.856 248514 DEBUG nova.scheduler.client.report [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.884 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.884 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.962 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.962 248514 DEBUG nova.network.neutron [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:27:43 compute-0 nova_compute[248510]: 2025-12-13 08:27:43.988 248514 INFO nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.008 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.081 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.082 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.082 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.084 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.554 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.555 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.556 248514 INFO nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Creating image(s)
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.581 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.607 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.631 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.635 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.678 248514 DEBUG nova.compute.manager [req-3622120c-43cd-44e5-bf98-824565320837 req-246a116d-e482-4903-a225-8d4c397a7e6a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-deleted-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.722 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.723 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.723 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.723 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.746 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.749 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:44 compute-0 ceph-mon[76537]: pgmap v1895: 321 pgs: 321 active+clean; 439 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 3.3 MiB/s wr, 320 op/s
Dec 13 08:27:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4212434085' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:44 compute-0 nova_compute[248510]: 2025-12-13 08:27:44.934 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.060 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.141 248514 DEBUG nova.policy [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5bc32e49dbd4372a006913090b9ef0f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.149 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] resizing rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.239 248514 DEBUG nova.objects.instance [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'migration_context' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.266 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.266 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Ensure instance console log exists: /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.267 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.267 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.267 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 372 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.3 MiB/s wr, 354 op/s
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.501 248514 DEBUG nova.compute.manager [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.501 248514 DEBUG oslo_concurrency.lockutils [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.501 248514 DEBUG oslo_concurrency.lockutils [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.503 248514 DEBUG oslo_concurrency.lockutils [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.503 248514 DEBUG nova.compute.manager [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Processing event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.503 248514 DEBUG nova.compute.manager [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.504 248514 DEBUG oslo_concurrency.lockutils [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.504 248514 DEBUG oslo_concurrency.lockutils [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.504 248514 DEBUG oslo_concurrency.lockutils [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.504 248514 DEBUG nova.compute.manager [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] No waiting events found dispatching network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.504 248514 WARNING nova.compute.manager [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received unexpected event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff for instance with vm_state building and task_state spawning.
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.505 248514 DEBUG nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.520 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614465.5168235, df25cd40-72b5-4e0f-90ec-8677c699d1d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.522 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] VM Resumed (Lifecycle Event)
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.524 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.531 248514 INFO nova.virt.libvirt.driver [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Instance spawned successfully.
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.532 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.557 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.560 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.570 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.571 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.572 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.572 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.573 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.573 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.597 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.766 248514 INFO nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Took 21.53 seconds to spawn the instance on the hypervisor.
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.767 248514 DEBUG nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:45 compute-0 nova_compute[248510]: 2025-12-13 08:27:45.845 248514 INFO nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Took 28.30 seconds to build instance.
Dec 13 08:27:46 compute-0 nova_compute[248510]: 2025-12-13 08:27:46.399 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 30.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Dec 13 08:27:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Dec 13 08:27:46 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Dec 13 08:27:46 compute-0 ceph-mon[76537]: pgmap v1896: 321 pgs: 321 active+clean; 372 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.3 MiB/s wr, 354 op/s
Dec 13 08:27:47 compute-0 nova_compute[248510]: 2025-12-13 08:27:47.155 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Updating instance_info_cache with network_info: [{"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:47 compute-0 nova_compute[248510]: 2025-12-13 08:27:47.185 248514 DEBUG nova.network.neutron [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Successfully created port: d8c7cad7-f601-4205-8838-a583b6e04b0f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:27:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 372 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.1 MiB/s wr, 304 op/s
Dec 13 08:27:47 compute-0 nova_compute[248510]: 2025-12-13 08:27:47.567 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:47 compute-0 nova_compute[248510]: 2025-12-13 08:27:47.567 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:27:47 compute-0 nova_compute[248510]: 2025-12-13 08:27:47.568 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:27:47 compute-0 ovn_controller[148476]: 2025-12-13T08:27:47Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:bc:28 10.100.0.4
Dec 13 08:27:47 compute-0 ovn_controller[148476]: 2025-12-13T08:27:47Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:bc:28 10.100.0.4
Dec 13 08:27:47 compute-0 nova_compute[248510]: 2025-12-13 08:27:47.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:27:47 compute-0 nova_compute[248510]: 2025-12-13 08:27:47.836 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:47 compute-0 nova_compute[248510]: 2025-12-13 08:27:47.836 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:47 compute-0 nova_compute[248510]: 2025-12-13 08:27:47.837 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:47 compute-0 nova_compute[248510]: 2025-12-13 08:27:47.837 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:27:47 compute-0 nova_compute[248510]: 2025-12-13 08:27:47.837 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:47 compute-0 ceph-mon[76537]: osdmap e209: 3 total, 3 up, 3 in
Dec 13 08:27:47 compute-0 ceph-mon[76537]: pgmap v1898: 321 pgs: 321 active+clean; 372 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.1 MiB/s wr, 304 op/s
Dec 13 08:27:48 compute-0 podman[299941]: 2025-12-13 08:27:48.015078252 +0000 UTC m=+0.092584016 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3)
Dec 13 08:27:48 compute-0 podman[299942]: 2025-12-13 08:27:48.030530133 +0000 UTC m=+0.101349682 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 08:27:48 compute-0 podman[299940]: 2025-12-13 08:27:48.043012231 +0000 UTC m=+0.121704204 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:27:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:27:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/537386707' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:48 compute-0 nova_compute[248510]: 2025-12-13 08:27:48.453 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/537386707' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:48 compute-0 nova_compute[248510]: 2025-12-13 08:27:48.956 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:27:48 compute-0 nova_compute[248510]: 2025-12-13 08:27:48.957 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:27:48 compute-0 nova_compute[248510]: 2025-12-13 08:27:48.963 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:27:48 compute-0 nova_compute[248510]: 2025-12-13 08:27:48.963 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:27:48 compute-0 nova_compute[248510]: 2025-12-13 08:27:48.967 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:27:48 compute-0 nova_compute[248510]: 2025-12-13 08:27:48.968 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:27:48 compute-0 nova_compute[248510]: 2025-12-13 08:27:48.974 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:27:48 compute-0 nova_compute[248510]: 2025-12-13 08:27:48.975 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.177 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.179 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3444MB free_disk=59.85529749840498GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.179 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.179 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.342 248514 DEBUG nova.network.neutron [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Successfully updated port: d8c7cad7-f601-4205-8838-a583b6e04b0f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.387 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 98240df6-1cba-40e1-833c-24611270ed83 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.388 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 41602b99-e7f2-450c-885e-51d07a1236d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.388 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance e6e0fdaf-f934-4e56-8e59-4c4475bacd26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.388 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance df25cd40-72b5-4e0f-90ec-8677c699d1d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.388 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.389 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.389 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1216MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.424 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "refresh_cache-0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.424 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquired lock "refresh_cache-0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.425 248514 DEBUG nova.network.neutron [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:27:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 369 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.2 MiB/s wr, 350 op/s
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.519 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.658 248514 DEBUG nova.network.neutron [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.968 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.973 248514 DEBUG nova.compute.manager [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:49 compute-0 ceph-mon[76537]: pgmap v1899: 321 pgs: 321 active+clean; 369 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.2 MiB/s wr, 350 op/s
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.991 248514 DEBUG nova.compute.manager [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-changed-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.991 248514 DEBUG nova.compute.manager [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Refreshing instance network info cache due to event network-changed-d8c7cad7-f601-4205-8838-a583b6e04b0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:27:49 compute-0 nova_compute[248510]: 2025-12-13 08:27:49.991 248514 DEBUG oslo_concurrency.lockutils [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.030 248514 INFO nova.compute.manager [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] instance snapshotting
Dec 13 08:27:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:27:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606732519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:27:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Dec 13 08:27:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Dec 13 08:27:50 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.140 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.146 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.169 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.201 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.202 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.310 248514 INFO nova.virt.libvirt.driver [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Beginning live snapshot process
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.491 248514 DEBUG nova.virt.libvirt.imagebackend [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.540 248514 DEBUG nova.network.neutron [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Updating instance_info_cache with network_info: [{"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.570 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Releasing lock "refresh_cache-0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.571 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance network_info: |[{"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.571 248514 DEBUG oslo_concurrency.lockutils [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.572 248514 DEBUG nova.network.neutron [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Refreshing network info cache for port d8c7cad7-f601-4205-8838-a583b6e04b0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.575 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Start _get_guest_xml network_info=[{"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.579 248514 WARNING nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.584 248514 DEBUG nova.virt.libvirt.host [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.585 248514 DEBUG nova.virt.libvirt.host [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.588 248514 DEBUG nova.virt.libvirt.host [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.589 248514 DEBUG nova.virt.libvirt.host [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.589 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.589 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.590 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.590 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.590 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.591 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.591 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.591 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.591 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.592 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.592 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.592 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.595 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:50 compute-0 nova_compute[248510]: 2025-12-13 08:27:50.737 248514 DEBUG nova.storage.rbd_utils [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(a2cef009714948e589373b905ede49bc) on rbd image(df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:27:50 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2606732519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:50 compute-0 ceph-mon[76537]: osdmap e210: 3 total, 3 up, 3 in
Dec 13 08:27:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Dec 13 08:27:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Dec 13 08:27:51 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Dec 13 08:27:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:27:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/731523404' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.173 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.205 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.211 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.251 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.252 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.253 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.253 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.270 248514 DEBUG nova.storage.rbd_utils [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] cloning vms/df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk@a2cef009714948e589373b905ede49bc to images/e4e9ee37-4059-46f1-9bf8-83a72d0403e7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.379 248514 DEBUG nova.storage.rbd_utils [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] flattening images/e4e9ee37-4059-46f1-9bf8-83a72d0403e7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:27:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 369 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.8 MiB/s wr, 331 op/s
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.675 248514 DEBUG nova.storage.rbd_utils [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] removing snapshot(a2cef009714948e589373b905ede49bc) on rbd image(df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:27:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:27:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1980582512' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.827 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.829 248514 DEBUG nova.virt.libvirt.vif [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1729116913',display_name='tempest-ServerDiskConfigTestJSON-server-1729116913',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1729116913',id=51,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-3hps9qrz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:44Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=0ccf9d68-ffc2-4f17-a2e7-effa18fc607c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.829 248514 DEBUG nova.network.os_vif_util [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.831 248514 DEBUG nova.network.os_vif_util [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.832 248514 DEBUG nova.objects.instance [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.890 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:27:51 compute-0 nova_compute[248510]:   <uuid>0ccf9d68-ffc2-4f17-a2e7-effa18fc607c</uuid>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   <name>instance-00000033</name>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1729116913</nova:name>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:27:50</nova:creationTime>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:27:51 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:27:51 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:27:51 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:27:51 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:27:51 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:27:51 compute-0 nova_compute[248510]:         <nova:user uuid="a5bc32e49dbd4372a006913090b9ef0f">tempest-ServerDiskConfigTestJSON-167971983-project-member</nova:user>
Dec 13 08:27:51 compute-0 nova_compute[248510]:         <nova:project uuid="9aea752cb9b648a7aa9b3f634ced797e">tempest-ServerDiskConfigTestJSON-167971983</nova:project>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:27:51 compute-0 nova_compute[248510]:         <nova:port uuid="d8c7cad7-f601-4205-8838-a583b6e04b0f">
Dec 13 08:27:51 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <system>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <entry name="serial">0ccf9d68-ffc2-4f17-a2e7-effa18fc607c</entry>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <entry name="uuid">0ccf9d68-ffc2-4f17-a2e7-effa18fc607c</entry>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     </system>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   <os>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   </os>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   <features>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   </features>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk">
Dec 13 08:27:51 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       </source>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:27:51 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config">
Dec 13 08:27:51 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       </source>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:27:51 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:45:57:38"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <target dev="tapd8c7cad7-f6"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/console.log" append="off"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <video>
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     </video>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:27:51 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:27:51 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:27:51 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:27:51 compute-0 nova_compute[248510]: </domain>
Dec 13 08:27:51 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.891 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Preparing to wait for external event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.895 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.896 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.896 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.897 248514 DEBUG nova.virt.libvirt.vif [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1729116913',display_name='tempest-ServerDiskConfigTestJSON-server-1729116913',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1729116913',id=51,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-3hps9qrz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:44Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=0ccf9d68-ffc2-4f17-a2e7-effa18fc607c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.897 248514 DEBUG nova.network.os_vif_util [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.898 248514 DEBUG nova.network.os_vif_util [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.899 248514 DEBUG os_vif [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.900 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.900 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.901 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.905 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.906 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8c7cad7-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.906 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8c7cad7-f6, col_values=(('external_ids', {'iface-id': 'd8c7cad7-f601-4205-8838-a583b6e04b0f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:57:38', 'vm-uuid': '0ccf9d68-ffc2-4f17-a2e7-effa18fc607c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.908 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:51 compute-0 NetworkManager[50376]: <info>  [1765614471.9094] manager: (tapd8c7cad7-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.911 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.921 248514 INFO os_vif [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6')
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.994 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.996 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.996 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No VIF found with MAC fa:16:3e:45:57:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:27:51 compute-0 nova_compute[248510]: 2025-12-13 08:27:51.996 248514 INFO nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Using config drive
Dec 13 08:27:52 compute-0 nova_compute[248510]: 2025-12-13 08:27:52.018 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Dec 13 08:27:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Dec 13 08:27:52 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Dec 13 08:27:52 compute-0 ceph-mon[76537]: osdmap e211: 3 total, 3 up, 3 in
Dec 13 08:27:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/731523404' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:52 compute-0 ceph-mon[76537]: pgmap v1902: 321 pgs: 321 active+clean; 369 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.8 MiB/s wr, 331 op/s
Dec 13 08:27:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1980582512' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:27:52 compute-0 nova_compute[248510]: 2025-12-13 08:27:52.179 248514 DEBUG nova.storage.rbd_utils [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(snap) on rbd image(e4e9ee37-4059-46f1-9bf8-83a72d0403e7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:27:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Dec 13 08:27:53 compute-0 ceph-mon[76537]: osdmap e212: 3 total, 3 up, 3 in
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.161 248514 INFO nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Creating config drive at /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config
Dec 13 08:27:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.171 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8_6a34e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:53 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.264 248514 DEBUG nova.network.neutron [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Updated VIF entry in instance network info cache for port d8c7cad7-f601-4205-8838-a583b6e04b0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.265 248514 DEBUG nova.network.neutron [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Updating instance_info_cache with network_info: [{"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.293 248514 DEBUG oslo_concurrency.lockutils [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.323 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8_6a34e" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.356 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.362 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 402 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.7 MiB/s wr, 145 op/s
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.530 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.531 248514 INFO nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deleting local config drive /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config because it was imported into RBD.
Dec 13 08:27:53 compute-0 kernel: tapd8c7cad7-f6: entered promiscuous mode
Dec 13 08:27:53 compute-0 NetworkManager[50376]: <info>  [1765614473.5821] manager: (tapd8c7cad7-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Dec 13 08:27:53 compute-0 ovn_controller[148476]: 2025-12-13T08:27:53Z|00439|binding|INFO|Claiming lport d8c7cad7-f601-4205-8838-a583b6e04b0f for this chassis.
Dec 13 08:27:53 compute-0 ovn_controller[148476]: 2025-12-13T08:27:53Z|00440|binding|INFO|d8c7cad7-f601-4205-8838-a583b6e04b0f: Claiming fa:16:3e:45:57:38 10.100.0.8
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.591 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.593 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:57:38 10.100.0.8'], port_security=['fa:16:3e:45:57:38 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0ccf9d68-ffc2-4f17-a2e7-effa18fc607c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d8c7cad7-f601-4205-8838-a583b6e04b0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.594 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d8c7cad7-f601-4205-8838-a583b6e04b0f in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 bound to our chassis
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.596 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.604 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.605 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:53 compute-0 ovn_controller[148476]: 2025-12-13T08:27:53Z|00441|binding|INFO|Setting lport d8c7cad7-f601-4205-8838-a583b6e04b0f ovn-installed in OVS
Dec 13 08:27:53 compute-0 ovn_controller[148476]: 2025-12-13T08:27:53Z|00442|binding|INFO|Setting lport d8c7cad7-f601-4205-8838-a583b6e04b0f up in Southbound
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.612 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94fd69fc-7662-4023-8768-03cfc294a657]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.614 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c63049d-61 in ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.616 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c63049d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.616 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d57901c-29aa-41fe-9488-86c3d732c4ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.616 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.617 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bdbc2735-bc23-4827-9fd6-df715a556843]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 systemd-udevd[300320]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.631 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f41552-b101-43a5-a62c-cc1bf88bdd8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 systemd-machined[210538]: New machine qemu-57-instance-00000033.
Dec 13 08:27:53 compute-0 NetworkManager[50376]: <info>  [1765614473.6467] device (tapd8c7cad7-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.645 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:27:53 compute-0 NetworkManager[50376]: <info>  [1765614473.6478] device (tapd8c7cad7-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:27:53 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000033.
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.649 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6642c099-2bf9-42a4-b59a-e60095f531c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.687 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc26f04-c507-4391-b453-07a9bd2fa919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 systemd-udevd[300323]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:27:53 compute-0 NetworkManager[50376]: <info>  [1765614473.6964] manager: (tap6c63049d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/205)
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.697 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd30a55-1f7d-45d3-bedd-926a47cc7784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.739 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa4c981-48a4-4cba-be09-550e39cfd7e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.743 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7295da1a-7727-4737-8ec0-7c25b0334d96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 NetworkManager[50376]: <info>  [1765614473.7757] device (tap6c63049d-60): carrier: link connected
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.787 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[06b4d152-0631-4471-86e6-9f4e9ae2cf82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.808 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[24786ef4-ded4-48a5-90a6-c3546697a5a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705099, 'reachable_time': 15368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300351, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.827 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9abc9395-0035-465e-b634-63d03554c922]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:c2f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705099, 'tstamp': 705099}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300352, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.848 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d036c907-fb8d-443d-ac82-a1bb98f5646b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705099, 'reachable_time': 15368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300353, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.885 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c84c08f4-248b-4578-8488-b985db772c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.992 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dfed81eb-6c9b-4bd4-93e0-793a055d4ca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.994 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.994 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.995 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c63049d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:53 compute-0 nova_compute[248510]: 2025-12-13 08:27:53.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:53 compute-0 kernel: tap6c63049d-60: entered promiscuous mode
Dec 13 08:27:53 compute-0 NetworkManager[50376]: <info>  [1765614473.9996] manager: (tap6c63049d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:54.003 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c63049d-60, col_values=(('external_ids', {'iface-id': 'b410790c-12b7-4a29-87e5-13a29af9c319'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:54 compute-0 ovn_controller[148476]: 2025-12-13T08:27:54Z|00443|binding|INFO|Releasing lport b410790c-12b7-4a29-87e5-13a29af9c319 from this chassis (sb_readonly=0)
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.005 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:54.007 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:54.008 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb855859-6d18-42d3-9405-0e1de83c8843]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:54.009 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:27:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:54.010 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'env', 'PROCESS_TAG=haproxy-6c63049d-63e9-47af-99e2-ce1403a42891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c63049d-63e9-47af-99e2-ce1403a42891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.022 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:54 compute-0 ceph-mon[76537]: osdmap e213: 3 total, 3 up, 3 in
Dec 13 08:27:54 compute-0 ceph-mon[76537]: pgmap v1905: 321 pgs: 321 active+clean; 402 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.7 MiB/s wr, 145 op/s
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.191 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614474.1906586, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.191 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Started (Lifecycle Event)
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.252 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614459.250947, 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.253 248514 INFO nova.compute.manager [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] VM Stopped (Lifecycle Event)
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.322 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.329 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614474.1931643, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.330 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Paused (Lifecycle Event)
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.338 248514 DEBUG nova.compute.manager [None req-e612e9e0-5e5b-4ebf-ac6e-293bb5ff1b0c - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.426 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.430 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.436 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.437 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.452 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.452 248514 INFO nova.compute.claims [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:27:54 compute-0 podman[300425]: 2025-12-13 08:27:54.457700404 +0000 UTC m=+0.070152282 container create 7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.475 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:54 compute-0 systemd[1]: Started libpod-conmon-7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759.scope.
Dec 13 08:27:54 compute-0 podman[300425]: 2025-12-13 08:27:54.423733905 +0000 UTC m=+0.036185803 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:27:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a1045c2667f2928359016f34c22c18152582122077ea101da9c4a143c306b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:27:54 compute-0 podman[300425]: 2025-12-13 08:27:54.573201733 +0000 UTC m=+0.185653621 container init 7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 08:27:54 compute-0 podman[300425]: 2025-12-13 08:27:54.580701238 +0000 UTC m=+0.193153106 container start 7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:27:54 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [NOTICE]   (300444) : New worker (300446) forked
Dec 13 08:27:54 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [NOTICE]   (300444) : Loading success.
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.743 248514 DEBUG oslo_concurrency.lockutils [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.744 248514 DEBUG oslo_concurrency.lockutils [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.744 248514 DEBUG nova.compute.manager [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.752 248514 DEBUG nova.compute.manager [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.753 248514 DEBUG nova.objects.instance [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'flavor' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.814 248514 DEBUG nova.virt.libvirt.driver [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:27:54 compute-0 nova_compute[248510]: 2025-12-13 08:27:54.972 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:27:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:55.409 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:55.411 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:55.411 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 418 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.1 MiB/s wr, 132 op/s
Dec 13 08:27:55 compute-0 nova_compute[248510]: 2025-12-13 08:27:55.647 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:27:56 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1999904515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.282 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.288 248514 DEBUG nova.compute.provider_tree [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:27:56 compute-0 ceph-mon[76537]: pgmap v1906: 321 pgs: 321 active+clean; 418 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.1 MiB/s wr, 132 op/s
Dec 13 08:27:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1999904515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.540 248514 DEBUG nova.compute.manager [req-61b934ce-3bda-4893-9325-df9a558a73f5 req-4eb31e31-222f-47de-9f95-3712cc3ec4e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.541 248514 DEBUG oslo_concurrency.lockutils [req-61b934ce-3bda-4893-9325-df9a558a73f5 req-4eb31e31-222f-47de-9f95-3712cc3ec4e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.541 248514 DEBUG oslo_concurrency.lockutils [req-61b934ce-3bda-4893-9325-df9a558a73f5 req-4eb31e31-222f-47de-9f95-3712cc3ec4e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.541 248514 DEBUG oslo_concurrency.lockutils [req-61b934ce-3bda-4893-9325-df9a558a73f5 req-4eb31e31-222f-47de-9f95-3712cc3ec4e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.542 248514 DEBUG nova.compute.manager [req-61b934ce-3bda-4893-9325-df9a558a73f5 req-4eb31e31-222f-47de-9f95-3712cc3ec4e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Processing event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.544 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.548 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614476.5483558, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.549 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Resumed (Lifecycle Event)
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.553 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.556 248514 INFO nova.virt.libvirt.driver [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance spawned successfully.
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.557 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.791 248514 INFO nova.virt.libvirt.driver [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Snapshot image upload complete
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.792 248514 INFO nova.compute.manager [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Took 6.76 seconds to snapshot the instance on the hypervisor.
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.839 248514 DEBUG nova.scheduler.client.report [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.911 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.914 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.936 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.938 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.944 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.944 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.945 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.945 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.945 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.947 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.953 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:56 compute-0 nova_compute[248510]: 2025-12-13 08:27:56.953 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.055 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:27:57 compute-0 kernel: tapbdc94f2e-b1 (unregistering): left promiscuous mode
Dec 13 08:27:57 compute-0 NetworkManager[50376]: <info>  [1765614477.0964] device (tapbdc94f2e-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:57 compute-0 ovn_controller[148476]: 2025-12-13T08:27:57Z|00444|binding|INFO|Releasing lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 from this chassis (sb_readonly=0)
Dec 13 08:27:57 compute-0 ovn_controller[148476]: 2025-12-13T08:27:57Z|00445|binding|INFO|Setting lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 down in Southbound
Dec 13 08:27:57 compute-0 ovn_controller[148476]: 2025-12-13T08:27:57Z|00446|binding|INFO|Removing iface tapbdc94f2e-b1 ovn-installed in OVS
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.120 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.129 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:b2 10.100.0.10'], port_security=['fa:16:3e:d6:ce:b2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '98240df6-1cba-40e1-833c-24611270ed83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bdc94f2e-b14e-4e39-bea0-978ff56ff722) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.130 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bdc94f2e-b14e-4e39-bea0-978ff56ff722 in datapath 7576f079-0439-46aa-98af-04f80cd254ca unbound from our chassis
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.132 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.142 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.147 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:27:57 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.148 248514 DEBUG nova.network.neutron [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:27:57 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002f.scope: Consumed 13.739s CPU time.
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.152 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b6ced0-56ec-43ad-a450-e886a338b6df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:57 compute-0 systemd-machined[210538]: Machine qemu-52-instance-0000002f terminated.
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.155 248514 INFO nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Took 12.60 seconds to spawn the instance on the hypervisor.
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.155 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.198 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e6625854-b57b-4aba-894f-ff3b4e08ee0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.202 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7e8c47-31df-43d5-b63e-4997b0eb440b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.252 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[36ec10aa-0b9a-4290-ab6a-7aed8815f6c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.271 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[45d76601-d2f8-4f7d-84cb-6344876f578e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300488, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.288 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3633d8-1e1b-4bf0-8ba9-88c362813be0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300489, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300489, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.290 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.293 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.298 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.298 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.298 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.298 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:27:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.299 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.324 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 418 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 111 op/s
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.515 248514 INFO nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.610 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.681 248514 INFO nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Took 14.97 seconds to build instance.
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.730 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.770 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.772 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.772 248514 INFO nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Creating image(s)
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.796 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.865 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.897 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.901 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.941 248514 DEBUG nova.policy [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b6f7967ce16c45d394188c1302b02907', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.944 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.950 248514 INFO nova.virt.libvirt.driver [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance shutdown successfully after 3 seconds.
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.959 248514 INFO nova.virt.libvirt.driver [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance destroyed successfully.
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.960 248514 DEBUG nova.objects.instance [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'numa_topology' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.997 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.997 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.998 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:57 compute-0 nova_compute[248510]: 2025-12-13 08:27:57.999 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:58 compute-0 nova_compute[248510]: 2025-12-13 08:27:58.021 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:27:58 compute-0 nova_compute[248510]: 2025-12-13 08:27:58.025 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 678d2db2-0536-4744-b65c-f0a5852f35e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:27:58 compute-0 nova_compute[248510]: 2025-12-13 08:27:58.253 248514 DEBUG nova.compute.manager [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:27:58 compute-0 nova_compute[248510]: 2025-12-13 08:27:58.296 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 678d2db2-0536-4744-b65c-f0a5852f35e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:27:58 compute-0 nova_compute[248510]: 2025-12-13 08:27:58.361 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] resizing rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:27:58 compute-0 ceph-mon[76537]: pgmap v1907: 321 pgs: 321 active+clean; 418 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 111 op/s
Dec 13 08:27:58 compute-0 nova_compute[248510]: 2025-12-13 08:27:58.506 248514 DEBUG nova.objects.instance [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'migration_context' on Instance uuid 678d2db2-0536-4744-b65c-f0a5852f35e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:27:58 compute-0 nova_compute[248510]: 2025-12-13 08:27:58.899 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:27:58 compute-0 nova_compute[248510]: 2025-12-13 08:27:58.900 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Ensure instance console log exists: /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:27:58 compute-0 nova_compute[248510]: 2025-12-13 08:27:58.900 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:58 compute-0 nova_compute[248510]: 2025-12-13 08:27:58.901 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:58 compute-0 nova_compute[248510]: 2025-12-13 08:27:58.901 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:58 compute-0 nova_compute[248510]: 2025-12-13 08:27:58.935 248514 DEBUG oslo_concurrency.lockutils [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 4.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 448 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.4 MiB/s wr, 287 op/s
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.635 248514 DEBUG nova.compute.manager [req-afb43441-df74-428d-a6fa-4c5d6f60c85e req-5a7b18c6-fce1-485d-bb08-846a0c415385 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-unplugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.635 248514 DEBUG oslo_concurrency.lockutils [req-afb43441-df74-428d-a6fa-4c5d6f60c85e req-5a7b18c6-fce1-485d-bb08-846a0c415385 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.635 248514 DEBUG oslo_concurrency.lockutils [req-afb43441-df74-428d-a6fa-4c5d6f60c85e req-5a7b18c6-fce1-485d-bb08-846a0c415385 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.636 248514 DEBUG oslo_concurrency.lockutils [req-afb43441-df74-428d-a6fa-4c5d6f60c85e req-5a7b18c6-fce1-485d-bb08-846a0c415385 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.636 248514 DEBUG nova.compute.manager [req-afb43441-df74-428d-a6fa-4c5d6f60c85e req-5a7b18c6-fce1-485d-bb08-846a0c415385 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-unplugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.636 248514 WARNING nova.compute.manager [req-afb43441-df74-428d-a6fa-4c5d6f60c85e req-5a7b18c6-fce1-485d-bb08-846a0c415385 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received unexpected event network-vif-unplugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with vm_state stopped and task_state None.
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.975 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.995 248514 DEBUG nova.compute.manager [req-675d4482-7869-4cfb-8db1-d702200e487d req-8f51ea0a-4506-4160-b235-f62b9412ec4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.996 248514 DEBUG oslo_concurrency.lockutils [req-675d4482-7869-4cfb-8db1-d702200e487d req-8f51ea0a-4506-4160-b235-f62b9412ec4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.996 248514 DEBUG oslo_concurrency.lockutils [req-675d4482-7869-4cfb-8db1-d702200e487d req-8f51ea0a-4506-4160-b235-f62b9412ec4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.996 248514 DEBUG oslo_concurrency.lockutils [req-675d4482-7869-4cfb-8db1-d702200e487d req-8f51ea0a-4506-4160-b235-f62b9412ec4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.996 248514 DEBUG nova.compute.manager [req-675d4482-7869-4cfb-8db1-d702200e487d req-8f51ea0a-4506-4160-b235-f62b9412ec4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] No waiting events found dispatching network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:27:59 compute-0 nova_compute[248510]: 2025-12-13 08:27:59.997 248514 WARNING nova.compute.manager [req-675d4482-7869-4cfb-8db1-d702200e487d req-8f51ea0a-4506-4160-b235-f62b9412ec4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received unexpected event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f for instance with vm_state active and task_state None.
Dec 13 08:28:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:28:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Dec 13 08:28:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Dec 13 08:28:00 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Dec 13 08:28:00 compute-0 ceph-mon[76537]: pgmap v1908: 321 pgs: 321 active+clean; 448 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.4 MiB/s wr, 287 op/s
Dec 13 08:28:00 compute-0 ceph-mon[76537]: osdmap e214: 3 total, 3 up, 3 in
Dec 13 08:28:00 compute-0 nova_compute[248510]: 2025-12-13 08:28:00.813 248514 DEBUG nova.network.neutron [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Successfully created port: 3a8efc9e-7582-4b17-ab0e-b248e09932b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:28:00 compute-0 ovn_controller[148476]: 2025-12-13T08:28:00Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1c:88:26 10.100.0.12
Dec 13 08:28:00 compute-0 ovn_controller[148476]: 2025-12-13T08:28:00Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1c:88:26 10.100.0.12
Dec 13 08:28:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 448 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 208 op/s
Dec 13 08:28:01 compute-0 nova_compute[248510]: 2025-12-13 08:28:01.915 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:02 compute-0 ceph-mon[76537]: pgmap v1910: 321 pgs: 321 active+clean; 448 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 208 op/s
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.274 248514 DEBUG nova.compute.manager [req-a225be72-2e73-41af-9e0e-bc19e38bfd14 req-9d424ea1-26b5-40ab-a375-9db3be78e529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.274 248514 DEBUG oslo_concurrency.lockutils [req-a225be72-2e73-41af-9e0e-bc19e38bfd14 req-9d424ea1-26b5-40ab-a375-9db3be78e529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.275 248514 DEBUG oslo_concurrency.lockutils [req-a225be72-2e73-41af-9e0e-bc19e38bfd14 req-9d424ea1-26b5-40ab-a375-9db3be78e529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.275 248514 DEBUG oslo_concurrency.lockutils [req-a225be72-2e73-41af-9e0e-bc19e38bfd14 req-9d424ea1-26b5-40ab-a375-9db3be78e529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.275 248514 DEBUG nova.compute.manager [req-a225be72-2e73-41af-9e0e-bc19e38bfd14 req-9d424ea1-26b5-40ab-a375-9db3be78e529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.275 248514 WARNING nova.compute.manager [req-a225be72-2e73-41af-9e0e-bc19e38bfd14 req-9d424ea1-26b5-40ab-a375-9db3be78e529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received unexpected event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with vm_state stopped and task_state None.
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.277 248514 DEBUG nova.network.neutron [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Successfully updated port: 3a8efc9e-7582-4b17-ab0e-b248e09932b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.389 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "refresh_cache-678d2db2-0536-4744-b65c-f0a5852f35e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.390 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquired lock "refresh_cache-678d2db2-0536-4744-b65c-f0a5852f35e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.390 248514 DEBUG nova.network.neutron [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:28:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 488 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.3 MiB/s wr, 207 op/s
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.682 248514 DEBUG nova.network.neutron [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.740 248514 DEBUG nova.objects.instance [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'flavor' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.784 248514 DEBUG oslo_concurrency.lockutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.785 248514 DEBUG oslo_concurrency.lockutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquired lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.785 248514 DEBUG nova.network.neutron [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.785 248514 DEBUG nova.objects.instance [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'info_cache' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.791 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:03 compute-0 nova_compute[248510]: 2025-12-13 08:28:03.792 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:04 compute-0 nova_compute[248510]: 2025-12-13 08:28:04.071 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:28:04 compute-0 nova_compute[248510]: 2025-12-13 08:28:04.242 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:04 compute-0 nova_compute[248510]: 2025-12-13 08:28:04.243 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:04 compute-0 nova_compute[248510]: 2025-12-13 08:28:04.251 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:28:04 compute-0 nova_compute[248510]: 2025-12-13 08:28:04.251 248514 INFO nova.compute.claims [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:28:04 compute-0 ceph-mon[76537]: pgmap v1911: 321 pgs: 321 active+clean; 488 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.3 MiB/s wr, 207 op/s
Dec 13 08:28:04 compute-0 nova_compute[248510]: 2025-12-13 08:28:04.916 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:04 compute-0 nova_compute[248510]: 2025-12-13 08:28:04.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.089 248514 DEBUG nova.network.neutron [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Updating instance_info_cache with network_info: [{"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.111 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Releasing lock "refresh_cache-678d2db2-0536-4744-b65c-f0a5852f35e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.112 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance network_info: |[{"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.114 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Start _get_guest_xml network_info=[{"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.119 248514 WARNING nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.123 248514 DEBUG nova.virt.libvirt.host [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.124 248514 DEBUG nova.virt.libvirt.host [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.126 248514 DEBUG nova.virt.libvirt.host [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.126 248514 DEBUG nova.virt.libvirt.host [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.127 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.127 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.128 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.128 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.128 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.129 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.129 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.129 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.130 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.130 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.130 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.130 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.135 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.360 248514 DEBUG nova.compute.manager [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received event network-changed-3a8efc9e-7582-4b17-ab0e-b248e09932b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.361 248514 DEBUG nova.compute.manager [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Refreshing instance network info cache due to event network-changed-3a8efc9e-7582-4b17-ab0e-b248e09932b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.361 248514 DEBUG oslo_concurrency.lockutils [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-678d2db2-0536-4744-b65c-f0a5852f35e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.362 248514 DEBUG oslo_concurrency.lockutils [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-678d2db2-0536-4744-b65c-f0a5852f35e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.362 248514 DEBUG nova.network.neutron [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Refreshing network info cache for port 3a8efc9e-7582-4b17-ab0e-b248e09932b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:28:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 497 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Dec 13 08:28:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:05 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802705485' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:05 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3802705485' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.554 248514 DEBUG nova.network.neutron [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Updating instance_info_cache with network_info: [{"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.560 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.567 248514 DEBUG nova.compute.provider_tree [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.582 248514 DEBUG oslo_concurrency.lockutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Releasing lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.599 248514 DEBUG nova.scheduler.client.report [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.617 248514 INFO nova.virt.libvirt.driver [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance destroyed successfully.
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.618 248514 DEBUG nova.objects.instance [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'numa_topology' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:28:05 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834718991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.734 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.756 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.761 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.850 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.852 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.856 248514 DEBUG nova.objects.instance [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'resources' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.892 248514 DEBUG nova.virt.libvirt.vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-612802196',display_name='tempest-ListServerFiltersTestJSON-instance-612802196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-612802196',id=47,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-wpyl99bg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:58Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=98240df6-1cba-40e1-833c-24611270ed83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.894 248514 DEBUG nova.network.os_vif_util [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.895 248514 DEBUG nova.network.os_vif_util [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.896 248514 DEBUG os_vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.900 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdc94f2e-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.902 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.904 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.907 248514 INFO os_vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1')
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.916 248514 DEBUG nova.virt.libvirt.driver [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Start _get_guest_xml network_info=[{"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.920 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.921 248514 DEBUG nova.network.neutron [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.928 248514 WARNING nova.virt.libvirt.driver [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.935 248514 DEBUG nova.virt.libvirt.host [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.936 248514 DEBUG nova.virt.libvirt.host [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.940 248514 DEBUG nova.virt.libvirt.host [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.940 248514 DEBUG nova.virt.libvirt.host [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.941 248514 DEBUG nova.virt.libvirt.driver [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.941 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.942 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.942 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.942 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.943 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.943 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.943 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.944 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.944 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.944 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.945 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.945 248514 DEBUG nova.objects.instance [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.956 248514 INFO nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:28:05 compute-0 nova_compute[248510]: 2025-12-13 08:28:05.977 248514 DEBUG oslo_concurrency.processutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.013 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.135 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.137 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.138 248514 INFO nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Creating image(s)
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.160 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.188 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.213 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.217 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "4ca0214a272891cae00922c0f452dbc91c01667e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.218 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "4ca0214a272891cae00922c0f452dbc91c01667e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.226 248514 DEBUG nova.policy [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b988c7ac9354c59aac9a9f41f83c20f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52e1055963294dbdb16cd95b466cd4d9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:28:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:28:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/395874849' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.376 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.377 248514 DEBUG nova.virt.libvirt.vif [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1859844308',display_name='tempest-DeleteServersTestJSON-server-1859844308',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1859844308',id=52,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-vi6y6qcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:57Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=678d2db2-0536-4744-b65c-f0a5852f35e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.378 248514 DEBUG nova.network.os_vif_util [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.379 248514 DEBUG nova.network.os_vif_util [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.380 248514 DEBUG nova.objects.instance [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 678d2db2-0536-4744-b65c-f0a5852f35e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.400 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:28:06 compute-0 nova_compute[248510]:   <uuid>678d2db2-0536-4744-b65c-f0a5852f35e0</uuid>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   <name>instance-00000034</name>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <nova:name>tempest-DeleteServersTestJSON-server-1859844308</nova:name>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:28:05</nova:creationTime>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:28:06 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:28:06 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:28:06 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:28:06 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:28:06 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:28:06 compute-0 nova_compute[248510]:         <nova:user uuid="b6f7967ce16c45d394188c1302b02907">tempest-DeleteServersTestJSON-991966373-project-member</nova:user>
Dec 13 08:28:06 compute-0 nova_compute[248510]:         <nova:project uuid="6f6beadbb0244529b8dfc1abff8e8e10">tempest-DeleteServersTestJSON-991966373</nova:project>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:28:06 compute-0 nova_compute[248510]:         <nova:port uuid="3a8efc9e-7582-4b17-ab0e-b248e09932b3">
Dec 13 08:28:06 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <system>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <entry name="serial">678d2db2-0536-4744-b65c-f0a5852f35e0</entry>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <entry name="uuid">678d2db2-0536-4744-b65c-f0a5852f35e0</entry>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     </system>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   <os>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   </os>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   <features>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   </features>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/678d2db2-0536-4744-b65c-f0a5852f35e0_disk">
Dec 13 08:28:06 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       </source>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:28:06 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/678d2db2-0536-4744-b65c-f0a5852f35e0_disk.config">
Dec 13 08:28:06 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       </source>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:28:06 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:cc:30:15"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <target dev="tap3a8efc9e-75"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/console.log" append="off"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <video>
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     </video>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:28:06 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:28:06 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:28:06 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:28:06 compute-0 nova_compute[248510]: </domain>
Dec 13 08:28:06 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.401 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Preparing to wait for external event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.401 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.402 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.402 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.403 248514 DEBUG nova.virt.libvirt.vif [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1859844308',display_name='tempest-DeleteServersTestJSON-server-1859844308',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1859844308',id=52,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-vi6y6qcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServers
TestJSON-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:57Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=678d2db2-0536-4744-b65c-f0a5852f35e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.403 248514 DEBUG nova.network.os_vif_util [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.404 248514 DEBUG nova.network.os_vif_util [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.404 248514 DEBUG os_vif [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.405 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.405 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.408 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.408 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a8efc9e-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.409 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3a8efc9e-75, col_values=(('external_ids', {'iface-id': '3a8efc9e-7582-4b17-ab0e-b248e09932b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:30:15', 'vm-uuid': '678d2db2-0536-4744-b65c-f0a5852f35e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:06 compute-0 NetworkManager[50376]: <info>  [1765614486.4116] manager: (tap3a8efc9e-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.418 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.419 248514 INFO os_vif [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75')
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.507 248514 DEBUG nova.virt.libvirt.imagebackend [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/e4e9ee37-4059-46f1-9bf8-83a72d0403e7/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/e4e9ee37-4059-46f1-9bf8-83a72d0403e7/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 13 08:28:06 compute-0 ceph-mon[76537]: pgmap v1912: 321 pgs: 321 active+clean; 497 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Dec 13 08:28:06 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1834718991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:06 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/395874849' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:28:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3357257966' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.564 248514 DEBUG nova.virt.libvirt.imagebackend [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Selected location: {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/e4e9ee37-4059-46f1-9bf8-83a72d0403e7/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.565 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] cloning images/e4e9ee37-4059-46f1-9bf8-83a72d0403e7@snap to None/5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.593 248514 DEBUG oslo_concurrency.processutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.642 248514 DEBUG oslo_concurrency.processutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.679 248514 INFO nova.compute.manager [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Rebuilding instance
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.691 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "4ca0214a272891cae00922c0f452dbc91c01667e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.728 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.729 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.729 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No VIF found with MAC fa:16:3e:cc:30:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.730 248514 INFO nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Using config drive
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.749 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.846 248514 DEBUG nova.objects.instance [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d8c2900-0048-4631-bbc6-0122bce8f4f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.877 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.878 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Ensure instance console log exists: /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.878 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.879 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:06 compute-0 nova_compute[248510]: 2025-12-13 08:28:06.879 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:28:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644608389' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:07 compute-0 nova_compute[248510]: 2025-12-13 08:28:07.227 248514 DEBUG oslo_concurrency.processutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:07 compute-0 nova_compute[248510]: 2025-12-13 08:28:07.229 248514 DEBUG nova.virt.libvirt.vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-612802196',display_name='tempest-ListServerFiltersTestJSON-instance-612802196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-612802196',id=47,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-wpyl99bg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:58Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=98240df6-1cba-40e1-833c-24611270ed83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:28:07 compute-0 nova_compute[248510]: 2025-12-13 08:28:07.229 248514 DEBUG nova.network.os_vif_util [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:07 compute-0 nova_compute[248510]: 2025-12-13 08:28:07.230 248514 DEBUG nova.network.os_vif_util [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:07 compute-0 nova_compute[248510]: 2025-12-13 08:28:07.231 248514 DEBUG nova.objects.instance [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 497 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Dec 13 08:28:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3357257966' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1644608389' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.085 248514 DEBUG nova.virt.libvirt.driver [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:28:08 compute-0 nova_compute[248510]:   <uuid>98240df6-1cba-40e1-833c-24611270ed83</uuid>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   <name>instance-0000002f</name>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-612802196</nova:name>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:28:05</nova:creationTime>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:28:08 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:28:08 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:28:08 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:28:08 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:28:08 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:28:08 compute-0 nova_compute[248510]:         <nova:user uuid="65a6b617130a42ac9c3d9b4abf6a1cfb">tempest-ListServerFiltersTestJSON-1229542462-project-member</nova:user>
Dec 13 08:28:08 compute-0 nova_compute[248510]:         <nova:project uuid="3490ad817e664ff6b12c4ea88192b667">tempest-ListServerFiltersTestJSON-1229542462</nova:project>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:28:08 compute-0 nova_compute[248510]:         <nova:port uuid="bdc94f2e-b14e-4e39-bea0-978ff56ff722">
Dec 13 08:28:08 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <system>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <entry name="serial">98240df6-1cba-40e1-833c-24611270ed83</entry>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <entry name="uuid">98240df6-1cba-40e1-833c-24611270ed83</entry>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     </system>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   <os>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   </os>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   <features>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   </features>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/98240df6-1cba-40e1-833c-24611270ed83_disk">
Dec 13 08:28:08 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       </source>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:28:08 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/98240df6-1cba-40e1-833c-24611270ed83_disk.config">
Dec 13 08:28:08 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       </source>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:28:08 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:d6:ce:b2"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <target dev="tapbdc94f2e-b1"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83/console.log" append="off"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <video>
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     </video>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <input type="keyboard" bus="usb"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:28:08 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:28:08 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:28:08 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:28:08 compute-0 nova_compute[248510]: </domain>
Dec 13 08:28:08 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.086 248514 DEBUG nova.virt.libvirt.driver [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.087 248514 DEBUG nova.virt.libvirt.driver [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.088 248514 DEBUG nova.virt.libvirt.vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-612802196',display_name='tempest-ListServerFiltersTestJSON-instance-612802196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-612802196',id=47,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-wpyl99bg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:58Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=98240df6-1cba-40e1-833c-24611270ed83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.088 248514 DEBUG nova.network.os_vif_util [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.089 248514 DEBUG nova.network.os_vif_util [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.091 248514 DEBUG os_vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.093 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.093 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.093 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.096 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.096 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbdc94f2e-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.097 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbdc94f2e-b1, col_values=(('external_ids', {'iface-id': 'bdc94f2e-b14e-4e39-bea0-978ff56ff722', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:ce:b2', 'vm-uuid': '98240df6-1cba-40e1-833c-24611270ed83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.098 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:08 compute-0 NetworkManager[50376]: <info>  [1765614488.1000] manager: (tapbdc94f2e-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.101 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.115 248514 INFO os_vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1')
Dec 13 08:28:08 compute-0 kernel: tapbdc94f2e-b1: entered promiscuous mode
Dec 13 08:28:08 compute-0 NetworkManager[50376]: <info>  [1765614488.1917] manager: (tapbdc94f2e-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/209)
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:08 compute-0 ovn_controller[148476]: 2025-12-13T08:28:08Z|00447|binding|INFO|Claiming lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 for this chassis.
Dec 13 08:28:08 compute-0 ovn_controller[148476]: 2025-12-13T08:28:08Z|00448|binding|INFO|bdc94f2e-b14e-4e39-bea0-978ff56ff722: Claiming fa:16:3e:d6:ce:b2 10.100.0.10
Dec 13 08:28:08 compute-0 ovn_controller[148476]: 2025-12-13T08:28:08Z|00449|binding|INFO|Setting lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 ovn-installed in OVS
Dec 13 08:28:08 compute-0 systemd-udevd[301033]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:28:08 compute-0 NetworkManager[50376]: <info>  [1765614488.2488] device (tapbdc94f2e-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:28:08 compute-0 NetworkManager[50376]: <info>  [1765614488.2498] device (tapbdc94f2e-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:08 compute-0 systemd-machined[210538]: New machine qemu-58-instance-0000002f.
Dec 13 08:28:08 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-0000002f.
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.463 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:08 compute-0 ovn_controller[148476]: 2025-12-13T08:28:08Z|00450|binding|INFO|Setting lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 up in Southbound
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.507 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:b2 10.100.0.10'], port_security=['fa:16:3e:d6:ce:b2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '98240df6-1cba-40e1-833c-24611270ed83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bdc94f2e-b14e-4e39-bea0-978ff56ff722) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.509 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bdc94f2e-b14e-4e39-bea0-978ff56ff722 in datapath 7576f079-0439-46aa-98af-04f80cd254ca bound to our chassis
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.511 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.525 248514 DEBUG nova.compute.manager [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.531 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0dff9437-2922-4610-92b3-392d77a00554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:08 compute-0 ceph-mon[76537]: pgmap v1913: 321 pgs: 321 active+clean; 497 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.568 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a58b8b24-ae32-4f8c-b56e-1924aa3daa92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.575 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[52b7fd3c-e2a1-4d7d-9423-2b1f8ff69b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.606 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8f089d-3ea5-4f57-87ad-2e433dcb4723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.636 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'pci_requests' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.637 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78f3cc5c-da5a-481e-aa84-2f68f454f41d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301050, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.656 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.662 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b901c1a5-e56e-4dfc-abc7-65d600490af6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301058, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301058, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.664 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.672 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.673 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.673 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'resources' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.677 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.678 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.690 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'migration_context' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.708 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.713 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.781 248514 DEBUG nova.network.neutron [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Successfully created port: d41fdf9b-1d4a-475f-b516-69fa17b19cfb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.807 248514 INFO nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Creating config drive at /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/disk.config
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.819 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp7ans4mg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.864 248514 DEBUG nova.compute.manager [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.867 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 98240df6-1cba-40e1-833c-24611270ed83 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.867 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614488.838627, 98240df6-1cba-40e1-833c-24611270ed83 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.867 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] VM Resumed (Lifecycle Event)
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.874 248514 INFO nova.virt.libvirt.driver [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance rebooted successfully.
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.875 248514 DEBUG nova.compute.manager [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.903 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.908 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.935 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] During sync_power_state the instance has a pending task (powering-on). Skip.
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.936 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614488.8391178, 98240df6-1cba-40e1-833c-24611270ed83 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.936 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] VM Started (Lifecycle Event)
Dec 13 08:28:08 compute-0 nova_compute[248510]: 2025-12-13 08:28:08.973 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp7ans4mg" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.004 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.009 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/disk.config 678d2db2-0536-4744-b65c-f0a5852f35e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.052 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.058 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.162 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/disk.config 678d2db2-0536-4744-b65c-f0a5852f35e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.163 248514 INFO nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Deleting local config drive /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/disk.config because it was imported into RBD.
Dec 13 08:28:09 compute-0 kernel: tap3a8efc9e-75: entered promiscuous mode
Dec 13 08:28:09 compute-0 NetworkManager[50376]: <info>  [1765614489.2128] manager: (tap3a8efc9e-75): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Dec 13 08:28:09 compute-0 systemd-udevd[301035]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.215 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:09 compute-0 ovn_controller[148476]: 2025-12-13T08:28:09Z|00451|binding|INFO|Claiming lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 for this chassis.
Dec 13 08:28:09 compute-0 ovn_controller[148476]: 2025-12-13T08:28:09Z|00452|binding|INFO|3a8efc9e-7582-4b17-ab0e-b248e09932b3: Claiming fa:16:3e:cc:30:15 10.100.0.4
Dec 13 08:28:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:28:09
Dec 13 08:28:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:28:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:28:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'vms', '.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control']
Dec 13 08:28:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:28:09 compute-0 NetworkManager[50376]: <info>  [1765614489.2294] device (tap3a8efc9e-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:28:09 compute-0 NetworkManager[50376]: <info>  [1765614489.2306] device (tap3a8efc9e-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:28:09 compute-0 systemd-machined[210538]: New machine qemu-59-instance-00000034.
Dec 13 08:28:09 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000034.
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.314 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:09 compute-0 ovn_controller[148476]: 2025-12-13T08:28:09Z|00453|binding|INFO|Setting lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 ovn-installed in OVS
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.317 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 508 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 4.0 MiB/s wr, 129 op/s
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.559 248514 DEBUG nova.network.neutron [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Updated VIF entry in instance network info cache for port 3a8efc9e-7582-4b17-ab0e-b248e09932b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.560 248514 DEBUG nova.network.neutron [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Updating instance_info_cache with network_info: [{"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:09 compute-0 ovn_controller[148476]: 2025-12-13T08:28:09Z|00454|binding|INFO|Setting lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 up in Southbound
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.771 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:30:15 10.100.0.4'], port_security=['fa:16:3e:cc:30:15 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '678d2db2-0536-4744-b65c-f0a5852f35e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3a8efc9e-7582-4b17-ab0e-b248e09932b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.773 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3a8efc9e-7582-4b17-ab0e-b248e09932b3 in datapath 85372fca-ab50-48b6-8c21-507f630c205a bound to our chassis
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.776 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.792 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e5680925-6500-46be-9d99-18b1b0a7deb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.794 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85372fca-a1 in ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.796 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85372fca-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.796 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7251f9-93bf-42b2-94e7-cb8e35540e90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.797 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[276732c3-0c06-4924-b9c0-1bd35c54d57a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.821 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4120ed-3e73-4f72-8650-a6b8d06f5826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.840 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5b84d69d-28d0-4d1e-9492-17a72d5e2569]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.848 248514 DEBUG oslo_concurrency.lockutils [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-678d2db2-0536-4744-b65c-f0a5852f35e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.873 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[614eeaf9-d45c-4480-924c-f045bfa99521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:09 compute-0 NetworkManager[50376]: <info>  [1765614489.8823] manager: (tap85372fca-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/211)
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.884 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[50178412-fbc4-4961-881c-66ad8407ac7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.924 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f8121d22-2e78-4c92-9985-7e9f3e2277b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.927 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8a5b2c-ccb2-4566-8a3c-5afe93934c0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:09 compute-0 NetworkManager[50376]: <info>  [1765614489.9585] device (tap85372fca-a0): carrier: link connected
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.968 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a4357418-6515-42aa-ad86-84adb7fb137a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:09 compute-0 nova_compute[248510]: 2025-12-13 08:28:09.977 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.989 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a5050b-e1c9-46c5-956a-21b1c998d293]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 135], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706717, 'reachable_time': 18702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301181, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.006 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9819844e-1d3c-4a5a-991e-23519f87e4bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:30d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706717, 'tstamp': 706717}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301182, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.029 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5441a12f-7155-4a62-998e-7824fee2bbdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 135], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706717, 'reachable_time': 18702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301183, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.040 248514 DEBUG nova.compute.manager [req-276e8d9e-e043-4e16-8a00-fb0455d73e85 req-6e5b1e01-5a36-4c6e-a799-c85924fc9d07 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.041 248514 DEBUG oslo_concurrency.lockutils [req-276e8d9e-e043-4e16-8a00-fb0455d73e85 req-6e5b1e01-5a36-4c6e-a799-c85924fc9d07 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.041 248514 DEBUG oslo_concurrency.lockutils [req-276e8d9e-e043-4e16-8a00-fb0455d73e85 req-6e5b1e01-5a36-4c6e-a799-c85924fc9d07 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.042 248514 DEBUG oslo_concurrency.lockutils [req-276e8d9e-e043-4e16-8a00-fb0455d73e85 req-6e5b1e01-5a36-4c6e-a799-c85924fc9d07 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.042 248514 DEBUG nova.compute.manager [req-276e8d9e-e043-4e16-8a00-fb0455d73e85 req-6e5b1e01-5a36-4c6e-a799-c85924fc9d07 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.042 248514 WARNING nova.compute.manager [req-276e8d9e-e043-4e16-8a00-fb0455d73e85 req-6e5b1e01-5a36-4c6e-a799-c85924fc9d07 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received unexpected event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with vm_state active and task_state None.
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.077 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba57c5c-3665-4eab-a34f-e3b1600415fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:28:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.146 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71b85e7e-652b-4262-acce-742a85094526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.148 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.148 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.149 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85372fca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:10 compute-0 kernel: tap85372fca-a0: entered promiscuous mode
Dec 13 08:28:10 compute-0 NetworkManager[50376]: <info>  [1765614490.1517] manager: (tap85372fca-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.154 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85372fca-a0, col_values=(('external_ids', {'iface-id': '2c0f4981-0ad0-478e-b1ad-551d231022ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:10 compute-0 ovn_controller[148476]: 2025-12-13T08:28:10Z|00455|binding|INFO|Releasing lport 2c0f4981-0ad0-478e-b1ad-551d231022ad from this chassis (sb_readonly=0)
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.172 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.174 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.175 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf852c1-3665-4188-90f2-1eb69cb2e8bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.176 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:28:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.176 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'env', 'PROCESS_TAG=haproxy-85372fca-ab50-48b6-8c21-507f630c205a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85372fca-ab50-48b6-8c21-507f630c205a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.256 248514 DEBUG nova.compute.manager [req-00dba9c2-6750-4d07-9a8c-bc93fcc97a1e req-7cf32f95-f441-4109-a946-9221880ec1d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.256 248514 DEBUG oslo_concurrency.lockutils [req-00dba9c2-6750-4d07-9a8c-bc93fcc97a1e req-7cf32f95-f441-4109-a946-9221880ec1d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.257 248514 DEBUG oslo_concurrency.lockutils [req-00dba9c2-6750-4d07-9a8c-bc93fcc97a1e req-7cf32f95-f441-4109-a946-9221880ec1d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.257 248514 DEBUG oslo_concurrency.lockutils [req-00dba9c2-6750-4d07-9a8c-bc93fcc97a1e req-7cf32f95-f441-4109-a946-9221880ec1d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.257 248514 DEBUG nova.compute.manager [req-00dba9c2-6750-4d07-9a8c-bc93fcc97a1e req-7cf32f95-f441-4109-a946-9221880ec1d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Processing event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.320 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.321 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614490.3193424, 678d2db2-0536-4744-b65c-f0a5852f35e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.322 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] VM Started (Lifecycle Event)
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.325 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.330 248514 INFO nova.virt.libvirt.driver [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance spawned successfully.
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.330 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.356 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.361 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.365 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.366 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.366 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.367 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.367 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.368 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.398 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.399 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614490.3196661, 678d2db2-0536-4744-b65c-f0a5852f35e0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.399 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] VM Paused (Lifecycle Event)
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.430 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.435 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614490.3256648, 678d2db2-0536-4744-b65c-f0a5852f35e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.436 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] VM Resumed (Lifecycle Event)
Dec 13 08:28:10 compute-0 ceph-mon[76537]: pgmap v1914: 321 pgs: 321 active+clean; 508 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 4.0 MiB/s wr, 129 op/s
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:28:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:28:10 compute-0 podman[301256]: 2025-12-13 08:28:10.638418379 +0000 UTC m=+0.068219414 container create b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:28:10 compute-0 systemd[1]: Started libpod-conmon-b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a.scope.
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.697 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:10 compute-0 podman[301256]: 2025-12-13 08:28:10.607342533 +0000 UTC m=+0.037143598 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.703 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.716 248514 INFO nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Took 12.95 seconds to spawn the instance on the hypervisor.
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.717 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f9f9ca3b39265b46e15e383b3b89331e9eff00a18a8f9d53df9e42b58de486/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:10 compute-0 podman[301256]: 2025-12-13 08:28:10.744965487 +0000 UTC m=+0.174766552 container init b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 08:28:10 compute-0 podman[301256]: 2025-12-13 08:28:10.751635622 +0000 UTC m=+0.181436657 container start b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.764 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:28:10 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [NOTICE]   (301274) : New worker (301276) forked
Dec 13 08:28:10 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [NOTICE]   (301274) : Loading success.
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.797 248514 DEBUG nova.network.neutron [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Successfully updated port: d41fdf9b-1d4a-475f-b516-69fa17b19cfb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.816 248514 INFO nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Took 16.41 seconds to build instance.
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.819 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "refresh_cache-5d8c2900-0048-4631-bbc6-0122bce8f4f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.819 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquired lock "refresh_cache-5d8c2900-0048-4631-bbc6-0122bce8f4f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.819 248514 DEBUG nova.network.neutron [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:28:10 compute-0 nova_compute[248510]: 2025-12-13 08:28:10.844 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:11 compute-0 nova_compute[248510]: 2025-12-13 08:28:11.063 248514 DEBUG nova.network.neutron [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:28:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 508 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 488 KiB/s rd, 3.6 MiB/s wr, 115 op/s
Dec 13 08:28:12 compute-0 kernel: tapd8c7cad7-f6 (unregistering): left promiscuous mode
Dec 13 08:28:12 compute-0 NetworkManager[50376]: <info>  [1765614492.1577] device (tapd8c7cad7-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.172 248514 DEBUG oslo_concurrency.lockutils [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.175 248514 DEBUG oslo_concurrency.lockutils [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.182 248514 DEBUG nova.compute.manager [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.195 248514 DEBUG nova.compute.manager [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.198 248514 DEBUG nova.objects.instance [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'flavor' on Instance uuid 678d2db2-0536-4744-b65c-f0a5852f35e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:12 compute-0 ovn_controller[148476]: 2025-12-13T08:28:12Z|00456|binding|INFO|Releasing lport d8c7cad7-f601-4205-8838-a583b6e04b0f from this chassis (sb_readonly=0)
Dec 13 08:28:12 compute-0 ovn_controller[148476]: 2025-12-13T08:28:12Z|00457|binding|INFO|Setting lport d8c7cad7-f601-4205-8838-a583b6e04b0f down in Southbound
Dec 13 08:28:12 compute-0 ovn_controller[148476]: 2025-12-13T08:28:12Z|00458|binding|INFO|Removing iface tapd8c7cad7-f6 ovn-installed in OVS
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.202 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.205 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:57:38 10.100.0.8'], port_security=['fa:16:3e:45:57:38 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0ccf9d68-ffc2-4f17-a2e7-effa18fc607c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d8c7cad7-f601-4205-8838-a583b6e04b0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.207 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d8c7cad7-f601-4205-8838-a583b6e04b0f in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 unbound from our chassis
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.210 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c63049d-63e9-47af-99e2-ce1403a42891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.212 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c15c18-33f3-48e6-ae3a-39f4230e0ffe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.213 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace which is not needed anymore
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.221 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:12 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Deactivated successfully.
Dec 13 08:28:12 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Consumed 13.112s CPU time.
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.248 248514 DEBUG nova.virt.libvirt.driver [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:28:12 compute-0 systemd-machined[210538]: Machine qemu-57-instance-00000033 terminated.
Dec 13 08:28:12 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [NOTICE]   (300444) : haproxy version is 2.8.14-c23fe91
Dec 13 08:28:12 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [NOTICE]   (300444) : path to executable is /usr/sbin/haproxy
Dec 13 08:28:12 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [WARNING]  (300444) : Exiting Master process...
Dec 13 08:28:12 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [WARNING]  (300444) : Exiting Master process...
Dec 13 08:28:12 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [ALERT]    (300444) : Current worker (300446) exited with code 143 (Terminated)
Dec 13 08:28:12 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [WARNING]  (300444) : All workers exited. Exiting... (0)
Dec 13 08:28:12 compute-0 systemd[1]: libpod-7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759.scope: Deactivated successfully.
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:12 compute-0 podman[301308]: 2025-12-13 08:28:12.399928102 +0000 UTC m=+0.068498212 container died 7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759-userdata-shm.mount: Deactivated successfully.
Dec 13 08:28:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-74a1045c2667f2928359016f34c22c18152582122077ea101da9c4a143c306b5-merged.mount: Deactivated successfully.
Dec 13 08:28:12 compute-0 podman[301308]: 2025-12-13 08:28:12.462141387 +0000 UTC m=+0.130711487 container cleanup 7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 08:28:12 compute-0 systemd[1]: libpod-conmon-7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759.scope: Deactivated successfully.
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.499 248514 DEBUG nova.network.neutron [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Updating instance_info_cache with network_info: [{"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:12 compute-0 podman[301343]: 2025-12-13 08:28:12.544409666 +0000 UTC m=+0.057225543 container remove 7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.551 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c07f4656-19c5-4cdb-9e29-97c16234af5b]: (4, ('Sat Dec 13 08:28:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759)\n7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759\nSat Dec 13 08:28:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759)\n7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.553 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[47d508ba-891d-4e99-a9b2-7406130e1dbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.554 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:12 compute-0 kernel: tap6c63049d-60: left promiscuous mode
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.561 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.582 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:12 compute-0 ceph-mon[76537]: pgmap v1915: 321 pgs: 321 active+clean; 508 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 488 KiB/s rd, 3.6 MiB/s wr, 115 op/s
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.585 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.588 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[95541b3e-7cc0-48e6-a3c1-4177d9bb4ca2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[758f70b5-f3a1-4217-99ee-5c7e2bfa0d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.604 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fff0d840-4b7a-4282-bbef-cd9ed9fc21ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.625 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[41c2d835-fac9-4c3d-bc32-631f7239bb41]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705089, 'reachable_time': 42510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301362, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c63049d\x2d63e9\x2d47af\x2d99e2\x2dce1403a42891.mount: Deactivated successfully.
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.629 248514 DEBUG nova.compute.manager [req-b491d2f3-e389-42c5-8a77-a45d1c702eab req-ef0d79a5-7b41-4956-b009-0f9dc0df9a8f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.629 248514 DEBUG oslo_concurrency.lockutils [req-b491d2f3-e389-42c5-8a77-a45d1c702eab req-ef0d79a5-7b41-4956-b009-0f9dc0df9a8f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.629 248514 DEBUG oslo_concurrency.lockutils [req-b491d2f3-e389-42c5-8a77-a45d1c702eab req-ef0d79a5-7b41-4956-b009-0f9dc0df9a8f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.629 248514 DEBUG oslo_concurrency.lockutils [req-b491d2f3-e389-42c5-8a77-a45d1c702eab req-ef0d79a5-7b41-4956-b009-0f9dc0df9a8f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.630 248514 DEBUG nova.compute.manager [req-b491d2f3-e389-42c5-8a77-a45d1c702eab req-ef0d79a5-7b41-4956-b009-0f9dc0df9a8f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.630 248514 WARNING nova.compute.manager [req-b491d2f3-e389-42c5-8a77-a45d1c702eab req-ef0d79a5-7b41-4956-b009-0f9dc0df9a8f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received unexpected event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with vm_state active and task_state None.
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.630 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:28:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.630 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[11f24a4f-7f37-48ab-8c34-b3958a7c24ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.645 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Releasing lock "refresh_cache-5d8c2900-0048-4631-bbc6-0122bce8f4f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.646 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Instance network_info: |[{"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.650 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Start _get_guest_xml network_info=[{"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:27:49Z,direct_url=<?>,disk_format='raw',id=e4e9ee37-4059-46f1-9bf8-83a72d0403e7,min_disk=1,min_ram=0,name='tempest-test-snap-671859850',owner='52e1055963294dbdb16cd95b466cd4d9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:27:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': 'e4e9ee37-4059-46f1-9bf8-83a72d0403e7'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.658 248514 WARNING nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.670 248514 DEBUG nova.virt.libvirt.host [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.672 248514 DEBUG nova.virt.libvirt.host [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.677 248514 DEBUG nova.virt.libvirt.host [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.680 248514 DEBUG nova.virt.libvirt.host [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.681 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.681 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:27:49Z,direct_url=<?>,disk_format='raw',id=e4e9ee37-4059-46f1-9bf8-83a72d0403e7,min_disk=1,min_ram=0,name='tempest-test-snap-671859850',owner='52e1055963294dbdb16cd95b466cd4d9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:27:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.682 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.682 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.682 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.682 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.682 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.683 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.684 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.686 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.686 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.687 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.693 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.752 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance shutdown successfully after 4 seconds.
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.764 248514 INFO nova.virt.libvirt.driver [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance destroyed successfully.
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.770 248514 INFO nova.virt.libvirt.driver [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance destroyed successfully.
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.771 248514 DEBUG nova.virt.libvirt.vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1729116913',display_name='tempest-ServerDiskConfigTestJSON-server-1729116913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1729116913',id=51,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-3hps9qrz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:06Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=0ccf9d68-ffc2-4f17-a2e7-effa18fc607c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.772 248514 DEBUG nova.network.os_vif_util [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.773 248514 DEBUG nova.network.os_vif_util [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.774 248514 DEBUG os_vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.776 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8c7cad7-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.819 248514 DEBUG nova.compute.manager [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.820 248514 DEBUG oslo_concurrency.lockutils [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.820 248514 DEBUG oslo_concurrency.lockutils [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.821 248514 DEBUG oslo_concurrency.lockutils [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.821 248514 DEBUG nova.compute.manager [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] No waiting events found dispatching network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.821 248514 WARNING nova.compute.manager [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received unexpected event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 for instance with vm_state active and task_state powering-off.
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.821 248514 DEBUG nova.compute.manager [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-changed-d41fdf9b-1d4a-475f-b516-69fa17b19cfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.822 248514 DEBUG nova.compute.manager [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Refreshing instance network info cache due to event network-changed-d41fdf9b-1d4a-475f-b516-69fa17b19cfb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.822 248514 DEBUG oslo_concurrency.lockutils [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5d8c2900-0048-4631-bbc6-0122bce8f4f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.822 248514 DEBUG oslo_concurrency.lockutils [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5d8c2900-0048-4631-bbc6-0122bce8f4f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.822 248514 DEBUG nova.network.neutron [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Refreshing network info cache for port d41fdf9b-1d4a-475f-b516-69fa17b19cfb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.825 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.828 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.833 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:12 compute-0 nova_compute[248510]: 2025-12-13 08:28:12.836 248514 INFO os_vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6')
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.127 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deleting instance files /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_del
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.128 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deletion of /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_del complete
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.312 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.312 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Creating image(s)
Dec 13 08:28:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:28:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3218796268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.340 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.375 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.403 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.410 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 526 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 207 op/s
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.450 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.488 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.492 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.539 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.540 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.541 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.541 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.566 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.571 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3218796268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.918 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:13 compute-0 nova_compute[248510]: 2025-12-13 08:28:13.996 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] resizing rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:28:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:28:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297712692' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.103 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.104 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Ensure instance console log exists: /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.105 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.105 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.105 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.108 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Start _get_guest_xml network_info=[{"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.109 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.110 248514 DEBUG nova.virt.libvirt.vif [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-622802661',display_name='tempest-ImagesTestJSON-server-622802661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-622802661',id=53,image_ref='e4e9ee37-4059-46f1-9bf8-83a72d0403e7',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-0ufcm8q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='df25cd40-72b5-4e0f-90ec-8677c699d1d3',image_min_disk='1',image_min_ram='0',image_owner_id='52e1055963294dbdb16cd95b466cd4d9',image_owner_project_name='tempest-ImagesTestJSON-1234382421',image_owner_user_name='tempest-ImagesTestJSON-1234382421-project-member',image_user_id='3b988c7ac9354c59aac9a9f41f83c20f',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:06Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=5d8c2900-0048-4631-bbc6-0122bce8f4f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.111 248514 DEBUG nova.network.os_vif_util [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.112 248514 DEBUG nova.network.os_vif_util [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.113 248514 DEBUG nova.objects.instance [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d8c2900-0048-4631-bbc6-0122bce8f4f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.119 248514 WARNING nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.125 248514 DEBUG nova.virt.libvirt.host [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.131 248514 DEBUG nova.virt.libvirt.host [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.136 248514 DEBUG nova.virt.libvirt.host [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.137 248514 DEBUG nova.virt.libvirt.host [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.138 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.138 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.138 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.138 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.139 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.139 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.139 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.139 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.139 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.139 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.140 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.140 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.140 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.144 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:28:14 compute-0 nova_compute[248510]:   <uuid>5d8c2900-0048-4631-bbc6-0122bce8f4f3</uuid>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   <name>instance-00000035</name>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <nova:name>tempest-ImagesTestJSON-server-622802661</nova:name>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:28:12</nova:creationTime>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:28:14 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:28:14 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:28:14 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:28:14 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:28:14 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:28:14 compute-0 nova_compute[248510]:         <nova:user uuid="3b988c7ac9354c59aac9a9f41f83c20f">tempest-ImagesTestJSON-1234382421-project-member</nova:user>
Dec 13 08:28:14 compute-0 nova_compute[248510]:         <nova:project uuid="52e1055963294dbdb16cd95b466cd4d9">tempest-ImagesTestJSON-1234382421</nova:project>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="e4e9ee37-4059-46f1-9bf8-83a72d0403e7"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:28:14 compute-0 nova_compute[248510]:         <nova:port uuid="d41fdf9b-1d4a-475f-b516-69fa17b19cfb">
Dec 13 08:28:14 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <system>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <entry name="serial">5d8c2900-0048-4631-bbc6-0122bce8f4f3</entry>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <entry name="uuid">5d8c2900-0048-4631-bbc6-0122bce8f4f3</entry>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     </system>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   <os>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   </os>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   <features>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   </features>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk">
Dec 13 08:28:14 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       </source>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:28:14 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk.config">
Dec 13 08:28:14 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       </source>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:28:14 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:8e:5e:3e"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <target dev="tapd41fdf9b-1d"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/console.log" append="off"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <video>
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     </video>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <input type="keyboard" bus="usb"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:28:14 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:28:14 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:28:14 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:28:14 compute-0 nova_compute[248510]: </domain>
Dec 13 08:28:14 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.145 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Preparing to wait for external event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.145 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.146 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.146 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.147 248514 DEBUG nova.virt.libvirt.vif [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-622802661',display_name='tempest-ImagesTestJSON-server-622802661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-622802661',id=53,image_ref='e4e9ee37-4059-46f1-9bf8-83a72d0403e7',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-0ufcm8q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='df25cd40-72b5-4e0f-90ec-8677c699d1d3',image_min_disk='1',image_min_ram='0',image_owner_id='52e1055963294dbdb16cd95b466cd4d9',image_owner_project_name='tempest-ImagesTestJSON-1234382421',image_owner_user_name='tempest-ImagesTestJSON-1234382421-project-member',image_user_id='3b988c7ac9354c59aac9a9f41f83c20f',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:06Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=5d8c2900-0048-4631-bbc6-0122bce8f4f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.147 248514 DEBUG nova.network.os_vif_util [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.148 248514 DEBUG nova.network.os_vif_util [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.148 248514 DEBUG os_vif [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.149 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.150 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.150 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.153 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd41fdf9b-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.154 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd41fdf9b-1d, col_values=(('external_ids', {'iface-id': 'd41fdf9b-1d4a-475f-b516-69fa17b19cfb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:5e:3e', 'vm-uuid': '5d8c2900-0048-4631-bbc6-0122bce8f4f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.155 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:14 compute-0 NetworkManager[50376]: <info>  [1765614494.1564] manager: (tapd41fdf9b-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.160 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.160 248514 INFO os_vif [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d')
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.181 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.255 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.256 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.256 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No VIF found with MAC fa:16:3e:8e:5e:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.256 248514 INFO nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Using config drive
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.279 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.481 248514 DEBUG nova.network.neutron [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Updated VIF entry in instance network info cache for port d41fdf9b-1d4a-475f-b516-69fa17b19cfb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.482 248514 DEBUG nova.network.neutron [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Updating instance_info_cache with network_info: [{"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.510 248514 DEBUG oslo_concurrency.lockutils [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5d8c2900-0048-4631-bbc6-0122bce8f4f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:28:14 compute-0 ceph-mon[76537]: pgmap v1916: 321 pgs: 321 active+clean; 526 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 207 op/s
Dec 13 08:28:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2297712692' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.705 248514 INFO nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Creating config drive at /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/disk.config
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.710 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuo5wh6x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:28:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167132967' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.758 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.785 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.790 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.855 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuo5wh6x" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.891 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.895 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/disk.config 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:14 compute-0 nova_compute[248510]: 2025-12-13 08:28:14.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:28:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1762142997' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:28:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:28:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1762142997' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.035 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/disk.config 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.038 248514 INFO nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Deleting local config drive /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/disk.config because it was imported into RBD.
Dec 13 08:28:15 compute-0 kernel: tapd41fdf9b-1d: entered promiscuous mode
Dec 13 08:28:15 compute-0 NetworkManager[50376]: <info>  [1765614495.1053] manager: (tapd41fdf9b-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Dec 13 08:28:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.208 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:15 compute-0 ovn_controller[148476]: 2025-12-13T08:28:15Z|00459|binding|INFO|Claiming lport d41fdf9b-1d4a-475f-b516-69fa17b19cfb for this chassis.
Dec 13 08:28:15 compute-0 ovn_controller[148476]: 2025-12-13T08:28:15Z|00460|binding|INFO|d41fdf9b-1d4a-475f-b516-69fa17b19cfb: Claiming fa:16:3e:8e:5e:3e 10.100.0.3
Dec 13 08:28:15 compute-0 systemd-udevd[301289]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.229 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:5e:3e 10.100.0.3'], port_security=['fa:16:3e:8e:5e:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5d8c2900-0048-4631-bbc6-0122bce8f4f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d41fdf9b-1d4a-475f-b516-69fa17b19cfb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.230 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d41fdf9b-1d4a-475f-b516-69fa17b19cfb in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc bound to our chassis
Dec 13 08:28:15 compute-0 NetworkManager[50376]: <info>  [1765614495.2361] device (tapd41fdf9b-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.234 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:28:15 compute-0 NetworkManager[50376]: <info>  [1765614495.2372] device (tapd41fdf9b-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:28:15 compute-0 ovn_controller[148476]: 2025-12-13T08:28:15Z|00461|binding|INFO|Setting lport d41fdf9b-1d4a-475f-b516-69fa17b19cfb ovn-installed in OVS
Dec 13 08:28:15 compute-0 ovn_controller[148476]: 2025-12-13T08:28:15Z|00462|binding|INFO|Setting lport d41fdf9b-1d4a-475f-b516-69fa17b19cfb up in Southbound
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.245 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.261 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4216de39-4e68-45cf-85e6-41e199f4f69a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:15 compute-0 systemd-machined[210538]: New machine qemu-60-instance-00000035.
Dec 13 08:28:15 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000035.
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.295 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[56f14f97-8e40-4fd7-b8ad-93113b72592e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.308 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1e0a50-af89-4424-a564-882eb8a0ac64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.354 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d3080060-0e30-41d4-aeff-f4551c72fd83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.376 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5609c6d2-0023-4321-bfe2-f56e2f0eda1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703784, 'reachable_time': 44592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301758, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:28:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3807542780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.393 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7e7a3848-c256-4e62-be0c-2e557bf056ad]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap87bd91d0-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703801, 'tstamp': 703801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301761, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap87bd91d0-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703805, 'tstamp': 703805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301761, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.395 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.397 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.402 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87bd91d0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.402 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.403 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87bd91d0-e0, col_values=(('external_ids', {'iface-id': '3e59c36d-12c7-4d92-aa2f-8b6a795f3f98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.402 248514 DEBUG nova.compute.manager [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-unplugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.403 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.403 248514 DEBUG oslo_concurrency.lockutils [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.404 248514 DEBUG oslo_concurrency.lockutils [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.404 248514 DEBUG oslo_concurrency.lockutils [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.404 248514 DEBUG nova.compute.manager [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] No waiting events found dispatching network-vif-unplugged-d8c7cad7-f601-4205-8838-a583b6e04b0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.404 248514 WARNING nova.compute.manager [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received unexpected event network-vif-unplugged-d8c7cad7-f601-4205-8838-a583b6e04b0f for instance with vm_state active and task_state rebuild_spawning.
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.405 248514 DEBUG nova.compute.manager [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.406 248514 DEBUG oslo_concurrency.lockutils [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.406 248514 DEBUG oslo_concurrency.lockutils [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.406 248514 DEBUG oslo_concurrency.lockutils [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.406 248514 DEBUG nova.compute.manager [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] No waiting events found dispatching network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.407 248514 WARNING nova.compute.manager [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received unexpected event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f for instance with vm_state active and task_state rebuild_spawning.
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.415 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.417 248514 DEBUG nova.virt.libvirt.vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1729116913',display_name='tempest-ServerDiskConfigTestJSON-server-1729116913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1729116913',id=51,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-3hps9qrz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-
ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:13Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=0ccf9d68-ffc2-4f17-a2e7-effa18fc607c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.417 248514 DEBUG nova.network.os_vif_util [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.418 248514 DEBUG nova.network.os_vif_util [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.421 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:28:15 compute-0 nova_compute[248510]:   <uuid>0ccf9d68-ffc2-4f17-a2e7-effa18fc607c</uuid>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   <name>instance-00000033</name>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1729116913</nova:name>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:28:14</nova:creationTime>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:28:15 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:28:15 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:28:15 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:28:15 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:28:15 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:28:15 compute-0 nova_compute[248510]:         <nova:user uuid="a5bc32e49dbd4372a006913090b9ef0f">tempest-ServerDiskConfigTestJSON-167971983-project-member</nova:user>
Dec 13 08:28:15 compute-0 nova_compute[248510]:         <nova:project uuid="9aea752cb9b648a7aa9b3f634ced797e">tempest-ServerDiskConfigTestJSON-167971983</nova:project>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:28:15 compute-0 nova_compute[248510]:         <nova:port uuid="d8c7cad7-f601-4205-8838-a583b6e04b0f">
Dec 13 08:28:15 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <system>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <entry name="serial">0ccf9d68-ffc2-4f17-a2e7-effa18fc607c</entry>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <entry name="uuid">0ccf9d68-ffc2-4f17-a2e7-effa18fc607c</entry>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     </system>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   <os>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   </os>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   <features>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   </features>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk">
Dec 13 08:28:15 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       </source>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:28:15 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config">
Dec 13 08:28:15 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       </source>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:28:15 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:45:57:38"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <target dev="tapd8c7cad7-f6"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/console.log" append="off"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <video>
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     </video>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:28:15 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:28:15 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:28:15 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:28:15 compute-0 nova_compute[248510]: </domain>
Dec 13 08:28:15 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.422 248514 DEBUG nova.compute.manager [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Preparing to wait for external event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.422 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.423 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.423 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.423 248514 DEBUG nova.virt.libvirt.vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1729116913',display_name='tempest-ServerDiskConfigTestJSON-server-1729116913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1729116913',id=51,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-3hps9qrz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-
ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:13Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=0ccf9d68-ffc2-4f17-a2e7-effa18fc607c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.424 248514 DEBUG nova.network.os_vif_util [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.424 248514 DEBUG nova.network.os_vif_util [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.425 248514 DEBUG os_vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.425 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.426 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.426 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.429 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.429 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8c7cad7-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.430 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8c7cad7-f6, col_values=(('external_ids', {'iface-id': 'd8c7cad7-f601-4205-8838-a583b6e04b0f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:57:38', 'vm-uuid': '0ccf9d68-ffc2-4f17-a2e7-effa18fc607c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.432 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:15 compute-0 NetworkManager[50376]: <info>  [1765614495.4342] manager: (tapd8c7cad7-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.442 248514 INFO os_vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6')
Dec 13 08:28:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 519 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.9 MiB/s wr, 285 op/s
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.501 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.501 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.502 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No VIF found with MAC fa:16:3e:45:57:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.502 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Using config drive
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.526 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.559 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1167132967' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1762142997' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:28:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1762142997' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:28:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3807542780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.614 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'keypairs' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.925 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614495.9248798, 5d8c2900-0048-4631-bbc6-0122bce8f4f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.926 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] VM Started (Lifecycle Event)
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.970 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.976 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614495.9250119, 5d8c2900-0048-4631-bbc6-0122bce8f4f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:15 compute-0 nova_compute[248510]: 2025-12-13 08:28:15.976 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] VM Paused (Lifecycle Event)
Dec 13 08:28:16 compute-0 nova_compute[248510]: 2025-12-13 08:28:16.321 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:16 compute-0 nova_compute[248510]: 2025-12-13 08:28:16.328 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:28:16 compute-0 nova_compute[248510]: 2025-12-13 08:28:16.364 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:28:16 compute-0 ceph-mon[76537]: pgmap v1917: 321 pgs: 321 active+clean; 519 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.9 MiB/s wr, 285 op/s
Dec 13 08:28:16 compute-0 nova_compute[248510]: 2025-12-13 08:28:16.817 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Creating config drive at /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config
Dec 13 08:28:16 compute-0 nova_compute[248510]: 2025-12-13 08:28:16.823 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoh8a2dv8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:16 compute-0 nova_compute[248510]: 2025-12-13 08:28:16.979 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoh8a2dv8" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:17 compute-0 nova_compute[248510]: 2025-12-13 08:28:17.005 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:17 compute-0 nova_compute[248510]: 2025-12-13 08:28:17.010 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:17 compute-0 nova_compute[248510]: 2025-12-13 08:28:17.152 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:17 compute-0 nova_compute[248510]: 2025-12-13 08:28:17.154 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deleting local config drive /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config because it was imported into RBD.
Dec 13 08:28:17 compute-0 kernel: tapd8c7cad7-f6: entered promiscuous mode
Dec 13 08:28:17 compute-0 NetworkManager[50376]: <info>  [1765614497.2173] manager: (tapd8c7cad7-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Dec 13 08:28:17 compute-0 ovn_controller[148476]: 2025-12-13T08:28:17Z|00463|binding|INFO|Claiming lport d8c7cad7-f601-4205-8838-a583b6e04b0f for this chassis.
Dec 13 08:28:17 compute-0 ovn_controller[148476]: 2025-12-13T08:28:17Z|00464|binding|INFO|d8c7cad7-f601-4205-8838-a583b6e04b0f: Claiming fa:16:3e:45:57:38 10.100.0.8
Dec 13 08:28:17 compute-0 nova_compute[248510]: 2025-12-13 08:28:17.225 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:17 compute-0 systemd-udevd[301749]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:28:17 compute-0 NetworkManager[50376]: <info>  [1765614497.2448] device (tapd8c7cad7-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:28:17 compute-0 NetworkManager[50376]: <info>  [1765614497.2456] device (tapd8c7cad7-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:28:17 compute-0 ovn_controller[148476]: 2025-12-13T08:28:17Z|00465|binding|INFO|Setting lport d8c7cad7-f601-4205-8838-a583b6e04b0f ovn-installed in OVS
Dec 13 08:28:17 compute-0 nova_compute[248510]: 2025-12-13 08:28:17.252 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:17 compute-0 systemd-machined[210538]: New machine qemu-61-instance-00000033.
Dec 13 08:28:17 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000033.
Dec 13 08:28:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 519 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 267 op/s
Dec 13 08:28:17 compute-0 ovn_controller[148476]: 2025-12-13T08:28:17Z|00466|binding|INFO|Setting lport d8c7cad7-f601-4205-8838-a583b6e04b0f up in Southbound
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.540 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:57:38 10.100.0.8'], port_security=['fa:16:3e:45:57:38 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0ccf9d68-ffc2-4f17-a2e7-effa18fc607c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d8c7cad7-f601-4205-8838-a583b6e04b0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.541 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d8c7cad7-f601-4205-8838-a583b6e04b0f in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 bound to our chassis
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.543 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.563 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6649ad5-4992-4d14-b514-ddb0f0362009]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.565 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c63049d-61 in ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.567 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c63049d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.567 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee88934-4d72-4561-89b2-28d2b5e9d835]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.570 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bf295939-aa2b-48c3-a5cf-374828815ef0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.587 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[21bc429b-5add-413a-be8e-e956c69e9754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9c84b5-1c37-4ce2-a85d-7ee77efd1bda]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.645 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a4881c8d-313d-45ef-a442-7bc88f518502]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 NetworkManager[50376]: <info>  [1765614497.6520] manager: (tap6c63049d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/217)
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.653 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0fa864-09da-4557-81c1-e0df3788a217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.689 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0e55d867-ada1-4f31-a429-d6c543a5717a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.692 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0b842e59-d1b4-46d3-b0f7-c7d4db2bc992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 NetworkManager[50376]: <info>  [1765614497.7227] device (tap6c63049d-60): carrier: link connected
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.734 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[91b28eef-0a8a-4794-8c6f-d65669bc1fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.755 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb12f1a-c88d-47a9-92d1-69c25dd6b310]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707493, 'reachable_time': 20027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301911, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.774 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3a58c212-529d-4527-97c1-07c2e9161efb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:c2f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707493, 'tstamp': 707493}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301912, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.797 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7bff5f14-b415-4ba7-bb36-0bb5a01270ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707493, 'reachable_time': 20027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301913, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.836 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[379caf59-29be-4ced-bc21-111dff56a813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.912 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b629920-09c4-4401-95ff-e212721889cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.913 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.914 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.914 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c63049d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:17 compute-0 nova_compute[248510]: 2025-12-13 08:28:17.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:17 compute-0 NetworkManager[50376]: <info>  [1765614497.9168] manager: (tap6c63049d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Dec 13 08:28:17 compute-0 kernel: tap6c63049d-60: entered promiscuous mode
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.921 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c63049d-60, col_values=(('external_ids', {'iface-id': 'b410790c-12b7-4a29-87e5-13a29af9c319'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:17 compute-0 nova_compute[248510]: 2025-12-13 08:28:17.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:17 compute-0 ovn_controller[148476]: 2025-12-13T08:28:17Z|00467|binding|INFO|Releasing lport b410790c-12b7-4a29-87e5-13a29af9c319 from this chassis (sb_readonly=0)
Dec 13 08:28:17 compute-0 nova_compute[248510]: 2025-12-13 08:28:17.941 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.943 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.944 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8366a85f-5c79-449f-b8b1-38d8bc345ed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.945 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:28:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.945 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'env', 'PROCESS_TAG=haproxy-6c63049d-63e9-47af-99e2-ce1403a42891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c63049d-63e9-47af-99e2-ce1403a42891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:28:18 compute-0 podman[301943]: 2025-12-13 08:28:18.395150924 +0000 UTC m=+0.066875171 container create abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 08:28:18 compute-0 podman[301943]: 2025-12-13 08:28:18.360306925 +0000 UTC m=+0.032031202 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:28:18 compute-0 systemd[1]: Started libpod-conmon-abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8.scope.
Dec 13 08:28:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0850c2732f6cad671f2980e01a6b8b5803f4a1da7e4ec39d2b25d9c9189acf0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:18 compute-0 podman[301943]: 2025-12-13 08:28:18.524611609 +0000 UTC m=+0.196335876 container init abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:28:18 compute-0 podman[301943]: 2025-12-13 08:28:18.536571174 +0000 UTC m=+0.208295421 container start abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:28:18 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [NOTICE]   (302052) : New worker (302061) forked
Dec 13 08:28:18 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [NOTICE]   (302052) : Loading success.
Dec 13 08:28:18 compute-0 podman[301995]: 2025-12-13 08:28:18.576989111 +0000 UTC m=+0.136561460 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 08:28:18 compute-0 nova_compute[248510]: 2025-12-13 08:28:18.581 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:28:18 compute-0 nova_compute[248510]: 2025-12-13 08:28:18.582 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614498.580779, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:18 compute-0 nova_compute[248510]: 2025-12-13 08:28:18.582 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Started (Lifecycle Event)
Dec 13 08:28:18 compute-0 podman[301996]: 2025-12-13 08:28:18.586112876 +0000 UTC m=+0.139498823 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 13 08:28:18 compute-0 podman[301992]: 2025-12-13 08:28:18.589730265 +0000 UTC m=+0.147728716 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:28:18 compute-0 ceph-mon[76537]: pgmap v1918: 321 pgs: 321 active+clean; 519 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 267 op/s
Dec 13 08:28:18 compute-0 nova_compute[248510]: 2025-12-13 08:28:18.681 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:18 compute-0 nova_compute[248510]: 2025-12-13 08:28:18.686 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614498.5809326, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:18 compute-0 nova_compute[248510]: 2025-12-13 08:28:18.686 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Paused (Lifecycle Event)
Dec 13 08:28:18 compute-0 nova_compute[248510]: 2025-12-13 08:28:18.711 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:18 compute-0 nova_compute[248510]: 2025-12-13 08:28:18.714 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:28:18 compute-0 nova_compute[248510]: 2025-12-13 08:28:18.745 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:28:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 498 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 316 op/s
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.030 248514 DEBUG nova.compute.manager [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.031 248514 DEBUG oslo_concurrency.lockutils [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.032 248514 DEBUG oslo_concurrency.lockutils [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.032 248514 DEBUG oslo_concurrency.lockutils [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.032 248514 DEBUG nova.compute.manager [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Processing event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.033 248514 DEBUG nova.compute.manager [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.033 248514 DEBUG oslo_concurrency.lockutils [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.033 248514 DEBUG oslo_concurrency.lockutils [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.033 248514 DEBUG oslo_concurrency.lockutils [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.034 248514 DEBUG nova.compute.manager [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] No waiting events found dispatching network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.034 248514 WARNING nova.compute.manager [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received unexpected event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb for instance with vm_state building and task_state spawning.
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.035 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.037 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.051 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614500.0513272, 5d8c2900-0048-4631-bbc6-0122bce8f4f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.052 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] VM Resumed (Lifecycle Event)
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.054 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.058 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Instance spawned successfully.
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.058 248514 INFO nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Took 13.92 seconds to spawn the instance on the hypervisor.
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.059 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.077 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.082 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.117 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:28:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.312 248514 INFO nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Took 16.10 seconds to build instance.
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.433 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:20 compute-0 nova_compute[248510]: 2025-12-13 08:28:20.468 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:20 compute-0 ceph-mon[76537]: pgmap v1919: 321 pgs: 321 active+clean; 498 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 316 op/s
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00374151092803035 of space, bias 1.0, pg target 1.122453278409105 quantized to 32 (current 32)
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010130296284747075 of space, bias 1.0, pg target 0.30289585891393755 quantized to 32 (current 32)
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.067014208962118e-07 of space, bias 4.0, pg target 0.0008452148993918693 quantized to 16 (current 32)
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:28:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Dec 13 08:28:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 498 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 255 op/s
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.324 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.324 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.325 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.325 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.325 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.327 248514 INFO nova.compute.manager [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Terminating instance
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.328 248514 DEBUG nova.compute.manager [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.336 248514 DEBUG nova.virt.libvirt.driver [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:28:22 compute-0 kernel: tapb03c2424-77 (unregistering): left promiscuous mode
Dec 13 08:28:22 compute-0 NetworkManager[50376]: <info>  [1765614502.3746] device (tapb03c2424-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00468|binding|INFO|Releasing lport b03c2424-77e3-49e1-b55f-f317911025b6 from this chassis (sb_readonly=0)
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00469|binding|INFO|Setting lport b03c2424-77e3-49e1-b55f-f317911025b6 down in Southbound
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00470|binding|INFO|Removing iface tapb03c2424-77 ovn-installed in OVS
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.434 248514 DEBUG nova.compute.manager [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.433 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:bc:28 10.100.0.4'], port_security=['fa:16:3e:33:bc:28 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e6e0fdaf-f934-4e56-8e59-4c4475bacd26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b03c2424-77e3-49e1-b55f-f317911025b6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.435 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b03c2424-77e3-49e1-b55f-f317911025b6 in datapath 7576f079-0439-46aa-98af-04f80cd254ca unbound from our chassis
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.435 248514 DEBUG oslo_concurrency.lockutils [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.435 248514 DEBUG oslo_concurrency.lockutils [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.436 248514 DEBUG oslo_concurrency.lockutils [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.436 248514 DEBUG nova.compute.manager [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Processing event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.436 248514 DEBUG nova.compute.manager [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.437 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.436 248514 DEBUG oslo_concurrency.lockutils [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.437 248514 DEBUG oslo_concurrency.lockutils [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.437 248514 DEBUG oslo_concurrency.lockutils [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.437 248514 DEBUG nova.compute.manager [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] No waiting events found dispatching network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.438 248514 WARNING nova.compute.manager [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received unexpected event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f for instance with vm_state active and task_state rebuild_spawning.
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.439 248514 DEBUG nova.compute.manager [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:28:22 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Deactivated successfully.
Dec 13 08:28:22 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Consumed 14.196s CPU time.
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.450 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614502.4432456, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.451 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Resumed (Lifecycle Event)
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.452 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:28:22 compute-0 systemd-machined[210538]: Machine qemu-54-instance-00000031 terminated.
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.456 248514 INFO nova.virt.libvirt.driver [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance spawned successfully.
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.456 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.464 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[51bf02ff-8c2a-4d88-855d-721da7cb4165]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.509 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3e876e55-dc46-4684-a8da-a6e929725c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.514 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e0306571-06ba-4333-b34f-a76d0f171337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.546 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.559 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.559 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:22 compute-0 kernel: tapb03c2424-77: entered promiscuous mode
Dec 13 08:28:22 compute-0 systemd-udevd[302077]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00471|binding|INFO|Claiming lport b03c2424-77e3-49e1-b55f-f317911025b6 for this chassis.
Dec 13 08:28:22 compute-0 NetworkManager[50376]: <info>  [1765614502.5647] manager: (tapb03c2424-77): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00472|binding|INFO|b03c2424-77e3-49e1-b55f-f317911025b6: Claiming fa:16:3e:33:bc:28 10.100.0.4
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.560 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.568 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.569 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[505fa89a-5c2f-497c-8182-0b16c508c5fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.569 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.569 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 kernel: tapb03c2424-77 (unregistering): left promiscuous mode
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.582 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.585 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:bc:28 10.100.0.4'], port_security=['fa:16:3e:33:bc:28 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e6e0fdaf-f934-4e56-8e59-4c4475bacd26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b03c2424-77e3-49e1-b55f-f317911025b6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:22 compute-0 virtnodedevd[248873]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 13 08:28:22 compute-0 virtnodedevd[248873]: hostname: compute-0
Dec 13 08:28:22 compute-0 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 08:28:22 compute-0 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00473|binding|INFO|Setting lport b03c2424-77e3-49e1-b55f-f317911025b6 ovn-installed in OVS
Dec 13 08:28:22 compute-0 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00474|binding|INFO|Setting lport b03c2424-77e3-49e1-b55f-f317911025b6 up in Southbound
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00475|binding|INFO|Releasing lport b03c2424-77e3-49e1-b55f-f317911025b6 from this chassis (sb_readonly=1)
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00476|if_status|INFO|Dropped 1 log messages in last 99 seconds (most recently, 99 seconds ago) due to excessive rate
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00477|if_status|INFO|Not setting lport b03c2424-77e3-49e1-b55f-f317911025b6 down as sb is readonly
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00478|binding|INFO|Removing iface tapb03c2424-77 ovn-installed in OVS
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.611 248514 INFO nova.virt.libvirt.driver [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Instance destroyed successfully.
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.612 248514 DEBUG nova.objects.instance [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'resources' on Instance uuid e6e0fdaf-f934-4e56-8e59-4c4475bacd26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.612 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f69d39c-37c4-4889-8a1c-ed9068292f4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302090, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 08:28:22 compute-0 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00479|binding|INFO|Releasing lport b03c2424-77e3-49e1-b55f-f317911025b6 from this chassis (sb_readonly=0)
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00480|binding|INFO|Setting lport b03c2424-77e3-49e1-b55f-f317911025b6 down in Southbound
Dec 13 08:28:22 compute-0 ceph-mon[76537]: pgmap v1920: 321 pgs: 321 active+clean; 498 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 255 op/s
Dec 13 08:28:22 compute-0 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da916449-8398-4aa2-bd6d-639d5e70196f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302101, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302101, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.645 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.650 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:bc:28 10.100.0.4'], port_security=['fa:16:3e:33:bc:28 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e6e0fdaf-f934-4e56-8e59-4c4475bacd26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b03c2424-77e3-49e1-b55f-f317911025b6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.653 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.653 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.653 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.654 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.655 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b03c2424-77e3-49e1-b55f-f317911025b6 in datapath 7576f079-0439-46aa-98af-04f80cd254ca unbound from our chassis
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.657 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.678 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2591a32e-47f1-4a51-9730-2a8c63d4542e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.685 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.692 248514 DEBUG nova.virt.libvirt.vif [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:27:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1305127951',display_name='tempest-ListServerFiltersTestJSON-instance-1305127951',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1305127951',id=49,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-xlshtj64',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:35Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=e6e0fdaf-f934-4e56-8e59-4c4475bacd26,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b03c2424-77e3-49e1-b55f-f317911025b6", "address": "fa:16:3e:33:bc:28", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03c2424-77", "ovs_interfaceid": "b03c2424-77e3-49e1-b55f-f317911025b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.692 248514 DEBUG nova.network.os_vif_util [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "b03c2424-77e3-49e1-b55f-f317911025b6", "address": "fa:16:3e:33:bc:28", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03c2424-77", "ovs_interfaceid": "b03c2424-77e3-49e1-b55f-f317911025b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.693 248514 DEBUG nova.network.os_vif_util [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:bc:28,bridge_name='br-int',has_traffic_filtering=True,id=b03c2424-77e3-49e1-b55f-f317911025b6,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03c2424-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.694 248514 DEBUG os_vif [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:bc:28,bridge_name='br-int',has_traffic_filtering=True,id=b03c2424-77e3-49e1-b55f-f317911025b6,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03c2424-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.696 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.696 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb03c2424-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.702 248514 INFO os_vif [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:bc:28,bridge_name='br-int',has_traffic_filtering=True,id=b03c2424-77e3-49e1-b55f-f317911025b6,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03c2424-77')
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.730 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[78623917-6ed6-47c3-982b-2e31e546774b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.735 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c8373fa5-ed67-4d15-986c-c9644faaca43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.779 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[18651dbc-0b0b-420b-8e05-c317c453bf54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.811 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d37de6-7e9f-4d8b-a442-34bb9d27305d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302136, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.838 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e401b0c0-0b48-46a5-9fbc-dfbe40386784]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302137, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302137, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.840 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.847 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.848 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.849 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.850 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.853 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b03c2424-77e3-49e1-b55f-f317911025b6 in datapath 7576f079-0439-46aa-98af-04f80cd254ca unbound from our chassis
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.854 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.854 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.854 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.855 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.855 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.855 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.856 248514 INFO nova.compute.manager [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Terminating instance
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.858 248514 DEBUG nova.compute.manager [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.860 248514 DEBUG nova.compute.manager [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.886 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fec3d027-2fba-453f-9f8b-b32736616013]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 kernel: tapd41fdf9b-1d (unregistering): left promiscuous mode
Dec 13 08:28:22 compute-0 NetworkManager[50376]: <info>  [1765614502.9169] device (tapd41fdf9b-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00481|binding|INFO|Releasing lport d41fdf9b-1d4a-475f-b516-69fa17b19cfb from this chassis (sb_readonly=0)
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00482|binding|INFO|Setting lport d41fdf9b-1d4a-475f-b516-69fa17b19cfb down in Southbound
Dec 13 08:28:22 compute-0 ovn_controller[148476]: 2025-12-13T08:28:22Z|00483|binding|INFO|Removing iface tapd41fdf9b-1d ovn-installed in OVS
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.929 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.944 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.947 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad3e4fc-f2f3-499d-a88d-182844e83aea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.951 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd88e98-28c3-4dc3-8f11-70ad20625c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.966 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:5e:3e 10.100.0.3'], port_security=['fa:16:3e:8e:5e:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5d8c2900-0048-4631-bbc6-0122bce8f4f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d41fdf9b-1d4a-475f-b516-69fa17b19cfb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.977 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.978 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:22 compute-0 nova_compute[248510]: 2025-12-13 08:28:22.978 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:28:22 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Deactivated successfully.
Dec 13 08:28:22 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Consumed 3.660s CPU time.
Dec 13 08:28:22 compute-0 systemd-machined[210538]: Machine qemu-60-instance-00000035 terminated.
Dec 13 08:28:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.993 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[afd28837-9356-431f-99df-b065c4788e3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.020 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79593e59-525a-4433-a70a-09c4a39ed366]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302147, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.045 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ff40368e-1d37-4476-8398-54410a4a00a8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302148, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302148, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.048 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.051 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.057 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.058 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.058 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.058 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.059 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.060 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d41fdf9b-1d4a-475f-b516-69fa17b19cfb in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc unbound from our chassis
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.061 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.090 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab72f7a3-426b-4c1e-85a5-73ee9e85662b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.102 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Instance destroyed successfully.
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.102 248514 DEBUG nova.objects.instance [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'resources' on Instance uuid 5d8c2900-0048-4631-bbc6-0122bce8f4f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.137 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9b330e9e-233e-4fdf-acdf-2445277042ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.139 248514 INFO nova.virt.libvirt.driver [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Deleting instance files /var/lib/nova/instances/e6e0fdaf-f934-4e56-8e59-4c4475bacd26_del
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.141 248514 INFO nova.virt.libvirt.driver [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Deletion of /var/lib/nova/instances/e6e0fdaf-f934-4e56-8e59-4c4475bacd26_del complete
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.141 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb63a97-a2df-4949-8562-be39a9d8c83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.182 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f04bcd56-478a-4f97-b56c-3b6cea642b4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.204 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[43e03665-8047-451e-be9c-0c514df28187]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703784, 'reachable_time': 44592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302166, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.210 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.227 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6056094b-c553-4fe9-97f5-1f1c195c90dd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap87bd91d0-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703801, 'tstamp': 703801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302167, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap87bd91d0-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703805, 'tstamp': 703805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302167, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.229 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.231 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.236 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.237 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87bd91d0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.237 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.238 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87bd91d0-e0, col_values=(('external_ids', {'iface-id': '3e59c36d-12c7-4d92-aa2f-8b6a795f3f98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.238 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 457 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.8 MiB/s wr, 305 op/s
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.651 248514 DEBUG nova.virt.libvirt.vif [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:28:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-622802661',display_name='tempest-ImagesTestJSON-server-622802661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-622802661',id=53,image_ref='e4e9ee37-4059-46f1-9bf8-83a72d0403e7',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:28:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-0ufcm8q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='df25cd40-72b5-4e0f-90ec-8677c699d1d3',image_min_disk='1',image_min_ram='0',image_owner_id='52e1055963294dbdb16cd95b466cd4d9',image_owner_project_name='tempest-ImagesTestJSON-1234382421',image_owner_user_name='tempest-ImagesTestJSON-1234382421-project-member',image_user_id='3b988c7ac9354c59aac9a9f41f83c20f',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:28:20Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=5d8c2900-0048-4631-bbc6-0122bce8f4f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.652 248514 DEBUG nova.network.os_vif_util [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.653 248514 DEBUG nova.network.os_vif_util [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.653 248514 DEBUG os_vif [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.656 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.657 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd41fdf9b-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.670 248514 INFO os_vif [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d')
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.853 248514 INFO nova.compute.manager [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Took 1.53 seconds to destroy the instance on the hypervisor.
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.854 248514 DEBUG oslo.service.loopingcall [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.855 248514 DEBUG nova.compute.manager [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:28:23 compute-0 nova_compute[248510]: 2025-12-13 08:28:23.855 248514 DEBUG nova.network.neutron [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:28:24 compute-0 nova_compute[248510]: 2025-12-13 08:28:24.164 248514 INFO nova.virt.libvirt.driver [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Deleting instance files /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3_del
Dec 13 08:28:24 compute-0 nova_compute[248510]: 2025-12-13 08:28:24.165 248514 INFO nova.virt.libvirt.driver [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Deletion of /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3_del complete
Dec 13 08:28:24 compute-0 ceph-mon[76537]: pgmap v1921: 321 pgs: 321 active+clean; 457 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.8 MiB/s wr, 305 op/s
Dec 13 08:28:24 compute-0 nova_compute[248510]: 2025-12-13 08:28:24.675 248514 INFO nova.compute.manager [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Took 1.82 seconds to destroy the instance on the hypervisor.
Dec 13 08:28:24 compute-0 nova_compute[248510]: 2025-12-13 08:28:24.675 248514 DEBUG oslo.service.loopingcall [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:28:24 compute-0 nova_compute[248510]: 2025-12-13 08:28:24.676 248514 DEBUG nova.compute.manager [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:28:24 compute-0 nova_compute[248510]: 2025-12-13 08:28:24.676 248514 DEBUG nova.network.neutron [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:28:24 compute-0 ovn_controller[148476]: 2025-12-13T08:28:24Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:ce:b2 10.100.0.10
Dec 13 08:28:24 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 13 08:28:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:24.925 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:24.927 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:28:24 compute-0 nova_compute[248510]: 2025-12-13 08:28:24.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.030 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:25 compute-0 sudo[302188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:28:25 compute-0 sudo[302188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:28:25 compute-0 sudo[302188]: pam_unix(sudo:session): session closed for user root
Dec 13 08:28:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:28:25 compute-0 sudo[302213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:28:25 compute-0 sudo[302213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:28:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 429 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.6 MiB/s wr, 319 op/s
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.678 248514 DEBUG nova.compute.manager [req-2ecfffda-58ef-47f8-8cef-6621b0580d24 req-4d6d54ff-8372-4816-b39f-92c5a69f7f2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-vif-unplugged-b03c2424-77e3-49e1-b55f-f317911025b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.679 248514 DEBUG oslo_concurrency.lockutils [req-2ecfffda-58ef-47f8-8cef-6621b0580d24 req-4d6d54ff-8372-4816-b39f-92c5a69f7f2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.679 248514 DEBUG oslo_concurrency.lockutils [req-2ecfffda-58ef-47f8-8cef-6621b0580d24 req-4d6d54ff-8372-4816-b39f-92c5a69f7f2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.680 248514 DEBUG oslo_concurrency.lockutils [req-2ecfffda-58ef-47f8-8cef-6621b0580d24 req-4d6d54ff-8372-4816-b39f-92c5a69f7f2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.680 248514 DEBUG nova.compute.manager [req-2ecfffda-58ef-47f8-8cef-6621b0580d24 req-4d6d54ff-8372-4816-b39f-92c5a69f7f2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] No waiting events found dispatching network-vif-unplugged-b03c2424-77e3-49e1-b55f-f317911025b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.680 248514 DEBUG nova.compute.manager [req-2ecfffda-58ef-47f8-8cef-6621b0580d24 req-4d6d54ff-8372-4816-b39f-92c5a69f7f2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-vif-unplugged-b03c2424-77e3-49e1-b55f-f317911025b6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.794 248514 DEBUG nova.compute.manager [req-c6ad9c15-e577-4b71-b6a3-d57c13903eda req-a9719af8-e8e0-47b2-8fac-43de829cafbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-vif-unplugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.795 248514 DEBUG oslo_concurrency.lockutils [req-c6ad9c15-e577-4b71-b6a3-d57c13903eda req-a9719af8-e8e0-47b2-8fac-43de829cafbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.795 248514 DEBUG oslo_concurrency.lockutils [req-c6ad9c15-e577-4b71-b6a3-d57c13903eda req-a9719af8-e8e0-47b2-8fac-43de829cafbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.795 248514 DEBUG oslo_concurrency.lockutils [req-c6ad9c15-e577-4b71-b6a3-d57c13903eda req-a9719af8-e8e0-47b2-8fac-43de829cafbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.795 248514 DEBUG nova.compute.manager [req-c6ad9c15-e577-4b71-b6a3-d57c13903eda req-a9719af8-e8e0-47b2-8fac-43de829cafbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] No waiting events found dispatching network-vif-unplugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:25 compute-0 nova_compute[248510]: 2025-12-13 08:28:25.795 248514 DEBUG nova.compute.manager [req-c6ad9c15-e577-4b71-b6a3-d57c13903eda req-a9719af8-e8e0-47b2-8fac-43de829cafbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-vif-unplugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:28:25 compute-0 sudo[302213]: pam_unix(sudo:session): session closed for user root
Dec 13 08:28:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:28:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:28:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:28:25 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:28:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:28:25 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:28:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:28:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:28:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:28:25 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:28:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:28:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:28:26 compute-0 sudo[302268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:28:26 compute-0 sudo[302268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:28:26 compute-0 ovn_controller[148476]: 2025-12-13T08:28:26Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:30:15 10.100.0.4
Dec 13 08:28:26 compute-0 ovn_controller[148476]: 2025-12-13T08:28:26Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:30:15 10.100.0.4
Dec 13 08:28:26 compute-0 sudo[302268]: pam_unix(sudo:session): session closed for user root
Dec 13 08:28:26 compute-0 sudo[302293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:28:26 compute-0 sudo[302293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.379 248514 DEBUG nova.network.neutron [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.403 248514 DEBUG nova.network.neutron [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:26 compute-0 podman[302331]: 2025-12-13 08:28:26.429554382 +0000 UTC m=+0.062959245 container create e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.442 248514 INFO nova.compute.manager [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Took 2.59 seconds to deallocate network for instance.
Dec 13 08:28:26 compute-0 systemd[1]: Started libpod-conmon-e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9.scope.
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.487 248514 INFO nova.compute.manager [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Took 1.81 seconds to deallocate network for instance.
Dec 13 08:28:26 compute-0 podman[302331]: 2025-12-13 08:28:26.406925463 +0000 UTC m=+0.040330346 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:28:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:28:26 compute-0 podman[302331]: 2025-12-13 08:28:26.538240163 +0000 UTC m=+0.171645056 container init e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:28:26 compute-0 podman[302331]: 2025-12-13 08:28:26.549137502 +0000 UTC m=+0.182542365 container start e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 08:28:26 compute-0 podman[302331]: 2025-12-13 08:28:26.552759612 +0000 UTC m=+0.186164505 container attach e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:28:26 compute-0 systemd[1]: libpod-e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9.scope: Deactivated successfully.
Dec 13 08:28:26 compute-0 modest_greider[302344]: 167 167
Dec 13 08:28:26 compute-0 conmon[302344]: conmon e996c2515384e7abe4da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9.scope/container/memory.events
Dec 13 08:28:26 compute-0 podman[302331]: 2025-12-13 08:28:26.560902162 +0000 UTC m=+0.194307025 container died e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1a79a754c8820a4f0dad31d9dbd878961df491583a1f6131d2d4121a13ce6e8-merged.mount: Deactivated successfully.
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.603 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.604 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:26 compute-0 podman[302331]: 2025-12-13 08:28:26.613298555 +0000 UTC m=+0.246703418 container remove e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 08:28:26 compute-0 systemd[1]: libpod-conmon-e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9.scope: Deactivated successfully.
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.718 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.718 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.719 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.719 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.720 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.721 248514 INFO nova.compute.manager [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Terminating instance
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.722 248514 DEBUG nova.compute.manager [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.724 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:26 compute-0 kernel: tapd8c7cad7-f6 (unregistering): left promiscuous mode
Dec 13 08:28:26 compute-0 ceph-mon[76537]: pgmap v1922: 321 pgs: 321 active+clean; 429 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.6 MiB/s wr, 319 op/s
Dec 13 08:28:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:28:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:28:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:28:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:28:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:28:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:28:26 compute-0 NetworkManager[50376]: <info>  [1765614506.7694] device (tapd8c7cad7-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.779 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:26 compute-0 ovn_controller[148476]: 2025-12-13T08:28:26Z|00484|binding|INFO|Releasing lport d8c7cad7-f601-4205-8838-a583b6e04b0f from this chassis (sb_readonly=0)
Dec 13 08:28:26 compute-0 ovn_controller[148476]: 2025-12-13T08:28:26Z|00485|binding|INFO|Setting lport d8c7cad7-f601-4205-8838-a583b6e04b0f down in Southbound
Dec 13 08:28:26 compute-0 ovn_controller[148476]: 2025-12-13T08:28:26Z|00486|binding|INFO|Removing iface tapd8c7cad7-f6 ovn-installed in OVS
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.782 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:26.794 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:57:38 10.100.0.8'], port_security=['fa:16:3e:45:57:38 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0ccf9d68-ffc2-4f17-a2e7-effa18fc607c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d8c7cad7-f601-4205-8838-a583b6e04b0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:26.796 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d8c7cad7-f601-4205-8838-a583b6e04b0f in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 unbound from our chassis
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.799 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:26.800 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c63049d-63e9-47af-99e2-ce1403a42891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:28:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:26.802 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[37122332-ac58-48c1-af87-0a34e0b24a96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:26.803 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace which is not needed anymore
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.811 248514 DEBUG oslo_concurrency.processutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:26 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000033.scope: Deactivated successfully.
Dec 13 08:28:26 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000033.scope: Consumed 5.376s CPU time.
Dec 13 08:28:26 compute-0 systemd-machined[210538]: Machine qemu-61-instance-00000033 terminated.
Dec 13 08:28:26 compute-0 podman[302373]: 2025-12-13 08:28:26.866416301 +0000 UTC m=+0.057413618 container create 04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 08:28:26 compute-0 systemd[1]: Started libpod-conmon-04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8.scope.
Dec 13 08:28:26 compute-0 podman[302373]: 2025-12-13 08:28:26.842288125 +0000 UTC m=+0.033285442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:28:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ce435d49ba2eb57f3f71f225c9875901a24a1806c029d5ee4fc46f31352d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ce435d49ba2eb57f3f71f225c9875901a24a1806c029d5ee4fc46f31352d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ce435d49ba2eb57f3f71f225c9875901a24a1806c029d5ee4fc46f31352d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ce435d49ba2eb57f3f71f225c9875901a24a1806c029d5ee4fc46f31352d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ce435d49ba2eb57f3f71f225c9875901a24a1806c029d5ee4fc46f31352d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:26 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [NOTICE]   (302052) : haproxy version is 2.8.14-c23fe91
Dec 13 08:28:26 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [NOTICE]   (302052) : path to executable is /usr/sbin/haproxy
Dec 13 08:28:26 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [WARNING]  (302052) : Exiting Master process...
Dec 13 08:28:26 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [ALERT]    (302052) : Current worker (302061) exited with code 143 (Terminated)
Dec 13 08:28:26 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [WARNING]  (302052) : All workers exited. Exiting... (0)
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.974 248514 INFO nova.virt.libvirt.driver [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance destroyed successfully.
Dec 13 08:28:26 compute-0 systemd[1]: libpod-abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8.scope: Deactivated successfully.
Dec 13 08:28:26 compute-0 nova_compute[248510]: 2025-12-13 08:28:26.975 248514 DEBUG nova.objects.instance [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'resources' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:26 compute-0 conmon[302023]: conmon abd7204c5e223c08d4aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8.scope/container/memory.events
Dec 13 08:28:26 compute-0 podman[302408]: 2025-12-13 08:28:26.978675721 +0000 UTC m=+0.058980177 container died abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:28:27 compute-0 podman[302373]: 2025-12-13 08:28:27.00419795 +0000 UTC m=+0.195195267 container init 04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_noether, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 08:28:27 compute-0 podman[302373]: 2025-12-13 08:28:27.014243308 +0000 UTC m=+0.205240605 container start 04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_noether, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8-userdata-shm.mount: Deactivated successfully.
Dec 13 08:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0850c2732f6cad671f2980e01a6b8b5803f4a1da7e4ec39d2b25d9c9189acf0-merged.mount: Deactivated successfully.
Dec 13 08:28:27 compute-0 podman[302373]: 2025-12-13 08:28:27.023036815 +0000 UTC m=+0.214034142 container attach 04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 08:28:27 compute-0 podman[302408]: 2025-12-13 08:28:27.033426671 +0000 UTC m=+0.113731127 container cleanup abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:28:27 compute-0 systemd[1]: libpod-conmon-abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8.scope: Deactivated successfully.
Dec 13 08:28:27 compute-0 podman[302475]: 2025-12-13 08:28:27.106471124 +0000 UTC m=+0.043709660 container remove abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:28:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.112 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[573846b4-8e1e-4d3f-8a22-41bec5b3bd0c]: (4, ('Sat Dec 13 08:28:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8)\nabd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8\nSat Dec 13 08:28:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8)\nabd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.115 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7c2199-1e68-4765-bd61-7346c174da87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.116 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.118 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:27 compute-0 kernel: tap6c63049d-60: left promiscuous mode
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.149 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a365100b-d94f-4242-811c-a071c99c937e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.151 248514 DEBUG nova.virt.libvirt.vif [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1729116913',display_name='tempest-ServerDiskConfigTestJSON-server-1729116913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1729116913',id=51,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:28:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-3hps9qrz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:28:23Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=0ccf9d68-ffc2-4f17-a2e7-effa18fc607c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.152 248514 DEBUG nova.network.os_vif_util [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.153 248514 DEBUG nova.network.os_vif_util [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.153 248514 DEBUG os_vif [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.158 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8c7cad7-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.160 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.162 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.163 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.165 248514 INFO os_vif [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6')
Dec 13 08:28:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.165 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f56471b1-4037-4eac-a831-c92a96b7779f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.167 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef300a0f-2547-474e-aadb-b62a3b51af30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.190 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[631e98d5-179f-426e-9248-3de5444ed033]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707485, 'reachable_time': 36389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302494, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.193 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:28:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.193 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[66e79c4b-7952-46dd-b7c8-5d9231bc118e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065828780' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c63049d\x2d63e9\x2d47af\x2d99e2\x2dce1403a42891.mount: Deactivated successfully.
Dec 13 08:28:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 429 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 881 KiB/s wr, 212 op/s
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.463 248514 DEBUG oslo_concurrency.processutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.471 248514 DEBUG nova.compute.provider_tree [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.487 248514 INFO nova.virt.libvirt.driver [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deleting instance files /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_del
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.487 248514 INFO nova.virt.libvirt.driver [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deletion of /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_del complete
Dec 13 08:28:27 compute-0 xenodochial_noether[302411]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:28:27 compute-0 xenodochial_noether[302411]: --> All data devices are unavailable
Dec 13 08:28:27 compute-0 systemd[1]: libpod-04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8.scope: Deactivated successfully.
Dec 13 08:28:27 compute-0 podman[302373]: 2025-12-13 08:28:27.555896383 +0000 UTC m=+0.746893690 container died 04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.559 248514 DEBUG nova.scheduler.client.report [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-071ce435d49ba2eb57f3f71f225c9875901a24a1806c029d5ee4fc46f31352d1-merged.mount: Deactivated successfully.
Dec 13 08:28:27 compute-0 podman[302373]: 2025-12-13 08:28:27.609415503 +0000 UTC m=+0.800412790 container remove 04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.616 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:27 compute-0 systemd[1]: libpod-conmon-04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8.scope: Deactivated successfully.
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.620 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:27 compute-0 sudo[302293]: pam_unix(sudo:session): session closed for user root
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.654 248514 INFO nova.compute.manager [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Took 0.93 seconds to destroy the instance on the hypervisor.
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.655 248514 DEBUG oslo.service.loopingcall [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.655 248514 DEBUG nova.compute.manager [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.656 248514 DEBUG nova.network.neutron [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.660 248514 INFO nova.scheduler.client.report [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Deleted allocations for instance e6e0fdaf-f934-4e56-8e59-4c4475bacd26
Dec 13 08:28:27 compute-0 sudo[302543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:28:27 compute-0 sudo[302543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:28:27 compute-0 sudo[302543]: pam_unix(sudo:session): session closed for user root
Dec 13 08:28:27 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1065828780' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:27 compute-0 sudo[302568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.812 248514 DEBUG oslo_concurrency.processutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:27 compute-0 sudo[302568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.938 248514 DEBUG nova.compute.manager [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.939 248514 DEBUG oslo_concurrency.lockutils [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.939 248514 DEBUG oslo_concurrency.lockutils [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.939 248514 DEBUG oslo_concurrency.lockutils [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.940 248514 DEBUG nova.compute.manager [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] No waiting events found dispatching network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.940 248514 WARNING nova.compute.manager [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received unexpected event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 for instance with vm_state deleted and task_state None.
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.940 248514 DEBUG nova.compute.manager [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.940 248514 DEBUG oslo_concurrency.lockutils [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.940 248514 DEBUG oslo_concurrency.lockutils [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.940 248514 DEBUG oslo_concurrency.lockutils [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.941 248514 DEBUG nova.compute.manager [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] No waiting events found dispatching network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.941 248514 WARNING nova.compute.manager [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received unexpected event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 for instance with vm_state deleted and task_state None.
Dec 13 08:28:27 compute-0 nova_compute[248510]: 2025-12-13 08:28:27.944 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:28 compute-0 podman[302625]: 2025-12-13 08:28:28.118381241 +0000 UTC m=+0.045979145 container create 56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jang, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 08:28:28 compute-0 systemd[1]: Started libpod-conmon-56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1.scope.
Dec 13 08:28:28 compute-0 podman[302625]: 2025-12-13 08:28:28.096537642 +0000 UTC m=+0.024135576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:28:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:28:28 compute-0 podman[302625]: 2025-12-13 08:28:28.231605445 +0000 UTC m=+0.159203379 container init 56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jang, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:28:28 compute-0 podman[302625]: 2025-12-13 08:28:28.240927965 +0000 UTC m=+0.168525869 container start 56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:28:28 compute-0 podman[302625]: 2025-12-13 08:28:28.244889543 +0000 UTC m=+0.172487447 container attach 56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jang, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 08:28:28 compute-0 elated_jang[302642]: 167 167
Dec 13 08:28:28 compute-0 systemd[1]: libpod-56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1.scope: Deactivated successfully.
Dec 13 08:28:28 compute-0 conmon[302642]: conmon 56b2f8ae35f87a628bde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1.scope/container/memory.events
Dec 13 08:28:28 compute-0 podman[302625]: 2025-12-13 08:28:28.249154448 +0000 UTC m=+0.176752392 container died 56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:28:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-dacd29a090b89faa77af99c823eb047e0caff2e5207262e9552cd1738f94b527-merged.mount: Deactivated successfully.
Dec 13 08:28:28 compute-0 podman[302625]: 2025-12-13 08:28:28.299181162 +0000 UTC m=+0.226779076 container remove 56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 08:28:28 compute-0 systemd[1]: libpod-conmon-56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1.scope: Deactivated successfully.
Dec 13 08:28:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169479124' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.396 248514 DEBUG oslo_concurrency.processutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.404 248514 DEBUG nova.compute.provider_tree [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:28:28 compute-0 podman[302670]: 2025-12-13 08:28:28.511284636 +0000 UTC m=+0.052926247 container create 437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Dec 13 08:28:28 compute-0 systemd[1]: Started libpod-conmon-437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de.scope.
Dec 13 08:28:28 compute-0 podman[302670]: 2025-12-13 08:28:28.486958626 +0000 UTC m=+0.028600287 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:28:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf4469d07dd13652455cde2bcfc9a7647d8cee98d1d03be78a395382abae12a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf4469d07dd13652455cde2bcfc9a7647d8cee98d1d03be78a395382abae12a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf4469d07dd13652455cde2bcfc9a7647d8cee98d1d03be78a395382abae12a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf4469d07dd13652455cde2bcfc9a7647d8cee98d1d03be78a395382abae12a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:28 compute-0 podman[302670]: 2025-12-13 08:28:28.612450081 +0000 UTC m=+0.154091712 container init 437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_austin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 08:28:28 compute-0 podman[302670]: 2025-12-13 08:28:28.629949463 +0000 UTC m=+0.171591074 container start 437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_austin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 08:28:28 compute-0 podman[302670]: 2025-12-13 08:28:28.636142296 +0000 UTC m=+0.177783927 container attach 437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.681 248514 DEBUG nova.scheduler.client.report [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:28:28 compute-0 ceph-mon[76537]: pgmap v1923: 321 pgs: 321 active+clean; 429 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 881 KiB/s wr, 212 op/s
Dec 13 08:28:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1169479124' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.848 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.882 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.882 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.883 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.883 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.883 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] No waiting events found dispatching network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.883 248514 WARNING nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received unexpected event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb for instance with vm_state deleted and task_state None.
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.883 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-vif-deleted-b03c2424-77e3-49e1-b55f-f317911025b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.884 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-vif-deleted-d41fdf9b-1d4a-475f-b516-69fa17b19cfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.884 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-unplugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.884 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.884 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.884 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.885 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] No waiting events found dispatching network-vif-unplugged-d8c7cad7-f601-4205-8838-a583b6e04b0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.885 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-unplugged-d8c7cad7-f601-4205-8838-a583b6e04b0f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.885 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.885 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.885 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.886 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.886 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] No waiting events found dispatching network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.886 248514 WARNING nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received unexpected event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f for instance with vm_state active and task_state deleting.
Dec 13 08:28:28 compute-0 agitated_austin[302687]: {
Dec 13 08:28:28 compute-0 agitated_austin[302687]:     "0": [
Dec 13 08:28:28 compute-0 agitated_austin[302687]:         {
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "devices": [
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "/dev/loop3"
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             ],
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_name": "ceph_lv0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_size": "21470642176",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "name": "ceph_lv0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "tags": {
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.cluster_name": "ceph",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.crush_device_class": "",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.encrypted": "0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.objectstore": "bluestore",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.osd_id": "0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.type": "block",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.vdo": "0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.with_tpm": "0"
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             },
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "type": "block",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "vg_name": "ceph_vg0"
Dec 13 08:28:28 compute-0 agitated_austin[302687]:         }
Dec 13 08:28:28 compute-0 agitated_austin[302687]:     ],
Dec 13 08:28:28 compute-0 agitated_austin[302687]:     "1": [
Dec 13 08:28:28 compute-0 agitated_austin[302687]:         {
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "devices": [
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "/dev/loop4"
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             ],
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_name": "ceph_lv1",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_size": "21470642176",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "name": "ceph_lv1",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "tags": {
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.cluster_name": "ceph",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.crush_device_class": "",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.encrypted": "0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.objectstore": "bluestore",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.osd_id": "1",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.type": "block",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.vdo": "0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.with_tpm": "0"
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             },
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "type": "block",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "vg_name": "ceph_vg1"
Dec 13 08:28:28 compute-0 agitated_austin[302687]:         }
Dec 13 08:28:28 compute-0 agitated_austin[302687]:     ],
Dec 13 08:28:28 compute-0 agitated_austin[302687]:     "2": [
Dec 13 08:28:28 compute-0 agitated_austin[302687]:         {
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "devices": [
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "/dev/loop5"
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             ],
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_name": "ceph_lv2",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_size": "21470642176",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "name": "ceph_lv2",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "tags": {
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.cluster_name": "ceph",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.crush_device_class": "",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.encrypted": "0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.objectstore": "bluestore",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.osd_id": "2",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.type": "block",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.vdo": "0",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:                 "ceph.with_tpm": "0"
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             },
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "type": "block",
Dec 13 08:28:28 compute-0 agitated_austin[302687]:             "vg_name": "ceph_vg2"
Dec 13 08:28:28 compute-0 agitated_austin[302687]:         }
Dec 13 08:28:28 compute-0 agitated_austin[302687]:     ]
Dec 13 08:28:28 compute-0 agitated_austin[302687]: }
Dec 13 08:28:28 compute-0 nova_compute[248510]: 2025-12-13 08:28:28.972 248514 INFO nova.scheduler.client.report [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Deleted allocations for instance 5d8c2900-0048-4631-bbc6-0122bce8f4f3
Dec 13 08:28:28 compute-0 systemd[1]: libpod-437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de.scope: Deactivated successfully.
Dec 13 08:28:28 compute-0 podman[302670]: 2025-12-13 08:28:28.996243231 +0000 UTC m=+0.537884862 container died 437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_austin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-adf4469d07dd13652455cde2bcfc9a7647d8cee98d1d03be78a395382abae12a-merged.mount: Deactivated successfully.
Dec 13 08:28:29 compute-0 podman[302670]: 2025-12-13 08:28:29.042908602 +0000 UTC m=+0.584550213 container remove 437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:28:29 compute-0 systemd[1]: libpod-conmon-437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de.scope: Deactivated successfully.
Dec 13 08:28:29 compute-0 nova_compute[248510]: 2025-12-13 08:28:29.090 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:29 compute-0 sudo[302568]: pam_unix(sudo:session): session closed for user root
Dec 13 08:28:29 compute-0 sudo[302707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:28:29 compute-0 sudo[302707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:28:29 compute-0 sudo[302707]: pam_unix(sudo:session): session closed for user root
Dec 13 08:28:29 compute-0 sudo[302732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:28:29 compute-0 sudo[302732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:28:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 405 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.3 MiB/s wr, 360 op/s
Dec 13 08:28:29 compute-0 nova_compute[248510]: 2025-12-13 08:28:29.532 248514 DEBUG nova.network.neutron [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:29 compute-0 podman[302770]: 2025-12-13 08:28:29.571882784 +0000 UTC m=+0.050770484 container create 0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_gould, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:28:29 compute-0 systemd[1]: Started libpod-conmon-0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28.scope.
Dec 13 08:28:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:28:29 compute-0 podman[302770]: 2025-12-13 08:28:29.547658976 +0000 UTC m=+0.026546706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:28:29 compute-0 podman[302770]: 2025-12-13 08:28:29.648684969 +0000 UTC m=+0.127572699 container init 0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_gould, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 08:28:29 compute-0 podman[302770]: 2025-12-13 08:28:29.656686156 +0000 UTC m=+0.135573846 container start 0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_gould, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:28:29 compute-0 podman[302770]: 2025-12-13 08:28:29.660235354 +0000 UTC m=+0.139123064 container attach 0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 08:28:29 compute-0 optimistic_gould[302786]: 167 167
Dec 13 08:28:29 compute-0 systemd[1]: libpod-0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28.scope: Deactivated successfully.
Dec 13 08:28:29 compute-0 conmon[302786]: conmon 0dc3e1d02c2cd0470b6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28.scope/container/memory.events
Dec 13 08:28:29 compute-0 podman[302770]: 2025-12-13 08:28:29.663571396 +0000 UTC m=+0.142459096 container died 0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 08:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad9234ae3f546a1401b4d79f467b2497fda8e16b33a6e15b3ae10a0255073172-merged.mount: Deactivated successfully.
Dec 13 08:28:29 compute-0 podman[302770]: 2025-12-13 08:28:29.709633783 +0000 UTC m=+0.188521463 container remove 0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_gould, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 08:28:29 compute-0 systemd[1]: libpod-conmon-0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28.scope: Deactivated successfully.
Dec 13 08:28:29 compute-0 podman[302808]: 2025-12-13 08:28:29.906439719 +0000 UTC m=+0.041561017 container create 67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 08:28:29 compute-0 systemd[1]: Started libpod-conmon-67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071.scope.
Dec 13 08:28:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e02b5c93fa7e0a62370599c6a0ea197fb0c8d6089636f80c18779b403bf9ce8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e02b5c93fa7e0a62370599c6a0ea197fb0c8d6089636f80c18779b403bf9ce8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e02b5c93fa7e0a62370599c6a0ea197fb0c8d6089636f80c18779b403bf9ce8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e02b5c93fa7e0a62370599c6a0ea197fb0c8d6089636f80c18779b403bf9ce8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:29 compute-0 podman[302808]: 2025-12-13 08:28:29.888424414 +0000 UTC m=+0.023545732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:28:29 compute-0 podman[302808]: 2025-12-13 08:28:29.991428056 +0000 UTC m=+0.126549354 container init 67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_williams, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:28:29 compute-0 podman[302808]: 2025-12-13 08:28:29.999341681 +0000 UTC m=+0.134462989 container start 67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 08:28:30 compute-0 podman[302808]: 2025-12-13 08:28:30.002195381 +0000 UTC m=+0.137316679 container attach 67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.027 248514 INFO nova.compute.manager [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Took 2.37 seconds to deallocate network for instance.
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.033 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.435 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.436 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.462 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "41602b99-e7f2-450c-885e-51d07a1236d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.463 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.463 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.463 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.463 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.465 248514 INFO nova.compute.manager [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Terminating instance
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.466 248514 DEBUG nova.compute.manager [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:28:30 compute-0 kernel: tapc76cbcb4-3f (unregistering): left promiscuous mode
Dec 13 08:28:30 compute-0 NetworkManager[50376]: <info>  [1765614510.5094] device (tapc76cbcb4-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:28:30 compute-0 ovn_controller[148476]: 2025-12-13T08:28:30Z|00487|binding|INFO|Releasing lport c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 from this chassis (sb_readonly=0)
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:30 compute-0 ovn_controller[148476]: 2025-12-13T08:28:30Z|00488|binding|INFO|Setting lport c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 down in Southbound
Dec 13 08:28:30 compute-0 ovn_controller[148476]: 2025-12-13T08:28:30Z|00489|binding|INFO|Removing iface tapc76cbcb4-3f ovn-installed in OVS
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.527 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:30 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Deactivated successfully.
Dec 13 08:28:30 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Consumed 14.464s CPU time.
Dec 13 08:28:30 compute-0 systemd-machined[210538]: Machine qemu-53-instance-00000030 terminated.
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.596 248514 DEBUG oslo_concurrency.processutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.644 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:73:cb 10.100.0.8'], port_security=['fa:16:3e:75:73:cb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '41602b99-e7f2-450c-885e-51d07a1236d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.646 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 in datapath 7576f079-0439-46aa-98af-04f80cd254ca unbound from our chassis
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.647 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.667 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.667 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.668 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8571840a-9b2f-4c57-a9f4-b6df8c10730e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.711 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4db9b708-d849-40a9-9550-025e3311079c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.716 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b06c91-3a68-4cee-bf90-c50dcd0ea3f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.717 248514 INFO nova.virt.libvirt.driver [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Instance destroyed successfully.
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.718 248514 DEBUG nova.objects.instance [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'resources' on Instance uuid 41602b99-e7f2-450c-885e-51d07a1236d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.754 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d90729fb-a851-4916-a3dc-ea9c5a36aecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.774 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2b061213-afee-442e-9b80-20b447f5524a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 19, 'rx_bytes': 826, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 19, 'rx_bytes': 826, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302920, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.778 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:28:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Dec 13 08:28:30 compute-0 ceph-mon[76537]: pgmap v1924: 321 pgs: 321 active+clean; 405 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.3 MiB/s wr, 360 op/s
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.794 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0533e9f7-5d91-40c6-9d84-f26085df7971]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302941, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302941, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.797 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.799 248514 DEBUG nova.virt.libvirt.vif [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:27:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-254704430',display_name='tempest-ListServerFiltersTestJSON-instance-254704430',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-254704430',id=48,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-5xaoa0nb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:30Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=41602b99-e7f2-450c-885e-51d07a1236d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "address": "fa:16:3e:75:73:cb", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76cbcb4-3f", "ovs_interfaceid": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.800 248514 DEBUG nova.network.os_vif_util [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "address": "fa:16:3e:75:73:cb", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76cbcb4-3f", "ovs_interfaceid": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.801 248514 DEBUG nova.network.os_vif_util [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76cbcb4-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.802 248514 DEBUG os_vif [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76cbcb4-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:28:30 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.806 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.807 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc76cbcb4-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.808 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.809 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.810 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.811 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:30 compute-0 nova_compute[248510]: 2025-12-13 08:28:30.812 248514 INFO os_vif [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76cbcb4-3f')
Dec 13 08:28:30 compute-0 lvm[302944]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:28:30 compute-0 lvm[302944]: VG ceph_vg0 finished
Dec 13 08:28:30 compute-0 lvm[302947]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:28:30 compute-0 lvm[302947]: VG ceph_vg1 finished
Dec 13 08:28:30 compute-0 lvm[302963]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:28:30 compute-0 lvm[302963]: VG ceph_vg2 finished
Dec 13 08:28:30 compute-0 youthful_williams[302824]: {}
Dec 13 08:28:30 compute-0 systemd[1]: libpod-67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071.scope: Deactivated successfully.
Dec 13 08:28:30 compute-0 systemd[1]: libpod-67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071.scope: Consumed 1.486s CPU time.
Dec 13 08:28:30 compute-0 podman[302808]: 2025-12-13 08:28:30.957604905 +0000 UTC m=+1.092726203 container died 67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_williams, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:28:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e02b5c93fa7e0a62370599c6a0ea197fb0c8d6089636f80c18779b403bf9ce8-merged.mount: Deactivated successfully.
Dec 13 08:28:31 compute-0 podman[302808]: 2025-12-13 08:28:31.015978035 +0000 UTC m=+1.151099323 container remove 67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.027 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:31 compute-0 systemd[1]: libpod-conmon-67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071.scope: Deactivated successfully.
Dec 13 08:28:31 compute-0 sudo[302732]: pam_unix(sudo:session): session closed for user root
Dec 13 08:28:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:28:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2420189846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.242 248514 DEBUG oslo_concurrency.processutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.252 248514 DEBUG nova.compute.provider_tree [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:28:31 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:28:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.343 248514 DEBUG nova.scheduler.client.report [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:28:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 405 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.6 MiB/s wr, 373 op/s
Dec 13 08:28:31 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.503 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.506 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.515 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.516 248514 INFO nova.compute.claims [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:28:31 compute-0 sudo[302984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.576 248514 INFO nova.scheduler.client.report [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Deleted allocations for instance 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c
Dec 13 08:28:31 compute-0 sudo[302984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:28:31 compute-0 sudo[302984]: pam_unix(sudo:session): session closed for user root
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.679 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.845 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:31 compute-0 ceph-mon[76537]: osdmap e215: 3 total, 3 up, 3 in
Dec 13 08:28:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2420189846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:31 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:28:31 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.902 248514 INFO nova.virt.libvirt.driver [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Deleting instance files /var/lib/nova/instances/41602b99-e7f2-450c-885e-51d07a1236d3_del
Dec 13 08:28:31 compute-0 nova_compute[248510]: 2025-12-13 08:28:31.904 248514 INFO nova.virt.libvirt.driver [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Deletion of /var/lib/nova/instances/41602b99-e7f2-450c-885e-51d07a1236d3_del complete
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.061 248514 DEBUG nova.compute.manager [req-2ade022e-1b0b-45fa-a5fe-4a656f53f3f5 req-852aa4bf-eb0d-461a-9061-58e4e459f7cb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-deleted-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.066 248514 INFO nova.compute.manager [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Took 1.60 seconds to destroy the instance on the hypervisor.
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.067 248514 DEBUG oslo.service.loopingcall [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.067 248514 DEBUG nova.compute.manager [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.067 248514 DEBUG nova.network.neutron [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:28:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2605926860' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.449 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.455 248514 DEBUG nova.compute.provider_tree [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.493 248514 DEBUG nova.scheduler.client.report [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:28:32 compute-0 sshd-session[303029]: Connection closed by 193.32.162.146 port 40504
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.531 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.531 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.532 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.532 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.533 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.534 248514 INFO nova.compute.manager [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Terminating instance
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.536 248514 DEBUG nova.compute.manager [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:28:32 compute-0 kernel: tapeca7f353-34 (unregistering): left promiscuous mode
Dec 13 08:28:32 compute-0 NetworkManager[50376]: <info>  [1765614512.5871] device (tapeca7f353-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.596 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:32 compute-0 ovn_controller[148476]: 2025-12-13T08:28:32Z|00490|binding|INFO|Releasing lport eca7f353-3478-46ea-a63f-617a11a8f7ff from this chassis (sb_readonly=0)
Dec 13 08:28:32 compute-0 ovn_controller[148476]: 2025-12-13T08:28:32Z|00491|binding|INFO|Setting lport eca7f353-3478-46ea-a63f-617a11a8f7ff down in Southbound
Dec 13 08:28:32 compute-0 ovn_controller[148476]: 2025-12-13T08:28:32Z|00492|binding|INFO|Removing iface tapeca7f353-34 ovn-installed in OVS
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.612 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:88:26 10.100.0.12'], port_security=['fa:16:3e:1c:88:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'df25cd40-72b5-4e0f-90ec-8677c699d1d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=eca7f353-3478-46ea-a63f-617a11a8f7ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.616 158419 INFO neutron.agent.ovn.metadata.agent [-] Port eca7f353-3478-46ea-a63f-617a11a8f7ff in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc unbound from our chassis
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.617 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.618 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.618 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.619 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7c52b7-cf10-4047-825e-bfe75cfbeede]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.620 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace which is not needed anymore
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:32 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000032.scope: Deactivated successfully.
Dec 13 08:28:32 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000032.scope: Consumed 15.522s CPU time.
Dec 13 08:28:32 compute-0 systemd-machined[210538]: Machine qemu-56-instance-00000032 terminated.
Dec 13 08:28:32 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [NOTICE]   (299716) : haproxy version is 2.8.14-c23fe91
Dec 13 08:28:32 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [NOTICE]   (299716) : path to executable is /usr/sbin/haproxy
Dec 13 08:28:32 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [WARNING]  (299716) : Exiting Master process...
Dec 13 08:28:32 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [WARNING]  (299716) : Exiting Master process...
Dec 13 08:28:32 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [ALERT]    (299716) : Current worker (299718) exited with code 143 (Terminated)
Dec 13 08:28:32 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [WARNING]  (299716) : All workers exited. Exiting... (0)
Dec 13 08:28:32 compute-0 systemd[1]: libpod-e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d.scope: Deactivated successfully.
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.783 248514 INFO nova.virt.libvirt.driver [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Instance destroyed successfully.
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.784 248514 DEBUG nova.objects.instance [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'resources' on Instance uuid df25cd40-72b5-4e0f-90ec-8677c699d1d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:32 compute-0 podman[303054]: 2025-12-13 08:28:32.786359706 +0000 UTC m=+0.054540577 container died e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.811 248514 DEBUG nova.virt.libvirt.vif [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:27:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-533202396',display_name='tempest-ImagesTestJSON-server-533202396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-533202396',id=50,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-e880c0lz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:56Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=df25cd40-72b5-4e0f-90ec-8677c699d1d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "address": "fa:16:3e:1c:88:26", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeca7f353-34", "ovs_interfaceid": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.812 248514 DEBUG nova.network.os_vif_util [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "address": "fa:16:3e:1c:88:26", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeca7f353-34", "ovs_interfaceid": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.813 248514 DEBUG nova.network.os_vif_util [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:88:26,bridge_name='br-int',has_traffic_filtering=True,id=eca7f353-3478-46ea-a63f-617a11a8f7ff,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeca7f353-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.813 248514 DEBUG os_vif [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:88:26,bridge_name='br-int',has_traffic_filtering=True,id=eca7f353-3478-46ea-a63f-617a11a8f7ff,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeca7f353-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.816 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.816 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeca7f353-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.818 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.822 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.823 248514 DEBUG nova.network.neutron [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:28:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d-userdata-shm.mount: Deactivated successfully.
Dec 13 08:28:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c8666a1b8044e354b810eaa4281d1b07b7db2628215dd5a45bddcfe0eef16ae-merged.mount: Deactivated successfully.
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.837 248514 INFO os_vif [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:88:26,bridge_name='br-int',has_traffic_filtering=True,id=eca7f353-3478-46ea-a63f-617a11a8f7ff,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeca7f353-34')
Dec 13 08:28:32 compute-0 podman[303054]: 2025-12-13 08:28:32.838017261 +0000 UTC m=+0.106198132 container cleanup e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:28:32 compute-0 systemd[1]: libpod-conmon-e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d.scope: Deactivated successfully.
Dec 13 08:28:32 compute-0 ceph-mon[76537]: pgmap v1926: 321 pgs: 321 active+clean; 405 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.6 MiB/s wr, 373 op/s
Dec 13 08:28:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2605926860' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:32 compute-0 podman[303100]: 2025-12-13 08:28:32.916662591 +0000 UTC m=+0.050884856 container remove e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.923 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[faef0eaa-52ee-4cff-8e17-7f366c9a071b]: (4, ('Sat Dec 13 08:28:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d)\ne4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d\nSat Dec 13 08:28:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d)\ne4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.925 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[895c80c0-98cc-4ef0-97c5-aa64a604affb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.926 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.929 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:32 compute-0 kernel: tap87bd91d0-e0: left promiscuous mode
Dec 13 08:28:32 compute-0 nova_compute[248510]: 2025-12-13 08:28:32.947 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.951 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7a28762e-e378-4112-be0d-51e61e184f54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.969 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[892ee32a-512f-4614-8aa4-c4dd9d9c4383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.971 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[12c3f298-c56d-46f0-b783-bcad1245e239]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.991 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5cf67da-b47a-4a08-bcaa-5008f61560a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703776, 'reachable_time': 16673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303125, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d87bd91d0\x2deead\x2d49b6\x2d8f92\x2df8d0dba555dc.mount: Deactivated successfully.
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.995 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:28:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.996 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[268c258b-5213-408a-9201-d1fd89b42445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:33 compute-0 nova_compute[248510]: 2025-12-13 08:28:33.006 248514 INFO nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:28:33 compute-0 nova_compute[248510]: 2025-12-13 08:28:33.030 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:28:33 compute-0 nova_compute[248510]: 2025-12-13 08:28:33.122 248514 INFO nova.virt.libvirt.driver [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Deleting instance files /var/lib/nova/instances/df25cd40-72b5-4e0f-90ec-8677c699d1d3_del
Dec 13 08:28:33 compute-0 nova_compute[248510]: 2025-12-13 08:28:33.123 248514 INFO nova.virt.libvirt.driver [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Deletion of /var/lib/nova/instances/df25cd40-72b5-4e0f-90ec-8677c699d1d3_del complete
Dec 13 08:28:33 compute-0 nova_compute[248510]: 2025-12-13 08:28:33.330 248514 DEBUG nova.policy [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5bc32e49dbd4372a006913090b9ef0f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:28:33 compute-0 nova_compute[248510]: 2025-12-13 08:28:33.434 248514 DEBUG nova.virt.libvirt.driver [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:28:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 341 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 357 op/s
Dec 13 08:28:33 compute-0 nova_compute[248510]: 2025-12-13 08:28:33.731 248514 INFO nova.compute.manager [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Took 1.19 seconds to destroy the instance on the hypervisor.
Dec 13 08:28:33 compute-0 nova_compute[248510]: 2025-12-13 08:28:33.731 248514 DEBUG oslo.service.loopingcall [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:28:33 compute-0 nova_compute[248510]: 2025-12-13 08:28:33.732 248514 DEBUG nova.compute.manager [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:28:33 compute-0 nova_compute[248510]: 2025-12-13 08:28:33.732 248514 DEBUG nova.network.neutron [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.218 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.220 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.220 248514 INFO nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Creating image(s)
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.241 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.267 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.292 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.296 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.376 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.378 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.379 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.379 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.400 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.404 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7139a479-b2fe-4d64-8061-97fceda2e392_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.624 248514 DEBUG nova.network.neutron [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.652 248514 INFO nova.compute.manager [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Took 2.59 seconds to deallocate network for instance.
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.713 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7139a479-b2fe-4d64-8061-97fceda2e392_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.784 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] resizing rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.823 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.823 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.825 248514 DEBUG nova.compute.manager [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received event network-vif-unplugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.825 248514 DEBUG oslo_concurrency.lockutils [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.826 248514 DEBUG oslo_concurrency.lockutils [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.826 248514 DEBUG oslo_concurrency.lockutils [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.826 248514 DEBUG nova.compute.manager [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] No waiting events found dispatching network-vif-unplugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.826 248514 WARNING nova.compute.manager [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received unexpected event network-vif-unplugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 for instance with vm_state deleted and task_state None.
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.826 248514 DEBUG nova.compute.manager [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received event network-vif-plugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.826 248514 DEBUG oslo_concurrency.lockutils [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.827 248514 DEBUG oslo_concurrency.lockutils [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.827 248514 DEBUG oslo_concurrency.lockutils [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.827 248514 DEBUG nova.compute.manager [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] No waiting events found dispatching network-vif-plugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.827 248514 WARNING nova.compute.manager [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received unexpected event network-vif-plugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 for instance with vm_state deleted and task_state None.
Dec 13 08:28:34 compute-0 ceph-mon[76537]: pgmap v1927: 321 pgs: 321 active+clean; 341 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 357 op/s
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.906 248514 DEBUG nova.objects.instance [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'migration_context' on Instance uuid 7139a479-b2fe-4d64-8061-97fceda2e392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:34.929 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:34 compute-0 nova_compute[248510]: 2025-12-13 08:28:34.996 248514 DEBUG oslo_concurrency.processutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.045 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.046 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Ensure instance console log exists: /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.046 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.046 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.047 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:28:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 229 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.7 MiB/s wr, 269 op/s
Dec 13 08:28:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/894203805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.594 248514 DEBUG oslo_concurrency.processutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.601 248514 DEBUG nova.compute.provider_tree [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:28:35 compute-0 kernel: tap3a8efc9e-75 (unregistering): left promiscuous mode
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.666 248514 DEBUG nova.scheduler.client.report [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:28:35 compute-0 NetworkManager[50376]: <info>  [1765614515.6685] device (tap3a8efc9e-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:35 compute-0 ovn_controller[148476]: 2025-12-13T08:28:35Z|00493|binding|INFO|Releasing lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 from this chassis (sb_readonly=0)
Dec 13 08:28:35 compute-0 ovn_controller[148476]: 2025-12-13T08:28:35Z|00494|binding|INFO|Setting lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 down in Southbound
Dec 13 08:28:35 compute-0 ovn_controller[148476]: 2025-12-13T08:28:35Z|00495|binding|INFO|Removing iface tap3a8efc9e-75 ovn-installed in OVS
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.689 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:30:15 10.100.0.4'], port_security=['fa:16:3e:cc:30:15 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '678d2db2-0536-4744-b65c-f0a5852f35e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3a8efc9e-7582-4b17-ab0e-b248e09932b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.691 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3a8efc9e-7582-4b17-ab0e-b248e09932b3 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis
Dec 13 08:28:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.693 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:28:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.694 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bc1cb8-78e0-4dc5-96de-a36182c24164]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.695 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace which is not needed anymore
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.696 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.764 248514 INFO nova.scheduler.client.report [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Deleted allocations for instance 41602b99-e7f2-450c-885e-51d07a1236d3
Dec 13 08:28:35 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Deactivated successfully.
Dec 13 08:28:35 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Consumed 15.679s CPU time.
Dec 13 08:28:35 compute-0 systemd-machined[210538]: Machine qemu-59-instance-00000034 terminated.
Dec 13 08:28:35 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [NOTICE]   (301274) : haproxy version is 2.8.14-c23fe91
Dec 13 08:28:35 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [NOTICE]   (301274) : path to executable is /usr/sbin/haproxy
Dec 13 08:28:35 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [WARNING]  (301274) : Exiting Master process...
Dec 13 08:28:35 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [WARNING]  (301274) : Exiting Master process...
Dec 13 08:28:35 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [ALERT]    (301274) : Current worker (301276) exited with code 143 (Terminated)
Dec 13 08:28:35 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [WARNING]  (301274) : All workers exited. Exiting... (0)
Dec 13 08:28:35 compute-0 systemd[1]: libpod-b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a.scope: Deactivated successfully.
Dec 13 08:28:35 compute-0 podman[303338]: 2025-12-13 08:28:35.846261205 +0000 UTC m=+0.047950765 container died b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 08:28:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/894203805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-01f9f9ca3b39265b46e15e383b3b89331e9eff00a18a8f9d53df9e42b58de486-merged.mount: Deactivated successfully.
Dec 13 08:28:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a-userdata-shm.mount: Deactivated successfully.
Dec 13 08:28:35 compute-0 podman[303338]: 2025-12-13 08:28:35.885863792 +0000 UTC m=+0.087553352 container cleanup b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:28:35 compute-0 kernel: tap3a8efc9e-75: entered promiscuous mode
Dec 13 08:28:35 compute-0 NetworkManager[50376]: <info>  [1765614515.8944] manager: (tap3a8efc9e-75): new Tun device (/org/freedesktop/NetworkManager/Devices/220)
Dec 13 08:28:35 compute-0 ovn_controller[148476]: 2025-12-13T08:28:35Z|00496|binding|INFO|Claiming lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 for this chassis.
Dec 13 08:28:35 compute-0 ovn_controller[148476]: 2025-12-13T08:28:35Z|00497|binding|INFO|3a8efc9e-7582-4b17-ab0e-b248e09932b3: Claiming fa:16:3e:cc:30:15 10.100.0.4
Dec 13 08:28:35 compute-0 kernel: tap3a8efc9e-75 (unregistering): left promiscuous mode
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:35 compute-0 systemd[1]: libpod-conmon-b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a.scope: Deactivated successfully.
Dec 13 08:28:35 compute-0 ovn_controller[148476]: 2025-12-13T08:28:35Z|00498|binding|INFO|Setting lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 ovn-installed in OVS
Dec 13 08:28:35 compute-0 ovn_controller[148476]: 2025-12-13T08:28:35Z|00499|if_status|INFO|Dropped 4 log messages in last 14 seconds (most recently, 14 seconds ago) due to excessive rate
Dec 13 08:28:35 compute-0 ovn_controller[148476]: 2025-12-13T08:28:35Z|00500|if_status|INFO|Not setting lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 down as sb is readonly
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:35 compute-0 podman[303369]: 2025-12-13 08:28:35.960004511 +0000 UTC m=+0.047073522 container remove b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:28:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.967 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[341ae1d6-e7ef-4d7f-83f3-292da2403fbe]: (4, ('Sat Dec 13 08:28:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a)\nb5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a\nSat Dec 13 08:28:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a)\nb5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.969 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb4b495-bb87-4c18-843c-978f761d6fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.970 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.972 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:35 compute-0 kernel: tap85372fca-a0: left promiscuous mode
Dec 13 08:28:35 compute-0 nova_compute[248510]: 2025-12-13 08:28:35.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.999 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[67726c50-1656-4b58-814f-84c64beb5f25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.018 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d84124a-0732-42be-ada6-f45fbb1ae24e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:36 compute-0 ovn_controller[148476]: 2025-12-13T08:28:36Z|00501|binding|INFO|Releasing lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 from this chassis (sb_readonly=0)
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.022 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:30:15 10.100.0.4'], port_security=['fa:16:3e:cc:30:15 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '678d2db2-0536-4744-b65c-f0a5852f35e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3a8efc9e-7582-4b17-ab0e-b248e09932b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.020 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[36ebfc99-f9ee-43d3-a013-c186e3b345a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.027 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:30:15 10.100.0.4'], port_security=['fa:16:3e:cc:30:15 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '678d2db2-0536-4744-b65c-f0a5852f35e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3a8efc9e-7582-4b17-ab0e-b248e09932b3) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:36 compute-0 nova_compute[248510]: 2025-12-13 08:28:36.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.044 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[933f7393-67de-493e-8f2b-e4934d56b8e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706708, 'reachable_time': 41951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303394, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.048 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.048 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1c6c8d-d021-4df2-815a-454b6c712309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.049 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3a8efc9e-7582-4b17-ab0e-b248e09932b3 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis
Dec 13 08:28:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d85372fca\x2dab50\x2d48b6\x2d8c21\x2d507f630c205a.mount: Deactivated successfully.
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.050 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.051 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f0400c-4c85-45a7-b3b9-acc17d7b184e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.052 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3a8efc9e-7582-4b17-ab0e-b248e09932b3 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.053 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:28:36 compute-0 nova_compute[248510]: 2025-12-13 08:28:36.053 248514 DEBUG nova.network.neutron [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Successfully created port: 834fc672-a8af-4884-964c-481d0d8d318e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:28:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.054 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6b6cc9-abaf-4728-859a-df99dad728d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:36 compute-0 nova_compute[248510]: 2025-12-13 08:28:36.077 248514 DEBUG nova.network.neutron [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:36 compute-0 nova_compute[248510]: 2025-12-13 08:28:36.085 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:36 compute-0 nova_compute[248510]: 2025-12-13 08:28:36.457 248514 INFO nova.virt.libvirt.driver [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance shutdown successfully after 24 seconds.
Dec 13 08:28:36 compute-0 nova_compute[248510]: 2025-12-13 08:28:36.466 248514 INFO nova.virt.libvirt.driver [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance destroyed successfully.
Dec 13 08:28:36 compute-0 nova_compute[248510]: 2025-12-13 08:28:36.467 248514 DEBUG nova.objects.instance [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'numa_topology' on Instance uuid 678d2db2-0536-4744-b65c-f0a5852f35e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:36 compute-0 ceph-mon[76537]: pgmap v1928: 321 pgs: 321 active+clean; 229 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.7 MiB/s wr, 269 op/s
Dec 13 08:28:37 compute-0 nova_compute[248510]: 2025-12-13 08:28:37.209 248514 DEBUG nova.compute.manager [req-1d66d1e7-5903-40e1-8eb4-16cdf0708a2f req-83e7f35e-2035-4deb-8460-9317a10a0d4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-vif-unplugged-eca7f353-3478-46ea-a63f-617a11a8f7ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:37 compute-0 nova_compute[248510]: 2025-12-13 08:28:37.209 248514 DEBUG oslo_concurrency.lockutils [req-1d66d1e7-5903-40e1-8eb4-16cdf0708a2f req-83e7f35e-2035-4deb-8460-9317a10a0d4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:37 compute-0 nova_compute[248510]: 2025-12-13 08:28:37.209 248514 DEBUG oslo_concurrency.lockutils [req-1d66d1e7-5903-40e1-8eb4-16cdf0708a2f req-83e7f35e-2035-4deb-8460-9317a10a0d4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:37 compute-0 nova_compute[248510]: 2025-12-13 08:28:37.210 248514 DEBUG oslo_concurrency.lockutils [req-1d66d1e7-5903-40e1-8eb4-16cdf0708a2f req-83e7f35e-2035-4deb-8460-9317a10a0d4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:37 compute-0 nova_compute[248510]: 2025-12-13 08:28:37.210 248514 DEBUG nova.compute.manager [req-1d66d1e7-5903-40e1-8eb4-16cdf0708a2f req-83e7f35e-2035-4deb-8460-9317a10a0d4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] No waiting events found dispatching network-vif-unplugged-eca7f353-3478-46ea-a63f-617a11a8f7ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:37 compute-0 nova_compute[248510]: 2025-12-13 08:28:37.210 248514 DEBUG nova.compute.manager [req-1d66d1e7-5903-40e1-8eb4-16cdf0708a2f req-83e7f35e-2035-4deb-8460-9317a10a0d4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-vif-unplugged-eca7f353-3478-46ea-a63f-617a11a8f7ff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:28:37 compute-0 nova_compute[248510]: 2025-12-13 08:28:37.212 248514 INFO nova.compute.manager [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Took 3.48 seconds to deallocate network for instance.
Dec 13 08:28:37 compute-0 nova_compute[248510]: 2025-12-13 08:28:37.234 248514 DEBUG nova.compute.manager [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 229 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.7 MiB/s wr, 269 op/s
Dec 13 08:28:37 compute-0 nova_compute[248510]: 2025-12-13 08:28:37.686 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614502.606397, e6e0fdaf-f934-4e56-8e59-4c4475bacd26 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:37 compute-0 nova_compute[248510]: 2025-12-13 08:28:37.686 248514 INFO nova.compute.manager [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] VM Stopped (Lifecycle Event)
Dec 13 08:28:37 compute-0 nova_compute[248510]: 2025-12-13 08:28:37.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:28:37 compute-0 nova_compute[248510]: 2025-12-13 08:28:37.819 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.100 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614503.099627, 5d8c2900-0048-4631-bbc6-0122bce8f4f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.101 248514 INFO nova.compute.manager [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] VM Stopped (Lifecycle Event)
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.332 248514 DEBUG nova.compute.manager [None req-7f3dd03b-9e85-45f0-a271-c95b356dc98a - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.545 248514 DEBUG nova.compute.manager [None req-dcd39408-0442-4a8f-9748-e44028922a19 - - - - - -] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.642 248514 DEBUG nova.compute.manager [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received event network-vif-deleted-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.643 248514 DEBUG nova.compute.manager [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-vif-deleted-eca7f353-3478-46ea-a63f-617a11a8f7ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.644 248514 DEBUG nova.compute.manager [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received event network-vif-unplugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.644 248514 DEBUG oslo_concurrency.lockutils [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.644 248514 DEBUG oslo_concurrency.lockutils [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.645 248514 DEBUG oslo_concurrency.lockutils [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.645 248514 DEBUG nova.compute.manager [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] No waiting events found dispatching network-vif-unplugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.645 248514 WARNING nova.compute.manager [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received unexpected event network-vif-unplugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 for instance with vm_state active and task_state powering-off.
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.732 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:38 compute-0 nova_compute[248510]: 2025-12-13 08:28:38.733 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:38 compute-0 ceph-mon[76537]: pgmap v1929: 321 pgs: 321 active+clean; 229 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.7 MiB/s wr, 269 op/s
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.052 248514 DEBUG oslo_concurrency.lockutils [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 26.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.079 248514 DEBUG oslo_concurrency.processutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 248 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 2.2 MiB/s wr, 131 op/s
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.468 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.468 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.489 248514 DEBUG nova.network.neutron [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Successfully updated port: 834fc672-a8af-4884-964c-481d0d8d318e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.584 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "refresh_cache-7139a479-b2fe-4d64-8061-97fceda2e392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.584 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquired lock "refresh_cache-7139a479-b2fe-4d64-8061-97fceda2e392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.585 248514 DEBUG nova.network.neutron [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:28:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:39 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388726031' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.663 248514 DEBUG oslo_concurrency.processutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.674 248514 DEBUG nova.compute.provider_tree [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.677 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.816 248514 DEBUG nova.scheduler.client.report [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.900 248514 DEBUG nova.compute.manager [req-c3785b55-201e-46b8-aaa2-b96b72d78d58 req-c00f1dfe-2ddf-446d-8e2f-773a301c40bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.901 248514 DEBUG oslo_concurrency.lockutils [req-c3785b55-201e-46b8-aaa2-b96b72d78d58 req-c00f1dfe-2ddf-446d-8e2f-773a301c40bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.901 248514 DEBUG oslo_concurrency.lockutils [req-c3785b55-201e-46b8-aaa2-b96b72d78d58 req-c00f1dfe-2ddf-446d-8e2f-773a301c40bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.901 248514 DEBUG oslo_concurrency.lockutils [req-c3785b55-201e-46b8-aaa2-b96b72d78d58 req-c00f1dfe-2ddf-446d-8e2f-773a301c40bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.901 248514 DEBUG nova.compute.manager [req-c3785b55-201e-46b8-aaa2-b96b72d78d58 req-c00f1dfe-2ddf-446d-8e2f-773a301c40bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] No waiting events found dispatching network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.902 248514 WARNING nova.compute.manager [req-c3785b55-201e-46b8-aaa2-b96b72d78d58 req-c00f1dfe-2ddf-446d-8e2f-773a301c40bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received unexpected event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff for instance with vm_state deleted and task_state None.
Dec 13 08:28:39 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1388726031' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.935 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.949 248514 DEBUG nova.network.neutron [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.956 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.956 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.956 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.957 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.957 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.958 248514 INFO nova.compute.manager [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Terminating instance
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.958 248514 DEBUG nova.compute.manager [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:28:39 compute-0 nova_compute[248510]: 2025-12-13 08:28:39.985 248514 INFO nova.scheduler.client.report [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Deleted allocations for instance df25cd40-72b5-4e0f-90ec-8677c699d1d3
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.003 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.004 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.011 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.011 248514 INFO nova.compute.claims [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:28:40 compute-0 kernel: tapbdc94f2e-b1 (unregistering): left promiscuous mode
Dec 13 08:28:40 compute-0 NetworkManager[50376]: <info>  [1765614520.0227] device (tapbdc94f2e-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.023 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:40 compute-0 ovn_controller[148476]: 2025-12-13T08:28:40Z|00502|binding|INFO|Releasing lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 from this chassis (sb_readonly=0)
Dec 13 08:28:40 compute-0 ovn_controller[148476]: 2025-12-13T08:28:40Z|00503|binding|INFO|Setting lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 down in Southbound
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.033 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:40 compute-0 ovn_controller[148476]: 2025-12-13T08:28:40Z|00504|binding|INFO|Removing iface tapbdc94f2e-b1 ovn-installed in OVS
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.043 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:b2 10.100.0.10'], port_security=['fa:16:3e:d6:ce:b2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '98240df6-1cba-40e1-833c-24611270ed83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bdc94f2e-b14e-4e39-bea0-978ff56ff722) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.046 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bdc94f2e-b14e-4e39-bea0-978ff56ff722 in datapath 7576f079-0439-46aa-98af-04f80cd254ca unbound from our chassis
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.049 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7576f079-0439-46aa-98af-04f80cd254ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.050 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[45247abc-d667-4697-bfcf-7d03cbbc0f99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.050 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca namespace which is not needed anymore
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.055 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:28:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:28:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:28:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:28:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:28:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:28:40 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Dec 13 08:28:40 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000002f.scope: Consumed 16.415s CPU time.
Dec 13 08:28:40 compute-0 systemd-machined[210538]: Machine qemu-58-instance-0000002f terminated.
Dec 13 08:28:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:28:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Dec 13 08:28:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Dec 13 08:28:40 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.162 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:40 compute-0 neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca[297611]: [NOTICE]   (297615) : haproxy version is 2.8.14-c23fe91
Dec 13 08:28:40 compute-0 neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca[297611]: [NOTICE]   (297615) : path to executable is /usr/sbin/haproxy
Dec 13 08:28:40 compute-0 neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca[297611]: [WARNING]  (297615) : Exiting Master process...
Dec 13 08:28:40 compute-0 neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca[297611]: [ALERT]    (297615) : Current worker (297618) exited with code 143 (Terminated)
Dec 13 08:28:40 compute-0 neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca[297611]: [WARNING]  (297615) : All workers exited. Exiting... (0)
Dec 13 08:28:40 compute-0 systemd[1]: libpod-8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77.scope: Deactivated successfully.
Dec 13 08:28:40 compute-0 podman[303440]: 2025-12-13 08:28:40.196848309 +0000 UTC m=+0.056005743 container died 8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.220 248514 INFO nova.virt.libvirt.driver [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance destroyed successfully.
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.221 248514 DEBUG nova.objects.instance [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'resources' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77-userdata-shm.mount: Deactivated successfully.
Dec 13 08:28:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4cf732e89cfd948592bd93195ade27ca38a2f5cf5bd9d4f7b2379dcc373df76-merged.mount: Deactivated successfully.
Dec 13 08:28:40 compute-0 podman[303440]: 2025-12-13 08:28:40.246946445 +0000 UTC m=+0.106103879 container cleanup 8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.258 248514 DEBUG nova.virt.libvirt.vif [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-612802196',display_name='tempest-ListServerFiltersTestJSON-instance-612802196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-612802196',id=47,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-wpyl99bg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:28:08Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=98240df6-1cba-40e1-833c-24611270ed83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.259 248514 DEBUG nova.network.os_vif_util [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.260 248514 DEBUG nova.network.os_vif_util [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.260 248514 DEBUG os_vif [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.262 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.262 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdc94f2e-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.267 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:28:40 compute-0 systemd[1]: libpod-conmon-8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77.scope: Deactivated successfully.
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.271 248514 INFO os_vif [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1')
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.316 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:40 compute-0 podman[303478]: 2025-12-13 08:28:40.324311374 +0000 UTC m=+0.051834770 container remove 8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.330 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f552e5-22c3-4e80-97c6-a53446bba438]: (4, ('Sat Dec 13 08:28:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca (8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77)\n8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77\nSat Dec 13 08:28:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca (8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77)\n8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.333 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9c253c1c-6d2f-44e7-bb93-bdd4d6babaff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.334 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:40 compute-0 kernel: tap7576f079-00: left promiscuous mode
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.351 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.366 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.372 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[162d7d95-5f55-47b5-ad88-d066ffba25fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.390 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e2e853-b786-4cab-aaa7-cc0a3a47c48c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.392 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b58573af-eb94-4d79-9da1-6b5fe727234b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.413 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c59eebd-665f-43ea-ab4c-49cbeddc4de3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701777, 'reachable_time': 33628, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303516, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.416 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:28:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.416 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[25235a98-2b33-4838-a659-b7c3c8bf5c9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d7576f079\x2d0439\x2d46aa\x2d98af\x2d04f80cd254ca.mount: Deactivated successfully.
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.681 248514 INFO nova.virt.libvirt.driver [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Deleting instance files /var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83_del
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.682 248514 INFO nova.virt.libvirt.driver [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Deletion of /var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83_del complete
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.852 248514 INFO nova.compute.manager [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Took 0.89 seconds to destroy the instance on the hypervisor.
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.853 248514 DEBUG oslo.service.loopingcall [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.853 248514 DEBUG nova.compute.manager [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.854 248514 DEBUG nova.network.neutron [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:28:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3943427554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:40 compute-0 ceph-mon[76537]: pgmap v1930: 321 pgs: 321 active+clean; 248 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 2.2 MiB/s wr, 131 op/s
Dec 13 08:28:40 compute-0 ceph-mon[76537]: osdmap e216: 3 total, 3 up, 3 in
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.947 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:40 compute-0 nova_compute[248510]: 2025-12-13 08:28:40.953 248514 DEBUG nova.compute.provider_tree [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:28:41 compute-0 nova_compute[248510]: 2025-12-13 08:28:41.045 248514 DEBUG nova.scheduler.client.report [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:28:41 compute-0 nova_compute[248510]: 2025-12-13 08:28:41.261 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:41 compute-0 nova_compute[248510]: 2025-12-13 08:28:41.262 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:28:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 248 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 2.2 MiB/s wr, 131 op/s
Dec 13 08:28:41 compute-0 nova_compute[248510]: 2025-12-13 08:28:41.501 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:28:41 compute-0 nova_compute[248510]: 2025-12-13 08:28:41.502 248514 DEBUG nova.network.neutron [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:28:41 compute-0 nova_compute[248510]: 2025-12-13 08:28:41.703 248514 INFO nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:28:41 compute-0 nova_compute[248510]: 2025-12-13 08:28:41.890 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:28:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3943427554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:41 compute-0 ceph-mon[76537]: pgmap v1932: 321 pgs: 321 active+clean; 248 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 2.2 MiB/s wr, 131 op/s
Dec 13 08:28:41 compute-0 nova_compute[248510]: 2025-12-13 08:28:41.969 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614506.968725, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:41 compute-0 nova_compute[248510]: 2025-12-13 08:28:41.970 248514 INFO nova.compute.manager [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Stopped (Lifecycle Event)
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.014 248514 DEBUG nova.policy [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b988c7ac9354c59aac9a9f41f83c20f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52e1055963294dbdb16cd95b466cd4d9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.384 248514 DEBUG nova.compute.manager [req-c2868f5c-c42e-4607-80e0-b11c49ffd24c req-94dbea4a-a2c5-41b8-b1a7-ec9636f6b0af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.385 248514 DEBUG oslo_concurrency.lockutils [req-c2868f5c-c42e-4607-80e0-b11c49ffd24c req-94dbea4a-a2c5-41b8-b1a7-ec9636f6b0af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.385 248514 DEBUG oslo_concurrency.lockutils [req-c2868f5c-c42e-4607-80e0-b11c49ffd24c req-94dbea4a-a2c5-41b8-b1a7-ec9636f6b0af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.385 248514 DEBUG oslo_concurrency.lockutils [req-c2868f5c-c42e-4607-80e0-b11c49ffd24c req-94dbea4a-a2c5-41b8-b1a7-ec9636f6b0af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.385 248514 DEBUG nova.compute.manager [req-c2868f5c-c42e-4607-80e0-b11c49ffd24c req-94dbea4a-a2c5-41b8-b1a7-ec9636f6b0af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] No waiting events found dispatching network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.386 248514 WARNING nova.compute.manager [req-c2868f5c-c42e-4607-80e0-b11c49ffd24c req-94dbea4a-a2c5-41b8-b1a7-ec9636f6b0af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received unexpected event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 for instance with vm_state stopped and task_state None.
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.463 248514 DEBUG nova.compute.manager [None req-9721eb7b-73b0-436d-9f8a-153a03101012 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.567 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.568 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.569 248514 INFO nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Creating image(s)
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.593 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.619 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.642 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.647 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.716 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.716 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.717 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.717 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.717 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.719 248514 INFO nova.compute.manager [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Terminating instance
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.719 248514 DEBUG nova.compute.manager [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.727 248514 INFO nova.virt.libvirt.driver [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance destroyed successfully.
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.729 248514 DEBUG nova.objects.instance [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'resources' on Instance uuid 678d2db2-0536-4744-b65c-f0a5852f35e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.732 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.732 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.733 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.733 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.754 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.758 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.799 248514 DEBUG nova.network.neutron [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Updating instance_info_cache with network_info: [{"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.804 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.805 248514 DEBUG nova.virt.libvirt.vif [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1859844308',display_name='tempest-DeleteServersTestJSON-server-1859844308',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1859844308',id=52,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:28:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-vi6y6qcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:28:38Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=678d2db2-0536-4744-b65c-f0a5852f35e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.805 248514 DEBUG nova.network.os_vif_util [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.806 248514 DEBUG nova.network.os_vif_util [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.807 248514 DEBUG os_vif [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.809 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8efc9e-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.815 248514 INFO os_vif [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75')
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.832 248514 DEBUG nova.network.neutron [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.834 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Releasing lock "refresh_cache-7139a479-b2fe-4d64-8061-97fceda2e392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.835 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Instance network_info: |[{"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.837 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Start _get_guest_xml network_info=[{"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.842 248514 WARNING nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.853 248514 DEBUG nova.virt.libvirt.host [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.854 248514 DEBUG nova.virt.libvirt.host [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.859 248514 DEBUG nova.virt.libvirt.host [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.859 248514 DEBUG nova.virt.libvirt.host [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.860 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.860 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.861 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.861 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.862 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.862 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.862 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.862 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.863 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.863 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.863 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.864 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.871 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.930 248514 INFO nova.compute.manager [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Took 2.08 seconds to deallocate network for instance.
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.936 248514 DEBUG nova.compute.manager [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-changed-834fc672-a8af-4884-964c-481d0d8d318e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.937 248514 DEBUG nova.compute.manager [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Refreshing instance network info cache due to event network-changed-834fc672-a8af-4884-964c-481d0d8d318e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.938 248514 DEBUG oslo_concurrency.lockutils [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7139a479-b2fe-4d64-8061-97fceda2e392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.938 248514 DEBUG oslo_concurrency.lockutils [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7139a479-b2fe-4d64-8061-97fceda2e392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:28:42 compute-0 nova_compute[248510]: 2025-12-13 08:28:42.939 248514 DEBUG nova.network.neutron [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Refreshing network info cache for port 834fc672-a8af-4884-964c-481d0d8d318e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.013 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.014 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.112 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.195 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] resizing rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.229 248514 DEBUG oslo_concurrency.processutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.324 248514 DEBUG nova.objects.instance [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.334 248514 INFO nova.virt.libvirt.driver [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Deleting instance files /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0_del
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.335 248514 INFO nova.virt.libvirt.driver [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Deletion of /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0_del complete
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.355 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.355 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Ensure instance console log exists: /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.356 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.356 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.356 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.402 248514 INFO nova.compute.manager [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Took 0.68 seconds to destroy the instance on the hypervisor.
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.403 248514 DEBUG oslo.service.loopingcall [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.403 248514 DEBUG nova.compute.manager [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.403 248514 DEBUG nova.network.neutron [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:28:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 222 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 2.3 MiB/s wr, 117 op/s
Dec 13 08:28:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:28:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318679882' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.520 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3318679882' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.543 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.547 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700987716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.875 248514 DEBUG oslo_concurrency.processutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.883 248514 DEBUG nova.compute.provider_tree [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.907 248514 DEBUG nova.scheduler.client.report [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.945 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:43 compute-0 nova_compute[248510]: 2025-12-13 08:28:43.982 248514 INFO nova.scheduler.client.report [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Deleted allocations for instance 98240df6-1cba-40e1-833c-24611270ed83
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.080 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:28:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3325959890' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.184 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.186 248514 DEBUG nova.virt.libvirt.vif [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-771461744',display_name='tempest-ServerDiskConfigTestJSON-server-771461744',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-771461744',id=54,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-lisf0ibr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:33Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=7139a479-b2fe-4d64-8061-97fceda2e392,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.187 248514 DEBUG nova.network.os_vif_util [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.188 248514 DEBUG nova.network.os_vif_util [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.189 248514 DEBUG nova.objects.instance [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7139a479-b2fe-4d64-8061-97fceda2e392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.203 248514 DEBUG nova.network.neutron [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Successfully created port: 3aadc575-dc9f-4823-82c7-112e9b9832fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.213 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:28:44 compute-0 nova_compute[248510]:   <uuid>7139a479-b2fe-4d64-8061-97fceda2e392</uuid>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   <name>instance-00000036</name>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-771461744</nova:name>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:28:42</nova:creationTime>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:28:44 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:28:44 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:28:44 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:28:44 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:28:44 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:28:44 compute-0 nova_compute[248510]:         <nova:user uuid="a5bc32e49dbd4372a006913090b9ef0f">tempest-ServerDiskConfigTestJSON-167971983-project-member</nova:user>
Dec 13 08:28:44 compute-0 nova_compute[248510]:         <nova:project uuid="9aea752cb9b648a7aa9b3f634ced797e">tempest-ServerDiskConfigTestJSON-167971983</nova:project>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:28:44 compute-0 nova_compute[248510]:         <nova:port uuid="834fc672-a8af-4884-964c-481d0d8d318e">
Dec 13 08:28:44 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <system>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <entry name="serial">7139a479-b2fe-4d64-8061-97fceda2e392</entry>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <entry name="uuid">7139a479-b2fe-4d64-8061-97fceda2e392</entry>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     </system>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   <os>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   </os>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   <features>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   </features>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7139a479-b2fe-4d64-8061-97fceda2e392_disk">
Dec 13 08:28:44 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       </source>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:28:44 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7139a479-b2fe-4d64-8061-97fceda2e392_disk.config">
Dec 13 08:28:44 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       </source>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:28:44 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:b6:94:dc"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <target dev="tap834fc672-a8"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/console.log" append="off"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <video>
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     </video>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:28:44 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:28:44 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:28:44 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:28:44 compute-0 nova_compute[248510]: </domain>
Dec 13 08:28:44 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.215 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Preparing to wait for external event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.216 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.216 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.217 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.218 248514 DEBUG nova.virt.libvirt.vif [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-771461744',display_name='tempest-ServerDiskConfigTestJSON-server-771461744',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-771461744',id=54,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-lisf0ibr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:33Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=7139a479-b2fe-4d64-8061-97fceda2e392,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.218 248514 DEBUG nova.network.os_vif_util [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.219 248514 DEBUG nova.network.os_vif_util [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.219 248514 DEBUG os_vif [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.220 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.221 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.221 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.226 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap834fc672-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.227 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap834fc672-a8, col_values=(('external_ids', {'iface-id': '834fc672-a8af-4884-964c-481d0d8d318e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:94:dc', 'vm-uuid': '7139a479-b2fe-4d64-8061-97fceda2e392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:44 compute-0 NetworkManager[50376]: <info>  [1765614524.2303] manager: (tap834fc672-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.231 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.235 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.236 248514 INFO os_vif [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8')
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.311 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.312 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.312 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No VIF found with MAC fa:16:3e:b6:94:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.313 248514 INFO nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Using config drive
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.335 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:44 compute-0 ceph-mon[76537]: pgmap v1933: 321 pgs: 321 active+clean; 222 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 2.3 MiB/s wr, 117 op/s
Dec 13 08:28:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2700987716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3325959890' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.689 248514 DEBUG nova.compute.manager [req-e1011515-e5c8-4f3b-9ec9-e5919633a83b req-1cba1da7-1394-46ad-a76d-de0c8a012b48 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-deleted-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.817 248514 DEBUG nova.network.neutron [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.854 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.856 248514 INFO nova.compute.manager [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Took 1.45 seconds to deallocate network for instance.
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.913 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:44 compute-0 nova_compute[248510]: 2025-12-13 08:28:44.914 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.010 248514 INFO nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Creating config drive at /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/disk.config
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.016 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeee0okbi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.061 248514 DEBUG oslo_concurrency.processutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.101 248514 DEBUG nova.compute.manager [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.101 248514 DEBUG oslo_concurrency.lockutils [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.102 248514 DEBUG oslo_concurrency.lockutils [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.102 248514 DEBUG oslo_concurrency.lockutils [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.102 248514 DEBUG nova.compute.manager [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.102 248514 WARNING nova.compute.manager [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received unexpected event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with vm_state deleted and task_state None.
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.103 248514 DEBUG nova.compute.manager [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received event network-vif-deleted-3a8efc9e-7582-4b17-ab0e-b248e09932b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.104 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.161 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeee0okbi" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.187 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.193 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/disk.config 7139a479-b2fe-4d64-8061-97fceda2e392_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 157 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Dec 13 08:28:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236063891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.691 248514 DEBUG oslo_concurrency.processutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.694 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/disk.config 7139a479-b2fe-4d64-8061-97fceda2e392_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.695 248514 INFO nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Deleting local config drive /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/disk.config because it was imported into RBD.
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.698 248514 DEBUG nova.compute.provider_tree [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.707 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614510.7056813, 41602b99-e7f2-450c-885e-51d07a1236d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.707 248514 INFO nova.compute.manager [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] VM Stopped (Lifecycle Event)
Dec 13 08:28:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/236063891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.725 248514 DEBUG nova.scheduler.client.report [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.731 248514 DEBUG nova.compute.manager [None req-199dca5c-b7be-4aeb-bf78-7b5caacdee3d - - - - - -] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.748 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:45 compute-0 kernel: tap834fc672-a8: entered promiscuous mode
Dec 13 08:28:45 compute-0 NetworkManager[50376]: <info>  [1765614525.7642] manager: (tap834fc672-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Dec 13 08:28:45 compute-0 ovn_controller[148476]: 2025-12-13T08:28:45Z|00505|binding|INFO|Claiming lport 834fc672-a8af-4884-964c-481d0d8d318e for this chassis.
Dec 13 08:28:45 compute-0 ovn_controller[148476]: 2025-12-13T08:28:45Z|00506|binding|INFO|834fc672-a8af-4884-964c-481d0d8d318e: Claiming fa:16:3e:b6:94:dc 10.100.0.10
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.778 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:94:dc 10.100.0.10'], port_security=['fa:16:3e:b6:94:dc 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7139a479-b2fe-4d64-8061-97fceda2e392', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=834fc672-a8af-4884-964c-481d0d8d318e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.779 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 834fc672-a8af-4884-964c-481d0d8d318e in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 bound to our chassis
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.781 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:28:45 compute-0 ovn_controller[148476]: 2025-12-13T08:28:45Z|00507|binding|INFO|Setting lport 834fc672-a8af-4884-964c-481d0d8d318e up in Southbound
Dec 13 08:28:45 compute-0 ovn_controller[148476]: 2025-12-13T08:28:45Z|00508|binding|INFO|Setting lport 834fc672-a8af-4884-964c-481d0d8d318e ovn-installed in OVS
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.787 248514 INFO nova.scheduler.client.report [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Deleted allocations for instance 678d2db2-0536-4744-b65c-f0a5852f35e0
Dec 13 08:28:45 compute-0 systemd-udevd[303904]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.799 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f3db8eae-1ade-44a2-b962-15501e6f8f67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.800 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c63049d-61 in ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:28:45 compute-0 systemd-machined[210538]: New machine qemu-62-instance-00000036.
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.804 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c63049d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.804 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c2de6cd9-c235-409c-93f6-43064061597b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.805 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e87ffb71-4e86-4c30-90a8-3ba0a4e808c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:45 compute-0 NetworkManager[50376]: <info>  [1765614525.8154] device (tap834fc672-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:28:45 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000036.
Dec 13 08:28:45 compute-0 NetworkManager[50376]: <info>  [1765614525.8164] device (tap834fc672-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.819 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e232d5e4-3762-4e88-a52e-635b5755a3f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.843 248514 DEBUG nova.network.neutron [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Updated VIF entry in instance network info cache for port 834fc672-a8af-4884-964c-481d0d8d318e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.844 248514 DEBUG nova.network.neutron [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Updating instance_info_cache with network_info: [{"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.850 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[93151451-c564-48ae-80b8-523b47907040]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.891 248514 DEBUG oslo_concurrency.lockutils [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7139a479-b2fe-4d64-8061-97fceda2e392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.892 248514 DEBUG nova.compute.manager [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-unplugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.892 248514 DEBUG oslo_concurrency.lockutils [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.893 248514 DEBUG oslo_concurrency.lockutils [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.893 248514 DEBUG oslo_concurrency.lockutils [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.893 248514 DEBUG nova.compute.manager [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-unplugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.893 248514 DEBUG nova.compute.manager [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-unplugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.903 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc4fda9-cb19-4038-b47a-9aa74172c2ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:45 compute-0 systemd-udevd[303908]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.912 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2da4c41b-d7dc-4c31-bc65-5c2bea9f45e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:45 compute-0 NetworkManager[50376]: <info>  [1765614525.9137] manager: (tap6c63049d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Dec 13 08:28:45 compute-0 nova_compute[248510]: 2025-12-13 08:28:45.941 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.951 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[945fd282-d1a8-470e-9276-0c3e85aee9d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.956 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7a2bd5-a92c-4da7-a5df-7dca03234609]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:45 compute-0 NetworkManager[50376]: <info>  [1765614525.9809] device (tap6c63049d-60): carrier: link connected
Dec 13 08:28:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.987 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2227333f-4ddf-4b95-b824-59acd872080a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.009 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d646c27a-7875-4b33-90c0-85172fe95d2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710319, 'reachable_time': 19836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303937, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.031 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7322d221-b06e-4e26-a2eb-63ac69ad9ea9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:c2f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 710319, 'tstamp': 710319}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303938, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.054 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75913810-8ee8-4484-8bfc-40fc1c3caa48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710319, 'reachable_time': 19836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303939, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db88b816-2be5-4c2b-9e67-5cfc22978a7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.169 248514 DEBUG nova.network.neutron [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Successfully updated port: 3aadc575-dc9f-4823-82c7-112e9b9832fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.171 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35605293-1b03-49c0-9a8f-761c0c8577c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.174 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.174 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.175 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c63049d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:46 compute-0 kernel: tap6c63049d-60: entered promiscuous mode
Dec 13 08:28:46 compute-0 NetworkManager[50376]: <info>  [1765614526.1798] manager: (tap6c63049d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.192 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c63049d-60, col_values=(('external_ids', {'iface-id': 'b410790c-12b7-4a29-87e5-13a29af9c319'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:46 compute-0 ovn_controller[148476]: 2025-12-13T08:28:46Z|00509|binding|INFO|Releasing lport b410790c-12b7-4a29-87e5-13a29af9c319 from this chassis (sb_readonly=0)
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.195 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.198 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "refresh_cache-9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.199 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquired lock "refresh_cache-9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.199 248514 DEBUG nova.network.neutron [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.390 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.392 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.391 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5ebf3e-1d67-48c2-ad6c-6aaaf0ddc828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.393 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:28:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.394 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'env', 'PROCESS_TAG=haproxy-6c63049d-63e9-47af-99e2-ce1403a42891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c63049d-63e9-47af-99e2-ce1403a42891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.534 248514 DEBUG nova.network.neutron [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.592 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614526.5922618, 7139a479-b2fe-4d64-8061-97fceda2e392 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.593 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] VM Started (Lifecycle Event)
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.623 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.628 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614526.592531, 7139a479-b2fe-4d64-8061-97fceda2e392 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.629 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] VM Paused (Lifecycle Event)
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.655 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.660 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.692 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:28:46 compute-0 ceph-mon[76537]: pgmap v1934: 321 pgs: 321 active+clean; 157 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Dec 13 08:28:46 compute-0 nova_compute[248510]: 2025-12-13 08:28:46.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:28:46 compute-0 podman[304013]: 2025-12-13 08:28:46.791225456 +0000 UTC m=+0.053383289 container create 6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Dec 13 08:28:46 compute-0 systemd[1]: Started libpod-conmon-6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5.scope.
Dec 13 08:28:46 compute-0 podman[304013]: 2025-12-13 08:28:46.763270316 +0000 UTC m=+0.025428139 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:28:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f27d429928f4f51cc8b3e9fb422bd397a12e0d5e7d42eff0f2f2f804535b5b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:46 compute-0 podman[304013]: 2025-12-13 08:28:46.884658391 +0000 UTC m=+0.146816234 container init 6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Dec 13 08:28:46 compute-0 podman[304013]: 2025-12-13 08:28:46.890414583 +0000 UTC m=+0.152572386 container start 6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:28:46 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [NOTICE]   (304032) : New worker (304034) forked
Dec 13 08:28:46 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [NOTICE]   (304032) : Loading success.
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.173 248514 DEBUG nova.compute.manager [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-changed-3aadc575-dc9f-4823-82c7-112e9b9832fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.173 248514 DEBUG nova.compute.manager [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Refreshing instance network info cache due to event network-changed-3aadc575-dc9f-4823-82c7-112e9b9832fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.174 248514 DEBUG oslo_concurrency.lockutils [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.449 248514 DEBUG nova.compute.manager [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.450 248514 DEBUG oslo_concurrency.lockutils [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.450 248514 DEBUG oslo_concurrency.lockutils [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.450 248514 DEBUG oslo_concurrency.lockutils [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.450 248514 DEBUG nova.compute.manager [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Processing event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.450 248514 DEBUG nova.compute.manager [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.450 248514 DEBUG oslo_concurrency.lockutils [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.451 248514 DEBUG oslo_concurrency.lockutils [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.451 248514 DEBUG oslo_concurrency.lockutils [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.451 248514 DEBUG nova.compute.manager [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] No waiting events found dispatching network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.451 248514 WARNING nova.compute.manager [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received unexpected event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e for instance with vm_state building and task_state spawning.
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.452 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.460 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:28:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 157 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.462 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614527.4589188, 7139a479-b2fe-4d64-8061-97fceda2e392 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.462 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] VM Resumed (Lifecycle Event)
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.468 248514 INFO nova.virt.libvirt.driver [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Instance spawned successfully.
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.469 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.516 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.522 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.526 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.526 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.527 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.527 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.528 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.528 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.598 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.708 248514 INFO nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Took 13.49 seconds to spawn the instance on the hypervisor.
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.709 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.780 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614512.7797358, df25cd40-72b5-4e0f-90ec-8677c699d1d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.781 248514 INFO nova.compute.manager [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] VM Stopped (Lifecycle Event)
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.828 248514 DEBUG nova.compute.manager [None req-ffe8b032-1df9-4024-b4bb-4c0685a80df1 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.836 248514 INFO nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Took 16.84 seconds to build instance.
Dec 13 08:28:47 compute-0 nova_compute[248510]: 2025-12-13 08:28:47.867 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:48 compute-0 ceph-mon[76537]: pgmap v1935: 321 pgs: 321 active+clean; 157 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Dec 13 08:28:48 compute-0 podman[304045]: 2025-12-13 08:28:48.973003288 +0000 UTC m=+0.053963502 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 13 08:28:48 compute-0 podman[304044]: 2025-12-13 08:28:48.982771229 +0000 UTC m=+0.068600973 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:28:49 compute-0 podman[304043]: 2025-12-13 08:28:49.00915988 +0000 UTC m=+0.096894631 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:28:49 compute-0 nova_compute[248510]: 2025-12-13 08:28:49.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Dec 13 08:28:49 compute-0 nova_compute[248510]: 2025-12-13 08:28:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:28:49 compute-0 nova_compute[248510]: 2025-12-13 08:28:49.852 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:49 compute-0 nova_compute[248510]: 2025-12-13 08:28:49.853 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:49 compute-0 nova_compute[248510]: 2025-12-13 08:28:49.854 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:49 compute-0 nova_compute[248510]: 2025-12-13 08:28:49.854 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:28:49 compute-0 nova_compute[248510]: 2025-12-13 08:28:49.855 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.059 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:28:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2795236163' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.457 248514 DEBUG nova.network.neutron [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Updating instance_info_cache with network_info: [{"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.486 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.506 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Releasing lock "refresh_cache-9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.507 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Instance network_info: |[{"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.508 248514 DEBUG oslo_concurrency.lockutils [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.508 248514 DEBUG nova.network.neutron [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Refreshing network info cache for port 3aadc575-dc9f-4823-82c7-112e9b9832fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.511 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Start _get_guest_xml network_info=[{"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.517 248514 WARNING nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.521 248514 DEBUG nova.virt.libvirt.host [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.523 248514 DEBUG nova.virt.libvirt.host [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.528 248514 DEBUG nova.virt.libvirt.host [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.529 248514 DEBUG nova.virt.libvirt.host [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.530 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.530 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.531 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.531 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.532 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.532 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.532 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.533 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.533 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.533 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.534 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.534 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.539 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.655 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.655 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:28:50 compute-0 ceph-mon[76537]: pgmap v1936: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Dec 13 08:28:50 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2795236163' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.835 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.837 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3966MB free_disk=59.94623995665461GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.837 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.838 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.910 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614515.909534, 678d2db2-0536-4744-b65c-f0a5852f35e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.911 248514 INFO nova.compute.manager [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] VM Stopped (Lifecycle Event)
Dec 13 08:28:50 compute-0 nova_compute[248510]: 2025-12-13 08:28:50.970 248514 DEBUG nova.compute.manager [None req-476bc214-621f-407c-9178-97ec3640072c - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.052 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 7139a479-b2fe-4d64-8061-97fceda2e392 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.053 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.053 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.054 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.133 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:28:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555582104' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.220 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.242 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.247 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 157 op/s
Dec 13 08:28:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2735186484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.746 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.754 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.782 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:28:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:28:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/521739913' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.847 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.848 248514 DEBUG nova.virt.libvirt.vif [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1587589866',display_name='tempest-ImagesTestJSON-server-1587589866',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1587589866',id=55,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-uay75zc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:41Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=9b45ce75-4bd3-4cc0-a772-2474ffc2cd52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.849 248514 DEBUG nova.network.os_vif_util [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.850 248514 DEBUG nova.network.os_vif_util [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.851 248514 DEBUG nova.objects.instance [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.854 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.855 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.879 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:28:51 compute-0 nova_compute[248510]:   <uuid>9b45ce75-4bd3-4cc0-a772-2474ffc2cd52</uuid>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   <name>instance-00000037</name>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <nova:name>tempest-ImagesTestJSON-server-1587589866</nova:name>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:28:50</nova:creationTime>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:28:51 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:28:51 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:28:51 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:28:51 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:28:51 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:28:51 compute-0 nova_compute[248510]:         <nova:user uuid="3b988c7ac9354c59aac9a9f41f83c20f">tempest-ImagesTestJSON-1234382421-project-member</nova:user>
Dec 13 08:28:51 compute-0 nova_compute[248510]:         <nova:project uuid="52e1055963294dbdb16cd95b466cd4d9">tempest-ImagesTestJSON-1234382421</nova:project>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:28:51 compute-0 nova_compute[248510]:         <nova:port uuid="3aadc575-dc9f-4823-82c7-112e9b9832fe">
Dec 13 08:28:51 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <system>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <entry name="serial">9b45ce75-4bd3-4cc0-a772-2474ffc2cd52</entry>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <entry name="uuid">9b45ce75-4bd3-4cc0-a772-2474ffc2cd52</entry>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     </system>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   <os>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   </os>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   <features>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   </features>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk">
Dec 13 08:28:51 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       </source>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:28:51 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk.config">
Dec 13 08:28:51 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       </source>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:28:51 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:dd:f5:51"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <target dev="tap3aadc575-dc"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/console.log" append="off"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <video>
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     </video>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:28:51 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:28:51 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:28:51 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:28:51 compute-0 nova_compute[248510]: </domain>
Dec 13 08:28:51 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.885 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Preparing to wait for external event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.886 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.886 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.886 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.887 248514 DEBUG nova.virt.libvirt.vif [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1587589866',display_name='tempest-ImagesTestJSON-server-1587589866',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1587589866',id=55,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-uay75zc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:41Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=9b45ce75-4bd3-4cc0-a772-2474ffc2cd52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.887 248514 DEBUG nova.network.os_vif_util [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.888 248514 DEBUG nova.network.os_vif_util [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.889 248514 DEBUG os_vif [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.890 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.890 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.894 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3aadc575-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.894 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3aadc575-dc, col_values=(('external_ids', {'iface-id': '3aadc575-dc9f-4823-82c7-112e9b9832fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:f5:51', 'vm-uuid': '9b45ce75-4bd3-4cc0-a772-2474ffc2cd52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:51 compute-0 NetworkManager[50376]: <info>  [1765614531.8973] manager: (tap3aadc575-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:51 compute-0 nova_compute[248510]: 2025-12-13 08:28:51.905 248514 INFO os_vif [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc')
Dec 13 08:28:52 compute-0 nova_compute[248510]: 2025-12-13 08:28:52.234 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:28:52 compute-0 nova_compute[248510]: 2025-12-13 08:28:52.235 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:28:52 compute-0 nova_compute[248510]: 2025-12-13 08:28:52.235 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No VIF found with MAC fa:16:3e:dd:f5:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:28:52 compute-0 nova_compute[248510]: 2025-12-13 08:28:52.236 248514 INFO nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Using config drive
Dec 13 08:28:52 compute-0 nova_compute[248510]: 2025-12-13 08:28:52.258 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1555582104' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2735186484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/521739913' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:28:52 compute-0 nova_compute[248510]: 2025-12-13 08:28:52.854 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:28:52 compute-0 nova_compute[248510]: 2025-12-13 08:28:52.855 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:28:52 compute-0 nova_compute[248510]: 2025-12-13 08:28:52.856 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:28:52 compute-0 nova_compute[248510]: 2025-12-13 08:28:52.856 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.043 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.043 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.076 248514 INFO nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Creating config drive at /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/disk.config
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.082 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcv6e4ei_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.132 248514 DEBUG nova.network.neutron [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Updated VIF entry in instance network info cache for port 3aadc575-dc9f-4823-82c7-112e9b9832fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.134 248514 DEBUG nova.network.neutron [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Updating instance_info_cache with network_info: [{"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.137 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.180 248514 DEBUG oslo_concurrency.lockutils [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.238 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcv6e4ei_" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.263 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.271 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/disk.config 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:53 compute-0 ceph-mon[76537]: pgmap v1937: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 157 op/s
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.342 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.343 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.353 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.354 248514 INFO nova.compute.claims [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:28:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.587 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/disk.config 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.588 248514 INFO nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Deleting local config drive /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/disk.config because it was imported into RBD.
Dec 13 08:28:53 compute-0 NetworkManager[50376]: <info>  [1765614533.6433] manager: (tap3aadc575-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Dec 13 08:28:53 compute-0 kernel: tap3aadc575-dc: entered promiscuous mode
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.648 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:53 compute-0 ovn_controller[148476]: 2025-12-13T08:28:53Z|00510|binding|INFO|Claiming lport 3aadc575-dc9f-4823-82c7-112e9b9832fe for this chassis.
Dec 13 08:28:53 compute-0 ovn_controller[148476]: 2025-12-13T08:28:53Z|00511|binding|INFO|3aadc575-dc9f-4823-82c7-112e9b9832fe: Claiming fa:16:3e:dd:f5:51 10.100.0.11
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.656 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:53 compute-0 systemd-udevd[304280]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:28:53 compute-0 NetworkManager[50376]: <info>  [1765614533.7092] device (tap3aadc575-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:28:53 compute-0 NetworkManager[50376]: <info>  [1765614533.7104] device (tap3aadc575-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:53 compute-0 ovn_controller[148476]: 2025-12-13T08:28:53Z|00512|binding|INFO|Setting lport 3aadc575-dc9f-4823-82c7-112e9b9832fe ovn-installed in OVS
Dec 13 08:28:53 compute-0 nova_compute[248510]: 2025-12-13 08:28:53.762 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:53 compute-0 systemd-machined[210538]: New machine qemu-63-instance-00000037.
Dec 13 08:28:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.933 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:f5:51 10.100.0.11'], port_security=['fa:16:3e:dd:f5:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9b45ce75-4bd3-4cc0-a772-2474ffc2cd52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3aadc575-dc9f-4823-82c7-112e9b9832fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:28:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.935 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3aadc575-dc9f-4823-82c7-112e9b9832fe in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc bound to our chassis
Dec 13 08:28:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.937 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:28:53 compute-0 ovn_controller[148476]: 2025-12-13T08:28:53Z|00513|binding|INFO|Setting lport 3aadc575-dc9f-4823-82c7-112e9b9832fe up in Southbound
Dec 13 08:28:53 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000037.
Dec 13 08:28:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.953 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[317629c0-bf64-4d47-aaf1-f2ef69deaed0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.954 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87bd91d0-e1 in ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:28:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.960 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87bd91d0-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:28:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.960 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[418409c3-35aa-471a-920c-d4d192c416b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.961 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[17eef8b4-a7b9-4118-82fa-2ab99926b46e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.974 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[6a02623c-023d-4b11-bc1f-7fd93b8ce468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.991 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dc69cb0a-3249-4d6f-831f-a6d9f71e752f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.040 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ec2d25-7761-4cb8-8183-d45ccf0b1cda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:54 compute-0 NetworkManager[50376]: <info>  [1765614534.0473] manager: (tap87bd91d0-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/227)
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.046 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff38bb8-7f3b-48ee-8f57-0bfad566f957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.094 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0570a7-b3a3-4c99-ae6c-24919c8dff0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.097 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2f239d80-51ad-4648-98ce-77b6fb16cf66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:54 compute-0 NetworkManager[50376]: <info>  [1765614534.1249] device (tap87bd91d0-e0): carrier: link connected
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.135 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aea5b0dc-ded9-443d-973d-7a6a09f186eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.158 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fab31c47-1601-44d6-ba95-39a63867fb9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711134, 'reachable_time': 42720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304316, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.181 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[476383cf-40f2-4c10-ad4a-dbd2a057a4d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:ec7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711134, 'tstamp': 711134}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304317, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.207 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[56e7c471-8373-465d-b7da-203cc53b60e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711134, 'reachable_time': 42720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304318, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.247 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53da2727-94fb-484c-8aa1-8520c2dbed43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.326 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2d015f13-3d2c-4d11-bfe1-ececdcbcef67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.328 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.328 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.329 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87bd91d0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:54 compute-0 nova_compute[248510]: 2025-12-13 08:28:54.363 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:54 compute-0 kernel: tap87bd91d0-e0: entered promiscuous mode
Dec 13 08:28:54 compute-0 NetworkManager[50376]: <info>  [1765614534.3654] manager: (tap87bd91d0-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Dec 13 08:28:54 compute-0 nova_compute[248510]: 2025-12-13 08:28:54.366 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.367 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87bd91d0-e0, col_values=(('external_ids', {'iface-id': '3e59c36d-12c7-4d92-aa2f-8b6a795f3f98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:28:54 compute-0 nova_compute[248510]: 2025-12-13 08:28:54.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:54 compute-0 ovn_controller[148476]: 2025-12-13T08:28:54Z|00514|binding|INFO|Releasing lport 3e59c36d-12c7-4d92-aa2f-8b6a795f3f98 from this chassis (sb_readonly=0)
Dec 13 08:28:54 compute-0 nova_compute[248510]: 2025-12-13 08:28:54.385 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.387 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.388 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[de6e625b-caa3-4f1b-bbd5-f42eb2d7b0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.389 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:28:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.389 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'env', 'PROCESS_TAG=haproxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87bd91d0-eead-49b6-8f92-f8d0dba555dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:28:54 compute-0 ceph-mon[76537]: pgmap v1938: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Dec 13 08:28:54 compute-0 nova_compute[248510]: 2025-12-13 08:28:54.502 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:54 compute-0 nova_compute[248510]: 2025-12-13 08:28:54.762 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614534.7623432, 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:54 compute-0 nova_compute[248510]: 2025-12-13 08:28:54.763 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] VM Started (Lifecycle Event)
Dec 13 08:28:54 compute-0 podman[304411]: 2025-12-13 08:28:54.850949148 +0000 UTC m=+0.083140393 container create 2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:28:54 compute-0 podman[304411]: 2025-12-13 08:28:54.790505116 +0000 UTC m=+0.022696391 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:28:54 compute-0 systemd[1]: Started libpod-conmon-2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01.scope.
Dec 13 08:28:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d06258ddb97e0e16af3f5236f13273e2b5d49c570ee6de133888013ff7c7f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:28:54 compute-0 podman[304411]: 2025-12-13 08:28:54.954748419 +0000 UTC m=+0.186939704 container init 2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 08:28:54 compute-0 podman[304411]: 2025-12-13 08:28:54.961875725 +0000 UTC m=+0.194066980 container start 2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 08:28:54 compute-0 nova_compute[248510]: 2025-12-13 08:28:54.983 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:54 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [NOTICE]   (304431) : New worker (304433) forked
Dec 13 08:28:54 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [NOTICE]   (304431) : Loading success.
Dec 13 08:28:54 compute-0 nova_compute[248510]: 2025-12-13 08:28:54.991 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614534.7624454, 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:54 compute-0 nova_compute[248510]: 2025-12-13 08:28:54.991 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] VM Paused (Lifecycle Event)
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.061 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.091 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:28:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/231925796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.096 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.122 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.128 248514 DEBUG nova.compute.provider_tree [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.147 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:28:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.199 248514 DEBUG nova.scheduler.client.report [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.215 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614520.214119, 98240df6-1cba-40e1-833c-24611270ed83 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.215 248514 INFO nova.compute.manager [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] VM Stopped (Lifecycle Event)
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.302 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.303 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.310 248514 DEBUG nova.compute.manager [None req-3fd42ab4-3f83-442e-8914-787ca79cd5da - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.349 248514 DEBUG nova.compute.manager [req-c9b8fd99-8a0e-4adb-8951-985ad2855185 req-41b94a41-27ab-4b12-b97d-4bbf72df139e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.352 248514 DEBUG oslo_concurrency.lockutils [req-c9b8fd99-8a0e-4adb-8951-985ad2855185 req-41b94a41-27ab-4b12-b97d-4bbf72df139e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.352 248514 DEBUG oslo_concurrency.lockutils [req-c9b8fd99-8a0e-4adb-8951-985ad2855185 req-41b94a41-27ab-4b12-b97d-4bbf72df139e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.353 248514 DEBUG oslo_concurrency.lockutils [req-c9b8fd99-8a0e-4adb-8951-985ad2855185 req-41b94a41-27ab-4b12-b97d-4bbf72df139e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.353 248514 DEBUG nova.compute.manager [req-c9b8fd99-8a0e-4adb-8951-985ad2855185 req-41b94a41-27ab-4b12-b97d-4bbf72df139e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Processing event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.354 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.368 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614535.3664398, 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.379 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] VM Resumed (Lifecycle Event)
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.382 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.389 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Instance spawned successfully.
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.389 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:28:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:55.410 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:55.411 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:28:55.412 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.427 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.427 248514 DEBUG nova.network.neutron [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.459 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.464 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 131 op/s
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.465 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.466 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.466 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.467 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.467 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.472 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.523 248514 INFO nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.537 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:28:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/231925796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.618 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.635 248514 INFO nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Took 13.07 seconds to spawn the instance on the hypervisor.
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.636 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:55 compute-0 nova_compute[248510]: 2025-12-13 08:28:55.735 248514 DEBUG nova.policy [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b6f7967ce16c45d394188c1302b02907', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.005 248514 INFO nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Took 16.05 seconds to build instance.
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.036 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.038 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.039 248514 INFO nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Creating image(s)
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.064 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.092 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.121 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.125 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.169 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.217 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.218 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.219 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.219 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.251 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.256 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 917c2aaf-7701-4198-802a-0bfc5753885a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.589 248514 DEBUG nova.network.neutron [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Successfully created port: a795a3d9-1388-4e72-8bfd-271816a45466 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:28:56 compute-0 ceph-mon[76537]: pgmap v1939: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 131 op/s
Dec 13 08:28:56 compute-0 nova_compute[248510]: 2025-12-13 08:28:56.897 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:28:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 585 KiB/s wr, 99 op/s
Dec 13 08:28:57 compute-0 nova_compute[248510]: 2025-12-13 08:28:57.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.542 248514 DEBUG nova.compute.manager [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:28:58 compute-0 ceph-mon[76537]: pgmap v1940: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 585 KiB/s wr, 99 op/s
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.560 248514 DEBUG nova.compute.manager [req-0c2637eb-6382-408e-87f5-dce54265093d req-a49420e7-7e05-4aa3-9716-46745e5cd417 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.561 248514 DEBUG oslo_concurrency.lockutils [req-0c2637eb-6382-408e-87f5-dce54265093d req-a49420e7-7e05-4aa3-9716-46745e5cd417 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.561 248514 DEBUG oslo_concurrency.lockutils [req-0c2637eb-6382-408e-87f5-dce54265093d req-a49420e7-7e05-4aa3-9716-46745e5cd417 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.562 248514 DEBUG oslo_concurrency.lockutils [req-0c2637eb-6382-408e-87f5-dce54265093d req-a49420e7-7e05-4aa3-9716-46745e5cd417 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.562 248514 DEBUG nova.compute.manager [req-0c2637eb-6382-408e-87f5-dce54265093d req-a49420e7-7e05-4aa3-9716-46745e5cd417 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] No waiting events found dispatching network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.563 248514 WARNING nova.compute.manager [req-0c2637eb-6382-408e-87f5-dce54265093d req-a49420e7-7e05-4aa3-9716-46745e5cd417 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received unexpected event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe for instance with vm_state active and task_state image_snapshot.
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.564 248514 DEBUG nova.network.neutron [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Successfully updated port: a795a3d9-1388-4e72-8bfd-271816a45466 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.658 248514 INFO nova.compute.manager [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] instance snapshotting
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.682 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "refresh_cache-917c2aaf-7701-4198-802a-0bfc5753885a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.683 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquired lock "refresh_cache-917c2aaf-7701-4198-802a-0bfc5753885a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.683 248514 DEBUG nova.network.neutron [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.974 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.974 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.975 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.975 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.975 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.976 248514 INFO nova.compute.manager [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Terminating instance
Dec 13 08:28:58 compute-0 nova_compute[248510]: 2025-12-13 08:28:58.977 248514 DEBUG nova.compute.manager [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:28:59 compute-0 nova_compute[248510]: 2025-12-13 08:28:59.009 248514 INFO nova.virt.libvirt.driver [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Beginning live snapshot process
Dec 13 08:28:59 compute-0 nova_compute[248510]: 2025-12-13 08:28:59.044 248514 DEBUG nova.network.neutron [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:28:59 compute-0 nova_compute[248510]: 2025-12-13 08:28:59.051 248514 DEBUG nova.compute.manager [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received event network-changed-a795a3d9-1388-4e72-8bfd-271816a45466 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:28:59 compute-0 nova_compute[248510]: 2025-12-13 08:28:59.051 248514 DEBUG nova.compute.manager [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Refreshing instance network info cache due to event network-changed-a795a3d9-1388-4e72-8bfd-271816a45466. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:28:59 compute-0 nova_compute[248510]: 2025-12-13 08:28:59.051 248514 DEBUG oslo_concurrency.lockutils [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-917c2aaf-7701-4198-802a-0bfc5753885a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:28:59 compute-0 nova_compute[248510]: 2025-12-13 08:28:59.326 248514 DEBUG nova.virt.libvirt.imagebackend [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:28:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 170 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Dec 13 08:28:59 compute-0 nova_compute[248510]: 2025-12-13 08:28:59.570 248514 DEBUG nova.storage.rbd_utils [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(36c458ad47e44a7282d63c2a8d0dce5d) on rbd image(9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:28:59 compute-0 nova_compute[248510]: 2025-12-13 08:28:59.976 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 917c2aaf-7701-4198-802a-0bfc5753885a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:00 compute-0 ceph-mon[76537]: pgmap v1941: 321 pgs: 321 active+clean; 170 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:00 compute-0 kernel: tap834fc672-a8 (unregistering): left promiscuous mode
Dec 13 08:29:00 compute-0 NetworkManager[50376]: <info>  [1765614540.1174] device (tap834fc672-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:29:00 compute-0 ovn_controller[148476]: 2025-12-13T08:29:00Z|00515|binding|INFO|Releasing lport 834fc672-a8af-4884-964c-481d0d8d318e from this chassis (sb_readonly=0)
Dec 13 08:29:00 compute-0 ovn_controller[148476]: 2025-12-13T08:29:00Z|00516|binding|INFO|Setting lport 834fc672-a8af-4884-964c-481d0d8d318e down in Southbound
Dec 13 08:29:00 compute-0 ovn_controller[148476]: 2025-12-13T08:29:00Z|00517|binding|INFO|Removing iface tap834fc672-a8 ovn-installed in OVS
Dec 13 08:29:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.171 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] resizing rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:29:00 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000036.scope: Deactivated successfully.
Dec 13 08:29:00 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000036.scope: Consumed 11.795s CPU time.
Dec 13 08:29:00 compute-0 systemd-machined[210538]: Machine qemu-62-instance-00000036 terminated.
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.278 248514 INFO nova.virt.libvirt.driver [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Instance destroyed successfully.
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.278 248514 DEBUG nova.objects.instance [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'resources' on Instance uuid 7139a479-b2fe-4d64-8061-97fceda2e392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:00.435 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:94:dc 10.100.0.10'], port_security=['fa:16:3e:b6:94:dc 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7139a479-b2fe-4d64-8061-97fceda2e392', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=834fc672-a8af-4884-964c-481d0d8d318e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:29:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:00.436 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 834fc672-a8af-4884-964c-481d0d8d318e in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 unbound from our chassis
Dec 13 08:29:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:00.438 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c63049d-63e9-47af-99e2-ce1403a42891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:29:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:00.439 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2f1547-bf5c-4c47-bdf1-dbf6a8894416]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:00.439 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace which is not needed anymore
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.443 248514 DEBUG nova.virt.libvirt.vif [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:28:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-771461744',display_name='tempest-ServerDiskConfigTestJSON-server-771461744',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-771461744',id=54,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:28:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-lisf0ibr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:28:56Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=7139a479-b2fe-4d64-8061-97fceda2e392,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.444 248514 DEBUG nova.network.os_vif_util [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.445 248514 DEBUG nova.network.os_vif_util [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.445 248514 DEBUG os_vif [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.448 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap834fc672-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.452 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.454 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.457 248514 INFO os_vif [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8')
Dec 13 08:29:00 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [NOTICE]   (304032) : haproxy version is 2.8.14-c23fe91
Dec 13 08:29:00 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [NOTICE]   (304032) : path to executable is /usr/sbin/haproxy
Dec 13 08:29:00 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [WARNING]  (304032) : Exiting Master process...
Dec 13 08:29:00 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [ALERT]    (304032) : Current worker (304034) exited with code 143 (Terminated)
Dec 13 08:29:00 compute-0 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [WARNING]  (304032) : All workers exited. Exiting... (0)
Dec 13 08:29:00 compute-0 systemd[1]: libpod-6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5.scope: Deactivated successfully.
Dec 13 08:29:00 compute-0 podman[304694]: 2025-12-13 08:29:00.698263892 +0000 UTC m=+0.154751299 container died 6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.781 248514 DEBUG nova.objects.instance [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'migration_context' on Instance uuid 917c2aaf-7701-4198-802a-0bfc5753885a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.802 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.804 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Ensure instance console log exists: /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.804 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.805 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:00 compute-0 nova_compute[248510]: 2025-12-13 08:29:00.805 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5-userdata-shm.mount: Deactivated successfully.
Dec 13 08:29:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5f27d429928f4f51cc8b3e9fb422bd397a12e0d5e7d42eff0f2f2f804535b5b-merged.mount: Deactivated successfully.
Dec 13 08:29:00 compute-0 podman[304694]: 2025-12-13 08:29:00.973203145 +0000 UTC m=+0.429690552 container cleanup 6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:29:00 compute-0 systemd[1]: libpod-conmon-6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5.scope: Deactivated successfully.
Dec 13 08:29:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Dec 13 08:29:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Dec 13 08:29:01 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Dec 13 08:29:01 compute-0 podman[304742]: 2025-12-13 08:29:01.456632913 +0000 UTC m=+0.454702880 container remove 6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.459 248514 DEBUG nova.network.neutron [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Updating instance_info_cache with network_info: [{"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 170 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Dec 13 08:29:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.471 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d86b40-a636-4c78-85ff-f5c455b9cadf]: (4, ('Sat Dec 13 08:29:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5)\n6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5\nSat Dec 13 08:29:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5)\n6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.473 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[feb86db5-9b74-474c-b958-ea5caed61cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.474 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:01 compute-0 kernel: tap6c63049d-60: left promiscuous mode
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.486 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.489 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Releasing lock "refresh_cache-917c2aaf-7701-4198-802a-0bfc5753885a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.490 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Instance network_info: |[{"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.490 248514 DEBUG oslo_concurrency.lockutils [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-917c2aaf-7701-4198-802a-0bfc5753885a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.490 248514 DEBUG nova.network.neutron [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Refreshing network info cache for port a795a3d9-1388-4e72-8bfd-271816a45466 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.493 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Start _get_guest_xml network_info=[{"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.516 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.520 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[106fbbbb-54b3-47b4-846f-6abaf6c4dddd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.526 248514 WARNING nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.534 248514 DEBUG nova.virt.libvirt.host [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.535 248514 DEBUG nova.virt.libvirt.host [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:29:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.535 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c289457-8866-4c81-9f4f-0b90ab0f2179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.537 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ef512f-b8a3-40ec-a3f0-508eac89dbca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.543 248514 DEBUG nova.virt.libvirt.host [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.544 248514 DEBUG nova.virt.libvirt.host [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.545 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.545 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.546 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.546 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.547 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.547 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.547 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.548 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.548 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.548 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.549 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.549 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:29:01 compute-0 nova_compute[248510]: 2025-12-13 08:29:01.553 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.558 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a502e4-413f-4b05-809e-836d92402a2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710311, 'reachable_time': 25771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304758, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c63049d\x2d63e9\x2d47af\x2d99e2\x2dce1403a42891.mount: Deactivated successfully.
Dec 13 08:29:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.562 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:29:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.562 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[9908cf5b-6990-4d16-aad7-d30c29500a39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:29:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1830690144' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.135 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.256 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.262 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:02 compute-0 ceph-mon[76537]: osdmap e217: 3 total, 3 up, 3 in
Dec 13 08:29:02 compute-0 ceph-mon[76537]: pgmap v1943: 321 pgs: 321 active+clean; 170 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Dec 13 08:29:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1830690144' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.419 248514 DEBUG nova.compute.manager [req-6e65f6ea-c35a-4747-ae34-145891bcce32 req-ecbb328e-2d15-4b1f-b10a-71dc8fd62219 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-vif-unplugged-834fc672-a8af-4884-964c-481d0d8d318e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.420 248514 DEBUG oslo_concurrency.lockutils [req-6e65f6ea-c35a-4747-ae34-145891bcce32 req-ecbb328e-2d15-4b1f-b10a-71dc8fd62219 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.420 248514 DEBUG oslo_concurrency.lockutils [req-6e65f6ea-c35a-4747-ae34-145891bcce32 req-ecbb328e-2d15-4b1f-b10a-71dc8fd62219 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.420 248514 DEBUG oslo_concurrency.lockutils [req-6e65f6ea-c35a-4747-ae34-145891bcce32 req-ecbb328e-2d15-4b1f-b10a-71dc8fd62219 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.421 248514 DEBUG nova.compute.manager [req-6e65f6ea-c35a-4747-ae34-145891bcce32 req-ecbb328e-2d15-4b1f-b10a-71dc8fd62219 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] No waiting events found dispatching network-vif-unplugged-834fc672-a8af-4884-964c-481d0d8d318e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.421 248514 DEBUG nova.compute.manager [req-6e65f6ea-c35a-4747-ae34-145891bcce32 req-ecbb328e-2d15-4b1f-b10a-71dc8fd62219 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-vif-unplugged-834fc672-a8af-4884-964c-481d0d8d318e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.580 248514 DEBUG nova.storage.rbd_utils [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] cloning vms/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk@36c458ad47e44a7282d63c2a8d0dce5d to images/9644b262-cf46-483d-8fc3-333210320729 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:29:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:29:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/372351703' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.836 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.837 248514 DEBUG nova.virt.libvirt.vif [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1976093525',display_name='tempest-DeleteServersTestJSON-server-1976093525',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1976093525',id=56,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-r0pczy0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-9
91966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:55Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=917c2aaf-7701-4198-802a-0bfc5753885a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.837 248514 DEBUG nova.network.os_vif_util [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.838 248514 DEBUG nova.network.os_vif_util [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.840 248514 DEBUG nova.objects.instance [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 917c2aaf-7701-4198-802a-0bfc5753885a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.862 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:29:02 compute-0 nova_compute[248510]:   <uuid>917c2aaf-7701-4198-802a-0bfc5753885a</uuid>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   <name>instance-00000038</name>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <nova:name>tempest-DeleteServersTestJSON-server-1976093525</nova:name>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:29:01</nova:creationTime>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:29:02 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:29:02 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:29:02 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:29:02 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:29:02 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:29:02 compute-0 nova_compute[248510]:         <nova:user uuid="b6f7967ce16c45d394188c1302b02907">tempest-DeleteServersTestJSON-991966373-project-member</nova:user>
Dec 13 08:29:02 compute-0 nova_compute[248510]:         <nova:project uuid="6f6beadbb0244529b8dfc1abff8e8e10">tempest-DeleteServersTestJSON-991966373</nova:project>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:29:02 compute-0 nova_compute[248510]:         <nova:port uuid="a795a3d9-1388-4e72-8bfd-271816a45466">
Dec 13 08:29:02 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <system>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <entry name="serial">917c2aaf-7701-4198-802a-0bfc5753885a</entry>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <entry name="uuid">917c2aaf-7701-4198-802a-0bfc5753885a</entry>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     </system>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   <os>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   </os>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   <features>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   </features>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/917c2aaf-7701-4198-802a-0bfc5753885a_disk">
Dec 13 08:29:02 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       </source>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:29:02 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/917c2aaf-7701-4198-802a-0bfc5753885a_disk.config">
Dec 13 08:29:02 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       </source>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:29:02 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:a8:54:48"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <target dev="tapa795a3d9-13"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/console.log" append="off"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <video>
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     </video>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:29:02 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:29:02 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:29:02 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:29:02 compute-0 nova_compute[248510]: </domain>
Dec 13 08:29:02 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.864 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Preparing to wait for external event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.864 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.864 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.865 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.865 248514 DEBUG nova.virt.libvirt.vif [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1976093525',display_name='tempest-DeleteServersTestJSON-server-1976093525',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1976093525',id=56,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-r0pczy0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServers
TestJSON-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:55Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=917c2aaf-7701-4198-802a-0bfc5753885a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.866 248514 DEBUG nova.network.os_vif_util [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.866 248514 DEBUG nova.network.os_vif_util [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.867 248514 DEBUG os_vif [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.867 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.868 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.868 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.872 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa795a3d9-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.872 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa795a3d9-13, col_values=(('external_ids', {'iface-id': 'a795a3d9-1388-4e72-8bfd-271816a45466', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:54:48', 'vm-uuid': '917c2aaf-7701-4198-802a-0bfc5753885a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:02 compute-0 NetworkManager[50376]: <info>  [1765614542.8753] manager: (tapa795a3d9-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.880 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.882 248514 INFO os_vif [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13')
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.965 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.966 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.966 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No VIF found with MAC fa:16:3e:a8:54:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.966 248514 INFO nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Using config drive
Dec 13 08:29:02 compute-0 nova_compute[248510]: 2025-12-13 08:29:02.987 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:03 compute-0 nova_compute[248510]: 2025-12-13 08:29:03.240 248514 DEBUG nova.network.neutron [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Updated VIF entry in instance network info cache for port a795a3d9-1388-4e72-8bfd-271816a45466. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:29:03 compute-0 nova_compute[248510]: 2025-12-13 08:29:03.240 248514 DEBUG nova.network.neutron [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Updating instance_info_cache with network_info: [{"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:03 compute-0 nova_compute[248510]: 2025-12-13 08:29:03.261 248514 DEBUG oslo_concurrency.lockutils [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-917c2aaf-7701-4198-802a-0bfc5753885a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:29:03 compute-0 nova_compute[248510]: 2025-12-13 08:29:03.349 248514 DEBUG nova.storage.rbd_utils [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] flattening images/9644b262-cf46-483d-8fc3-333210320729 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:29:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 174 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 13 08:29:03 compute-0 nova_compute[248510]: 2025-12-13 08:29:03.727 248514 INFO nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Creating config drive at /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/disk.config
Dec 13 08:29:03 compute-0 nova_compute[248510]: 2025-12-13 08:29:03.733 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvzapdv_1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:03 compute-0 nova_compute[248510]: 2025-12-13 08:29:03.884 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvzapdv_1" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.001 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.010 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/disk.config 917c2aaf-7701-4198-802a-0bfc5753885a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:04 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/372351703' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.626 248514 DEBUG nova.storage.rbd_utils [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] removing snapshot(36c458ad47e44a7282d63c2a8d0dce5d) on rbd image(9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.692 248514 INFO nova.virt.libvirt.driver [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Deleting instance files /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392_del
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.692 248514 INFO nova.virt.libvirt.driver [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Deletion of /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392_del complete
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.711 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/disk.config 917c2aaf-7701-4198-802a-0bfc5753885a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.711 248514 INFO nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Deleting local config drive /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/disk.config because it was imported into RBD.
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.760 248514 INFO nova.compute.manager [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Took 5.78 seconds to destroy the instance on the hypervisor.
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.761 248514 DEBUG oslo.service.loopingcall [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.762 248514 DEBUG nova.compute.manager [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.762 248514 DEBUG nova.network.neutron [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:29:04 compute-0 kernel: tapa795a3d9-13: entered promiscuous mode
Dec 13 08:29:04 compute-0 NetworkManager[50376]: <info>  [1765614544.7805] manager: (tapa795a3d9-13): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.782 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:04 compute-0 ovn_controller[148476]: 2025-12-13T08:29:04Z|00518|binding|INFO|Claiming lport a795a3d9-1388-4e72-8bfd-271816a45466 for this chassis.
Dec 13 08:29:04 compute-0 ovn_controller[148476]: 2025-12-13T08:29:04Z|00519|binding|INFO|a795a3d9-1388-4e72-8bfd-271816a45466: Claiming fa:16:3e:a8:54:48 10.100.0.12
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.795 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:54:48 10.100.0.12'], port_security=['fa:16:3e:a8:54:48 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '917c2aaf-7701-4198-802a-0bfc5753885a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a795a3d9-1388-4e72-8bfd-271816a45466) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.796 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a795a3d9-1388-4e72-8bfd-271816a45466 in datapath 85372fca-ab50-48b6-8c21-507f630c205a bound to our chassis
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.798 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.805 248514 DEBUG nova.compute.manager [req-4aa6793f-6a1c-4b5a-8906-85bbf86e77dd req-a4fdb38a-c906-4f3f-abbb-46b291cb1127 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.806 248514 DEBUG oslo_concurrency.lockutils [req-4aa6793f-6a1c-4b5a-8906-85bbf86e77dd req-a4fdb38a-c906-4f3f-abbb-46b291cb1127 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.806 248514 DEBUG oslo_concurrency.lockutils [req-4aa6793f-6a1c-4b5a-8906-85bbf86e77dd req-a4fdb38a-c906-4f3f-abbb-46b291cb1127 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.806 248514 DEBUG oslo_concurrency.lockutils [req-4aa6793f-6a1c-4b5a-8906-85bbf86e77dd req-a4fdb38a-c906-4f3f-abbb-46b291cb1127 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.807 248514 DEBUG nova.compute.manager [req-4aa6793f-6a1c-4b5a-8906-85bbf86e77dd req-a4fdb38a-c906-4f3f-abbb-46b291cb1127 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] No waiting events found dispatching network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.807 248514 WARNING nova.compute.manager [req-4aa6793f-6a1c-4b5a-8906-85bbf86e77dd req-a4fdb38a-c906-4f3f-abbb-46b291cb1127 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received unexpected event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e for instance with vm_state active and task_state deleting.
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.811 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b13e68-239f-4dca-a25f-8e6239177f5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.813 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85372fca-a1 in ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.816 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85372fca-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.817 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4774e6d5-5345-46ad-89f8-4fcf9326b45b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.817 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c08108f4-039c-4e70-bb94-c3e6baecda6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:04 compute-0 systemd-udevd[304969]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.835 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[27c3fd26-13f7-4cf9-8c81-1bfffd1c0772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:04 compute-0 NetworkManager[50376]: <info>  [1765614544.8501] device (tapa795a3d9-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:29:04 compute-0 NetworkManager[50376]: <info>  [1765614544.8512] device (tapa795a3d9-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:29:04 compute-0 systemd-machined[210538]: New machine qemu-64-instance-00000038.
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.866 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[77e27877-6b1e-4816-b2ca-73e5fe0e0000]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:04 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-00000038.
Dec 13 08:29:04 compute-0 ovn_controller[148476]: 2025-12-13T08:29:04Z|00520|binding|INFO|Setting lport a795a3d9-1388-4e72-8bfd-271816a45466 ovn-installed in OVS
Dec 13 08:29:04 compute-0 ovn_controller[148476]: 2025-12-13T08:29:04Z|00521|binding|INFO|Setting lport a795a3d9-1388-4e72-8bfd-271816a45466 up in Southbound
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.882 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:04 compute-0 nova_compute[248510]: 2025-12-13 08:29:04.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.906 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[89d6fba8-1df7-4c3f-80c4-5dd8778f921e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:04 compute-0 NetworkManager[50376]: <info>  [1765614544.9129] manager: (tap85372fca-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/231)
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.912 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a398ba-f04f-409a-b935-c29b91a71521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.958 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0605a238-73a7-4cc0-9b5b-9baa00950033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.962 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c7479ee4-df55-48b7-b820-231674c8bf93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:04 compute-0 NetworkManager[50376]: <info>  [1765614544.9916] device (tap85372fca-a0): carrier: link connected
Dec 13 08:29:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.997 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cd14b780-51e0-4b44-8b6e-7993cefc65a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.018 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09e37613-0947-4ab5-aa0c-01063b6e30c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712220, 'reachable_time': 30678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305001, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.033 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fb390305-f431-4536-b7f3-d644f3395718]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:30d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712220, 'tstamp': 712220}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305002, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.058 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2523e835-deed-4111-882c-e48fb736753b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712220, 'reachable_time': 30678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305003, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Dec 13 08:29:05 compute-0 nova_compute[248510]: 2025-12-13 08:29:05.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.098 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75c7a013-f14e-42ff-867f-b223744a3855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Dec 13 08:29:05 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Dec 13 08:29:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.183 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[89278b6b-b773-4b18-855a-e750dfd9dbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.185 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.185 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.186 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85372fca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:05 compute-0 kernel: tap85372fca-a0: entered promiscuous mode
Dec 13 08:29:05 compute-0 nova_compute[248510]: 2025-12-13 08:29:05.189 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:05 compute-0 NetworkManager[50376]: <info>  [1765614545.1908] manager: (tap85372fca-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Dec 13 08:29:05 compute-0 ceph-mon[76537]: pgmap v1944: 321 pgs: 321 active+clean; 174 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.193 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85372fca-a0, col_values=(('external_ids', {'iface-id': '2c0f4981-0ad0-478e-b1ad-551d231022ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:05 compute-0 ovn_controller[148476]: 2025-12-13T08:29:05Z|00522|binding|INFO|Releasing lport 2c0f4981-0ad0-478e-b1ad-551d231022ad from this chassis (sb_readonly=0)
Dec 13 08:29:05 compute-0 nova_compute[248510]: 2025-12-13 08:29:05.194 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.197 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.199 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e600ccdf-04c4-4ac5-93f3-fbdc963fb3e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.202 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:29:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.204 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'env', 'PROCESS_TAG=haproxy-85372fca-ab50-48b6-8c21-507f630c205a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85372fca-ab50-48b6-8c21-507f630c205a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:29:05 compute-0 nova_compute[248510]: 2025-12-13 08:29:05.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 138 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.2 MiB/s wr, 229 op/s
Dec 13 08:29:05 compute-0 nova_compute[248510]: 2025-12-13 08:29:05.621 248514 DEBUG nova.storage.rbd_utils [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(snap) on rbd image(9644b262-cf46-483d-8fc3-333210320729) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:29:05 compute-0 podman[305053]: 2025-12-13 08:29:05.583413495 +0000 UTC m=+0.027201452 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:29:05 compute-0 nova_compute[248510]: 2025-12-13 08:29:05.904 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614545.7925143, 917c2aaf-7701-4198-802a-0bfc5753885a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:05 compute-0 nova_compute[248510]: 2025-12-13 08:29:05.904 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] VM Started (Lifecycle Event)
Dec 13 08:29:05 compute-0 nova_compute[248510]: 2025-12-13 08:29:05.928 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:05 compute-0 nova_compute[248510]: 2025-12-13 08:29:05.933 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614545.7926009, 917c2aaf-7701-4198-802a-0bfc5753885a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:05 compute-0 nova_compute[248510]: 2025-12-13 08:29:05.934 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] VM Paused (Lifecycle Event)
Dec 13 08:29:05 compute-0 nova_compute[248510]: 2025-12-13 08:29:05.961 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:05 compute-0 nova_compute[248510]: 2025-12-13 08:29:05.966 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:29:05 compute-0 podman[305053]: 2025-12-13 08:29:05.992495389 +0000 UTC m=+0.436283316 container create c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.024 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:29:06 compute-0 systemd[1]: Started libpod-conmon-c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec.scope.
Dec 13 08:29:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f712d337e457c36f21aba716ac7b7563dada02a2e75c25932692cea653917424/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:06 compute-0 podman[305053]: 2025-12-13 08:29:06.179697858 +0000 UTC m=+0.623485805 container init c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:29:06 compute-0 podman[305053]: 2025-12-13 08:29:06.187004738 +0000 UTC m=+0.630792655 container start c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 08:29:06 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[305110]: [NOTICE]   (305114) : New worker (305116) forked
Dec 13 08:29:06 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[305110]: [NOTICE]   (305114) : Loading success.
Dec 13 08:29:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Dec 13 08:29:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Dec 13 08:29:06 compute-0 ceph-mon[76537]: osdmap e218: 3 total, 3 up, 3 in
Dec 13 08:29:06 compute-0 ceph-mon[76537]: pgmap v1946: 321 pgs: 321 active+clean; 138 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.2 MiB/s wr, 229 op/s
Dec 13 08:29:06 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 9644b262-cf46-483d-8fc3-333210320729 could not be found.
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     image = self._client.call(
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 9644b262-cf46-483d-8fc3-333210320729
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver 
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver 
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     image = self._client.call(
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 9644b262-cf46-483d-8fc3-333210320729 could not be found.
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver 
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.808 248514 DEBUG nova.storage.rbd_utils [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] removing snapshot(snap) on rbd image(9644b262-cf46-483d-8fc3-333210320729) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.959 248514 DEBUG nova.network.neutron [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:06 compute-0 nova_compute[248510]: 2025-12-13 08:29:06.997 248514 INFO nova.compute.manager [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Took 2.23 seconds to deallocate network for instance.
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.052 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.053 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.189 248514 DEBUG oslo_concurrency.processutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.416 248514 DEBUG nova.compute.manager [req-d14016f8-bf22-44e2-bf34-9be11021870e req-a2879dfa-a1eb-4a27-aedd-d4284f8efcd3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.417 248514 DEBUG oslo_concurrency.lockutils [req-d14016f8-bf22-44e2-bf34-9be11021870e req-a2879dfa-a1eb-4a27-aedd-d4284f8efcd3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.418 248514 DEBUG oslo_concurrency.lockutils [req-d14016f8-bf22-44e2-bf34-9be11021870e req-a2879dfa-a1eb-4a27-aedd-d4284f8efcd3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.418 248514 DEBUG oslo_concurrency.lockutils [req-d14016f8-bf22-44e2-bf34-9be11021870e req-a2879dfa-a1eb-4a27-aedd-d4284f8efcd3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.418 248514 DEBUG nova.compute.manager [req-d14016f8-bf22-44e2-bf34-9be11021870e req-a2879dfa-a1eb-4a27-aedd-d4284f8efcd3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Processing event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.419 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.423 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614547.423581, 917c2aaf-7701-4198-802a-0bfc5753885a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.424 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] VM Resumed (Lifecycle Event)
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.427 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.433 248514 INFO nova.virt.libvirt.driver [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Instance spawned successfully.
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.434 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.453 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.462 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.463 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.464 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.465 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.466 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.466 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 138 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 120 op/s
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.472 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:29:07 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.517 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.550 248514 INFO nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Took 11.51 seconds to spawn the instance on the hypervisor.
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.551 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:07 compute-0 ceph-mon[76537]: osdmap e219: 3 total, 3 up, 3 in
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.634 248514 INFO nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Took 14.33 seconds to build instance.
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.652 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3120351394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.879 248514 DEBUG oslo_concurrency.processutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.887 248514 DEBUG nova.compute.provider_tree [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.906 248514 DEBUG nova.scheduler.client.report [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.928 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:07 compute-0 nova_compute[248510]: 2025-12-13 08:29:07.958 248514 INFO nova.scheduler.client.report [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Deleted allocations for instance 7139a479-b2fe-4d64-8061-97fceda2e392
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.045 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.553 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.553 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.582 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.595 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.595 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.624 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:29:08 compute-0 ceph-mon[76537]: pgmap v1948: 321 pgs: 321 active+clean; 138 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 120 op/s
Dec 13 08:29:08 compute-0 ceph-mon[76537]: osdmap e220: 3 total, 3 up, 3 in
Dec 13 08:29:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3120351394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.656 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.657 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.680 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.683 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.683 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.690 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.690 248514 INFO nova.compute.claims [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.711 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.779 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:08 compute-0 nova_compute[248510]: 2025-12-13 08:29:08.911 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:29:09
Dec 13 08:29:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:29:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:29:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms']
Dec 13 08:29:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:29:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 202 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 7.6 MiB/s wr, 368 op/s
Dec 13 08:29:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:09 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981163263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.497 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.504 248514 DEBUG nova.compute.provider_tree [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.581 248514 DEBUG nova.scheduler.client.report [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.618 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.619 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.622 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.628 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.628 248514 INFO nova.compute.claims [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:29:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1981163263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.797 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.797 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.835 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.840 248514 WARNING nova.compute.manager [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Image not found during snapshot: nova.exception.ImageNotFound: Image 9644b262-cf46-483d-8fc3-333210320729 could not be found.
Dec 13 08:29:09 compute-0 nova_compute[248510]: 2025-12-13 08:29:09.872 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.899136) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614549899192, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2225, "num_deletes": 259, "total_data_size": 3448339, "memory_usage": 3507744, "flush_reason": "Manual Compaction"}
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614549962906, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3350819, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35442, "largest_seqno": 37666, "table_properties": {"data_size": 3340685, "index_size": 6436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21770, "raw_average_key_size": 20, "raw_value_size": 3320063, "raw_average_value_size": 3186, "num_data_blocks": 281, "num_entries": 1042, "num_filter_entries": 1042, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614368, "oldest_key_time": 1765614368, "file_creation_time": 1765614549, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 63858 microseconds, and 7962 cpu microseconds.
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.962985) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3350819 bytes OK
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.963019) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.986973) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.987022) EVENT_LOG_v1 {"time_micros": 1765614549987012, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.987056) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3438841, prev total WAL file size 3438841, number of live WAL files 2.
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.988286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3272KB)], [80(7400KB)]
Dec 13 08:29:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614549988419, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10928777, "oldest_snapshot_seqno": -1}
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6225 keys, 9110588 bytes, temperature: kUnknown
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614550083190, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 9110588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9069164, "index_size": 24738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157766, "raw_average_key_size": 25, "raw_value_size": 8957597, "raw_average_value_size": 1438, "num_data_blocks": 1001, "num_entries": 6225, "num_filter_entries": 6225, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614549, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.085 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.088 248514 DEBUG nova.policy [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b4cd12d84d34c95ac78f304b6e7546d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.094 248514 DEBUG nova.compute.manager [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-vif-deleted-834fc672-a8af-4884-964c-481d0d8d318e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.095 248514 DEBUG nova.compute.manager [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.096 248514 DEBUG oslo_concurrency.lockutils [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.096 248514 DEBUG oslo_concurrency.lockutils [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.096 248514 DEBUG oslo_concurrency.lockutils [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.097 248514 DEBUG nova.compute.manager [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] No waiting events found dispatching network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.097 248514 WARNING nova.compute.manager [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received unexpected event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 for instance with vm_state active and task_state None.
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.083496) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 9110588 bytes
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.103997) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.2 rd, 96.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 6753, records dropped: 528 output_compression: NoCompression
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.104045) EVENT_LOG_v1 {"time_micros": 1765614550104028, "job": 46, "event": "compaction_finished", "compaction_time_micros": 94860, "compaction_time_cpu_micros": 24018, "output_level": 6, "num_output_files": 1, "total_output_size": 9110588, "num_input_records": 6753, "num_output_records": 6225, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614550104834, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614550106033, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.988156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.106102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.106107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.106109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.106111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:29:10 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.106113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:29:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:29:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.176 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.178 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.178 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Creating image(s)
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.247 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.274 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:10 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.375 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:10 compute-0 ovn_controller[148476]: 2025-12-13T08:29:10Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:f5:51 10.100.0.11
Dec 13 08:29:10 compute-0 ovn_controller[148476]: 2025-12-13T08:29:10Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:f5:51 10.100.0.11
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.384 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.427 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.474 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.475 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.475 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.476 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.501 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.507 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.561 248514 DEBUG nova.objects.instance [None req-e4edd0ab-9994-4934-b0b1-653c32d39a66 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 917c2aaf-7701-4198-802a-0bfc5753885a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.590 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614550.5899262, 917c2aaf-7701-4198-802a-0bfc5753885a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.590 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] VM Paused (Lifecycle Event)
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:29:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.615 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.627 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:29:10 compute-0 nova_compute[248510]: 2025-12-13 08:29:10.651 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] During sync_power_state the instance has a pending task (suspending). Skip.
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.314 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Successfully created port: c0518450-5f7c-4fa0-bf72-59ebcb5be073 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:29:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 202 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.9 MiB/s wr, 287 op/s
Dec 13 08:29:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1657075699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.613 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.622 248514 DEBUG nova.compute.provider_tree [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.644 248514 DEBUG nova.scheduler.client.report [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.688 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.689 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.693 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.700 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.701 248514 INFO nova.compute.claims [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:29:11 compute-0 ceph-mon[76537]: pgmap v1950: 321 pgs: 321 active+clean; 202 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 7.6 MiB/s wr, 368 op/s
Dec 13 08:29:11 compute-0 ceph-mon[76537]: osdmap e221: 3 total, 3 up, 3 in
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.798 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.799 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.799 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.799 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.800 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.801 248514 INFO nova.compute.manager [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Terminating instance
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.802 248514 DEBUG nova.compute.manager [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.821 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.822 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.847 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.873 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.984 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.986 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:29:11 compute-0 nova_compute[248510]: 2025-12-13 08:29:11.986 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Creating image(s)
Dec 13 08:29:12 compute-0 kernel: tapa795a3d9-13 (unregistering): left promiscuous mode
Dec 13 08:29:12 compute-0 NetworkManager[50376]: <info>  [1765614552.4312] device (tapa795a3d9-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:29:12 compute-0 ovn_controller[148476]: 2025-12-13T08:29:12Z|00523|binding|INFO|Releasing lport a795a3d9-1388-4e72-8bfd-271816a45466 from this chassis (sb_readonly=0)
Dec 13 08:29:12 compute-0 ovn_controller[148476]: 2025-12-13T08:29:12Z|00524|binding|INFO|Setting lport a795a3d9-1388-4e72-8bfd-271816a45466 down in Southbound
Dec 13 08:29:12 compute-0 ovn_controller[148476]: 2025-12-13T08:29:12Z|00525|binding|INFO|Removing iface tapa795a3d9-13 ovn-installed in OVS
Dec 13 08:29:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:12.446 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:54:48 10.100.0.12'], port_security=['fa:16:3e:a8:54:48 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '917c2aaf-7701-4198-802a-0bfc5753885a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a795a3d9-1388-4e72-8bfd-271816a45466) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:29:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:12.448 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a795a3d9-1388-4e72-8bfd-271816a45466 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis
Dec 13 08:29:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:12.450 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:29:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:12.452 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[07867776-e3f0-41aa-89c3-ccb7f7ee635a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:12.452 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace which is not needed anymore
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.476 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:12 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000038.scope: Deactivated successfully.
Dec 13 08:29:12 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000038.scope: Consumed 3.717s CPU time.
Dec 13 08:29:12 compute-0 systemd-machined[210538]: Machine qemu-64-instance-00000038 terminated.
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.511 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.537 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.543 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.596 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.648 248514 DEBUG nova.policy [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b4cd12d84d34c95ac78f304b6e7546d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.658 248514 DEBUG nova.compute.manager [None req-e4edd0ab-9994-4934-b0b1-653c32d39a66 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.659 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.659 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.660 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.660 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.685 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.692 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:12 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[305110]: [NOTICE]   (305114) : haproxy version is 2.8.14-c23fe91
Dec 13 08:29:12 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[305110]: [NOTICE]   (305114) : path to executable is /usr/sbin/haproxy
Dec 13 08:29:12 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[305110]: [ALERT]    (305114) : Current worker (305116) exited with code 143 (Terminated)
Dec 13 08:29:12 compute-0 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[305110]: [WARNING]  (305114) : All workers exited. Exiting... (0)
Dec 13 08:29:12 compute-0 systemd[1]: libpod-c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec.scope: Deactivated successfully.
Dec 13 08:29:12 compute-0 podman[305406]: 2025-12-13 08:29:12.796315573 +0000 UTC m=+0.235394999 container died c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.861 248514 DEBUG nova.compute.manager [req-fc142b7e-779d-4bb1-ad47-5c21c26ac543 req-77f6fb15-9628-4ae2-818c-641ef66b24c8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received event network-vif-unplugged-a795a3d9-1388-4e72-8bfd-271816a45466 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.863 248514 DEBUG oslo_concurrency.lockutils [req-fc142b7e-779d-4bb1-ad47-5c21c26ac543 req-77f6fb15-9628-4ae2-818c-641ef66b24c8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.863 248514 DEBUG oslo_concurrency.lockutils [req-fc142b7e-779d-4bb1-ad47-5c21c26ac543 req-77f6fb15-9628-4ae2-818c-641ef66b24c8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.863 248514 DEBUG oslo_concurrency.lockutils [req-fc142b7e-779d-4bb1-ad47-5c21c26ac543 req-77f6fb15-9628-4ae2-818c-641ef66b24c8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.863 248514 DEBUG nova.compute.manager [req-fc142b7e-779d-4bb1-ad47-5c21c26ac543 req-77f6fb15-9628-4ae2-818c-641ef66b24c8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] No waiting events found dispatching network-vif-unplugged-a795a3d9-1388-4e72-8bfd-271816a45466 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.864 248514 WARNING nova.compute.manager [req-fc142b7e-779d-4bb1-ad47-5c21c26ac543 req-77f6fb15-9628-4ae2-818c-641ef66b24c8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received unexpected event network-vif-unplugged-a795a3d9-1388-4e72-8bfd-271816a45466 for instance with vm_state suspended and task_state None.
Dec 13 08:29:12 compute-0 nova_compute[248510]: 2025-12-13 08:29:12.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315320300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.200 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.207 248514 DEBUG nova.compute.provider_tree [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.227 248514 DEBUG nova.scheduler.client.report [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:13 compute-0 ceph-mon[76537]: pgmap v1952: 321 pgs: 321 active+clean; 202 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.9 MiB/s wr, 287 op/s
Dec 13 08:29:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1657075699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.256 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.257 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.278 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Successfully created port: dd059dc2-8c1a-49c4-b820-7cf31293c210 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.291 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Successfully updated port: c0518450-5f7c-4fa0-bf72-59ebcb5be073 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.316 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.317 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.319 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "refresh_cache-b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.319 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquired lock "refresh_cache-b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.319 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.337 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:29:13 compute-0 kernel: tap3aadc575-dc (unregistering): left promiscuous mode
Dec 13 08:29:13 compute-0 NetworkManager[50376]: <info>  [1765614553.3757] device (tap3aadc575-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.375 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:29:13 compute-0 ovn_controller[148476]: 2025-12-13T08:29:13Z|00526|binding|INFO|Releasing lport 3aadc575-dc9f-4823-82c7-112e9b9832fe from this chassis (sb_readonly=0)
Dec 13 08:29:13 compute-0 ovn_controller[148476]: 2025-12-13T08:29:13Z|00527|binding|INFO|Setting lport 3aadc575-dc9f-4823-82c7-112e9b9832fe down in Southbound
Dec 13 08:29:13 compute-0 ovn_controller[148476]: 2025-12-13T08:29:13Z|00528|binding|INFO|Removing iface tap3aadc575-dc ovn-installed in OVS
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.383 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:13.395 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:f5:51 10.100.0.11'], port_security=['fa:16:3e:dd:f5:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9b45ce75-4bd3-4cc0-a772-2474ffc2cd52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3aadc575-dc9f-4823-82c7-112e9b9832fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.461 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 195 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 309 op/s
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.504 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.507 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.508 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Creating image(s)
Dec 13 08:29:13 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000037.scope: Deactivated successfully.
Dec 13 08:29:13 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000037.scope: Consumed 12.744s CPU time.
Dec 13 08:29:13 compute-0 systemd-machined[210538]: Machine qemu-63-instance-00000037 terminated.
Dec 13 08:29:13 compute-0 NetworkManager[50376]: <info>  [1765614553.6280] manager: (tap3aadc575-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Dec 13 08:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec-userdata-shm.mount: Deactivated successfully.
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.772 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f712d337e457c36f21aba716ac7b7563dada02a2e75c25932692cea653917424-merged.mount: Deactivated successfully.
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.802 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.825 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.829 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.873 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.877 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.884 248514 DEBUG nova.policy [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b4cd12d84d34c95ac78f304b6e7546d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.891 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Instance destroyed successfully.
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.891 248514 DEBUG nova.objects.instance [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'resources' on Instance uuid 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.923 248514 DEBUG nova.virt.libvirt.vif [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1587589866',display_name='tempest-ImagesTestJSON-server-1587589866',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1587589866',id=55,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:28:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-uay75zc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:29:09Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=9b45ce75-4bd3-4cc0-a772-2474ffc2cd52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.923 248514 DEBUG nova.network.os_vif_util [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.924 248514 DEBUG nova.network.os_vif_util [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.924 248514 DEBUG os_vif [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.926 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.926 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3aadc575-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.931 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.932 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.932 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:13 compute-0 nova_compute[248510]: 2025-12-13 08:29:13.933 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:14 compute-0 ovn_controller[148476]: 2025-12-13T08:29:14Z|00529|binding|INFO|Releasing lport 2c0f4981-0ad0-478e-b1ad-551d231022ad from this chassis (sb_readonly=0)
Dec 13 08:29:14 compute-0 ovn_controller[148476]: 2025-12-13T08:29:14Z|00530|binding|INFO|Releasing lport 3e59c36d-12c7-4d92-aa2f-8b6a795f3f98 from this chassis (sb_readonly=0)
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.661 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:14 compute-0 podman[305406]: 2025-12-13 08:29:14.665706418 +0000 UTC m=+2.104785844 container cleanup c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.667 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1c940191-84c7-423e-901a-233b14c2acec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:14 compute-0 systemd[1]: libpod-conmon-c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec.scope: Deactivated successfully.
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.717 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Successfully updated port: dd059dc2-8c1a-49c4-b820-7cf31293c210 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.721 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.754 248514 DEBUG nova.compute.manager [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-changed-dd059dc2-8c1a-49c4-b820-7cf31293c210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.755 248514 DEBUG nova.compute.manager [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Refreshing instance network info cache due to event network-changed-dd059dc2-8c1a-49c4-b820-7cf31293c210. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.756 248514 DEBUG oslo_concurrency.lockutils [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-b9e0d5ab-483f-49a1-901a-c36f31ab710f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.756 248514 DEBUG oslo_concurrency.lockutils [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-b9e0d5ab-483f-49a1-901a-c36f31ab710f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.757 248514 DEBUG nova.network.neutron [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Refreshing network info cache for port dd059dc2-8c1a-49c4-b820-7cf31293c210 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.761 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "refresh_cache-b9e0d5ab-483f-49a1-901a-c36f31ab710f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.765 248514 INFO os_vif [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc')
Dec 13 08:29:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/315320300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:14 compute-0 ceph-mon[76537]: pgmap v1953: 321 pgs: 321 active+clean; 195 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 309 op/s
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.951 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.952 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.953 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.953 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.954 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.955 248514 INFO nova.compute.manager [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Terminating instance
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.957 248514 DEBUG nova.compute.manager [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.970 248514 INFO nova.virt.libvirt.driver [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Instance destroyed successfully.
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.971 248514 DEBUG nova.objects.instance [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'resources' on Instance uuid 917c2aaf-7701-4198-802a-0bfc5753885a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.987 248514 DEBUG nova.virt.libvirt.vif [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1976093525',display_name='tempest-DeleteServersTestJSON-server-1976093525',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1976093525',id=56,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-r0pczy0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:29:12Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=917c2aaf-7701-4198-802a-0bfc5753885a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.988 248514 DEBUG nova.network.os_vif_util [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.989 248514 DEBUG nova.network.os_vif_util [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.989 248514 DEBUG os_vif [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.991 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.991 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa795a3d9-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.993 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:14 compute-0 nova_compute[248510]: 2025-12-13 08:29:14.998 248514 INFO os_vif [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13')
Dec 13 08:29:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.380 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.382 248514 DEBUG nova.network.neutron [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.385 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Successfully created port: 41420fc6-e900-4745-a3c1-4f2541c9e1f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.387 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614540.2147691, 7139a479-b2fe-4d64-8061-97fceda2e392 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.388 248514 INFO nova.compute.manager [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] VM Stopped (Lifecycle Event)
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.391 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.391 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.392 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.392 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.392 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] No waiting events found dispatching network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.392 248514 WARNING nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received unexpected event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 for instance with vm_state suspended and task_state deleting.
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.393 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Received event network-changed-c0518450-5f7c-4fa0-bf72-59ebcb5be073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.393 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Refreshing instance network info cache due to event network-changed-c0518450-5f7c-4fa0-bf72-59ebcb5be073. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.393 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:29:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:29:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2938618048' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:29:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:29:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2938618048' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.417 248514 DEBUG nova.compute.manager [None req-2b2d81da-9b5e-4b48-9c04-0ac1a6b52694 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 169 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 5.5 MiB/s wr, 294 op/s
Dec 13 08:29:15 compute-0 podman[305602]: 2025-12-13 08:29:15.559537911 +0000 UTC m=+0.861410904 container remove c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.569 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4814e6bd-44f0-4949-a3cf-816570c8468b]: (4, ('Sat Dec 13 08:29:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec)\nc90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec\nSat Dec 13 08:29:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec)\nc90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.572 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[45ad0200-839d-43e2-a5fc-22a93d186b4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.573 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:15 compute-0 kernel: tap85372fca-a0: left promiscuous mode
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.697 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.700 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42c8f875-4417-490d-bcb9-f5890a56833e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.722 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fac2e5f-c749-4bfc-b7bf-a5e7413406dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.724 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55413d1a-df59-4fed-ba44-71dc05c9bb30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[850c5860-97b5-4704-b269-7731e6782412]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712211, 'reachable_time': 30884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305673, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d85372fca\x2dab50\x2d48b6\x2d8c21\x2d507f630c205a.mount: Deactivated successfully.
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.752 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.752 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[aaca91f0-990e-4979-9334-e1b9d2aef6af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.753 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3aadc575-dc9f-4823-82c7-112e9b9832fe in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc unbound from our chassis
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.754 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.755 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea825946-8ad5-499d-b9bf-22b3b2808612]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.755 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace which is not needed anymore
Dec 13 08:29:15 compute-0 nova_compute[248510]: 2025-12-13 08:29:15.821 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.050 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] resizing rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:29:16 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [NOTICE]   (304431) : haproxy version is 2.8.14-c23fe91
Dec 13 08:29:16 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [NOTICE]   (304431) : path to executable is /usr/sbin/haproxy
Dec 13 08:29:16 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [WARNING]  (304431) : Exiting Master process...
Dec 13 08:29:16 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [ALERT]    (304431) : Current worker (304433) exited with code 143 (Terminated)
Dec 13 08:29:16 compute-0 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [WARNING]  (304431) : All workers exited. Exiting... (0)
Dec 13 08:29:16 compute-0 systemd[1]: libpod-2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01.scope: Deactivated successfully.
Dec 13 08:29:16 compute-0 podman[305713]: 2025-12-13 08:29:16.106294492 +0000 UTC m=+0.245785976 container died 2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.364 248514 DEBUG nova.network.neutron [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.434 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Updating instance_info_cache with network_info: [{"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.480 248514 DEBUG oslo_concurrency.lockutils [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-b9e0d5ab-483f-49a1-901a-c36f31ab710f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.483 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquired lock "refresh_cache-b9e0d5ab-483f-49a1-901a-c36f31ab710f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.483 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.488 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Releasing lock "refresh_cache-b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.488 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Instance network_info: |[{"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.490 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.491 248514 DEBUG nova.network.neutron [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Refreshing network info cache for port c0518450-5f7c-4fa0-bf72-59ebcb5be073 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:29:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01-userdata-shm.mount: Deactivated successfully.
Dec 13 08:29:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-80d06258ddb97e0e16af3f5236f13273e2b5d49c570ee6de133888013ff7c7f8-merged.mount: Deactivated successfully.
Dec 13 08:29:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2938618048' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:29:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2938618048' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:29:16 compute-0 ceph-mon[76537]: pgmap v1954: 321 pgs: 321 active+clean; 169 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 5.5 MiB/s wr, 294 op/s
Dec 13 08:29:16 compute-0 podman[305713]: 2025-12-13 08:29:16.835821151 +0000 UTC m=+0.975312635 container cleanup 2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 08:29:16 compute-0 systemd[1]: libpod-conmon-2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01.scope: Deactivated successfully.
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.851 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.863 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:16 compute-0 nova_compute[248510]: 2025-12-13 08:29:16.941 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] resizing rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.086 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1c940191-84c7-423e-901a-233b14c2acec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.114 248514 DEBUG nova.objects.instance [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'migration_context' on Instance uuid b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.142 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.143 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Ensure instance console log exists: /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.143 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.144 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.144 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.147 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Start _get_guest_xml network_info=[{"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.153 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] resizing rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:29:17 compute-0 podman[305783]: 2025-12-13 08:29:17.157220322 +0000 UTC m=+0.290834647 container remove 2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 08:29:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.166 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[590268c6-adc4-4874-8e47-0c5fc8b18f94]: (4, ('Sat Dec 13 08:29:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01)\n2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01\nSat Dec 13 08:29:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01)\n2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.169 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ed684bb9-2926-4c89-9eaf-ed12f358cca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.170 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:17 compute-0 kernel: tap87bd91d0-e0: left promiscuous mode
Dec 13 08:29:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.192 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f40dbb4-5bb4-475b-b541-7c619785edb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.216 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[84a7e03a-6909-4c6d-9989-96b635f39284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.218 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6e16b1d0-d4a9-4185-950f-01befed22711]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.231 248514 WARNING nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.238 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:29:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.237 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e9dfba-2dad-4142-8f01-14fd96188e51]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711124, 'reachable_time': 33967, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305925, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.238 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:29:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d87bd91d0\x2deead\x2d49b6\x2d8f92\x2df8d0dba555dc.mount: Deactivated successfully.
Dec 13 08:29:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.240 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:29:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.241 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d95a3bff-f947-46ff-85fb-89434a7d3db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.242 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.242 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.243 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.243 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.244 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.244 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.244 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.244 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.244 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.245 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.245 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.245 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.245 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.246 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.249 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.320 248514 DEBUG nova.objects.instance [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'migration_context' on Instance uuid b9e0d5ab-483f-49a1-901a-c36f31ab710f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.373 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.374 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Ensure instance console log exists: /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.375 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.376 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.376 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 169 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.4 MiB/s wr, 236 op/s
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.535 248514 DEBUG nova.objects.instance [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c940191-84c7-423e-901a-233b14c2acec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.556 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.556 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Ensure instance console log exists: /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.557 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.557 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.557 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:29:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/322013128' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/322013128' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.829 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.861 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.866 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.906 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Successfully updated port: 41420fc6-e900-4745-a3c1-4f2541c9e1f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.939 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "refresh_cache-1c940191-84c7-423e-901a-233b14c2acec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.939 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquired lock "refresh_cache-1c940191-84c7-423e-901a-233b14c2acec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.939 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.984 248514 DEBUG nova.compute.manager [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received event network-changed-41420fc6-e900-4745-a3c1-4f2541c9e1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.984 248514 DEBUG nova.compute.manager [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Refreshing instance network info cache due to event network-changed-41420fc6-e900-4745-a3c1-4f2541c9e1f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:29:17 compute-0 nova_compute[248510]: 2025-12-13 08:29:17.984 248514 DEBUG oslo_concurrency.lockutils [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1c940191-84c7-423e-901a-233b14c2acec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.140 248514 INFO nova.virt.libvirt.driver [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Deleting instance files /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a_del
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.141 248514 INFO nova.virt.libvirt.driver [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Deletion of /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a_del complete
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.228 248514 INFO nova.virt.libvirt.driver [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Deleting instance files /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_del
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.229 248514 INFO nova.virt.libvirt.driver [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Deletion of /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_del complete
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.235 248514 INFO nova.compute.manager [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Took 3.28 seconds to destroy the instance on the hypervisor.
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.236 248514 DEBUG oslo.service.loopingcall [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.237 248514 DEBUG nova.compute.manager [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.237 248514 DEBUG nova.network.neutron [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.304 248514 INFO nova.compute.manager [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Took 6.50 seconds to destroy the instance on the hypervisor.
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.305 248514 DEBUG oslo.service.loopingcall [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.305 248514 DEBUG nova.compute.manager [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.306 248514 DEBUG nova.network.neutron [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:29:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:29:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876366977' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.474 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.476 248514 DEBUG nova.virt.libvirt.vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-1',id=57,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:09Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.477 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.477 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.479 248514 DEBUG nova.objects.instance [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.495 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.503 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:29:18 compute-0 nova_compute[248510]:   <uuid>b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d</uuid>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   <name>instance-00000039</name>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <nova:name>tempest-ListServersNegativeTestJSON-server-582157972-1</nova:name>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:29:17</nova:creationTime>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:29:18 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:29:18 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:29:18 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:29:18 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:29:18 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:29:18 compute-0 nova_compute[248510]:         <nova:user uuid="3b4cd12d84d34c95ac78f304b6e7546d">tempest-ListServersNegativeTestJSON-67858047-project-member</nova:user>
Dec 13 08:29:18 compute-0 nova_compute[248510]:         <nova:project uuid="1f0a98b431c940d98cf7e91fd7bdea03">tempest-ListServersNegativeTestJSON-67858047</nova:project>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:29:18 compute-0 nova_compute[248510]:         <nova:port uuid="c0518450-5f7c-4fa0-bf72-59ebcb5be073">
Dec 13 08:29:18 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <system>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <entry name="serial">b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d</entry>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <entry name="uuid">b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d</entry>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     </system>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   <os>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   </os>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   <features>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   </features>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk">
Dec 13 08:29:18 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       </source>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:29:18 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk.config">
Dec 13 08:29:18 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       </source>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:29:18 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:ad:c1:45"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <target dev="tapc0518450-5f"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/console.log" append="off"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <video>
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     </video>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:29:18 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:29:18 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:29:18 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:29:18 compute-0 nova_compute[248510]: </domain>
Dec 13 08:29:18 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.504 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Preparing to wait for external event network-vif-plugged-c0518450-5f7c-4fa0-bf72-59ebcb5be073 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.504 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.504 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.504 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.505 248514 DEBUG nova.virt.libvirt.vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-1',id=57,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:09Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.505 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.506 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.507 248514 DEBUG os_vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.507 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.508 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.508 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.511 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.511 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0518450-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.511 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc0518450-5f, col_values=(('external_ids', {'iface-id': 'c0518450-5f7c-4fa0-bf72-59ebcb5be073', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:c1:45', 'vm-uuid': 'b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.513 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:18 compute-0 NetworkManager[50376]: <info>  [1765614558.5142] manager: (tapc0518450-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.515 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.519 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.519 248514 INFO os_vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f')
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.609 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.609 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.610 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No VIF found with MAC fa:16:3e:ad:c1:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.610 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Using config drive
Dec 13 08:29:18 compute-0 nova_compute[248510]: 2025-12-13 08:29:18.635 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:18 compute-0 ceph-mon[76537]: pgmap v1955: 321 pgs: 321 active+clean; 169 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.4 MiB/s wr, 236 op/s
Dec 13 08:29:18 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2876366977' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 164 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 810 KiB/s rd, 4.1 MiB/s wr, 180 op/s
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.678 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Updating instance_info_cache with network_info: [{"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.755 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Releasing lock "refresh_cache-b9e0d5ab-483f-49a1-901a-c36f31ab710f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.756 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Instance network_info: |[{"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.758 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Start _get_guest_xml network_info=[{"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.764 248514 WARNING nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.769 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.770 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.774 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.775 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.775 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.775 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.776 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.776 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.776 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.776 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.777 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.777 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.777 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.778 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.778 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.778 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.781 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.941 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Creating config drive at /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/disk.config
Dec 13 08:29:19 compute-0 nova_compute[248510]: 2025-12-13 08:29:19.946 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2m5f05lx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:19 compute-0 podman[306051]: 2025-12-13 08:29:19.990208951 +0000 UTC m=+0.072336245 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 08:29:20 compute-0 podman[306049]: 2025-12-13 08:29:20.000897325 +0000 UTC m=+0.094393900 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:29:20 compute-0 podman[306048]: 2025-12-13 08:29:20.03919909 +0000 UTC m=+0.132805388 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.061 248514 DEBUG nova.network.neutron [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.090 248514 INFO nova.compute.manager [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Took 1.85 seconds to deallocate network for instance.
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.096 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2m5f05lx" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:20 compute-0 ceph-mon[76537]: pgmap v1956: 321 pgs: 321 active+clean; 164 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 810 KiB/s rd, 4.1 MiB/s wr, 180 op/s
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.216 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.222 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/disk.config b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.285 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.287 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:29:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:29:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4165741887' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.402 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/disk.config b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.403 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Deleting local config drive /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/disk.config because it was imported into RBD.
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.415 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.442 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.447 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:20 compute-0 kernel: tapc0518450-5f: entered promiscuous mode
Dec 13 08:29:20 compute-0 NetworkManager[50376]: <info>  [1765614560.4628] manager: (tapc0518450-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/235)
Dec 13 08:29:20 compute-0 ovn_controller[148476]: 2025-12-13T08:29:20Z|00531|binding|INFO|Claiming lport c0518450-5f7c-4fa0-bf72-59ebcb5be073 for this chassis.
Dec 13 08:29:20 compute-0 ovn_controller[148476]: 2025-12-13T08:29:20Z|00532|binding|INFO|c0518450-5f7c-4fa0-bf72-59ebcb5be073: Claiming fa:16:3e:ad:c1:45 10.100.0.9
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.491 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:20 compute-0 systemd-udevd[306199]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:29:20 compute-0 systemd-machined[210538]: New machine qemu-65-instance-00000039.
Dec 13 08:29:20 compute-0 NetworkManager[50376]: <info>  [1765614560.5110] device (tapc0518450-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:29:20 compute-0 NetworkManager[50376]: <info>  [1765614560.5123] device (tapc0518450-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:29:20 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-00000039.
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.515 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:c1:45 10.100.0.9'], port_security=['fa:16:3e:ad:c1:45 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63655ec3-e19a-48cf-bc09-d1f4e53966e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3562b8c-e174-48df-b01e-6261323c4619, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c0518450-5f7c-4fa0-bf72-59ebcb5be073) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.517 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c0518450-5f7c-4fa0-bf72-59ebcb5be073 in datapath f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 bound to our chassis
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.519 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6669b7a-7d21-4e4e-96cd-84193f6fdcf3
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.527 248514 DEBUG nova.network.neutron [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.534 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[152fef9f-ede0-437e-9162-96568cdf47fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.535 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6669b7a-71 in ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.538 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6669b7a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.538 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b196c0-a75b-406f-9064-1a9a541b5ff4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.539 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d9cc7f-61ef-4b93-b934-9e57aba8a9a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.549 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[16ffbcea-692c-425f-8a2c-502f95d30054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.559 248514 INFO nova.compute.manager [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Took 2.25 seconds to deallocate network for instance.
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.564 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:20 compute-0 ovn_controller[148476]: 2025-12-13T08:29:20Z|00533|binding|INFO|Setting lport c0518450-5f7c-4fa0-bf72-59ebcb5be073 ovn-installed in OVS
Dec 13 08:29:20 compute-0 ovn_controller[148476]: 2025-12-13T08:29:20Z|00534|binding|INFO|Setting lport c0518450-5f7c-4fa0-bf72-59ebcb5be073 up in Southbound
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.575 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b0cefd-cfbc-4b9a-b38b-38b9fcdbce00]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.610 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[44e87ca4-0b6d-4830-8000-b6ef3ac07153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.618 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bd5edcb9-47da-4656-9b52-39aa92068ca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 NetworkManager[50376]: <info>  [1765614560.6204] manager: (tapf6669b7a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/236)
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.622 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.633 248514 DEBUG oslo_concurrency.processutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.656 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[84351ad1-764b-4ff0-9d0c-6850fab7fb1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.660 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dd676a48-cf32-4345-acef-ed94080ae25e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.681 248514 DEBUG nova.compute.manager [req-9daa44bc-ed9a-40a5-840d-7cc7da990429 req-a1e5aed9-6880-4235-9a7b-26fcde648166 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received event network-vif-deleted-a795a3d9-1388-4e72-8bfd-271816a45466 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:20 compute-0 NetworkManager[50376]: <info>  [1765614560.6855] device (tapf6669b7a-70): carrier: link connected
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.692 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c909db25-9ca7-45a9-8380-37f9661e7e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.716 248514 DEBUG nova.network.neutron [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Updated VIF entry in instance network info cache for port c0518450-5f7c-4fa0-bf72-59ebcb5be073. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.716 248514 DEBUG nova.network.neutron [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Updating instance_info_cache with network_info: [{"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.717 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a3627f39-d13c-4cd5-aaa3-587d48dba923]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6669b7a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:9e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713790, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306254, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.735 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11603133-0e27-4830-98a5-dc8a0efff646]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8a:9e66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713790, 'tstamp': 713790}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306255, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.738 248514 DEBUG nova.compute.manager [req-82885f15-0c4a-4ed6-b98b-a7cf64980288 req-e07bf532-bdc3-43da-b7a6-223981ba9839 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-vif-deleted-3aadc575-dc9f-4823-82c7-112e9b9832fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.748 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.748 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-vif-unplugged-3aadc575-dc9f-4823-82c7-112e9b9832fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.748 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.748 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.749 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.749 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] No waiting events found dispatching network-vif-unplugged-3aadc575-dc9f-4823-82c7-112e9b9832fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.749 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-vif-unplugged-3aadc575-dc9f-4823-82c7-112e9b9832fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.749 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.749 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.750 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.750 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.750 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] No waiting events found dispatching network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.750 248514 WARNING nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received unexpected event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe for instance with vm_state active and task_state deleting.
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.757 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb261bc6-82f0-48db-93b0-3fa279955eaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6669b7a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:9e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713790, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306256, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.787 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[024004e3-b776-4169-8710-1b0353008cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.851 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab29a5e-891f-479b-b98b-35fbbc406680]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.852 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6669b7a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.853 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.853 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6669b7a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:20 compute-0 NetworkManager[50376]: <info>  [1765614560.8568] manager: (tapf6669b7a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Dec 13 08:29:20 compute-0 kernel: tapf6669b7a-70: entered promiscuous mode
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.859 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.862 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6669b7a-70, col_values=(('external_ids', {'iface-id': 'fc5b13bf-8bf9-4192-a467-a89c8b6706fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.863 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:20 compute-0 ovn_controller[148476]: 2025-12-13T08:29:20Z|00535|binding|INFO|Releasing lport fc5b13bf-8bf9-4192-a467-a89c8b6706fb from this chassis (sb_readonly=0)
Dec 13 08:29:20 compute-0 nova_compute[248510]: 2025-12-13 08:29:20.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.887 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6669b7a-7d21-4e4e-96cd-84193f6fdcf3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6669b7a-7d21-4e4e-96cd-84193f6fdcf3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.888 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bfabae5b-c00f-4fab-a7c2-173c3c921da6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.889 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/f6669b7a-7d21-4e4e-96cd-84193f6fdcf3.pid.haproxy
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID f6669b7a-7d21-4e4e-96cd-84193f6fdcf3
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:29:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.890 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'env', 'PROCESS_TAG=haproxy-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6669b7a-7d21-4e4e-96cd-84193f6fdcf3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012070986413704181 of space, bias 1.0, pg target 0.3621295924111254 quantized to 32 (current 32)
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670072192503635 of space, bias 1.0, pg target 0.20010216577510903 quantized to 32 (current 32)
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.011434439919754e-07 of space, bias 4.0, pg target 0.0008413721327903704 quantized to 16 (current 32)
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:29:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:29:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:29:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/590400138' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.053 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.056 248514 DEBUG nova.virt.libvirt.vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-2',id=58,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:11Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=b9e0d5ab-483f-49a1-901a-c36f31ab710f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.056 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.057 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.058 248514 DEBUG nova.objects.instance [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'pci_devices' on Instance uuid b9e0d5ab-483f-49a1-901a-c36f31ab710f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.087 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:29:21 compute-0 nova_compute[248510]:   <uuid>b9e0d5ab-483f-49a1-901a-c36f31ab710f</uuid>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   <name>instance-0000003a</name>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <nova:name>tempest-ListServersNegativeTestJSON-server-582157972-2</nova:name>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:29:19</nova:creationTime>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:29:21 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:29:21 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:29:21 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:29:21 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:29:21 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:29:21 compute-0 nova_compute[248510]:         <nova:user uuid="3b4cd12d84d34c95ac78f304b6e7546d">tempest-ListServersNegativeTestJSON-67858047-project-member</nova:user>
Dec 13 08:29:21 compute-0 nova_compute[248510]:         <nova:project uuid="1f0a98b431c940d98cf7e91fd7bdea03">tempest-ListServersNegativeTestJSON-67858047</nova:project>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:29:21 compute-0 nova_compute[248510]:         <nova:port uuid="dd059dc2-8c1a-49c4-b820-7cf31293c210">
Dec 13 08:29:21 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <system>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <entry name="serial">b9e0d5ab-483f-49a1-901a-c36f31ab710f</entry>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <entry name="uuid">b9e0d5ab-483f-49a1-901a-c36f31ab710f</entry>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     </system>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   <os>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   </os>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   <features>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   </features>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk">
Dec 13 08:29:21 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       </source>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:29:21 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk.config">
Dec 13 08:29:21 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       </source>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:29:21 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:5d:20:15"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <target dev="tapdd059dc2-8c"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/console.log" append="off"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <video>
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     </video>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:29:21 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:29:21 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:29:21 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:29:21 compute-0 nova_compute[248510]: </domain>
Dec 13 08:29:21 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.088 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Preparing to wait for external event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.089 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.089 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.090 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.091 248514 DEBUG nova.virt.libvirt.vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-2',id=58,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:11Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=b9e0d5ab-483f-49a1-901a-c36f31ab710f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.091 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.092 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.092 248514 DEBUG os_vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.093 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.094 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.094 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.098 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.098 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd059dc2-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.099 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdd059dc2-8c, col_values=(('external_ids', {'iface-id': 'dd059dc2-8c1a-49c4-b820-7cf31293c210', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:20:15', 'vm-uuid': 'b9e0d5ab-483f-49a1-901a-c36f31ab710f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:21 compute-0 NetworkManager[50376]: <info>  [1765614561.1024] manager: (tapdd059dc2-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.104 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.107 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.109 248514 INFO os_vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c')
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.169 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:29:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4165741887' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/590400138' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.170 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.170 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No VIF found with MAC fa:16:3e:5d:20:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.171 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Using config drive
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.190 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2144417386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.234 248514 DEBUG oslo_concurrency.processutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.239 248514 DEBUG nova.compute.provider_tree [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.255 248514 DEBUG nova.scheduler.client.report [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:21 compute-0 podman[306326]: 2025-12-13 08:29:21.265241181 +0000 UTC m=+0.045269438 container create 401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.287 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.289 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:21 compute-0 systemd[1]: Started libpod-conmon-401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d.scope.
Dec 13 08:29:21 compute-0 podman[306326]: 2025-12-13 08:29:21.240904641 +0000 UTC m=+0.020932918 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.344 248514 INFO nova.scheduler.client.report [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Deleted allocations for instance 917c2aaf-7701-4198-802a-0bfc5753885a
Dec 13 08:29:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:29:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7554b3fff23359996616576e29b124b752a5d83d38de1b07b4ecde93acadd26c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:21 compute-0 podman[306326]: 2025-12-13 08:29:21.372042707 +0000 UTC m=+0.152070984 container init 401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 08:29:21 compute-0 podman[306326]: 2025-12-13 08:29:21.379996743 +0000 UTC m=+0.160025000 container start 401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 08:29:21 compute-0 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [NOTICE]   (306348) : New worker (306350) forked
Dec 13 08:29:21 compute-0 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [NOTICE]   (306348) : Loading success.
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.422 248514 DEBUG oslo_concurrency.processutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.468 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Updating instance_info_cache with network_info: [{"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.477 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 180 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 736 KiB/s rd, 5.9 MiB/s wr, 186 op/s
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.513 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Releasing lock "refresh_cache-1c940191-84c7-423e-901a-233b14c2acec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.513 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Instance network_info: |[{"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.514 248514 DEBUG oslo_concurrency.lockutils [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1c940191-84c7-423e-901a-233b14c2acec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.514 248514 DEBUG nova.network.neutron [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Refreshing network info cache for port 41420fc6-e900-4745-a3c1-4f2541c9e1f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.518 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Start _get_guest_xml network_info=[{"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.522 248514 WARNING nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.532 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.532 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.536 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.537 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.537 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.537 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.538 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.538 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.539 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.539 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.539 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.539 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.540 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.540 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.540 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.540 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.544 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.864 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Creating config drive at /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/disk.config
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.872 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbvux0b8u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.942 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614561.941771, b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.943 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] VM Started (Lifecycle Event)
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.973 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.979 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614561.9428473, b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:21 compute-0 nova_compute[248510]: 2025-12-13 08:29:21.979 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] VM Paused (Lifecycle Event)
Dec 13 08:29:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947150714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.008 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.009 248514 DEBUG oslo_concurrency.processutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.015 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.023 248514 DEBUG nova.compute.provider_tree [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.026 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbvux0b8u" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.052 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.056 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/disk.config b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:29:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773651429' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.100 248514 DEBUG nova.scheduler.client.report [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.105 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.122 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.149 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.155 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2144417386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:22 compute-0 ceph-mon[76537]: pgmap v1957: 321 pgs: 321 active+clean; 180 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 736 KiB/s rd, 5.9 MiB/s wr, 186 op/s
Dec 13 08:29:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2947150714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3773651429' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.193 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.209 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/disk.config b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.209 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Deleting local config drive /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/disk.config because it was imported into RBD.
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.252 248514 INFO nova.scheduler.client.report [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Deleted allocations for instance 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52
Dec 13 08:29:22 compute-0 NetworkManager[50376]: <info>  [1765614562.2715] manager: (tapdd059dc2-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/239)
Dec 13 08:29:22 compute-0 kernel: tapdd059dc2-8c: entered promiscuous mode
Dec 13 08:29:22 compute-0 NetworkManager[50376]: <info>  [1765614562.2831] device (tapdd059dc2-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:29:22 compute-0 NetworkManager[50376]: <info>  [1765614562.2842] device (tapdd059dc2-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:22 compute-0 ovn_controller[148476]: 2025-12-13T08:29:22Z|00536|binding|INFO|Claiming lport dd059dc2-8c1a-49c4-b820-7cf31293c210 for this chassis.
Dec 13 08:29:22 compute-0 ovn_controller[148476]: 2025-12-13T08:29:22Z|00537|binding|INFO|dd059dc2-8c1a-49c4-b820-7cf31293c210: Claiming fa:16:3e:5d:20:15 10.100.0.11
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.345 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:20:15 10.100.0.11'], port_security=['fa:16:3e:5d:20:15 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b9e0d5ab-483f-49a1-901a-c36f31ab710f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63655ec3-e19a-48cf-bc09-d1f4e53966e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3562b8c-e174-48df-b01e-6261323c4619, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=dd059dc2-8c1a-49c4-b820-7cf31293c210) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.347 158419 INFO neutron.agent.ovn.metadata.agent [-] Port dd059dc2-8c1a-49c4-b820-7cf31293c210 in datapath f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 bound to our chassis
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.348 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6669b7a-7d21-4e4e-96cd-84193f6fdcf3
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.358 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:22 compute-0 ovn_controller[148476]: 2025-12-13T08:29:22Z|00538|binding|INFO|Setting lport dd059dc2-8c1a-49c4-b820-7cf31293c210 ovn-installed in OVS
Dec 13 08:29:22 compute-0 ovn_controller[148476]: 2025-12-13T08:29:22Z|00539|binding|INFO|Setting lport dd059dc2-8c1a-49c4-b820-7cf31293c210 up in Southbound
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.362 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.369 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8c96d15d-b922-4110-89d0-bf5f640b99cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:22 compute-0 systemd-machined[210538]: New machine qemu-66-instance-0000003a.
Dec 13 08:29:22 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-0000003a.
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.412 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[afc6d8b6-5d49-40e4-baa1-6460cd98736f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.418 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[649ebba5-7d44-44e6-a3ed-3b01f3c491e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.466 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[96e55fa7-e0ab-4571-97c6-e74918cc23dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.494 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a581d960-9833-4112-8da9-48b3a7daf068]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6669b7a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:9e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713790, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306550, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.518 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94d33e0d-d64b-433f-8ea9-849bc2ed4654]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713803, 'tstamp': 713803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306551, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713806, 'tstamp': 713806}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306551, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.521 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6669b7a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.524 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6669b7a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.525 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.525 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6669b7a-70, col_values=(('external_ids', {'iface-id': 'fc5b13bf-8bf9-4192-a467-a89c8b6706fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.526 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:29:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605435020' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.780 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.782 248514 DEBUG nova.virt.libvirt.vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-3',id=59,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest
-ListServersNegativeTestJSON-67858047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:13Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=1c940191-84c7-423e-901a-233b14c2acec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.783 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.784 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.785 248514 DEBUG nova.objects.instance [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c940191-84c7-423e-901a-233b14c2acec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.809 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:29:22 compute-0 nova_compute[248510]:   <uuid>1c940191-84c7-423e-901a-233b14c2acec</uuid>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   <name>instance-0000003b</name>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <nova:name>tempest-ListServersNegativeTestJSON-server-582157972-3</nova:name>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:29:21</nova:creationTime>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:29:22 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:29:22 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:29:22 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:29:22 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:29:22 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:29:22 compute-0 nova_compute[248510]:         <nova:user uuid="3b4cd12d84d34c95ac78f304b6e7546d">tempest-ListServersNegativeTestJSON-67858047-project-member</nova:user>
Dec 13 08:29:22 compute-0 nova_compute[248510]:         <nova:project uuid="1f0a98b431c940d98cf7e91fd7bdea03">tempest-ListServersNegativeTestJSON-67858047</nova:project>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:29:22 compute-0 nova_compute[248510]:         <nova:port uuid="41420fc6-e900-4745-a3c1-4f2541c9e1f5">
Dec 13 08:29:22 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <system>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <entry name="serial">1c940191-84c7-423e-901a-233b14c2acec</entry>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <entry name="uuid">1c940191-84c7-423e-901a-233b14c2acec</entry>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     </system>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   <os>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   </os>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   <features>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   </features>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/1c940191-84c7-423e-901a-233b14c2acec_disk">
Dec 13 08:29:22 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       </source>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:29:22 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/1c940191-84c7-423e-901a-233b14c2acec_disk.config">
Dec 13 08:29:22 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       </source>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:29:22 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:de:08:03"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <target dev="tap41420fc6-e9"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/console.log" append="off"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <video>
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     </video>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:29:22 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:29:22 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:29:22 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:29:22 compute-0 nova_compute[248510]: </domain>
Dec 13 08:29:22 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.811 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Preparing to wait for external event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.811 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.811 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.811 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.812 248514 DEBUG nova.virt.libvirt.vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-3',id=59,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:13Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=1c940191-84c7-423e-901a-233b14c2acec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.813 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.813 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.813 248514 DEBUG os_vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.814 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.814 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.815 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.818 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.818 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41420fc6-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.818 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41420fc6-e9, col_values=(('external_ids', {'iface-id': '41420fc6-e900-4745-a3c1-4f2541c9e1f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:08:03', 'vm-uuid': '1c940191-84c7-423e-901a-233b14c2acec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:22 compute-0 NetworkManager[50376]: <info>  [1765614562.8210] manager: (tap41420fc6-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.825 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.826 248514 INFO os_vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9')
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.828 248514 DEBUG nova.compute.manager [req-63d967ba-bcb5-4059-8b29-7a0cd688ff37 req-16c23b2e-da80-4692-b4a5-85bd8102814a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.829 248514 DEBUG oslo_concurrency.lockutils [req-63d967ba-bcb5-4059-8b29-7a0cd688ff37 req-16c23b2e-da80-4692-b4a5-85bd8102814a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.829 248514 DEBUG oslo_concurrency.lockutils [req-63d967ba-bcb5-4059-8b29-7a0cd688ff37 req-16c23b2e-da80-4692-b4a5-85bd8102814a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.829 248514 DEBUG oslo_concurrency.lockutils [req-63d967ba-bcb5-4059-8b29-7a0cd688ff37 req-16c23b2e-da80-4692-b4a5-85bd8102814a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.830 248514 DEBUG nova.compute.manager [req-63d967ba-bcb5-4059-8b29-7a0cd688ff37 req-16c23b2e-da80-4692-b4a5-85bd8102814a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Processing event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.886 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.887 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.887 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No VIF found with MAC fa:16:3e:de:08:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.888 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Using config drive
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.912 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.952 248514 DEBUG nova.compute.manager [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Received event network-vif-plugged-c0518450-5f7c-4fa0-bf72-59ebcb5be073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.953 248514 DEBUG oslo_concurrency.lockutils [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.953 248514 DEBUG oslo_concurrency.lockutils [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.954 248514 DEBUG oslo_concurrency.lockutils [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.954 248514 DEBUG nova.compute.manager [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Processing event network-vif-plugged-c0518450-5f7c-4fa0-bf72-59ebcb5be073 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.954 248514 DEBUG nova.compute.manager [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Received event network-vif-plugged-c0518450-5f7c-4fa0-bf72-59ebcb5be073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.954 248514 DEBUG oslo_concurrency.lockutils [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.954 248514 DEBUG oslo_concurrency.lockutils [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.955 248514 DEBUG oslo_concurrency.lockutils [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.955 248514 DEBUG nova.compute.manager [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] No waiting events found dispatching network-vif-plugged-c0518450-5f7c-4fa0-bf72-59ebcb5be073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.955 248514 WARNING nova.compute.manager [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Received unexpected event network-vif-plugged-c0518450-5f7c-4fa0-bf72-59ebcb5be073 for instance with vm_state building and task_state spawning.
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.956 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.960 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614562.9598994, b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.960 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] VM Resumed (Lifecycle Event)
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.962 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.968 248514 INFO nova.virt.libvirt.driver [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Instance spawned successfully.
Dec 13 08:29:22 compute-0 nova_compute[248510]: 2025-12-13 08:29:22.969 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.003 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.010 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.033 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.033 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.034 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.034 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.035 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.035 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.073 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.111 248514 INFO nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Took 12.93 seconds to spawn the instance on the hypervisor.
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.111 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:23 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/605435020' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.200 248514 INFO nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Took 14.53 seconds to build instance.
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.223 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.321 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Creating config drive at /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/disk.config
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.328 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7kxczao9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.458 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614563.4581342, b9e0d5ab-483f-49a1-901a-c36f31ab710f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.459 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] VM Started (Lifecycle Event)
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.462 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.467 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.471 248514 INFO nova.virt.libvirt.driver [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Instance spawned successfully.
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.472 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:29:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 180 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 5.5 MiB/s wr, 187 op/s
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.481 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7kxczao9" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.512 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.517 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/disk.config 1c940191-84c7-423e-901a-233b14c2acec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.559 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.567 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.573 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.574 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.575 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.575 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.576 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.577 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.591 248514 DEBUG nova.network.neutron [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Updated VIF entry in instance network info cache for port 41420fc6-e900-4745-a3c1-4f2541c9e1f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.591 248514 DEBUG nova.network.neutron [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Updating instance_info_cache with network_info: [{"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.612 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.613 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614563.4582496, b9e0d5ab-483f-49a1-901a-c36f31ab710f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.613 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] VM Paused (Lifecycle Event)
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.640 248514 DEBUG oslo_concurrency.lockutils [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1c940191-84c7-423e-901a-233b14c2acec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.661 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.666 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614563.465802, b9e0d5ab-483f-49a1-901a-c36f31ab710f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.666 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] VM Resumed (Lifecycle Event)
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.711 248514 INFO nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Took 11.73 seconds to spawn the instance on the hypervisor.
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.711 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.712 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/disk.config 1c940191-84c7-423e-901a-233b14c2acec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.713 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Deleting local config drive /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/disk.config because it was imported into RBD.
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.725 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.728 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:29:23 compute-0 kernel: tap41420fc6-e9: entered promiscuous mode
Dec 13 08:29:23 compute-0 NetworkManager[50376]: <info>  [1765614563.7787] manager: (tap41420fc6-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Dec 13 08:29:23 compute-0 ovn_controller[148476]: 2025-12-13T08:29:23Z|00540|binding|INFO|Claiming lport 41420fc6-e900-4745-a3c1-4f2541c9e1f5 for this chassis.
Dec 13 08:29:23 compute-0 ovn_controller[148476]: 2025-12-13T08:29:23Z|00541|binding|INFO|41420fc6-e900-4745-a3c1-4f2541c9e1f5: Claiming fa:16:3e:de:08:03 10.100.0.4
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.790 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.794 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:08:03 10.100.0.4'], port_security=['fa:16:3e:de:08:03 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c940191-84c7-423e-901a-233b14c2acec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63655ec3-e19a-48cf-bc09-d1f4e53966e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3562b8c-e174-48df-b01e-6261323c4619, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=41420fc6-e900-4745-a3c1-4f2541c9e1f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.796 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 41420fc6-e900-4745-a3c1-4f2541c9e1f5 in datapath f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 bound to our chassis
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.798 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6669b7a-7d21-4e4e-96cd-84193f6fdcf3
Dec 13 08:29:23 compute-0 ovn_controller[148476]: 2025-12-13T08:29:23Z|00542|binding|INFO|Setting lport 41420fc6-e900-4745-a3c1-4f2541c9e1f5 ovn-installed in OVS
Dec 13 08:29:23 compute-0 ovn_controller[148476]: 2025-12-13T08:29:23Z|00543|binding|INFO|Setting lport 41420fc6-e900-4745-a3c1-4f2541c9e1f5 up in Southbound
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.800 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.807 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.822 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6022150-2bfd-4bbc-84b1-b0bd0c3aadec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:23 compute-0 systemd-udevd[306671]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:29:23 compute-0 systemd-machined[210538]: New machine qemu-67-instance-0000003b.
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.838 248514 INFO nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Took 15.15 seconds to build instance.
Dec 13 08:29:23 compute-0 NetworkManager[50376]: <info>  [1765614563.8422] device (tap41420fc6-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:29:23 compute-0 NetworkManager[50376]: <info>  [1765614563.8433] device (tap41420fc6-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:29:23 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-0000003b.
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.864 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7a527502-7182-4c6c-81f7-987225a76b0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.866 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.868 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c0dc80c3-40ba-46d5-99b8-5eb15b9d74fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.904 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f038b60c-af0a-4867-8efa-7f2ebbdf7864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.928 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d48cf201-e9a4-401c-bebc-c66d73e661a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6669b7a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:9e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713790, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306683, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.949 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6edbe335-dba2-43a3-bf16-ffbce6a61936]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713803, 'tstamp': 713803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306685, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713806, 'tstamp': 713806}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306685, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.952 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6669b7a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:23 compute-0 nova_compute[248510]: 2025-12-13 08:29:23.955 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.955 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6669b7a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.955 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.956 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6669b7a-70, col_values=(('external_ids', {'iface-id': 'fc5b13bf-8bf9-4192-a467-a89c8b6706fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.956 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:24 compute-0 ceph-mon[76537]: pgmap v1958: 321 pgs: 321 active+clean; 180 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 5.5 MiB/s wr, 187 op/s
Dec 13 08:29:24 compute-0 nova_compute[248510]: 2025-12-13 08:29:24.361 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614564.3607767, 1c940191-84c7-423e-901a-233b14c2acec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:24 compute-0 nova_compute[248510]: 2025-12-13 08:29:24.362 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] VM Started (Lifecycle Event)
Dec 13 08:29:24 compute-0 nova_compute[248510]: 2025-12-13 08:29:24.390 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:24 compute-0 nova_compute[248510]: 2025-12-13 08:29:24.395 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614564.3609877, 1c940191-84c7-423e-901a-233b14c2acec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:24 compute-0 nova_compute[248510]: 2025-12-13 08:29:24.395 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] VM Paused (Lifecycle Event)
Dec 13 08:29:24 compute-0 nova_compute[248510]: 2025-12-13 08:29:24.419 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:24 compute-0 nova_compute[248510]: 2025-12-13 08:29:24.423 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:29:24 compute-0 nova_compute[248510]: 2025-12-13 08:29:24.448 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.074 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.131 248514 DEBUG nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.132 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.132 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.133 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.133 248514 DEBUG nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] No waiting events found dispatching network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.133 248514 WARNING nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received unexpected event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 for instance with vm_state active and task_state None.
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.133 248514 DEBUG nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.134 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.134 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.134 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.135 248514 DEBUG nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Processing event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.135 248514 DEBUG nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.135 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.135 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.136 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.136 248514 DEBUG nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] No waiting events found dispatching network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.136 248514 WARNING nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received unexpected event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 for instance with vm_state building and task_state spawning.
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.137 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.163 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614565.1474512, 1c940191-84c7-423e-901a-233b14c2acec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.163 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] VM Resumed (Lifecycle Event)
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.165 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.168 248514 INFO nova.virt.libvirt.driver [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Instance spawned successfully.
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.168 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.221 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.222 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.222 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.223 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.223 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.224 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.245 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.250 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:29:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.430 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:29:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 181 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.4 MiB/s wr, 213 op/s
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.480 248514 INFO nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Took 11.97 seconds to spawn the instance on the hypervisor.
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.482 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.583 248514 INFO nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Took 16.84 seconds to build instance.
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.614 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:25.991 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:29:25 compute-0 nova_compute[248510]: 2025-12-13 08:29:25.993 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:25.993 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:29:26 compute-0 ceph-mon[76537]: pgmap v1959: 321 pgs: 321 active+clean; 181 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.4 MiB/s wr, 213 op/s
Dec 13 08:29:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 181 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 201 op/s
Dec 13 08:29:27 compute-0 nova_compute[248510]: 2025-12-13 08:29:27.657 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614552.6348193, 917c2aaf-7701-4198-802a-0bfc5753885a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:27 compute-0 nova_compute[248510]: 2025-12-13 08:29:27.659 248514 INFO nova.compute.manager [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] VM Stopped (Lifecycle Event)
Dec 13 08:29:27 compute-0 nova_compute[248510]: 2025-12-13 08:29:27.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:27.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:28 compute-0 nova_compute[248510]: 2025-12-13 08:29:28.050 248514 DEBUG nova.compute.manager [None req-e5e5bbeb-698b-439d-8010-78951fa7a7f7 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:28 compute-0 ceph-mon[76537]: pgmap v1960: 321 pgs: 321 active+clean; 181 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 201 op/s
Dec 13 08:29:28 compute-0 nova_compute[248510]: 2025-12-13 08:29:28.882 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614553.644247, 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:28 compute-0 nova_compute[248510]: 2025-12-13 08:29:28.884 248514 INFO nova.compute.manager [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] VM Stopped (Lifecycle Event)
Dec 13 08:29:28 compute-0 nova_compute[248510]: 2025-12-13 08:29:28.914 248514 DEBUG nova.compute.manager [None req-8f75ad4b-345d-4e85-94c3-3de360eb2982 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.396 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.398 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.399 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.399 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.400 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.403 248514 INFO nova.compute.manager [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Terminating instance
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.405 248514 DEBUG nova.compute.manager [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:29:29 compute-0 kernel: tapc0518450-5f (unregistering): left promiscuous mode
Dec 13 08:29:29 compute-0 NetworkManager[50376]: <info>  [1765614569.4547] device (tapc0518450-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:29:29 compute-0 ovn_controller[148476]: 2025-12-13T08:29:29Z|00544|binding|INFO|Releasing lport c0518450-5f7c-4fa0-bf72-59ebcb5be073 from this chassis (sb_readonly=0)
Dec 13 08:29:29 compute-0 ovn_controller[148476]: 2025-12-13T08:29:29Z|00545|binding|INFO|Setting lport c0518450-5f7c-4fa0-bf72-59ebcb5be073 down in Southbound
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.463 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:29 compute-0 ovn_controller[148476]: 2025-12-13T08:29:29Z|00546|binding|INFO|Removing iface tapc0518450-5f ovn-installed in OVS
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.471 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:c1:45 10.100.0.9'], port_security=['fa:16:3e:ad:c1:45 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63655ec3-e19a-48cf-bc09-d1f4e53966e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3562b8c-e174-48df-b01e-6261323c4619, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c0518450-5f7c-4fa0-bf72-59ebcb5be073) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.472 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c0518450-5f7c-4fa0-bf72-59ebcb5be073 in datapath f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 unbound from our chassis
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.474 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6669b7a-7d21-4e4e-96cd-84193f6fdcf3
Dec 13 08:29:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 181 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.3 MiB/s wr, 318 op/s
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.490 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:29 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Deactivated successfully.
Dec 13 08:29:29 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Consumed 7.667s CPU time.
Dec 13 08:29:29 compute-0 systemd-machined[210538]: Machine qemu-65-instance-00000039 terminated.
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.510 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c240f41-f2d0-4324-8a49-c3bd27bce9cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.549 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[694ca023-bd79-4a57-9c3a-804810642fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.553 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[961e7163-5290-478b-8ce6-a76ee1ff27bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.587 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[66480a1e-93df-4c4f-8953-34cf6ce6f6a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.623 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aa36075d-ade7-431b-9ea3-dc758cb89123]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6669b7a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:9e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713790, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306739, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.651 248514 INFO nova.virt.libvirt.driver [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Instance destroyed successfully.
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.652 248514 DEBUG nova.objects.instance [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'resources' on Instance uuid b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.661 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c60a74-f9e3-4e5b-bb3f-1ef11fda70d9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713803, 'tstamp': 713803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306744, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713806, 'tstamp': 713806}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306744, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.664 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6669b7a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.673 248514 DEBUG nova.virt.libvirt.vif [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-1',id=57,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:29:23Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.674 248514 DEBUG nova.network.os_vif_util [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.675 248514 DEBUG nova.network.os_vif_util [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.676 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6669b7a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.677 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.677 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6669b7a-70, col_values=(('external_ids', {'iface-id': 'fc5b13bf-8bf9-4192-a467-a89c8b6706fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.676 248514 DEBUG os_vif [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:29:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.678 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.681 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0518450-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.684 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:29 compute-0 nova_compute[248510]: 2025-12-13 08:29:29.687 248514 INFO os_vif [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f')
Dec 13 08:29:30 compute-0 nova_compute[248510]: 2025-12-13 08:29:30.050 248514 INFO nova.virt.libvirt.driver [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Deleting instance files /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_del
Dec 13 08:29:30 compute-0 nova_compute[248510]: 2025-12-13 08:29:30.052 248514 INFO nova.virt.libvirt.driver [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Deletion of /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_del complete
Dec 13 08:29:30 compute-0 nova_compute[248510]: 2025-12-13 08:29:30.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:30 compute-0 nova_compute[248510]: 2025-12-13 08:29:30.254 248514 INFO nova.compute.manager [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Took 0.85 seconds to destroy the instance on the hypervisor.
Dec 13 08:29:30 compute-0 nova_compute[248510]: 2025-12-13 08:29:30.255 248514 DEBUG oslo.service.loopingcall [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:29:30 compute-0 nova_compute[248510]: 2025-12-13 08:29:30.255 248514 DEBUG nova.compute.manager [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:29:30 compute-0 nova_compute[248510]: 2025-12-13 08:29:30.256 248514 DEBUG nova.network.neutron [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:29:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:29:30 compute-0 ceph-mon[76537]: pgmap v1961: 321 pgs: 321 active+clean; 181 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.3 MiB/s wr, 318 op/s
Dec 13 08:29:31 compute-0 nova_compute[248510]: 2025-12-13 08:29:31.017 248514 DEBUG nova.network.neutron [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:31 compute-0 nova_compute[248510]: 2025-12-13 08:29:31.046 248514 INFO nova.compute.manager [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Took 0.79 seconds to deallocate network for instance.
Dec 13 08:29:31 compute-0 nova_compute[248510]: 2025-12-13 08:29:31.112 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:31 compute-0 nova_compute[248510]: 2025-12-13 08:29:31.112 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:31 compute-0 nova_compute[248510]: 2025-12-13 08:29:31.201 248514 DEBUG nova.compute.manager [req-c26de796-ac97-45a4-9675-57b51e89eb85 req-f5c60551-f274-4e74-9cc2-649526a33d8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Received event network-vif-deleted-c0518450-5f7c-4fa0-bf72-59ebcb5be073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:31 compute-0 nova_compute[248510]: 2025-12-13 08:29:31.225 248514 DEBUG oslo_concurrency.processutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 181 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 253 op/s
Dec 13 08:29:31 compute-0 sudo[306792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:29:31 compute-0 sudo[306792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:29:31 compute-0 sudo[306792]: pam_unix(sudo:session): session closed for user root
Dec 13 08:29:31 compute-0 sudo[306817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:29:31 compute-0 sudo[306817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:29:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2166302626' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:31 compute-0 nova_compute[248510]: 2025-12-13 08:29:31.807 248514 DEBUG oslo_concurrency.processutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:31 compute-0 nova_compute[248510]: 2025-12-13 08:29:31.815 248514 DEBUG nova.compute.provider_tree [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:31 compute-0 nova_compute[248510]: 2025-12-13 08:29:31.834 248514 DEBUG nova.scheduler.client.report [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:31 compute-0 nova_compute[248510]: 2025-12-13 08:29:31.878 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:31 compute-0 nova_compute[248510]: 2025-12-13 08:29:31.916 248514 INFO nova.scheduler.client.report [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Deleted allocations for instance b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d
Dec 13 08:29:32 compute-0 nova_compute[248510]: 2025-12-13 08:29:32.005 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:32 compute-0 sudo[306817]: pam_unix(sudo:session): session closed for user root
Dec 13 08:29:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:29:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:29:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:29:32 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:29:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:29:32 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:29:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:29:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:29:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:29:32 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:29:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:29:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:29:32 compute-0 sudo[306874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:29:32 compute-0 sudo[306874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:29:32 compute-0 sudo[306874]: pam_unix(sudo:session): session closed for user root
Dec 13 08:29:32 compute-0 sudo[306899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:29:32 compute-0 sudo[306899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:29:32 compute-0 ceph-mon[76537]: pgmap v1962: 321 pgs: 321 active+clean; 181 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 253 op/s
Dec 13 08:29:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2166302626' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:29:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:29:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:29:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:29:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:29:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:29:32 compute-0 podman[306934]: 2025-12-13 08:29:32.838911765 +0000 UTC m=+0.047091843 container create f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:29:32 compute-0 systemd[1]: Started libpod-conmon-f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23.scope.
Dec 13 08:29:32 compute-0 podman[306934]: 2025-12-13 08:29:32.81681326 +0000 UTC m=+0.024993358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:29:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:29:32 compute-0 podman[306934]: 2025-12-13 08:29:32.954028765 +0000 UTC m=+0.162208833 container init f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 08:29:32 compute-0 podman[306934]: 2025-12-13 08:29:32.963666083 +0000 UTC m=+0.171846131 container start f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 08:29:32 compute-0 podman[306934]: 2025-12-13 08:29:32.967022566 +0000 UTC m=+0.175202644 container attach f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:29:32 compute-0 blissful_perlman[306950]: 167 167
Dec 13 08:29:32 compute-0 systemd[1]: libpod-f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23.scope: Deactivated successfully.
Dec 13 08:29:33 compute-0 podman[306955]: 2025-12-13 08:29:33.020868923 +0000 UTC m=+0.031900667 container died f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:29:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf99735017157d0fb3ed06b9e8586a470ff55e1a60b5eafc8da2c3c2837fad39-merged.mount: Deactivated successfully.
Dec 13 08:29:33 compute-0 podman[306955]: 2025-12-13 08:29:33.063333571 +0000 UTC m=+0.074365295 container remove f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:29:33 compute-0 systemd[1]: libpod-conmon-f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23.scope: Deactivated successfully.
Dec 13 08:29:33 compute-0 podman[306977]: 2025-12-13 08:29:33.287728278 +0000 UTC m=+0.054318862 container create 56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclean, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 08:29:33 compute-0 systemd[1]: Started libpod-conmon-56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a.scope.
Dec 13 08:29:33 compute-0 podman[306977]: 2025-12-13 08:29:33.264938875 +0000 UTC m=+0.031529489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:29:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767f7872f5d9b25be81bd1247c64a2fd6eab95b4eb135bb4866572614a0b72df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767f7872f5d9b25be81bd1247c64a2fd6eab95b4eb135bb4866572614a0b72df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767f7872f5d9b25be81bd1247c64a2fd6eab95b4eb135bb4866572614a0b72df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767f7872f5d9b25be81bd1247c64a2fd6eab95b4eb135bb4866572614a0b72df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767f7872f5d9b25be81bd1247c64a2fd6eab95b4eb135bb4866572614a0b72df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:33 compute-0 podman[306977]: 2025-12-13 08:29:33.403285009 +0000 UTC m=+0.169875613 container init 56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclean, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 08:29:33 compute-0 podman[306977]: 2025-12-13 08:29:33.410987249 +0000 UTC m=+0.177577833 container start 56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 08:29:33 compute-0 podman[306977]: 2025-12-13 08:29:33.415613023 +0000 UTC m=+0.182203607 container attach 56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclean, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:29:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 154 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 39 KiB/s wr, 242 op/s
Dec 13 08:29:33 compute-0 happy_mclean[306993]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:29:33 compute-0 happy_mclean[306993]: --> All data devices are unavailable
Dec 13 08:29:33 compute-0 systemd[1]: libpod-56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a.scope: Deactivated successfully.
Dec 13 08:29:33 compute-0 podman[306977]: 2025-12-13 08:29:33.984795107 +0000 UTC m=+0.751385691 container died 56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclean, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Dec 13 08:29:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-767f7872f5d9b25be81bd1247c64a2fd6eab95b4eb135bb4866572614a0b72df-merged.mount: Deactivated successfully.
Dec 13 08:29:34 compute-0 podman[306977]: 2025-12-13 08:29:34.033047548 +0000 UTC m=+0.799638132 container remove 56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclean, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Dec 13 08:29:34 compute-0 systemd[1]: libpod-conmon-56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a.scope: Deactivated successfully.
Dec 13 08:29:34 compute-0 sudo[306899]: pam_unix(sudo:session): session closed for user root
Dec 13 08:29:34 compute-0 sudo[307026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:29:34 compute-0 sudo[307026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:29:34 compute-0 sudo[307026]: pam_unix(sudo:session): session closed for user root
Dec 13 08:29:34 compute-0 sudo[307051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:29:34 compute-0 sudo[307051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:29:34 compute-0 podman[307088]: 2025-12-13 08:29:34.568263383 +0000 UTC m=+0.052618219 container create aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_benz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:29:34 compute-0 ceph-mon[76537]: pgmap v1963: 321 pgs: 321 active+clean; 154 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 39 KiB/s wr, 242 op/s
Dec 13 08:29:34 compute-0 systemd[1]: Started libpod-conmon-aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7.scope.
Dec 13 08:29:34 compute-0 podman[307088]: 2025-12-13 08:29:34.546314752 +0000 UTC m=+0.030669608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:29:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:29:34 compute-0 podman[307088]: 2025-12-13 08:29:34.662039847 +0000 UTC m=+0.146394713 container init aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_benz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Dec 13 08:29:34 compute-0 podman[307088]: 2025-12-13 08:29:34.671632754 +0000 UTC m=+0.155987590 container start aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:29:34 compute-0 inspiring_benz[307104]: 167 167
Dec 13 08:29:34 compute-0 systemd[1]: libpod-aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7.scope: Deactivated successfully.
Dec 13 08:29:34 compute-0 podman[307088]: 2025-12-13 08:29:34.680151574 +0000 UTC m=+0.164506410 container attach aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_benz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 08:29:34 compute-0 podman[307088]: 2025-12-13 08:29:34.68119047 +0000 UTC m=+0.165545306 container died aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_benz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:29:34 compute-0 nova_compute[248510]: 2025-12-13 08:29:34.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a73f73964b2e6b9b35bfb74e0cbc0a6f661be39fb8d598da99cee150d22736-merged.mount: Deactivated successfully.
Dec 13 08:29:34 compute-0 podman[307088]: 2025-12-13 08:29:34.767380406 +0000 UTC m=+0.251735242 container remove aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:29:34 compute-0 systemd[1]: libpod-conmon-aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7.scope: Deactivated successfully.
Dec 13 08:29:34 compute-0 podman[307127]: 2025-12-13 08:29:34.970720474 +0000 UTC m=+0.044103030 container create ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 08:29:35 compute-0 systemd[1]: Started libpod-conmon-ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b.scope.
Dec 13 08:29:35 compute-0 podman[307127]: 2025-12-13 08:29:34.952080824 +0000 UTC m=+0.025463400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:29:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6351239223a19975d43c2593b75d008b37b16912e33fc0a55b5c3acf4bbe9664/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6351239223a19975d43c2593b75d008b37b16912e33fc0a55b5c3acf4bbe9664/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6351239223a19975d43c2593b75d008b37b16912e33fc0a55b5c3acf4bbe9664/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6351239223a19975d43c2593b75d008b37b16912e33fc0a55b5c3acf4bbe9664/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:35 compute-0 podman[307127]: 2025-12-13 08:29:35.079268882 +0000 UTC m=+0.152651458 container init ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 13 08:29:35 compute-0 podman[307127]: 2025-12-13 08:29:35.088694114 +0000 UTC m=+0.162076670 container start ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:29:35 compute-0 podman[307127]: 2025-12-13 08:29:35.095428971 +0000 UTC m=+0.168811527 container attach ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 08:29:35 compute-0 nova_compute[248510]: 2025-12-13 08:29:35.132 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:35 compute-0 ovn_controller[148476]: 2025-12-13T08:29:35Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:20:15 10.100.0.11
Dec 13 08:29:35 compute-0 ovn_controller[148476]: 2025-12-13T08:29:35Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:20:15 10.100.0.11
Dec 13 08:29:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]: {
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:     "0": [
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:         {
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "devices": [
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "/dev/loop3"
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             ],
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_name": "ceph_lv0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_size": "21470642176",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "name": "ceph_lv0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "tags": {
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.cluster_name": "ceph",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.crush_device_class": "",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.encrypted": "0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.objectstore": "bluestore",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.osd_id": "0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.type": "block",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.vdo": "0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.with_tpm": "0"
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             },
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "type": "block",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "vg_name": "ceph_vg0"
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:         }
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:     ],
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:     "1": [
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:         {
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "devices": [
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "/dev/loop4"
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             ],
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_name": "ceph_lv1",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_size": "21470642176",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "name": "ceph_lv1",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "tags": {
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.cluster_name": "ceph",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.crush_device_class": "",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.encrypted": "0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.objectstore": "bluestore",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.osd_id": "1",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.type": "block",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.vdo": "0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.with_tpm": "0"
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             },
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "type": "block",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "vg_name": "ceph_vg1"
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:         }
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:     ],
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:     "2": [
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:         {
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "devices": [
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "/dev/loop5"
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             ],
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_name": "ceph_lv2",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_size": "21470642176",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "name": "ceph_lv2",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "tags": {
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.cluster_name": "ceph",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.crush_device_class": "",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.encrypted": "0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.objectstore": "bluestore",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.osd_id": "2",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.type": "block",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.vdo": "0",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:                 "ceph.with_tpm": "0"
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             },
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "type": "block",
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:             "vg_name": "ceph_vg2"
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:         }
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]:     ]
Dec 13 08:29:35 compute-0 exciting_engelbart[307143]: }
Dec 13 08:29:35 compute-0 systemd[1]: libpod-ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b.scope: Deactivated successfully.
Dec 13 08:29:35 compute-0 podman[307127]: 2025-12-13 08:29:35.483038544 +0000 UTC m=+0.556421100 container died ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:29:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 135 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 184 KiB/s wr, 245 op/s
Dec 13 08:29:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6351239223a19975d43c2593b75d008b37b16912e33fc0a55b5c3acf4bbe9664-merged.mount: Deactivated successfully.
Dec 13 08:29:35 compute-0 podman[307127]: 2025-12-13 08:29:35.541022955 +0000 UTC m=+0.614405531 container remove ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 08:29:35 compute-0 systemd[1]: libpod-conmon-ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b.scope: Deactivated successfully.
Dec 13 08:29:35 compute-0 sudo[307051]: pam_unix(sudo:session): session closed for user root
Dec 13 08:29:35 compute-0 sudo[307164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:29:35 compute-0 sudo[307164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:29:35 compute-0 sudo[307164]: pam_unix(sudo:session): session closed for user root
Dec 13 08:29:35 compute-0 sudo[307189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:29:35 compute-0 sudo[307189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.005 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.007 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.008 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.009 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.010 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.013 248514 INFO nova.compute.manager [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Terminating instance
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.015 248514 DEBUG nova.compute.manager [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:29:36 compute-0 kernel: tapdd059dc2-8c (unregistering): left promiscuous mode
Dec 13 08:29:36 compute-0 NetworkManager[50376]: <info>  [1765614576.0778] device (tapdd059dc2-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 ovn_controller[148476]: 2025-12-13T08:29:36Z|00547|binding|INFO|Releasing lport dd059dc2-8c1a-49c4-b820-7cf31293c210 from this chassis (sb_readonly=0)
Dec 13 08:29:36 compute-0 ovn_controller[148476]: 2025-12-13T08:29:36Z|00548|binding|INFO|Setting lport dd059dc2-8c1a-49c4-b820-7cf31293c210 down in Southbound
Dec 13 08:29:36 compute-0 ovn_controller[148476]: 2025-12-13T08:29:36Z|00549|binding|INFO|Removing iface tapdd059dc2-8c ovn-installed in OVS
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.090 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 podman[307227]: 2025-12-13 08:29:36.096204414 +0000 UTC m=+0.069523767 container create d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.096 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:20:15 10.100.0.11'], port_security=['fa:16:3e:5d:20:15 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b9e0d5ab-483f-49a1-901a-c36f31ab710f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63655ec3-e19a-48cf-bc09-d1f4e53966e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3562b8c-e174-48df-b01e-6261323c4619, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=dd059dc2-8c1a-49c4-b820-7cf31293c210) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.100 158419 INFO neutron.agent.ovn.metadata.agent [-] Port dd059dc2-8c1a-49c4-b820-7cf31293c210 in datapath f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 unbound from our chassis
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.103 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6669b7a-7d21-4e4e-96cd-84193f6fdcf3
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.109 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 systemd[1]: Started libpod-conmon-d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7.scope.
Dec 13 08:29:36 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Dec 13 08:29:36 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003a.scope: Consumed 12.899s CPU time.
Dec 13 08:29:36 compute-0 systemd-machined[210538]: Machine qemu-66-instance-0000003a terminated.
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.137 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa916d65-f844-4e55-ae00-a5d9a2b00320]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 podman[307227]: 2025-12-13 08:29:36.057621862 +0000 UTC m=+0.030941315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:29:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:29:36 compute-0 podman[307227]: 2025-12-13 08:29:36.180793411 +0000 UTC m=+0.154112844 container init d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 08:29:36 compute-0 podman[307227]: 2025-12-13 08:29:36.196305533 +0000 UTC m=+0.169624876 container start d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.195 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2d362cd2-9bad-46f2-b5b8-205c2549a075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 podman[307227]: 2025-12-13 08:29:36.200148578 +0000 UTC m=+0.173467991 container attach d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bartik, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.200 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9ecb8b-3bbf-46c8-971c-1c4310e9cf30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 strange_bartik[307249]: 167 167
Dec 13 08:29:36 compute-0 systemd[1]: libpod-d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7.scope: Deactivated successfully.
Dec 13 08:29:36 compute-0 podman[307227]: 2025-12-13 08:29:36.204469405 +0000 UTC m=+0.177788748 container died d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bartik, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7396f9fdae65fa922ac99dc1c38b68431d3bf19013a15af307ddb23f26a576ff-merged.mount: Deactivated successfully.
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 podman[307227]: 2025-12-13 08:29:36.251742081 +0000 UTC m=+0.225061444 container remove d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.254 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.255 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e28b9d05-00ae-4568-a694-1f75b6ed1c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.267 248514 INFO nova.virt.libvirt.driver [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Instance destroyed successfully.
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.268 248514 DEBUG nova.objects.instance [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'resources' on Instance uuid b9e0d5ab-483f-49a1-901a-c36f31ab710f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:36 compute-0 systemd[1]: libpod-conmon-d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7.scope: Deactivated successfully.
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.277 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2662d298-fb67-4591-8c7b-1bf1cc5be4b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6669b7a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:9e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713790, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307278, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.300 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[beef004a-e44c-46c9-a5d6-3267c0ec075b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713803, 'tstamp': 713803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307283, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713806, 'tstamp': 713806}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307283, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.303 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6669b7a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.310 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6669b7a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.311 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.312 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6669b7a-70, col_values=(('external_ids', {'iface-id': 'fc5b13bf-8bf9-4192-a467-a89c8b6706fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.312 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.327 248514 DEBUG nova.virt.libvirt.vif [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-2',id=58,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-12-13T08:29:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:29:23Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=b9e0d5ab-483f-49a1-901a-c36f31ab710f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.327 248514 DEBUG nova.network.os_vif_util [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.328 248514 DEBUG nova.network.os_vif_util [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.328 248514 DEBUG os_vif [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.330 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.330 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd059dc2-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.332 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.333 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.333 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.334 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.334 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.334 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.335 248514 INFO nova.compute.manager [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Terminating instance
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.336 248514 DEBUG nova.compute.manager [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.336 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.340 248514 INFO os_vif [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c')
Dec 13 08:29:36 compute-0 kernel: tap41420fc6-e9 (unregistering): left promiscuous mode
Dec 13 08:29:36 compute-0 NetworkManager[50376]: <info>  [1765614576.3811] device (tap41420fc6-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:29:36 compute-0 ovn_controller[148476]: 2025-12-13T08:29:36Z|00550|binding|INFO|Releasing lport 41420fc6-e900-4745-a3c1-4f2541c9e1f5 from this chassis (sb_readonly=0)
Dec 13 08:29:36 compute-0 ovn_controller[148476]: 2025-12-13T08:29:36Z|00551|binding|INFO|Setting lport 41420fc6-e900-4745-a3c1-4f2541c9e1f5 down in Southbound
Dec 13 08:29:36 compute-0 ovn_controller[148476]: 2025-12-13T08:29:36Z|00552|binding|INFO|Removing iface tap41420fc6-e9 ovn-installed in OVS
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.385 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.389 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.393 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:08:03 10.100.0.4'], port_security=['fa:16:3e:de:08:03 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c940191-84c7-423e-901a-233b14c2acec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63655ec3-e19a-48cf-bc09-d1f4e53966e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3562b8c-e174-48df-b01e-6261323c4619, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=41420fc6-e900-4745-a3c1-4f2541c9e1f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.394 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 41420fc6-e900-4745-a3c1-4f2541c9e1f5 in datapath f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 unbound from our chassis
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.396 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.397 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db99801d-5235-4b1d-af90-1ee1ff497b03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.398 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 namespace which is not needed anymore
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.409 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Dec 13 08:29:36 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Consumed 11.435s CPU time.
Dec 13 08:29:36 compute-0 systemd-machined[210538]: Machine qemu-67-instance-0000003b terminated.
Dec 13 08:29:36 compute-0 podman[307315]: 2025-12-13 08:29:36.48269668 +0000 UTC m=+0.052082516 container create fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_black, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:29:36 compute-0 systemd[1]: Started libpod-conmon-fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951.scope.
Dec 13 08:29:36 compute-0 podman[307315]: 2025-12-13 08:29:36.456727689 +0000 UTC m=+0.026113525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:29:36 compute-0 NetworkManager[50376]: <info>  [1765614576.5629] manager: (tap41420fc6-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Dec 13 08:29:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.560 248514 DEBUG nova.compute.manager [req-a63aa699-fa13-4ecc-9d64-85f5307ea88c req-88935790-ce08-4e2b-b9fd-e17a00de5cfe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-vif-unplugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.564 248514 DEBUG oslo_concurrency.lockutils [req-a63aa699-fa13-4ecc-9d64-85f5307ea88c req-88935790-ce08-4e2b-b9fd-e17a00de5cfe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.565 248514 DEBUG oslo_concurrency.lockutils [req-a63aa699-fa13-4ecc-9d64-85f5307ea88c req-88935790-ce08-4e2b-b9fd-e17a00de5cfe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.565 248514 DEBUG oslo_concurrency.lockutils [req-a63aa699-fa13-4ecc-9d64-85f5307ea88c req-88935790-ce08-4e2b-b9fd-e17a00de5cfe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.566 248514 DEBUG nova.compute.manager [req-a63aa699-fa13-4ecc-9d64-85f5307ea88c req-88935790-ce08-4e2b-b9fd-e17a00de5cfe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] No waiting events found dispatching network-vif-unplugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:36 compute-0 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [NOTICE]   (306348) : haproxy version is 2.8.14-c23fe91
Dec 13 08:29:36 compute-0 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [NOTICE]   (306348) : path to executable is /usr/sbin/haproxy
Dec 13 08:29:36 compute-0 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [WARNING]  (306348) : Exiting Master process...
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.566 248514 DEBUG nova.compute.manager [req-a63aa699-fa13-4ecc-9d64-85f5307ea88c req-88935790-ce08-4e2b-b9fd-e17a00de5cfe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-vif-unplugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cafa148ff8e7f5cb76fe0b265a7fb3a964f8fe64d8d5d7a539f69a4f04b0df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cafa148ff8e7f5cb76fe0b265a7fb3a964f8fe64d8d5d7a539f69a4f04b0df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cafa148ff8e7f5cb76fe0b265a7fb3a964f8fe64d8d5d7a539f69a4f04b0df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cafa148ff8e7f5cb76fe0b265a7fb3a964f8fe64d8d5d7a539f69a4f04b0df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:36 compute-0 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [ALERT]    (306348) : Current worker (306350) exited with code 143 (Terminated)
Dec 13 08:29:36 compute-0 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [WARNING]  (306348) : All workers exited. Exiting... (0)
Dec 13 08:29:36 compute-0 systemd[1]: libpod-401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d.scope: Deactivated successfully.
Dec 13 08:29:36 compute-0 podman[307345]: 2025-12-13 08:29:36.577270853 +0000 UTC m=+0.069928936 container died 401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.589 248514 INFO nova.virt.libvirt.driver [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Instance destroyed successfully.
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.589 248514 DEBUG nova.objects.instance [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'resources' on Instance uuid 1c940191-84c7-423e-901a-233b14c2acec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:36 compute-0 podman[307315]: 2025-12-13 08:29:36.600332331 +0000 UTC m=+0.169718187 container init fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:29:36 compute-0 ceph-mon[76537]: pgmap v1964: 321 pgs: 321 active+clean; 135 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 184 KiB/s wr, 245 op/s
Dec 13 08:29:36 compute-0 podman[307315]: 2025-12-13 08:29:36.616151462 +0000 UTC m=+0.185537288 container start fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_black, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 08:29:36 compute-0 podman[307315]: 2025-12-13 08:29:36.620114629 +0000 UTC m=+0.189500475 container attach fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_black, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.623 248514 DEBUG nova.virt.libvirt.vif [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-3',id=59,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-12-13T08:29:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:29:25Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=1c940191-84c7-423e-901a-233b14c2acec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.624 248514 DEBUG nova.network.os_vif_util [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.625 248514 DEBUG nova.network.os_vif_util [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.625 248514 DEBUG os_vif [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.627 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.627 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41420fc6-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d-userdata-shm.mount: Deactivated successfully.
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7554b3fff23359996616576e29b124b752a5d83d38de1b07b4ecde93acadd26c-merged.mount: Deactivated successfully.
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.634 248514 INFO os_vif [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9')
Dec 13 08:29:36 compute-0 podman[307345]: 2025-12-13 08:29:36.643838005 +0000 UTC m=+0.136496098 container cleanup 401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:29:36 compute-0 systemd[1]: libpod-conmon-401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d.scope: Deactivated successfully.
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.702 248514 INFO nova.virt.libvirt.driver [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Deleting instance files /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f_del
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.705 248514 INFO nova.virt.libvirt.driver [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Deletion of /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f_del complete
Dec 13 08:29:36 compute-0 podman[307406]: 2025-12-13 08:29:36.736414279 +0000 UTC m=+0.060180436 container remove 401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.744 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8972899-a030-4978-a4dc-4f98860980a1]: (4, ('Sat Dec 13 08:29:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 (401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d)\n401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d\nSat Dec 13 08:29:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 (401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d)\n401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[096c2ab0-1ec8-46ec-bea5-7f00eb51f825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.748 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6669b7a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 kernel: tapf6669b7a-70: left promiscuous mode
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.772 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9c97d8c8-e72b-413d-a299-0c9dda0d616b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.781 248514 INFO nova.compute.manager [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Took 0.77 seconds to destroy the instance on the hypervisor.
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.782 248514 DEBUG oslo.service.loopingcall [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.783 248514 DEBUG nova.compute.manager [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.783 248514 DEBUG nova.network.neutron [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.789 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed45a98-43c3-40e0-8eb3-00b2645c8296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.791 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[be669d8a-2ac2-48cc-9535-a50510b9582e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.814 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd34303-166d-473c-962d-002892c51990]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713781, 'reachable_time': 38343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307424, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 systemd[1]: run-netns-ovnmeta\x2df6669b7a\x2d7d21\x2d4e4e\x2d96cd\x2d84193f6fdcf3.mount: Deactivated successfully.
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.819 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:29:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.820 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[24bdd4cf-446b-48d9-81cd-9562700e5968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.939 248514 INFO nova.virt.libvirt.driver [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Deleting instance files /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec_del
Dec 13 08:29:36 compute-0 nova_compute[248510]: 2025-12-13 08:29:36.940 248514 INFO nova.virt.libvirt.driver [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Deletion of /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec_del complete
Dec 13 08:29:37 compute-0 nova_compute[248510]: 2025-12-13 08:29:37.066 248514 INFO nova.compute.manager [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Took 0.73 seconds to destroy the instance on the hypervisor.
Dec 13 08:29:37 compute-0 nova_compute[248510]: 2025-12-13 08:29:37.067 248514 DEBUG oslo.service.loopingcall [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:29:37 compute-0 nova_compute[248510]: 2025-12-13 08:29:37.068 248514 DEBUG nova.compute.manager [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:29:37 compute-0 nova_compute[248510]: 2025-12-13 08:29:37.068 248514 DEBUG nova.network.neutron [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:29:37 compute-0 lvm[307496]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:29:37 compute-0 lvm[307496]: VG ceph_vg0 finished
Dec 13 08:29:37 compute-0 lvm[307498]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:29:37 compute-0 lvm[307498]: VG ceph_vg1 finished
Dec 13 08:29:37 compute-0 lvm[307499]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:29:37 compute-0 lvm[307499]: VG ceph_vg2 finished
Dec 13 08:29:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 135 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 159 KiB/s wr, 178 op/s
Dec 13 08:29:37 compute-0 eloquent_black[307361]: {}
Dec 13 08:29:37 compute-0 systemd[1]: libpod-fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951.scope: Deactivated successfully.
Dec 13 08:29:37 compute-0 systemd[1]: libpod-fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951.scope: Consumed 1.475s CPU time.
Dec 13 08:29:37 compute-0 podman[307502]: 2025-12-13 08:29:37.59000461 +0000 UTC m=+0.026399552 container died fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_black, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 08:29:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-15cafa148ff8e7f5cb76fe0b265a7fb3a964f8fe64d8d5d7a539f69a4f04b0df-merged.mount: Deactivated successfully.
Dec 13 08:29:37 compute-0 podman[307502]: 2025-12-13 08:29:37.630587662 +0000 UTC m=+0.066982594 container remove fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 08:29:37 compute-0 systemd[1]: libpod-conmon-fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951.scope: Deactivated successfully.
Dec 13 08:29:37 compute-0 sudo[307189]: pam_unix(sudo:session): session closed for user root
Dec 13 08:29:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:29:37 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:29:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:29:37 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:29:37 compute-0 sudo[307517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:29:37 compute-0 sudo[307517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:29:37 compute-0 sudo[307517]: pam_unix(sudo:session): session closed for user root
Dec 13 08:29:37 compute-0 nova_compute[248510]: 2025-12-13 08:29:37.900 248514 DEBUG nova.network.neutron [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:37 compute-0 nova_compute[248510]: 2025-12-13 08:29:37.922 248514 INFO nova.compute.manager [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Took 0.85 seconds to deallocate network for instance.
Dec 13 08:29:37 compute-0 nova_compute[248510]: 2025-12-13 08:29:37.984 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:37 compute-0 nova_compute[248510]: 2025-12-13 08:29:37.984 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.034 248514 DEBUG nova.compute.manager [req-fc8f03ff-c661-4622-9807-3eaf8ad83efc req-2fa2900d-1113-4ea5-95e5-4373e382cc47 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received event network-vif-deleted-41420fc6-e900-4745-a3c1-4f2541c9e1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.074 248514 DEBUG oslo_concurrency.processutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/428194467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.624 248514 DEBUG oslo_concurrency.processutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.633 248514 DEBUG nova.compute.provider_tree [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.671 248514 DEBUG nova.scheduler.client.report [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.700 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.708 248514 DEBUG nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.708 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.708 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.708 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.708 248514 DEBUG nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] No waiting events found dispatching network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.709 248514 WARNING nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received unexpected event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 for instance with vm_state active and task_state deleting.
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.709 248514 DEBUG nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received event network-vif-unplugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.709 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.709 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.709 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.709 248514 DEBUG nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] No waiting events found dispatching network-vif-unplugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 WARNING nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received unexpected event network-vif-unplugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 for instance with vm_state deleted and task_state None.
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 DEBUG nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 DEBUG nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] No waiting events found dispatching network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 WARNING nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received unexpected event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 for instance with vm_state deleted and task_state None.
Dec 13 08:29:38 compute-0 ceph-mon[76537]: pgmap v1965: 321 pgs: 321 active+clean; 135 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 159 KiB/s wr, 178 op/s
Dec 13 08:29:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:29:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:29:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/428194467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.729 248514 INFO nova.scheduler.client.report [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Deleted allocations for instance 1c940191-84c7-423e-901a-233b14c2acec
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.818 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:38 compute-0 nova_compute[248510]: 2025-12-13 08:29:38.984 248514 DEBUG nova.network.neutron [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:39 compute-0 nova_compute[248510]: 2025-12-13 08:29:39.006 248514 INFO nova.compute.manager [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Took 2.22 seconds to deallocate network for instance.
Dec 13 08:29:39 compute-0 nova_compute[248510]: 2025-12-13 08:29:39.080 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:39 compute-0 nova_compute[248510]: 2025-12-13 08:29:39.081 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:39 compute-0 nova_compute[248510]: 2025-12-13 08:29:39.140 248514 DEBUG oslo_concurrency.processutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 88 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.6 MiB/s wr, 260 op/s
Dec 13 08:29:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:39 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1952332824' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:39 compute-0 nova_compute[248510]: 2025-12-13 08:29:39.695 248514 DEBUG oslo_concurrency.processutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:39 compute-0 nova_compute[248510]: 2025-12-13 08:29:39.704 248514 DEBUG nova.compute.provider_tree [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:39 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1952332824' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:39 compute-0 nova_compute[248510]: 2025-12-13 08:29:39.730 248514 DEBUG nova.scheduler.client.report [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:39 compute-0 nova_compute[248510]: 2025-12-13 08:29:39.779 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:39 compute-0 nova_compute[248510]: 2025-12-13 08:29:39.814 248514 INFO nova.scheduler.client.report [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Deleted allocations for instance b9e0d5ab-483f-49a1-901a-c36f31ab710f
Dec 13 08:29:39 compute-0 nova_compute[248510]: 2025-12-13 08:29:39.902 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:29:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:29:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:29:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:29:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:29:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:29:40 compute-0 nova_compute[248510]: 2025-12-13 08:29:40.134 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:29:40 compute-0 ceph-mon[76537]: pgmap v1966: 321 pgs: 321 active+clean; 88 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.6 MiB/s wr, 260 op/s
Dec 13 08:29:41 compute-0 nova_compute[248510]: 2025-12-13 08:29:41.123 248514 DEBUG nova.compute.manager [req-12a01825-d6ba-4cc7-a211-81702060d299 req-087453db-21cd-4676-ac67-1b28cbc95587 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-vif-deleted-dd059dc2-8c1a-49c4-b820-7cf31293c210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 41 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 163 op/s
Dec 13 08:29:41 compute-0 nova_compute[248510]: 2025-12-13 08:29:41.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:42 compute-0 nova_compute[248510]: 2025-12-13 08:29:42.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:42 compute-0 ceph-mon[76537]: pgmap v1967: 321 pgs: 321 active+clean; 41 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 163 op/s
Dec 13 08:29:43 compute-0 nova_compute[248510]: 2025-12-13 08:29:43.377 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 41 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.0 MiB/s wr, 129 op/s
Dec 13 08:29:43 compute-0 nova_compute[248510]: 2025-12-13 08:29:43.833 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:43 compute-0 nova_compute[248510]: 2025-12-13 08:29:43.834 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:43 compute-0 nova_compute[248510]: 2025-12-13 08:29:43.885 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:29:43 compute-0 nova_compute[248510]: 2025-12-13 08:29:43.998 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:43 compute-0 nova_compute[248510]: 2025-12-13 08:29:43.999 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.007 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.008 248514 INFO nova.compute.claims [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.154 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.648 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614569.6474895, b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.650 248514 INFO nova.compute.manager [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] VM Stopped (Lifecycle Event)
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.678 248514 DEBUG nova.compute.manager [None req-2c2fd371-5c54-4808-bce7-d438da2850ff - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953493313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.717 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.724 248514 DEBUG nova.compute.provider_tree [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.747 248514 DEBUG nova.scheduler.client.report [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.772 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.773 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:29:44 compute-0 ceph-mon[76537]: pgmap v1968: 321 pgs: 321 active+clean; 41 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.0 MiB/s wr, 129 op/s
Dec 13 08:29:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2953493313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.820 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.821 248514 DEBUG nova.network.neutron [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.877 248514 INFO nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:29:44 compute-0 nova_compute[248510]: 2025-12-13 08:29:44.906 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.041 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.043 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.044 248514 INFO nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Creating image(s)
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.067 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.094 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.121 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.125 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.161 248514 DEBUG nova.policy [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d19f7d5ece8482dab03e4bc02fdf410', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c6718df841f0471ba710516400f126fa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.164 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.195 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.196 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.196 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.197 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.219 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.223 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:29:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 41 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.0 MiB/s wr, 117 op/s
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.581 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.659 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] resizing rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.739 248514 DEBUG nova.objects.instance [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'migration_context' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.776 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.777 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Ensure instance console log exists: /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.777 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.778 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:45 compute-0 nova_compute[248510]: 2025-12-13 08:29:45.778 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:46 compute-0 nova_compute[248510]: 2025-12-13 08:29:46.215 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:46 compute-0 nova_compute[248510]: 2025-12-13 08:29:46.631 248514 DEBUG nova.network.neutron [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Successfully created port: b5058a06-7109-4ac0-96d8-7562e66bee25 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:29:46 compute-0 nova_compute[248510]: 2025-12-13 08:29:46.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:46 compute-0 nova_compute[248510]: 2025-12-13 08:29:46.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:46 compute-0 nova_compute[248510]: 2025-12-13 08:29:46.775 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:29:46 compute-0 nova_compute[248510]: 2025-12-13 08:29:46.775 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:29:46 compute-0 nova_compute[248510]: 2025-12-13 08:29:46.807 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 08:29:46 compute-0 nova_compute[248510]: 2025-12-13 08:29:46.808 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:29:46 compute-0 ceph-mon[76537]: pgmap v1969: 321 pgs: 321 active+clean; 41 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.0 MiB/s wr, 117 op/s
Dec 13 08:29:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 41 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 1.9 MiB/s wr, 100 op/s
Dec 13 08:29:48 compute-0 nova_compute[248510]: 2025-12-13 08:29:48.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:48 compute-0 ceph-mon[76537]: pgmap v1970: 321 pgs: 321 active+clean; 41 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 1.9 MiB/s wr, 100 op/s
Dec 13 08:29:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 80 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.3 MiB/s wr, 127 op/s
Dec 13 08:29:49 compute-0 nova_compute[248510]: 2025-12-13 08:29:49.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:49 compute-0 nova_compute[248510]: 2025-12-13 08:29:49.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.092 248514 DEBUG nova.network.neutron [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Successfully updated port: b5058a06-7109-4ac0-96d8-7562e66bee25 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.115 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.115 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.116 248514 DEBUG nova.network.neutron [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.186 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.274 248514 DEBUG nova.compute.manager [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-changed-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.274 248514 DEBUG nova.compute.manager [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Refreshing instance network info cache due to event network-changed-b5058a06-7109-4ac0-96d8-7562e66bee25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.274 248514 DEBUG oslo_concurrency.lockutils [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.345 248514 DEBUG nova.network.neutron [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:29:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.795 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.839 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.840 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.841 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.841 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:29:50 compute-0 nova_compute[248510]: 2025-12-13 08:29:50.842 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:50 compute-0 ceph-mon[76537]: pgmap v1971: 321 pgs: 321 active+clean; 80 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.3 MiB/s wr, 127 op/s
Dec 13 08:29:50 compute-0 podman[307776]: 2025-12-13 08:29:50.988806426 +0000 UTC m=+0.064965774 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:29:50 compute-0 podman[307777]: 2025-12-13 08:29:50.997195173 +0000 UTC m=+0.065258781 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 08:29:51 compute-0 podman[307775]: 2025-12-13 08:29:51.020273293 +0000 UTC m=+0.100136362 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.039 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.041 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.066 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.093 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.094 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.135 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.263 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614576.2622378, b9e0d5ab-483f-49a1-901a-c36f31ab710f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.264 248514 INFO nova.compute.manager [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] VM Stopped (Lifecycle Event)
Dec 13 08:29:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578582874' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.427 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.444 248514 DEBUG nova.compute.manager [None req-a473ef59-971c-4ede-8bd7-c61ef44ef0c3 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 88 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 131 KiB/s rd, 2.3 MiB/s wr, 46 op/s
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.502 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.503 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.505 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.516 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.516 248514 INFO nova.compute.claims [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.586 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614576.5860462, 1c940191-84c7-423e-901a-233b14c2acec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.586 248514 INFO nova.compute.manager [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] VM Stopped (Lifecycle Event)
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.620 248514 DEBUG nova.compute.manager [None req-43ea25d4-a5db-415e-bfa3-dc068dc28615 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.683 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.684 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4108MB free_disk=59.9709356110543GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.684 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.800 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2578582874' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.963 248514 DEBUG nova.network.neutron [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.996 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.997 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance network_info: |[{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.997 248514 DEBUG oslo_concurrency.lockutils [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:51 compute-0 nova_compute[248510]: 2025-12-13 08:29:51.998 248514 DEBUG nova.network.neutron [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Refreshing network info cache for port b5058a06-7109-4ac0-96d8-7562e66bee25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.003 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start _get_guest_xml network_info=[{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.008 248514 WARNING nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.013 248514 DEBUG nova.virt.libvirt.host [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.014 248514 DEBUG nova.virt.libvirt.host [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.019 248514 DEBUG nova.virt.libvirt.host [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.020 248514 DEBUG nova.virt.libvirt.host [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.020 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.021 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.021 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.022 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.022 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.022 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.022 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.023 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.023 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.023 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.023 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.023 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.027 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111001457' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.442 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.449 248514 DEBUG nova.compute.provider_tree [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.474 248514 DEBUG nova.scheduler.client.report [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.503 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.504 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.507 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.515 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.515 248514 INFO nova.compute.claims [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.586 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.587 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:29:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:29:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078143762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.619 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.652 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.657 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.699 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.728 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:29:52 compute-0 nova_compute[248510]: 2025-12-13 08:29:52.775 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:52 compute-0 ceph-mon[76537]: pgmap v1972: 321 pgs: 321 active+clean; 88 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 131 KiB/s rd, 2.3 MiB/s wr, 46 op/s
Dec 13 08:29:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3111001457' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1078143762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.175 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.177 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.178 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Creating image(s)
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.206 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:29:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984297487' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.237 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.264 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.269 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.311 248514 DEBUG nova.policy [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f703cc8fd3b4cdabb2b154345f70a7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c25aa866d502481eb9410b7d92a1347b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.315 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.317 248514 DEBUG nova.virt.libvirt.vif [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.318 248514 DEBUG nova.network.os_vif_util [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.319 248514 DEBUG nova.network.os_vif_util [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.321 248514 DEBUG nova.objects.instance [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/658180110' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.353 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.354 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.355 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.355 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.377 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.381 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d594d7c8-13f8-4e02-80d2-490469301cca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.414 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.424 248514 DEBUG nova.compute.provider_tree [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 88 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.742 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d594d7c8-13f8-4e02-80d2-490469301cca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.800 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] resizing rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.834 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:29:53 compute-0 nova_compute[248510]:   <uuid>3ced27d6-a2a8-4ce3-a7e7-494270418542</uuid>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   <name>instance-0000003c</name>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerActionsTestJSON-server-286397061</nova:name>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:29:52</nova:creationTime>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:29:53 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:29:53 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:29:53 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:29:53 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:29:53 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:29:53 compute-0 nova_compute[248510]:         <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 08:29:53 compute-0 nova_compute[248510]:         <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:29:53 compute-0 nova_compute[248510]:         <nova:port uuid="b5058a06-7109-4ac0-96d8-7562e66bee25">
Dec 13 08:29:53 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <system>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <entry name="serial">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <entry name="uuid">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     </system>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   <os>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   </os>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   <features>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   </features>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk">
Dec 13 08:29:53 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       </source>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:29:53 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config">
Dec 13 08:29:53 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       </source>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:29:53 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:1f:d1:eb"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <target dev="tapb5058a06-71"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/console.log" append="off"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <video>
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     </video>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:29:53 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:29:53 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:29:53 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:29:53 compute-0 nova_compute[248510]: </domain>
Dec 13 08:29:53 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.835 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Preparing to wait for external event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.835 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.835 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.835 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.836 248514 DEBUG nova.virt.libvirt.vif [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.836 248514 DEBUG nova.network.os_vif_util [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.837 248514 DEBUG nova.network.os_vif_util [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.837 248514 DEBUG os_vif [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.838 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.838 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.839 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.841 248514 DEBUG nova.scheduler.client.report [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.845 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5058a06-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.845 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5058a06-71, col_values=(('external_ids', {'iface-id': 'b5058a06-7109-4ac0-96d8-7562e66bee25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:d1:eb', 'vm-uuid': '3ced27d6-a2a8-4ce3-a7e7-494270418542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:53 compute-0 NetworkManager[50376]: <info>  [1765614593.8480] manager: (tapb5058a06-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.856 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.857 248514 INFO os_vif [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')
Dec 13 08:29:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/984297487' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/658180110' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:53 compute-0 nova_compute[248510]: 2025-12-13 08:29:53.917 248514 DEBUG nova.objects.instance [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'migration_context' on Instance uuid d594d7c8-13f8-4e02-80d2-490469301cca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.320 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.321 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Ensure instance console log exists: /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.322 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.322 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.323 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.326 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.327 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.332 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 2.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.350 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.351 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.352 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No VIF found with MAC fa:16:3e:1f:d1:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.353 248514 INFO nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Using config drive
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.386 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.442 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.444 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.476 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.498 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3ced27d6-a2a8-4ce3-a7e7-494270418542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.498 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance d594d7c8-13f8-4e02-80d2-490469301cca actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.498 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.499 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.499 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.514 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.606 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.656 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.660 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.661 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Creating image(s)
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.695 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.721 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.748 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.753 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.804 248514 DEBUG nova.policy [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f703cc8fd3b4cdabb2b154345f70a7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c25aa866d502481eb9410b7d92a1347b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.836 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.837 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.838 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.838 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.872 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:54 compute-0 nova_compute[248510]: 2025-12-13 08:29:54.878 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:54 compute-0 ceph-mon[76537]: pgmap v1973: 321 pgs: 321 active+clean; 88 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.117 248514 INFO nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Creating config drive at /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/disk.config
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.123 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwwvqxald execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.165 248514 DEBUG nova.network.neutron [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updated VIF entry in instance network info cache for port b5058a06-7109-4ac0-96d8-7562e66bee25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.165 248514 DEBUG nova.network.neutron [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290259365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.188 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.206 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.238 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.238 248514 DEBUG oslo_concurrency.lockutils [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.271 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwwvqxald" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.295 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.299 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/disk.config 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.344 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] resizing rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:29:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.378 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Successfully created port: 13e6d879-e4c6-4a44-a982-10a62782047e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.387 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.411 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.413 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.413 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.439 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.450 248514 DEBUG nova.objects.instance [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'migration_context' on Instance uuid 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.453 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/disk.config 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.453 248514 INFO nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Deleting local config drive /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/disk.config because it was imported into RBD.
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.467 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.468 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Ensure instance console log exists: /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.469 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.469 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.470 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.471 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.471 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 126 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.4 MiB/s wr, 53 op/s
Dec 13 08:29:55 compute-0 kernel: tapb5058a06-71: entered promiscuous mode
Dec 13 08:29:55 compute-0 ovn_controller[148476]: 2025-12-13T08:29:55Z|00553|binding|INFO|Claiming lport b5058a06-7109-4ac0-96d8-7562e66bee25 for this chassis.
Dec 13 08:29:55 compute-0 NetworkManager[50376]: <info>  [1765614595.5140] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.514 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:55 compute-0 ovn_controller[148476]: 2025-12-13T08:29:55Z|00554|binding|INFO|b5058a06-7109-4ac0-96d8-7562e66bee25: Claiming fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.517 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.527 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.529 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.531 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.544 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[899cdd55-b076-4dff-aa49-7a5a9ceda272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.545 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43ee8730-a1 in ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:29:55 compute-0 systemd-machined[210538]: New machine qemu-68-instance-0000003c.
Dec 13 08:29:55 compute-0 systemd-udevd[308392]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.547 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43ee8730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.548 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1659fd8a-469e-4c49-925f-116d27cc9f5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.549 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2a21fefc-a136-400c-8cdd-4aa30bfb9d88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 NetworkManager[50376]: <info>  [1765614595.5609] device (tapb5058a06-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:29:55 compute-0 NetworkManager[50376]: <info>  [1765614595.5619] device (tapb5058a06-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.562 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dee295c5-0966-4247-b819-dcbd1ddcbff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-0000003c.
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.589 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.590 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a95186ad-926d-41b0-896a-2f29fe8d6d58]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.596 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:55 compute-0 ovn_controller[148476]: 2025-12-13T08:29:55Z|00555|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 ovn-installed in OVS
Dec 13 08:29:55 compute-0 ovn_controller[148476]: 2025-12-13T08:29:55Z|00556|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 up in Southbound
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.629 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[958d53be-9ced-4dc8-8488-25193aa6b752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 NetworkManager[50376]: <info>  [1765614595.6368] manager: (tap43ee8730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/245)
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.636 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fc091b04-0658-4d4e-b6cf-9edde1e675a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.673 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[67339d33-2690-4b20-a42e-447bc2d77255]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.677 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9236d60e-d3f7-496f-bc63-0d57022e50fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.699 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Successfully created port: b94af62b-6c45-4269-94d9-b235090f4778 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:29:55 compute-0 NetworkManager[50376]: <info>  [1765614595.7049] device (tap43ee8730-a0): carrier: link connected
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.711 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8ac0fb-fba6-463c-944f-6a0d9f59c7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.733 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4e70c810-69c8-4ebd-aaca-3bf0e74e646d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717292, 'reachable_time': 43850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308424, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.751 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9ca534-2e1a-4f2d-89b3-108eefb3457c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bbcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 717292, 'tstamp': 717292}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308425, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.772 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b1a57e-8145-4ba0-9191-24e14d339718]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717292, 'reachable_time': 43850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308426, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.812 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[80d67c90-9c3b-486c-8c24-a70e91e17ec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.893 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[25a4aaf9-7aa0-433f-a84e-7bfab11bd656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4290259365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.896 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.896 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.897 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:55 compute-0 NetworkManager[50376]: <info>  [1765614595.8999] manager: (tap43ee8730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Dec 13 08:29:55 compute-0 kernel: tap43ee8730-a0: entered promiscuous mode
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.902 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:29:55 compute-0 ovn_controller[148476]: 2025-12-13T08:29:55Z|00557|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:55 compute-0 nova_compute[248510]: 2025-12-13 08:29:55.923 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.924 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.925 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[30d0a5e3-0d6a-4cc2-9f95-f01b09b4826b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.926 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.926 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'env', 'PROCESS_TAG=haproxy-43ee8730-a174-4859-9c95-b3ad6dc68839', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43ee8730-a174-4859-9c95-b3ad6dc68839.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.053 248514 DEBUG nova.compute.manager [req-b1eb7213-7071-4d40-ae07-390f97d003b4 req-6f16589d-2c71-4a90-8fae-61f5170d7390 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.053 248514 DEBUG oslo_concurrency.lockutils [req-b1eb7213-7071-4d40-ae07-390f97d003b4 req-6f16589d-2c71-4a90-8fae-61f5170d7390 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.053 248514 DEBUG oslo_concurrency.lockutils [req-b1eb7213-7071-4d40-ae07-390f97d003b4 req-6f16589d-2c71-4a90-8fae-61f5170d7390 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.054 248514 DEBUG oslo_concurrency.lockutils [req-b1eb7213-7071-4d40-ae07-390f97d003b4 req-6f16589d-2c71-4a90-8fae-61f5170d7390 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.054 248514 DEBUG nova.compute.manager [req-b1eb7213-7071-4d40-ae07-390f97d003b4 req-6f16589d-2c71-4a90-8fae-61f5170d7390 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Processing event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.255 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614596.2554512, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.256 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Started (Lifecycle Event)
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.259 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.262 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.267 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance spawned successfully.
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.268 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:29:56 compute-0 podman[308500]: 2025-12-13 08:29:56.373945717 +0000 UTC m=+0.051671536 container create 291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.384 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.388 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.389 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.389 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.390 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.390 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.390 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.396 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:29:56 compute-0 systemd[1]: Started libpod-conmon-291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795.scope.
Dec 13 08:29:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:29:56 compute-0 podman[308500]: 2025-12-13 08:29:56.347425983 +0000 UTC m=+0.025151822 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9368b1b4038d92fdf8ac3d0acda6e2c35cf6264d3fdc71f92b90f08d6224132/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.451 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.451 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614596.2565768, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.451 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Paused (Lifecycle Event)
Dec 13 08:29:56 compute-0 podman[308500]: 2025-12-13 08:29:56.462505632 +0000 UTC m=+0.140231471 container init 291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 08:29:56 compute-0 podman[308500]: 2025-12-13 08:29:56.467989298 +0000 UTC m=+0.145715117 container start 291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:29:56 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [NOTICE]   (308519) : New worker (308521) forked
Dec 13 08:29:56 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [NOTICE]   (308519) : Loading success.
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.540 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.545 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614596.2615957, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.545 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.582 248514 INFO nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Took 11.54 seconds to spawn the instance on the hypervisor.
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.583 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.584 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.591 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.645 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.681 248514 INFO nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Took 12.72 seconds to build instance.
Dec 13 08:29:56 compute-0 nova_compute[248510]: 2025-12-13 08:29:56.701 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:56 compute-0 ceph-mon[76537]: pgmap v1974: 321 pgs: 321 active+clean; 126 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.4 MiB/s wr, 53 op/s
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.043 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Successfully updated port: 13e6d879-e4c6-4a44-a982-10a62782047e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.063 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "refresh_cache-d594d7c8-13f8-4e02-80d2-490469301cca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.064 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquired lock "refresh_cache-d594d7c8-13f8-4e02-80d2-490469301cca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.064 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.115 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Successfully updated port: b94af62b-6c45-4269-94d9-b235090f4778 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.136 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "refresh_cache-13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.136 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquired lock "refresh_cache-13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.137 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.304 248514 DEBUG nova.compute.manager [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Received event network-changed-b94af62b-6c45-4269-94d9-b235090f4778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.304 248514 DEBUG nova.compute.manager [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Refreshing instance network info cache due to event network-changed-b94af62b-6c45-4269-94d9-b235090f4778. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.304 248514 DEBUG oslo_concurrency.lockutils [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.305 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.305 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.336 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.421 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.422 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.430 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.432 248514 INFO nova.compute.claims [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.446 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.453 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:29:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 126 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.4 MiB/s wr, 53 op/s
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.683 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.790 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.821 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.821 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 08:29:57 compute-0 nova_compute[248510]: 2025-12-13 08:29:57.867 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 08:29:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:29:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836552486' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.281 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.289 248514 DEBUG nova.compute.provider_tree [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.324 248514 DEBUG nova.scheduler.client.report [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.357 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.358 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.427 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.428 248514 DEBUG nova.network.neutron [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.451 248514 INFO nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.482 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.611 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.612 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.613 248514 INFO nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Creating image(s)
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.634 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.659 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.678 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.682 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.722 248514 DEBUG nova.compute.manager [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.723 248514 DEBUG oslo_concurrency.lockutils [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.723 248514 DEBUG oslo_concurrency.lockutils [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.723 248514 DEBUG oslo_concurrency.lockutils [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.724 248514 DEBUG nova.compute.manager [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.724 248514 WARNING nova.compute.manager [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.724 248514 DEBUG nova.compute.manager [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-changed-13e6d879-e4c6-4a44-a982-10a62782047e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.724 248514 DEBUG nova.compute.manager [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Refreshing instance network info cache due to event network-changed-13e6d879-e4c6-4a44-a982-10a62782047e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.724 248514 DEBUG oslo_concurrency.lockutils [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d594d7c8-13f8-4e02-80d2-490469301cca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.758 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.759 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.760 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.760 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.786 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.790 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 850bda47-d7a0-4d8d-a048-258b8388cab7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.838 248514 DEBUG nova.policy [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '865bfe2430ea4f9ca639a4f89c86899d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8fd9d373def4437880ac432124a30a67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:29:58 compute-0 nova_compute[248510]: 2025-12-13 08:29:58.848 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:58 compute-0 ceph-mon[76537]: pgmap v1975: 321 pgs: 321 active+clean; 126 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.4 MiB/s wr, 53 op/s
Dec 13 08:29:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2836552486' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.144 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 850bda47-d7a0-4d8d-a048-258b8388cab7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.202 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] resizing rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.235 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Updating instance_info_cache with network_info: [{"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.287 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Releasing lock "refresh_cache-13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.288 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Instance network_info: |[{"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.289 248514 DEBUG oslo_concurrency.lockutils [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.289 248514 DEBUG nova.network.neutron [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Refreshing network info cache for port b94af62b-6c45-4269-94d9-b235090f4778 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.292 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Start _get_guest_xml network_info=[{"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.297 248514 DEBUG nova.objects.instance [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lazy-loading 'migration_context' on Instance uuid 850bda47-d7a0-4d8d-a048-258b8388cab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.304 248514 WARNING nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.311 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.312 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.315 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.316 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.316 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.316 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.317 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.317 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.318 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.318 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.318 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.318 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.319 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.319 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.319 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.320 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.323 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.367 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.368 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Ensure instance console log exists: /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.369 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.369 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.370 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:29:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 161 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 129 op/s
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.527 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Updating instance_info_cache with network_info: [{"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.561 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Releasing lock "refresh_cache-d594d7c8-13f8-4e02-80d2-490469301cca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.562 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Instance network_info: |[{"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.562 248514 DEBUG oslo_concurrency.lockutils [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d594d7c8-13f8-4e02-80d2-490469301cca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.562 248514 DEBUG nova.network.neutron [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Refreshing network info cache for port 13e6d879-e4c6-4a44-a982-10a62782047e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.565 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Start _get_guest_xml network_info=[{"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.570 248514 WARNING nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.577 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.577 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.581 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.581 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.582 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.582 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.583 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.583 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.583 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.583 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.584 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.584 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.584 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.584 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.585 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.585 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.589 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.818 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:29:59 compute-0 NetworkManager[50376]: <info>  [1765614599.8872] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:29:59 compute-0 NetworkManager[50376]: <info>  [1765614599.8882] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Dec 13 08:29:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:29:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822640300' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2822640300' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.933 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.958 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:29:59 compute-0 nova_compute[248510]: 2025-12-13 08:29:59.964 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.050 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:00 compute-0 ovn_controller[148476]: 2025-12-13T08:30:00Z|00558|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.062 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2624074968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.196 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.211 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.235 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.240 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:30:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547013222' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.556 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.560 248514 DEBUG nova.virt.libvirt.vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1395231476',display_name='tempest-tempest.common.compute-instance-1395231476-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1395231476-2',id=62,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-jt3b6t8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleC
reateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:54Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=13a5d640-9e2a-49d7-9f95-be18ebbe1cfe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.560 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.562 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.563 248514 DEBUG nova.objects.instance [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'pci_devices' on Instance uuid 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2383553656' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.836 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.838 248514 DEBUG nova.virt.libvirt.vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1395231476',display_name='tempest-tempest.common.compute-instance-1395231476-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1395231476-1',id=61,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-jt3b6t8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:52Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=d594d7c8-13f8-4e02-80d2-490469301cca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.838 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.839 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.840 248514 DEBUG nova.objects.instance [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'pci_devices' on Instance uuid d594d7c8-13f8-4e02-80d2-490469301cca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.857 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <uuid>13a5d640-9e2a-49d7-9f95-be18ebbe1cfe</uuid>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <name>instance-0000003e</name>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:name>tempest-tempest.common.compute-instance-1395231476-2</nova:name>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:29:59</nova:creationTime>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:user uuid="1f703cc8fd3b4cdabb2b154345f70a7c">tempest-MultipleCreateTestJSON-478861069-project-member</nova:user>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:project uuid="c25aa866d502481eb9410b7d92a1347b">tempest-MultipleCreateTestJSON-478861069</nova:project>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:port uuid="b94af62b-6c45-4269-94d9-b235090f4778">
Dec 13 08:30:00 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <system>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <entry name="serial">13a5d640-9e2a-49d7-9f95-be18ebbe1cfe</entry>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <entry name="uuid">13a5d640-9e2a-49d7-9f95-be18ebbe1cfe</entry>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </system>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <os>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </os>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <features>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </features>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk">
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk.config">
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:a6:af:90"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <target dev="tapb94af62b-6c"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/console.log" append="off"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <video>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </video>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:30:00 compute-0 nova_compute[248510]: </domain>
Dec 13 08:30:00 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.870 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Preparing to wait for external event network-vif-plugged-b94af62b-6c45-4269-94d9-b235090f4778 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.871 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.872 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.872 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.874 248514 DEBUG nova.virt.libvirt.vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1395231476',display_name='tempest-tempest.common.compute-instance-1395231476-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1395231476-2',id=62,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-jt3b6t8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:54Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=13a5d640-9e2a-49d7-9f95-be18ebbe1cfe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.875 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.877 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.877 248514 DEBUG os_vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.878 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.879 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.880 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.885 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.885 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb94af62b-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.886 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb94af62b-6c, col_values=(('external_ids', {'iface-id': 'b94af62b-6c45-4269-94d9-b235090f4778', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:af:90', 'vm-uuid': '13a5d640-9e2a-49d7-9f95-be18ebbe1cfe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:00 compute-0 NetworkManager[50376]: <info>  [1765614600.8894] manager: (tapb94af62b-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.896 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <uuid>d594d7c8-13f8-4e02-80d2-490469301cca</uuid>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <name>instance-0000003d</name>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:name>tempest-tempest.common.compute-instance-1395231476-1</nova:name>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:29:59</nova:creationTime>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:user uuid="1f703cc8fd3b4cdabb2b154345f70a7c">tempest-MultipleCreateTestJSON-478861069-project-member</nova:user>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:project uuid="c25aa866d502481eb9410b7d92a1347b">tempest-MultipleCreateTestJSON-478861069</nova:project>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <nova:port uuid="13e6d879-e4c6-4a44-a982-10a62782047e">
Dec 13 08:30:00 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <system>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <entry name="serial">d594d7c8-13f8-4e02-80d2-490469301cca</entry>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <entry name="uuid">d594d7c8-13f8-4e02-80d2-490469301cca</entry>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </system>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <os>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </os>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <features>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </features>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/d594d7c8-13f8-4e02-80d2-490469301cca_disk">
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/d594d7c8-13f8-4e02-80d2-490469301cca_disk.config">
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:00 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:89:36:41"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <target dev="tap13e6d879-e4"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/console.log" append="off"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <video>
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </video>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:30:00 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:30:00 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:30:00 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:30:00 compute-0 nova_compute[248510]: </domain>
Dec 13 08:30:00 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.896 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Preparing to wait for external event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.903 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.903 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.904 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.904 248514 DEBUG nova.virt.libvirt.vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1395231476',display_name='tempest-tempest.common.compute-instance-1395231476-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1395231476-1',id=61,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-jt3b6t8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:52Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=d594d7c8-13f8-4e02-80d2-490469301cca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.905 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.905 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.906 248514 DEBUG os_vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.908 248514 INFO os_vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c')
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.909 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.910 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.911 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.923 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.924 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13e6d879-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.924 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13e6d879-e4, col_values=(('external_ids', {'iface-id': '13e6d879-e4c6-4a44-a982-10a62782047e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:36:41', 'vm-uuid': 'd594d7c8-13f8-4e02-80d2-490469301cca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:00 compute-0 ceph-mon[76537]: pgmap v1976: 321 pgs: 321 active+clean; 161 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 129 op/s
Dec 13 08:30:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2624074968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2547013222' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2383553656' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:00 compute-0 NetworkManager[50376]: <info>  [1765614600.9316] manager: (tap13e6d879-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.937 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:00 compute-0 nova_compute[248510]: 2025-12-13 08:30:00.938 248514 INFO os_vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4')
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.075 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.076 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.076 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No VIF found with MAC fa:16:3e:89:36:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.077 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Using config drive
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.102 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.118 248514 DEBUG nova.compute.manager [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-changed-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.119 248514 DEBUG nova.compute.manager [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Refreshing instance network info cache due to event network-changed-b5058a06-7109-4ac0-96d8-7562e66bee25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.119 248514 DEBUG oslo_concurrency.lockutils [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.119 248514 DEBUG oslo_concurrency.lockutils [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.119 248514 DEBUG nova.network.neutron [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Refreshing network info cache for port b5058a06-7109-4ac0-96d8-7562e66bee25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.125 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.125 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.125 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No VIF found with MAC fa:16:3e:a6:af:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.126 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Using config drive
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.149 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:01 compute-0 nova_compute[248510]: 2025-12-13 08:30:01.166 248514 DEBUG nova.network.neutron [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Successfully created port: 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:30:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 210 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.5 MiB/s wr, 141 op/s
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.325 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Creating config drive at /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/disk.config
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.331 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxugaqzdb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.477 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxugaqzdb" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.506 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.511 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/disk.config 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.552 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Creating config drive at /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/disk.config
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.558 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb7ulmk4g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.668 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/disk.config 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.670 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Deleting local config drive /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/disk.config because it was imported into RBD.
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.696 248514 DEBUG nova.network.neutron [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Updated VIF entry in instance network info cache for port 13e6d879-e4c6-4a44-a982-10a62782047e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.697 248514 DEBUG nova.network.neutron [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Updating instance_info_cache with network_info: [{"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.703 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb7ulmk4g" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.735 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:02 compute-0 kernel: tapb94af62b-6c: entered promiscuous mode
Dec 13 08:30:02 compute-0 NetworkManager[50376]: <info>  [1765614602.7418] manager: (tapb94af62b-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.744 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/disk.config d594d7c8-13f8-4e02-80d2-490469301cca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:02 compute-0 ovn_controller[148476]: 2025-12-13T08:30:02Z|00559|binding|INFO|Claiming lport b94af62b-6c45-4269-94d9-b235090f4778 for this chassis.
Dec 13 08:30:02 compute-0 ovn_controller[148476]: 2025-12-13T08:30:02Z|00560|binding|INFO|b94af62b-6c45-4269-94d9-b235090f4778: Claiming fa:16:3e:a6:af:90 10.100.0.11
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.755 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:af:90 10.100.0.11'], port_security=['fa:16:3e:a6:af:90 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '13a5d640-9e2a-49d7-9f95-be18ebbe1cfe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b94af62b-6c45-4269-94d9-b235090f4778) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.757 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b94af62b-6c45-4269-94d9-b235090f4778 in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf bound to our chassis
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.758 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 08:30:02 compute-0 systemd-udevd[308956]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:30:02 compute-0 ovn_controller[148476]: 2025-12-13T08:30:02Z|00561|binding|INFO|Setting lport b94af62b-6c45-4269-94d9-b235090f4778 ovn-installed in OVS
Dec 13 08:30:02 compute-0 ovn_controller[148476]: 2025-12-13T08:30:02Z|00562|binding|INFO|Setting lport b94af62b-6c45-4269-94d9-b235090f4778 up in Southbound
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.772 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da4e29a8-15cb-4951-b212-8e268a9df2f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.776 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ca54d31-d1 in ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:30:02 compute-0 NetworkManager[50376]: <info>  [1765614602.7832] device (tapb94af62b-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:30:02 compute-0 NetworkManager[50376]: <info>  [1765614602.7842] device (tapb94af62b-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.785 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ca54d31-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.786 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e67e7972-e9d3-42a4-8b16-488014a3a694]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.788 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[22052584-b073-424b-96de-1459bfd59648]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:02 compute-0 systemd-machined[210538]: New machine qemu-69-instance-0000003e.
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.797 248514 DEBUG oslo_concurrency.lockutils [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d594d7c8-13f8-4e02-80d2-490469301cca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.803 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[8087200d-79a7-43c9-b1ea-e4b0a497e304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:02 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-0000003e.
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.821 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1d02d7-3934-4f95-bb34-77f109a164f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.853 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[40fa3d20-3bec-4bd5-aae9-bc91bdd2896e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:02 compute-0 NetworkManager[50376]: <info>  [1765614602.8617] manager: (tap2ca54d31-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/252)
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.862 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b412861-3492-4ae8-8a58-81405c616ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.907 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4847f6ba-1a14-4631-a36b-795ffd05f2a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.911 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b4952305-6a7e-430c-9adb-6b3d6f1d313e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.933 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/disk.config d594d7c8-13f8-4e02-80d2-490469301cca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:02 compute-0 nova_compute[248510]: 2025-12-13 08:30:02.934 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Deleting local config drive /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/disk.config because it was imported into RBD.
Dec 13 08:30:02 compute-0 ceph-mon[76537]: pgmap v1977: 321 pgs: 321 active+clean; 210 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.5 MiB/s wr, 141 op/s
Dec 13 08:30:02 compute-0 NetworkManager[50376]: <info>  [1765614602.9422] device (tap2ca54d31-d0): carrier: link connected
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.951 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3eff1e7e-7e46-496a-a821-22adc2afd548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.979 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1580ff-faec-4ba7-a9f3-1aa6cfb97341]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718015, 'reachable_time': 37588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309017, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:03 compute-0 NetworkManager[50376]: <info>  [1765614603.0070] manager: (tap13e6d879-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Dec 13 08:30:03 compute-0 kernel: tap13e6d879-e4: entered promiscuous mode
Dec 13 08:30:03 compute-0 systemd-udevd[309009]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.009 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4f2196-a602-496c-99c5-9e630b891163]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:9738'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718015, 'tstamp': 718015}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309022, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:03 compute-0 ovn_controller[148476]: 2025-12-13T08:30:03Z|00563|binding|INFO|Claiming lport 13e6d879-e4c6-4a44-a982-10a62782047e for this chassis.
Dec 13 08:30:03 compute-0 ovn_controller[148476]: 2025-12-13T08:30:03Z|00564|binding|INFO|13e6d879-e4c6-4a44-a982-10a62782047e: Claiming fa:16:3e:89:36:41 10.100.0.6
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.023 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:36:41 10.100.0.6'], port_security=['fa:16:3e:89:36:41 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd594d7c8-13f8-4e02-80d2-490469301cca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=13e6d879-e4c6-4a44-a982-10a62782047e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:03 compute-0 NetworkManager[50376]: <info>  [1765614603.0318] device (tap13e6d879-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:30:03 compute-0 ovn_controller[148476]: 2025-12-13T08:30:03Z|00565|binding|INFO|Setting lport 13e6d879-e4c6-4a44-a982-10a62782047e ovn-installed in OVS
Dec 13 08:30:03 compute-0 ovn_controller[148476]: 2025-12-13T08:30:03Z|00566|binding|INFO|Setting lport 13e6d879-e4c6-4a44-a982-10a62782047e up in Southbound
Dec 13 08:30:03 compute-0 NetworkManager[50376]: <info>  [1765614603.0326] device (tap13e6d879-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.032 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.041 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[346391a6-1e1b-4a49-a736-bffd1aef2874]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718015, 'reachable_time': 37588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309027, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:03 compute-0 systemd-machined[210538]: New machine qemu-70-instance-0000003d.
Dec 13 08:30:03 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-0000003d.
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.103 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[def907e2-26e6-4b4b-ac3f-e908b7677e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.127 248514 DEBUG nova.network.neutron [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Updated VIF entry in instance network info cache for port b94af62b-6c45-4269-94d9-b235090f4778. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.128 248514 DEBUG nova.network.neutron [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Updating instance_info_cache with network_info: [{"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.189 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a32f22-56e7-45e9-b932-6f70bb321195]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.195 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.195 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.198 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ca54d31-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.201 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:03 compute-0 NetworkManager[50376]: <info>  [1765614603.2017] manager: (tap2ca54d31-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Dec 13 08:30:03 compute-0 kernel: tap2ca54d31-d0: entered promiscuous mode
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.208 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ca54d31-d0, col_values=(('external_ids', {'iface-id': '327a65c7-a67a-4fc2-b067-82e72753566c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:03 compute-0 ovn_controller[148476]: 2025-12-13T08:30:03Z|00567|binding|INFO|Releasing lport 327a65c7-a67a-4fc2-b067-82e72753566c from this chassis (sb_readonly=0)
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.209 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.228 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.230 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bed17384-eea5-493d-9e7b-2ee07d97e977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.231 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.pid.haproxy
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.231 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'env', 'PROCESS_TAG=haproxy-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:30:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 219 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 156 op/s
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.521 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614603.5213628, 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.523 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] VM Started (Lifecycle Event)
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.527 248514 DEBUG oslo_concurrency.lockutils [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:03 compute-0 podman[309152]: 2025-12-13 08:30:03.65556111 +0000 UTC m=+0.059019337 container create ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 08:30:03 compute-0 systemd[1]: Started libpod-conmon-ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e.scope.
Dec 13 08:30:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:03 compute-0 podman[309152]: 2025-12-13 08:30:03.623195392 +0000 UTC m=+0.026653649 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be07c0ff00db9044e874028d58961a61aa814490eafdd0ea0e2424e549f83981/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:03 compute-0 podman[309152]: 2025-12-13 08:30:03.737633565 +0000 UTC m=+0.141091822 container init ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:30:03 compute-0 podman[309152]: 2025-12-13 08:30:03.743267614 +0000 UTC m=+0.146725841 container start ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 08:30:03 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [NOTICE]   (309172) : New worker (309174) forked
Dec 13 08:30:03 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [NOTICE]   (309172) : Loading success.
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.814 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 13e6d879-e4c6-4a44-a982-10a62782047e in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf unbound from our chassis
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.816 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.836 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[285396d1-0ecf-4866-b587-662c877f4eb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.867 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[84561108-984a-4b36-963e-0c503636ac79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.871 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1b648267-53b3-4395-a77f-66af7a127a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.911 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9e67e7-50cf-4f2f-b3cc-ce01d39c5f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.932 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc8b85c-33a0-47ee-9746-04608c18dc9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718015, 'reachable_time': 37588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309188, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:03 compute-0 ceph-mon[76537]: pgmap v1978: 321 pgs: 321 active+clean; 219 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 156 op/s
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.953 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f081fc74-617b-48df-92ab-1f26887a322a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718035, 'tstamp': 718035}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309189, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718039, 'tstamp': 718039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309189, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.955 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.959 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ca54d31-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.959 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.959 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.959 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ca54d31-d0, col_values=(('external_ids', {'iface-id': '327a65c7-a67a-4fc2-b067-82e72753566c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.960 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.963 248514 DEBUG nova.compute.manager [req-11c6279c-7867-4676-aecc-bf87b0c8756c req-6d238f14-e5c0-49e0-9635-6d2f40159a98 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Received event network-vif-plugged-b94af62b-6c45-4269-94d9-b235090f4778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.964 248514 DEBUG oslo_concurrency.lockutils [req-11c6279c-7867-4676-aecc-bf87b0c8756c req-6d238f14-e5c0-49e0-9635-6d2f40159a98 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.964 248514 DEBUG oslo_concurrency.lockutils [req-11c6279c-7867-4676-aecc-bf87b0c8756c req-6d238f14-e5c0-49e0-9635-6d2f40159a98 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.965 248514 DEBUG oslo_concurrency.lockutils [req-11c6279c-7867-4676-aecc-bf87b0c8756c req-6d238f14-e5c0-49e0-9635-6d2f40159a98 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.965 248514 DEBUG nova.compute.manager [req-11c6279c-7867-4676-aecc-bf87b0c8756c req-6d238f14-e5c0-49e0-9635-6d2f40159a98 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Processing event network-vif-plugged-b94af62b-6c45-4269-94d9-b235090f4778 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.966 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.970 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.974 248514 INFO nova.virt.libvirt.driver [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Instance spawned successfully.
Dec 13 08:30:03 compute-0 nova_compute[248510]: 2025-12-13 08:30:03.975 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.724 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.732 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.737 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.737 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.738 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.739 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.739 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.740 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.755 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.756 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614603.5226912, 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.757 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] VM Paused (Lifecycle Event)
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.793 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.798 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614603.5227277, d594d7c8-13f8-4e02-80d2-490469301cca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.799 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] VM Started (Lifecycle Event)
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.825 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.830 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614603.5227458, d594d7c8-13f8-4e02-80d2-490469301cca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.830 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] VM Paused (Lifecycle Event)
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.842 248514 INFO nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Took 10.18 seconds to spawn the instance on the hypervisor.
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.843 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.856 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.861 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.901 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.902 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614603.970301, 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.902 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] VM Resumed (Lifecycle Event)
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.935 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.939 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.950 248514 INFO nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Took 13.47 seconds to build instance.
Dec 13 08:30:04 compute-0 nova_compute[248510]: 2025-12-13 08:30:04.968 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:05 compute-0 nova_compute[248510]: 2025-12-13 08:30:05.025 248514 DEBUG nova.network.neutron [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updated VIF entry in instance network info cache for port b5058a06-7109-4ac0-96d8-7562e66bee25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:30:05 compute-0 nova_compute[248510]: 2025-12-13 08:30:05.026 248514 DEBUG nova.network.neutron [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:05 compute-0 nova_compute[248510]: 2025-12-13 08:30:05.029 248514 DEBUG nova.network.neutron [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Successfully updated port: 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:30:05 compute-0 nova_compute[248510]: 2025-12-13 08:30:05.083 248514 DEBUG oslo_concurrency.lockutils [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:05 compute-0 nova_compute[248510]: 2025-12-13 08:30:05.092 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "refresh_cache-850bda47-d7a0-4d8d-a048-258b8388cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:05 compute-0 nova_compute[248510]: 2025-12-13 08:30:05.092 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquired lock "refresh_cache-850bda47-d7a0-4d8d-a048-258b8388cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:05 compute-0 nova_compute[248510]: 2025-12-13 08:30:05.093 248514 DEBUG nova.network.neutron [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:30:05 compute-0 nova_compute[248510]: 2025-12-13 08:30:05.193 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:30:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 227 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 182 op/s
Dec 13 08:30:05 compute-0 nova_compute[248510]: 2025-12-13 08:30:05.631 248514 DEBUG nova.network.neutron [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:30:05 compute-0 nova_compute[248510]: 2025-12-13 08:30:05.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:06 compute-0 nova_compute[248510]: 2025-12-13 08:30:06.135 248514 DEBUG nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Received event network-vif-plugged-b94af62b-6c45-4269-94d9-b235090f4778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:06 compute-0 nova_compute[248510]: 2025-12-13 08:30:06.136 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:06 compute-0 nova_compute[248510]: 2025-12-13 08:30:06.137 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:06 compute-0 nova_compute[248510]: 2025-12-13 08:30:06.137 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:06 compute-0 nova_compute[248510]: 2025-12-13 08:30:06.137 248514 DEBUG nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] No waiting events found dispatching network-vif-plugged-b94af62b-6c45-4269-94d9-b235090f4778 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:06 compute-0 nova_compute[248510]: 2025-12-13 08:30:06.137 248514 WARNING nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Received unexpected event network-vif-plugged-b94af62b-6c45-4269-94d9-b235090f4778 for instance with vm_state active and task_state None.
Dec 13 08:30:06 compute-0 nova_compute[248510]: 2025-12-13 08:30:06.138 248514 DEBUG nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-changed-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:06 compute-0 nova_compute[248510]: 2025-12-13 08:30:06.138 248514 DEBUG nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Refreshing instance network info cache due to event network-changed-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:30:06 compute-0 nova_compute[248510]: 2025-12-13 08:30:06.138 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-850bda47-d7a0-4d8d-a048-258b8388cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:06 compute-0 ceph-mon[76537]: pgmap v1979: 321 pgs: 321 active+clean; 227 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 182 op/s
Dec 13 08:30:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 227 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 155 op/s
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.661 248514 DEBUG nova.network.neutron [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Updating instance_info_cache with network_info: [{"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.687 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Releasing lock "refresh_cache-850bda47-d7a0-4d8d-a048-258b8388cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.688 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Instance network_info: |[{"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.689 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-850bda47-d7a0-4d8d-a048-258b8388cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.689 248514 DEBUG nova.network.neutron [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Refreshing network info cache for port 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.692 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Start _get_guest_xml network_info=[{"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.697 248514 WARNING nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.703 248514 DEBUG nova.virt.libvirt.host [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.704 248514 DEBUG nova.virt.libvirt.host [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.706 248514 DEBUG nova.virt.libvirt.host [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.707 248514 DEBUG nova.virt.libvirt.host [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.707 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.707 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.708 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.709 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.709 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.709 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.710 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.711 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.711 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.711 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.712 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.712 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:30:07 compute-0 nova_compute[248510]: 2025-12-13 08:30:07.715 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.232 248514 DEBUG nova.compute.manager [req-391ee370-8154-4f11-83cd-e1adba8c2954 req-5b5b0d7a-a515-43cd-a430-371b1b55c7f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.233 248514 DEBUG oslo_concurrency.lockutils [req-391ee370-8154-4f11-83cd-e1adba8c2954 req-5b5b0d7a-a515-43cd-a430-371b1b55c7f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.234 248514 DEBUG oslo_concurrency.lockutils [req-391ee370-8154-4f11-83cd-e1adba8c2954 req-5b5b0d7a-a515-43cd-a430-371b1b55c7f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.234 248514 DEBUG oslo_concurrency.lockutils [req-391ee370-8154-4f11-83cd-e1adba8c2954 req-5b5b0d7a-a515-43cd-a430-371b1b55c7f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.235 248514 DEBUG nova.compute.manager [req-391ee370-8154-4f11-83cd-e1adba8c2954 req-5b5b0d7a-a515-43cd-a430-371b1b55c7f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Processing event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.237 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.242 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614608.2415438, d594d7c8-13f8-4e02-80d2-490469301cca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.243 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] VM Resumed (Lifecycle Event)
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.246 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.252 248514 INFO nova.virt.libvirt.driver [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Instance spawned successfully.
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.253 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.271 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.291 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.297 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.298 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.299 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.300 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.300 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.301 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357585265' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.335 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.336 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.362 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.367 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:08 compute-0 ovn_controller[148476]: 2025-12-13T08:30:08Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 08:30:08 compute-0 ovn_controller[148476]: 2025-12-13T08:30:08Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.415 248514 INFO nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Took 15.24 seconds to spawn the instance on the hypervisor.
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.416 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:08 compute-0 ceph-mon[76537]: pgmap v1980: 321 pgs: 321 active+clean; 227 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 155 op/s
Dec 13 08:30:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/357585265' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.676 248514 INFO nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Took 17.24 seconds to build instance.
Dec 13 08:30:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4255884628' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.951 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.953 248514 DEBUG nova.virt.libvirt.vif [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-461167032',display_name='tempest-ImagesNegativeTestJSON-server-461167032',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-461167032',id=63,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8fd9d373def4437880ac432124a30a67',ramdisk_id='',reservation_id='r-ne053sg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-865869987',owner_user_name='tempest-ImagesNegativeTestJSON-865869987-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:58Z,user_data=None,user_id='865bfe2430ea4f9ca639a4f89c86899d',uuid=850bda47-d7a0-4d8d-a048-258b8388cab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.953 248514 DEBUG nova.network.os_vif_util [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Converting VIF {"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.954 248514 DEBUG nova.network.os_vif_util [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:08 compute-0 nova_compute[248510]: 2025-12-13 08:30:08.956 248514 DEBUG nova.objects.instance [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 850bda47-d7a0-4d8d-a048-258b8388cab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:30:09
Dec 13 08:30:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:30:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:30:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'default.rgw.control', '.rgw.root', 'backups', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec 13 08:30:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.420 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:30:09 compute-0 nova_compute[248510]:   <uuid>850bda47-d7a0-4d8d-a048-258b8388cab7</uuid>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   <name>instance-0000003f</name>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <nova:name>tempest-ImagesNegativeTestJSON-server-461167032</nova:name>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:30:07</nova:creationTime>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:30:09 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:30:09 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:30:09 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:30:09 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:30:09 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:30:09 compute-0 nova_compute[248510]:         <nova:user uuid="865bfe2430ea4f9ca639a4f89c86899d">tempest-ImagesNegativeTestJSON-865869987-project-member</nova:user>
Dec 13 08:30:09 compute-0 nova_compute[248510]:         <nova:project uuid="8fd9d373def4437880ac432124a30a67">tempest-ImagesNegativeTestJSON-865869987</nova:project>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:30:09 compute-0 nova_compute[248510]:         <nova:port uuid="554f8172-ce8f-4d4c-a511-9d7bfc8ecb45">
Dec 13 08:30:09 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <system>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <entry name="serial">850bda47-d7a0-4d8d-a048-258b8388cab7</entry>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <entry name="uuid">850bda47-d7a0-4d8d-a048-258b8388cab7</entry>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     </system>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   <os>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   </os>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   <features>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   </features>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/850bda47-d7a0-4d8d-a048-258b8388cab7_disk">
Dec 13 08:30:09 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:09 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/850bda47-d7a0-4d8d-a048-258b8388cab7_disk.config">
Dec 13 08:30:09 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:09 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:e7:c0:75"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <target dev="tap554f8172-ce"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/console.log" append="off"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <video>
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     </video>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:30:09 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:30:09 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:30:09 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:30:09 compute-0 nova_compute[248510]: </domain>
Dec 13 08:30:09 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.424 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Preparing to wait for external event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.424 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.426 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.427 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.429 248514 DEBUG nova.virt.libvirt.vif [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-461167032',display_name='tempest-ImagesNegativeTestJSON-server-461167032',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-461167032',id=63,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8fd9d373def4437880ac432124a30a67',ramdisk_id='',reservation_id='r-ne053sg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-865869987',owner_user_name='tempest-ImagesNegativeTestJSON-865869987-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:58Z,user_data=None,user_id='865bfe2430ea4f9ca639a4f89c86899d',uuid=850bda47-d7a0-4d8d-a048-258b8388cab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.430 248514 DEBUG nova.network.os_vif_util [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Converting VIF {"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.433 248514 DEBUG nova.network.os_vif_util [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.434 248514 DEBUG os_vif [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.440 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.442 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.444 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.450 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap554f8172-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.451 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap554f8172-ce, col_values=(('external_ids', {'iface-id': '554f8172-ce8f-4d4c-a511-9d7bfc8ecb45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:c0:75', 'vm-uuid': '850bda47-d7a0-4d8d-a048-258b8388cab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:09 compute-0 NetworkManager[50376]: <info>  [1765614609.4683] manager: (tap554f8172-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.467 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.476 248514 INFO os_vif [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce')
Dec 13 08:30:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 244 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.4 MiB/s wr, 234 op/s
Dec 13 08:30:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4255884628' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.722 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.723 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.723 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] No VIF found with MAC fa:16:3e:e7:c0:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.724 248514 INFO nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Using config drive
Dec 13 08:30:09 compute-0 nova_compute[248510]: 2025-12-13 08:30:09.750 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.224 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.258 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.258 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid d594d7c8-13f8-4e02-80d2-490469301cca _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.259 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.259 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 850bda47-d7a0-4d8d-a048-258b8388cab7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.259 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.259 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.259 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.260 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "d594d7c8-13f8-4e02-80d2-490469301cca" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.260 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.260 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.260 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.277 248514 INFO nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Creating config drive at /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/disk.config
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.282 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpurlkq8l4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.330 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.332 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.332 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "d594d7c8-13f8-4e02-80d2-490469301cca" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.435 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpurlkq8l4" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.472 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.477 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/disk.config 850bda47-d7a0-4d8d-a048-258b8388cab7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:10 compute-0 ceph-mon[76537]: pgmap v1981: 321 pgs: 321 active+clean; 244 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.4 MiB/s wr, 234 op/s
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:30:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.654 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/disk.config 850bda47-d7a0-4d8d-a048-258b8388cab7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.655 248514 INFO nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Deleting local config drive /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/disk.config because it was imported into RBD.
Dec 13 08:30:10 compute-0 NetworkManager[50376]: <info>  [1765614610.7124] manager: (tap554f8172-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Dec 13 08:30:10 compute-0 kernel: tap554f8172-ce: entered promiscuous mode
Dec 13 08:30:10 compute-0 ovn_controller[148476]: 2025-12-13T08:30:10Z|00568|binding|INFO|Claiming lport 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 for this chassis.
Dec 13 08:30:10 compute-0 ovn_controller[148476]: 2025-12-13T08:30:10Z|00569|binding|INFO|554f8172-ce8f-4d4c-a511-9d7bfc8ecb45: Claiming fa:16:3e:e7:c0:75 10.100.0.8
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.726 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:c0:75 10.100.0.8'], port_security=['fa:16:3e:e7:c0:75 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '850bda47-d7a0-4d8d-a048-258b8388cab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66bfc0e0-7de5-436f-90fb-b5e591519781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8fd9d373def4437880ac432124a30a67', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccef7c54-bce8-4925-91bd-ad28ca3c3b7f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb2b3e88-e3d3-490d-ba95-e13073ca0a5b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.728 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 in datapath 66bfc0e0-7de5-436f-90fb-b5e591519781 bound to our chassis
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.730 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 66bfc0e0-7de5-436f-90fb-b5e591519781
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.743 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[349f1e67-d485-47a4-8e53-ed88306fd5a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.744 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap66bfc0e0-71 in ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.746 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap66bfc0e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.746 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[589877ff-1f16-49a7-a686-8e946bdffe6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf17bef6-01b0-4419-8db0-13094171b174]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:10 compute-0 systemd-udevd[309328]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.749 248514 DEBUG nova.network.neutron [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Updated VIF entry in instance network info cache for port 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.750 248514 DEBUG nova.network.neutron [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Updating instance_info_cache with network_info: [{"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:10 compute-0 ovn_controller[148476]: 2025-12-13T08:30:10Z|00570|binding|INFO|Setting lport 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 ovn-installed in OVS
Dec 13 08:30:10 compute-0 ovn_controller[148476]: 2025-12-13T08:30:10Z|00571|binding|INFO|Setting lport 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 up in Southbound
Dec 13 08:30:10 compute-0 NetworkManager[50376]: <info>  [1765614610.7688] device (tap554f8172-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.765 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[af8db099-a8b7-4a05-8784-6a5fa1472ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:10 compute-0 NetworkManager[50376]: <info>  [1765614610.7698] device (tap554f8172-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:30:10 compute-0 systemd-machined[210538]: New machine qemu-71-instance-0000003f.
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.868 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-850bda47-d7a0-4d8d-a048-258b8388cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.870 248514 DEBUG nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.871 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.871 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.872 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.872 248514 DEBUG nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] No waiting events found dispatching network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.872 248514 WARNING nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received unexpected event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e for instance with vm_state building and task_state spawning.
Dec 13 08:30:10 compute-0 nova_compute[248510]: 2025-12-13 08:30:10.873 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:10 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-0000003f.
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.881 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[69ffe9e6-fcce-4c53-b0a6-c90e731a2807]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.919 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2e3c47-1e3e-4d72-b487-1d4a1d934fb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:10 compute-0 systemd-udevd[309332]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:30:10 compute-0 NetworkManager[50376]: <info>  [1765614610.9318] manager: (tap66bfc0e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/257)
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.930 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1f6905-44fa-470b-b8e7-735f46f8fdb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.975 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[417269c5-720c-49e8-8f5c-5a081db737a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.980 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[93630a69-c965-4f35-a13b-b6b78f20c606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:11 compute-0 NetworkManager[50376]: <info>  [1765614611.0090] device (tap66bfc0e0-70): carrier: link connected
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.016 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc9745c-7b72-4c10-a222-00558f677948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.036 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[95d18016-e594-4779-9bbe-cf1d2bc6485e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66bfc0e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:ed:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718822, 'reachable_time': 27903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309361, 'error': None, 'target': 'ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.060 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[06d15214-525e-477b-b6bb-7de8218492ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:ed50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718822, 'tstamp': 718822}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309362, 'error': None, 'target': 'ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.086 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7a4ad3-2636-470e-b146-6833e9f7a930]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66bfc0e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:ed:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718822, 'reachable_time': 27903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309363, 'error': None, 'target': 'ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.127 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[be8b7a78-0062-427e-adc0-edad04d7a740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.194 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5047986f-7d66-4b3e-a4c1-dad9f1499b8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.196 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66bfc0e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.196 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.197 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66bfc0e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:11 compute-0 nova_compute[248510]: 2025-12-13 08:30:11.199 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:11 compute-0 NetworkManager[50376]: <info>  [1765614611.1997] manager: (tap66bfc0e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Dec 13 08:30:11 compute-0 kernel: tap66bfc0e0-70: entered promiscuous mode
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.202 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap66bfc0e0-70, col_values=(('external_ids', {'iface-id': '679b65d4-8c76-4539-9e29-7edabab1f4fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:11 compute-0 ovn_controller[148476]: 2025-12-13T08:30:11Z|00572|binding|INFO|Releasing lport 679b65d4-8c76-4539-9e29-7edabab1f4fa from this chassis (sb_readonly=0)
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.222 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/66bfc0e0-7de5-436f-90fb-b5e591519781.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/66bfc0e0-7de5-436f-90fb-b5e591519781.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:30:11 compute-0 nova_compute[248510]: 2025-12-13 08:30:11.223 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.225 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ca676fc7-1e89-49f5-b031-d52981b2932a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.226 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-66bfc0e0-7de5-436f-90fb-b5e591519781
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/66bfc0e0-7de5-436f-90fb-b5e591519781.pid.haproxy
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 66bfc0e0-7de5-436f-90fb-b5e591519781
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:30:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.227 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781', 'env', 'PROCESS_TAG=haproxy-66bfc0e0-7de5-436f-90fb-b5e591519781', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/66bfc0e0-7de5-436f-90fb-b5e591519781.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:30:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 256 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.3 MiB/s wr, 215 op/s
Dec 13 08:30:11 compute-0 podman[309395]: 2025-12-13 08:30:11.615944711 +0000 UTC m=+0.057386067 container create 7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 08:30:11 compute-0 systemd[1]: Started libpod-conmon-7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a.scope.
Dec 13 08:30:11 compute-0 podman[309395]: 2025-12-13 08:30:11.585355366 +0000 UTC m=+0.026796722 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:30:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72003b56c84a1a54322c2566a9afa6d2b3bf57da10dc3d13cd92732eb061ede0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:11 compute-0 podman[309395]: 2025-12-13 08:30:11.71397553 +0000 UTC m=+0.155416866 container init 7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:30:11 compute-0 podman[309395]: 2025-12-13 08:30:11.723408203 +0000 UTC m=+0.164849539 container start 7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:30:11 compute-0 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [NOTICE]   (309414) : New worker (309430) forked
Dec 13 08:30:11 compute-0 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [NOTICE]   (309414) : Loading success.
Dec 13 08:30:11 compute-0 nova_compute[248510]: 2025-12-13 08:30:11.948 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614611.9481275, 850bda47-d7a0-4d8d-a048-258b8388cab7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:11 compute-0 nova_compute[248510]: 2025-12-13 08:30:11.949 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] VM Started (Lifecycle Event)
Dec 13 08:30:11 compute-0 nova_compute[248510]: 2025-12-13 08:30:11.979 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:11 compute-0 nova_compute[248510]: 2025-12-13 08:30:11.984 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614611.9515266, 850bda47-d7a0-4d8d-a048-258b8388cab7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:11 compute-0 nova_compute[248510]: 2025-12-13 08:30:11.985 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] VM Paused (Lifecycle Event)
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.016 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.021 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.046 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:12 compute-0 ceph-mon[76537]: pgmap v1982: 321 pgs: 321 active+clean; 256 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.3 MiB/s wr, 215 op/s
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.933 248514 DEBUG nova.compute.manager [req-079bad9e-0aae-4d49-893f-d4233cf354cf req-535d6e41-0566-4d75-ac8f-7e13f4fe00a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.934 248514 DEBUG oslo_concurrency.lockutils [req-079bad9e-0aae-4d49-893f-d4233cf354cf req-535d6e41-0566-4d75-ac8f-7e13f4fe00a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.935 248514 DEBUG oslo_concurrency.lockutils [req-079bad9e-0aae-4d49-893f-d4233cf354cf req-535d6e41-0566-4d75-ac8f-7e13f4fe00a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.935 248514 DEBUG oslo_concurrency.lockutils [req-079bad9e-0aae-4d49-893f-d4233cf354cf req-535d6e41-0566-4d75-ac8f-7e13f4fe00a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.936 248514 DEBUG nova.compute.manager [req-079bad9e-0aae-4d49-893f-d4233cf354cf req-535d6e41-0566-4d75-ac8f-7e13f4fe00a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Processing event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.937 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.942 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.943 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614612.9411068, 850bda47-d7a0-4d8d-a048-258b8388cab7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.943 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] VM Resumed (Lifecycle Event)
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.948 248514 INFO nova.virt.libvirt.driver [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Instance spawned successfully.
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.948 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.975 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.983 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.983 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.984 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.985 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.985 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.986 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:12 compute-0 nova_compute[248510]: 2025-12-13 08:30:12.991 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.023 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.053 248514 INFO nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Took 14.44 seconds to spawn the instance on the hypervisor.
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.053 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.114 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.115 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.115 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.115 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.115 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.117 248514 INFO nova.compute.manager [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Terminating instance
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.118 248514 DEBUG nova.compute.manager [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.129 248514 INFO nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Took 15.73 seconds to build instance.
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.151 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.151 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.152 248514 INFO nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.152 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:13 compute-0 kernel: tap13e6d879-e4 (unregistering): left promiscuous mode
Dec 13 08:30:13 compute-0 NetworkManager[50376]: <info>  [1765614613.1768] device (tap13e6d879-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:30:13 compute-0 ovn_controller[148476]: 2025-12-13T08:30:13Z|00573|binding|INFO|Releasing lport 13e6d879-e4c6-4a44-a982-10a62782047e from this chassis (sb_readonly=0)
Dec 13 08:30:13 compute-0 ovn_controller[148476]: 2025-12-13T08:30:13Z|00574|binding|INFO|Setting lport 13e6d879-e4c6-4a44-a982-10a62782047e down in Southbound
Dec 13 08:30:13 compute-0 ovn_controller[148476]: 2025-12-13T08:30:13Z|00575|binding|INFO|Removing iface tap13e6d879-e4 ovn-installed in OVS
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.182 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.188 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:36:41 10.100.0.6'], port_security=['fa:16:3e:89:36:41 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd594d7c8-13f8-4e02-80d2-490469301cca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=13e6d879-e4c6-4a44-a982-10a62782047e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.191 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 13e6d879-e4c6-4a44-a982-10a62782047e in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf unbound from our chassis
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.196 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.213 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.220 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b908be6-ecc7-423d-915e-17b542810c7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Dec 13 08:30:13 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Consumed 5.413s CPU time.
Dec 13 08:30:13 compute-0 systemd-machined[210538]: Machine qemu-70-instance-0000003d terminated.
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.254 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9e456535-8fc5-4042-afa7-44fe5ebdc947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.259 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[15fd2139-7b62-473f-8cca-71e941b719ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.292 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad3d1df-bcf5-47c1-b05a-7bfa6fb7c23d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.295 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.296 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.296 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.297 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.297 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.298 248514 INFO nova.compute.manager [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Terminating instance
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.300 248514 DEBUG nova.compute.manager [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.316 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d820a4fd-642d-483c-8abf-b9dc643d11dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718015, 'reachable_time': 37588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309477, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.340 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1b61de-565e-4bd7-ae9e-6685a5a90829]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718035, 'tstamp': 718035}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309478, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718039, 'tstamp': 718039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309478, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.344 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 kernel: tapb94af62b-6c (unregistering): left promiscuous mode
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.358 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 NetworkManager[50376]: <info>  [1765614613.3596] device (tapb94af62b-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.361 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ca54d31-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.361 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.362 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ca54d31-d0, col_values=(('external_ids', {'iface-id': '327a65c7-a67a-4fc2-b067-82e72753566c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.362 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.366 248514 INFO nova.virt.libvirt.driver [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Instance destroyed successfully.
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.368 248514 DEBUG nova.objects.instance [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'resources' on Instance uuid d594d7c8-13f8-4e02-80d2-490469301cca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.371 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 ovn_controller[148476]: 2025-12-13T08:30:13Z|00576|binding|INFO|Releasing lport b94af62b-6c45-4269-94d9-b235090f4778 from this chassis (sb_readonly=0)
Dec 13 08:30:13 compute-0 ovn_controller[148476]: 2025-12-13T08:30:13Z|00577|binding|INFO|Setting lport b94af62b-6c45-4269-94d9-b235090f4778 down in Southbound
Dec 13 08:30:13 compute-0 ovn_controller[148476]: 2025-12-13T08:30:13Z|00578|binding|INFO|Removing iface tapb94af62b-6c ovn-installed in OVS
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.373 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.381 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:af:90 10.100.0.11'], port_security=['fa:16:3e:a6:af:90 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '13a5d640-9e2a-49d7-9f95-be18ebbe1cfe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b94af62b-6c45-4269-94d9-b235090f4778) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.385 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b94af62b-6c45-4269-94d9-b235090f4778 in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf unbound from our chassis
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.389 248514 DEBUG nova.virt.libvirt.vif [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1395231476',display_name='tempest-tempest.common.compute-instance-1395231476-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1395231476-1',id=61,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:30:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-jt3b6t8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:08Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=d594d7c8-13f8-4e02-80d2-490469301cca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.391 248514 DEBUG nova.network.os_vif_util [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.391 248514 DEBUG nova.network.os_vif_util [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.394 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.396 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9379b9-368e-485c-b401-2c7530336a66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.397 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf namespace which is not needed anymore
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.407 248514 DEBUG os_vif [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.412 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13e6d879-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.420 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.422 248514 INFO os_vif [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4')
Dec 13 08:30:13 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Dec 13 08:30:13 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Consumed 10.133s CPU time.
Dec 13 08:30:13 compute-0 systemd-machined[210538]: Machine qemu-69-instance-0000003e terminated.
Dec 13 08:30:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 258 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.3 MiB/s wr, 201 op/s
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.543 248514 INFO nova.virt.libvirt.driver [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Instance destroyed successfully.
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.544 248514 DEBUG nova.objects.instance [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'resources' on Instance uuid 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.569 248514 DEBUG nova.virt.libvirt.vif [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1395231476',display_name='tempest-tempest.common.compute-instance-1395231476-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1395231476-2',id=62,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-12-13T08:30:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-jt3b6t8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:04Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=13a5d640-9e2a-49d7-9f95-be18ebbe1cfe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.570 248514 DEBUG nova.network.os_vif_util [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.571 248514 DEBUG nova.network.os_vif_util [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.572 248514 DEBUG os_vif [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.574 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb94af62b-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.578 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.578 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.581 248514 INFO os_vif [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c')
Dec 13 08:30:13 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [NOTICE]   (309172) : haproxy version is 2.8.14-c23fe91
Dec 13 08:30:13 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [NOTICE]   (309172) : path to executable is /usr/sbin/haproxy
Dec 13 08:30:13 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [WARNING]  (309172) : Exiting Master process...
Dec 13 08:30:13 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [WARNING]  (309172) : Exiting Master process...
Dec 13 08:30:13 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [ALERT]    (309172) : Current worker (309174) exited with code 143 (Terminated)
Dec 13 08:30:13 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [WARNING]  (309172) : All workers exited. Exiting... (0)
Dec 13 08:30:13 compute-0 systemd[1]: libpod-ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e.scope: Deactivated successfully.
Dec 13 08:30:13 compute-0 podman[309530]: 2025-12-13 08:30:13.603533541 +0000 UTC m=+0.076814026 container died ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e-userdata-shm.mount: Deactivated successfully.
Dec 13 08:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-be07c0ff00db9044e874028d58961a61aa814490eafdd0ea0e2424e549f83981-merged.mount: Deactivated successfully.
Dec 13 08:30:13 compute-0 podman[309530]: 2025-12-13 08:30:13.655550655 +0000 UTC m=+0.128831140 container cleanup ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:30:13 compute-0 systemd[1]: libpod-conmon-ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e.scope: Deactivated successfully.
Dec 13 08:30:13 compute-0 podman[309586]: 2025-12-13 08:30:13.737965408 +0000 UTC m=+0.058498354 container remove ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.749 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57ed40af-edd3-420a-a71c-02fb2a5ab888]: (4, ('Sat Dec 13 08:30:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf (ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e)\nccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e\nSat Dec 13 08:30:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf (ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e)\nccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.752 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a4f006-2375-41dd-a986-4043a3c04135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.754 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 kernel: tap2ca54d31-d0: left promiscuous mode
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.780 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[617daa5c-639e-4456-83ce-e49e1e34b575]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.802 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e9886f-275a-42e9-b3d0-f560944b24ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.803 248514 INFO nova.virt.libvirt.driver [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Deleting instance files /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca_del
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.804 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8344d4ec-e07e-4959-a7c4-0a129e42c411]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.805 248514 INFO nova.virt.libvirt.driver [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Deletion of /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca_del complete
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.828 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[25e76bb5-8fed-47dd-aea6-6ed5ae1d5d15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718006, 'reachable_time': 18582, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309600, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d2ca54d31\x2dd7ab\x2d4904\x2da0d6\x2d4e2970bb54bf.mount: Deactivated successfully.
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.833 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:30:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.833 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1cf3a3-ee05-4b7f-ae4f-61902620209a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.912 248514 INFO nova.virt.libvirt.driver [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Deleting instance files /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_del
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.913 248514 INFO nova.virt.libvirt.driver [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Deletion of /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_del complete
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.920 248514 INFO nova.compute.manager [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Took 0.80 seconds to destroy the instance on the hypervisor.
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.920 248514 DEBUG oslo.service.loopingcall [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.921 248514 DEBUG nova.compute.manager [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.922 248514 DEBUG nova.network.neutron [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.972 248514 INFO nova.compute.manager [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Took 0.67 seconds to destroy the instance on the hypervisor.
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.972 248514 DEBUG oslo.service.loopingcall [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.973 248514 DEBUG nova.compute.manager [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:30:13 compute-0 nova_compute[248510]: 2025-12-13 08:30:13.973 248514 DEBUG nova.network.neutron [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:30:14 compute-0 ceph-mon[76537]: pgmap v1983: 321 pgs: 321 active+clean; 258 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.3 MiB/s wr, 201 op/s
Dec 13 08:30:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:30:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1776667451' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:30:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:30:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1776667451' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.145 248514 DEBUG nova.network.neutron [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.169 248514 INFO nova.compute.manager [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Took 1.20 seconds to deallocate network for instance.
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.199 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.228 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.228 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.278 248514 DEBUG nova.compute.manager [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.280 248514 DEBUG oslo_concurrency.lockutils [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.281 248514 DEBUG oslo_concurrency.lockutils [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.282 248514 DEBUG oslo_concurrency.lockutils [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.282 248514 DEBUG nova.compute.manager [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] No waiting events found dispatching network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.282 248514 WARNING nova.compute.manager [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received unexpected event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 for instance with vm_state active and task_state None.
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.283 248514 DEBUG nova.compute.manager [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-vif-unplugged-13e6d879-e4c6-4a44-a982-10a62782047e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.284 248514 DEBUG oslo_concurrency.lockutils [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.285 248514 DEBUG oslo_concurrency.lockutils [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.286 248514 DEBUG oslo_concurrency.lockutils [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:15 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.291 248514 DEBUG nova.compute.manager [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] No waiting events found dispatching network-vif-unplugged-13e6d879-e4c6-4a44-a982-10a62782047e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.292 248514 DEBUG nova.compute.manager [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-vif-unplugged-13e6d879-e4c6-4a44-a982-10a62782047e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.297 248514 DEBUG nova.compute.manager [req-5b1b83b5-e444-4c81-a553-a9106891022e req-6fc4a89a-ced6-48e6-9489-9df6a63b3a29 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Received event network-vif-deleted-b94af62b-6c45-4269-94d9-b235090f4778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.369 248514 DEBUG oslo_concurrency.processutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.461 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.462 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.462 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.462 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.463 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.465 248514 INFO nova.compute.manager [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Terminating instance
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.466 248514 DEBUG nova.compute.manager [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:30:15 compute-0 kernel: tap554f8172-ce (unregistering): left promiscuous mode
Dec 13 08:30:15 compute-0 NetworkManager[50376]: <info>  [1765614615.5048] device (tap554f8172-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:30:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 189 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.7 MiB/s wr, 292 op/s
Dec 13 08:30:15 compute-0 ovn_controller[148476]: 2025-12-13T08:30:15Z|00579|binding|INFO|Releasing lport 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 from this chassis (sb_readonly=0)
Dec 13 08:30:15 compute-0 ovn_controller[148476]: 2025-12-13T08:30:15Z|00580|binding|INFO|Setting lport 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 down in Southbound
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.510 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:15 compute-0 ovn_controller[148476]: 2025-12-13T08:30:15Z|00581|binding|INFO|Removing iface tap554f8172-ce ovn-installed in OVS
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.521 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:c0:75 10.100.0.8'], port_security=['fa:16:3e:e7:c0:75 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '850bda47-d7a0-4d8d-a048-258b8388cab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66bfc0e0-7de5-436f-90fb-b5e591519781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8fd9d373def4437880ac432124a30a67', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccef7c54-bce8-4925-91bd-ad28ca3c3b7f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb2b3e88-e3d3-490d-ba95-e13073ca0a5b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.523 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 in datapath 66bfc0e0-7de5-436f-90fb-b5e591519781 unbound from our chassis
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.525 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 66bfc0e0-7de5-436f-90fb-b5e591519781, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.526 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[911a1804-dc34-41c8-8722-dda130d8e2c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.526 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781 namespace which is not needed anymore
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:15 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Dec 13 08:30:15 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003f.scope: Consumed 3.561s CPU time.
Dec 13 08:30:15 compute-0 systemd-machined[210538]: Machine qemu-71-instance-0000003f terminated.
Dec 13 08:30:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1776667451' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:30:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1776667451' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:30:15 compute-0 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [NOTICE]   (309414) : haproxy version is 2.8.14-c23fe91
Dec 13 08:30:15 compute-0 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [NOTICE]   (309414) : path to executable is /usr/sbin/haproxy
Dec 13 08:30:15 compute-0 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [WARNING]  (309414) : Exiting Master process...
Dec 13 08:30:15 compute-0 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [ALERT]    (309414) : Current worker (309430) exited with code 143 (Terminated)
Dec 13 08:30:15 compute-0 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [WARNING]  (309414) : All workers exited. Exiting... (0)
Dec 13 08:30:15 compute-0 systemd[1]: libpod-7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a.scope: Deactivated successfully.
Dec 13 08:30:15 compute-0 podman[309648]: 2025-12-13 08:30:15.678294824 +0000 UTC m=+0.047020042 container died 7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.703 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.712 248514 INFO nova.virt.libvirt.driver [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Instance destroyed successfully.
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.713 248514 DEBUG nova.objects.instance [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lazy-loading 'resources' on Instance uuid 850bda47-d7a0-4d8d-a048-258b8388cab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a-userdata-shm.mount: Deactivated successfully.
Dec 13 08:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-72003b56c84a1a54322c2566a9afa6d2b3bf57da10dc3d13cd92732eb061ede0-merged.mount: Deactivated successfully.
Dec 13 08:30:15 compute-0 podman[309648]: 2025-12-13 08:30:15.735083145 +0000 UTC m=+0.103808363 container cleanup 7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.735 248514 DEBUG nova.virt.libvirt.vif [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-461167032',display_name='tempest-ImagesNegativeTestJSON-server-461167032',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-461167032',id=63,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:30:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8fd9d373def4437880ac432124a30a67',ramdisk_id='',reservation_id='r-ne053sg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-865869987',owner_user_name='tempest-ImagesNegativeTestJSON-865869987-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:13Z,user_data=None,user_id='865bfe2430ea4f9ca639a4f89c86899d',uuid=850bda47-d7a0-4d8d-a048-258b8388cab7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.736 248514 DEBUG nova.network.os_vif_util [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Converting VIF {"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.736 248514 DEBUG nova.network.os_vif_util [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.737 248514 DEBUG os_vif [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.739 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.740 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap554f8172-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:15 compute-0 systemd[1]: libpod-conmon-7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a.scope: Deactivated successfully.
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.750 248514 INFO os_vif [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce')
Dec 13 08:30:15 compute-0 podman[309684]: 2025-12-13 08:30:15.82039349 +0000 UTC m=+0.057610443 container remove 7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.829 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1610c90e-9f57-4127-85dd-d60c1e2745c3]: (4, ('Sat Dec 13 08:30:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781 (7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a)\n7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a\nSat Dec 13 08:30:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781 (7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a)\n7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.832 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78590b6b-e342-458e-b530-26e53961a773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.834 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66bfc0e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.836 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:15 compute-0 kernel: tap66bfc0e0-70: left promiscuous mode
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.867 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd309a3-0015-4e38-9df5-4b00aba5bc28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.882 248514 DEBUG nova.network.neutron [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.884 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6ed305-5278-4a81-b974-2e0a5dedf0f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.886 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c762e79e-6d48-4757-813a-179feaef6f25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.904 248514 INFO nova.compute.manager [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Took 1.98 seconds to deallocate network for instance.
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.906 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aa17e711-72c5-4ad9-a118-8b609c73dd59]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718812, 'reachable_time': 26080, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309717, 'error': None, 'target': 'ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d66bfc0e0\x2d7de5\x2d436f\x2d90fb\x2db5e591519781.mount: Deactivated successfully.
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.912 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:30:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.912 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d81962-929f-41bf-b73d-bcc476cb6515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.960 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:30:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3129252889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:15 compute-0 nova_compute[248510]: 2025-12-13 08:30:15.997 248514 DEBUG oslo_concurrency.processutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.006 248514 DEBUG nova.compute.provider_tree [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.037 248514 DEBUG nova.scheduler.client.report [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.062 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.066 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.079 248514 INFO nova.virt.libvirt.driver [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Deleting instance files /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7_del
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.080 248514 INFO nova.virt.libvirt.driver [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Deletion of /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7_del complete
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.094 248514 INFO nova.scheduler.client.report [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Deleted allocations for instance 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.166 248514 INFO nova.compute.manager [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Took 0.70 seconds to destroy the instance on the hypervisor.
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.167 248514 DEBUG oslo.service.loopingcall [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.167 248514 DEBUG nova.compute.manager [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.168 248514 DEBUG nova.network.neutron [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.176 248514 DEBUG oslo_concurrency.processutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.212 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:16 compute-0 ceph-mon[76537]: pgmap v1984: 321 pgs: 321 active+clean; 189 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.7 MiB/s wr, 292 op/s
Dec 13 08:30:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3129252889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:30:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337783534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.721 248514 DEBUG oslo_concurrency.processutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.728 248514 DEBUG nova.compute.provider_tree [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.744 248514 DEBUG nova.scheduler.client.report [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.772 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.802 248514 INFO nova.scheduler.client.report [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Deleted allocations for instance d594d7c8-13f8-4e02-80d2-490469301cca
Dec 13 08:30:16 compute-0 nova_compute[248510]: 2025-12-13 08:30:16.878 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.408 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.409 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.409 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.410 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.410 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] No waiting events found dispatching network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.411 248514 WARNING nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received unexpected event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e for instance with vm_state deleted and task_state None.
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.411 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-vif-deleted-13e6d879-e4c6-4a44-a982-10a62782047e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.411 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-vif-unplugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.411 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.412 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.412 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.412 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] No waiting events found dispatching network-vif-unplugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.412 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-vif-unplugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.413 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.413 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.413 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.413 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.413 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] No waiting events found dispatching network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:17 compute-0 nova_compute[248510]: 2025-12-13 08:30:17.414 248514 WARNING nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received unexpected event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 for instance with vm_state active and task_state deleting.
Dec 13 08:30:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 189 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.1 MiB/s wr, 266 op/s
Dec 13 08:30:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/337783534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:18 compute-0 nova_compute[248510]: 2025-12-13 08:30:18.587 248514 DEBUG nova.network.neutron [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:18 compute-0 nova_compute[248510]: 2025-12-13 08:30:18.615 248514 INFO nova.compute.manager [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Took 2.45 seconds to deallocate network for instance.
Dec 13 08:30:18 compute-0 ceph-mon[76537]: pgmap v1985: 321 pgs: 321 active+clean; 189 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.1 MiB/s wr, 266 op/s
Dec 13 08:30:18 compute-0 nova_compute[248510]: 2025-12-13 08:30:18.682 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:18 compute-0 nova_compute[248510]: 2025-12-13 08:30:18.683 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:18 compute-0 nova_compute[248510]: 2025-12-13 08:30:18.759 248514 DEBUG oslo_concurrency.processutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:30:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1970177452' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.349 248514 DEBUG oslo_concurrency.processutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.357 248514 DEBUG nova.compute.provider_tree [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.378 248514 DEBUG nova.scheduler.client.report [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.402 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.430 248514 INFO nova.scheduler.client.report [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Deleted allocations for instance 850bda47-d7a0-4d8d-a048-258b8388cab7
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.498 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 148 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.2 MiB/s wr, 332 op/s
Dec 13 08:30:19 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1970177452' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.653 248514 DEBUG oslo_concurrency.lockutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.654 248514 DEBUG oslo_concurrency.lockutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.654 248514 INFO nova.compute.manager [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Rebooting instance
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.671 248514 DEBUG oslo_concurrency.lockutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.672 248514 DEBUG oslo_concurrency.lockutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.672 248514 DEBUG nova.network.neutron [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:30:19 compute-0 nova_compute[248510]: 2025-12-13 08:30:19.697 248514 DEBUG nova.compute.manager [req-5c8be7cf-62d0-4ec2-b6a8-c1389e3b32cf req-892ecbb6-29d3-403c-ba81-d62cd4361cd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-vif-deleted-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:20 compute-0 nova_compute[248510]: 2025-12-13 08:30:20.202 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:30:20 compute-0 ceph-mon[76537]: pgmap v1986: 321 pgs: 321 active+clean; 148 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.2 MiB/s wr, 332 op/s
Dec 13 08:30:20 compute-0 nova_compute[248510]: 2025-12-13 08:30:20.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008767810410926015 of space, bias 1.0, pg target 0.26303431232778046 quantized to 32 (current 32)
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006672346150261075 of space, bias 1.0, pg target 0.20017038450783223 quantized to 32 (current 32)
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.932567058150141e-07 of space, bias 4.0, pg target 0.000831908046978017 quantized to 16 (current 32)
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:30:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:30:21 compute-0 nova_compute[248510]: 2025-12-13 08:30:21.135 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 121 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 445 KiB/s wr, 258 op/s
Dec 13 08:30:21 compute-0 nova_compute[248510]: 2025-12-13 08:30:21.891 248514 DEBUG nova.network.neutron [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:21 compute-0 nova_compute[248510]: 2025-12-13 08:30:21.914 248514 DEBUG oslo_concurrency.lockutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:21 compute-0 nova_compute[248510]: 2025-12-13 08:30:21.916 248514 DEBUG nova.compute.manager [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:21 compute-0 nova_compute[248510]: 2025-12-13 08:30:21.986 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:21 compute-0 nova_compute[248510]: 2025-12-13 08:30:21.986 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.014 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.036 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.037 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.066 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.103 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.104 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.112 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.112 248514 INFO nova.compute.claims [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.174 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:22 compute-0 kernel: tapb5058a06-71 (unregistering): left promiscuous mode
Dec 13 08:30:22 compute-0 NetworkManager[50376]: <info>  [1765614622.2842] device (tapb5058a06-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.277 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:22 compute-0 ovn_controller[148476]: 2025-12-13T08:30:22Z|00582|binding|INFO|Releasing lport b5058a06-7109-4ac0-96d8-7562e66bee25 from this chassis (sb_readonly=0)
Dec 13 08:30:22 compute-0 ovn_controller[148476]: 2025-12-13T08:30:22Z|00583|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 down in Southbound
Dec 13 08:30:22 compute-0 ovn_controller[148476]: 2025-12-13T08:30:22Z|00584|binding|INFO|Removing iface tapb5058a06-71 ovn-installed in OVS
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.299 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.300 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.302 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43ee8730-a174-4859-9c95-b3ad6dc68839, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.305 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[102d957d-86a4-4846-9c6d-9726500024dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.305 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace which is not needed anymore
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:22 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 13 08:30:22 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003c.scope: Consumed 13.792s CPU time.
Dec 13 08:30:22 compute-0 systemd-machined[210538]: Machine qemu-68-instance-0000003c terminated.
Dec 13 08:30:22 compute-0 podman[309768]: 2025-12-13 08:30:22.34971384 +0000 UTC m=+0.073243678 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:30:22 compute-0 podman[309767]: 2025-12-13 08:30:22.374281547 +0000 UTC m=+0.087885410 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:30:22 compute-0 podman[309766]: 2025-12-13 08:30:22.381849033 +0000 UTC m=+0.111938073 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:30:22 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [NOTICE]   (308519) : haproxy version is 2.8.14-c23fe91
Dec 13 08:30:22 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [NOTICE]   (308519) : path to executable is /usr/sbin/haproxy
Dec 13 08:30:22 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [WARNING]  (308519) : Exiting Master process...
Dec 13 08:30:22 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [ALERT]    (308519) : Current worker (308521) exited with code 143 (Terminated)
Dec 13 08:30:22 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [WARNING]  (308519) : All workers exited. Exiting... (0)
Dec 13 08:30:22 compute-0 systemd[1]: libpod-291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795.scope: Deactivated successfully.
Dec 13 08:30:22 compute-0 podman[309846]: 2025-12-13 08:30:22.454741872 +0000 UTC m=+0.048108218 container died 291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.482 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.483 248514 DEBUG nova.objects.instance [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795-userdata-shm.mount: Deactivated successfully.
Dec 13 08:30:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9368b1b4038d92fdf8ac3d0acda6e2c35cf6264d3fdc71f92b90f08d6224132-merged.mount: Deactivated successfully.
Dec 13 08:30:22 compute-0 podman[309846]: 2025-12-13 08:30:22.503898305 +0000 UTC m=+0.097264651 container cleanup 291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.504 248514 DEBUG nova.virt.libvirt.vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.506 248514 DEBUG nova.network.os_vif_util [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.508 248514 DEBUG nova.network.os_vif_util [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.508 248514 DEBUG os_vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.511 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.511 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5058a06-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.517 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:22 compute-0 systemd[1]: libpod-conmon-291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795.scope: Deactivated successfully.
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.518 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.521 248514 INFO os_vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.529 248514 DEBUG nova.virt.libvirt.driver [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start _get_guest_xml network_info=[{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.533 248514 WARNING nova.virt.libvirt.driver [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.538 248514 DEBUG nova.virt.libvirt.host [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.538 248514 DEBUG nova.virt.libvirt.host [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.542 248514 DEBUG nova.virt.libvirt.host [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.542 248514 DEBUG nova.virt.libvirt.host [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.543 248514 DEBUG nova.virt.libvirt.driver [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.543 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.544 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.544 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.544 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.544 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.545 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.545 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.545 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.545 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.546 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.546 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.546 248514 DEBUG nova.objects.instance [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.567 248514 DEBUG oslo_concurrency.processutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:22 compute-0 podman[309906]: 2025-12-13 08:30:22.592660315 +0000 UTC m=+0.061360055 container remove 291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.599 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[122ab475-762e-40da-863f-c7dd1ecff877]: (4, ('Sat Dec 13 08:30:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795)\n291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795\nSat Dec 13 08:30:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795)\n291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.601 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9093daff-49b3-4eec-9c90-d7b85f736d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.602 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:22 compute-0 kernel: tap43ee8730-a0: left promiscuous mode
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.609 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.624 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2919ede9-b3a6-4bda-93b6-7cf4ce00c8c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.644 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35a5d0d4-4344-4ea7-8c9c-c7e10bb6a945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.646 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[034013fc-e9f0-4e96-9480-256ed1a92ed8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:22 compute-0 ceph-mon[76537]: pgmap v1987: 321 pgs: 321 active+clean; 121 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 445 KiB/s wr, 258 op/s
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.667 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42f2e1e8-54f6-4101-bce9-ce054be0dd6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717283, 'reachable_time': 39195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309922, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d43ee8730\x2da174\x2d4859\x2d9c95\x2db3ad6dc68839.mount: Deactivated successfully.
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.672 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:30:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.672 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e54b8c81-3f7e-4777-aee3-b8a002c5eeff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:30:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109453365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.848 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.856 248514 DEBUG nova.compute.provider_tree [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.879 248514 DEBUG nova.scheduler.client.report [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.919 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.921 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.925 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.932 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:30:22 compute-0 nova_compute[248510]: 2025-12-13 08:30:22.932 248514 INFO nova.compute.claims [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.015 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.016 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.049 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.079 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.154 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854366638' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.209 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.211 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.211 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Creating image(s)
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.232 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.255 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.279 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.283 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.322 248514 DEBUG oslo_concurrency.processutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.756s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.327 248514 DEBUG nova.policy [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f703cc8fd3b4cdabb2b154345f70a7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c25aa866d502481eb9410b7d92a1347b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.369 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.377 248514 DEBUG oslo_concurrency.processutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.414 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.416 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.416 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.440 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.445 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 18653df3-1934-41dc-b6ab-d1dc122052f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 121 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 196 KiB/s wr, 202 op/s
Dec 13 08:30:23 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1109453365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:23 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/854366638' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:30:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3678635955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.791 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 18653df3-1934-41dc-b6ab-d1dc122052f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.816 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.845 248514 DEBUG nova.compute.provider_tree [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.850 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] resizing rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.876 248514 DEBUG nova.scheduler.client.report [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.920 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.921 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.928 248514 DEBUG nova.objects.instance [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'migration_context' on Instance uuid 18653df3-1934-41dc-b6ab-d1dc122052f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1104556540' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.961 248514 DEBUG oslo_concurrency.processutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.962 248514 DEBUG nova.virt.libvirt.vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.962 248514 DEBUG nova.network.os_vif_util [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.963 248514 DEBUG nova.network.os_vif_util [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.964 248514 DEBUG nova.objects.instance [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.966 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.966 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Ensure instance console log exists: /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.966 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.967 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.967 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.992 248514 DEBUG nova.virt.libvirt.driver [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:30:23 compute-0 nova_compute[248510]:   <uuid>3ced27d6-a2a8-4ce3-a7e7-494270418542</uuid>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   <name>instance-0000003c</name>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerActionsTestJSON-server-286397061</nova:name>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:30:22</nova:creationTime>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:30:23 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:30:23 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:30:23 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:30:23 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:30:23 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:30:23 compute-0 nova_compute[248510]:         <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 08:30:23 compute-0 nova_compute[248510]:         <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:30:23 compute-0 nova_compute[248510]:         <nova:port uuid="b5058a06-7109-4ac0-96d8-7562e66bee25">
Dec 13 08:30:23 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <system>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <entry name="serial">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <entry name="uuid">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     </system>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   <os>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   </os>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   <features>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   </features>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk">
Dec 13 08:30:23 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:23 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config">
Dec 13 08:30:23 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:23 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:1f:d1:eb"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <target dev="tapb5058a06-71"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/console.log" append="off"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <video>
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     </video>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <input type="keyboard" bus="usb"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:30:23 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:30:23 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:30:23 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:30:23 compute-0 nova_compute[248510]: </domain>
Dec 13 08:30:23 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.995 248514 DEBUG nova.virt.libvirt.driver [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.995 248514 DEBUG nova.virt.libvirt.driver [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.996 248514 DEBUG nova.virt.libvirt.vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.996 248514 DEBUG nova.network.os_vif_util [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.997 248514 DEBUG nova.network.os_vif_util [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.997 248514 DEBUG os_vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.999 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:23 compute-0 nova_compute[248510]: 2025-12-13 08:30:23.999 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.006 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.007 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.012 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.012 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5058a06-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.013 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5058a06-71, col_values=(('external_ids', {'iface-id': 'b5058a06-7109-4ac0-96d8-7562e66bee25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:d1:eb', 'vm-uuid': '3ced27d6-a2a8-4ce3-a7e7-494270418542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.038 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.044 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:24 compute-0 NetworkManager[50376]: <info>  [1765614624.0465] manager: (tapb5058a06-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.049 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.051 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.053 248514 INFO os_vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.062 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:30:24 compute-0 kernel: tapb5058a06-71: entered promiscuous mode
Dec 13 08:30:24 compute-0 NetworkManager[50376]: <info>  [1765614624.1438] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Dec 13 08:30:24 compute-0 systemd-udevd[309815]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:30:24 compute-0 ovn_controller[148476]: 2025-12-13T08:30:24Z|00585|binding|INFO|Claiming lport b5058a06-7109-4ac0-96d8-7562e66bee25 for this chassis.
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.145 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:24 compute-0 ovn_controller[148476]: 2025-12-13T08:30:24Z|00586|binding|INFO|b5058a06-7109-4ac0-96d8-7562e66bee25: Claiming fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.155 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:24 compute-0 NetworkManager[50376]: <info>  [1765614624.1589] device (tapb5058a06-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.159 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis
Dec 13 08:30:24 compute-0 NetworkManager[50376]: <info>  [1765614624.1595] device (tapb5058a06-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.160 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.166 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:30:24 compute-0 ovn_controller[148476]: 2025-12-13T08:30:24Z|00587|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 ovn-installed in OVS
Dec 13 08:30:24 compute-0 ovn_controller[148476]: 2025-12-13T08:30:24Z|00588|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 up in Southbound
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.168 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.168 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Creating image(s)
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.175 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78134152-3ad2-4c36-b544-4638b49fa2c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.176 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43ee8730-a1 in ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.179 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43ee8730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.179 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f378d6f-766d-4548-81e4-1b3af23775bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.181 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[656a7960-6803-4a40-8c01-55d17eef0d7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 systemd-machined[210538]: New machine qemu-72-instance-0000003c.
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.194 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1a80b501-8bb0-4207-b3e5-d900c5c2b372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.197 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:24 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-0000003c.
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.224 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b92f1ba-f7c2-40f6-a952-d0128f7d70b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.229 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.257 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.263 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6645fb35-5dae-4031-a7dc-3b80fff4483f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.265 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:24 compute-0 NetworkManager[50376]: <info>  [1765614624.2716] manager: (tap43ee8730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/261)
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.271 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53a4bc3f-50d5-4d50-8d01-64b6328676b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.310 248514 DEBUG nova.policy [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f703cc8fd3b4cdabb2b154345f70a7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c25aa866d502481eb9410b7d92a1347b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.313 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[347ae49c-c52e-4a10-bc20-9fb90d8658a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.318 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[35627988-5e83-4c9d-bad7-38263dc59a8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 NetworkManager[50376]: <info>  [1765614624.3498] device (tap43ee8730-a0): carrier: link connected
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.357 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2afede01-a0bb-4691-be4e-57d535a956fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.370 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.371 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.371 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.372 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.386 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e72f7956-1bf9-439e-bfe5-ccb6c8660f36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720156, 'reachable_time': 21033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310276, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.397 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.402 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.402 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[23db0687-378d-438e-a1a7-3dbe3d79418b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bbcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720156, 'tstamp': 720156}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310292, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.420 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1d64514b-cb68-4fc0-b0b2-7db1fff6ed95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720156, 'reachable_time': 21033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310300, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.466 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[290ccb7b-f30c-4df7-ba4b-c248e875e347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.546 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9527ef78-ca1c-429c-aef1-3da578515e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.548 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.549 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.549 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:24 compute-0 kernel: tap43ee8730-a0: entered promiscuous mode
Dec 13 08:30:24 compute-0 NetworkManager[50376]: <info>  [1765614624.5521] manager: (tap43ee8730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.554 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.555 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.556 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:24 compute-0 ovn_controller[148476]: 2025-12-13T08:30:24Z|00589|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.557 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.558 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.559 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ec035514-8b4d-4ce9-8d1f-15b3af04a4f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.560 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:30:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.560 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'env', 'PROCESS_TAG=haproxy-43ee8730-a174-4859-9c95-b3ad6dc68839', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43ee8730-a174-4859-9c95-b3ad6dc68839.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.604 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.604 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.615 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 3ced27d6-a2a8-4ce3-a7e7-494270418542 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.616 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614624.615239, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.616 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.622 248514 DEBUG nova.compute.manager [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.626 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance rebooted successfully.
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.627 248514 DEBUG nova.compute.manager [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.643 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.716 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.724 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:24 compute-0 ceph-mon[76537]: pgmap v1988: 321 pgs: 321 active+clean; 121 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 196 KiB/s wr, 202 op/s
Dec 13 08:30:24 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3678635955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:24 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1104556540' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.742 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.793 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.793 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614624.6167905, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.793 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Started (Lifecycle Event)
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.797 248514 DEBUG oslo_concurrency.lockutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.807 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] resizing rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.846 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.851 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.903 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Successfully created port: 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.907 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.908 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.912 248514 DEBUG nova.compute.manager [req-316e5e1b-2f8d-47e4-b17a-41d08e7b9116 req-cf3db46d-73fc-42b8-8770-39227395e010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.912 248514 DEBUG oslo_concurrency.lockutils [req-316e5e1b-2f8d-47e4-b17a-41d08e7b9116 req-cf3db46d-73fc-42b8-8770-39227395e010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.912 248514 DEBUG oslo_concurrency.lockutils [req-316e5e1b-2f8d-47e4-b17a-41d08e7b9116 req-cf3db46d-73fc-42b8-8770-39227395e010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.912 248514 DEBUG oslo_concurrency.lockutils [req-316e5e1b-2f8d-47e4-b17a-41d08e7b9116 req-cf3db46d-73fc-42b8-8770-39227395e010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.912 248514 DEBUG nova.compute.manager [req-316e5e1b-2f8d-47e4-b17a-41d08e7b9116 req-cf3db46d-73fc-42b8-8770-39227395e010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.913 248514 WARNING nova.compute.manager [req-316e5e1b-2f8d-47e4-b17a-41d08e7b9116 req-cf3db46d-73fc-42b8-8770-39227395e010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.919 248514 DEBUG nova.objects.instance [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'migration_context' on Instance uuid 69f6dd3a-7c99-4537-8173-ec79bc6336a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.923 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.924 248514 INFO nova.compute.claims [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.955 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.955 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Ensure instance console log exists: /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.956 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.956 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:24 compute-0 nova_compute[248510]: 2025-12-13 08:30:24.956 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:24 compute-0 podman[310461]: 2025-12-13 08:30:24.973257622 +0000 UTC m=+0.056981447 container create 6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 08:30:25 compute-0 ovn_controller[148476]: 2025-12-13T08:30:24Z|00590|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:30:25 compute-0 systemd[1]: Started libpod-conmon-6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8.scope.
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.012 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Successfully created port: 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:30:25 compute-0 podman[310461]: 2025-12-13 08:30:24.944152294 +0000 UTC m=+0.027876139 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:30:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c4e163921b64c8e9176bf88efb899af63dfc0f8f476d28a475a589e7ef09c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:25 compute-0 podman[310461]: 2025-12-13 08:30:25.067568819 +0000 UTC m=+0.151292644 container init 6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 08:30:25 compute-0 podman[310461]: 2025-12-13 08:30:25.073757152 +0000 UTC m=+0.157480977 container start 6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.107 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:25 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [NOTICE]   (310480) : New worker (310482) forked
Dec 13 08:30:25 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [NOTICE]   (310480) : Loading success.
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.172 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:30:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 181 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 205 op/s
Dec 13 08:30:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:30:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612064466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.768 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.777 248514 DEBUG nova.compute.provider_tree [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.803 248514 DEBUG nova.scheduler.client.report [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.834 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.835 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.904 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.905 248514 DEBUG nova.network.neutron [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.926 248514 INFO nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:30:25 compute-0 nova_compute[248510]: 2025-12-13 08:30:25.956 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.097 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.099 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.100 248514 INFO nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Creating image(s)
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.125 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.159 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.184 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.188 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.271 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.272 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.273 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.273 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.299 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.304 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.633 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.707 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] resizing rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:30:26 compute-0 ceph-mon[76537]: pgmap v1989: 321 pgs: 321 active+clean; 181 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 205 op/s
Dec 13 08:30:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/612064466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.792 248514 DEBUG nova.policy [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '83bbc7cfbcdc49ab885e530a79ae26f2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'def846f35c2747099dbe41221905d739', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.802 248514 DEBUG nova.objects.instance [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lazy-loading 'migration_context' on Instance uuid bc7aabfd-0b89-4d02-8aff-29f1bc423621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.824 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.825 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Ensure instance console log exists: /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.825 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.826 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:26 compute-0 nova_compute[248510]: 2025-12-13 08:30:26.826 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.416 248514 DEBUG nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.417 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.417 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.417 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.417 248514 DEBUG nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.418 248514 WARNING nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.418 248514 DEBUG nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.418 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.418 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.419 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.419 248514 DEBUG nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.419 248514 WARNING nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.419 248514 DEBUG nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.419 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.420 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.420 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.420 248514 DEBUG nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.420 248514 WARNING nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 08:30:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:27.426 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:27.427 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.463 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 181 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 976 KiB/s rd, 2.7 MiB/s wr, 99 op/s
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.536 248514 INFO nova.compute.manager [None req-7bae8189-25c1-4b14-b187-fa0c8ec7cfd5 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Get console output
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.543 248514 INFO oslo.privsep.daemon [None req-7bae8189-25c1-4b14-b187-fa0c8ec7cfd5 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpvghvmluz/privsep.sock']
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.586 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Successfully updated port: 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.613 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "refresh_cache-18653df3-1934-41dc-b6ab-d1dc122052f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.613 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquired lock "refresh_cache-18653df3-1934-41dc-b6ab-d1dc122052f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:27 compute-0 nova_compute[248510]: 2025-12-13 08:30:27.614 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.364 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614613.3637993, d594d7c8-13f8-4e02-80d2-490469301cca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.365 248514 INFO nova.compute.manager [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] VM Stopped (Lifecycle Event)
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.386 248514 DEBUG nova.compute.manager [None req-8f3d9467-9f47-4072-99a6-26a6d730183b - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.414 248514 INFO oslo.privsep.daemon [None req-7bae8189-25c1-4b14-b187-fa0c8ec7cfd5 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Spawned new privsep daemon via rootwrap
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.251 310683 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.255 310683 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.257 310683 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.257 310683 INFO oslo.privsep.daemon [-] privsep daemon running as pid 310683
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.498 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.511 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Successfully updated port: 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.510 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.538 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "refresh_cache-69f6dd3a-7c99-4537-8173-ec79bc6336a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.539 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquired lock "refresh_cache-69f6dd3a-7c99-4537-8173-ec79bc6336a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.539 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.544 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614613.5442035, 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.545 248514 INFO nova.compute.manager [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] VM Stopped (Lifecycle Event)
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.570 248514 DEBUG nova.compute.manager [None req-3fb44d25-f99b-4b4c-8594-d4db3156b6fa - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:28 compute-0 nova_compute[248510]: 2025-12-13 08:30:28.597 248514 DEBUG nova.network.neutron [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Successfully created port: 132de588-b258-4d1f-9d17-7b0ef7d73a3b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:30:28 compute-0 ceph-mon[76537]: pgmap v1990: 321 pgs: 321 active+clean; 181 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 976 KiB/s rd, 2.7 MiB/s wr, 99 op/s
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.047 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.092 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:30:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 243 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.2 MiB/s wr, 198 op/s
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.670 248514 DEBUG nova.compute.manager [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-changed-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.671 248514 DEBUG nova.compute.manager [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Refreshing instance network info cache due to event network-changed-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.672 248514 DEBUG oslo_concurrency.lockutils [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-18653df3-1934-41dc-b6ab-d1dc122052f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.694 248514 DEBUG nova.network.neutron [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Successfully updated port: 132de588-b258-4d1f-9d17-7b0ef7d73a3b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.715 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "refresh_cache-bc7aabfd-0b89-4d02-8aff-29f1bc423621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.716 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquired lock "refresh_cache-bc7aabfd-0b89-4d02-8aff-29f1bc423621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.716 248514 DEBUG nova.network.neutron [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.817 248514 DEBUG nova.compute.manager [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Received event network-changed-132de588-b258-4d1f-9d17-7b0ef7d73a3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.818 248514 DEBUG nova.compute.manager [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Refreshing instance network info cache due to event network-changed-132de588-b258-4d1f-9d17-7b0ef7d73a3b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.818 248514 DEBUG oslo_concurrency.lockutils [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bc7aabfd-0b89-4d02-8aff-29f1bc423621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:29 compute-0 nova_compute[248510]: 2025-12-13 08:30:29.995 248514 DEBUG nova.network.neutron [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.162 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Updating instance_info_cache with network_info: [{"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.198 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Releasing lock "refresh_cache-18653df3-1934-41dc-b6ab-d1dc122052f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.199 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Instance network_info: |[{"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.199 248514 DEBUG oslo_concurrency.lockutils [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-18653df3-1934-41dc-b6ab-d1dc122052f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.200 248514 DEBUG nova.network.neutron [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Refreshing network info cache for port 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.204 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Start _get_guest_xml network_info=[{"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.208 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.217 248514 WARNING nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.223 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.224 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.227 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.228 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.228 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.229 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.229 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.229 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.230 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.230 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.230 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.231 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.231 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.231 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.232 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.232 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.235 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:30 compute-0 ceph-mon[76537]: pgmap v1991: 321 pgs: 321 active+clean; 243 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.2 MiB/s wr, 198 op/s
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.331 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Updating instance_info_cache with network_info: [{"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.365 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Releasing lock "refresh_cache-69f6dd3a-7c99-4537-8173-ec79bc6336a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.366 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Instance network_info: |[{"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.370 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Start _get_guest_xml network_info=[{"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.377 248514 WARNING nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.384 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.384 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.389 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.389 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.389 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.390 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.390 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.390 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.391 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.391 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.391 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.392 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.392 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.392 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.392 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.393 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.396 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:30:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:30.429 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.710 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614615.7088149, 850bda47-d7a0-4d8d-a048-258b8388cab7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.712 248514 INFO nova.compute.manager [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] VM Stopped (Lifecycle Event)
Dec 13 08:30:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/532874596' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.862 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.887 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.892 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:30 compute-0 nova_compute[248510]: 2025-12-13 08:30:30.930 248514 DEBUG nova.compute.manager [None req-9bb4bcea-d332-4d1e-a752-1ef914a2f6dd - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1002318066' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.002 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.027 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.032 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/532874596' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1002318066' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 260 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 157 op/s
Dec 13 08:30:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4055225063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.572 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.576 248514 DEBUG nova.virt.libvirt.vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1821338648',display_name='tempest-MultipleCreateTestJSON-server-1821338648-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1821338648-1',id=64,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-lfmihffr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateT
estJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:23Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=18653df3-1934-41dc-b6ab-d1dc122052f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.577 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.579 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.581 248514 DEBUG nova.objects.instance [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'pci_devices' on Instance uuid 18653df3-1934-41dc-b6ab-d1dc122052f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3429205274' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.606 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.608 248514 DEBUG nova.virt.libvirt.vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1821338648',display_name='tempest-MultipleCreateTestJSON-server-1821338648-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1821338648-2',id=65,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-lfmihffr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateT
estJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:24Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=69f6dd3a-7c99-4537-8173-ec79bc6336a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.609 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.610 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.611 248514 DEBUG nova.objects.instance [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'pci_devices' on Instance uuid 69f6dd3a-7c99-4537-8173-ec79bc6336a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.617 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <uuid>18653df3-1934-41dc-b6ab-d1dc122052f0</uuid>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <name>instance-00000040</name>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:name>tempest-MultipleCreateTestJSON-server-1821338648-1</nova:name>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:30:30</nova:creationTime>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:user uuid="1f703cc8fd3b4cdabb2b154345f70a7c">tempest-MultipleCreateTestJSON-478861069-project-member</nova:user>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:project uuid="c25aa866d502481eb9410b7d92a1347b">tempest-MultipleCreateTestJSON-478861069</nova:project>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:port uuid="6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc">
Dec 13 08:30:31 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <system>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <entry name="serial">18653df3-1934-41dc-b6ab-d1dc122052f0</entry>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <entry name="uuid">18653df3-1934-41dc-b6ab-d1dc122052f0</entry>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </system>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <os>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </os>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <features>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </features>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/18653df3-1934-41dc-b6ab-d1dc122052f0_disk">
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/18653df3-1934-41dc-b6ab-d1dc122052f0_disk.config">
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:e4:a5:c5"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <target dev="tap6d0e1f86-2f"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/console.log" append="off"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <video>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </video>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:30:31 compute-0 nova_compute[248510]: </domain>
Dec 13 08:30:31 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.631 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Preparing to wait for external event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.631 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.632 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.632 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.633 248514 DEBUG nova.virt.libvirt.vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1821338648',display_name='tempest-MultipleCreateTestJSON-server-1821338648-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1821338648-1',id=64,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-lfmihffr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-Multi
pleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:23Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=18653df3-1934-41dc-b6ab-d1dc122052f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.634 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.635 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.637 248514 DEBUG os_vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.638 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.640 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.641 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.645 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <uuid>69f6dd3a-7c99-4537-8173-ec79bc6336a9</uuid>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <name>instance-00000041</name>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:name>tempest-MultipleCreateTestJSON-server-1821338648-2</nova:name>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:30:30</nova:creationTime>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:user uuid="1f703cc8fd3b4cdabb2b154345f70a7c">tempest-MultipleCreateTestJSON-478861069-project-member</nova:user>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:project uuid="c25aa866d502481eb9410b7d92a1347b">tempest-MultipleCreateTestJSON-478861069</nova:project>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <nova:port uuid="4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d">
Dec 13 08:30:31 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <system>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <entry name="serial">69f6dd3a-7c99-4537-8173-ec79bc6336a9</entry>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <entry name="uuid">69f6dd3a-7c99-4537-8173-ec79bc6336a9</entry>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </system>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <os>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </os>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <features>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </features>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk">
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk.config">
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:31 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:7f:99:c4"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <target dev="tap4892c0f3-fa"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/console.log" append="off"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <video>
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </video>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:30:31 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:30:31 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:30:31 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:30:31 compute-0 nova_compute[248510]: </domain>
Dec 13 08:30:31 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.654 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Preparing to wait for external event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.655 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.655 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.655 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.656 248514 DEBUG nova.virt.libvirt.vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1821338648',display_name='tempest-MultipleCreateTestJSON-server-1821338648-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1821338648-2',id=65,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-lfmihffr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:24Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=69f6dd3a-7c99-4537-8173-ec79bc6336a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.657 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.657 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.658 248514 DEBUG os_vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.660 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.661 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.662 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d0e1f86-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.662 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6d0e1f86-2f, col_values=(('external_ids', {'iface-id': '6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:a5:c5', 'vm-uuid': '18653df3-1934-41dc-b6ab-d1dc122052f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.664 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:31 compute-0 NetworkManager[50376]: <info>  [1765614631.6650] manager: (tap6d0e1f86-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.673 248514 DEBUG nova.network.neutron [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Updating instance_info_cache with network_info: [{"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.675 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.676 248514 INFO os_vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f')
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.677 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4892c0f3-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.678 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4892c0f3-fa, col_values=(('external_ids', {'iface-id': '4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:99:c4', 'vm-uuid': '69f6dd3a-7c99-4537-8173-ec79bc6336a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:31 compute-0 NetworkManager[50376]: <info>  [1765614631.6806] manager: (tap4892c0f3-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.688 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.689 248514 INFO os_vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa')
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.698 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Releasing lock "refresh_cache-bc7aabfd-0b89-4d02-8aff-29f1bc423621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.698 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Instance network_info: |[{"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.699 248514 DEBUG oslo_concurrency.lockutils [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bc7aabfd-0b89-4d02-8aff-29f1bc423621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.699 248514 DEBUG nova.network.neutron [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Refreshing network info cache for port 132de588-b258-4d1f-9d17-7b0ef7d73a3b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.701 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Start _get_guest_xml network_info=[{"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.729 248514 WARNING nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.734 248514 DEBUG nova.virt.libvirt.host [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.735 248514 DEBUG nova.virt.libvirt.host [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.751 248514 DEBUG nova.virt.libvirt.host [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.751 248514 DEBUG nova.virt.libvirt.host [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.752 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.752 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.752 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.752 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.753 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.753 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.753 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.753 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.754 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.754 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.754 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.754 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.758 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.810 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.811 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.811 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No VIF found with MAC fa:16:3e:7f:99:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.812 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Using config drive
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.836 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.849 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.850 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.850 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No VIF found with MAC fa:16:3e:e4:a5:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.851 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Using config drive
Dec 13 08:30:31 compute-0 nova_compute[248510]: 2025-12-13 08:30:31.873 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2671993912' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:32 compute-0 nova_compute[248510]: 2025-12-13 08:30:32.359 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:32 compute-0 ceph-mon[76537]: pgmap v1992: 321 pgs: 321 active+clean; 260 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 157 op/s
Dec 13 08:30:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4055225063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3429205274' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2671993912' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:32 compute-0 nova_compute[248510]: 2025-12-13 08:30:32.981 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:32 compute-0 nova_compute[248510]: 2025-12-13 08:30:32.986 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.028 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Creating config drive at /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/disk.config
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.033 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl7h55uxi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.081 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Creating config drive at /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/disk.config
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.086 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjw00atwr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.187 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl7h55uxi" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.212 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.216 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/disk.config 18653df3-1934-41dc-b6ab-d1dc122052f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.259 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjw00atwr" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.288 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.293 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/disk.config 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.377 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/disk.config 18653df3-1934-41dc-b6ab-d1dc122052f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.378 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Deleting local config drive /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/disk.config because it was imported into RBD.
Dec 13 08:30:33 compute-0 kernel: tap6d0e1f86-2f: entered promiscuous mode
Dec 13 08:30:33 compute-0 ovn_controller[148476]: 2025-12-13T08:30:33Z|00591|binding|INFO|Claiming lport 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc for this chassis.
Dec 13 08:30:33 compute-0 ovn_controller[148476]: 2025-12-13T08:30:33Z|00592|binding|INFO|6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc: Claiming fa:16:3e:e4:a5:c5 10.100.0.10
Dec 13 08:30:33 compute-0 NetworkManager[50376]: <info>  [1765614633.4479] manager: (tap6d0e1f86-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/265)
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.451 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:a5:c5 10.100.0.10'], port_security=['fa:16:3e:e4:a5:c5 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '18653df3-1934-41dc-b6ab-d1dc122052f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.452 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf bound to our chassis
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.455 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 08:30:33 compute-0 ovn_controller[148476]: 2025-12-13T08:30:33Z|00593|binding|INFO|Setting lport 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc ovn-installed in OVS
Dec 13 08:30:33 compute-0 ovn_controller[148476]: 2025-12-13T08:30:33Z|00594|binding|INFO|Setting lport 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc up in Southbound
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.471 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b67d0dc-f05e-4821-83bc-2bbf2fb32bff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.472 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ca54d31-d1 in ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 systemd-udevd[311006]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.474 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ca54d31-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.475 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da3ac371-136c-460c-bb68-45c986bc1eed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.479 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[48ab9c7a-12df-4f37-901d-7c95164782e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 systemd-machined[210538]: New machine qemu-73-instance-00000040.
Dec 13 08:30:33 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-00000040.
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.498 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[63aa8bfc-3c8d-4f62-8222-5256343b7790]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 NetworkManager[50376]: <info>  [1765614633.5031] device (tap6d0e1f86-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:30:33 compute-0 NetworkManager[50376]: <info>  [1765614633.5038] device (tap6d0e1f86-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:30:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 260 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 152 op/s
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.525 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/disk.config 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.526 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Deleting local config drive /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/disk.config because it was imported into RBD.
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.526 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e9306d0e-1939-4f5a-aca1-86ca9ca3be62]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1676282674' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.576 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.577 248514 DEBUG nova.virt.libvirt.vif [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-406757652',display_name='tempest-ServerMetadataNegativeTestJSON-server-406757652',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-406757652',id=66,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='def846f35c2747099dbe41221905d739',ramdisk_id='',reservation_id='r-aon0aotv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-402645639',owner_user_name='tempest-ServerMetadataNegativeTestJSON-402645639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:26Z,user_data=None,user_id='83bbc7cfbcdc49ab885e530a79ae26f2',uuid=bc7aabfd-0b89-4d02-8aff-29f1bc423621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.578 248514 DEBUG nova.network.os_vif_util [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Converting VIF {"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.579 248514 DEBUG nova.network.os_vif_util [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.579 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2987eab0-963f-4300-9c7d-859423f6bf9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.581 248514 DEBUG nova.objects.instance [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lazy-loading 'pci_devices' on Instance uuid bc7aabfd-0b89-4d02-8aff-29f1bc423621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:33 compute-0 NetworkManager[50376]: <info>  [1765614633.5873] manager: (tap2ca54d31-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/266)
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.586 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db12c5d8-6315-4029-8b66-49bce925a176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.604 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:30:33 compute-0 nova_compute[248510]:   <uuid>bc7aabfd-0b89-4d02-8aff-29f1bc423621</uuid>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   <name>instance-00000042</name>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-406757652</nova:name>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:30:31</nova:creationTime>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:30:33 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:30:33 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:30:33 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:30:33 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:30:33 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:30:33 compute-0 nova_compute[248510]:         <nova:user uuid="83bbc7cfbcdc49ab885e530a79ae26f2">tempest-ServerMetadataNegativeTestJSON-402645639-project-member</nova:user>
Dec 13 08:30:33 compute-0 nova_compute[248510]:         <nova:project uuid="def846f35c2747099dbe41221905d739">tempest-ServerMetadataNegativeTestJSON-402645639</nova:project>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:30:33 compute-0 nova_compute[248510]:         <nova:port uuid="132de588-b258-4d1f-9d17-7b0ef7d73a3b">
Dec 13 08:30:33 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <system>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <entry name="serial">bc7aabfd-0b89-4d02-8aff-29f1bc423621</entry>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <entry name="uuid">bc7aabfd-0b89-4d02-8aff-29f1bc423621</entry>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     </system>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   <os>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   </os>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   <features>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   </features>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk">
Dec 13 08:30:33 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:33 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk.config">
Dec 13 08:30:33 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:33 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:7d:5e:68"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <target dev="tap132de588-b2"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/console.log" append="off"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <video>
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     </video>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:30:33 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:30:33 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:30:33 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:30:33 compute-0 nova_compute[248510]: </domain>
Dec 13 08:30:33 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.605 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Preparing to wait for external event network-vif-plugged-132de588-b258-4d1f-9d17-7b0ef7d73a3b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.606 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.606 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.606 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.609 248514 DEBUG nova.virt.libvirt.vif [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-406757652',display_name='tempest-ServerMetadataNegativeTestJSON-server-406757652',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-406757652',id=66,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='def846f35c2747099dbe41221905d739',ramdisk_id='',reservation_id='r-aon0aotv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-402645639',owner
_user_name='tempest-ServerMetadataNegativeTestJSON-402645639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:26Z,user_data=None,user_id='83bbc7cfbcdc49ab885e530a79ae26f2',uuid=bc7aabfd-0b89-4d02-8aff-29f1bc423621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.609 248514 DEBUG nova.network.os_vif_util [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Converting VIF {"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.610 248514 DEBUG nova.network.os_vif_util [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.610 248514 DEBUG os_vif [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.611 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.612 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.614 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap132de588-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.615 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap132de588-b2, col_values=(('external_ids', {'iface-id': '132de588-b258-4d1f-9d17-7b0ef7d73a3b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:5e:68', 'vm-uuid': 'bc7aabfd-0b89-4d02-8aff-29f1bc423621'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:33 compute-0 NetworkManager[50376]: <info>  [1765614633.6173] manager: (tap132de588-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.624 248514 INFO os_vif [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2')
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.628 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[af841e84-70b9-4dad-9665-5fb79acbb969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.632 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5763f719-1fd2-403f-a77c-66d3edd8c70c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 NetworkManager[50376]: <info>  [1765614633.6369] manager: (tap4892c0f3-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/268)
Dec 13 08:30:33 compute-0 systemd-udevd[311029]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:30:33 compute-0 kernel: tap4892c0f3-fa: entered promiscuous mode
Dec 13 08:30:33 compute-0 ovn_controller[148476]: 2025-12-13T08:30:33Z|00595|binding|INFO|Claiming lport 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d for this chassis.
Dec 13 08:30:33 compute-0 ovn_controller[148476]: 2025-12-13T08:30:33Z|00596|binding|INFO|4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d: Claiming fa:16:3e:7f:99:c4 10.100.0.5
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 NetworkManager[50376]: <info>  [1765614633.6532] device (tap4892c0f3-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:30:33 compute-0 NetworkManager[50376]: <info>  [1765614633.6540] device (tap4892c0f3-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.655 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:99:c4 10.100.0.5'], port_security=['fa:16:3e:7f:99:c4 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '69f6dd3a-7c99-4537-8173-ec79bc6336a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:33 compute-0 NetworkManager[50376]: <info>  [1765614633.6688] device (tap2ca54d31-d0): carrier: link connected
Dec 13 08:30:33 compute-0 ovn_controller[148476]: 2025-12-13T08:30:33Z|00597|binding|INFO|Setting lport 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d ovn-installed in OVS
Dec 13 08:30:33 compute-0 ovn_controller[148476]: 2025-12-13T08:30:33Z|00598|binding|INFO|Setting lport 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d up in Southbound
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.670 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.676 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8ec2d6-6d11-4cbd-9e56-568ee5a9120d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 systemd-machined[210538]: New machine qemu-74-instance-00000041.
Dec 13 08:30:33 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-00000041.
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.699 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c9b796-dcca-476b-8858-f7a8f9a79254]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721088, 'reachable_time': 41340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311062, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.704 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.705 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.705 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] No VIF found with MAC fa:16:3e:7d:5e:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.706 248514 INFO nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Using config drive
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.721 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1de71551-67cc-44c4-8786-65c4017f8261]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:9738'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721088, 'tstamp': 721088}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311063, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.744 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[641a4896-2cf3-46cc-9c47-cb0cbe227c9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721088, 'reachable_time': 41340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311079, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.788 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c92e6c21-df8b-48dd-af87-18f4f058c39b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.872 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0ab7ff-d76b-4f9b-a0a4-f61950ec2beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.874 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.874 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.874 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ca54d31-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:33 compute-0 NetworkManager[50376]: <info>  [1765614633.8778] manager: (tap2ca54d31-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Dec 13 08:30:33 compute-0 kernel: tap2ca54d31-d0: entered promiscuous mode
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.879 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.884 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.885 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ca54d31-d0, col_values=(('external_ids', {'iface-id': '327a65c7-a67a-4fc2-b067-82e72753566c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:33 compute-0 ovn_controller[148476]: 2025-12-13T08:30:33Z|00599|binding|INFO|Releasing lport 327a65c7-a67a-4fc2-b067-82e72753566c from this chassis (sb_readonly=0)
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.911 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.912 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.914 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f8fecc0e-00b8-4e5b-a624-a2b342c1863e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.916 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.pid.haproxy
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:30:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.918 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'env', 'PROCESS_TAG=haproxy-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:30:33 compute-0 ceph-mon[76537]: pgmap v1993: 321 pgs: 321 active+clean; 260 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 152 op/s
Dec 13 08:30:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1676282674' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.993 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614633.9927099, 18653df3-1934-41dc-b6ab-d1dc122052f0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:33 compute-0 nova_compute[248510]: 2025-12-13 08:30:33.994 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] VM Started (Lifecycle Event)
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.018 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.023 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614633.9931607, 18653df3-1934-41dc-b6ab-d1dc122052f0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.023 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] VM Paused (Lifecycle Event)
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.043 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.048 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.073 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.091 248514 DEBUG nova.compute.manager [req-b2b24e06-4003-477d-943f-62f48cafe378 req-321f0368-68d4-4afd-9cf0-549f28bf0e80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.091 248514 DEBUG oslo_concurrency.lockutils [req-b2b24e06-4003-477d-943f-62f48cafe378 req-321f0368-68d4-4afd-9cf0-549f28bf0e80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.092 248514 DEBUG oslo_concurrency.lockutils [req-b2b24e06-4003-477d-943f-62f48cafe378 req-321f0368-68d4-4afd-9cf0-549f28bf0e80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.092 248514 DEBUG oslo_concurrency.lockutils [req-b2b24e06-4003-477d-943f-62f48cafe378 req-321f0368-68d4-4afd-9cf0-549f28bf0e80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.092 248514 DEBUG nova.compute.manager [req-b2b24e06-4003-477d-943f-62f48cafe378 req-321f0368-68d4-4afd-9cf0-549f28bf0e80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Processing event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.093 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.098 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614634.0960023, 18653df3-1934-41dc-b6ab-d1dc122052f0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.098 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] VM Resumed (Lifecycle Event)
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.099 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.104 248514 INFO nova.virt.libvirt.driver [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Instance spawned successfully.
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.105 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.125 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.132 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.137 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.137 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.138 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.138 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.139 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.139 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.173 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.182 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614634.1819553, 69f6dd3a-7c99-4537-8173-ec79bc6336a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.183 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] VM Started (Lifecycle Event)
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.221 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.225 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614634.1822293, 69f6dd3a-7c99-4537-8173-ec79bc6336a9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.226 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] VM Paused (Lifecycle Event)
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.230 248514 INFO nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Took 11.02 seconds to spawn the instance on the hypervisor.
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.231 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.242 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.246 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.300 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.328 248514 INFO nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Took 12.25 seconds to build instance.
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.352 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:34 compute-0 podman[311207]: 2025-12-13 08:30:34.38246361 +0000 UTC m=+0.072059069 container create 83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:30:34 compute-0 podman[311207]: 2025-12-13 08:30:34.344987555 +0000 UTC m=+0.034583014 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:30:34 compute-0 systemd[1]: Started libpod-conmon-83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82.scope.
Dec 13 08:30:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e59ed93a5e8226cd62fab7d9bf9fc22cba6a596a769cf7e462aabfa2f131c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:34 compute-0 podman[311207]: 2025-12-13 08:30:34.495721915 +0000 UTC m=+0.185317354 container init 83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 08:30:34 compute-0 podman[311207]: 2025-12-13 08:30:34.501969579 +0000 UTC m=+0.191564998 container start 83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.510 248514 DEBUG oslo_concurrency.lockutils [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.510 248514 DEBUG oslo_concurrency.lockutils [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.511 248514 DEBUG nova.compute.manager [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.514 248514 DEBUG nova.compute.manager [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.515 248514 DEBUG nova.objects.instance [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:34 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [NOTICE]   (311226) : New worker (311228) forked
Dec 13 08:30:34 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [NOTICE]   (311226) : Loading success.
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.540 248514 DEBUG nova.virt.libvirt.driver [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.564 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf unbound from our chassis
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.569 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.587 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d072ed02-141c-4276-9632-aef67cf96051]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.626 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[be92e3aa-c27e-41b3-b939-e8605c97df79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.631 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ed30da5d-d111-4273-829d-1ae4724b8dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.666 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[656eb866-1a3f-4336-94b9-c839c816b0b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3347b323-29e2-4eb0-ada3-a23c84c83451]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721088, 'reachable_time': 41340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311242, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.707 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0e155f-7ff1-465f-815f-424dc94adb17]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721103, 'tstamp': 721103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311243, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721107, 'tstamp': 721107}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311243, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.711 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.721 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ca54d31-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.722 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.724 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ca54d31-d0, col_values=(('external_ids', {'iface-id': '327a65c7-a67a-4fc2-b067-82e72753566c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.724 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.739 248514 DEBUG nova.network.neutron [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Updated VIF entry in instance network info cache for port 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.739 248514 DEBUG nova.network.neutron [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Updating instance_info_cache with network_info: [{"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.768 248514 DEBUG oslo_concurrency.lockutils [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-18653df3-1934-41dc-b6ab-d1dc122052f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.769 248514 DEBUG nova.compute.manager [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-changed-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.774 248514 DEBUG nova.compute.manager [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Refreshing instance network info cache due to event network-changed-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.774 248514 DEBUG oslo_concurrency.lockutils [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-69f6dd3a-7c99-4537-8173-ec79bc6336a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.774 248514 DEBUG oslo_concurrency.lockutils [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-69f6dd3a-7c99-4537-8173-ec79bc6336a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:34 compute-0 nova_compute[248510]: 2025-12-13 08:30:34.775 248514 DEBUG nova.network.neutron [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Refreshing network info cache for port 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.000 248514 INFO nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Creating config drive at /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/disk.config
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.005 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3xdqikyh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.156 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3xdqikyh" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.185 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.191 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/disk.config bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.340 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/disk.config bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.341 248514 INFO nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Deleting local config drive /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/disk.config because it was imported into RBD.
Dec 13 08:30:35 compute-0 kernel: tap132de588-b2: entered promiscuous mode
Dec 13 08:30:35 compute-0 NetworkManager[50376]: <info>  [1765614635.4142] manager: (tap132de588-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Dec 13 08:30:35 compute-0 ovn_controller[148476]: 2025-12-13T08:30:35Z|00600|binding|INFO|Claiming lport 132de588-b258-4d1f-9d17-7b0ef7d73a3b for this chassis.
Dec 13 08:30:35 compute-0 ovn_controller[148476]: 2025-12-13T08:30:35Z|00601|binding|INFO|132de588-b258-4d1f-9d17-7b0ef7d73a3b: Claiming fa:16:3e:7d:5e:68 10.100.0.5
Dec 13 08:30:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.420 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:35 compute-0 NetworkManager[50376]: <info>  [1765614635.4376] device (tap132de588-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:30:35 compute-0 NetworkManager[50376]: <info>  [1765614635.4383] device (tap132de588-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:35 compute-0 ovn_controller[148476]: 2025-12-13T08:30:35Z|00602|binding|INFO|Setting lport 132de588-b258-4d1f-9d17-7b0ef7d73a3b ovn-installed in OVS
Dec 13 08:30:35 compute-0 systemd-machined[210538]: New machine qemu-75-instance-00000042.
Dec 13 08:30:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 260 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.3 MiB/s wr, 168 op/s
Dec 13 08:30:35 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-00000042.
Dec 13 08:30:35 compute-0 ovn_controller[148476]: 2025-12-13T08:30:35Z|00603|binding|INFO|Setting lport 132de588-b258-4d1f-9d17-7b0ef7d73a3b up in Southbound
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.543 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:5e:68 10.100.0.5'], port_security=['fa:16:3e:7d:5e:68 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'bc7aabfd-0b89-4d02-8aff-29f1bc423621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'def846f35c2747099dbe41221905d739', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75dfe6d3-3b39-436a-9304-896fdf5a4d05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fbc462b-7eea-4b98-8702-5b666823a7a9, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=132de588-b258-4d1f-9d17-7b0ef7d73a3b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.545 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 132de588-b258-4d1f-9d17-7b0ef7d73a3b in datapath 96c67522-dadc-49c7-81e6-c5a1d8a2b085 bound to our chassis
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.547 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 96c67522-dadc-49c7-81e6-c5a1d8a2b085
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.564 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c547bf1d-0ac0-4a2b-a6a1-92f2646ac0db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.565 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap96c67522-d1 in ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.570 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap96c67522-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.570 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7588210a-9894-4236-a1c9-4f683529d1f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.571 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8efeeca6-57af-49ab-83ba-67f2f0280a9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.589 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bd666d-ec15-444c-8815-4246756b3b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e51b3bbe-f295-4d40-8c57-611d43145dfe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.654 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5f09cdcb-0bba-475e-836c-87dd789be7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 NetworkManager[50376]: <info>  [1765614635.6639] manager: (tap96c67522-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/271)
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.663 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e14f6219-ede8-4e6b-b5b5-9201f159d096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.707 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f34457-7d83-4388-b954-ffe2a9180bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.714 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[38ece6c3-3116-4497-97a7-5c9776d9d594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 NetworkManager[50376]: <info>  [1765614635.7407] device (tap96c67522-d0): carrier: link connected
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.751 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3f44fb-aea0-4002-8896-4a1356a2e895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.779 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e5660175-9f18-4771-86ac-c6f34421b336]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96c67522-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:77:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721295, 'reachable_time': 37573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311317, 'error': None, 'target': 'ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.802 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e76505e6-5259-463e-a072-d8fea8b8017e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:7701'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721295, 'tstamp': 721295}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311318, 'error': None, 'target': 'ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.832 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b6fabe07-01fc-45cd-8887-46dcf85e7dd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96c67522-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:77:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721295, 'reachable_time': 37573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311319, 'error': None, 'target': 'ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.880 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[84d7c76c-54b0-4083-ba7a-0b439a4ad5b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.925 248514 DEBUG nova.network.neutron [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Updated VIF entry in instance network info cache for port 132de588-b258-4d1f-9d17-7b0ef7d73a3b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.926 248514 DEBUG nova.network.neutron [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Updating instance_info_cache with network_info: [{"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.971 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4de90a-6e6f-4ac1-aa9e-af02b1b79657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.974 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96c67522-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.975 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.975 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96c67522-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.977 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:35 compute-0 NetworkManager[50376]: <info>  [1765614635.9778] manager: (tap96c67522-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Dec 13 08:30:35 compute-0 kernel: tap96c67522-d0: entered promiscuous mode
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.982 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.992 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap96c67522-d0, col_values=(('external_ids', {'iface-id': '3ab67b78-7a30-440a-987a-9b2a515dc605'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:35 compute-0 nova_compute[248510]: 2025-12-13 08:30:35.994 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:35 compute-0 ovn_controller[148476]: 2025-12-13T08:30:35Z|00604|binding|INFO|Releasing lport 3ab67b78-7a30-440a-987a-9b2a515dc605 from this chassis (sb_readonly=0)
Dec 13 08:30:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.998 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/96c67522-dadc-49c7-81e6-c5a1d8a2b085.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/96c67522-dadc-49c7-81e6-c5a1d8a2b085.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.999 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62d92560-3bab-402a-83d6-52d3355d7b68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:36.000 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-96c67522-dadc-49c7-81e6-c5a1d8a2b085
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/96c67522-dadc-49c7-81e6-c5a1d8a2b085.pid.haproxy
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 96c67522-dadc-49c7-81e6-c5a1d8a2b085
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:30:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:36.000 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'env', 'PROCESS_TAG=haproxy-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/96c67522-dadc-49c7-81e6-c5a1d8a2b085.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.017 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.111 248514 DEBUG oslo_concurrency.lockutils [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bc7aabfd-0b89-4d02-8aff-29f1bc423621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.155 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614636.1546292, bc7aabfd-0b89-4d02-8aff-29f1bc423621 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.155 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] VM Started (Lifecycle Event)
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.185 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.190 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614636.1548202, bc7aabfd-0b89-4d02-8aff-29f1bc423621 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.190 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] VM Paused (Lifecycle Event)
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.349 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.361 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.382 248514 DEBUG nova.compute.manager [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.384 248514 DEBUG oslo_concurrency.lockutils [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.385 248514 DEBUG oslo_concurrency.lockutils [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.385 248514 DEBUG oslo_concurrency.lockutils [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.385 248514 DEBUG nova.compute.manager [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] No waiting events found dispatching network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.385 248514 WARNING nova.compute.manager [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received unexpected event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc for instance with vm_state active and task_state None.
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.385 248514 DEBUG nova.compute.manager [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.385 248514 DEBUG oslo_concurrency.lockutils [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.386 248514 DEBUG oslo_concurrency.lockutils [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.386 248514 DEBUG oslo_concurrency.lockutils [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.386 248514 DEBUG nova.compute.manager [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Processing event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.387 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.393 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.397 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.397 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614636.3940964, 69f6dd3a-7c99-4537-8173-ec79bc6336a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.398 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] VM Resumed (Lifecycle Event)
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.400 248514 INFO nova.virt.libvirt.driver [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Instance spawned successfully.
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.400 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:30:36 compute-0 podman[311392]: 2025-12-13 08:30:36.420485716 +0000 UTC m=+0.070701236 container create 39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.427 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.436 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.436 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.437 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.437 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.440 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.440 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.446 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:36 compute-0 systemd[1]: Started libpod-conmon-39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6.scope.
Dec 13 08:30:36 compute-0 podman[311392]: 2025-12-13 08:30:36.387109272 +0000 UTC m=+0.037324812 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:30:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594738986a7fafb8e2e8c20fa7ea6b435a01ddf06491d50b530c9beeaa2d34f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:36 compute-0 podman[311392]: 2025-12-13 08:30:36.527991748 +0000 UTC m=+0.178207278 container init 39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.533 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:36 compute-0 podman[311392]: 2025-12-13 08:30:36.5402173 +0000 UTC m=+0.190432810 container start 39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.542 248514 DEBUG nova.compute.manager [req-6991f432-be04-458f-ad65-d0b9be7a2c3b req-643762ef-ddc2-4430-b5b7-fb37f24862c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Received event network-vif-plugged-132de588-b258-4d1f-9d17-7b0ef7d73a3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.542 248514 DEBUG oslo_concurrency.lockutils [req-6991f432-be04-458f-ad65-d0b9be7a2c3b req-643762ef-ddc2-4430-b5b7-fb37f24862c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.542 248514 DEBUG oslo_concurrency.lockutils [req-6991f432-be04-458f-ad65-d0b9be7a2c3b req-643762ef-ddc2-4430-b5b7-fb37f24862c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.542 248514 DEBUG oslo_concurrency.lockutils [req-6991f432-be04-458f-ad65-d0b9be7a2c3b req-643762ef-ddc2-4430-b5b7-fb37f24862c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.543 248514 DEBUG nova.compute.manager [req-6991f432-be04-458f-ad65-d0b9be7a2c3b req-643762ef-ddc2-4430-b5b7-fb37f24862c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Processing event network-vif-plugged-132de588-b258-4d1f-9d17-7b0ef7d73a3b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.543 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.549 248514 INFO nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Took 12.38 seconds to spawn the instance on the hypervisor.
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.549 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.551 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614636.5506535, bc7aabfd-0b89-4d02-8aff-29f1bc423621 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.551 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] VM Resumed (Lifecycle Event)
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.553 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.560 248514 INFO nova.virt.libvirt.driver [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Instance spawned successfully.
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.561 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:30:36 compute-0 neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085[311407]: [NOTICE]   (311411) : New worker (311413) forked
Dec 13 08:30:36 compute-0 neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085[311407]: [NOTICE]   (311411) : Loading success.
Dec 13 08:30:36 compute-0 ceph-mon[76537]: pgmap v1994: 321 pgs: 321 active+clean; 260 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.3 MiB/s wr, 168 op/s
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.589 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.593 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.603 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.604 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.604 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.604 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.605 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.605 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.646 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.648 248514 INFO nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Took 14.50 seconds to build instance.
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.681 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.685 248514 INFO nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Took 10.59 seconds to spawn the instance on the hypervisor.
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.686 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.764 248514 INFO nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Took 11.96 seconds to build instance.
Dec 13 08:30:36 compute-0 nova_compute[248510]: 2025-12-13 08:30:36.787 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 260 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 140 op/s
Dec 13 08:30:37 compute-0 ovn_controller[148476]: 2025-12-13T08:30:37Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 08:30:37 compute-0 sudo[311422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:30:37 compute-0 sudo[311422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:30:37 compute-0 sudo[311422]: pam_unix(sudo:session): session closed for user root
Dec 13 08:30:37 compute-0 sudo[311447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:30:37 compute-0 sudo[311447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:30:37 compute-0 nova_compute[248510]: 2025-12-13 08:30:37.949 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.122 248514 DEBUG nova.network.neutron [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Updated VIF entry in instance network info cache for port 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.123 248514 DEBUG nova.network.neutron [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Updating instance_info_cache with network_info: [{"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.169 248514 DEBUG oslo_concurrency.lockutils [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-69f6dd3a-7c99-4537-8173-ec79bc6336a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.539 248514 DEBUG nova.compute.manager [req-e789f4e7-584e-4edb-8865-7814ccc3c80c req-0fb54eb0-e0e6-411a-9720-5d8e82894b1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.539 248514 DEBUG oslo_concurrency.lockutils [req-e789f4e7-584e-4edb-8865-7814ccc3c80c req-0fb54eb0-e0e6-411a-9720-5d8e82894b1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.539 248514 DEBUG oslo_concurrency.lockutils [req-e789f4e7-584e-4edb-8865-7814ccc3c80c req-0fb54eb0-e0e6-411a-9720-5d8e82894b1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.540 248514 DEBUG oslo_concurrency.lockutils [req-e789f4e7-584e-4edb-8865-7814ccc3c80c req-0fb54eb0-e0e6-411a-9720-5d8e82894b1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.540 248514 DEBUG nova.compute.manager [req-e789f4e7-584e-4edb-8865-7814ccc3c80c req-0fb54eb0-e0e6-411a-9720-5d8e82894b1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] No waiting events found dispatching network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.540 248514 WARNING nova.compute.manager [req-e789f4e7-584e-4edb-8865-7814ccc3c80c req-0fb54eb0-e0e6-411a-9720-5d8e82894b1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received unexpected event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d for instance with vm_state active and task_state None.
Dec 13 08:30:38 compute-0 ceph-mon[76537]: pgmap v1995: 321 pgs: 321 active+clean; 260 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 140 op/s
Dec 13 08:30:38 compute-0 sudo[311447]: pam_unix(sudo:session): session closed for user root
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.649 248514 DEBUG nova.compute.manager [req-1f885c8d-b519-43f1-8674-269a3f416f63 req-630a47c6-fb19-4c6e-a950-250ede468141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Received event network-vif-plugged-132de588-b258-4d1f-9d17-7b0ef7d73a3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.650 248514 DEBUG oslo_concurrency.lockutils [req-1f885c8d-b519-43f1-8674-269a3f416f63 req-630a47c6-fb19-4c6e-a950-250ede468141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.651 248514 DEBUG oslo_concurrency.lockutils [req-1f885c8d-b519-43f1-8674-269a3f416f63 req-630a47c6-fb19-4c6e-a950-250ede468141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.651 248514 DEBUG oslo_concurrency.lockutils [req-1f885c8d-b519-43f1-8674-269a3f416f63 req-630a47c6-fb19-4c6e-a950-250ede468141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.651 248514 DEBUG nova.compute.manager [req-1f885c8d-b519-43f1-8674-269a3f416f63 req-630a47c6-fb19-4c6e-a950-250ede468141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] No waiting events found dispatching network-vif-plugged-132de588-b258-4d1f-9d17-7b0ef7d73a3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.652 248514 WARNING nova.compute.manager [req-1f885c8d-b519-43f1-8674-269a3f416f63 req-630a47c6-fb19-4c6e-a950-250ede468141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Received unexpected event network-vif-plugged-132de588-b258-4d1f-9d17-7b0ef7d73a3b for instance with vm_state active and task_state None.
Dec 13 08:30:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:30:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:30:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:30:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:30:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:30:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:30:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:30:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:30:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:30:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:30:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:30:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:30:38 compute-0 sudo[311501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:30:38 compute-0 sudo[311501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:30:38 compute-0 sudo[311501]: pam_unix(sudo:session): session closed for user root
Dec 13 08:30:38 compute-0 sudo[311526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:30:38 compute-0 sudo[311526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:30:38 compute-0 nova_compute[248510]: 2025-12-13 08:30:38.808 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:30:39 compute-0 podman[311563]: 2025-12-13 08:30:39.11544892 +0000 UTC m=+0.055062080 container create 80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 08:30:39 compute-0 systemd[1]: Started libpod-conmon-80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04.scope.
Dec 13 08:30:39 compute-0 podman[311563]: 2025-12-13 08:30:39.088269099 +0000 UTC m=+0.027882279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:30:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:39 compute-0 podman[311563]: 2025-12-13 08:30:39.205066061 +0000 UTC m=+0.144679241 container init 80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_agnesi, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 08:30:39 compute-0 podman[311563]: 2025-12-13 08:30:39.21432557 +0000 UTC m=+0.153938730 container start 80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_agnesi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 08:30:39 compute-0 podman[311563]: 2025-12-13 08:30:39.218057952 +0000 UTC m=+0.157671112 container attach 80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 08:30:39 compute-0 systemd[1]: libpod-80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04.scope: Deactivated successfully.
Dec 13 08:30:39 compute-0 festive_agnesi[311579]: 167 167
Dec 13 08:30:39 compute-0 conmon[311579]: conmon 80845921bdb9b9200b81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04.scope/container/memory.events
Dec 13 08:30:39 compute-0 podman[311584]: 2025-12-13 08:30:39.275875808 +0000 UTC m=+0.032577545 container died 80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_agnesi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:30:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a55c1fa4d61a94eb03d6f45a1bcedabde23ad610f1df893689e8499b68a804ed-merged.mount: Deactivated successfully.
Dec 13 08:30:39 compute-0 podman[311584]: 2025-12-13 08:30:39.333612383 +0000 UTC m=+0.090314100 container remove 80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_agnesi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 08:30:39 compute-0 systemd[1]: libpod-conmon-80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04.scope: Deactivated successfully.
Dec 13 08:30:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 260 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 301 op/s
Dec 13 08:30:39 compute-0 podman[311607]: 2025-12-13 08:30:39.58033488 +0000 UTC m=+0.061985030 container create f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 08:30:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:30:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:30:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:30:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:30:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:30:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:30:39 compute-0 systemd[1]: Started libpod-conmon-f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03.scope.
Dec 13 08:30:39 compute-0 podman[311607]: 2025-12-13 08:30:39.548663779 +0000 UTC m=+0.030313959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:30:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fab3a38a7904f890a1700099a6f1f1159c7dab734fc88ebb54059a103c465e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fab3a38a7904f890a1700099a6f1f1159c7dab734fc88ebb54059a103c465e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fab3a38a7904f890a1700099a6f1f1159c7dab734fc88ebb54059a103c465e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fab3a38a7904f890a1700099a6f1f1159c7dab734fc88ebb54059a103c465e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fab3a38a7904f890a1700099a6f1f1159c7dab734fc88ebb54059a103c465e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:39 compute-0 podman[311607]: 2025-12-13 08:30:39.686303745 +0000 UTC m=+0.167953925 container init f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_austin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 08:30:39 compute-0 podman[311607]: 2025-12-13 08:30:39.698204989 +0000 UTC m=+0.179855139 container start f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_austin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:30:39 compute-0 podman[311607]: 2025-12-13 08:30:39.70230815 +0000 UTC m=+0.183958300 container attach f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_austin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:30:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:30:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:30:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:30:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:30:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:30:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:30:40 compute-0 musing_austin[311623]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:30:40 compute-0 musing_austin[311623]: --> All data devices are unavailable
Dec 13 08:30:40 compute-0 systemd[1]: libpod-f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03.scope: Deactivated successfully.
Dec 13 08:30:40 compute-0 conmon[311623]: conmon f64889132a7feba27ed9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03.scope/container/memory.events
Dec 13 08:30:40 compute-0 podman[311607]: 2025-12-13 08:30:40.212132259 +0000 UTC m=+0.693782409 container died f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-09fab3a38a7904f890a1700099a6f1f1159c7dab734fc88ebb54059a103c465e-merged.mount: Deactivated successfully.
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.257 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.258 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.259 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.259 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.259 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:40 compute-0 podman[311607]: 2025-12-13 08:30:40.261311723 +0000 UTC m=+0.742961873 container remove f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_austin, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.261 248514 INFO nova.compute.manager [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Terminating instance
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.269 248514 DEBUG nova.compute.manager [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:30:40 compute-0 systemd[1]: libpod-conmon-f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03.scope: Deactivated successfully.
Dec 13 08:30:40 compute-0 sudo[311526]: pam_unix(sudo:session): session closed for user root
Dec 13 08:30:40 compute-0 sudo[311655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:30:40 compute-0 sudo[311655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:30:40 compute-0 sudo[311655]: pam_unix(sudo:session): session closed for user root
Dec 13 08:30:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:30:40 compute-0 sudo[311680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:30:40 compute-0 sudo[311680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.461 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.462 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.462 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.463 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.463 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.464 248514 INFO nova.compute.manager [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Terminating instance
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.466 248514 DEBUG nova.compute.manager [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:30:40 compute-0 kernel: tap6d0e1f86-2f (unregistering): left promiscuous mode
Dec 13 08:30:40 compute-0 NetworkManager[50376]: <info>  [1765614640.6380] device (tap6d0e1f86-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:30:40 compute-0 ovn_controller[148476]: 2025-12-13T08:30:40Z|00605|binding|INFO|Releasing lport 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc from this chassis (sb_readonly=0)
Dec 13 08:30:40 compute-0 ovn_controller[148476]: 2025-12-13T08:30:40Z|00606|binding|INFO|Setting lport 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc down in Southbound
Dec 13 08:30:40 compute-0 ovn_controller[148476]: 2025-12-13T08:30:40Z|00607|binding|INFO|Removing iface tap6d0e1f86-2f ovn-installed in OVS
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.659 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:40 compute-0 kernel: tap4892c0f3-fa (unregistering): left promiscuous mode
Dec 13 08:30:40 compute-0 NetworkManager[50376]: <info>  [1765614640.6686] device (tap4892c0f3-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:30:40 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000040.scope: Deactivated successfully.
Dec 13 08:30:40 compute-0 ovn_controller[148476]: 2025-12-13T08:30:40Z|00608|binding|INFO|Releasing lport 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d from this chassis (sb_readonly=1)
Dec 13 08:30:40 compute-0 ovn_controller[148476]: 2025-12-13T08:30:40Z|00609|if_status|INFO|Dropped 4 log messages in last 124 seconds (most recently, 124 seconds ago) due to excessive rate
Dec 13 08:30:40 compute-0 ovn_controller[148476]: 2025-12-13T08:30:40Z|00610|if_status|INFO|Not setting lport 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d down as sb is readonly
Dec 13 08:30:40 compute-0 ovn_controller[148476]: 2025-12-13T08:30:40Z|00611|binding|INFO|Removing iface tap4892c0f3-fa ovn-installed in OVS
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:40 compute-0 ovn_controller[148476]: 2025-12-13T08:30:40Z|00612|binding|INFO|Setting lport 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d down in Southbound
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.702 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:a5:c5 10.100.0.10'], port_security=['fa:16:3e:e4:a5:c5 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '18653df3-1934-41dc-b6ab-d1dc122052f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:40 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000040.scope: Consumed 6.612s CPU time.
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.704 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf unbound from our chassis
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.709 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 08:30:40 compute-0 systemd-machined[210538]: Machine qemu-73-instance-00000040 terminated.
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:40 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000041.scope: Deactivated successfully.
Dec 13 08:30:40 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000041.scope: Consumed 4.525s CPU time.
Dec 13 08:30:40 compute-0 systemd-machined[210538]: Machine qemu-74-instance-00000041 terminated.
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.732 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:99:c4 10.100.0.5'], port_security=['fa:16:3e:7f:99:c4 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '69f6dd3a-7c99-4537-8173-ec79bc6336a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.745 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e0ac7d-07e7-4ff5-bd2d-347f583cbea9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.754 248514 INFO nova.virt.libvirt.driver [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Instance destroyed successfully.
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.754 248514 DEBUG nova.objects.instance [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'resources' on Instance uuid 18653df3-1934-41dc-b6ab-d1dc122052f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.783 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[00c7b527-d4e3-4851-8ea3-e6c01f80136c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.787 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[81f5d8aa-4e31-4fa7-99bb-a2154b460e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.788 248514 DEBUG nova.virt.libvirt.vif [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1821338648',display_name='tempest-MultipleCreateTestJSON-server-1821338648-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1821338648-1',id=64,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:30:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-lfmihffr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:34Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=18653df3-1934-41dc-b6ab-d1dc122052f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.788 248514 DEBUG nova.network.os_vif_util [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.790 248514 DEBUG nova.network.os_vif_util [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.790 248514 DEBUG os_vif [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.793 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d0e1f86-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.795 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.797 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.805 248514 INFO os_vif [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f')
Dec 13 08:30:40 compute-0 ceph-mon[76537]: pgmap v1996: 321 pgs: 321 active+clean; 260 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 301 op/s
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.821 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a38dc4c2-0ee2-49c0-8aa8-6ff99531ae19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.845 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[955fb94c-0795-4ed5-82eb-3c4a2b2e969e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721088, 'reachable_time': 41340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311765, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:40 compute-0 podman[311732]: 2025-12-13 08:30:40.855628257 +0000 UTC m=+0.105783241 container create 01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.869 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8015f952-e719-47c4-98d8-aa242120aeb1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721103, 'tstamp': 721103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311774, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721107, 'tstamp': 721107}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311774, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.878 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.879 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ca54d31-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.879 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.880 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ca54d31-d0, col_values=(('external_ids', {'iface-id': '327a65c7-a67a-4fc2-b067-82e72753566c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.880 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.882 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf unbound from our chassis
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.884 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:30:40 compute-0 podman[311732]: 2025-12-13 08:30:40.793846802 +0000 UTC m=+0.044001786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.885 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2e10d969-a950-450f-84f2-7fe0805993d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.886 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf namespace which is not needed anymore
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.914 248514 INFO nova.virt.libvirt.driver [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Instance destroyed successfully.
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.915 248514 DEBUG nova.objects.instance [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'resources' on Instance uuid 69f6dd3a-7c99-4537-8173-ec79bc6336a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.931 248514 DEBUG nova.virt.libvirt.vif [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1821338648',display_name='tempest-MultipleCreateTestJSON-server-1821338648-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1821338648-2',id=65,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-12-13T08:30:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-lfmihffr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:36Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=69f6dd3a-7c99-4537-8173-ec79bc6336a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.932 248514 DEBUG nova.network.os_vif_util [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.933 248514 DEBUG nova.network.os_vif_util [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.933 248514 DEBUG os_vif [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.934 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.935 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4892c0f3-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.967 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.969 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:40 compute-0 nova_compute[248510]: 2025-12-13 08:30:40.974 248514 INFO os_vif [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa')
Dec 13 08:30:41 compute-0 systemd[1]: Started libpod-conmon-01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59.scope.
Dec 13 08:30:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:41 compute-0 nova_compute[248510]: 2025-12-13 08:30:41.088 248514 DEBUG nova.compute.manager [req-afa78fb2-7272-4632-a5aa-fb4f7cb5ed60 req-79c280bb-51e4-4e33-a4bd-808df22fb619 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-vif-unplugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:41 compute-0 nova_compute[248510]: 2025-12-13 08:30:41.090 248514 DEBUG oslo_concurrency.lockutils [req-afa78fb2-7272-4632-a5aa-fb4f7cb5ed60 req-79c280bb-51e4-4e33-a4bd-808df22fb619 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:41 compute-0 nova_compute[248510]: 2025-12-13 08:30:41.090 248514 DEBUG oslo_concurrency.lockutils [req-afa78fb2-7272-4632-a5aa-fb4f7cb5ed60 req-79c280bb-51e4-4e33-a4bd-808df22fb619 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:41 compute-0 nova_compute[248510]: 2025-12-13 08:30:41.090 248514 DEBUG oslo_concurrency.lockutils [req-afa78fb2-7272-4632-a5aa-fb4f7cb5ed60 req-79c280bb-51e4-4e33-a4bd-808df22fb619 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:41 compute-0 nova_compute[248510]: 2025-12-13 08:30:41.090 248514 DEBUG nova.compute.manager [req-afa78fb2-7272-4632-a5aa-fb4f7cb5ed60 req-79c280bb-51e4-4e33-a4bd-808df22fb619 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] No waiting events found dispatching network-vif-unplugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:41 compute-0 nova_compute[248510]: 2025-12-13 08:30:41.090 248514 DEBUG nova.compute.manager [req-afa78fb2-7272-4632-a5aa-fb4f7cb5ed60 req-79c280bb-51e4-4e33-a4bd-808df22fb619 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-vif-unplugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:30:41 compute-0 podman[311732]: 2025-12-13 08:30:41.263262544 +0000 UTC m=+0.513417558 container init 01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:30:41 compute-0 podman[311732]: 2025-12-13 08:30:41.274185633 +0000 UTC m=+0.524340617 container start 01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:30:41 compute-0 podman[311732]: 2025-12-13 08:30:41.279677459 +0000 UTC m=+0.529832463 container attach 01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:30:41 compute-0 serene_ganguly[311823]: 167 167
Dec 13 08:30:41 compute-0 systemd[1]: libpod-01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59.scope: Deactivated successfully.
Dec 13 08:30:41 compute-0 podman[311732]: 2025-12-13 08:30:41.283745729 +0000 UTC m=+0.533900713 container died 01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 08:30:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 260 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.1 MiB/s wr, 290 op/s
Dec 13 08:30:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e577f61c2612131839de60dfc17e43bc739eef07e64d924df227be08868e254-merged.mount: Deactivated successfully.
Dec 13 08:30:41 compute-0 podman[311732]: 2025-12-13 08:30:41.823595699 +0000 UTC m=+1.073750693 container remove 01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:30:41 compute-0 nova_compute[248510]: 2025-12-13 08:30:41.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:41 compute-0 systemd[1]: libpod-conmon-01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59.scope: Deactivated successfully.
Dec 13 08:30:42 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [NOTICE]   (311226) : haproxy version is 2.8.14-c23fe91
Dec 13 08:30:42 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [NOTICE]   (311226) : path to executable is /usr/sbin/haproxy
Dec 13 08:30:42 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [WARNING]  (311226) : Exiting Master process...
Dec 13 08:30:42 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [ALERT]    (311226) : Current worker (311228) exited with code 143 (Terminated)
Dec 13 08:30:42 compute-0 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [WARNING]  (311226) : All workers exited. Exiting... (0)
Dec 13 08:30:42 compute-0 systemd[1]: libpod-83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82.scope: Deactivated successfully.
Dec 13 08:30:42 compute-0 podman[311826]: 2025-12-13 08:30:42.079405361 +0000 UTC m=+1.041571601 container died 83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 08:30:42 compute-0 podman[311868]: 2025-12-13 08:30:42.082583539 +0000 UTC m=+0.087233213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:30:42 compute-0 podman[311868]: 2025-12-13 08:30:42.393279305 +0000 UTC m=+0.397928959 container create a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shirley, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 08:30:42 compute-0 systemd[1]: Started libpod-conmon-a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a.scope.
Dec 13 08:30:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82-userdata-shm.mount: Deactivated successfully.
Dec 13 08:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a84e59ed93a5e8226cd62fab7d9bf9fc22cba6a596a769cf7e462aabfa2f131c-merged.mount: Deactivated successfully.
Dec 13 08:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/710f0e6b8cf035f0d507005dd07e5b07a03746e8d692015118a115a014a12faa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/710f0e6b8cf035f0d507005dd07e5b07a03746e8d692015118a115a014a12faa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/710f0e6b8cf035f0d507005dd07e5b07a03746e8d692015118a115a014a12faa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:42 compute-0 podman[311826]: 2025-12-13 08:30:42.696711932 +0000 UTC m=+1.658878172 container cleanup 83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/710f0e6b8cf035f0d507005dd07e5b07a03746e8d692015118a115a014a12faa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:42 compute-0 systemd[1]: libpod-conmon-83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82.scope: Deactivated successfully.
Dec 13 08:30:42 compute-0 podman[311868]: 2025-12-13 08:30:42.714384628 +0000 UTC m=+0.719034332 container init a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:30:42 compute-0 podman[311868]: 2025-12-13 08:30:42.725647136 +0000 UTC m=+0.730296790 container start a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shirley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:30:42 compute-0 nova_compute[248510]: 2025-12-13 08:30:42.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:30:42 compute-0 podman[311868]: 2025-12-13 08:30:42.792958757 +0000 UTC m=+0.797608411 container attach a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:30:42 compute-0 ceph-mon[76537]: pgmap v1997: 321 pgs: 321 active+clean; 260 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.1 MiB/s wr, 290 op/s
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]: {
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:     "0": [
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:         {
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "devices": [
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "/dev/loop3"
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             ],
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_name": "ceph_lv0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_size": "21470642176",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "name": "ceph_lv0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "tags": {
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.cluster_name": "ceph",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.crush_device_class": "",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.encrypted": "0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.objectstore": "bluestore",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.osd_id": "0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.type": "block",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.vdo": "0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.with_tpm": "0"
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             },
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "type": "block",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "vg_name": "ceph_vg0"
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:         }
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:     ],
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:     "1": [
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:         {
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "devices": [
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "/dev/loop4"
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             ],
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_name": "ceph_lv1",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_size": "21470642176",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "name": "ceph_lv1",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "tags": {
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.cluster_name": "ceph",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.crush_device_class": "",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.encrypted": "0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.objectstore": "bluestore",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.osd_id": "1",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.type": "block",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.vdo": "0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.with_tpm": "0"
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             },
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "type": "block",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "vg_name": "ceph_vg1"
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:         }
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:     ],
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:     "2": [
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:         {
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "devices": [
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "/dev/loop5"
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             ],
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_name": "ceph_lv2",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_size": "21470642176",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "name": "ceph_lv2",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "tags": {
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.cluster_name": "ceph",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.crush_device_class": "",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.encrypted": "0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.objectstore": "bluestore",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.osd_id": "2",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.type": "block",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.vdo": "0",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:                 "ceph.with_tpm": "0"
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             },
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "type": "block",
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:             "vg_name": "ceph_vg2"
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:         }
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]:     ]
Dec 13 08:30:43 compute-0 stupefied_shirley[311901]: }
Dec 13 08:30:43 compute-0 systemd[1]: libpod-a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a.scope: Deactivated successfully.
Dec 13 08:30:43 compute-0 podman[311906]: 2025-12-13 08:30:43.092638161 +0000 UTC m=+0.361709485 container remove 83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:30:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.103 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[080315d8-0c7f-4570-bee2-6623506855ce]: (4, ('Sat Dec 13 08:30:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf (83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82)\n83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82\nSat Dec 13 08:30:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf (83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82)\n83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:43 compute-0 podman[311868]: 2025-12-13 08:30:43.107445667 +0000 UTC m=+1.112095341 container died a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shirley, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 08:30:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.106 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f1bb29-01fc-457d-a1de-6bce291c0413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.107 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:43 compute-0 kernel: tap2ca54d31-d0: left promiscuous mode
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.111 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.134 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[89b4227f-0518-4486-a447-6a217e11bb85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.146 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd55093-1d76-4942-aa24-2a1e5a1cfd24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.148 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf2dff4-c797-4d0c-a2a1-448563415c25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.167 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba4cc12-6770-4784-aae2-91c217d290f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721078, 'reachable_time': 23831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311936, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d2ca54d31\x2dd7ab\x2d4904\x2da0d6\x2d4e2970bb54bf.mount: Deactivated successfully.
Dec 13 08:30:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.173 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:30:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.174 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[78feedf5-fd68-4369-a967-2d12e7c9b89a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-710f0e6b8cf035f0d507005dd07e5b07a03746e8d692015118a115a014a12faa-merged.mount: Deactivated successfully.
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.356 248514 INFO nova.virt.libvirt.driver [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Deleting instance files /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0_del
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.357 248514 INFO nova.virt.libvirt.driver [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Deletion of /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0_del complete
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.429 248514 INFO nova.compute.manager [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Took 3.16 seconds to destroy the instance on the hypervisor.
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.430 248514 DEBUG oslo.service.loopingcall [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.430 248514 DEBUG nova.compute.manager [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.431 248514 DEBUG nova.network.neutron [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:30:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 247 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 51 KiB/s wr, 281 op/s
Dec 13 08:30:43 compute-0 podman[311868]: 2025-12-13 08:30:43.579285969 +0000 UTC m=+1.583935623 container remove a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:30:43 compute-0 systemd[1]: libpod-conmon-a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a.scope: Deactivated successfully.
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.612 248514 INFO nova.virt.libvirt.driver [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Deleting instance files /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9_del
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.614 248514 INFO nova.virt.libvirt.driver [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Deletion of /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9_del complete
Dec 13 08:30:43 compute-0 sudo[311680]: pam_unix(sudo:session): session closed for user root
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.695 248514 INFO nova.compute.manager [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Took 3.23 seconds to destroy the instance on the hypervisor.
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.697 248514 DEBUG oslo.service.loopingcall [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.698 248514 DEBUG nova.compute.manager [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:30:43 compute-0 nova_compute[248510]: 2025-12-13 08:30:43.699 248514 DEBUG nova.network.neutron [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:30:43 compute-0 sudo[311938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:30:43 compute-0 sudo[311938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:30:43 compute-0 sudo[311938]: pam_unix(sudo:session): session closed for user root
Dec 13 08:30:43 compute-0 sudo[311963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:30:43 compute-0 sudo[311963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:30:44 compute-0 podman[312001]: 2025-12-13 08:30:44.103439112 +0000 UTC m=+0.045129265 container create e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:30:44 compute-0 systemd[1]: Started libpod-conmon-e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139.scope.
Dec 13 08:30:44 compute-0 podman[312001]: 2025-12-13 08:30:44.084775751 +0000 UTC m=+0.026465924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:30:44 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:44 compute-0 podman[312001]: 2025-12-13 08:30:44.216300426 +0000 UTC m=+0.157990579 container init e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 08:30:44 compute-0 podman[312001]: 2025-12-13 08:30:44.225243817 +0000 UTC m=+0.166933970 container start e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 08:30:44 compute-0 podman[312001]: 2025-12-13 08:30:44.229351428 +0000 UTC m=+0.171041581 container attach e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 08:30:44 compute-0 stupefied_panini[312017]: 167 167
Dec 13 08:30:44 compute-0 systemd[1]: libpod-e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139.scope: Deactivated successfully.
Dec 13 08:30:44 compute-0 podman[312001]: 2025-12-13 08:30:44.234560047 +0000 UTC m=+0.176250200 container died e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc555c5033e25ffe0ac28911eef765c9f9c7dddc955539de127ec1bf687a9094-merged.mount: Deactivated successfully.
Dec 13 08:30:44 compute-0 systemd[1]: libpod-conmon-e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139.scope: Deactivated successfully.
Dec 13 08:30:44 compute-0 podman[312001]: 2025-12-13 08:30:44.279556087 +0000 UTC m=+0.221246240 container remove e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.419 248514 DEBUG nova.network.neutron [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.443 248514 INFO nova.compute.manager [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Took 1.01 seconds to deallocate network for instance.
Dec 13 08:30:44 compute-0 podman[312040]: 2025-12-13 08:30:44.489931608 +0000 UTC m=+0.053400549 container create 9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.495 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.496 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:44 compute-0 systemd[1]: Started libpod-conmon-9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c.scope.
Dec 13 08:30:44 compute-0 podman[312040]: 2025-12-13 08:30:44.468450698 +0000 UTC m=+0.031919659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:30:44 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f26c2d2e74e2aed451bb3575a37a03007eebd1a61aa68c45e2e1d4703c4f4b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f26c2d2e74e2aed451bb3575a37a03007eebd1a61aa68c45e2e1d4703c4f4b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f26c2d2e74e2aed451bb3575a37a03007eebd1a61aa68c45e2e1d4703c4f4b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f26c2d2e74e2aed451bb3575a37a03007eebd1a61aa68c45e2e1d4703c4f4b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:44 compute-0 podman[312040]: 2025-12-13 08:30:44.600472444 +0000 UTC m=+0.163941405 container init 9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilbur, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:30:44 compute-0 podman[312040]: 2025-12-13 08:30:44.609901887 +0000 UTC m=+0.173370828 container start 9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 08:30:44 compute-0 podman[312040]: 2025-12-13 08:30:44.618023967 +0000 UTC m=+0.181492908 container attach 9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.644 248514 DEBUG nova.virt.libvirt.driver [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.829 248514 DEBUG nova.compute.manager [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.829 248514 DEBUG oslo_concurrency.lockutils [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.831 248514 DEBUG oslo_concurrency.lockutils [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.832 248514 DEBUG oslo_concurrency.lockutils [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.832 248514 DEBUG nova.compute.manager [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] No waiting events found dispatching network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.832 248514 WARNING nova.compute.manager [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received unexpected event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc for instance with vm_state deleted and task_state None.
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.833 248514 DEBUG nova.compute.manager [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-vif-unplugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.833 248514 DEBUG oslo_concurrency.lockutils [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.833 248514 DEBUG oslo_concurrency.lockutils [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.834 248514 DEBUG oslo_concurrency.lockutils [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.834 248514 DEBUG nova.compute.manager [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] No waiting events found dispatching network-vif-unplugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.834 248514 DEBUG nova.compute.manager [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-vif-unplugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:30:44 compute-0 ceph-mon[76537]: pgmap v1998: 321 pgs: 321 active+clean; 247 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 51 KiB/s wr, 281 op/s
Dec 13 08:30:44 compute-0 nova_compute[248510]: 2025-12-13 08:30:44.860 248514 DEBUG oslo_concurrency.processutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.026 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.027 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.027 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.028 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.028 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.029 248514 INFO nova.compute.manager [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Terminating instance
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.030 248514 DEBUG nova.compute.manager [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.058 248514 DEBUG nova.network.neutron [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:45 compute-0 kernel: tap132de588-b2 (unregistering): left promiscuous mode
Dec 13 08:30:45 compute-0 NetworkManager[50376]: <info>  [1765614645.0727] device (tap132de588-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:30:45 compute-0 ovn_controller[148476]: 2025-12-13T08:30:45Z|00613|binding|INFO|Releasing lport 132de588-b258-4d1f-9d17-7b0ef7d73a3b from this chassis (sb_readonly=0)
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:45 compute-0 ovn_controller[148476]: 2025-12-13T08:30:45Z|00614|binding|INFO|Setting lport 132de588-b258-4d1f-9d17-7b0ef7d73a3b down in Southbound
Dec 13 08:30:45 compute-0 ovn_controller[148476]: 2025-12-13T08:30:45Z|00615|binding|INFO|Removing iface tap132de588-b2 ovn-installed in OVS
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.083 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.088 248514 INFO nova.compute.manager [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Took 1.39 seconds to deallocate network for instance.
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.088 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:5e:68 10.100.0.5'], port_security=['fa:16:3e:7d:5e:68 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'bc7aabfd-0b89-4d02-8aff-29f1bc423621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'def846f35c2747099dbe41221905d739', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75dfe6d3-3b39-436a-9304-896fdf5a4d05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fbc462b-7eea-4b98-8702-5b666823a7a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=132de588-b258-4d1f-9d17-7b0ef7d73a3b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.089 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 132de588-b258-4d1f-9d17-7b0ef7d73a3b in datapath 96c67522-dadc-49c7-81e6-c5a1d8a2b085 unbound from our chassis
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.091 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96c67522-dadc-49c7-81e6-c5a1d8a2b085, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.092 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd13bfe-0fb3-4d22-b3c5-9d81bf194425]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.093 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085 namespace which is not needed anymore
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:45 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000042.scope: Deactivated successfully.
Dec 13 08:30:45 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000042.scope: Consumed 9.048s CPU time.
Dec 13 08:30:45 compute-0 systemd-machined[210538]: Machine qemu-75-instance-00000042 terminated.
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.158 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.255 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.261 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.277 248514 INFO nova.virt.libvirt.driver [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Instance destroyed successfully.
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.278 248514 DEBUG nova.objects.instance [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lazy-loading 'resources' on Instance uuid bc7aabfd-0b89-4d02-8aff-29f1bc423621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:45 compute-0 neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085[311407]: [NOTICE]   (311411) : haproxy version is 2.8.14-c23fe91
Dec 13 08:30:45 compute-0 neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085[311407]: [NOTICE]   (311411) : path to executable is /usr/sbin/haproxy
Dec 13 08:30:45 compute-0 neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085[311407]: [ALERT]    (311411) : Current worker (311413) exited with code 143 (Terminated)
Dec 13 08:30:45 compute-0 neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085[311407]: [WARNING]  (311411) : All workers exited. Exiting... (0)
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.296 248514 DEBUG nova.virt.libvirt.vif [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-406757652',display_name='tempest-ServerMetadataNegativeTestJSON-server-406757652',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-406757652',id=66,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:30:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='def846f35c2747099dbe41221905d739',ramdisk_id='',reservation_id='r-aon0aotv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-402645639',owner_user_name='tempest-ServerMetadataNegativeTestJSON-402645639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:36Z,user_data=None,user_id='83bbc7cfbcdc49ab885e530a79ae26f2',uuid=bc7aabfd-0b89-4d02-8aff-29f1bc423621,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.297 248514 DEBUG nova.network.os_vif_util [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Converting VIF {"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.297 248514 DEBUG nova.network.os_vif_util [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.298 248514 DEBUG os_vif [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.300 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.300 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap132de588-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.302 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:45 compute-0 systemd[1]: libpod-39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6.scope: Deactivated successfully.
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.306 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.309 248514 INFO os_vif [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2')
Dec 13 08:30:45 compute-0 podman[312157]: 2025-12-13 08:30:45.312700558 +0000 UTC m=+0.084177848 container died 39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.362 248514 DEBUG nova.compute.manager [req-244defc8-2235-4951-aebf-04ae85afad64 req-a46f18ce-f877-47e0-a03a-fbce1f103c21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-vif-deleted-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:45 compute-0 lvm[312229]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:30:45 compute-0 lvm[312229]: VG ceph_vg0 finished
Dec 13 08:30:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6-userdata-shm.mount: Deactivated successfully.
Dec 13 08:30:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-594738986a7fafb8e2e8c20fa7ea6b435a01ddf06491d50b530c9beeaa2d34f3-merged.mount: Deactivated successfully.
Dec 13 08:30:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:30:45 compute-0 lvm[312231]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:30:45 compute-0 lvm[312231]: VG ceph_vg1 finished
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.430163) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614645430219, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1121, "num_deletes": 258, "total_data_size": 1546693, "memory_usage": 1578176, "flush_reason": "Manual Compaction"}
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Dec 13 08:30:45 compute-0 podman[312157]: 2025-12-13 08:30:45.431756385 +0000 UTC m=+0.203233685 container cleanup 39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 08:30:45 compute-0 lvm[312233]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:30:45 compute-0 lvm[312233]: VG ceph_vg2 finished
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614645445258, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1518653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37667, "largest_seqno": 38787, "table_properties": {"data_size": 1513251, "index_size": 2734, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12333, "raw_average_key_size": 19, "raw_value_size": 1502060, "raw_average_value_size": 2430, "num_data_blocks": 121, "num_entries": 618, "num_filter_entries": 618, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614550, "oldest_key_time": 1765614550, "file_creation_time": 1765614645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 15186 microseconds, and 5338 cpu microseconds.
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.445338) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1518653 bytes OK
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.445373) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.453926) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.454001) EVENT_LOG_v1 {"time_micros": 1765614645453986, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.454041) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1541384, prev total WAL file size 1541384, number of live WAL files 2.
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.455338) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323538' seq:72057594037927935, type:22 .. '6C6F676D0031353130' seq:0, type:0; will stop at (end)
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1483KB)], [83(8897KB)]
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614645455624, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10629241, "oldest_snapshot_seqno": -1}
Dec 13 08:30:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:30:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2454432318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:45 compute-0 systemd[1]: libpod-conmon-39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6.scope: Deactivated successfully.
Dec 13 08:30:45 compute-0 relaxed_wilbur[312056]: {}
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.517 248514 DEBUG oslo_concurrency.processutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 169 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 66 KiB/s wr, 317 op/s
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.524 248514 DEBUG nova.compute.provider_tree [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.542 248514 DEBUG nova.scheduler.client.report [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6311 keys, 10508206 bytes, temperature: kUnknown
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614645556300, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 10508206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10464340, "index_size": 26988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 160659, "raw_average_key_size": 25, "raw_value_size": 10349359, "raw_average_value_size": 1639, "num_data_blocks": 1095, "num_entries": 6311, "num_filter_entries": 6311, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:30:45 compute-0 systemd[1]: libpod-9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c.scope: Deactivated successfully.
Dec 13 08:30:45 compute-0 systemd[1]: libpod-9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c.scope: Consumed 1.535s CPU time.
Dec 13 08:30:45 compute-0 conmon[312056]: conmon 9c931010274a253ba756 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c.scope/container/memory.events
Dec 13 08:30:45 compute-0 podman[312040]: 2025-12-13 08:30:45.559429095 +0000 UTC m=+1.122898036 container died 9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilbur, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.556801) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 10508206 bytes
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.560308) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.4 rd, 104.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.7 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(13.9) write-amplify(6.9) OK, records in: 6843, records dropped: 532 output_compression: NoCompression
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.560337) EVENT_LOG_v1 {"time_micros": 1765614645560325, "job": 48, "event": "compaction_finished", "compaction_time_micros": 100884, "compaction_time_cpu_micros": 31704, "output_level": 6, "num_output_files": 1, "total_output_size": 10508206, "num_input_records": 6843, "num_output_records": 6311, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614645560849, "job": 48, "event": "table_file_deletion", "file_number": 85}
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614645562766, "job": 48, "event": "table_file_deletion", "file_number": 83}
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.455224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.562995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.563004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.563006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.563016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:30:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.563018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.578 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.580 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:45 compute-0 podman[312235]: 2025-12-13 08:30:45.583367336 +0000 UTC m=+0.122603466 container remove 39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.593 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[44d61177-38c9-4ae3-a3a5-2e8276b99dc6]: (4, ('Sat Dec 13 08:30:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085 (39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6)\n39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6\nSat Dec 13 08:30:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085 (39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6)\n39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8053dfb5-7fea-4149-be9c-3c0b5f1cdc90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.606 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96c67522-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:45 compute-0 kernel: tap96c67522-d0: left promiscuous mode
Dec 13 08:30:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f26c2d2e74e2aed451bb3575a37a03007eebd1a61aa68c45e2e1d4703c4f4b1-merged.mount: Deactivated successfully.
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.628 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.635 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94a59cbc-5bc5-42fb-b6b4-54af7f856d50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.636 248514 INFO nova.scheduler.client.report [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Deleted allocations for instance 18653df3-1934-41dc-b6ab-d1dc122052f0
Dec 13 08:30:45 compute-0 podman[312040]: 2025-12-13 08:30:45.640000393 +0000 UTC m=+1.203469334 container remove 9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.648 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[964b332c-d8d6-4f9d-8b05-f5c9caa866cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:45 compute-0 systemd[1]: libpod-conmon-9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c.scope: Deactivated successfully.
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.650 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[433e8c54-7b93-4111-84e1-86a7617edcf9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b44cd41a-b2d4-42a7-9f54-56dfb8d3f862]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721286, 'reachable_time': 22927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312265, 'error': None, 'target': 'ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.679 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:30:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.679 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[463989cb-a1dc-447d-a8a4-9a63367a29f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d96c67522\x2ddadc\x2d49c7\x2d81e6\x2dc5a1d8a2b085.mount: Deactivated successfully.
Dec 13 08:30:45 compute-0 sudo[311963]: pam_unix(sudo:session): session closed for user root
Dec 13 08:30:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:30:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:30:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.730 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.737 248514 DEBUG oslo_concurrency.processutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:30:45 compute-0 sudo[312267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:30:45 compute-0 sudo[312267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:30:45 compute-0 sudo[312267]: pam_unix(sudo:session): session closed for user root
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.858 248514 INFO nova.virt.libvirt.driver [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Deleting instance files /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621_del
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.860 248514 INFO nova.virt.libvirt.driver [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Deletion of /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621_del complete
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.929 248514 INFO nova.compute.manager [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Took 0.90 seconds to destroy the instance on the hypervisor.
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.930 248514 DEBUG oslo.service.loopingcall [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.930 248514 DEBUG nova.compute.manager [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:30:45 compute-0 nova_compute[248510]: 2025-12-13 08:30:45.930 248514 DEBUG nova.network.neutron [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.141 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.142 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.161 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.238 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:30:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031531616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.322 248514 DEBUG oslo_concurrency.processutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.328 248514 DEBUG nova.compute.provider_tree [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.349 248514 DEBUG nova.scheduler.client.report [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.390 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.393 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.401 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.401 248514 INFO nova.compute.claims [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.461 248514 INFO nova.scheduler.client.report [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Deleted allocations for instance 69f6dd3a-7c99-4537-8173-ec79bc6336a9
Dec 13 08:30:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2454432318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:46 compute-0 ceph-mon[76537]: pgmap v1999: 321 pgs: 321 active+clean; 169 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 66 KiB/s wr, 317 op/s
Dec 13 08:30:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:30:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:30:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1031531616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.569 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.613 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.675 248514 DEBUG nova.network.neutron [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.701 248514 INFO nova.compute.manager [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Took 0.77 seconds to deallocate network for instance.
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.758 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:46 compute-0 kernel: tapb5058a06-71 (unregistering): left promiscuous mode
Dec 13 08:30:46 compute-0 NetworkManager[50376]: <info>  [1765614646.9091] device (tapb5058a06-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:30:46 compute-0 ovn_controller[148476]: 2025-12-13T08:30:46Z|00616|binding|INFO|Releasing lport b5058a06-7109-4ac0-96d8-7562e66bee25 from this chassis (sb_readonly=0)
Dec 13 08:30:46 compute-0 ovn_controller[148476]: 2025-12-13T08:30:46Z|00617|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 down in Southbound
Dec 13 08:30:46 compute-0 ovn_controller[148476]: 2025-12-13T08:30:46Z|00618|binding|INFO|Removing iface tapb5058a06-71 ovn-installed in OVS
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:46.925 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:46.926 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis
Dec 13 08:30:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:46.928 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43ee8730-a174-4859-9c95-b3ad6dc68839, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:30:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:46.929 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[271c3ae6-6d78-45bb-af57-eb1c416d088d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:46.929 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace which is not needed anymore
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.961 248514 DEBUG nova.compute.manager [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.962 248514 DEBUG oslo_concurrency.lockutils [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.963 248514 DEBUG oslo_concurrency.lockutils [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.963 248514 DEBUG oslo_concurrency.lockutils [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.963 248514 DEBUG nova.compute.manager [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] No waiting events found dispatching network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.964 248514 WARNING nova.compute.manager [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received unexpected event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d for instance with vm_state deleted and task_state None.
Dec 13 08:30:46 compute-0 nova_compute[248510]: 2025-12-13 08:30:46.964 248514 DEBUG nova.compute.manager [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-vif-deleted-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:46 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 13 08:30:46 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003c.scope: Consumed 13.280s CPU time.
Dec 13 08:30:46 compute-0 systemd-machined[210538]: Machine qemu-72-instance-0000003c terminated.
Dec 13 08:30:47 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [NOTICE]   (310480) : haproxy version is 2.8.14-c23fe91
Dec 13 08:30:47 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [NOTICE]   (310480) : path to executable is /usr/sbin/haproxy
Dec 13 08:30:47 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [WARNING]  (310480) : Exiting Master process...
Dec 13 08:30:47 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [WARNING]  (310480) : Exiting Master process...
Dec 13 08:30:47 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [ALERT]    (310480) : Current worker (310482) exited with code 143 (Terminated)
Dec 13 08:30:47 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [WARNING]  (310480) : All workers exited. Exiting... (0)
Dec 13 08:30:47 compute-0 systemd[1]: libpod-6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8.scope: Deactivated successfully.
Dec 13 08:30:47 compute-0 podman[312357]: 2025-12-13 08:30:47.080914636 +0000 UTC m=+0.046287683 container died 6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8-userdata-shm.mount: Deactivated successfully.
Dec 13 08:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-51c4e163921b64c8e9176bf88efb899af63dfc0f8f476d28a475a589e7ef09c5-merged.mount: Deactivated successfully.
Dec 13 08:30:47 compute-0 NetworkManager[50376]: <info>  [1765614647.1401] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/273)
Dec 13 08:30:47 compute-0 podman[312357]: 2025-12-13 08:30:47.140570338 +0000 UTC m=+0.105943385 container cleanup 6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.141 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:47 compute-0 systemd[1]: libpod-conmon-6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8.scope: Deactivated successfully.
Dec 13 08:30:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:30:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1856961826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:47 compute-0 podman[312393]: 2025-12-13 08:30:47.218552462 +0000 UTC m=+0.049075592 container remove 6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.222 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.229 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb4d734-fe39-4061-b050-a5e719501b65]: (4, ('Sat Dec 13 08:30:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8)\n6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8\nSat Dec 13 08:30:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8)\n6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.230 248514 DEBUG nova.compute.provider_tree [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:30:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.231 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b7b54ec3-d4e1-4224-9870-2a14a5e5b02b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.232 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:47 compute-0 kernel: tap43ee8730-a0: left promiscuous mode
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.249 248514 DEBUG nova.scheduler.client.report [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.257 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.263 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46428bf5-3154-44df-9574-76c45a00ef6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.274 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.275 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.278 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.284 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[15d5a692-f19a-4e52-bd77-9b18d632c2b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.287 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[91679efa-08b2-455e-9c14-d5aca82cfcb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.306 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[374a2019-f168-4bf3-9acd-12b51d69127b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720147, 'reachable_time': 23962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312418, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.308 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:30:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.308 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4e32594c-69bd-4977-bd81-14bf8e32b0e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d43ee8730\x2da174\x2d4859\x2d9c95\x2db3ad6dc68839.mount: Deactivated successfully.
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.328 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.328 248514 DEBUG nova.network.neutron [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.355 248514 INFO nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.381 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.392 248514 DEBUG oslo_concurrency.processutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.477 248514 DEBUG nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Received event network-vif-deleted-132de588-b258-4d1f-9d17-7b0ef7d73a3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.477 248514 DEBUG nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.478 248514 DEBUG oslo_concurrency.lockutils [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.478 248514 DEBUG oslo_concurrency.lockutils [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.478 248514 DEBUG oslo_concurrency.lockutils [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.478 248514 DEBUG nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.479 248514 WARNING nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state powering-off.
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.479 248514 DEBUG nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.479 248514 DEBUG oslo_concurrency.lockutils [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.479 248514 DEBUG oslo_concurrency.lockutils [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.479 248514 DEBUG oslo_concurrency.lockutils [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.480 248514 DEBUG nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.480 248514 WARNING nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state powering-off.
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.485 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.487 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.487 248514 INFO nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Creating image(s)
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.510 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 169 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 53 KiB/s wr, 301 op/s
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.538 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.562 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.566 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.604 248514 DEBUG nova.policy [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '91d0d3efedc943b48ad0fc4295b6fc7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.641 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.642 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.642 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.643 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.663 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.666 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.704 248514 INFO nova.virt.libvirt.driver [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance shutdown successfully after 13 seconds.
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.710 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.710 248514 DEBUG nova.objects.instance [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.728 248514 DEBUG nova.compute.manager [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.788 248514 DEBUG oslo_concurrency.lockutils [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.791 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 08:30:47 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1856961826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.928 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.929 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.929 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.929 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:30:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680485059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.964 248514 DEBUG oslo_concurrency.processutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.970 248514 DEBUG nova.compute.provider_tree [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:30:47 compute-0 nova_compute[248510]: 2025-12-13 08:30:47.990 248514 DEBUG nova.scheduler.client.report [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:30:48 compute-0 nova_compute[248510]: 2025-12-13 08:30:48.017 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:48 compute-0 nova_compute[248510]: 2025-12-13 08:30:48.048 248514 INFO nova.scheduler.client.report [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Deleted allocations for instance bc7aabfd-0b89-4d02-8aff-29f1bc423621
Dec 13 08:30:48 compute-0 nova_compute[248510]: 2025-12-13 08:30:48.122 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:48 compute-0 nova_compute[248510]: 2025-12-13 08:30:48.418 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.752s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:48 compute-0 nova_compute[248510]: 2025-12-13 08:30:48.483 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] resizing rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:30:48 compute-0 nova_compute[248510]: 2025-12-13 08:30:48.651 248514 DEBUG nova.network.neutron [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Successfully created port: 0cba7327-b5c5-419e-84f1-4264a7dedf3c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:30:48 compute-0 nova_compute[248510]: 2025-12-13 08:30:48.856 248514 DEBUG nova.objects.instance [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'migration_context' on Instance uuid b4a46029-fa6e-4566-9187-16b9d1bfd6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:48 compute-0 nova_compute[248510]: 2025-12-13 08:30:48.877 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:30:48 compute-0 nova_compute[248510]: 2025-12-13 08:30:48.878 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Ensure instance console log exists: /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:30:48 compute-0 nova_compute[248510]: 2025-12-13 08:30:48.879 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:48 compute-0 nova_compute[248510]: 2025-12-13 08:30:48.879 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:48 compute-0 nova_compute[248510]: 2025-12-13 08:30:48.879 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:48 compute-0 ceph-mon[76537]: pgmap v2000: 321 pgs: 321 active+clean; 169 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 53 KiB/s wr, 301 op/s
Dec 13 08:30:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/680485059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:30:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 8484 writes, 38K keys, 8484 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8484 writes, 8484 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1608 writes, 7468 keys, 1608 commit groups, 1.0 writes per commit group, ingest: 10.21 MB, 0.02 MB/s
                                           Interval WAL: 1608 writes, 1608 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.2      3.17              0.16        24    0.132       0      0       0.0       0.0
                                             L6      1/0   10.02 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.8     62.0     51.4      3.62              0.56        23    0.157    123K    12K       0.0       0.0
                                            Sum      1/0   10.02 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.8     33.1     34.5      6.79              0.72        47    0.144    123K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.0     76.1     79.4      0.82              0.19        12    0.068     38K   3118       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     62.0     51.4      3.62              0.56        23    0.157    123K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.3      3.17              0.16        23    0.138       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.047, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.23 GB write, 0.07 MB/s write, 0.22 GB read, 0.06 MB/s read, 6.8 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 26.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000273 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1641,25.37 MB,8.34618%) FilterBlock(48,346.55 KB,0.111324%) IndexBlock(48,606.03 KB,0.19468%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 08:30:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 144 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 109 KiB/s wr, 317 op/s
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.577 248514 DEBUG nova.network.neutron [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Successfully updated port: 0cba7327-b5c5-419e-84f1-4264a7dedf3c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.601 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "refresh_cache-b4a46029-fa6e-4566-9187-16b9d1bfd6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.602 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquired lock "refresh_cache-b4a46029-fa6e-4566-9187-16b9d1bfd6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.602 248514 DEBUG nova.network.neutron [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.656 248514 DEBUG nova.objects.instance [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.687 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.692 248514 DEBUG oslo_concurrency.lockutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.699 248514 DEBUG nova.compute.manager [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Received event network-changed-0cba7327-b5c5-419e-84f1-4264a7dedf3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.700 248514 DEBUG nova.compute.manager [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Refreshing instance network info cache due to event network-changed-0cba7327-b5c5-419e-84f1-4264a7dedf3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.700 248514 DEBUG oslo_concurrency.lockutils [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-b4a46029-fa6e-4566-9187-16b9d1bfd6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.719 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.719 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.720 248514 DEBUG oslo_concurrency.lockutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.720 248514 DEBUG nova.network.neutron [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.720 248514 DEBUG nova.objects.instance [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'info_cache' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.721 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:30:49 compute-0 nova_compute[248510]: 2025-12-13 08:30:49.857 248514 DEBUG nova.network.neutron [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.302 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.654 248514 DEBUG nova.network.neutron [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Updating instance_info_cache with network_info: [{"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.679 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Releasing lock "refresh_cache-b4a46029-fa6e-4566-9187-16b9d1bfd6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.679 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Instance network_info: |[{"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.680 248514 DEBUG oslo_concurrency.lockutils [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-b4a46029-fa6e-4566-9187-16b9d1bfd6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.680 248514 DEBUG nova.network.neutron [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Refreshing network info cache for port 0cba7327-b5c5-419e-84f1-4264a7dedf3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.683 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Start _get_guest_xml network_info=[{"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.688 248514 WARNING nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.693 248514 DEBUG nova.virt.libvirt.host [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.694 248514 DEBUG nova.virt.libvirt.host [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.696 248514 DEBUG nova.virt.libvirt.host [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.697 248514 DEBUG nova.virt.libvirt.host [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.698 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.698 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.699 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.699 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.699 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.699 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.699 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.700 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.700 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.700 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.700 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.701 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:30:50 compute-0 nova_compute[248510]: 2025-12-13 08:30:50.704 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:50 compute-0 ceph-mon[76537]: pgmap v2001: 321 pgs: 321 active+clean; 144 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 109 KiB/s wr, 317 op/s
Dec 13 08:30:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1223735418' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.300 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.321 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.326 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 156 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 175 op/s
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.802 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.802 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.802 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.802 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1205994326' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.879 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.882 248514 DEBUG nova.virt.libvirt.vif [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-729072340',display_name='tempest-ImagesOneServerNegativeTestJSON-server-729072340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-729072340',id=67,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-vhrthntm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:47Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=b4a46029-fa6e-4566-9187-16b9d1bfd6d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.882 248514 DEBUG nova.network.os_vif_util [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.883 248514 DEBUG nova.network.os_vif_util [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.884 248514 DEBUG nova.objects.instance [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'pci_devices' on Instance uuid b4a46029-fa6e-4566-9187-16b9d1bfd6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.909 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:30:51 compute-0 nova_compute[248510]:   <uuid>b4a46029-fa6e-4566-9187-16b9d1bfd6d6</uuid>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   <name>instance-00000043</name>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-729072340</nova:name>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:30:50</nova:creationTime>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:30:51 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:30:51 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:30:51 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:30:51 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:30:51 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:30:51 compute-0 nova_compute[248510]:         <nova:user uuid="91d0d3efedc943b48ad0fc4295b6fc7c">tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member</nova:user>
Dec 13 08:30:51 compute-0 nova_compute[248510]:         <nova:project uuid="2de328b46a6e4f588f5e2a254db7f4ef">tempest-ImagesOneServerNegativeTestJSON-1826994500</nova:project>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:30:51 compute-0 nova_compute[248510]:         <nova:port uuid="0cba7327-b5c5-419e-84f1-4264a7dedf3c">
Dec 13 08:30:51 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <system>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <entry name="serial">b4a46029-fa6e-4566-9187-16b9d1bfd6d6</entry>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <entry name="uuid">b4a46029-fa6e-4566-9187-16b9d1bfd6d6</entry>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     </system>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   <os>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   </os>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   <features>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   </features>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk">
Dec 13 08:30:51 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:51 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk.config">
Dec 13 08:30:51 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:51 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:2e:e4:66"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <target dev="tap0cba7327-b5"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/console.log" append="off"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <video>
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     </video>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:30:51 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:30:51 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:30:51 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:30:51 compute-0 nova_compute[248510]: </domain>
Dec 13 08:30:51 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.911 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Preparing to wait for external event network-vif-plugged-0cba7327-b5c5-419e-84f1-4264a7dedf3c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.911 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.911 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.912 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.912 248514 DEBUG nova.virt.libvirt.vif [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-729072340',display_name='tempest-ImagesOneServerNegativeTestJSON-server-729072340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-729072340',id=67,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-vhrthntm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:47Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=b4a46029-fa6e-4566-9187-16b9d1bfd6d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.913 248514 DEBUG nova.network.os_vif_util [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.913 248514 DEBUG nova.network.os_vif_util [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.914 248514 DEBUG os_vif [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.915 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.916 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.920 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cba7327-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.920 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0cba7327-b5, col_values=(('external_ids', {'iface-id': '0cba7327-b5c5-419e-84f1-4264a7dedf3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:e4:66', 'vm-uuid': 'b4a46029-fa6e-4566-9187-16b9d1bfd6d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:51 compute-0 NetworkManager[50376]: <info>  [1765614651.9236] manager: (tap0cba7327-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.931 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:51 compute-0 nova_compute[248510]: 2025-12-13 08:30:51.933 248514 INFO os_vif [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5')
Dec 13 08:30:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1223735418' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1205994326' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.001 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.003 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.003 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No VIF found with MAC fa:16:3e:2e:e4:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.004 248514 INFO nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Using config drive
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.032 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:30:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674983329' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.437 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.515 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.516 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.519 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.519 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.674 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.676 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4058MB free_disk=59.92822165135294GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.676 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.676 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.726 248514 INFO nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Creating config drive at /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/disk.config
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.731 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp24n24kn_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.774 248514 DEBUG nova.network.neutron [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Updated VIF entry in instance network info cache for port 0cba7327-b5c5-419e-84f1-4264a7dedf3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.775 248514 DEBUG nova.network.neutron [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Updating instance_info_cache with network_info: [{"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.863 248514 DEBUG oslo_concurrency.lockutils [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-b4a46029-fa6e-4566-9187-16b9d1bfd6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.879 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp24n24kn_" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.924 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.933 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/disk.config b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.978 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3ced27d6-a2a8-4ce3-a7e7-494270418542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.978 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance b4a46029-fa6e-4566-9187-16b9d1bfd6d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.979 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:30:52 compute-0 nova_compute[248510]: 2025-12-13 08:30:52.979 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:30:52 compute-0 podman[312724]: 2025-12-13 08:30:52.982993891 +0000 UTC m=+0.069013024 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 13 08:30:53 compute-0 podman[312725]: 2025-12-13 08:30:53.007216068 +0000 UTC m=+0.089887068 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 08:30:53 compute-0 podman[312716]: 2025-12-13 08:30:53.012299864 +0000 UTC m=+0.100948062 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 13 08:30:53 compute-0 nova_compute[248510]: 2025-12-13 08:30:53.037 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 1.8 MiB/s wr, 109 op/s
Dec 13 08:30:53 compute-0 ceph-mon[76537]: pgmap v2002: 321 pgs: 321 active+clean; 156 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 175 op/s
Dec 13 08:30:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1674983329' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:30:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/17534862' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:54 compute-0 nova_compute[248510]: 2025-12-13 08:30:54.073 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:54 compute-0 nova_compute[248510]: 2025-12-13 08:30:54.083 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:30:54 compute-0 nova_compute[248510]: 2025-12-13 08:30:54.729 248514 DEBUG nova.network.neutron [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:30:55 compute-0 ceph-mon[76537]: pgmap v2003: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 1.8 MiB/s wr, 109 op/s
Dec 13 08:30:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/17534862' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.219 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.261 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.269 248514 DEBUG oslo_concurrency.lockutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.291 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.291 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.305 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.306 248514 DEBUG nova.objects.instance [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.326 248514 DEBUG nova.objects.instance [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.332 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/disk.config b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.333 248514 INFO nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Deleting local config drive /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/disk.config because it was imported into RBD.
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.340 248514 DEBUG nova.virt.libvirt.vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.340 248514 DEBUG nova.network.os_vif_util [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.341 248514 DEBUG nova.network.os_vif_util [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.342 248514 DEBUG os_vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.345 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5058a06-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.349 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.358 248514 INFO os_vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.370 248514 DEBUG nova.virt.libvirt.driver [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start _get_guest_xml network_info=[{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.376 248514 WARNING nova.virt.libvirt.driver [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.384 248514 DEBUG nova.virt.libvirt.host [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.385 248514 DEBUG nova.virt.libvirt.host [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.392 248514 DEBUG nova.virt.libvirt.host [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.393 248514 DEBUG nova.virt.libvirt.host [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.394 248514 DEBUG nova.virt.libvirt.driver [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.394 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.395 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.395 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.395 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.395 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.395 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.396 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.396 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.396 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.396 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.397 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.397 248514 DEBUG nova.objects.instance [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:55 compute-0 kernel: tap0cba7327-b5: entered promiscuous mode
Dec 13 08:30:55 compute-0 NetworkManager[50376]: <info>  [1765614655.4092] manager: (tap0cba7327-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Dec 13 08:30:55 compute-0 ovn_controller[148476]: 2025-12-13T08:30:55Z|00619|binding|INFO|Claiming lport 0cba7327-b5c5-419e-84f1-4264a7dedf3c for this chassis.
Dec 13 08:30:55 compute-0 ovn_controller[148476]: 2025-12-13T08:30:55Z|00620|binding|INFO|0cba7327-b5c5-419e-84f1-4264a7dedf3c: Claiming fa:16:3e:2e:e4:66 10.100.0.10
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.411 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.412 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.412 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:55 compute-0 ovn_controller[148476]: 2025-12-13T08:30:55Z|00621|binding|INFO|Setting lport 0cba7327-b5c5-419e-84f1-4264a7dedf3c ovn-installed in OVS
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.433 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:55 compute-0 systemd-udevd[312848]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:30:55 compute-0 systemd-machined[210538]: New machine qemu-76-instance-00000043.
Dec 13 08:30:55 compute-0 NetworkManager[50376]: <info>  [1765614655.4667] device (tap0cba7327-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:30:55 compute-0 NetworkManager[50376]: <info>  [1765614655.4678] device (tap0cba7327-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:30:55 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-00000043.
Dec 13 08:30:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Dec 13 08:30:55 compute-0 ovn_controller[148476]: 2025-12-13T08:30:55Z|00622|binding|INFO|Setting lport 0cba7327-b5c5-419e-84f1-4264a7dedf3c up in Southbound
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.548 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:e4:66 10.100.0.10'], port_security=['fa:16:3e:2e:e4:66 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b4a46029-fa6e-4566-9187-16b9d1bfd6d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b109409-2f17-43b9-91ce-3ee865f0de12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6896dc77-88b0-4868-833d-01241883d6af, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0cba7327-b5c5-419e-84f1-4264a7dedf3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.550 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0cba7327-b5c5-419e-84f1-4264a7dedf3c in datapath 527d37da-eda0-4bfe-9f1d-310d58024d5d bound to our chassis
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.552 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.558 248514 DEBUG oslo_concurrency.processutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.571 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcdf942-d247-4086-b66f-d9edbce2cc99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.572 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap527d37da-e1 in ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.575 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap527d37da-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.575 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5fa1ffd-008d-48f7-be63-e7a416da4947]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.576 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57819f36-5b72-4c73-8be2-b6a9af68c3da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.591 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e81f0fa2-b063-4f76-be19-6f2d90eba968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.608 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8d11e0-b5e3-488d-bc0e-7da700ef611d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.651 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f3da4de7-4749-4aa4-b582-c92e62d653b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.658 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e08b9652-0d59-4fac-aed3-6916ed414f65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 NetworkManager[50376]: <info>  [1765614655.6590] manager: (tap527d37da-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/276)
Dec 13 08:30:55 compute-0 systemd-udevd[312850]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.700 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[41e4481d-4deb-44b9-8d95-afee416ec292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.707 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[17e4a9ae-0884-4b91-840b-116bb5eaf64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 NetworkManager[50376]: <info>  [1765614655.7401] device (tap527d37da-e0): carrier: link connected
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.748 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614640.7473803, 18653df3-1934-41dc-b6ab-d1dc122052f0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.749 248514 INFO nova.compute.manager [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] VM Stopped (Lifecycle Event)
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.749 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8a7290-ede6-4063-af7e-f445c5d6d02f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.770 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc6bd92-2b18-4dd1-a95e-a669a9beea83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d37da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:e1:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723295, 'reachable_time': 34222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312901, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.783 248514 DEBUG nova.compute.manager [None req-337ee93a-d7f6-4087-8bb4-47c1ee370977 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.790 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8227afc-fb76-4b75-9c9c-fe0d3a1d54b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:e196'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 723295, 'tstamp': 723295}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312902, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.811 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a2333594-f27f-42b8-9323-379ad35cdb31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d37da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:e1:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723295, 'reachable_time': 34222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312903, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.852 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f24e824c-c185-46e0-a450-41066e95955a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.912 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614640.9121532, 69f6dd3a-7c99-4537-8173-ec79bc6336a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.913 248514 INFO nova.compute.manager [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] VM Stopped (Lifecycle Event)
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.923 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[88fe6e81-c271-45c2-b2c1-03b1c3fc9cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.924 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d37da-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.925 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.925 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap527d37da-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:55 compute-0 NetworkManager[50376]: <info>  [1765614655.9277] manager: (tap527d37da-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Dec 13 08:30:55 compute-0 kernel: tap527d37da-e0: entered promiscuous mode
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.929 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap527d37da-e0, col_values=(('external_ids', {'iface-id': '9bf9e6e9-c189-485c-8803-c58be1ee6099'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:55 compute-0 ovn_controller[148476]: 2025-12-13T08:30:55Z|00623|binding|INFO|Releasing lport 9bf9e6e9-c189-485c-8803-c58be1ee6099 from this chassis (sb_readonly=0)
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.947 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.949 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.950 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[519cd626-7cf1-4612-9cdf-76c08fce6784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.951 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.953 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'env', 'PROCESS_TAG=haproxy-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/527d37da-eda0-4bfe-9f1d-310d58024d5d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:30:55 compute-0 nova_compute[248510]: 2025-12-13 08:30:55.965 248514 DEBUG nova.compute.manager [None req-054332cc-abf0-4c53-9da6-5b4963f92f2a - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:56 compute-0 ceph-mon[76537]: pgmap v2004: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Dec 13 08:30:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:56 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1551382908' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.196 248514 DEBUG oslo_concurrency.processutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.232 248514 DEBUG oslo_concurrency.processutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.291 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.292 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.292 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:30:56 compute-0 podman[312972]: 2025-12-13 08:30:56.342984793 +0000 UTC m=+0.054353802 container create 545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 08:30:56 compute-0 systemd[1]: Started libpod-conmon-545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae.scope.
Dec 13 08:30:56 compute-0 podman[312972]: 2025-12-13 08:30:56.31608521 +0000 UTC m=+0.027454249 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:30:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c47f5a0e1eb0f634e298d3a1de4c4aaa7f62306ce8a9171dbd54df477b2f667/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.428 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614656.428335, b4a46029-fa6e-4566-9187-16b9d1bfd6d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.429 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] VM Started (Lifecycle Event)
Dec 13 08:30:56 compute-0 podman[312972]: 2025-12-13 08:30:56.43525409 +0000 UTC m=+0.146623139 container init 545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 08:30:56 compute-0 podman[312972]: 2025-12-13 08:30:56.441294259 +0000 UTC m=+0.152663268 container start 545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 08:30:56 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [NOTICE]   (313034) : New worker (313036) forked
Dec 13 08:30:56 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [NOTICE]   (313034) : Loading success.
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.466 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.472 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614656.4285111, b4a46029-fa6e-4566-9187-16b9d1bfd6d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.473 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] VM Paused (Lifecycle Event)
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.512 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.514 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.559 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:30:56 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1506052271' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.838 248514 DEBUG oslo_concurrency.processutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.840 248514 DEBUG nova.virt.libvirt.vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.840 248514 DEBUG nova.network.os_vif_util [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.841 248514 DEBUG nova.network.os_vif_util [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.843 248514 DEBUG nova.objects.instance [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.982 248514 DEBUG nova.virt.libvirt.driver [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:30:56 compute-0 nova_compute[248510]:   <uuid>3ced27d6-a2a8-4ce3-a7e7-494270418542</uuid>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   <name>instance-0000003c</name>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerActionsTestJSON-server-286397061</nova:name>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:30:55</nova:creationTime>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:30:56 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:30:56 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:30:56 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:30:56 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:30:56 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:30:56 compute-0 nova_compute[248510]:         <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 08:30:56 compute-0 nova_compute[248510]:         <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:30:56 compute-0 nova_compute[248510]:         <nova:port uuid="b5058a06-7109-4ac0-96d8-7562e66bee25">
Dec 13 08:30:56 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <system>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <entry name="serial">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <entry name="uuid">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     </system>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   <os>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   </os>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   <features>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   </features>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk">
Dec 13 08:30:56 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:56 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config">
Dec 13 08:30:56 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       </source>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:30:56 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:1f:d1:eb"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <target dev="tapb5058a06-71"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/console.log" append="off"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <video>
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     </video>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <input type="keyboard" bus="usb"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:30:56 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:30:56 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:30:56 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:30:56 compute-0 nova_compute[248510]: </domain>
Dec 13 08:30:56 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.984 248514 DEBUG nova.virt.libvirt.driver [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.984 248514 DEBUG nova.virt.libvirt.driver [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.985 248514 DEBUG nova.virt.libvirt.vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.985 248514 DEBUG nova.network.os_vif_util [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.986 248514 DEBUG nova.network.os_vif_util [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.986 248514 DEBUG os_vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.987 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.987 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.988 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.990 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.990 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5058a06-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.991 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5058a06-71, col_values=(('external_ids', {'iface-id': 'b5058a06-7109-4ac0-96d8-7562e66bee25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:d1:eb', 'vm-uuid': '3ced27d6-a2a8-4ce3-a7e7-494270418542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.992 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:56 compute-0 NetworkManager[50376]: <info>  [1765614656.9946] manager: (tapb5058a06-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Dec 13 08:30:56 compute-0 nova_compute[248510]: 2025-12-13 08:30:56.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.003 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.005 248514 INFO os_vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')
Dec 13 08:30:57 compute-0 kernel: tapb5058a06-71: entered promiscuous mode
Dec 13 08:30:57 compute-0 NetworkManager[50376]: <info>  [1765614657.0916] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Dec 13 08:30:57 compute-0 systemd-udevd[312872]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:30:57 compute-0 ovn_controller[148476]: 2025-12-13T08:30:57Z|00624|binding|INFO|Claiming lport b5058a06-7109-4ac0-96d8-7562e66bee25 for this chassis.
Dec 13 08:30:57 compute-0 ovn_controller[148476]: 2025-12-13T08:30:57Z|00625|binding|INFO|b5058a06-7109-4ac0-96d8-7562e66bee25: Claiming fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:57 compute-0 NetworkManager[50376]: <info>  [1765614657.1046] device (tapb5058a06-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:30:57 compute-0 NetworkManager[50376]: <info>  [1765614657.1056] device (tapb5058a06-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:30:57 compute-0 ovn_controller[148476]: 2025-12-13T08:30:57Z|00626|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 ovn-installed in OVS
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:57 compute-0 systemd-machined[210538]: New machine qemu-77-instance-0000003c.
Dec 13 08:30:57 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-0000003c.
Dec 13 08:30:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1551382908' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1506052271' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:30:57 compute-0 ovn_controller[148476]: 2025-12-13T08:30:57Z|00627|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 up in Southbound
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.297 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.299 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.302 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.322 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[843baf84-e8c3-4e5c-aef9-faa9776fadb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.324 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43ee8730-a1 in ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.326 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43ee8730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.326 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3edbb4a4-bcc2-421c-8664-3333b050ceba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.327 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[89186f67-2c66-41f8-a61b-3d3c69dd8e55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.342 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4919fb-1747-41a2-8647-a1bdd3989649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.364 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cbae5e66-4230-472b-a7d5-5360347799ff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.394 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fa346783-1f2f-4ac0-8e89-df6f52e61b42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.400 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe80c21d-9e37-4c0d-a8d1-4647df2650aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 NetworkManager[50376]: <info>  [1765614657.4020] manager: (tap43ee8730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/280)
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.441 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f306b98a-12a2-4d98-94ca-f7e80128524e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.445 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4b19776f-ecc5-4886-9c3e-1105c0bf44a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 NetworkManager[50376]: <info>  [1765614657.4729] device (tap43ee8730-a0): carrier: link connected
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.480 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8bec8b-db7d-4535-990c-6166bf8ca1b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.499 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c6e035-cfd3-4f8e-b558-7286f8e5cdf0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723468, 'reachable_time': 38411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313079, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.518 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea25c2f4-0a7a-4ddf-9602-4a66ebb44c6e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bbcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 723468, 'tstamp': 723468}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313080, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.535 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef2fdff-fd66-466d-b696-7982373a888c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723468, 'reachable_time': 38411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313081, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.571 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8072db-7e0f-4932-839b-8df6c3df91e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.641 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e6aaa3-f081-4ba1-a3ef-0e73a95e2790]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.642 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.643 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.643 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:57 compute-0 NetworkManager[50376]: <info>  [1765614657.6463] manager: (tap43ee8730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Dec 13 08:30:57 compute-0 kernel: tap43ee8730-a0: entered promiscuous mode
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.649 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:30:57 compute-0 ovn_controller[148476]: 2025-12-13T08:30:57Z|00628|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.667 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.669 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[98a7940e-fe2b-46c4-a42e-a7b8edb4399c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.670 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:30:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.671 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'env', 'PROCESS_TAG=haproxy-43ee8730-a174-4859-9c95-b3ad6dc68839', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43ee8730-a174-4859-9c95-b3ad6dc68839.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.984 248514 DEBUG nova.compute.manager [req-184febec-a9e5-455c-b3bf-c830df7918f4 req-59990f42-7d4f-4d30-adc0-5cb802313dba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Received event network-vif-plugged-0cba7327-b5c5-419e-84f1-4264a7dedf3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.985 248514 DEBUG oslo_concurrency.lockutils [req-184febec-a9e5-455c-b3bf-c830df7918f4 req-59990f42-7d4f-4d30-adc0-5cb802313dba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.985 248514 DEBUG oslo_concurrency.lockutils [req-184febec-a9e5-455c-b3bf-c830df7918f4 req-59990f42-7d4f-4d30-adc0-5cb802313dba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.985 248514 DEBUG oslo_concurrency.lockutils [req-184febec-a9e5-455c-b3bf-c830df7918f4 req-59990f42-7d4f-4d30-adc0-5cb802313dba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.986 248514 DEBUG nova.compute.manager [req-184febec-a9e5-455c-b3bf-c830df7918f4 req-59990f42-7d4f-4d30-adc0-5cb802313dba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Processing event network-vif-plugged-0cba7327-b5c5-419e-84f1-4264a7dedf3c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.987 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.993 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.996 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614657.9943354, b4a46029-fa6e-4566-9187-16b9d1bfd6d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:57 compute-0 nova_compute[248510]: 2025-12-13 08:30:57.996 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] VM Resumed (Lifecycle Event)
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.001 248514 INFO nova.virt.libvirt.driver [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Instance spawned successfully.
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.002 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.017 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.021 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.030 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.031 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.032 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.032 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.033 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.033 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.057 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.090 248514 INFO nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Took 10.60 seconds to spawn the instance on the hypervisor.
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.091 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:58 compute-0 podman[313113]: 2025-12-13 08:30:58.096208272 +0000 UTC m=+0.057341776 container create 0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:30:58 compute-0 systemd[1]: Started libpod-conmon-0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5.scope.
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.155 248514 INFO nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Took 11.94 seconds to build instance.
Dec 13 08:30:58 compute-0 podman[313113]: 2025-12-13 08:30:58.06369848 +0000 UTC m=+0.024832044 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:30:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2dff49460421801b39e16b895a11c4e399a618cd7fbd30a89711e20d6bd165c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:30:58 compute-0 ceph-mon[76537]: pgmap v2005: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 13 08:30:58 compute-0 podman[313113]: 2025-12-13 08:30:58.186540721 +0000 UTC m=+0.147674245 container init 0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 08:30:58 compute-0 podman[313113]: 2025-12-13 08:30:58.192705873 +0000 UTC m=+0.153839377 container start 0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.192 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.202 248514 DEBUG nova.compute.manager [req-acda5fa0-e168-4f93-b179-dc17aec93401 req-edd037e9-3674-4858-b2a0-2054c65dafaa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.202 248514 DEBUG oslo_concurrency.lockutils [req-acda5fa0-e168-4f93-b179-dc17aec93401 req-edd037e9-3674-4858-b2a0-2054c65dafaa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.202 248514 DEBUG oslo_concurrency.lockutils [req-acda5fa0-e168-4f93-b179-dc17aec93401 req-edd037e9-3674-4858-b2a0-2054c65dafaa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.203 248514 DEBUG oslo_concurrency.lockutils [req-acda5fa0-e168-4f93-b179-dc17aec93401 req-edd037e9-3674-4858-b2a0-2054c65dafaa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.203 248514 DEBUG nova.compute.manager [req-acda5fa0-e168-4f93-b179-dc17aec93401 req-edd037e9-3674-4858-b2a0-2054c65dafaa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.203 248514 WARNING nova.compute.manager [req-acda5fa0-e168-4f93-b179-dc17aec93401 req-edd037e9-3674-4858-b2a0-2054c65dafaa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state stopped and task_state powering-on.
Dec 13 08:30:58 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [NOTICE]   (313171) : New worker (313174) forked
Dec 13 08:30:58 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [NOTICE]   (313171) : Loading success.
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.241 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 3ced27d6-a2a8-4ce3-a7e7-494270418542 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.242 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614658.2412708, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.243 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.250 248514 DEBUG nova.compute.manager [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.254 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance rebooted successfully.
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.255 248514 DEBUG nova.compute.manager [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:58 compute-0 ovn_controller[148476]: 2025-12-13T08:30:58Z|00629|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:30:58 compute-0 ovn_controller[148476]: 2025-12-13T08:30:58Z|00630|binding|INFO|Releasing lport 9bf9e6e9-c189-485c-8803-c58be1ee6099 from this chassis (sb_readonly=0)
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.296 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.299 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.353 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (powering-on). Skip.
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.353 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614658.250274, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.353 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Started (Lifecycle Event)
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.372 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.404 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:30:58 compute-0 nova_compute[248510]: 2025-12-13 08:30:58.408 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:30:58 compute-0 ovn_controller[148476]: 2025-12-13T08:30:58Z|00631|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:30:58 compute-0 ovn_controller[148476]: 2025-12-13T08:30:58Z|00632|binding|INFO|Releasing lport 9bf9e6e9-c189-485c-8803-c58be1ee6099 from this chassis (sb_readonly=0)
Dec 13 08:30:59 compute-0 nova_compute[248510]: 2025-12-13 08:30:59.018 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:30:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Dec 13 08:30:59 compute-0 nova_compute[248510]: 2025-12-13 08:30:59.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.220 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.272 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614645.2704365, bc7aabfd-0b89-4d02-8aff-29f1bc423621 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.272 248514 INFO nova.compute.manager [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] VM Stopped (Lifecycle Event)
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.294 248514 DEBUG nova.compute.manager [None req-7ec94b77-6436-4707-873c-e8c30ba18268 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.354 248514 DEBUG nova.compute.manager [req-4b623bfb-40d8-4cb5-b9d5-12a25ef82b66 req-f3012280-cbc3-4364-90d8-52fb6c48cd86 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.354 248514 DEBUG oslo_concurrency.lockutils [req-4b623bfb-40d8-4cb5-b9d5-12a25ef82b66 req-f3012280-cbc3-4364-90d8-52fb6c48cd86 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.355 248514 DEBUG oslo_concurrency.lockutils [req-4b623bfb-40d8-4cb5-b9d5-12a25ef82b66 req-f3012280-cbc3-4364-90d8-52fb6c48cd86 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.355 248514 DEBUG oslo_concurrency.lockutils [req-4b623bfb-40d8-4cb5-b9d5-12a25ef82b66 req-f3012280-cbc3-4364-90d8-52fb6c48cd86 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.355 248514 DEBUG nova.compute.manager [req-4b623bfb-40d8-4cb5-b9d5-12a25ef82b66 req-f3012280-cbc3-4364-90d8-52fb6c48cd86 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.355 248514 WARNING nova.compute.manager [req-4b623bfb-40d8-4cb5-b9d5-12a25ef82b66 req-f3012280-cbc3-4364-90d8-52fb6c48cd86 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 08:31:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.520 248514 DEBUG nova.compute.manager [req-295f4f4a-70b9-4278-8ade-1fef7a71fb23 req-135c0b82-2dc7-4cfb-8b4a-c94d8e4001e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Received event network-vif-plugged-0cba7327-b5c5-419e-84f1-4264a7dedf3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.520 248514 DEBUG oslo_concurrency.lockutils [req-295f4f4a-70b9-4278-8ade-1fef7a71fb23 req-135c0b82-2dc7-4cfb-8b4a-c94d8e4001e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.522 248514 DEBUG oslo_concurrency.lockutils [req-295f4f4a-70b9-4278-8ade-1fef7a71fb23 req-135c0b82-2dc7-4cfb-8b4a-c94d8e4001e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.523 248514 DEBUG oslo_concurrency.lockutils [req-295f4f4a-70b9-4278-8ade-1fef7a71fb23 req-135c0b82-2dc7-4cfb-8b4a-c94d8e4001e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.523 248514 DEBUG nova.compute.manager [req-295f4f4a-70b9-4278-8ade-1fef7a71fb23 req-135c0b82-2dc7-4cfb-8b4a-c94d8e4001e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] No waiting events found dispatching network-vif-plugged-0cba7327-b5c5-419e-84f1-4264a7dedf3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:00 compute-0 nova_compute[248510]: 2025-12-13 08:31:00.523 248514 WARNING nova.compute.manager [req-295f4f4a-70b9-4278-8ade-1fef7a71fb23 req-135c0b82-2dc7-4cfb-8b4a-c94d8e4001e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Received unexpected event network-vif-plugged-0cba7327-b5c5-419e-84f1-4264a7dedf3c for instance with vm_state active and task_state None.
Dec 13 08:31:00 compute-0 ceph-mon[76537]: pgmap v2006: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Dec 13 08:31:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 111 op/s
Dec 13 08:31:01 compute-0 nova_compute[248510]: 2025-12-13 08:31:01.993 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:02 compute-0 ceph-mon[76537]: pgmap v2007: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 111 op/s
Dec 13 08:31:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 606 KiB/s wr, 142 op/s
Dec 13 08:31:03 compute-0 nova_compute[248510]: 2025-12-13 08:31:03.926 248514 INFO nova.compute.manager [None req-8ebbe8f1-3ffd-4ff2-afca-5df4106c2d82 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Pausing
Dec 13 08:31:03 compute-0 nova_compute[248510]: 2025-12-13 08:31:03.928 248514 DEBUG nova.objects.instance [None req-8ebbe8f1-3ffd-4ff2-afca-5df4106c2d82 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:03 compute-0 nova_compute[248510]: 2025-12-13 08:31:03.959 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614663.95928, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:03 compute-0 nova_compute[248510]: 2025-12-13 08:31:03.960 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Paused (Lifecycle Event)
Dec 13 08:31:03 compute-0 nova_compute[248510]: 2025-12-13 08:31:03.963 248514 DEBUG nova.compute.manager [None req-8ebbe8f1-3ffd-4ff2-afca-5df4106c2d82 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:03 compute-0 nova_compute[248510]: 2025-12-13 08:31:03.988 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:03 compute-0 nova_compute[248510]: 2025-12-13 08:31:03.993 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:31:04 compute-0 nova_compute[248510]: 2025-12-13 08:31:04.025 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (pausing). Skip.
Dec 13 08:31:04 compute-0 ceph-mon[76537]: pgmap v2008: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 606 KiB/s wr, 142 op/s
Dec 13 08:31:04 compute-0 nova_compute[248510]: 2025-12-13 08:31:04.950 248514 INFO nova.compute.manager [None req-dec8e43c-f14d-4313-a70c-27f775a22bcd 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Unpausing
Dec 13 08:31:04 compute-0 nova_compute[248510]: 2025-12-13 08:31:04.952 248514 DEBUG nova.objects.instance [None req-dec8e43c-f14d-4313-a70c-27f775a22bcd 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:05 compute-0 nova_compute[248510]: 2025-12-13 08:31:05.000 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614665.0005004, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:05 compute-0 virtqemud[248808]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 13 08:31:05 compute-0 nova_compute[248510]: 2025-12-13 08:31:05.001 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)
Dec 13 08:31:05 compute-0 virtqemud[248808]: hostname: compute-0
Dec 13 08:31:05 compute-0 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 08:31:05 compute-0 nova_compute[248510]: 2025-12-13 08:31:05.005 248514 DEBUG nova.virt.libvirt.guest [None req-dec8e43c-f14d-4313-a70c-27f775a22bcd 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 13 08:31:05 compute-0 nova_compute[248510]: 2025-12-13 08:31:05.005 248514 DEBUG nova.compute.manager [None req-dec8e43c-f14d-4313-a70c-27f775a22bcd 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:05 compute-0 nova_compute[248510]: 2025-12-13 08:31:05.039 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:05 compute-0 nova_compute[248510]: 2025-12-13 08:31:05.042 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:31:05 compute-0 nova_compute[248510]: 2025-12-13 08:31:05.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:31:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 145 op/s
Dec 13 08:31:06 compute-0 ceph-mon[76537]: pgmap v2009: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 145 op/s
Dec 13 08:31:06 compute-0 nova_compute[248510]: 2025-12-13 08:31:06.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:07 compute-0 nova_compute[248510]: 2025-12-13 08:31:07.269 248514 DEBUG nova.compute.manager [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:07 compute-0 nova_compute[248510]: 2025-12-13 08:31:07.326 248514 INFO nova.compute.manager [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] instance snapshotting
Dec 13 08:31:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 144 op/s
Dec 13 08:31:07 compute-0 nova_compute[248510]: 2025-12-13 08:31:07.589 248514 INFO nova.virt.libvirt.driver [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Beginning live snapshot process
Dec 13 08:31:07 compute-0 nova_compute[248510]: 2025-12-13 08:31:07.737 248514 DEBUG nova.virt.libvirt.imagebackend [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:31:07 compute-0 nova_compute[248510]: 2025-12-13 08:31:07.972 248514 DEBUG nova.storage.rbd_utils [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] creating snapshot(229866037550404a8011c7271a534aa2) on rbd image(b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:31:08 compute-0 ceph-mon[76537]: pgmap v2010: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 144 op/s
Dec 13 08:31:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:31:09
Dec 13 08:31:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:31:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:31:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'images', '.rgw.root']
Dec 13 08:31:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:31:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Dec 13 08:31:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Dec 13 08:31:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 146 op/s
Dec 13 08:31:09 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Dec 13 08:31:09 compute-0 nova_compute[248510]: 2025-12-13 08:31:09.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:09 compute-0 nova_compute[248510]: 2025-12-13 08:31:09.961 248514 DEBUG nova.storage.rbd_utils [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] cloning vms/b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk@229866037550404a8011c7271a534aa2 to images/26afe0db-dea0-4b7c-8b57-f424054157ea clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:31:10 compute-0 nova_compute[248510]: 2025-12-13 08:31:10.225 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:31:10 compute-0 ceph-mon[76537]: pgmap v2011: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 146 op/s
Dec 13 08:31:10 compute-0 ceph-mon[76537]: osdmap e222: 3 total, 3 up, 3 in
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:31:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:31:10 compute-0 nova_compute[248510]: 2025-12-13 08:31:10.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:11 compute-0 nova_compute[248510]: 2025-12-13 08:31:11.087 248514 DEBUG nova.storage.rbd_utils [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] flattening images/26afe0db-dea0-4b7c-8b57-f424054157ea flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:31:11 compute-0 nova_compute[248510]: 2025-12-13 08:31:11.508 248514 DEBUG nova.storage.rbd_utils [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] removing snapshot(229866037550404a8011c7271a534aa2) on rbd image(b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:31:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 204 B/s wr, 98 op/s
Dec 13 08:31:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Dec 13 08:31:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Dec 13 08:31:11 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Dec 13 08:31:11 compute-0 nova_compute[248510]: 2025-12-13 08:31:11.666 248514 DEBUG nova.storage.rbd_utils [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] creating snapshot(snap) on rbd image(26afe0db-dea0-4b7c-8b57-f424054157ea) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:31:12 compute-0 nova_compute[248510]: 2025-12-13 08:31:12.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Dec 13 08:31:12 compute-0 ovn_controller[148476]: 2025-12-13T08:31:12Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 08:31:12 compute-0 ceph-mon[76537]: pgmap v2013: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 204 B/s wr, 98 op/s
Dec 13 08:31:12 compute-0 ceph-mon[76537]: osdmap e223: 3 total, 3 up, 3 in
Dec 13 08:31:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Dec 13 08:31:12 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Dec 13 08:31:12 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 26afe0db-dea0-4b7c-8b57-f424054157ea could not be found.
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     image = self._client.call(
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 26afe0db-dea0-4b7c-8b57-f424054157ea
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver 
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver 
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     image = self._client.call(
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 26afe0db-dea0-4b7c-8b57-f424054157ea could not be found.
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver 
Dec 13 08:31:13 compute-0 nova_compute[248510]: 2025-12-13 08:31:13.054 248514 DEBUG nova.storage.rbd_utils [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] removing snapshot(snap) on rbd image(26afe0db-dea0-4b7c-8b57-f424054157ea) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:31:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 215 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 205 op/s
Dec 13 08:31:13 compute-0 ovn_controller[148476]: 2025-12-13T08:31:13Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:e4:66 10.100.0.10
Dec 13 08:31:13 compute-0 ovn_controller[148476]: 2025-12-13T08:31:13Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:e4:66 10.100.0.10
Dec 13 08:31:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Dec 13 08:31:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Dec 13 08:31:13 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Dec 13 08:31:13 compute-0 ceph-mon[76537]: osdmap e224: 3 total, 3 up, 3 in
Dec 13 08:31:14 compute-0 nova_compute[248510]: 2025-12-13 08:31:14.481 248514 WARNING nova.compute.manager [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Image not found during snapshot: nova.exception.ImageNotFound: Image 26afe0db-dea0-4b7c-8b57-f424054157ea could not be found.
Dec 13 08:31:15 compute-0 ceph-mon[76537]: pgmap v2016: 321 pgs: 321 active+clean; 215 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 205 op/s
Dec 13 08:31:15 compute-0 ceph-mon[76537]: osdmap e225: 3 total, 3 up, 3 in
Dec 13 08:31:15 compute-0 nova_compute[248510]: 2025-12-13 08:31:15.228 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:31:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 227 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 7.9 MiB/s wr, 364 op/s
Dec 13 08:31:15 compute-0 nova_compute[248510]: 2025-12-13 08:31:15.999 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:16 compute-0 nova_compute[248510]: 2025-12-13 08:31:16.000 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:16 compute-0 nova_compute[248510]: 2025-12-13 08:31:16.000 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:16 compute-0 nova_compute[248510]: 2025-12-13 08:31:16.001 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:16 compute-0 nova_compute[248510]: 2025-12-13 08:31:16.001 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:16 compute-0 nova_compute[248510]: 2025-12-13 08:31:16.002 248514 INFO nova.compute.manager [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Terminating instance
Dec 13 08:31:16 compute-0 nova_compute[248510]: 2025-12-13 08:31:16.003 248514 DEBUG nova.compute.manager [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:31:16 compute-0 nova_compute[248510]: 2025-12-13 08:31:16.414 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:16 compute-0 nova_compute[248510]: 2025-12-13 08:31:16.937 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:17 compute-0 nova_compute[248510]: 2025-12-13 08:31:17.001 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 227 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 7.8 MiB/s wr, 354 op/s
Dec 13 08:31:17 compute-0 kernel: tap0cba7327-b5 (unregistering): left promiscuous mode
Dec 13 08:31:17 compute-0 NetworkManager[50376]: <info>  [1765614677.6736] device (tap0cba7327-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:31:17 compute-0 ovn_controller[148476]: 2025-12-13T08:31:17Z|00633|binding|INFO|Releasing lport 0cba7327-b5c5-419e-84f1-4264a7dedf3c from this chassis (sb_readonly=0)
Dec 13 08:31:17 compute-0 ovn_controller[148476]: 2025-12-13T08:31:17Z|00634|binding|INFO|Setting lport 0cba7327-b5c5-419e-84f1-4264a7dedf3c down in Southbound
Dec 13 08:31:17 compute-0 nova_compute[248510]: 2025-12-13 08:31:17.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:17 compute-0 ovn_controller[148476]: 2025-12-13T08:31:17Z|00635|binding|INFO|Removing iface tap0cba7327-b5 ovn-installed in OVS
Dec 13 08:31:17 compute-0 nova_compute[248510]: 2025-12-13 08:31:17.685 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:17 compute-0 nova_compute[248510]: 2025-12-13 08:31:17.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:17 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000043.scope: Deactivated successfully.
Dec 13 08:31:17 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000043.scope: Consumed 13.775s CPU time.
Dec 13 08:31:17 compute-0 systemd-machined[210538]: Machine qemu-76-instance-00000043 terminated.
Dec 13 08:31:17 compute-0 ceph-mon[76537]: pgmap v2018: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 227 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 7.9 MiB/s wr, 364 op/s
Dec 13 08:31:17 compute-0 nova_compute[248510]: 2025-12-13 08:31:17.856 248514 INFO nova.virt.libvirt.driver [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Instance destroyed successfully.
Dec 13 08:31:17 compute-0 nova_compute[248510]: 2025-12-13 08:31:17.856 248514 DEBUG nova.objects.instance [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'resources' on Instance uuid b4a46029-fa6e-4566-9187-16b9d1bfd6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:18.002 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:e4:66 10.100.0.10'], port_security=['fa:16:3e:2e:e4:66 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b4a46029-fa6e-4566-9187-16b9d1bfd6d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b109409-2f17-43b9-91ce-3ee865f0de12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6896dc77-88b0-4868-833d-01241883d6af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0cba7327-b5c5-419e-84f1-4264a7dedf3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:31:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:18.004 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0cba7327-b5c5-419e-84f1-4264a7dedf3c in datapath 527d37da-eda0-4bfe-9f1d-310d58024d5d unbound from our chassis
Dec 13 08:31:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:18.008 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 527d37da-eda0-4bfe-9f1d-310d58024d5d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:31:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:18.010 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c8bd7968-67d4-4831-b429-942786ba3aaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:18.011 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d namespace which is not needed anymore
Dec 13 08:31:18 compute-0 nova_compute[248510]: 2025-12-13 08:31:18.067 248514 DEBUG nova.virt.libvirt.vif [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-729072340',display_name='tempest-ImagesOneServerNegativeTestJSON-server-729072340',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-729072340',id=67,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-vhrthntm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:14Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=b4a46029-fa6e-4566-9187-16b9d1bfd6d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:31:18 compute-0 nova_compute[248510]: 2025-12-13 08:31:18.068 248514 DEBUG nova.network.os_vif_util [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:31:18 compute-0 nova_compute[248510]: 2025-12-13 08:31:18.069 248514 DEBUG nova.network.os_vif_util [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:31:18 compute-0 nova_compute[248510]: 2025-12-13 08:31:18.069 248514 DEBUG os_vif [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:31:18 compute-0 nova_compute[248510]: 2025-12-13 08:31:18.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:18 compute-0 nova_compute[248510]: 2025-12-13 08:31:18.071 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cba7327-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:18 compute-0 nova_compute[248510]: 2025-12-13 08:31:18.074 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:18 compute-0 nova_compute[248510]: 2025-12-13 08:31:18.076 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:18 compute-0 nova_compute[248510]: 2025-12-13 08:31:18.079 248514 INFO os_vif [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5')
Dec 13 08:31:19 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [NOTICE]   (313034) : haproxy version is 2.8.14-c23fe91
Dec 13 08:31:19 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [NOTICE]   (313034) : path to executable is /usr/sbin/haproxy
Dec 13 08:31:19 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [WARNING]  (313034) : Exiting Master process...
Dec 13 08:31:19 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [WARNING]  (313034) : Exiting Master process...
Dec 13 08:31:19 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [ALERT]    (313034) : Current worker (313036) exited with code 143 (Terminated)
Dec 13 08:31:19 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [WARNING]  (313034) : All workers exited. Exiting... (0)
Dec 13 08:31:19 compute-0 systemd[1]: libpod-545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae.scope: Deactivated successfully.
Dec 13 08:31:19 compute-0 podman[313403]: 2025-12-13 08:31:19.1899885 +0000 UTC m=+1.050889100 container died 545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 08:31:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 202 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.9 MiB/s wr, 294 op/s
Dec 13 08:31:19 compute-0 ceph-mon[76537]: pgmap v2019: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 227 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 7.8 MiB/s wr, 354 op/s
Dec 13 08:31:20 compute-0 nova_compute[248510]: 2025-12-13 08:31:20.232 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:31:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Dec 13 08:31:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae-userdata-shm.mount: Deactivated successfully.
Dec 13 08:31:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c47f5a0e1eb0f634e298d3a1de4c4aaa7f62306ce8a9171dbd54df477b2f667-merged.mount: Deactivated successfully.
Dec 13 08:31:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015243929236818743 of space, bias 1.0, pg target 0.4573178771045623 quantized to 32 (current 32)
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006673486932783207 of space, bias 1.0, pg target 0.2002046079834962 quantized to 32 (current 32)
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.087537217654078e-07 of space, bias 4.0, pg target 0.0007305044661184894 quantized to 16 (current 32)
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:31:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:31:21 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Dec 13 08:31:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 202 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 147 op/s
Dec 13 08:31:21 compute-0 podman[313403]: 2025-12-13 08:31:21.873155453 +0000 UTC m=+3.734056023 container cleanup 545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 08:31:21 compute-0 systemd[1]: libpod-conmon-545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae.scope: Deactivated successfully.
Dec 13 08:31:21 compute-0 ceph-mon[76537]: pgmap v2020: 321 pgs: 321 active+clean; 202 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.9 MiB/s wr, 294 op/s
Dec 13 08:31:21 compute-0 nova_compute[248510]: 2025-12-13 08:31:21.998 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:21 compute-0 nova_compute[248510]: 2025-12-13 08:31:21.998 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:22 compute-0 nova_compute[248510]: 2025-12-13 08:31:22.021 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:31:22 compute-0 podman[313445]: 2025-12-13 08:31:22.435390515 +0000 UTC m=+0.536633601 container remove 545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:31:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.442 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[25a15ce8-afdf-4e33-ac72-3fc9e28cb993]: (4, ('Sat Dec 13 08:31:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d (545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae)\n545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae\nSat Dec 13 08:31:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d (545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae)\n545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.445 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[59d3cd0f-ea4b-4692-9256-498fe0ac1965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.446 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d37da-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:22 compute-0 nova_compute[248510]: 2025-12-13 08:31:22.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:22 compute-0 kernel: tap527d37da-e0: left promiscuous mode
Dec 13 08:31:22 compute-0 nova_compute[248510]: 2025-12-13 08:31:22.466 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.470 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9063e7a2-f4b1-4ff9-b688-0f36a6db8bf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.487 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[563ecb1c-cbe4-44a9-a36e-b6b76409a169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.489 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b64858f9-e04f-4315-a5b5-fd9865b5a08b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.517 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6ffd95bb-5ca0-4f74-ae50-aea1a7e6f82b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723285, 'reachable_time': 33752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313458, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d527d37da\x2deda0\x2d4bfe\x2d9f1d\x2d310d58024d5d.mount: Deactivated successfully.
Dec 13 08:31:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.521 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:31:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.521 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[10ae18ed-cdf2-4c7f-93a4-5c2a38abda5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:22 compute-0 nova_compute[248510]: 2025-12-13 08:31:22.566 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:22 compute-0 nova_compute[248510]: 2025-12-13 08:31:22.566 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:22 compute-0 nova_compute[248510]: 2025-12-13 08:31:22.576 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:31:22 compute-0 nova_compute[248510]: 2025-12-13 08:31:22.576 248514 INFO nova.compute.claims [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:31:22 compute-0 nova_compute[248510]: 2025-12-13 08:31:22.888 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.102 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:23 compute-0 ceph-mon[76537]: osdmap e226: 3 total, 3 up, 3 in
Dec 13 08:31:23 compute-0 ceph-mon[76537]: pgmap v2022: 321 pgs: 321 active+clean; 202 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 147 op/s
Dec 13 08:31:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:31:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245757975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 176 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 132 op/s
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.548 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.556 248514 DEBUG nova.compute.provider_tree [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.578 248514 DEBUG nova.scheduler.client.report [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.600 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.601 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.666 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.667 248514 DEBUG nova.network.neutron [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.702 248514 INFO nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.727 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.829 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.831 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.831 248514 INFO nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Creating image(s)
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.858 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.886 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.926 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.948 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:23 compute-0 nova_compute[248510]: 2025-12-13 08:31:23.994 248514 DEBUG nova.policy [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93eec08d500a4f03afb3281e9899bd6a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '71e2453379684f0ca0563f8c370ea4a3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:31:24 compute-0 podman[313540]: 2025-12-13 08:31:24.005585097 +0000 UTC m=+0.079793559 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:31:24 compute-0 podman[313522]: 2025-12-13 08:31:24.020723801 +0000 UTC m=+0.101375452 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:31:24 compute-0 podman[313538]: 2025-12-13 08:31:24.032470741 +0000 UTC m=+0.106105838 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:31:24 compute-0 nova_compute[248510]: 2025-12-13 08:31:24.036 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:24 compute-0 nova_compute[248510]: 2025-12-13 08:31:24.037 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:24 compute-0 nova_compute[248510]: 2025-12-13 08:31:24.037 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:24 compute-0 nova_compute[248510]: 2025-12-13 08:31:24.037 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:24 compute-0 nova_compute[248510]: 2025-12-13 08:31:24.062 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:24 compute-0 nova_compute[248510]: 2025-12-13 08:31:24.068 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a9c6de9d-63c0-43a5-9d6e-be356e504837_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:24 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4245757975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:24 compute-0 ceph-mon[76537]: pgmap v2023: 321 pgs: 321 active+clean; 176 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 132 op/s
Dec 13 08:31:24 compute-0 nova_compute[248510]: 2025-12-13 08:31:24.897 248514 DEBUG nova.network.neutron [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Successfully created port: 7b3b1c0a-882e-4f33-a582-667d018090d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:31:25 compute-0 nova_compute[248510]: 2025-12-13 08:31:25.277 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 130 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 17 KiB/s wr, 38 op/s
Dec 13 08:31:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:31:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Dec 13 08:31:25 compute-0 nova_compute[248510]: 2025-12-13 08:31:25.836 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:25 compute-0 nova_compute[248510]: 2025-12-13 08:31:25.837 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:25 compute-0 nova_compute[248510]: 2025-12-13 08:31:25.881 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:31:25 compute-0 nova_compute[248510]: 2025-12-13 08:31:25.894 248514 DEBUG nova.network.neutron [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Successfully updated port: 7b3b1c0a-882e-4f33-a582-667d018090d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:31:25 compute-0 nova_compute[248510]: 2025-12-13 08:31:25.926 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:31:25 compute-0 nova_compute[248510]: 2025-12-13 08:31:25.927 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquired lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:31:25 compute-0 nova_compute[248510]: 2025-12-13 08:31:25.927 248514 DEBUG nova.network.neutron [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:31:25 compute-0 nova_compute[248510]: 2025-12-13 08:31:25.986 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:25 compute-0 nova_compute[248510]: 2025-12-13 08:31:25.987 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:25 compute-0 nova_compute[248510]: 2025-12-13 08:31:25.998 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:31:25 compute-0 nova_compute[248510]: 2025-12-13 08:31:25.999 248514 INFO nova.compute.claims [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:31:26 compute-0 nova_compute[248510]: 2025-12-13 08:31:26.020 248514 DEBUG nova.compute.manager [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-changed-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:26 compute-0 nova_compute[248510]: 2025-12-13 08:31:26.020 248514 DEBUG nova.compute.manager [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Refreshing instance network info cache due to event network-changed-7b3b1c0a-882e-4f33-a582-667d018090d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:31:26 compute-0 nova_compute[248510]: 2025-12-13 08:31:26.021 248514 DEBUG oslo_concurrency.lockutils [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:31:26 compute-0 nova_compute[248510]: 2025-12-13 08:31:26.140 248514 DEBUG nova.network.neutron [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:31:26 compute-0 nova_compute[248510]: 2025-12-13 08:31:26.205 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Dec 13 08:31:26 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Dec 13 08:31:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:31:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2929692289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:26 compute-0 nova_compute[248510]: 2025-12-13 08:31:26.948 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.743s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:26 compute-0 nova_compute[248510]: 2025-12-13 08:31:26.957 248514 DEBUG nova.compute.provider_tree [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.023 248514 DEBUG nova.scheduler.client.report [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:31:27 compute-0 ceph-mon[76537]: pgmap v2024: 321 pgs: 321 active+clean; 130 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 17 KiB/s wr, 38 op/s
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.323 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a9c6de9d-63c0-43a5-9d6e-be356e504837_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.380 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.381 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.391 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] resizing rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:31:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 130 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 18 KiB/s wr, 23 op/s
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.562 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.563 248514 DEBUG nova.network.neutron [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.590 248514 INFO nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.611 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.741 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.743 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.743 248514 INFO nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Creating image(s)
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.766 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.790 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.811 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.815 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.902 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.904 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.905 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.905 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.924 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:27 compute-0 nova_compute[248510]: 2025-12-13 08:31:27.928 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.031 248514 DEBUG nova.objects.instance [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'migration_context' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.056 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.056 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Ensure instance console log exists: /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.057 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.057 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.057 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.106 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.163 248514 INFO nova.virt.libvirt.driver [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Deleting instance files /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6_del
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.164 248514 INFO nova.virt.libvirt.driver [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Deletion of /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6_del complete
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.224 248514 INFO nova.compute.manager [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Took 12.22 seconds to destroy the instance on the hypervisor.
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.225 248514 DEBUG oslo.service.loopingcall [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.227 248514 DEBUG nova.compute.manager [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.227 248514 DEBUG nova.network.neutron [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:31:28 compute-0 ceph-mon[76537]: osdmap e227: 3 total, 3 up, 3 in
Dec 13 08:31:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2929692289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:28 compute-0 ceph-mon[76537]: pgmap v2026: 321 pgs: 321 active+clean; 130 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 18 KiB/s wr, 23 op/s
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.599 248514 DEBUG nova.network.neutron [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Updating instance_info_cache with network_info: [{"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.604 248514 DEBUG nova.policy [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b310bdebec646949fad4ea1821b4c3f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.627 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Releasing lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.628 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance network_info: |[{"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.628 248514 DEBUG oslo_concurrency.lockutils [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.629 248514 DEBUG nova.network.neutron [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Refreshing network info cache for port 7b3b1c0a-882e-4f33-a582-667d018090d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.631 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Start _get_guest_xml network_info=[{"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.635 248514 WARNING nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.643 248514 DEBUG nova.virt.libvirt.host [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.643 248514 DEBUG nova.virt.libvirt.host [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.648 248514 DEBUG nova.virt.libvirt.host [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.649 248514 DEBUG nova.virt.libvirt.host [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.649 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.649 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.650 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.650 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.650 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.650 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.650 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.651 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.651 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.651 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.651 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.651 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.655 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.694 248514 DEBUG oslo_concurrency.lockutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.695 248514 DEBUG oslo_concurrency.lockutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.696 248514 INFO nova.compute.manager [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Rebooting instance
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.721 248514 DEBUG oslo_concurrency.lockutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.722 248514 DEBUG oslo_concurrency.lockutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.722 248514 DEBUG nova.network.neutron [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.858 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.930s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:28 compute-0 nova_compute[248510]: 2025-12-13 08:31:28.936 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] resizing rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.014 248514 DEBUG nova.objects.instance [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'migration_context' on Instance uuid ce9adb21-8832-4d3e-867e-b0b49bdb6850 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.033 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.034 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Ensure instance console log exists: /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.034 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.035 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.035 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:31:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2468583397' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.226 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.250 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.254 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:29 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2468583397' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 157 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.9 MiB/s wr, 60 op/s
Dec 13 08:31:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:31:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4132604061' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.895 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.897 248514 DEBUG nova.virt.libvirt.vif [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:31:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1742357064',display_name='tempest-ServerRescueTestJSON-server-1742357064',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1742357064',id=68,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-hpc6t4lx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-14259
63100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:23Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=a9c6de9d-63c0-43a5-9d6e-be356e504837,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.897 248514 DEBUG nova.network.os_vif_util [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.898 248514 DEBUG nova.network.os_vif_util [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.899 248514 DEBUG nova.objects.instance [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.921 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:31:29 compute-0 nova_compute[248510]:   <uuid>a9c6de9d-63c0-43a5-9d6e-be356e504837</uuid>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   <name>instance-00000044</name>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerRescueTestJSON-server-1742357064</nova:name>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:31:28</nova:creationTime>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:31:29 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:31:29 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:31:29 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:31:29 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:31:29 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:31:29 compute-0 nova_compute[248510]:         <nova:user uuid="93eec08d500a4f03afb3281e9899bd6a">tempest-ServerRescueTestJSON-1425963100-project-member</nova:user>
Dec 13 08:31:29 compute-0 nova_compute[248510]:         <nova:project uuid="71e2453379684f0ca0563f8c370ea4a3">tempest-ServerRescueTestJSON-1425963100</nova:project>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:31:29 compute-0 nova_compute[248510]:         <nova:port uuid="7b3b1c0a-882e-4f33-a582-667d018090d4">
Dec 13 08:31:29 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <system>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <entry name="serial">a9c6de9d-63c0-43a5-9d6e-be356e504837</entry>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <entry name="uuid">a9c6de9d-63c0-43a5-9d6e-be356e504837</entry>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     </system>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   <os>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   </os>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   <features>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   </features>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/a9c6de9d-63c0-43a5-9d6e-be356e504837_disk">
Dec 13 08:31:29 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       </source>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:31:29 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config">
Dec 13 08:31:29 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       </source>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:31:29 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:41:56:12"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <target dev="tap7b3b1c0a-88"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/console.log" append="off"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <video>
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     </video>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:31:29 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:31:29 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:31:29 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:31:29 compute-0 nova_compute[248510]: </domain>
Dec 13 08:31:29 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.922 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Preparing to wait for external event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.922 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.922 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.922 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.923 248514 DEBUG nova.virt.libvirt.vif [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:31:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1742357064',display_name='tempest-ServerRescueTestJSON-server-1742357064',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1742357064',id=68,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-hpc6t4lx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTest
JSON-1425963100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:23Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=a9c6de9d-63c0-43a5-9d6e-be356e504837,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.923 248514 DEBUG nova.network.os_vif_util [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.924 248514 DEBUG nova.network.os_vif_util [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.924 248514 DEBUG os_vif [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.926 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.926 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.930 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b3b1c0a-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.931 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b3b1c0a-88, col_values=(('external_ids', {'iface-id': '7b3b1c0a-882e-4f33-a582-667d018090d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:56:12', 'vm-uuid': 'a9c6de9d-63c0-43a5-9d6e-be356e504837'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.933 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:29 compute-0 NetworkManager[50376]: <info>  [1765614689.9343] manager: (tap7b3b1c0a-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:29 compute-0 nova_compute[248510]: 2025-12-13 08:31:29.940 248514 INFO os_vif [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88')
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.003 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.003 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.004 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No VIF found with MAC fa:16:3e:41:56:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.004 248514 INFO nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Using config drive
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.026 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:30.211 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.211 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:30.212 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.246 248514 DEBUG nova.network.neutron [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.268 248514 INFO nova.compute.manager [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Took 2.04 seconds to deallocate network for instance.
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.279 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.329 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.330 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:30 compute-0 ceph-mon[76537]: pgmap v2027: 321 pgs: 321 active+clean; 157 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.9 MiB/s wr, 60 op/s
Dec 13 08:31:30 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4132604061' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.438 248514 INFO nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Creating config drive at /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.444 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbiqgkl39 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.481 248514 DEBUG oslo_concurrency.processutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.536 248514 DEBUG nova.compute.manager [req-e21598cd-f65b-4ac5-b937-d399f3a57a8d req-b283e31d-cf01-40a8-a2ce-51c39d3cfe9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Received event network-vif-deleted-0cba7327-b5c5-419e-84f1-4264a7dedf3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.587 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbiqgkl39" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.613 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:30 compute-0 nova_compute[248510]: 2025-12-13 08:31:30.617 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:31:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76940838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.060 248514 DEBUG oslo_concurrency.processutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.069 248514 DEBUG nova.compute.provider_tree [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.126 248514 DEBUG nova.scheduler.client.report [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:31:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.166 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.212 248514 INFO nova.scheduler.client.report [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Deleted allocations for instance b4a46029-fa6e-4566-9187-16b9d1bfd6d6
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.253 248514 DEBUG nova.network.neutron [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.283 248514 DEBUG oslo_concurrency.lockutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.286 248514 DEBUG nova.compute.manager [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.294 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.492 248514 DEBUG nova.network.neutron [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Successfully created port: b2ee664d-ff99-4665-a5cc-70bd7aeb1546 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:31:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 201 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 MiB/s wr, 82 op/s
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.748 248514 DEBUG nova.network.neutron [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Updated VIF entry in instance network info cache for port 7b3b1c0a-882e-4f33-a582-667d018090d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.750 248514 DEBUG nova.network.neutron [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Updating instance_info_cache with network_info: [{"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:31:31 compute-0 nova_compute[248510]: 2025-12-13 08:31:31.775 248514 DEBUG oslo_concurrency.lockutils [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:31:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/76940838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:32 compute-0 kernel: tapb5058a06-71 (unregistering): left promiscuous mode
Dec 13 08:31:32 compute-0 NetworkManager[50376]: <info>  [1765614692.5433] device (tapb5058a06-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:31:32 compute-0 ovn_controller[148476]: 2025-12-13T08:31:32Z|00636|binding|INFO|Releasing lport b5058a06-7109-4ac0-96d8-7562e66bee25 from this chassis (sb_readonly=0)
Dec 13 08:31:32 compute-0 ovn_controller[148476]: 2025-12-13T08:31:32Z|00637|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 down in Southbound
Dec 13 08:31:32 compute-0 ovn_controller[148476]: 2025-12-13T08:31:32Z|00638|binding|INFO|Removing iface tapb5058a06-71 ovn-installed in OVS
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:32.565 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:31:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:32.566 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis
Dec 13 08:31:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:32.568 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43ee8730-a174-4859-9c95-b3ad6dc68839, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:31:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:32.569 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[788c9a22-fc91-4352-83e1-2693e482fda2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:32.569 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace which is not needed anymore
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.592 248514 DEBUG nova.network.neutron [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Successfully updated port: b2ee664d-ff99-4665-a5cc-70bd7aeb1546 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:32 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 13 08:31:32 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000003c.scope: Consumed 14.208s CPU time.
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.612 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.613 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquired lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.613 248514 DEBUG nova.network.neutron [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:31:32 compute-0 systemd-machined[210538]: Machine qemu-77-instance-0000003c terminated.
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.688 248514 DEBUG nova.compute.manager [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-changed-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.688 248514 DEBUG nova.compute.manager [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Refreshing instance network info cache due to event network-changed-b2ee664d-ff99-4665-a5cc-70bd7aeb1546. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.689 248514 DEBUG oslo_concurrency.lockutils [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.846 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.847 248514 DEBUG nova.objects.instance [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.854 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614677.853914, b4a46029-fa6e-4566-9187-16b9d1bfd6d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.854 248514 INFO nova.compute.manager [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] VM Stopped (Lifecycle Event)
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.871 248514 DEBUG nova.virt.libvirt.vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.871 248514 DEBUG nova.network.os_vif_util [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.872 248514 DEBUG nova.network.os_vif_util [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.873 248514 DEBUG os_vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.874 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5058a06-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.878 248514 DEBUG nova.compute.manager [None req-60d99fb7-2e76-40e0-a2c6-e05c03a091d1 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.879 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.881 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.886 248514 INFO os_vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.893 248514 DEBUG nova.virt.libvirt.driver [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start _get_guest_xml network_info=[{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.896 248514 WARNING nova.virt.libvirt.driver [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.902 248514 DEBUG nova.virt.libvirt.host [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.903 248514 DEBUG nova.virt.libvirt.host [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.906 248514 DEBUG nova.virt.libvirt.host [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.906 248514 DEBUG nova.virt.libvirt.host [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.907 248514 DEBUG nova.virt.libvirt.driver [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.907 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.907 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.907 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.908 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.908 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.908 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.908 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.909 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.909 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.909 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.909 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.909 248514 DEBUG nova.objects.instance [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.929 248514 DEBUG oslo_concurrency.processutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:32 compute-0 nova_compute[248510]: 2025-12-13 08:31:32.967 248514 DEBUG nova.network.neutron [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.352 248514 DEBUG nova.compute.manager [req-33b03887-6400-4006-8882-8fa8ae755e38 req-a7df91d5-40a6-441b-b383-ab4a04776532 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.352 248514 DEBUG oslo_concurrency.lockutils [req-33b03887-6400-4006-8882-8fa8ae755e38 req-a7df91d5-40a6-441b-b383-ab4a04776532 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.353 248514 DEBUG oslo_concurrency.lockutils [req-33b03887-6400-4006-8882-8fa8ae755e38 req-a7df91d5-40a6-441b-b383-ab4a04776532 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.353 248514 DEBUG oslo_concurrency.lockutils [req-33b03887-6400-4006-8882-8fa8ae755e38 req-a7df91d5-40a6-441b-b383-ab4a04776532 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.353 248514 DEBUG nova.compute.manager [req-33b03887-6400-4006-8882-8fa8ae755e38 req-a7df91d5-40a6-441b-b383-ab4a04776532 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.353 248514 WARNING nova.compute.manager [req-33b03887-6400-4006-8882-8fa8ae755e38 req-a7df91d5-40a6-441b-b383-ab4a04776532 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state reboot_started_hard.
Dec 13 08:31:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 4.3 MiB/s wr, 87 op/s
Dec 13 08:31:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:31:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316122110' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:33 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [NOTICE]   (313171) : haproxy version is 2.8.14-c23fe91
Dec 13 08:31:33 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [NOTICE]   (313171) : path to executable is /usr/sbin/haproxy
Dec 13 08:31:33 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [WARNING]  (313171) : Exiting Master process...
Dec 13 08:31:33 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [WARNING]  (313171) : Exiting Master process...
Dec 13 08:31:33 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [ALERT]    (313171) : Current worker (313174) exited with code 143 (Terminated)
Dec 13 08:31:33 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [WARNING]  (313171) : All workers exited. Exiting... (0)
Dec 13 08:31:33 compute-0 systemd[1]: libpod-0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5.scope: Deactivated successfully.
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.817 248514 DEBUG oslo_concurrency.processutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.889s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:33 compute-0 podman[314072]: 2025-12-13 08:31:33.8239087 +0000 UTC m=+1.167675822 container died 0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 08:31:33 compute-0 ceph-mon[76537]: pgmap v2028: 321 pgs: 321 active+clean; 201 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 MiB/s wr, 82 op/s
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.877 248514 DEBUG oslo_concurrency.processutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.960 248514 DEBUG nova.network.neutron [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updating instance_info_cache with network_info: [{"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.988 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Releasing lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.989 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Instance network_info: |[{"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.989 248514 DEBUG oslo_concurrency.lockutils [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.990 248514 DEBUG nova.network.neutron [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Refreshing network info cache for port b2ee664d-ff99-4665-a5cc-70bd7aeb1546 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:31:33 compute-0 nova_compute[248510]: 2025-12-13 08:31:33.995 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Start _get_guest_xml network_info=[{"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.002 248514 WARNING nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.007 248514 DEBUG nova.virt.libvirt.host [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.008 248514 DEBUG nova.virt.libvirt.host [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.014 248514 DEBUG nova.virt.libvirt.host [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.014 248514 DEBUG nova.virt.libvirt.host [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.015 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.015 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.016 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.016 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.016 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.016 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.016 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.017 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.017 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.017 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.017 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.017 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.021 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2dff49460421801b39e16b895a11c4e399a618cd7fbd30a89711e20d6bd165c-merged.mount: Deactivated successfully.
Dec 13 08:31:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5-userdata-shm.mount: Deactivated successfully.
Dec 13 08:31:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:31:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300550188' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.504 248514 DEBUG oslo_concurrency.processutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.506 248514 DEBUG nova.virt.libvirt.vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.506 248514 DEBUG nova.network.os_vif_util [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.507 248514 DEBUG nova.network.os_vif_util [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.509 248514 DEBUG nova.objects.instance [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.524 248514 DEBUG nova.virt.libvirt.driver [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:31:34 compute-0 nova_compute[248510]:   <uuid>3ced27d6-a2a8-4ce3-a7e7-494270418542</uuid>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   <name>instance-0000003c</name>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerActionsTestJSON-server-286397061</nova:name>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:31:32</nova:creationTime>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:31:34 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:31:34 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:31:34 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:31:34 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:31:34 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:31:34 compute-0 nova_compute[248510]:         <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 08:31:34 compute-0 nova_compute[248510]:         <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:31:34 compute-0 nova_compute[248510]:         <nova:port uuid="b5058a06-7109-4ac0-96d8-7562e66bee25">
Dec 13 08:31:34 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <system>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <entry name="serial">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <entry name="uuid">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     </system>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   <os>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   </os>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   <features>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   </features>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk">
Dec 13 08:31:34 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       </source>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:31:34 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config">
Dec 13 08:31:34 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       </source>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:31:34 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:1f:d1:eb"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <target dev="tapb5058a06-71"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/console.log" append="off"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <video>
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     </video>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <input type="keyboard" bus="usb"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:31:34 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:31:34 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:31:34 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:31:34 compute-0 nova_compute[248510]: </domain>
Dec 13 08:31:34 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.526 248514 DEBUG nova.virt.libvirt.driver [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.526 248514 DEBUG nova.virt.libvirt.driver [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.527 248514 DEBUG nova.virt.libvirt.vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.527 248514 DEBUG nova.network.os_vif_util [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.528 248514 DEBUG nova.network.os_vif_util [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.528 248514 DEBUG os_vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.529 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.530 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.530 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.532 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5058a06-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.533 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5058a06-71, col_values=(('external_ids', {'iface-id': 'b5058a06-7109-4ac0-96d8-7562e66bee25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:d1:eb', 'vm-uuid': '3ced27d6-a2a8-4ce3-a7e7-494270418542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:34 compute-0 NetworkManager[50376]: <info>  [1765614694.5363] manager: (tapb5058a06-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.537 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.547 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.548 248514 INFO os_vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.555 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.938s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.555 248514 INFO nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Deleting local config drive /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config because it was imported into RBD.
Dec 13 08:31:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:31:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026574021' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.589 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.622 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:34 compute-0 NetworkManager[50376]: <info>  [1765614694.6295] manager: (tap7b3b1c0a-88): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Dec 13 08:31:34 compute-0 kernel: tap7b3b1c0a-88: entered promiscuous mode
Dec 13 08:31:34 compute-0 systemd-udevd[314051]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.633 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:34 compute-0 ovn_controller[148476]: 2025-12-13T08:31:34Z|00639|binding|INFO|Claiming lport 7b3b1c0a-882e-4f33-a582-667d018090d4 for this chassis.
Dec 13 08:31:34 compute-0 ovn_controller[148476]: 2025-12-13T08:31:34Z|00640|binding|INFO|7b3b1c0a-882e-4f33-a582-667d018090d4: Claiming fa:16:3e:41:56:12 10.100.0.13
Dec 13 08:31:34 compute-0 NetworkManager[50376]: <info>  [1765614694.6441] device (tap7b3b1c0a-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:31:34 compute-0 NetworkManager[50376]: <info>  [1765614694.6450] device (tap7b3b1c0a-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:31:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:34.648 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:56:12 10.100.0.13'], port_security=['fa:16:3e:41:56:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a9c6de9d-63c0-43a5-9d6e-be356e504837', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7b3b1c0a-882e-4f33-a582-667d018090d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:31:34 compute-0 ovn_controller[148476]: 2025-12-13T08:31:34Z|00641|binding|INFO|Setting lport 7b3b1c0a-882e-4f33-a582-667d018090d4 ovn-installed in OVS
Dec 13 08:31:34 compute-0 ovn_controller[148476]: 2025-12-13T08:31:34Z|00642|binding|INFO|Setting lport 7b3b1c0a-882e-4f33-a582-667d018090d4 up in Southbound
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.675 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:34 compute-0 systemd-machined[210538]: New machine qemu-78-instance-00000044.
Dec 13 08:31:34 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-00000044.
Dec 13 08:31:34 compute-0 NetworkManager[50376]: <info>  [1765614694.8337] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Dec 13 08:31:34 compute-0 kernel: tapb5058a06-71: entered promiscuous mode
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.835 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:34 compute-0 ovn_controller[148476]: 2025-12-13T08:31:34Z|00643|binding|INFO|Claiming lport b5058a06-7109-4ac0-96d8-7562e66bee25 for this chassis.
Dec 13 08:31:34 compute-0 ovn_controller[148476]: 2025-12-13T08:31:34Z|00644|binding|INFO|b5058a06-7109-4ac0-96d8-7562e66bee25: Claiming fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 08:31:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:34.849 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:31:34 compute-0 NetworkManager[50376]: <info>  [1765614694.8522] device (tapb5058a06-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:31:34 compute-0 NetworkManager[50376]: <info>  [1765614694.8527] device (tapb5058a06-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:31:34 compute-0 ovn_controller[148476]: 2025-12-13T08:31:34Z|00645|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 ovn-installed in OVS
Dec 13 08:31:34 compute-0 ovn_controller[148476]: 2025-12-13T08:31:34Z|00646|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 up in Southbound
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.859 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:34 compute-0 nova_compute[248510]: 2025-12-13 08:31:34.867 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:34 compute-0 systemd-machined[210538]: New machine qemu-79-instance-0000003c.
Dec 13 08:31:34 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-0000003c.
Dec 13 08:31:35 compute-0 podman[314072]: 2025-12-13 08:31:35.066304324 +0000 UTC m=+2.410071446 container cleanup 0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:31:35 compute-0 ceph-mon[76537]: pgmap v2029: 321 pgs: 321 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 4.3 MiB/s wr, 87 op/s
Dec 13 08:31:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1316122110' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1300550188' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3026574021' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:35 compute-0 systemd[1]: libpod-conmon-0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5.scope: Deactivated successfully.
Dec 13 08:31:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:31:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1502144583' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.260 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.262 248514 DEBUG nova.virt.libvirt.vif [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:31:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2099788276',display_name='tempest-ServerActionsTestOtherA-server-2099788276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2099788276',id=69,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxJgtD1FEWUw7tJ8pibGATgtrZITyeOCdRqSR73HftGqNDavcdP1XHx0prQ71D2yOjUOO7ZJAEgnPXlpVAfW1QGvCbp1snKSBX1V/4lwFnsJaGPS7QewWPvSMs5UYFhVA==',key_name='tempest-keypair-180617026',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-b9qwtikj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=ce9adb21-8832-4d3e-867e-b0b49bdb6850,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.262 248514 DEBUG nova.network.os_vif_util [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.263 248514 DEBUG nova.network.os_vif_util [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.265 248514 DEBUG nova.objects.instance [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'pci_devices' on Instance uuid ce9adb21-8832-4d3e-867e-b0b49bdb6850 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.332 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:31:35 compute-0 nova_compute[248510]:   <uuid>ce9adb21-8832-4d3e-867e-b0b49bdb6850</uuid>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   <name>instance-00000045</name>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerActionsTestOtherA-server-2099788276</nova:name>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:31:34</nova:creationTime>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:31:35 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:31:35 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:31:35 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:31:35 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:31:35 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:31:35 compute-0 nova_compute[248510]:         <nova:user uuid="5b310bdebec646949fad4ea1821b4c3f">tempest-ServerActionsTestOtherA-1325599242-project-member</nova:user>
Dec 13 08:31:35 compute-0 nova_compute[248510]:         <nova:project uuid="b4d2999518df4b9f8ccbabe38976dc3c">tempest-ServerActionsTestOtherA-1325599242</nova:project>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:31:35 compute-0 nova_compute[248510]:         <nova:port uuid="b2ee664d-ff99-4665-a5cc-70bd7aeb1546">
Dec 13 08:31:35 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <system>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <entry name="serial">ce9adb21-8832-4d3e-867e-b0b49bdb6850</entry>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <entry name="uuid">ce9adb21-8832-4d3e-867e-b0b49bdb6850</entry>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     </system>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   <os>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   </os>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   <features>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   </features>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk">
Dec 13 08:31:35 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       </source>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:31:35 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk.config">
Dec 13 08:31:35 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       </source>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:31:35 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:63:3a:53"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <target dev="tapb2ee664d-ff"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/console.log" append="off"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <video>
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     </video>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:31:35 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:31:35 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:31:35 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:31:35 compute-0 nova_compute[248510]: </domain>
Dec 13 08:31:35 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.334 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Preparing to wait for external event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.334 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.335 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.335 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.336 248514 DEBUG nova.virt.libvirt.vif [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:31:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2099788276',display_name='tempest-ServerActionsTestOtherA-server-2099788276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2099788276',id=69,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxJgtD1FEWUw7tJ8pibGATgtrZITyeOCdRqSR73HftGqNDavcdP1XHx0prQ71D2yOjUOO7ZJAEgnPXlpVAfW1QGvCbp1snKSBX1V/4lwFnsJaGPS7QewWPvSMs5UYFhVA==',key_name='tempest-keypair-180617026',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-b9qwtikj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=ce9adb21-8832-4d3e-867e-b0b49bdb6850,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.336 248514 DEBUG nova.network.os_vif_util [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.338 248514 DEBUG nova.network.os_vif_util [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.339 248514 DEBUG os_vif [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.341 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.342 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.342 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.347 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2ee664d-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.348 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2ee664d-ff, col_values=(('external_ids', {'iface-id': 'b2ee664d-ff99-4665-a5cc-70bd7aeb1546', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:3a:53', 'vm-uuid': 'ce9adb21-8832-4d3e-867e-b0b49bdb6850'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.349 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:35 compute-0 NetworkManager[50376]: <info>  [1765614695.3509] manager: (tapb2ee664d-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.353 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.357 248514 INFO os_vif [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff')
Dec 13 08:31:35 compute-0 podman[314288]: 2025-12-13 08:31:35.449644053 +0000 UTC m=+0.348288955 container remove 0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.461 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614695.4604409, a9c6de9d-63c0-43a5-9d6e-be356e504837 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.461 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] VM Started (Lifecycle Event)
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.461 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[97418971-ae57-4582-a104-ed4695b8aea3]: (4, ('Sat Dec 13 08:31:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5)\n0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5\nSat Dec 13 08:31:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5)\n0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.464 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53b9756d-2229-4090-9c73-1edc09bb245d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.466 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:35 compute-0 kernel: tap43ee8730-a0: left promiscuous mode
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.491 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.494 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.494 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f43b02b5-deb0-46bc-880c-f6d1e0186d7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.501 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614695.460597, a9c6de9d-63c0-43a5-9d6e-be356e504837 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.502 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] VM Paused (Lifecycle Event)
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.506 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.506 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.506 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No VIF found with MAC fa:16:3e:63:3a:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.507 248514 INFO nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Using config drive
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.509 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0774e96b-c46d-4d54-aaf0-a4d403888fc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.515 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c814c2-aae7-4a7d-9e5b-89e96f2304ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.538 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[135ce90a-f232-4366-ae41-1a1711ad7a82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723460, 'reachable_time': 21436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314360, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.543 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.544 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:31:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.3 MiB/s wr, 88 op/s
Dec 13 08:31:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d43ee8730\x2da174\x2d4859\x2d9c95\x2db3ad6dc68839.mount: Deactivated successfully.
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.544 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3a8ff1-0a51-469a-a1aa-157af1f214a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.546 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3b1c0a-882e-4f33-a582-667d018090d4 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 unbound from our chassis
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.548 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.549 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09a5c771-f4fd-4c17-9319-27a5ccf066e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.550 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.551 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.557 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.566 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.566 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[07d95d1d-c645-451d-afb4-9f28cbf94096]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.568 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43ee8730-a1 in ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.570 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43ee8730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.570 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[69dd3bb7-0afe-4ee5-84dd-36368b29e66e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.571 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2db8fb15-2889-4b48-ba13-d89732fe5302]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.584 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1092c58e-ec1e-484a-8512-8ad7031fb8c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.599 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.601 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3687c8-fe34-439b-b20d-e7c437d259fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.645 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ce2ca0-0d30-47e5-b0dc-5bcb498ce8d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 NetworkManager[50376]: <info>  [1765614695.6528] manager: (tap43ee8730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/287)
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.652 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52dd9cd5-d092-46e0-adf8-2abb014f85c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 systemd-udevd[314391]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.696 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[92f4da8e-3739-42bf-ab8d-af9c93a1b16d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.703 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b7289652-a343-41c1-be11-3b666abffc91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 NetworkManager[50376]: <info>  [1765614695.7350] device (tap43ee8730-a0): carrier: link connected
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.742 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c38d44-8aec-43bf-840e-e51be8ef7c13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.761 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bac9e3a8-346b-402c-9fed-99e988ebc5a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727295, 'reachable_time': 29620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314412, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.777 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42629ff2-9efc-4daa-8e5a-18cb961d8c8d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bbcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727295, 'tstamp': 727295}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314429, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.793 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[44023f03-d916-40f3-9b83-11b5a99747ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727295, 'reachable_time': 29620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314432, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.819 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0556b5-ecd3-4a8a-bb07-bb78908dd400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.877 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[84d655e6-7e4b-412c-a7e3-ea422c38f016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.879 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.879 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.879 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.881 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:35 compute-0 NetworkManager[50376]: <info>  [1765614695.8822] manager: (tap43ee8730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Dec 13 08:31:35 compute-0 kernel: tap43ee8730-a0: entered promiscuous mode
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.887 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:35 compute-0 ovn_controller[148476]: 2025-12-13T08:31:35Z|00647|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.904 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.909 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.910 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.911 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8871b9bc-55b8-4334-b678-693f1b8035ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.911 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:31:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.912 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'env', 'PROCESS_TAG=haproxy-43ee8730-a174-4859-9c95-b3ad6dc68839', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43ee8730-a174-4859-9c95-b3ad6dc68839.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.983 248514 DEBUG nova.compute.manager [req-8c5059c9-bac5-4d1c-9bd0-6d26ae0c7c65 req-bca9079e-74c2-4bdb-ae80-a1e65f01e939 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.984 248514 DEBUG oslo_concurrency.lockutils [req-8c5059c9-bac5-4d1c-9bd0-6d26ae0c7c65 req-bca9079e-74c2-4bdb-ae80-a1e65f01e939 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.984 248514 DEBUG oslo_concurrency.lockutils [req-8c5059c9-bac5-4d1c-9bd0-6d26ae0c7c65 req-bca9079e-74c2-4bdb-ae80-a1e65f01e939 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.984 248514 DEBUG oslo_concurrency.lockutils [req-8c5059c9-bac5-4d1c-9bd0-6d26ae0c7c65 req-bca9079e-74c2-4bdb-ae80-a1e65f01e939 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.984 248514 DEBUG nova.compute.manager [req-8c5059c9-bac5-4d1c-9bd0-6d26ae0c7c65 req-bca9079e-74c2-4bdb-ae80-a1e65f01e939 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Processing event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.985 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.991 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614695.9903412, a9c6de9d-63c0-43a5-9d6e-be356e504837 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.991 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] VM Resumed (Lifecycle Event)
Dec 13 08:31:35 compute-0 nova_compute[248510]: 2025-12-13 08:31:35.993 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.002 248514 INFO nova.virt.libvirt.driver [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance spawned successfully.
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.003 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.021 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.025 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.029 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.030 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.030 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.031 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.031 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.031 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.039 248514 INFO nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Creating config drive at /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/disk.config
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.045 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyunprnlr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1502144583' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:36 compute-0 ceph-mon[76537]: pgmap v2030: 321 pgs: 321 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.3 MiB/s wr, 88 op/s
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.090 248514 DEBUG nova.network.neutron [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updated VIF entry in instance network info cache for port b2ee664d-ff99-4665-a5cc-70bd7aeb1546. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.091 248514 DEBUG nova.network.neutron [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updating instance_info_cache with network_info: [{"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.094 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.097 248514 DEBUG nova.compute.manager [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.097 248514 DEBUG oslo_concurrency.lockutils [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.098 248514 DEBUG oslo_concurrency.lockutils [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.098 248514 DEBUG oslo_concurrency.lockutils [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.099 248514 DEBUG nova.compute.manager [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.099 248514 WARNING nova.compute.manager [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state reboot_started_hard.
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.099 248514 DEBUG nova.compute.manager [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.100 248514 DEBUG oslo_concurrency.lockutils [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.100 248514 DEBUG oslo_concurrency.lockutils [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.100 248514 DEBUG oslo_concurrency.lockutils [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.101 248514 DEBUG nova.compute.manager [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.101 248514 WARNING nova.compute.manager [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state reboot_started_hard.
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.102 248514 DEBUG nova.compute.manager [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.103 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 3ced27d6-a2a8-4ce3-a7e7-494270418542 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.103 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614696.0367146, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.104 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.107 248514 INFO nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Took 12.28 seconds to spawn the instance on the hypervisor.
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.107 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.113 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance rebooted successfully.
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.114 248514 DEBUG nova.compute.manager [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.158 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.161 248514 DEBUG oslo_concurrency.lockutils [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.164 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.195 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyunprnlr" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.215 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.228 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.234 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/disk.config ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.277 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.277 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614696.0368996, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.278 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Started (Lifecycle Event)
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.281 248514 DEBUG oslo_concurrency.lockutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.289 248514 INFO nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Took 13.76 seconds to build instance.
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.320 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.321 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.326 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:31:36 compute-0 podman[314494]: 2025-12-13 08:31:36.341014576 +0000 UTC m=+0.071997267 container create f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 08:31:36 compute-0 systemd[1]: Started libpod-conmon-f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce.scope.
Dec 13 08:31:36 compute-0 podman[314494]: 2025-12-13 08:31:36.297234096 +0000 UTC m=+0.028216807 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:31:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5248c08c37cabcce8b13f5f177f9d27ffd14bb5e2bc32ec9054b3d6fc2fbb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:36 compute-0 podman[314494]: 2025-12-13 08:31:36.439764003 +0000 UTC m=+0.170746724 container init f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:31:36 compute-0 podman[314494]: 2025-12-13 08:31:36.445860103 +0000 UTC m=+0.176842794 container start f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.456 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/disk.config ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.458 248514 INFO nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Deleting local config drive /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/disk.config because it was imported into RBD.
Dec 13 08:31:36 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [NOTICE]   (314530) : New worker (314533) forked
Dec 13 08:31:36 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [NOTICE]   (314530) : Loading success.
Dec 13 08:31:36 compute-0 kernel: tapb2ee664d-ff: entered promiscuous mode
Dec 13 08:31:36 compute-0 NetworkManager[50376]: <info>  [1765614696.5176] manager: (tapb2ee664d-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Dec 13 08:31:36 compute-0 systemd-udevd[314395]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:31:36 compute-0 NetworkManager[50376]: <info>  [1765614696.5399] device (tapb2ee664d-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:31:36 compute-0 NetworkManager[50376]: <info>  [1765614696.5408] device (tapb2ee664d-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:31:36 compute-0 ovn_controller[148476]: 2025-12-13T08:31:36Z|00648|binding|INFO|Claiming lport b2ee664d-ff99-4665-a5cc-70bd7aeb1546 for this chassis.
Dec 13 08:31:36 compute-0 ovn_controller[148476]: 2025-12-13T08:31:36Z|00649|binding|INFO|b2ee664d-ff99-4665-a5cc-70bd7aeb1546: Claiming fa:16:3e:63:3a:53 10.100.0.3
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.559 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:3a:53 10.100.0.3'], port_security=['fa:16:3e:63:3a:53 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ce9adb21-8832-4d3e-867e-b0b49bdb6850', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4a5bf7b-dd16-4d92-81b7-546493ad4db6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b2ee664d-ff99-4665-a5cc-70bd7aeb1546) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.560 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b2ee664d-ff99-4665-a5cc-70bd7aeb1546 in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f bound to our chassis
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.563 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f
Dec 13 08:31:36 compute-0 ovn_controller[148476]: 2025-12-13T08:31:36Z|00650|binding|INFO|Setting lport b2ee664d-ff99-4665-a5cc-70bd7aeb1546 ovn-installed in OVS
Dec 13 08:31:36 compute-0 ovn_controller[148476]: 2025-12-13T08:31:36Z|00651|binding|INFO|Setting lport b2ee664d-ff99-4665-a5cc-70bd7aeb1546 up in Southbound
Dec 13 08:31:36 compute-0 nova_compute[248510]: 2025-12-13 08:31:36.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:36 compute-0 systemd-machined[210538]: New machine qemu-80-instance-00000045.
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.583 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9efae554-9552-4654-801e-f80a2e61b2b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.584 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap409fc0bb-c1 in ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.586 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap409fc0bb-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.586 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d982545-4a92-4eef-a8fa-6acf4e477c30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.587 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1100f0-667a-4960-8c5b-0c2b84cc58bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000045.
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.607 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9bc01b-e705-4d09-aabe-931d7dcfb9fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[743842b1-9c00-48b5-b43e-eb68de8b4908]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.676 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1076277f-5f91-47bf-8a42-de8498f6000c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 NetworkManager[50376]: <info>  [1765614696.6857] manager: (tap409fc0bb-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/290)
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.684 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[82532825-a922-4b4a-bbd4-acfcbb323a44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.732 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf613de-d2aa-4521-bb7d-6f8ae632c402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.737 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[631d89ea-01e4-4e46-b264-03957dde4a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 NetworkManager[50376]: <info>  [1765614696.7694] device (tap409fc0bb-c0): carrier: link connected
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.776 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[31bc3869-7d8d-4814-82e6-d5cac2de6d4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.800 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[38d64295-ab01-4e4e-95e2-ce75045bd611]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314570, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.820 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[45200e70-4b22-4d98-a477-e7c0a51c7585]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:3b3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727398, 'tstamp': 727398}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314571, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.842 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[83d8482d-6020-43f0-8aae-b5eccac63582]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314572, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.902 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3a259058-7a0b-4ba5-ab3a-04f89abeb4b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.006 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[acad6b18-dee7-4063-b2fb-19a986e9fe86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.008 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.009 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.009 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:37 compute-0 NetworkManager[50376]: <info>  [1765614697.0122] manager: (tap409fc0bb-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:37 compute-0 kernel: tap409fc0bb-c0: entered promiscuous mode
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.018 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:37 compute-0 ovn_controller[148476]: 2025-12-13T08:31:37Z|00652|binding|INFO|Releasing lport c8e8a31b-a5fe-4e2d-bc19-65995078988f from this chassis (sb_readonly=0)
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.023 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.025 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/409fc0bb-caf3-4b90-9e44-83ff383dc88f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/409fc0bb-caf3-4b90-9e44-83ff383dc88f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.026 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[93656d86-8bf2-4d0d-8f71-1356e4c2ebc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.027 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-409fc0bb-caf3-4b90-9e44-83ff383dc88f
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/409fc0bb-caf3-4b90-9e44-83ff383dc88f.pid.haproxy
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 409fc0bb-caf3-4b90-9e44-83ff383dc88f
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:31:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.029 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'env', 'PROCESS_TAG=haproxy-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/409fc0bb-caf3-4b90-9e44-83ff383dc88f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.040 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.063 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614697.0624516, ce9adb21-8832-4d3e-867e-b0b49bdb6850 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.063 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] VM Started (Lifecycle Event)
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.095 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.099 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614697.063127, ce9adb21-8832-4d3e-867e-b0b49bdb6850 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.100 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] VM Paused (Lifecycle Event)
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.129 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.130 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.133 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.137 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.160 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.165 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.256 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.257 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.265 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.265 248514 INFO nova.compute.claims [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:31:37 compute-0 nova_compute[248510]: 2025-12-13 08:31:37.423 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:37 compute-0 podman[314646]: 2025-12-13 08:31:37.483622708 +0000 UTC m=+0.068258235 container create 2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 08:31:37 compute-0 systemd[1]: Started libpod-conmon-2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a.scope.
Dec 13 08:31:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.9 MiB/s wr, 81 op/s
Dec 13 08:31:37 compute-0 podman[314646]: 2025-12-13 08:31:37.45287817 +0000 UTC m=+0.037513717 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:31:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe1480fb3e22837a053f76acd0fd060e8f26ee729f63461240b6103c38efbe6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:37 compute-0 podman[314646]: 2025-12-13 08:31:37.576130581 +0000 UTC m=+0.160766118 container init 2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 08:31:37 compute-0 podman[314646]: 2025-12-13 08:31:37.584457106 +0000 UTC m=+0.169092633 container start 2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:31:37 compute-0 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [NOTICE]   (314684) : New worker (314686) forked
Dec 13 08:31:37 compute-0 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [NOTICE]   (314684) : Loading success.
Dec 13 08:31:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:31:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005998734' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.012 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.020 248514 DEBUG nova.compute.provider_tree [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.045 248514 DEBUG nova.scheduler.client.report [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.071 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.071 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.114 248514 DEBUG nova.compute.manager [req-f1300c9a-5838-49c3-bb7b-61de61ef9423 req-7c88d50a-5208-4d90-85e6-66dcbb5eeb18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.114 248514 DEBUG oslo_concurrency.lockutils [req-f1300c9a-5838-49c3-bb7b-61de61ef9423 req-7c88d50a-5208-4d90-85e6-66dcbb5eeb18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.115 248514 DEBUG oslo_concurrency.lockutils [req-f1300c9a-5838-49c3-bb7b-61de61ef9423 req-7c88d50a-5208-4d90-85e6-66dcbb5eeb18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.115 248514 DEBUG oslo_concurrency.lockutils [req-f1300c9a-5838-49c3-bb7b-61de61ef9423 req-7c88d50a-5208-4d90-85e6-66dcbb5eeb18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.115 248514 DEBUG nova.compute.manager [req-f1300c9a-5838-49c3-bb7b-61de61ef9423 req-7c88d50a-5208-4d90-85e6-66dcbb5eeb18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.115 248514 WARNING nova.compute.manager [req-f1300c9a-5838-49c3-bb7b-61de61ef9423 req-7c88d50a-5208-4d90-85e6-66dcbb5eeb18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received unexpected event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with vm_state active and task_state None.
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.121 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.121 248514 DEBUG nova.network.neutron [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.146 248514 INFO nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.172 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.196 248514 DEBUG nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.197 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.197 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.197 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.198 248514 DEBUG nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.198 248514 WARNING nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.198 248514 DEBUG nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.198 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.198 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.199 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.199 248514 DEBUG nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Processing event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.199 248514 DEBUG nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.199 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.200 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.200 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.200 248514 DEBUG nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] No waiting events found dispatching network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.200 248514 WARNING nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received unexpected event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 for instance with vm_state building and task_state spawning.
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.201 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.214 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614698.2123458, ce9adb21-8832-4d3e-867e-b0b49bdb6850 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.215 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] VM Resumed (Lifecycle Event)
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.217 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.235 248514 INFO nova.virt.libvirt.driver [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Instance spawned successfully.
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.236 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.259 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.270 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.277 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.277 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.278 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.278 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.279 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.279 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.320 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.323 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.324 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.324 248514 INFO nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Creating image(s)
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.345 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.367 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.390 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.393 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.436 248514 DEBUG nova.policy [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '91d0d3efedc943b48ad0fc4295b6fc7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.444 248514 INFO nova.compute.manager [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Rescuing
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.444 248514 DEBUG oslo_concurrency.lockutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.444 248514 DEBUG oslo_concurrency.lockutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquired lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.445 248514 DEBUG nova.network.neutron [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.448 248514 INFO nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Took 10.71 seconds to spawn the instance on the hypervisor.
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.449 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.486 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.487 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.488 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.488 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.516 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.526 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.588 248514 INFO nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Took 12.64 seconds to build instance.
Dec 13 08:31:38 compute-0 ceph-mon[76537]: pgmap v2031: 321 pgs: 321 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.9 MiB/s wr, 81 op/s
Dec 13 08:31:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4005998734' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.652 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:38 compute-0 nova_compute[248510]: 2025-12-13 08:31:38.925 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:39 compute-0 nova_compute[248510]: 2025-12-13 08:31:39.016 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] resizing rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:31:39 compute-0 nova_compute[248510]: 2025-12-13 08:31:39.147 248514 DEBUG nova.objects.instance [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'migration_context' on Instance uuid 4d93482e-582f-4d44-ab53-87cd5f6aa66a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:39 compute-0 nova_compute[248510]: 2025-12-13 08:31:39.176 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:31:39 compute-0 nova_compute[248510]: 2025-12-13 08:31:39.176 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Ensure instance console log exists: /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:31:39 compute-0 nova_compute[248510]: 2025-12-13 08:31:39.178 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:39 compute-0 nova_compute[248510]: 2025-12-13 08:31:39.178 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:39 compute-0 nova_compute[248510]: 2025-12-13 08:31:39.178 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:39 compute-0 nova_compute[248510]: 2025-12-13 08:31:39.505 248514 DEBUG nova.network.neutron [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Successfully created port: 35b172ab-1be7-44b2-9a76-0f60de6851ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:31:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 225 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Dec 13 08:31:39 compute-0 nova_compute[248510]: 2025-12-13 08:31:39.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:31:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:31:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:31:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:31:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:31:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:31:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:31:40 compute-0 nova_compute[248510]: 2025-12-13 08:31:40.332 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:40 compute-0 nova_compute[248510]: 2025-12-13 08:31:40.350 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:40 compute-0 ceph-mon[76537]: pgmap v2032: 321 pgs: 321 active+clean; 225 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Dec 13 08:31:40 compute-0 nova_compute[248510]: 2025-12-13 08:31:40.742 248514 DEBUG nova.network.neutron [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Updating instance_info_cache with network_info: [{"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:31:40 compute-0 nova_compute[248510]: 2025-12-13 08:31:40.769 248514 DEBUG oslo_concurrency.lockutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Releasing lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:31:41 compute-0 nova_compute[248510]: 2025-12-13 08:31:41.066 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:31:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:31:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 239 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.3 MiB/s wr, 237 op/s
Dec 13 08:31:42 compute-0 nova_compute[248510]: 2025-12-13 08:31:42.494 248514 DEBUG nova.compute.manager [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-changed-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:42 compute-0 nova_compute[248510]: 2025-12-13 08:31:42.494 248514 DEBUG nova.compute.manager [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Refreshing instance network info cache due to event network-changed-b2ee664d-ff99-4665-a5cc-70bd7aeb1546. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:31:42 compute-0 nova_compute[248510]: 2025-12-13 08:31:42.494 248514 DEBUG oslo_concurrency.lockutils [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:31:42 compute-0 nova_compute[248510]: 2025-12-13 08:31:42.495 248514 DEBUG oslo_concurrency.lockutils [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:31:42 compute-0 nova_compute[248510]: 2025-12-13 08:31:42.495 248514 DEBUG nova.network.neutron [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Refreshing network info cache for port b2ee664d-ff99-4665-a5cc-70bd7aeb1546 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:31:42 compute-0 nova_compute[248510]: 2025-12-13 08:31:42.657 248514 DEBUG nova.network.neutron [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Successfully updated port: 35b172ab-1be7-44b2-9a76-0f60de6851ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:31:42 compute-0 nova_compute[248510]: 2025-12-13 08:31:42.682 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "refresh_cache-4d93482e-582f-4d44-ab53-87cd5f6aa66a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:31:42 compute-0 nova_compute[248510]: 2025-12-13 08:31:42.682 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquired lock "refresh_cache-4d93482e-582f-4d44-ab53-87cd5f6aa66a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:31:42 compute-0 nova_compute[248510]: 2025-12-13 08:31:42.683 248514 DEBUG nova.network.neutron [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:31:42 compute-0 ceph-mon[76537]: pgmap v2033: 321 pgs: 321 active+clean; 239 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.3 MiB/s wr, 237 op/s
Dec 13 08:31:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 262 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.3 MiB/s wr, 249 op/s
Dec 13 08:31:43 compute-0 nova_compute[248510]: 2025-12-13 08:31:43.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:31:43 compute-0 nova_compute[248510]: 2025-12-13 08:31:43.774 248514 DEBUG nova.network.neutron [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:31:44 compute-0 nova_compute[248510]: 2025-12-13 08:31:44.807 248514 DEBUG nova.compute.manager [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-changed-35b172ab-1be7-44b2-9a76-0f60de6851ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:44 compute-0 nova_compute[248510]: 2025-12-13 08:31:44.807 248514 DEBUG nova.compute.manager [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Refreshing instance network info cache due to event network-changed-35b172ab-1be7-44b2-9a76-0f60de6851ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:31:44 compute-0 nova_compute[248510]: 2025-12-13 08:31:44.807 248514 DEBUG oslo_concurrency.lockutils [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4d93482e-582f-4d44-ab53-87cd5f6aa66a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:31:44 compute-0 ceph-mon[76537]: pgmap v2034: 321 pgs: 321 active+clean; 262 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.3 MiB/s wr, 249 op/s
Dec 13 08:31:45 compute-0 nova_compute[248510]: 2025-12-13 08:31:45.334 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:45 compute-0 nova_compute[248510]: 2025-12-13 08:31:45.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 262 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 245 op/s
Dec 13 08:31:45 compute-0 sudo[314863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:31:45 compute-0 sudo[314863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:31:45 compute-0 sudo[314863]: pam_unix(sudo:session): session closed for user root
Dec 13 08:31:45 compute-0 sudo[314888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:31:45 compute-0 sudo[314888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:31:46 compute-0 ceph-mon[76537]: pgmap v2035: 321 pgs: 321 active+clean; 262 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 245 op/s
Dec 13 08:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:31:46 compute-0 sudo[314888]: pam_unix(sudo:session): session closed for user root
Dec 13 08:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:31:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:31:46 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:31:46 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:31:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:31:46 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:31:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:31:46 compute-0 sudo[314944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:31:46 compute-0 sudo[314944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:31:46 compute-0 sudo[314944]: pam_unix(sudo:session): session closed for user root
Dec 13 08:31:46 compute-0 sudo[314969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:31:46 compute-0 sudo[314969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.003 248514 DEBUG nova.network.neutron [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Updating instance_info_cache with network_info: [{"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.012 248514 DEBUG nova.network.neutron [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updated VIF entry in instance network info cache for port b2ee664d-ff99-4665-a5cc-70bd7aeb1546. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.012 248514 DEBUG nova.network.neutron [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updating instance_info_cache with network_info: [{"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.027 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Releasing lock "refresh_cache-4d93482e-582f-4d44-ab53-87cd5f6aa66a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:31:47 compute-0 podman[315004]: 2025-12-13 08:31:47.026470803 +0000 UTC m=+0.050479279 container create c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rhodes, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.029 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Instance network_info: |[{"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.030 248514 DEBUG oslo_concurrency.lockutils [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4d93482e-582f-4d44-ab53-87cd5f6aa66a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.030 248514 DEBUG nova.network.neutron [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Refreshing network info cache for port 35b172ab-1be7-44b2-9a76-0f60de6851ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.036 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Start _get_guest_xml network_info=[{"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.038 248514 DEBUG oslo_concurrency.lockutils [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.043 248514 WARNING nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.052 248514 DEBUG nova.virt.libvirt.host [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.054 248514 DEBUG nova.virt.libvirt.host [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.061 248514 DEBUG nova.virt.libvirt.host [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.061 248514 DEBUG nova.virt.libvirt.host [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.062 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.062 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.063 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.063 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.064 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.064 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.065 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.065 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.065 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.066 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.066 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.067 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.071 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:47 compute-0 systemd[1]: Started libpod-conmon-c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99.scope.
Dec 13 08:31:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:31:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:31:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:31:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:31:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:31:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:31:47 compute-0 podman[315004]: 2025-12-13 08:31:47.001907663 +0000 UTC m=+0.025916159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:31:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:31:47 compute-0 podman[315004]: 2025-12-13 08:31:47.125513457 +0000 UTC m=+0.149521953 container init c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:31:47 compute-0 podman[315004]: 2025-12-13 08:31:47.134286509 +0000 UTC m=+0.158294985 container start c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:31:47 compute-0 podman[315004]: 2025-12-13 08:31:47.138256552 +0000 UTC m=+0.162265248 container attach c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:31:47 compute-0 intelligent_rhodes[315019]: 167 167
Dec 13 08:31:47 compute-0 systemd[1]: libpod-c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99.scope: Deactivated successfully.
Dec 13 08:31:47 compute-0 podman[315004]: 2025-12-13 08:31:47.144047245 +0000 UTC m=+0.168055721 container died c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-79082616f053c878e9656e6427e7e5ef0b1fcbdb23034c72f4840fb69ab67c4d-merged.mount: Deactivated successfully.
Dec 13 08:31:47 compute-0 podman[315004]: 2025-12-13 08:31:47.196179975 +0000 UTC m=+0.220188451 container remove c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 08:31:47 compute-0 systemd[1]: libpod-conmon-c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99.scope: Deactivated successfully.
Dec 13 08:31:47 compute-0 podman[315062]: 2025-12-13 08:31:47.406960256 +0000 UTC m=+0.049128021 container create 145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jennings, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 08:31:47 compute-0 systemd[1]: Started libpod-conmon-145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305.scope.
Dec 13 08:31:47 compute-0 podman[315062]: 2025-12-13 08:31:47.386435722 +0000 UTC m=+0.028603507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:31:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af3e791781eefdc3c22bf0cf8de0c62610e531afcc438d8e484dd4dcb086c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af3e791781eefdc3c22bf0cf8de0c62610e531afcc438d8e484dd4dcb086c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af3e791781eefdc3c22bf0cf8de0c62610e531afcc438d8e484dd4dcb086c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af3e791781eefdc3c22bf0cf8de0c62610e531afcc438d8e484dd4dcb086c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af3e791781eefdc3c22bf0cf8de0c62610e531afcc438d8e484dd4dcb086c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:47 compute-0 podman[315062]: 2025-12-13 08:31:47.507931524 +0000 UTC m=+0.150099309 container init 145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jennings, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:31:47 compute-0 podman[315062]: 2025-12-13 08:31:47.516525069 +0000 UTC m=+0.158692834 container start 145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 08:31:47 compute-0 podman[315062]: 2025-12-13 08:31:47.520807855 +0000 UTC m=+0.162975640 container attach 145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 08:31:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 262 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Dec 13 08:31:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:31:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2490801119' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.695 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.730 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.737 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.792 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.794 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:31:47 compute-0 nova_compute[248510]: 2025-12-13 08:31:47.826 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:31:48 compute-0 ceph-mon[76537]: pgmap v2036: 321 pgs: 321 active+clean; 262 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Dec 13 08:31:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2490801119' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:48 compute-0 elated_jennings[315078]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:31:48 compute-0 elated_jennings[315078]: --> All data devices are unavailable
Dec 13 08:31:48 compute-0 systemd[1]: libpod-145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305.scope: Deactivated successfully.
Dec 13 08:31:48 compute-0 conmon[315078]: conmon 145369c4449b2c3669c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305.scope/container/memory.events
Dec 13 08:31:48 compute-0 podman[315062]: 2025-12-13 08:31:48.143157713 +0000 UTC m=+0.785325488 container died 145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jennings, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-65af3e791781eefdc3c22bf0cf8de0c62610e531afcc438d8e484dd4dcb086c8-merged.mount: Deactivated successfully.
Dec 13 08:31:48 compute-0 podman[315062]: 2025-12-13 08:31:48.200970041 +0000 UTC m=+0.843137806 container remove 145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:31:48 compute-0 systemd[1]: libpod-conmon-145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305.scope: Deactivated successfully.
Dec 13 08:31:48 compute-0 sudo[314969]: pam_unix(sudo:session): session closed for user root
Dec 13 08:31:48 compute-0 sudo[315151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:31:48 compute-0 sudo[315151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:31:48 compute-0 sudo[315151]: pam_unix(sudo:session): session closed for user root
Dec 13 08:31:48 compute-0 ovn_controller[148476]: 2025-12-13T08:31:48Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 08:31:48 compute-0 sudo[315176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:31:48 compute-0 sudo[315176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:31:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:31:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1830471578' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.447 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.711s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.450 248514 DEBUG nova.virt.libvirt.vif [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1782155204',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1782155204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1782155204',id=70,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-e9s8cfpo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:38Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=4d93482e-582f-4d44-ab53-87cd5f6aa66a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.450 248514 DEBUG nova.network.os_vif_util [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.451 248514 DEBUG nova.network.os_vif_util [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.453 248514 DEBUG nova.objects.instance [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d93482e-582f-4d44-ab53-87cd5f6aa66a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.477 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:31:48 compute-0 nova_compute[248510]:   <uuid>4d93482e-582f-4d44-ab53-87cd5f6aa66a</uuid>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   <name>instance-00000046</name>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1782155204</nova:name>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:31:47</nova:creationTime>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:31:48 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:31:48 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:31:48 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:31:48 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:31:48 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:31:48 compute-0 nova_compute[248510]:         <nova:user uuid="91d0d3efedc943b48ad0fc4295b6fc7c">tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member</nova:user>
Dec 13 08:31:48 compute-0 nova_compute[248510]:         <nova:project uuid="2de328b46a6e4f588f5e2a254db7f4ef">tempest-ImagesOneServerNegativeTestJSON-1826994500</nova:project>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:31:48 compute-0 nova_compute[248510]:         <nova:port uuid="35b172ab-1be7-44b2-9a76-0f60de6851ab">
Dec 13 08:31:48 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <system>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <entry name="serial">4d93482e-582f-4d44-ab53-87cd5f6aa66a</entry>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <entry name="uuid">4d93482e-582f-4d44-ab53-87cd5f6aa66a</entry>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     </system>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   <os>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   </os>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   <features>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   </features>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk">
Dec 13 08:31:48 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       </source>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:31:48 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk.config">
Dec 13 08:31:48 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       </source>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:31:48 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:67:86:fc"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <target dev="tap35b172ab-1b"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/console.log" append="off"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <video>
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     </video>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:31:48 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:31:48 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:31:48 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:31:48 compute-0 nova_compute[248510]: </domain>
Dec 13 08:31:48 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.484 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Preparing to wait for external event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.484 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.485 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.485 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.486 248514 DEBUG nova.virt.libvirt.vif [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1782155204',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1782155204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1782155204',id=70,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-e9s8cfpo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-182699450
0',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:38Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=4d93482e-582f-4d44-ab53-87cd5f6aa66a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.487 248514 DEBUG nova.network.os_vif_util [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.488 248514 DEBUG nova.network.os_vif_util [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.488 248514 DEBUG os_vif [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.490 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.491 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.495 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.496 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35b172ab-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.497 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35b172ab-1b, col_values=(('external_ids', {'iface-id': '35b172ab-1be7-44b2-9a76-0f60de6851ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:86:fc', 'vm-uuid': '4d93482e-582f-4d44-ab53-87cd5f6aa66a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:48 compute-0 NetworkManager[50376]: <info>  [1765614708.5002] manager: (tap35b172ab-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.503 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.508 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.509 248514 INFO os_vif [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b')
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.573 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.574 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.575 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No VIF found with MAC fa:16:3e:67:86:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.575 248514 INFO nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Using config drive
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.616 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:48 compute-0 podman[315235]: 2025-12-13 08:31:48.708292438 +0000 UTC m=+0.045800796 container create b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ardinghelli, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 08:31:48 compute-0 systemd[1]: Started libpod-conmon-b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2.scope.
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:31:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:31:48 compute-0 podman[315235]: 2025-12-13 08:31:48.689175206 +0000 UTC m=+0.026683584 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:31:48 compute-0 podman[315235]: 2025-12-13 08:31:48.796390336 +0000 UTC m=+0.133898714 container init b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ardinghelli, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:31:48 compute-0 podman[315235]: 2025-12-13 08:31:48.806056977 +0000 UTC m=+0.143565335 container start b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 08:31:48 compute-0 distracted_ardinghelli[315250]: 167 167
Dec 13 08:31:48 compute-0 systemd[1]: libpod-b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2.scope: Deactivated successfully.
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.951 248514 INFO nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Creating config drive at /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/disk.config
Dec 13 08:31:48 compute-0 nova_compute[248510]: 2025-12-13 08:31:48.958 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpswaztoxc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:49 compute-0 podman[315235]: 2025-12-13 08:31:49.096779649 +0000 UTC m=+0.434288047 container attach b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 08:31:49 compute-0 podman[315235]: 2025-12-13 08:31:49.098499544 +0000 UTC m=+0.436007902 container died b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:31:49 compute-0 nova_compute[248510]: 2025-12-13 08:31:49.114 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpswaztoxc" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:49 compute-0 nova_compute[248510]: 2025-12-13 08:31:49.146 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:49 compute-0 nova_compute[248510]: 2025-12-13 08:31:49.158 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/disk.config 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1830471578' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c7f1de502834039167cc4e7f79ca686d348c8e365c18e5d123c840791d4e869-merged.mount: Deactivated successfully.
Dec 13 08:31:49 compute-0 nova_compute[248510]: 2025-12-13 08:31:49.201 248514 DEBUG nova.network.neutron [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Updated VIF entry in instance network info cache for port 35b172ab-1be7-44b2-9a76-0f60de6851ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:31:49 compute-0 nova_compute[248510]: 2025-12-13 08:31:49.202 248514 DEBUG nova.network.neutron [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Updating instance_info_cache with network_info: [{"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:31:49 compute-0 nova_compute[248510]: 2025-12-13 08:31:49.223 248514 DEBUG oslo_concurrency.lockutils [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4d93482e-582f-4d44-ab53-87cd5f6aa66a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:31:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 280 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 3.3 MiB/s wr, 290 op/s
Dec 13 08:31:49 compute-0 podman[315235]: 2025-12-13 08:31:49.575657307 +0000 UTC m=+0.913165685 container remove b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:31:49 compute-0 systemd[1]: libpod-conmon-b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2.scope: Deactivated successfully.
Dec 13 08:31:49 compute-0 nova_compute[248510]: 2025-12-13 08:31:49.656 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/disk.config 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:49 compute-0 nova_compute[248510]: 2025-12-13 08:31:49.657 248514 INFO nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Deleting local config drive /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/disk.config because it was imported into RBD.
Dec 13 08:31:49 compute-0 kernel: tap35b172ab-1b: entered promiscuous mode
Dec 13 08:31:49 compute-0 NetworkManager[50376]: <info>  [1765614709.7198] manager: (tap35b172ab-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/293)
Dec 13 08:31:49 compute-0 ovn_controller[148476]: 2025-12-13T08:31:49Z|00653|binding|INFO|Claiming lport 35b172ab-1be7-44b2-9a76-0f60de6851ab for this chassis.
Dec 13 08:31:49 compute-0 ovn_controller[148476]: 2025-12-13T08:31:49Z|00654|binding|INFO|35b172ab-1be7-44b2-9a76-0f60de6851ab: Claiming fa:16:3e:67:86:fc 10.100.0.3
Dec 13 08:31:49 compute-0 nova_compute[248510]: 2025-12-13 08:31:49.722 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.735 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:86:fc 10.100.0.3'], port_security=['fa:16:3e:67:86:fc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4d93482e-582f-4d44-ab53-87cd5f6aa66a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b109409-2f17-43b9-91ce-3ee865f0de12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6896dc77-88b0-4868-833d-01241883d6af, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=35b172ab-1be7-44b2-9a76-0f60de6851ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.737 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 35b172ab-1be7-44b2-9a76-0f60de6851ab in datapath 527d37da-eda0-4bfe-9f1d-310d58024d5d bound to our chassis
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.740 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 08:31:49 compute-0 ovn_controller[148476]: 2025-12-13T08:31:49Z|00655|binding|INFO|Setting lport 35b172ab-1be7-44b2-9a76-0f60de6851ab ovn-installed in OVS
Dec 13 08:31:49 compute-0 ovn_controller[148476]: 2025-12-13T08:31:49Z|00656|binding|INFO|Setting lport 35b172ab-1be7-44b2-9a76-0f60de6851ab up in Southbound
Dec 13 08:31:49 compute-0 nova_compute[248510]: 2025-12-13 08:31:49.753 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:49 compute-0 nova_compute[248510]: 2025-12-13 08:31:49.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.758 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86a472ac-752c-444c-9f84-b1158d099cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.759 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap527d37da-e1 in ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:31:49 compute-0 systemd-udevd[315337]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.762 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap527d37da-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.762 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dba48cbb-99da-4f6b-a22f-efa3d4d90e2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.763 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b3e4fd-ea05-422b-bdc8-16bdc3f65c70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.776 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ed499bc1-3d74-46ac-bf2a-137e6e8e2965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:49 compute-0 systemd-machined[210538]: New machine qemu-81-instance-00000046.
Dec 13 08:31:49 compute-0 NetworkManager[50376]: <info>  [1765614709.7893] device (tap35b172ab-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:31:49 compute-0 NetworkManager[50376]: <info>  [1765614709.7904] device (tap35b172ab-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:31:49 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-00000046.
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.806 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42298666-283f-403a-8aae-f3027db99df4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.837 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1331c18b-1ebc-4d4a-9a69-2018f7b7fe89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.842 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46e60fa4-e0e5-43ec-a96c-6c9773c3f5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:49 compute-0 NetworkManager[50376]: <info>  [1765614709.8435] manager: (tap527d37da-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/294)
Dec 13 08:31:49 compute-0 podman[315326]: 2025-12-13 08:31:49.785164742 +0000 UTC m=+0.040994916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.888 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9caf9d28-f92e-4a8b-bb23-8e73e761bb95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.894 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2efdfb5b-b46f-4417-be67-07edd0b29367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:49 compute-0 NetworkManager[50376]: <info>  [1765614709.9246] device (tap527d37da-e0): carrier: link connected
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.933 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[463acf46-57a3-4db5-a36b-e78be3081404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.966 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[643f2016-caae-4688-ac6b-9844537765f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d37da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:e1:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728714, 'reachable_time': 35714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315373, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.993 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf78d08c-b81a-4901-affb-5b40a0b749a7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:e196'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 728714, 'tstamp': 728714}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315374, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.013 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea13872-f6a0-4e06-9fbe-349daad71488]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d37da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:e1:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728714, 'reachable_time': 35714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315375, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.050 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca510a7-5711-47da-b0a2-593d596adaf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.132 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9364f90-e5c3-4d4a-a356-119125340703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.133 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d37da-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.134 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.134 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap527d37da-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.136 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:50 compute-0 NetworkManager[50376]: <info>  [1765614710.1369] manager: (tap527d37da-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Dec 13 08:31:50 compute-0 kernel: tap527d37da-e0: entered promiscuous mode
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.139 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:50 compute-0 ovn_controller[148476]: 2025-12-13T08:31:50Z|00657|binding|INFO|Releasing lport 9bf9e6e9-c189-485c-8803-c58be1ee6099 from this chassis (sb_readonly=0)
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.139 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap527d37da-e0, col_values=(('external_ids', {'iface-id': '9bf9e6e9-c189-485c-8803-c58be1ee6099'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.161 248514 DEBUG nova.compute.manager [req-c2562ef3-b1c9-42e8-8441-cd37feabf0cf req-4358cf95-c8ed-4ca7-8af0-126db98e9c81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.162 248514 DEBUG oslo_concurrency.lockutils [req-c2562ef3-b1c9-42e8-8441-cd37feabf0cf req-4358cf95-c8ed-4ca7-8af0-126db98e9c81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.162 248514 DEBUG oslo_concurrency.lockutils [req-c2562ef3-b1c9-42e8-8441-cd37feabf0cf req-4358cf95-c8ed-4ca7-8af0-126db98e9c81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.162 248514 DEBUG oslo_concurrency.lockutils [req-c2562ef3-b1c9-42e8-8441-cd37feabf0cf req-4358cf95-c8ed-4ca7-8af0-126db98e9c81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.162 248514 DEBUG nova.compute.manager [req-c2562ef3-b1c9-42e8-8441-cd37feabf0cf req-4358cf95-c8ed-4ca7-8af0-126db98e9c81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Processing event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.215 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.216 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.217 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d89f7720-9069-4fc6-b7a4-6a6442337d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.218 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.218 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'env', 'PROCESS_TAG=haproxy-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/527d37da-eda0-4bfe-9f1d-310d58024d5d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:31:50 compute-0 podman[315326]: 2025-12-13 08:31:50.292846055 +0000 UTC m=+0.548676209 container create a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_heyrovsky, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:50 compute-0 ceph-mon[76537]: pgmap v2037: 321 pgs: 321 active+clean; 280 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 3.3 MiB/s wr, 290 op/s
Dec 13 08:31:50 compute-0 systemd[1]: Started libpod-conmon-a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead.scope.
Dec 13 08:31:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6d3f60db00aa8c7ed42f546c3ad84461dd8ae4ba8069c90252e3fede71a4fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6d3f60db00aa8c7ed42f546c3ad84461dd8ae4ba8069c90252e3fede71a4fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6d3f60db00aa8c7ed42f546c3ad84461dd8ae4ba8069c90252e3fede71a4fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6d3f60db00aa8c7ed42f546c3ad84461dd8ae4ba8069c90252e3fede71a4fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:50 compute-0 podman[315326]: 2025-12-13 08:31:50.437809609 +0000 UTC m=+0.693639783 container init a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:31:50 compute-0 podman[315326]: 2025-12-13 08:31:50.451518327 +0000 UTC m=+0.707348481 container start a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 08:31:50 compute-0 podman[315326]: 2025-12-13 08:31:50.472206198 +0000 UTC m=+0.728036452 container attach a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.613 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.615 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614710.614457, 4d93482e-582f-4d44-ab53-87cd5f6aa66a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.615 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] VM Started (Lifecycle Event)
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.623 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.629 248514 INFO nova.virt.libvirt.driver [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Instance spawned successfully.
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.629 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.643 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.658 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.663 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.665 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.666 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.666 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.666 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.667 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.678 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.679 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614710.6147447, 4d93482e-582f-4d44-ab53-87cd5f6aa66a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.679 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] VM Paused (Lifecycle Event)
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.706 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.710 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614710.6232316, 4d93482e-582f-4d44-ab53-87cd5f6aa66a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.710 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] VM Resumed (Lifecycle Event)
Dec 13 08:31:50 compute-0 podman[315460]: 2025-12-13 08:31:50.71629886 +0000 UTC m=+0.074804570 container create 00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.741 248514 INFO nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Took 12.42 seconds to spawn the instance on the hypervisor.
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.741 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.742 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.751 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:31:50 compute-0 podman[315460]: 2025-12-13 08:31:50.677538541 +0000 UTC m=+0.036044271 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:31:50 compute-0 systemd[1]: Started libpod-conmon-00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223.scope.
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.797 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:31:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375cfa30e4ad2df2d660069f03564a6feb3d318d0dbd2e0180ee77a13cc3fe04/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]: {
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:     "0": [
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:         {
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "devices": [
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "/dev/loop3"
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             ],
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_name": "ceph_lv0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_size": "21470642176",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "name": "ceph_lv0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "tags": {
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.cluster_name": "ceph",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.crush_device_class": "",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.encrypted": "0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.objectstore": "bluestore",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.osd_id": "0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.type": "block",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.vdo": "0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.with_tpm": "0"
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             },
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "type": "block",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "vg_name": "ceph_vg0"
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:         }
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:     ],
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:     "1": [
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:         {
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "devices": [
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "/dev/loop4"
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             ],
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_name": "ceph_lv1",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_size": "21470642176",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "name": "ceph_lv1",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "tags": {
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.cluster_name": "ceph",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.crush_device_class": "",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.encrypted": "0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.objectstore": "bluestore",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.osd_id": "1",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.type": "block",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.vdo": "0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.with_tpm": "0"
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             },
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "type": "block",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "vg_name": "ceph_vg1"
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:         }
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:     ],
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:     "2": [
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:         {
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "devices": [
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "/dev/loop5"
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             ],
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_name": "ceph_lv2",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_size": "21470642176",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "name": "ceph_lv2",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "tags": {
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.cluster_name": "ceph",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.crush_device_class": "",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.encrypted": "0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.objectstore": "bluestore",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.osd_id": "2",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.type": "block",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.vdo": "0",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:                 "ceph.with_tpm": "0"
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             },
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "type": "block",
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:             "vg_name": "ceph_vg2"
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:         }
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]:     ]
Dec 13 08:31:50 compute-0 great_heyrovsky[315409]: }
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.825 248514 INFO nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Took 13.61 seconds to build instance.
Dec 13 08:31:50 compute-0 podman[315460]: 2025-12-13 08:31:50.839054736 +0000 UTC m=+0.197560466 container init 00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 08:31:50 compute-0 podman[315460]: 2025-12-13 08:31:50.845609412 +0000 UTC m=+0.204115122 container start 00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 08:31:50 compute-0 nova_compute[248510]: 2025-12-13 08:31:50.845 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:50 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [NOTICE]   (315483) : New worker (315485) forked
Dec 13 08:31:50 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [NOTICE]   (315483) : Loading success.
Dec 13 08:31:50 compute-0 systemd[1]: libpod-a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead.scope: Deactivated successfully.
Dec 13 08:31:50 compute-0 podman[315326]: 2025-12-13 08:31:50.897151187 +0000 UTC m=+1.152981341 container died a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_heyrovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.967 158419 INFO oslo_service.service [-] Child 315396 exited with status 0
Dec 13 08:31:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.968 158419 WARNING oslo_service.service [-] pid 315396 not in child list
Dec 13 08:31:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:31:51 compute-0 nova_compute[248510]: 2025-12-13 08:31:51.226 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c6d3f60db00aa8c7ed42f546c3ad84461dd8ae4ba8069c90252e3fede71a4fa-merged.mount: Deactivated successfully.
Dec 13 08:31:51 compute-0 podman[315326]: 2025-12-13 08:31:51.253797621 +0000 UTC m=+1.509627775 container remove a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 08:31:51 compute-0 ovn_controller[148476]: 2025-12-13T08:31:51Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:3a:53 10.100.0.3
Dec 13 08:31:51 compute-0 ovn_controller[148476]: 2025-12-13T08:31:51Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:3a:53 10.100.0.3
Dec 13 08:31:51 compute-0 systemd[1]: libpod-conmon-a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead.scope: Deactivated successfully.
Dec 13 08:31:51 compute-0 sudo[315176]: pam_unix(sudo:session): session closed for user root
Dec 13 08:31:51 compute-0 sudo[315508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:31:51 compute-0 sudo[315508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:31:51 compute-0 sudo[315508]: pam_unix(sudo:session): session closed for user root
Dec 13 08:31:51 compute-0 sudo[315533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:31:51 compute-0 sudo[315533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:31:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 287 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.5 MiB/s wr, 217 op/s
Dec 13 08:31:51 compute-0 podman[315571]: 2025-12-13 08:31:51.797962043 +0000 UTC m=+0.029084297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:31:51 compute-0 podman[315571]: 2025-12-13 08:31:51.904349607 +0000 UTC m=+0.135471841 container create 96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.240 248514 DEBUG nova.compute.manager [req-9c657b90-146c-4291-804e-fe16ef2bc674 req-4360c6d7-c34c-447e-ad77-3e79128bc9e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.241 248514 DEBUG oslo_concurrency.lockutils [req-9c657b90-146c-4291-804e-fe16ef2bc674 req-4360c6d7-c34c-447e-ad77-3e79128bc9e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.243 248514 DEBUG oslo_concurrency.lockutils [req-9c657b90-146c-4291-804e-fe16ef2bc674 req-4360c6d7-c34c-447e-ad77-3e79128bc9e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.243 248514 DEBUG oslo_concurrency.lockutils [req-9c657b90-146c-4291-804e-fe16ef2bc674 req-4360c6d7-c34c-447e-ad77-3e79128bc9e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.244 248514 DEBUG nova.compute.manager [req-9c657b90-146c-4291-804e-fe16ef2bc674 req-4360c6d7-c34c-447e-ad77-3e79128bc9e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] No waiting events found dispatching network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.245 248514 WARNING nova.compute.manager [req-9c657b90-146c-4291-804e-fe16ef2bc674 req-4360c6d7-c34c-447e-ad77-3e79128bc9e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received unexpected event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab for instance with vm_state active and task_state None.
Dec 13 08:31:52 compute-0 systemd[1]: Started libpod-conmon-96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64.scope.
Dec 13 08:31:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:31:52 compute-0 podman[315571]: 2025-12-13 08:31:52.586611825 +0000 UTC m=+0.817734099 container init 96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 08:31:52 compute-0 podman[315571]: 2025-12-13 08:31:52.597227687 +0000 UTC m=+0.828349921 container start 96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 08:31:52 compute-0 podman[315571]: 2025-12-13 08:31:52.604587807 +0000 UTC m=+0.835710091 container attach 96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 08:31:52 compute-0 dazzling_turing[315588]: 167 167
Dec 13 08:31:52 compute-0 systemd[1]: libpod-96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64.scope: Deactivated successfully.
Dec 13 08:31:52 compute-0 podman[315571]: 2025-12-13 08:31:52.610746165 +0000 UTC m=+0.841868399 container died 96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 13 08:31:52 compute-0 ceph-mon[76537]: pgmap v2038: 321 pgs: 321 active+clean; 287 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.5 MiB/s wr, 217 op/s
Dec 13 08:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc4f9154a2fcda621531acdc66d3e03531958e0941ddf0c98db6f6138c4e2df4-merged.mount: Deactivated successfully.
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:31:52 compute-0 podman[315571]: 2025-12-13 08:31:52.791971378 +0000 UTC m=+1.023093632 container remove 96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.796 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.796 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.797 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.797 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:52 compute-0 systemd[1]: libpod-conmon-96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64.scope: Deactivated successfully.
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.928 248514 DEBUG nova.compute.manager [None req-3b4cf12d-beea-41c5-9501-a2afa6d243ee 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:52 compute-0 nova_compute[248510]: 2025-12-13 08:31:52.973 248514 INFO nova.compute.manager [None req-3b4cf12d-beea-41c5-9501-a2afa6d243ee 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] instance snapshotting
Dec 13 08:31:53 compute-0 podman[315615]: 2025-12-13 08:31:53.005905537 +0000 UTC m=+0.039040882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:31:53 compute-0 podman[315615]: 2025-12-13 08:31:53.098777872 +0000 UTC m=+0.131913177 container create 51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 08:31:53 compute-0 systemd[1]: Started libpod-conmon-51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658.scope.
Dec 13 08:31:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465adff802e94268f49fd0fbcb5082f0339d44b90de7cfebfe9614c7fe06396f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465adff802e94268f49fd0fbcb5082f0339d44b90de7cfebfe9614c7fe06396f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465adff802e94268f49fd0fbcb5082f0339d44b90de7cfebfe9614c7fe06396f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465adff802e94268f49fd0fbcb5082f0339d44b90de7cfebfe9614c7fe06396f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.212 248514 WARNING nova.compute.manager [None req-3b4cf12d-beea-41c5-9501-a2afa6d243ee 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Image not found during snapshot: nova.exception.ImageNotFound: Image 612b7faa-fd2d-4fbd-b80b-aebf1594e00f could not be found.
Dec 13 08:31:53 compute-0 podman[315615]: 2025-12-13 08:31:53.225994153 +0000 UTC m=+0.259129458 container init 51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:31:53 compute-0 podman[315615]: 2025-12-13 08:31:53.236311022 +0000 UTC m=+0.269446327 container start 51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:31:53 compute-0 podman[315615]: 2025-12-13 08:31:53.241292409 +0000 UTC m=+0.274427724 container attach 51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mccarthy, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:31:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:31:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531632031' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.444 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 303 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.7 MiB/s wr, 204 op/s
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.600 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.600 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.607 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.608 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.615 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.615 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.619 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.619 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:31:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/531632031' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.908 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.909 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3278MB free_disk=59.85584915988147GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.909 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:53 compute-0 nova_compute[248510]: 2025-12-13 08:31:53.909 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:54 compute-0 kernel: tap7b3b1c0a-88 (unregistering): left promiscuous mode
Dec 13 08:31:54 compute-0 NetworkManager[50376]: <info>  [1765614714.0161] device (tap7b3b1c0a-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:31:54 compute-0 ovn_controller[148476]: 2025-12-13T08:31:54Z|00658|binding|INFO|Releasing lport 7b3b1c0a-882e-4f33-a582-667d018090d4 from this chassis (sb_readonly=0)
Dec 13 08:31:54 compute-0 ovn_controller[148476]: 2025-12-13T08:31:54Z|00659|binding|INFO|Setting lport 7b3b1c0a-882e-4f33-a582-667d018090d4 down in Southbound
Dec 13 08:31:54 compute-0 ovn_controller[148476]: 2025-12-13T08:31:54Z|00660|binding|INFO|Removing iface tap7b3b1c0a-88 ovn-installed in OVS
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.032 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.037 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:56:12 10.100.0.13'], port_security=['fa:16:3e:41:56:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a9c6de9d-63c0-43a5-9d6e-be356e504837', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7b3b1c0a-882e-4f33-a582-667d018090d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.038 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3b1c0a-882e-4f33-a582-667d018090d4 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 unbound from our chassis
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.040 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.043 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea92200-d0b1-475f-a9f7-477102b7fc5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.045 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3ced27d6-a2a8-4ce3-a7e7-494270418542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.046 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance a9c6de9d-63c0-43a5-9d6e-be356e504837 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.046 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance ce9adb21-8832-4d3e-867e-b0b49bdb6850 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.046 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 4d93482e-582f-4d44-ab53-87cd5f6aa66a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.046 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.047 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.055 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.071 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 08:31:54 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000044.scope: Deactivated successfully.
Dec 13 08:31:54 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000044.scope: Consumed 13.966s CPU time.
Dec 13 08:31:54 compute-0 systemd-machined[210538]: Machine qemu-78-instance-00000044 terminated.
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.105 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.107 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.123 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 08:31:54 compute-0 lvm[315781]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:31:54 compute-0 lvm[315770]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:31:54 compute-0 lvm[315770]: VG ceph_vg0 finished
Dec 13 08:31:54 compute-0 lvm[315781]: VG ceph_vg1 finished
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.149 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 08:31:54 compute-0 lvm[315789]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:31:54 compute-0 lvm[315789]: VG ceph_vg2 finished
Dec 13 08:31:54 compute-0 podman[315725]: 2025-12-13 08:31:54.153660679 +0000 UTC m=+0.086664065 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:31:54 compute-0 podman[315723]: 2025-12-13 08:31:54.17157431 +0000 UTC m=+0.111697156 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.179 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.180 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.180 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.180 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.180 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.181 248514 INFO nova.compute.manager [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Terminating instance
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.183 248514 DEBUG nova.compute.manager [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:31:54 compute-0 podman[315724]: 2025-12-13 08:31:54.18672562 +0000 UTC m=+0.132856508 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 13 08:31:54 compute-0 lvm[315798]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:31:54 compute-0 lvm[315798]: VG ceph_vg1 finished
Dec 13 08:31:54 compute-0 vigorous_mccarthy[315646]: {}
Dec 13 08:31:54 compute-0 lvm[315801]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:31:54 compute-0 lvm[315801]: VG ceph_vg1 finished
Dec 13 08:31:54 compute-0 kernel: tap35b172ab-1b (unregistering): left promiscuous mode
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.254 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:54 compute-0 systemd[1]: libpod-51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658.scope: Deactivated successfully.
Dec 13 08:31:54 compute-0 systemd[1]: libpod-51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658.scope: Consumed 1.547s CPU time.
Dec 13 08:31:54 compute-0 podman[315615]: 2025-12-13 08:31:54.260313795 +0000 UTC m=+1.293449100 container died 51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:31:54 compute-0 NetworkManager[50376]: <info>  [1765614714.2617] device (tap35b172ab-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:31:54 compute-0 ovn_controller[148476]: 2025-12-13T08:31:54Z|00661|binding|INFO|Releasing lport 35b172ab-1be7-44b2-9a76-0f60de6851ab from this chassis (sb_readonly=0)
Dec 13 08:31:54 compute-0 ovn_controller[148476]: 2025-12-13T08:31:54Z|00662|binding|INFO|Setting lport 35b172ab-1be7-44b2-9a76-0f60de6851ab down in Southbound
Dec 13 08:31:54 compute-0 ovn_controller[148476]: 2025-12-13T08:31:54Z|00663|binding|INFO|Removing iface tap35b172ab-1b ovn-installed in OVS
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.296 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:86:fc 10.100.0.3'], port_security=['fa:16:3e:67:86:fc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4d93482e-582f-4d44-ab53-87cd5f6aa66a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b109409-2f17-43b9-91ce-3ee865f0de12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6896dc77-88b0-4868-833d-01241883d6af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=35b172ab-1be7-44b2-9a76-0f60de6851ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.297 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 35b172ab-1be7-44b2-9a76-0f60de6851ab in datapath 527d37da-eda0-4bfe-9f1d-310d58024d5d unbound from our chassis
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.299 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 527d37da-eda0-4bfe-9f1d-310d58024d5d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.301 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc73a615-1dd2-4935-82f8-81dd4302a5e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.302 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d namespace which is not needed anymore
Dec 13 08:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-465adff802e94268f49fd0fbcb5082f0339d44b90de7cfebfe9614c7fe06396f-merged.mount: Deactivated successfully.
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.324 248514 INFO nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance shutdown successfully after 13 seconds.
Dec 13 08:31:54 compute-0 podman[315615]: 2025-12-13 08:31:54.330670159 +0000 UTC m=+1.363805464 container remove 51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mccarthy, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.331 248514 INFO nova.virt.libvirt.driver [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance destroyed successfully.
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.331 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'numa_topology' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:54 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000046.scope: Deactivated successfully.
Dec 13 08:31:54 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000046.scope: Consumed 3.936s CPU time.
Dec 13 08:31:54 compute-0 systemd-machined[210538]: Machine qemu-81-instance-00000046 terminated.
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.349 248514 DEBUG nova.compute.manager [req-6bf0dc6c-1453-41c2-9f16-fb0ee92c7e92 req-a6da1dae-8d1c-47f8-9dee-73407737c161 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-unplugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.350 248514 DEBUG oslo_concurrency.lockutils [req-6bf0dc6c-1453-41c2-9f16-fb0ee92c7e92 req-a6da1dae-8d1c-47f8-9dee-73407737c161 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.350 248514 DEBUG oslo_concurrency.lockutils [req-6bf0dc6c-1453-41c2-9f16-fb0ee92c7e92 req-a6da1dae-8d1c-47f8-9dee-73407737c161 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.350 248514 DEBUG oslo_concurrency.lockutils [req-6bf0dc6c-1453-41c2-9f16-fb0ee92c7e92 req-a6da1dae-8d1c-47f8-9dee-73407737c161 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.350 248514 DEBUG nova.compute.manager [req-6bf0dc6c-1453-41c2-9f16-fb0ee92c7e92 req-a6da1dae-8d1c-47f8-9dee-73407737c161 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-unplugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.351 248514 WARNING nova.compute.manager [req-6bf0dc6c-1453-41c2-9f16-fb0ee92c7e92 req-a6da1dae-8d1c-47f8-9dee-73407737c161 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received unexpected event network-vif-unplugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with vm_state active and task_state rescuing.
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.358 248514 INFO nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Attempting rescue
Dec 13 08:31:54 compute-0 systemd[1]: libpod-conmon-51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658.scope: Deactivated successfully.
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.359 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.364 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.365 248514 INFO nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Creating image(s)
Dec 13 08:31:54 compute-0 sudo[315533]: pam_unix(sudo:session): session closed for user root
Dec 13 08:31:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:31:54 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:31:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.391 248514 DEBUG nova.storage.rbd_utils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.398 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:54 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.467 248514 DEBUG nova.storage.rbd_utils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:54 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [NOTICE]   (315483) : haproxy version is 2.8.14-c23fe91
Dec 13 08:31:54 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [NOTICE]   (315483) : path to executable is /usr/sbin/haproxy
Dec 13 08:31:54 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [WARNING]  (315483) : Exiting Master process...
Dec 13 08:31:54 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [ALERT]    (315483) : Current worker (315485) exited with code 143 (Terminated)
Dec 13 08:31:54 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [WARNING]  (315483) : All workers exited. Exiting... (0)
Dec 13 08:31:54 compute-0 systemd[1]: libpod-00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223.scope: Deactivated successfully.
Dec 13 08:31:54 compute-0 conmon[315479]: conmon 00e45c34b9e65e4ee06b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223.scope/container/memory.events
Dec 13 08:31:54 compute-0 sudo[315864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:31:54 compute-0 sudo[315864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:31:54 compute-0 podman[315863]: 2025-12-13 08:31:54.489122701 +0000 UTC m=+0.060494466 container died 00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:31:54 compute-0 sudo[315864]: pam_unix(sudo:session): session closed for user root
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.494 248514 DEBUG nova.storage.rbd_utils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.501 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223-userdata-shm.mount: Deactivated successfully.
Dec 13 08:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-375cfa30e4ad2df2d660069f03564a6feb3d318d0dbd2e0180ee77a13cc3fe04-merged.mount: Deactivated successfully.
Dec 13 08:31:54 compute-0 podman[315863]: 2025-12-13 08:31:54.541375717 +0000 UTC m=+0.112747482 container cleanup 00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.547 248514 INFO nova.virt.libvirt.driver [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Instance destroyed successfully.
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.549 248514 DEBUG nova.objects.instance [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'resources' on Instance uuid 4d93482e-582f-4d44-ab53-87cd5f6aa66a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:54 compute-0 systemd[1]: libpod-conmon-00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223.scope: Deactivated successfully.
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.580 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.581 248514 DEBUG oslo_concurrency.lockutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.582 248514 DEBUG oslo_concurrency.lockutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.582 248514 DEBUG oslo_concurrency.lockutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.606 248514 DEBUG nova.storage.rbd_utils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:54 compute-0 podman[315980]: 2025-12-13 08:31:54.608909269 +0000 UTC m=+0.046209954 container remove 00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.612 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.617 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[92dfc657-244e-4a84-a062-910ac6cbe7aa]: (4, ('Sat Dec 13 08:31:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d (00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223)\n00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223\nSat Dec 13 08:31:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d (00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223)\n00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.618 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6344b2b-b5dd-4d50-b482-2fb7e7d3b3d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.619 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d37da-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:54 compute-0 kernel: tap527d37da-e0: left promiscuous mode
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.653 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.657 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11b5d54c-e3a2-4b20-a2b0-0b5dc30f9500]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.675 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6a4548-a5eb-406c-b408-b32dd14e67e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.677 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[81758caa-c0f8-452e-8dfa-47f6ad4a08b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.696 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac9d06d-a3d7-4dd7-87b4-12391cfdb7d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728704, 'reachable_time': 16366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316022, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.699 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:31:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.699 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7d8a85-6d22-43e2-be68-e8217c70eb37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d527d37da\x2deda0\x2d4bfe\x2d9f1d\x2d310d58024d5d.mount: Deactivated successfully.
Dec 13 08:31:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:31:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1284377546' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:54 compute-0 ceph-mon[76537]: pgmap v2039: 321 pgs: 321 active+clean; 303 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.7 MiB/s wr, 204 op/s
Dec 13 08:31:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:31:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.880 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.888 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.991 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:54 compute-0 nova_compute[248510]: 2025-12-13 08:31:54.992 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'migration_context' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.308 248514 DEBUG nova.compute.manager [req-ee51a36d-050c-4f66-b7a9-c4d08be0da62 req-d96c2820-6184-4408-93b3-d9c9a27c881a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-vif-unplugged-35b172ab-1be7-44b2-9a76-0f60de6851ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.309 248514 DEBUG oslo_concurrency.lockutils [req-ee51a36d-050c-4f66-b7a9-c4d08be0da62 req-d96c2820-6184-4408-93b3-d9c9a27c881a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.309 248514 DEBUG oslo_concurrency.lockutils [req-ee51a36d-050c-4f66-b7a9-c4d08be0da62 req-d96c2820-6184-4408-93b3-d9c9a27c881a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.309 248514 DEBUG oslo_concurrency.lockutils [req-ee51a36d-050c-4f66-b7a9-c4d08be0da62 req-d96c2820-6184-4408-93b3-d9c9a27c881a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.310 248514 DEBUG nova.compute.manager [req-ee51a36d-050c-4f66-b7a9-c4d08be0da62 req-d96c2820-6184-4408-93b3-d9c9a27c881a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] No waiting events found dispatching network-vif-unplugged-35b172ab-1be7-44b2-9a76-0f60de6851ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.310 248514 DEBUG nova.compute.manager [req-ee51a36d-050c-4f66-b7a9-c4d08be0da62 req-d96c2820-6184-4408-93b3-d9c9a27c881a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-vif-unplugged-35b172ab-1be7-44b2-9a76-0f60de6851ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.358 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.362 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.362 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Start _get_guest_xml network_info=[{"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1650812198-network", "vif_mac": "fa:16:3e:41:56:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.362 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'resources' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.364 248514 DEBUG nova.virt.libvirt.vif [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1782155204',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1782155204',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1782155204',id=70,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:31:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-e9s8cfpo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:53Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=4d93482e-582f-4d44-ab53-87cd5f6aa66a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.364 248514 DEBUG nova.network.os_vif_util [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.365 248514 DEBUG nova.network.os_vif_util [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.365 248514 DEBUG os_vif [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.368 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35b172ab-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.370 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.372 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.377 248514 INFO os_vif [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b')
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.403 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.404 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.405 248514 WARNING nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.413 248514 DEBUG nova.virt.libvirt.host [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:31:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:55.412 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:55.413 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.413 248514 DEBUG nova.virt.libvirt.host [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:31:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:55.414 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.418 248514 DEBUG nova.virt.libvirt.host [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.419 248514 DEBUG nova.virt.libvirt.host [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.419 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.419 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.419 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.420 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.420 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.420 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.420 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.420 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.420 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.421 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.421 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.421 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.421 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.443 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 326 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.3 MiB/s wr, 261 op/s
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.639 248514 INFO nova.virt.libvirt.driver [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Deleting instance files /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a_del
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.641 248514 INFO nova.virt.libvirt.driver [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Deletion of /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a_del complete
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.701 248514 INFO nova.compute.manager [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Took 1.52 seconds to destroy the instance on the hypervisor.
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.702 248514 DEBUG oslo.service.loopingcall [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.702 248514 DEBUG nova.compute.manager [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:31:55 compute-0 nova_compute[248510]: 2025-12-13 08:31:55.703 248514 DEBUG nova.network.neutron [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:31:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1284377546' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:31:56 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1480405954' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.030 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.032 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.404 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.404 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.405 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:31:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:31:56 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2317881052' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.611 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.612 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:56 compute-0 ceph-mon[76537]: pgmap v2040: 321 pgs: 321 active+clean; 326 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.3 MiB/s wr, 261 op/s
Dec 13 08:31:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1480405954' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2317881052' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.922 248514 DEBUG nova.compute.manager [req-7a948055-ef2f-40e1-aeb5-977f99e84d48 req-fb117710-6550-4f9f-9235-a46ed77f911d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.923 248514 DEBUG oslo_concurrency.lockutils [req-7a948055-ef2f-40e1-aeb5-977f99e84d48 req-fb117710-6550-4f9f-9235-a46ed77f911d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.923 248514 DEBUG oslo_concurrency.lockutils [req-7a948055-ef2f-40e1-aeb5-977f99e84d48 req-fb117710-6550-4f9f-9235-a46ed77f911d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.923 248514 DEBUG oslo_concurrency.lockutils [req-7a948055-ef2f-40e1-aeb5-977f99e84d48 req-fb117710-6550-4f9f-9235-a46ed77f911d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.924 248514 DEBUG nova.compute.manager [req-7a948055-ef2f-40e1-aeb5-977f99e84d48 req-fb117710-6550-4f9f-9235-a46ed77f911d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:56 compute-0 nova_compute[248510]: 2025-12-13 08:31:56.924 248514 WARNING nova.compute.manager [req-7a948055-ef2f-40e1-aeb5-977f99e84d48 req-fb117710-6550-4f9f-9235-a46ed77f911d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received unexpected event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with vm_state active and task_state rescuing.
Dec 13 08:31:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:31:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/42037945' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.143 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.145 248514 DEBUG nova.virt.libvirt.vif [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:31:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1742357064',display_name='tempest-ServerRescueTestJSON-server-1742357064',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1742357064',id=68,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:31:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-hpc6t4lx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-1425963100-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:36Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=a9c6de9d-63c0-43a5-9d6e-be356e504837,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1650812198-network", "vif_mac": "fa:16:3e:41:56:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.145 248514 DEBUG nova.network.os_vif_util [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1650812198-network", "vif_mac": "fa:16:3e:41:56:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.146 248514 DEBUG nova.network.os_vif_util [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.148 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.175 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:31:57 compute-0 nova_compute[248510]:   <uuid>a9c6de9d-63c0-43a5-9d6e-be356e504837</uuid>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   <name>instance-00000044</name>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerRescueTestJSON-server-1742357064</nova:name>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:31:55</nova:creationTime>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <nova:user uuid="93eec08d500a4f03afb3281e9899bd6a">tempest-ServerRescueTestJSON-1425963100-project-member</nova:user>
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <nova:project uuid="71e2453379684f0ca0563f8c370ea4a3">tempest-ServerRescueTestJSON-1425963100</nova:project>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <nova:port uuid="7b3b1c0a-882e-4f33-a582-667d018090d4">
Dec 13 08:31:57 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <system>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <entry name="serial">a9c6de9d-63c0-43a5-9d6e-be356e504837</entry>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <entry name="uuid">a9c6de9d-63c0-43a5-9d6e-be356e504837</entry>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     </system>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   <os>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   </os>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   <features>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   </features>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue">
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       </source>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/a9c6de9d-63c0-43a5-9d6e-be356e504837_disk">
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       </source>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <target dev="vdb" bus="virtio"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config.rescue">
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       </source>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:31:57 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:41:56:12"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <target dev="tap7b3b1c0a-88"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/console.log" append="off"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <video>
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     </video>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:31:57 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:31:57 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:31:57 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:31:57 compute-0 nova_compute[248510]: </domain>
Dec 13 08:31:57 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.184 248514 INFO nova.virt.libvirt.driver [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance destroyed successfully.
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.291 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.291 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.292 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.292 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No VIF found with MAC fa:16:3e:41:56:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.293 248514 INFO nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Using config drive
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.315 248514 DEBUG nova.storage.rbd_utils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.342 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.408 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'keypairs' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:31:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 326 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 250 op/s
Dec 13 08:31:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/42037945' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.930 248514 DEBUG nova.compute.manager [req-a6fb2437-d538-4c3e-baa5-5f0f43f728d5 req-616cd1b2-3d5c-443c-996b-5813e25466ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.930 248514 DEBUG oslo_concurrency.lockutils [req-a6fb2437-d538-4c3e-baa5-5f0f43f728d5 req-616cd1b2-3d5c-443c-996b-5813e25466ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.931 248514 DEBUG oslo_concurrency.lockutils [req-a6fb2437-d538-4c3e-baa5-5f0f43f728d5 req-616cd1b2-3d5c-443c-996b-5813e25466ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.931 248514 DEBUG oslo_concurrency.lockutils [req-a6fb2437-d538-4c3e-baa5-5f0f43f728d5 req-616cd1b2-3d5c-443c-996b-5813e25466ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.931 248514 DEBUG nova.compute.manager [req-a6fb2437-d538-4c3e-baa5-5f0f43f728d5 req-616cd1b2-3d5c-443c-996b-5813e25466ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] No waiting events found dispatching network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:31:57 compute-0 nova_compute[248510]: 2025-12-13 08:31:57.932 248514 WARNING nova.compute.manager [req-a6fb2437-d538-4c3e-baa5-5f0f43f728d5 req-616cd1b2-3d5c-443c-996b-5813e25466ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received unexpected event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab for instance with vm_state active and task_state deleting.
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.491 248514 INFO nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Creating config drive at /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config.rescue
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.498 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7oounzy0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.544 248514 DEBUG nova.network.neutron [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.566 248514 INFO nova.compute.manager [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Took 2.86 seconds to deallocate network for instance.
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.637 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.637 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.652 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7oounzy0" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.677 248514 DEBUG nova.storage.rbd_utils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.682 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config.rescue a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.806 248514 DEBUG oslo_concurrency.processutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.847 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config.rescue a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.848 248514 INFO nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Deleting local config drive /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config.rescue because it was imported into RBD.
Dec 13 08:31:58 compute-0 ceph-mon[76537]: pgmap v2041: 321 pgs: 321 active+clean; 326 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 250 op/s
Dec 13 08:31:58 compute-0 kernel: tap7b3b1c0a-88: entered promiscuous mode
Dec 13 08:31:58 compute-0 NetworkManager[50376]: <info>  [1765614718.9176] manager: (tap7b3b1c0a-88): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Dec 13 08:31:58 compute-0 ovn_controller[148476]: 2025-12-13T08:31:58Z|00664|binding|INFO|Claiming lport 7b3b1c0a-882e-4f33-a582-667d018090d4 for this chassis.
Dec 13 08:31:58 compute-0 ovn_controller[148476]: 2025-12-13T08:31:58Z|00665|binding|INFO|7b3b1c0a-882e-4f33-a582-667d018090d4: Claiming fa:16:3e:41:56:12 10.100.0.13
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:58.935 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:56:12 10.100.0.13'], port_security=['fa:16:3e:41:56:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a9c6de9d-63c0-43a5-9d6e-be356e504837', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7b3b1c0a-882e-4f33-a582-667d018090d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:31:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:58.937 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3b1c0a-882e-4f33-a582-667d018090d4 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 bound to our chassis
Dec 13 08:31:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:58.938 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:31:58 compute-0 ovn_controller[148476]: 2025-12-13T08:31:58Z|00666|binding|INFO|Setting lport 7b3b1c0a-882e-4f33-a582-667d018090d4 up in Southbound
Dec 13 08:31:58 compute-0 ovn_controller[148476]: 2025-12-13T08:31:58Z|00667|binding|INFO|Setting lport 7b3b1c0a-882e-4f33-a582-667d018090d4 ovn-installed in OVS
Dec 13 08:31:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:31:58.939 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5fddc54e-3923-405f-ab48-e67ee1c92370]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.942 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:58 compute-0 nova_compute[248510]: 2025-12-13 08:31:58.946 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:31:58 compute-0 systemd-udevd[316204]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:31:58 compute-0 NetworkManager[50376]: <info>  [1765614718.9698] device (tap7b3b1c0a-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:31:58 compute-0 NetworkManager[50376]: <info>  [1765614718.9704] device (tap7b3b1c0a-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:31:58 compute-0 systemd-machined[210538]: New machine qemu-82-instance-00000044.
Dec 13 08:31:58 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-00000044.
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.451 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for a9c6de9d-63c0-43a5-9d6e-be356e504837 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.452 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614719.4505186, a9c6de9d-63c0-43a5-9d6e-be356e504837 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.452 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] VM Resumed (Lifecycle Event)
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.459 248514 DEBUG nova.compute.manager [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:31:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3298807745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.493 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.496 248514 DEBUG oslo_concurrency.processutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.498 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.503 248514 DEBUG nova.compute.provider_tree [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.527 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.527 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614719.4515405, a9c6de9d-63c0-43a5-9d6e-be356e504837 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.528 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] VM Started (Lifecycle Event)
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.532 248514 DEBUG nova.scheduler.client.report [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.548 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:31:59 compute-0 nova_compute[248510]: 2025-12-13 08:31:59.551 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:31:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 336 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.8 MiB/s wr, 293 op/s
Dec 13 08:31:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3298807745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:00 compute-0 nova_compute[248510]: 2025-12-13 08:32:00.092 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:00 compute-0 nova_compute[248510]: 2025-12-13 08:32:00.146 248514 INFO nova.scheduler.client.report [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Deleted allocations for instance 4d93482e-582f-4d44-ab53-87cd5f6aa66a
Dec 13 08:32:00 compute-0 nova_compute[248510]: 2025-12-13 08:32:00.246 248514 DEBUG nova.compute.manager [req-557d4be1-e12e-4639-be60-a984ec878484 req-b03798b0-2ffe-489c-bcc3-e88a5dfc6620 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-vif-deleted-35b172ab-1be7-44b2-9a76-0f60de6851ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:00 compute-0 nova_compute[248510]: 2025-12-13 08:32:00.252 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:00 compute-0 nova_compute[248510]: 2025-12-13 08:32:00.345 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:00 compute-0 nova_compute[248510]: 2025-12-13 08:32:00.370 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:00 compute-0 nova_compute[248510]: 2025-12-13 08:32:00.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:32:00 compute-0 ceph-mon[76537]: pgmap v2042: 321 pgs: 321 active+clean; 336 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.8 MiB/s wr, 293 op/s
Dec 13 08:32:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:32:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 328 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.6 MiB/s wr, 251 op/s
Dec 13 08:32:02 compute-0 nova_compute[248510]: 2025-12-13 08:32:02.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:32:02 compute-0 ceph-mon[76537]: pgmap v2043: 321 pgs: 321 active+clean; 328 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.6 MiB/s wr, 251 op/s
Dec 13 08:32:03 compute-0 nova_compute[248510]: 2025-12-13 08:32:03.383 248514 DEBUG nova.compute.manager [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:03 compute-0 nova_compute[248510]: 2025-12-13 08:32:03.383 248514 DEBUG oslo_concurrency.lockutils [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:03 compute-0 nova_compute[248510]: 2025-12-13 08:32:03.384 248514 DEBUG oslo_concurrency.lockutils [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:03 compute-0 nova_compute[248510]: 2025-12-13 08:32:03.384 248514 DEBUG oslo_concurrency.lockutils [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:03 compute-0 nova_compute[248510]: 2025-12-13 08:32:03.384 248514 DEBUG nova.compute.manager [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:03 compute-0 nova_compute[248510]: 2025-12-13 08:32:03.385 248514 WARNING nova.compute.manager [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received unexpected event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with vm_state rescued and task_state None.
Dec 13 08:32:03 compute-0 nova_compute[248510]: 2025-12-13 08:32:03.385 248514 DEBUG nova.compute.manager [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:03 compute-0 nova_compute[248510]: 2025-12-13 08:32:03.385 248514 DEBUG oslo_concurrency.lockutils [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:03 compute-0 nova_compute[248510]: 2025-12-13 08:32:03.385 248514 DEBUG oslo_concurrency.lockutils [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:03 compute-0 nova_compute[248510]: 2025-12-13 08:32:03.385 248514 DEBUG oslo_concurrency.lockutils [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:03 compute-0 nova_compute[248510]: 2025-12-13 08:32:03.386 248514 DEBUG nova.compute.manager [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:03 compute-0 nova_compute[248510]: 2025-12-13 08:32:03.386 248514 WARNING nova.compute.manager [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received unexpected event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with vm_state rescued and task_state None.
Dec 13 08:32:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.1 MiB/s wr, 249 op/s
Dec 13 08:32:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:32:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 22K writes, 93K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 22K writes, 7420 syncs, 3.07 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9146 writes, 36K keys, 9146 commit groups, 1.0 writes per commit group, ingest: 36.34 MB, 0.06 MB/s
                                           Interval WAL: 9146 writes, 3584 syncs, 2.55 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 08:32:03 compute-0 ceph-osd[87041]: bluestore.MempoolThread fragmentation_score=0.002893 took=0.000055s
Dec 13 08:32:04 compute-0 ceph-mon[76537]: pgmap v2044: 321 pgs: 321 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.1 MiB/s wr, 249 op/s
Dec 13 08:32:04 compute-0 nova_compute[248510]: 2025-12-13 08:32:04.692 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:04 compute-0 nova_compute[248510]: 2025-12-13 08:32:04.693 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:04 compute-0 nova_compute[248510]: 2025-12-13 08:32:04.983 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.008 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.009 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.054 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.090 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.092 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.106 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.107 248514 INFO nova.compute.claims [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.149 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.350 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.388 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.1 MiB/s wr, 219 op/s
Dec 13 08:32:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:32:05 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1642367905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.921 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.930 248514 DEBUG nova.compute.provider_tree [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.959 248514 DEBUG nova.scheduler.client.report [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.994 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.995 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:32:05 compute-0 nova_compute[248510]: 2025-12-13 08:32:05.998 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.007 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.007 248514 INFO nova.compute.claims [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.102 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.103 248514 DEBUG nova.network.neutron [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.139 248514 INFO nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:32:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.175 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.300 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.302 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.302 248514 INFO nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Creating image(s)
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.326 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.351 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.375 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.380 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.418 248514 DEBUG nova.policy [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93eec08d500a4f03afb3281e9899bd6a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '71e2453379684f0ca0563f8c370ea4a3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.440 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.474 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.475 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.476 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.477 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.499 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.504 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c5edbf88-6361-407a-a0f1-c133f70b50e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.553 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.554 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.589 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:32:06 compute-0 nova_compute[248510]: 2025-12-13 08:32:06.682 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:32:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/72469475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.023 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.033 248514 DEBUG nova.compute.provider_tree [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.075 248514 DEBUG nova.scheduler.client.report [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.117 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.118 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.128 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.148 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.152 248514 INFO nova.compute.claims [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.228 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.228 248514 DEBUG nova.network.neutron [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.262 248514 INFO nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:32:07 compute-0 ceph-mon[76537]: pgmap v2045: 321 pgs: 321 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.1 MiB/s wr, 219 op/s
Dec 13 08:32:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1642367905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.321 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.477 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.480 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.480 248514 INFO nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Creating image(s)
Dec 13 08:32:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.612 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.654 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.686 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.691 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.732 248514 DEBUG nova.policy [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b310bdebec646949fad4ea1821b4c3f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.773 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.774 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.775 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.775 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.801 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.806 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5d34feed-2663-4e17-b951-65a37bd3a275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.852 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:07 compute-0 nova_compute[248510]: 2025-12-13 08:32:07.981 248514 DEBUG nova.network.neutron [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Successfully created port: 2bdcea64-a01f-4a75-b664-9c9c971533a6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:32:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:32:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/544839028' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.475 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.483 248514 DEBUG nova.compute.provider_tree [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:32:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/72469475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:08 compute-0 ceph-mon[76537]: pgmap v2046: 321 pgs: 321 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.521 248514 DEBUG nova.network.neutron [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Successfully created port: 533958d1-8a74-4963-9731-40767b4bb127 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.526 248514 DEBUG nova.scheduler.client.report [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.556 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.557 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.602 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c5edbf88-6361-407a-a0f1-c133f70b50e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.654 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.654 248514 DEBUG nova.network.neutron [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.711 248514 INFO nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.721 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] resizing rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.765 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.825 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5d34feed-2663-4e17-b951-65a37bd3a275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.872 248514 DEBUG nova.objects.instance [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'migration_context' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.907 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.907 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Ensure instance console log exists: /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.908 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.908 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.909 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.915 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] resizing rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.985 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.988 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:32:08 compute-0 nova_compute[248510]: 2025-12-13 08:32:08.989 248514 INFO nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Creating image(s)
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.012 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.040 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.071 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.076 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.125 248514 DEBUG nova.policy [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '91d0d3efedc943b48ad0fc4295b6fc7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.166 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.167 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.167 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.168 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.188 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.192 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 dc076c88-fe0b-4674-ac32-fb22420b78bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:32:09
Dec 13 08:32:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:32:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:32:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'images']
Dec 13 08:32:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.312 248514 DEBUG nova.objects.instance [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'migration_context' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.344 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.345 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Ensure instance console log exists: /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.346 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.346 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.346 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/544839028' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.545 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614714.4352958, 4d93482e-582f-4d44-ab53-87cd5f6aa66a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.546 248514 INFO nova.compute.manager [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] VM Stopped (Lifecycle Event)
Dec 13 08:32:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 351 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 146 op/s
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.571 248514 DEBUG nova.network.neutron [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Successfully updated port: 2bdcea64-a01f-4a75-b664-9c9c971533a6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.579 248514 DEBUG nova.compute.manager [None req-8ee42bc1-df6f-4227-a95a-22c04ff1bac7 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.611 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.611 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquired lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.611 248514 DEBUG nova.network.neutron [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.770 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 dc076c88-fe0b-4674-ac32-fb22420b78bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.840 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] resizing rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:32:09 compute-0 nova_compute[248510]: 2025-12-13 08:32:09.938 248514 DEBUG nova.objects.instance [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'migration_context' on Instance uuid dc076c88-fe0b-4674-ac32-fb22420b78bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:32:10 compute-0 nova_compute[248510]: 2025-12-13 08:32:10.117 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:32:10 compute-0 nova_compute[248510]: 2025-12-13 08:32:10.118 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Ensure instance console log exists: /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:32:10 compute-0 nova_compute[248510]: 2025-12-13 08:32:10.119 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:10 compute-0 nova_compute[248510]: 2025-12-13 08:32:10.119 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:10 compute-0 nova_compute[248510]: 2025-12-13 08:32:10.119 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:10 compute-0 nova_compute[248510]: 2025-12-13 08:32:10.389 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:10 compute-0 ceph-mon[76537]: pgmap v2047: 321 pgs: 321 active+clean; 351 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 146 op/s
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:32:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:32:10 compute-0 nova_compute[248510]: 2025-12-13 08:32:10.655 248514 DEBUG nova.network.neutron [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Successfully created port: 7903e65c-c0bf-4bb5-b044-95f5693f5c38 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:32:10 compute-0 nova_compute[248510]: 2025-12-13 08:32:10.880 248514 DEBUG nova.compute.manager [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-changed-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:10 compute-0 nova_compute[248510]: 2025-12-13 08:32:10.880 248514 DEBUG nova.compute.manager [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Refreshing instance network info cache due to event network-changed-2bdcea64-a01f-4a75-b664-9c9c971533a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:32:10 compute-0 nova_compute[248510]: 2025-12-13 08:32:10.880 248514 DEBUG oslo_concurrency.lockutils [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:32:10 compute-0 nova_compute[248510]: 2025-12-13 08:32:10.881 248514 DEBUG nova.network.neutron [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:32:11 compute-0 nova_compute[248510]: 2025-12-13 08:32:11.035 248514 DEBUG nova.network.neutron [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Successfully updated port: 533958d1-8a74-4963-9731-40767b4bb127 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:32:11 compute-0 nova_compute[248510]: 2025-12-13 08:32:11.057 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "refresh_cache-5d34feed-2663-4e17-b951-65a37bd3a275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:32:11 compute-0 nova_compute[248510]: 2025-12-13 08:32:11.057 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquired lock "refresh_cache-5d34feed-2663-4e17-b951-65a37bd3a275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:32:11 compute-0 nova_compute[248510]: 2025-12-13 08:32:11.057 248514 DEBUG nova.network.neutron [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:32:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:32:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3601.8 total, 600.0 interval
                                           Cumulative writes: 25K writes, 95K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 25K writes, 8466 syncs, 2.95 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 38.37 MB, 0.06 MB/s
                                           Interval WAL: 10K writes, 4107 syncs, 2.50 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 08:32:11 compute-0 ceph-osd[88086]: bluestore.MempoolThread fragmentation_score=0.002571 took=0.000042s
Dec 13 08:32:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:32:11 compute-0 nova_compute[248510]: 2025-12-13 08:32:11.296 248514 DEBUG nova.network.neutron [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:32:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 418 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 122 op/s
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.030 248514 DEBUG nova.network.neutron [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Successfully updated port: 7903e65c-c0bf-4bb5-b044-95f5693f5c38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.048 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "refresh_cache-dc076c88-fe0b-4674-ac32-fb22420b78bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.049 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquired lock "refresh_cache-dc076c88-fe0b-4674-ac32-fb22420b78bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.049 248514 DEBUG nova.network.neutron [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.282 248514 DEBUG nova.network.neutron [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.374 248514 DEBUG nova.compute.manager [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-changed-7903e65c-c0bf-4bb5-b044-95f5693f5c38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.375 248514 DEBUG nova.compute.manager [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Refreshing instance network info cache due to event network-changed-7903e65c-c0bf-4bb5-b044-95f5693f5c38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.375 248514 DEBUG oslo_concurrency.lockutils [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-dc076c88-fe0b-4674-ac32-fb22420b78bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.405 248514 DEBUG nova.network.neutron [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Updating instance_info_cache with network_info: [{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.436 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Releasing lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.437 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance network_info: |[{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.437 248514 DEBUG oslo_concurrency.lockutils [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.438 248514 DEBUG nova.network.neutron [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Refreshing network info cache for port 2bdcea64-a01f-4a75-b664-9c9c971533a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.442 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Start _get_guest_xml network_info=[{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.446 248514 WARNING nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.458 248514 DEBUG nova.virt.libvirt.host [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.459 248514 DEBUG nova.virt.libvirt.host [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.464 248514 DEBUG nova.virt.libvirt.host [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.465 248514 DEBUG nova.virt.libvirt.host [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.465 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.465 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.466 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.466 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.467 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.467 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.467 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.467 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.468 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.468 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.468 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.468 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.472 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:12 compute-0 ceph-mon[76537]: pgmap v2048: 321 pgs: 321 active+clean; 418 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 122 op/s
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.868 248514 DEBUG nova.network.neutron [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Updating instance_info_cache with network_info: [{"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.903 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Releasing lock "refresh_cache-5d34feed-2663-4e17-b951-65a37bd3a275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.904 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance network_info: |[{"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.907 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Start _get_guest_xml network_info=[{"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.913 248514 WARNING nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.920 248514 DEBUG nova.virt.libvirt.host [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.920 248514 DEBUG nova.virt.libvirt.host [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.924 248514 DEBUG nova.virt.libvirt.host [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.924 248514 DEBUG nova.virt.libvirt.host [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.925 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.925 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.926 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.926 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.926 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.926 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.927 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.927 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.927 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.927 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.928 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.928 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:32:12 compute-0 nova_compute[248510]: 2025-12-13 08:32:12.932 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448719229' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.099 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.128 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.133 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.177 248514 DEBUG nova.compute.manager [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-changed-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.178 248514 DEBUG nova.compute.manager [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Refreshing instance network info cache due to event network-changed-533958d1-8a74-4963-9731-40767b4bb127. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.179 248514 DEBUG oslo_concurrency.lockutils [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5d34feed-2663-4e17-b951-65a37bd3a275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.179 248514 DEBUG oslo_concurrency.lockutils [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5d34feed-2663-4e17-b951-65a37bd3a275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.179 248514 DEBUG nova.network.neutron [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Refreshing network info cache for port 533958d1-8a74-4963-9731-40767b4bb127 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.345 248514 DEBUG nova.network.neutron [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Updating instance_info_cache with network_info: [{"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.373 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Releasing lock "refresh_cache-dc076c88-fe0b-4674-ac32-fb22420b78bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.374 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Instance network_info: |[{"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.374 248514 DEBUG oslo_concurrency.lockutils [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-dc076c88-fe0b-4674-ac32-fb22420b78bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.375 248514 DEBUG nova.network.neutron [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Refreshing network info cache for port 7903e65c-c0bf-4bb5-b044-95f5693f5c38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.378 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Start _get_guest_xml network_info=[{"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.391 248514 WARNING nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.396 248514 DEBUG nova.virt.libvirt.host [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.397 248514 DEBUG nova.virt.libvirt.host [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.405 248514 DEBUG nova.virt.libvirt.host [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.405 248514 DEBUG nova.virt.libvirt.host [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.406 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.406 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.407 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.407 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.407 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.407 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.407 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.408 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.408 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.408 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.408 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.409 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.412 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 455 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.8 MiB/s wr, 155 op/s
Dec 13 08:32:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2298876853' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.584 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.611 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.615 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499395576' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.784 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.789 248514 DEBUG nova.virt.libvirt.vif [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-607419756',display_name='tempest-ServerRescueTestJSON-server-607419756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-607419756',id=71,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-h6adjjd8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-14259631
00-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:06Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=c5edbf88-6361-407a-a0f1-c133f70b50e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.791 248514 DEBUG nova.network.os_vif_util [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.793 248514 DEBUG nova.network.os_vif_util [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.796 248514 DEBUG nova.objects.instance [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.818 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:32:13 compute-0 nova_compute[248510]:   <uuid>c5edbf88-6361-407a-a0f1-c133f70b50e9</uuid>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   <name>instance-00000047</name>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerRescueTestJSON-server-607419756</nova:name>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:32:12</nova:creationTime>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:32:13 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:32:13 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:32:13 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:32:13 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:32:13 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:32:13 compute-0 nova_compute[248510]:         <nova:user uuid="93eec08d500a4f03afb3281e9899bd6a">tempest-ServerRescueTestJSON-1425963100-project-member</nova:user>
Dec 13 08:32:13 compute-0 nova_compute[248510]:         <nova:project uuid="71e2453379684f0ca0563f8c370ea4a3">tempest-ServerRescueTestJSON-1425963100</nova:project>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:32:13 compute-0 nova_compute[248510]:         <nova:port uuid="2bdcea64-a01f-4a75-b664-9c9c971533a6">
Dec 13 08:32:13 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <system>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <entry name="serial">c5edbf88-6361-407a-a0f1-c133f70b50e9</entry>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <entry name="uuid">c5edbf88-6361-407a-a0f1-c133f70b50e9</entry>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     </system>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   <os>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   </os>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   <features>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   </features>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c5edbf88-6361-407a-a0f1-c133f70b50e9_disk">
Dec 13 08:32:13 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:13 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config">
Dec 13 08:32:13 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:13 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:13:4d:d0"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <target dev="tap2bdcea64-a0"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/console.log" append="off"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <video>
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     </video>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:32:13 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:32:13 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:32:13 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:32:13 compute-0 nova_compute[248510]: </domain>
Dec 13 08:32:13 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.819 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Preparing to wait for external event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.820 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.820 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.820 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.821 248514 DEBUG nova.virt.libvirt.vif [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-607419756',display_name='tempest-ServerRescueTestJSON-server-607419756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-607419756',id=71,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-h6adjjd8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-1425963100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:06Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=c5edbf88-6361-407a-a0f1-c133f70b50e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.822 248514 DEBUG nova.network.os_vif_util [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.822 248514 DEBUG nova.network.os_vif_util [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.823 248514 DEBUG os_vif [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.824 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.824 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.825 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.828 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.829 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bdcea64-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.829 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2bdcea64-a0, col_values=(('external_ids', {'iface-id': '2bdcea64-a01f-4a75-b664-9c9c971533a6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:4d:d0', 'vm-uuid': 'c5edbf88-6361-407a-a0f1-c133f70b50e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:13 compute-0 NetworkManager[50376]: <info>  [1765614733.8332] manager: (tap2bdcea64-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.841 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.843 248514 INFO os_vif [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0')
Dec 13 08:32:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1448719229' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2298876853' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388076238' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.948 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.949 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.949 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No VIF found with MAC fa:16:3e:13:4d:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.950 248514 INFO nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Using config drive
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.974 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:13 compute-0 nova_compute[248510]: 2025-12-13 08:32:13.980 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.002 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.006 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376110682' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.189 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.190 248514 DEBUG nova.virt.libvirt.vif [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1505100715',display_name='tempest-tempest.common.compute-instance-1505100715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1505100715',id=72,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-kk2o7y6y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:07Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=5d34feed-2663-4e17-b951-65a37bd3a275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.191 248514 DEBUG nova.network.os_vif_util [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.191 248514 DEBUG nova.network.os_vif_util [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.192 248514 DEBUG nova.objects.instance [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.229 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <uuid>5d34feed-2663-4e17-b951-65a37bd3a275</uuid>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <name>instance-00000048</name>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:name>tempest-tempest.common.compute-instance-1505100715</nova:name>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:32:12</nova:creationTime>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:user uuid="5b310bdebec646949fad4ea1821b4c3f">tempest-ServerActionsTestOtherA-1325599242-project-member</nova:user>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:project uuid="b4d2999518df4b9f8ccbabe38976dc3c">tempest-ServerActionsTestOtherA-1325599242</nova:project>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:port uuid="533958d1-8a74-4963-9731-40767b4bb127">
Dec 13 08:32:14 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <system>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <entry name="serial">5d34feed-2663-4e17-b951-65a37bd3a275</entry>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <entry name="uuid">5d34feed-2663-4e17-b951-65a37bd3a275</entry>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </system>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <os>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </os>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <features>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </features>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5d34feed-2663-4e17-b951-65a37bd3a275_disk">
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5d34feed-2663-4e17-b951-65a37bd3a275_disk.config">
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:66:ab:b9"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <target dev="tap533958d1-8a"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/console.log" append="off"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <video>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </video>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:32:14 compute-0 nova_compute[248510]: </domain>
Dec 13 08:32:14 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.232 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Preparing to wait for external event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.233 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.233 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.234 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.235 248514 DEBUG nova.virt.libvirt.vif [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1505100715',display_name='tempest-tempest.common.compute-instance-1505100715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1505100715',id=72,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-kk2o7y6y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:07Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=5d34feed-2663-4e17-b951-65a37bd3a275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.235 248514 DEBUG nova.network.os_vif_util [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.237 248514 DEBUG nova.network.os_vif_util [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.237 248514 DEBUG os_vif [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.238 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.239 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.240 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.245 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap533958d1-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.246 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap533958d1-8a, col_values=(('external_ids', {'iface-id': '533958d1-8a74-4963-9731-40767b4bb127', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:ab:b9', 'vm-uuid': '5d34feed-2663-4e17-b951-65a37bd3a275'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:14 compute-0 NetworkManager[50376]: <info>  [1765614734.2490] manager: (tap533958d1-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.250 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.256 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.258 248514 INFO os_vif [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a')
Dec 13 08:32:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3081812408' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.624 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.626 248514 DEBUG nova.virt.libvirt.vif [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1364276133',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1364276133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1364276133',id=73,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-1zyw8g5c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:08Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=dc076c88-fe0b-4674-ac32-fb22420b78bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.626 248514 DEBUG nova.network.os_vif_util [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.627 248514 DEBUG nova.network.os_vif_util [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.628 248514 DEBUG nova.objects.instance [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'pci_devices' on Instance uuid dc076c88-fe0b-4674-ac32-fb22420b78bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.823 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <uuid>dc076c88-fe0b-4674-ac32-fb22420b78bc</uuid>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <name>instance-00000049</name>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1364276133</nova:name>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:32:13</nova:creationTime>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:user uuid="91d0d3efedc943b48ad0fc4295b6fc7c">tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member</nova:user>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:project uuid="2de328b46a6e4f588f5e2a254db7f4ef">tempest-ImagesOneServerNegativeTestJSON-1826994500</nova:project>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <nova:port uuid="7903e65c-c0bf-4bb5-b044-95f5693f5c38">
Dec 13 08:32:14 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <system>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <entry name="serial">dc076c88-fe0b-4674-ac32-fb22420b78bc</entry>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <entry name="uuid">dc076c88-fe0b-4674-ac32-fb22420b78bc</entry>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </system>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <os>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </os>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <features>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </features>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/dc076c88-fe0b-4674-ac32-fb22420b78bc_disk">
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/dc076c88-fe0b-4674-ac32-fb22420b78bc_disk.config">
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:14 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:cc:56:55"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <target dev="tap7903e65c-c0"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/console.log" append="off"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <video>
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </video>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:32:14 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:32:14 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:32:14 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:32:14 compute-0 nova_compute[248510]: </domain>
Dec 13 08:32:14 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.825 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Preparing to wait for external event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.825 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.826 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.826 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.828 248514 DEBUG nova.virt.libvirt.vif [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1364276133',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1364276133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1364276133',id=73,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-1zyw8g5c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:08Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=dc076c88-fe0b-4674-ac32-fb22420b78bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.829 248514 DEBUG nova.network.os_vif_util [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.830 248514 DEBUG nova.network.os_vif_util [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.831 248514 DEBUG os_vif [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.832 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.833 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.837 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.838 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7903e65c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.839 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7903e65c-c0, col_values=(('external_ids', {'iface-id': '7903e65c-c0bf-4bb5-b044-95f5693f5c38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:56:55', 'vm-uuid': 'dc076c88-fe0b-4674-ac32-fb22420b78bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:14 compute-0 NetworkManager[50376]: <info>  [1765614734.8869] manager: (tap7903e65c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.890 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.894 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.894 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.895 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No VIF found with MAC fa:16:3e:66:ab:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.895 248514 INFO nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Using config drive
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.918 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.923 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.925 248514 INFO os_vif [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0')
Dec 13 08:32:14 compute-0 ceph-mon[76537]: pgmap v2049: 321 pgs: 321 active+clean; 455 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.8 MiB/s wr, 155 op/s
Dec 13 08:32:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1499395576' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2388076238' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2376110682' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3081812408' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.985 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.986 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.986 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No VIF found with MAC fa:16:3e:cc:56:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:32:14 compute-0 nova_compute[248510]: 2025-12-13 08:32:14.987 248514 INFO nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Using config drive
Dec 13 08:32:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:32:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/287174040' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:32:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:32:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/287174040' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:32:15 compute-0 nova_compute[248510]: 2025-12-13 08:32:15.011 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:15 compute-0 nova_compute[248510]: 2025-12-13 08:32:15.159 248514 INFO nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Creating config drive at /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config
Dec 13 08:32:15 compute-0 nova_compute[248510]: 2025-12-13 08:32:15.164 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6nlukd81 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:15 compute-0 nova_compute[248510]: 2025-12-13 08:32:15.319 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6nlukd81" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:15 compute-0 nova_compute[248510]: 2025-12-13 08:32:15.349 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:15 compute-0 nova_compute[248510]: 2025-12-13 08:32:15.352 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:15 compute-0 nova_compute[248510]: 2025-12-13 08:32:15.391 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 180 op/s
Dec 13 08:32:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/287174040' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:32:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/287174040' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:32:15 compute-0 nova_compute[248510]: 2025-12-13 08:32:15.957 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:15 compute-0 nova_compute[248510]: 2025-12-13 08:32:15.958 248514 INFO nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Deleting local config drive /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config because it was imported into RBD.
Dec 13 08:32:16 compute-0 NetworkManager[50376]: <info>  [1765614736.0191] manager: (tap2bdcea64-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Dec 13 08:32:16 compute-0 kernel: tap2bdcea64-a0: entered promiscuous mode
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.028 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:16 compute-0 ovn_controller[148476]: 2025-12-13T08:32:16Z|00668|binding|INFO|Claiming lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 for this chassis.
Dec 13 08:32:16 compute-0 ovn_controller[148476]: 2025-12-13T08:32:16Z|00669|binding|INFO|2bdcea64-a01f-4a75-b664-9c9c971533a6: Claiming fa:16:3e:13:4d:d0 10.100.0.6
Dec 13 08:32:16 compute-0 systemd-udevd[317157]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.057 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:4d:d0 10.100.0.6'], port_security=['fa:16:3e:13:4d:d0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5edbf88-6361-407a-a0f1-c133f70b50e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2bdcea64-a01f-4a75-b664-9c9c971533a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.058 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2bdcea64-a01f-4a75-b664-9c9c971533a6 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 bound to our chassis
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.059 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.060 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[caa6fe11-64a3-43e6-a853-b77e2d4e761a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:16 compute-0 ovn_controller[148476]: 2025-12-13T08:32:16Z|00670|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 ovn-installed in OVS
Dec 13 08:32:16 compute-0 ovn_controller[148476]: 2025-12-13T08:32:16Z|00671|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 up in Southbound
Dec 13 08:32:16 compute-0 NetworkManager[50376]: <info>  [1765614736.0682] device (tap2bdcea64-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.067 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:16 compute-0 NetworkManager[50376]: <info>  [1765614736.0700] device (tap2bdcea64-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:32:16 compute-0 systemd-machined[210538]: New machine qemu-83-instance-00000047.
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:16 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-00000047.
Dec 13 08:32:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.259 248514 INFO nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Creating config drive at /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.266 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpponz1jqm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.409 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpponz1jqm" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.438 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.443 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.697 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.698 248514 INFO nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deleting local config drive /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config because it was imported into RBD.
Dec 13 08:32:16 compute-0 NetworkManager[50376]: <info>  [1765614736.7582] manager: (tap533958d1-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/301)
Dec 13 08:32:16 compute-0 kernel: tap533958d1-8a: entered promiscuous mode
Dec 13 08:32:16 compute-0 systemd-udevd[317163]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:32:16 compute-0 ovn_controller[148476]: 2025-12-13T08:32:16Z|00672|binding|INFO|Claiming lport 533958d1-8a74-4963-9731-40767b4bb127 for this chassis.
Dec 13 08:32:16 compute-0 ovn_controller[148476]: 2025-12-13T08:32:16Z|00673|binding|INFO|533958d1-8a74-4963-9731-40767b4bb127: Claiming fa:16:3e:66:ab:b9 10.100.0.5
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.776 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:ab:b9 10.100.0.5'], port_security=['fa:16:3e:66:ab:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d34feed-2663-4e17-b951-65a37bd3a275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45afb483-a012-4442-b20c-edd0f1f0f374', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=533958d1-8a74-4963-9731-40767b4bb127) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.777 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 533958d1-8a74-4963-9731-40767b4bb127 in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f bound to our chassis
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.779 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f
Dec 13 08:32:16 compute-0 NetworkManager[50376]: <info>  [1765614736.7807] device (tap533958d1-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:32:16 compute-0 NetworkManager[50376]: <info>  [1765614736.7885] device (tap533958d1-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:32:16 compute-0 ovn_controller[148476]: 2025-12-13T08:32:16Z|00674|binding|INFO|Setting lport 533958d1-8a74-4963-9731-40767b4bb127 ovn-installed in OVS
Dec 13 08:32:16 compute-0 ovn_controller[148476]: 2025-12-13T08:32:16Z|00675|binding|INFO|Setting lport 533958d1-8a74-4963-9731-40767b4bb127 up in Southbound
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.799 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7c87e1-8f7c-49be-a125-c176606131af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.800 248514 DEBUG nova.network.neutron [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Updated VIF entry in instance network info cache for port 7903e65c-c0bf-4bb5-b044-95f5693f5c38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.801 248514 DEBUG nova.network.neutron [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Updating instance_info_cache with network_info: [{"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:16 compute-0 systemd-machined[210538]: New machine qemu-84-instance-00000048.
Dec 13 08:32:16 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-00000048.
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.828 248514 DEBUG oslo_concurrency.lockutils [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-dc076c88-fe0b-4674-ac32-fb22420b78bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.835 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[39342143-a25f-4afc-a76e-77698e4106bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.840 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f97c6b1e-af70-4f49-a455-703e99201d63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.877 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d0cd9e83-b1a6-4849-a5c0-ef61bed3a353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.899 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f78a7dbb-41bf-44f8-a8a3-45fe96a151ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317236, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.922 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[415b25c3-c415-4f07-b2ba-0b9fce7d653c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727416, 'tstamp': 727416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317238, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727421, 'tstamp': 727421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317238, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.925 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.933 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.934 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.935 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.935 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.936 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.945 248514 INFO nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Creating config drive at /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/disk.config
Dec 13 08:32:16 compute-0 nova_compute[248510]: 2025-12-13 08:32:16.952 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp085tm9rq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:16 compute-0 ceph-mon[76537]: pgmap v2050: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 180 op/s
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.003 248514 DEBUG nova.network.neutron [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Updated VIF entry in instance network info cache for port 2bdcea64-a01f-4a75-b664-9c9c971533a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.004 248514 DEBUG nova.network.neutron [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Updating instance_info_cache with network_info: [{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.035 248514 DEBUG oslo_concurrency.lockutils [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.104 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp085tm9rq" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.139 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.145 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/disk.config dc076c88-fe0b-4674-ac32-fb22420b78bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.312 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/disk.config dc076c88-fe0b-4674-ac32-fb22420b78bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.313 248514 INFO nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Deleting local config drive /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/disk.config because it was imported into RBD.
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.325 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614737.3251662, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.326 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Started (Lifecycle Event)
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.363 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.368 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614737.3325074, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.369 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Paused (Lifecycle Event)
Dec 13 08:32:17 compute-0 kernel: tap7903e65c-c0: entered promiscuous mode
Dec 13 08:32:17 compute-0 NetworkManager[50376]: <info>  [1765614737.3713] manager: (tap7903e65c-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/302)
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.374 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:17 compute-0 ovn_controller[148476]: 2025-12-13T08:32:17Z|00676|binding|INFO|Claiming lport 7903e65c-c0bf-4bb5-b044-95f5693f5c38 for this chassis.
Dec 13 08:32:17 compute-0 ovn_controller[148476]: 2025-12-13T08:32:17Z|00677|binding|INFO|7903e65c-c0bf-4bb5-b044-95f5693f5c38: Claiming fa:16:3e:cc:56:55 10.100.0.9
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.385 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:56:55 10.100.0.9'], port_security=['fa:16:3e:cc:56:55 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dc076c88-fe0b-4674-ac32-fb22420b78bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b109409-2f17-43b9-91ce-3ee865f0de12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6896dc77-88b0-4868-833d-01241883d6af, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7903e65c-c0bf-4bb5-b044-95f5693f5c38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.387 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7903e65c-c0bf-4bb5-b044-95f5693f5c38 in datapath 527d37da-eda0-4bfe-9f1d-310d58024d5d bound to our chassis
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.389 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 08:32:17 compute-0 NetworkManager[50376]: <info>  [1765614737.3944] device (tap7903e65c-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:32:17 compute-0 NetworkManager[50376]: <info>  [1765614737.3952] device (tap7903e65c-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.398 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:17 compute-0 ovn_controller[148476]: 2025-12-13T08:32:17Z|00678|binding|INFO|Setting lport 7903e65c-c0bf-4bb5-b044-95f5693f5c38 ovn-installed in OVS
Dec 13 08:32:17 compute-0 ovn_controller[148476]: 2025-12-13T08:32:17Z|00679|binding|INFO|Setting lport 7903e65c-c0bf-4bb5-b044-95f5693f5c38 up in Southbound
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.401 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.402 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.406 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f23dec5d-5926-4268-9035-aa49aecdf954]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.407 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap527d37da-e1 in ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.409 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap527d37da-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.409 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[19e9c0b9-4a75-401a-99d5-86c6b1cdaebe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.411 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[edf07889-ca73-472f-a4e0-effc644d569d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.411 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:17 compute-0 systemd-machined[210538]: New machine qemu-85-instance-00000049.
Dec 13 08:32:17 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-00000049.
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.427 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb893c2-9a8a-44e7-a5d2-c660e1b65416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.441 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.441 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614737.4354827, 5d34feed-2663-4e17-b951-65a37bd3a275 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.441 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Started (Lifecycle Event)
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.445 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5e0346-c074-4914-9b7a-b7292207ab4e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.465 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.472 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614737.435613, 5d34feed-2663-4e17-b951-65a37bd3a275 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.472 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Paused (Lifecycle Event)
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.479 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9b1b60-9948-4f4c-a9d7-b76647b881b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 NetworkManager[50376]: <info>  [1765614737.4887] manager: (tap527d37da-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/303)
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.488 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0ea2f5-2835-47d9-9488-c27779132590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.498 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.505 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.527 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d133cded-edbc-42d7-b4e2-f42a6c127e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.529 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.530 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[59f95452-27d9-45cd-b9d9-8534e2e9b783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 NetworkManager[50376]: <info>  [1765614737.5564] device (tap527d37da-e0): carrier: link connected
Dec 13 08:32:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 5.3 MiB/s wr, 141 op/s
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.565 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dba5458c-443c-4e03-a0ad-a863dcce3c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.588 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[54d72114-3316-49e8-a771-7ea8d99bb137]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d37da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:e1:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731477, 'reachable_time': 37710, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317414, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.609 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[18c7fe41-a824-416d-b08a-e2b89d68bff4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:e196'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731477, 'tstamp': 731477}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317415, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.627 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9770f970-7ded-4d1b-98a5-97ad218bf00d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d37da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:e1:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731477, 'reachable_time': 37710, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317416, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef597c0-e8dd-4c0c-8d83-749d9038389a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.749 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b71dad1-924e-4a4a-8355-f4f9a6a398a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d37da-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap527d37da-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:17 compute-0 kernel: tap527d37da-e0: entered promiscuous mode
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:17 compute-0 NetworkManager[50376]: <info>  [1765614737.7551] manager: (tap527d37da-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.757 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap527d37da-e0, col_values=(('external_ids', {'iface-id': '9bf9e6e9-c189-485c-8803-c58be1ee6099'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:17 compute-0 ovn_controller[148476]: 2025-12-13T08:32:17Z|00680|binding|INFO|Releasing lport 9bf9e6e9-c189-485c-8803-c58be1ee6099 from this chassis (sb_readonly=0)
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:17 compute-0 nova_compute[248510]: 2025-12-13 08:32:17.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.778 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.779 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[492dd429-bda1-4b42-bff1-c46804cd48f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.780 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:32:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.781 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'env', 'PROCESS_TAG=haproxy-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/527d37da-eda0-4bfe-9f1d-310d58024d5d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.176 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614738.1762235, dc076c88-fe0b-4674-ac32-fb22420b78bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.177 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] VM Started (Lifecycle Event)
Dec 13 08:32:18 compute-0 podman[317489]: 2025-12-13 08:32:18.206832464 +0000 UTC m=+0.068202582 container create 7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.210 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.221 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614738.1764863, dc076c88-fe0b-4674-ac32-fb22420b78bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.221 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] VM Paused (Lifecycle Event)
Dec 13 08:32:18 compute-0 systemd[1]: Started libpod-conmon-7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3.scope.
Dec 13 08:32:18 compute-0 podman[317489]: 2025-12-13 08:32:18.171024134 +0000 UTC m=+0.032394242 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:32:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9542c431d09130ceda6f2031f37bc321f382444a062bd064f5a5f75508c11df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:32:18 compute-0 podman[317489]: 2025-12-13 08:32:18.303434462 +0000 UTC m=+0.164804570 container init 7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 08:32:18 compute-0 podman[317489]: 2025-12-13 08:32:18.309113349 +0000 UTC m=+0.170483437 container start 7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 08:32:18 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [NOTICE]   (317509) : New worker (317511) forked
Dec 13 08:32:18 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [NOTICE]   (317509) : Loading success.
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.386 248514 DEBUG nova.network.neutron [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Updated VIF entry in instance network info cache for port 533958d1-8a74-4963-9731-40767b4bb127. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.386 248514 DEBUG nova.network.neutron [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Updating instance_info_cache with network_info: [{"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.397 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.403 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.407 248514 DEBUG oslo_concurrency.lockutils [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5d34feed-2663-4e17-b951-65a37bd3a275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.428 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.868 248514 DEBUG nova.compute.manager [req-9400019d-ea10-489e-aebb-59ee85dfb86d req-3b9d641e-4e9a-4167-af3e-b4554cb7b55e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.869 248514 DEBUG oslo_concurrency.lockutils [req-9400019d-ea10-489e-aebb-59ee85dfb86d req-3b9d641e-4e9a-4167-af3e-b4554cb7b55e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.869 248514 DEBUG oslo_concurrency.lockutils [req-9400019d-ea10-489e-aebb-59ee85dfb86d req-3b9d641e-4e9a-4167-af3e-b4554cb7b55e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.869 248514 DEBUG oslo_concurrency.lockutils [req-9400019d-ea10-489e-aebb-59ee85dfb86d req-3b9d641e-4e9a-4167-af3e-b4554cb7b55e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.869 248514 DEBUG nova.compute.manager [req-9400019d-ea10-489e-aebb-59ee85dfb86d req-3b9d641e-4e9a-4167-af3e-b4554cb7b55e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Processing event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.870 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.875 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614738.8752797, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.875 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Resumed (Lifecycle Event)
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.878 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.883 248514 INFO nova.virt.libvirt.driver [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance spawned successfully.
Dec 13 08:32:18 compute-0 nova_compute[248510]: 2025-12-13 08:32:18.883 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:32:18 compute-0 ceph-mon[76537]: pgmap v2051: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 5.3 MiB/s wr, 141 op/s
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.059 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.067 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.071 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.071 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.072 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.072 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.072 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.073 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.104 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.163 248514 INFO nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Took 12.86 seconds to spawn the instance on the hypervisor.
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.164 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.244 248514 INFO nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Took 14.18 seconds to build instance.
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.267 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 5.4 MiB/s wr, 154 op/s
Dec 13 08:32:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:32:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.2 total, 600.0 interval
                                           Cumulative writes: 19K writes, 79K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 19K writes, 6254 syncs, 3.12 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7515 writes, 29K keys, 7515 commit groups, 1.0 writes per commit group, ingest: 28.11 MB, 0.05 MB/s
                                           Interval WAL: 7515 writes, 2963 syncs, 2.54 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 08:32:19 compute-0 ceph-osd[89221]: bluestore.MempoolThread fragmentation_score=0.002473 took=0.000052s
Dec 13 08:32:19 compute-0 nova_compute[248510]: 2025-12-13 08:32:19.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:20 compute-0 ceph-mon[76537]: pgmap v2052: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 5.4 MiB/s wr, 154 op/s
Dec 13 08:32:20 compute-0 nova_compute[248510]: 2025-12-13 08:32:20.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0036801846766212594 of space, bias 1.0, pg target 1.1040554029863778 quantized to 32 (current 32)
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006673386640797728 of space, bias 1.0, pg target 0.19953426055985207 quantized to 32 (current 32)
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.056797568854111e-07 of space, bias 4.0, pg target 0.0007243929892349517 quantized to 16 (current 32)
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:32:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Dec 13 08:32:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.214 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.214 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.215 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.217 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.217 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.217 248514 WARNING nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state active and task_state None.
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.217 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.218 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.218 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.218 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.218 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Processing event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.219 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.219 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.219 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.219 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.220 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] No waiting events found dispatching network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.220 248514 WARNING nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received unexpected event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 for instance with vm_state building and task_state spawning.
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.220 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.220 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.220 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.221 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.221 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Processing event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.221 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.221 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.222 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.222 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.222 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] No waiting events found dispatching network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.222 248514 WARNING nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received unexpected event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 for instance with vm_state building and task_state spawning.
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.223 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.224 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.231 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614741.2305806, dc076c88-fe0b-4674-ac32-fb22420b78bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.232 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] VM Resumed (Lifecycle Event)
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.234 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.234 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.239 248514 INFO nova.virt.libvirt.driver [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Instance spawned successfully.
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.239 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.242 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance spawned successfully.
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.243 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.258 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.263 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.279 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.280 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.280 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.280 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.281 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.281 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.293 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.294 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.295 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.295 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.295 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.296 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.342 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.343 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614741.2339013, 5d34feed-2663-4e17-b951-65a37bd3a275 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.343 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Resumed (Lifecycle Event)
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.398 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.401 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.437 248514 INFO nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Took 12.45 seconds to spawn the instance on the hypervisor.
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.437 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.438 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.546 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.547 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.564 248514 INFO nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Took 14.09 seconds to spawn the instance on the hypervisor.
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.565 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 469 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.0 MiB/s wr, 165 op/s
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.576 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.660 248514 INFO nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Took 15.00 seconds to build instance.
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.696 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.708 248514 INFO nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Took 16.59 seconds to build instance.
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.728 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.728 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.733 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.736 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:32:21 compute-0 nova_compute[248510]: 2025-12-13 08:32:21.736 248514 INFO nova.compute.claims [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:32:22 compute-0 nova_compute[248510]: 2025-12-13 08:32:22.132 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:22 compute-0 ceph-mon[76537]: pgmap v2053: 321 pgs: 321 active+clean; 469 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.0 MiB/s wr, 165 op/s
Dec 13 08:32:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:32:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531208489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:22 compute-0 nova_compute[248510]: 2025-12-13 08:32:22.919 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.788s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:22 compute-0 nova_compute[248510]: 2025-12-13 08:32:22.927 248514 DEBUG nova.compute.provider_tree [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:32:22 compute-0 nova_compute[248510]: 2025-12-13 08:32:22.956 248514 DEBUG nova.scheduler.client.report [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:32:22 compute-0 nova_compute[248510]: 2025-12-13 08:32:22.990 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:22 compute-0 nova_compute[248510]: 2025-12-13 08:32:22.991 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.049 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.050 248514 DEBUG nova.network.neutron [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.078 248514 INFO nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.102 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:32:23 compute-0 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.459 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.460 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.460 248514 INFO nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Creating image(s)
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.488 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.520 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.548 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.553 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 469 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 191 op/s
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.597 248514 DEBUG nova.policy [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d19f7d5ece8482dab03e4bc02fdf410', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c6718df841f0471ba710516400f126fa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.640 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.641 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.642 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.642 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.642 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.644 248514 INFO nova.compute.manager [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Terminating instance
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.645 248514 DEBUG nova.compute.manager [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.646 248514 INFO nova.compute.manager [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Rescuing
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.646 248514 DEBUG oslo_concurrency.lockutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.646 248514 DEBUG oslo_concurrency.lockutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquired lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.646 248514 DEBUG nova.network.neutron [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.649 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.650 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.650 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.651 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.706 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.715 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bb8c91ff-01cb-4fd5-ab69-005313784b57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:23 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/531208489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:23 compute-0 kernel: tap7903e65c-c0 (unregistering): left promiscuous mode
Dec 13 08:32:23 compute-0 NetworkManager[50376]: <info>  [1765614743.9364] device (tap7903e65c-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:32:23 compute-0 ovn_controller[148476]: 2025-12-13T08:32:23Z|00681|binding|INFO|Releasing lport 7903e65c-c0bf-4bb5-b044-95f5693f5c38 from this chassis (sb_readonly=0)
Dec 13 08:32:23 compute-0 ovn_controller[148476]: 2025-12-13T08:32:23Z|00682|binding|INFO|Setting lport 7903e65c-c0bf-4bb5-b044-95f5693f5c38 down in Southbound
Dec 13 08:32:23 compute-0 ovn_controller[148476]: 2025-12-13T08:32:23Z|00683|binding|INFO|Removing iface tap7903e65c-c0 ovn-installed in OVS
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.951 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:23.966 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:56:55 10.100.0.9'], port_security=['fa:16:3e:cc:56:55 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dc076c88-fe0b-4674-ac32-fb22420b78bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b109409-2f17-43b9-91ce-3ee865f0de12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6896dc77-88b0-4868-833d-01241883d6af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7903e65c-c0bf-4bb5-b044-95f5693f5c38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:23.967 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7903e65c-c0bf-4bb5-b044-95f5693f5c38 in datapath 527d37da-eda0-4bfe-9f1d-310d58024d5d unbound from our chassis
Dec 13 08:32:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:23.969 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 527d37da-eda0-4bfe-9f1d-310d58024d5d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:32:23 compute-0 nova_compute[248510]: 2025-12-13 08:32:23.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:23.977 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3583fa-5a42-477e-8577-3010472d391d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:23.983 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d namespace which is not needed anymore
Dec 13 08:32:23 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000049.scope: Deactivated successfully.
Dec 13 08:32:23 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000049.scope: Consumed 3.330s CPU time.
Dec 13 08:32:24 compute-0 systemd-machined[210538]: Machine qemu-85-instance-00000049 terminated.
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.203 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.211 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.221 248514 INFO nova.virt.libvirt.driver [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Instance destroyed successfully.
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.221 248514 DEBUG nova.objects.instance [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'resources' on Instance uuid dc076c88-fe0b-4674-ac32-fb22420b78bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:24 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [NOTICE]   (317509) : haproxy version is 2.8.14-c23fe91
Dec 13 08:32:24 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [NOTICE]   (317509) : path to executable is /usr/sbin/haproxy
Dec 13 08:32:24 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [WARNING]  (317509) : Exiting Master process...
Dec 13 08:32:24 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [ALERT]    (317509) : Current worker (317511) exited with code 143 (Terminated)
Dec 13 08:32:24 compute-0 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [WARNING]  (317509) : All workers exited. Exiting... (0)
Dec 13 08:32:24 compute-0 systemd[1]: libpod-7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3.scope: Deactivated successfully.
Dec 13 08:32:24 compute-0 podman[317659]: 2025-12-13 08:32:24.246251398 +0000 UTC m=+0.126755819 container died 7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.245 248514 DEBUG nova.virt.libvirt.vif [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1364276133',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1364276133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1364276133',id=73,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-1zyw8g5c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:32:21Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=dc076c88-fe0b-4674-ac32-fb22420b78bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.246 248514 DEBUG nova.network.os_vif_util [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.247 248514 DEBUG nova.network.os_vif_util [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.248 248514 DEBUG os_vif [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.251 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.252 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7903e65c-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.254 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.257 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.260 248514 INFO os_vif [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0')
Dec 13 08:32:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3-userdata-shm.mount: Deactivated successfully.
Dec 13 08:32:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9542c431d09130ceda6f2031f37bc321f382444a062bd064f5a5f75508c11df-merged.mount: Deactivated successfully.
Dec 13 08:32:24 compute-0 podman[317659]: 2025-12-13 08:32:24.322275886 +0000 UTC m=+0.202780307 container cleanup 7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 08:32:24 compute-0 systemd[1]: libpod-conmon-7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3.scope: Deactivated successfully.
Dec 13 08:32:24 compute-0 podman[317688]: 2025-12-13 08:32:24.387860285 +0000 UTC m=+0.093365560 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.388 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bb8c91ff-01cb-4fd5-ab69-005313784b57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:24 compute-0 podman[317740]: 2025-12-13 08:32:24.405333949 +0000 UTC m=+0.058552495 container remove 7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:32:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.416 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aecda265-9364-41cb-bb75-424049f46d03]: (4, ('Sat Dec 13 08:32:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d (7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3)\n7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3\nSat Dec 13 08:32:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d (7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3)\n7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.419 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af1c6e88-5465-42ea-90e1-0a0aaa5da5d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.420 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d37da-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:24 compute-0 kernel: tap527d37da-e0: left promiscuous mode
Dec 13 08:32:24 compute-0 podman[317684]: 2025-12-13 08:32:24.431033807 +0000 UTC m=+0.157873932 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:32:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.446 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[70755ccc-2a4e-4952-81ae-be90277df4e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.459 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b468c0-4187-43dd-88eb-c0049c7fb32a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.461 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f96dff-1b2f-4472-a0e5-47ad6959d1b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:24 compute-0 podman[317682]: 2025-12-13 08:32:24.478168228 +0000 UTC m=+0.199619149 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:32:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.482 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc71971-4b27-4b1e-a1fc-294a159e56d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731468, 'reachable_time': 32811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317804, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d527d37da\x2deda0\x2d4bfe\x2d9f1d\x2d310d58024d5d.mount: Deactivated successfully.
Dec 13 08:32:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.487 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:32:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.487 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcdc51d-e969-4c31-9638-2e02fc104ced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.516 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] resizing rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.606 248514 DEBUG nova.objects.instance [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'migration_context' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.626 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.626 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Ensure instance console log exists: /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.627 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.627 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.628 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.706 248514 INFO nova.virt.libvirt.driver [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Deleting instance files /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc_del
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.707 248514 INFO nova.virt.libvirt.driver [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Deletion of /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc_del complete
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.791 248514 INFO nova.compute.manager [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Took 1.15 seconds to destroy the instance on the hypervisor.
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.792 248514 DEBUG oslo.service.loopingcall [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.794 248514 DEBUG nova.compute.manager [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:32:24 compute-0 nova_compute[248510]: 2025-12-13 08:32:24.794 248514 DEBUG nova.network.neutron [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:32:24 compute-0 ceph-mon[76537]: pgmap v2054: 321 pgs: 321 active+clean; 469 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 191 op/s
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.252 248514 DEBUG nova.network.neutron [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Successfully created port: d001c32a-bc2d-4374-9cf1-cea4a3723c66 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.397 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.508 248514 DEBUG nova.compute.manager [req-e55fdfeb-da45-465a-83e6-78a76429f362 req-7aecb702-3615-45bd-bbfc-ea666b928345 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-vif-unplugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.508 248514 DEBUG oslo_concurrency.lockutils [req-e55fdfeb-da45-465a-83e6-78a76429f362 req-7aecb702-3615-45bd-bbfc-ea666b928345 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.508 248514 DEBUG oslo_concurrency.lockutils [req-e55fdfeb-da45-465a-83e6-78a76429f362 req-7aecb702-3615-45bd-bbfc-ea666b928345 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.509 248514 DEBUG oslo_concurrency.lockutils [req-e55fdfeb-da45-465a-83e6-78a76429f362 req-7aecb702-3615-45bd-bbfc-ea666b928345 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.509 248514 DEBUG nova.compute.manager [req-e55fdfeb-da45-465a-83e6-78a76429f362 req-7aecb702-3615-45bd-bbfc-ea666b928345 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] No waiting events found dispatching network-vif-unplugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.509 248514 DEBUG nova.compute.manager [req-e55fdfeb-da45-465a-83e6-78a76429f362 req-7aecb702-3615-45bd-bbfc-ea666b928345 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-vif-unplugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:32:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 1.5 MiB/s wr, 309 op/s
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.683 248514 DEBUG oslo_concurrency.lockutils [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.683 248514 DEBUG oslo_concurrency.lockutils [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.684 248514 DEBUG nova.compute.manager [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.687 248514 DEBUG nova.compute.manager [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.690 248514 DEBUG nova.objects.instance [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'flavor' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.732 248514 DEBUG nova.virt.libvirt.driver [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.740 248514 DEBUG nova.network.neutron [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Updating instance_info_cache with network_info: [{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:32:25 compute-0 nova_compute[248510]: 2025-12-13 08:32:25.760 248514 DEBUG oslo_concurrency.lockutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Releasing lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:32:26 compute-0 ceph-mon[76537]: pgmap v2055: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 1.5 MiB/s wr, 309 op/s
Dec 13 08:32:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:32:26 compute-0 nova_compute[248510]: 2025-12-13 08:32:26.170 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:32:26 compute-0 nova_compute[248510]: 2025-12-13 08:32:26.319 248514 DEBUG nova.network.neutron [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:32:26 compute-0 nova_compute[248510]: 2025-12-13 08:32:26.352 248514 INFO nova.compute.manager [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Took 1.56 seconds to deallocate network for instance.
Dec 13 08:32:26 compute-0 nova_compute[248510]: 2025-12-13 08:32:26.409 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:26 compute-0 nova_compute[248510]: 2025-12-13 08:32:26.410 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:26 compute-0 nova_compute[248510]: 2025-12-13 08:32:26.627 248514 DEBUG oslo_concurrency.processutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:26 compute-0 nova_compute[248510]: 2025-12-13 08:32:26.840 248514 DEBUG nova.network.neutron [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Successfully updated port: d001c32a-bc2d-4374-9cf1-cea4a3723c66 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:32:26 compute-0 nova_compute[248510]: 2025-12-13 08:32:26.863 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-bb8c91ff-01cb-4fd5-ab69-005313784b57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:32:26 compute-0 nova_compute[248510]: 2025-12-13 08:32:26.863 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-bb8c91ff-01cb-4fd5-ab69-005313784b57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:32:26 compute-0 nova_compute[248510]: 2025-12-13 08:32:26.863 248514 DEBUG nova.network.neutron [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:32:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:32:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3852630843' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.241 248514 DEBUG oslo_concurrency.processutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.246 248514 DEBUG nova.compute.provider_tree [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.266 248514 DEBUG nova.scheduler.client.report [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:32:27 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3852630843' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.430 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.470 248514 INFO nova.scheduler.client.report [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Deleted allocations for instance dc076c88-fe0b-4674-ac32-fb22420b78bc
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.473 248514 DEBUG nova.network.neutron [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:32:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.0 MiB/s wr, 250 op/s
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.615 248514 DEBUG nova.compute.manager [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.616 248514 DEBUG oslo_concurrency.lockutils [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.616 248514 DEBUG oslo_concurrency.lockutils [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.616 248514 DEBUG oslo_concurrency.lockutils [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.616 248514 DEBUG nova.compute.manager [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] No waiting events found dispatching network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.616 248514 WARNING nova.compute.manager [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received unexpected event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 for instance with vm_state deleted and task_state None.
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.617 248514 DEBUG nova.compute.manager [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-vif-deleted-7903e65c-c0bf-4bb5-b044-95f5693f5c38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.617 248514 DEBUG nova.compute.manager [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-changed-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.617 248514 DEBUG nova.compute.manager [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Refreshing instance network info cache due to event network-changed-d001c32a-bc2d-4374-9cf1-cea4a3723c66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.617 248514 DEBUG oslo_concurrency.lockutils [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bb8c91ff-01cb-4fd5-ab69-005313784b57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:32:27 compute-0 nova_compute[248510]: 2025-12-13 08:32:27.620 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:28 compute-0 ceph-mon[76537]: pgmap v2056: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.0 MiB/s wr, 250 op/s
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.803 248514 DEBUG nova.network.neutron [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Updating instance_info_cache with network_info: [{"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.833 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-bb8c91ff-01cb-4fd5-ab69-005313784b57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.833 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance network_info: |[{"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.834 248514 DEBUG oslo_concurrency.lockutils [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bb8c91ff-01cb-4fd5-ab69-005313784b57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.834 248514 DEBUG nova.network.neutron [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Refreshing network info cache for port d001c32a-bc2d-4374-9cf1-cea4a3723c66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.837 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Start _get_guest_xml network_info=[{"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.841 248514 WARNING nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.848 248514 DEBUG nova.virt.libvirt.host [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.849 248514 DEBUG nova.virt.libvirt.host [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.852 248514 DEBUG nova.virt.libvirt.host [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.853 248514 DEBUG nova.virt.libvirt.host [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.853 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.854 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.854 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.854 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.854 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.854 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.855 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.855 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.855 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.855 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.855 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.856 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:32:28 compute-0 nova_compute[248510]: 2025-12-13 08:32:28.859 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:29 compute-0 nova_compute[248510]: 2025-12-13 08:32:29.256 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514561463' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:29 compute-0 nova_compute[248510]: 2025-12-13 08:32:29.433 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:29 compute-0 nova_compute[248510]: 2025-12-13 08:32:29.456 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:29 compute-0 nova_compute[248510]: 2025-12-13 08:32:29.461 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:29 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2514561463' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 469 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 276 op/s
Dec 13 08:32:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1214043577' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.064 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.066 248514 DEBUG nova.virt.libvirt.vif [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-81806715',display_name='tempest-tempest.common.compute-instance-81806715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-81806715',id=74,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-yhg7rdps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:23Z,user_data=None,user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=bb8c91ff-01cb-4fd5-ab69-005313784b57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.066 248514 DEBUG nova.network.os_vif_util [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.067 248514 DEBUG nova.network.os_vif_util [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.068 248514 DEBUG nova.objects.instance [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.273 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:32:30 compute-0 nova_compute[248510]:   <uuid>bb8c91ff-01cb-4fd5-ab69-005313784b57</uuid>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   <name>instance-0000004a</name>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <nova:name>tempest-tempest.common.compute-instance-81806715</nova:name>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:32:28</nova:creationTime>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:32:30 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:32:30 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:32:30 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:32:30 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:32:30 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:32:30 compute-0 nova_compute[248510]:         <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 08:32:30 compute-0 nova_compute[248510]:         <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:32:30 compute-0 nova_compute[248510]:         <nova:port uuid="d001c32a-bc2d-4374-9cf1-cea4a3723c66">
Dec 13 08:32:30 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <system>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <entry name="serial">bb8c91ff-01cb-4fd5-ab69-005313784b57</entry>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <entry name="uuid">bb8c91ff-01cb-4fd5-ab69-005313784b57</entry>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     </system>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   <os>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   </os>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   <features>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   </features>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/bb8c91ff-01cb-4fd5-ab69-005313784b57_disk">
Dec 13 08:32:30 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:30 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config">
Dec 13 08:32:30 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:30 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:63:10:dc"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <target dev="tapd001c32a-bc"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/console.log" append="off"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <video>
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     </video>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:32:30 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:32:30 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:32:30 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:32:30 compute-0 nova_compute[248510]: </domain>
Dec 13 08:32:30 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.275 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Preparing to wait for external event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.275 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.276 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.276 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.277 248514 DEBUG nova.virt.libvirt.vif [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-81806715',display_name='tempest-tempest.common.compute-instance-81806715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-81806715',id=74,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-yhg7rdps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:23Z,user_data=None,user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=bb8c91ff-01cb-4fd5-ab69-005313784b57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.277 248514 DEBUG nova.network.os_vif_util [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.278 248514 DEBUG nova.network.os_vif_util [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.278 248514 DEBUG os_vif [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.279 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.280 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.280 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.284 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd001c32a-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.284 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd001c32a-bc, col_values=(('external_ids', {'iface-id': 'd001c32a-bc2d-4374-9cf1-cea4a3723c66', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:10:dc', 'vm-uuid': 'bb8c91ff-01cb-4fd5-ab69-005313784b57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.286 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:30 compute-0 NetworkManager[50376]: <info>  [1765614750.2870] manager: (tapd001c32a-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.289 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.292 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.294 248514 INFO os_vif [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc')
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.357 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.357 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.358 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No VIF found with MAC fa:16:3e:63:10:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.358 248514 INFO nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Using config drive
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.384 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.398 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:30 compute-0 ceph-mon[76537]: pgmap v2057: 321 pgs: 321 active+clean; 469 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 276 op/s
Dec 13 08:32:30 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1214043577' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.878 248514 INFO nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Creating config drive at /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.884 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0dcnw3y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.934 248514 DEBUG nova.network.neutron [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Updated VIF entry in instance network info cache for port d001c32a-bc2d-4374-9cf1-cea4a3723c66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:32:30 compute-0 nova_compute[248510]: 2025-12-13 08:32:30.935 248514 DEBUG nova.network.neutron [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Updating instance_info_cache with network_info: [{"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.032 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0dcnw3y" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.061 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.065 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.074 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.075 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.119 248514 DEBUG oslo_concurrency.lockutils [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bb8c91ff-01cb-4fd5-ab69-005313784b57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:32:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.229 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.231 248514 INFO nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deleting local config drive /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config because it was imported into RBD.
Dec 13 08:32:31 compute-0 kernel: tapd001c32a-bc: entered promiscuous mode
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.291 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:31 compute-0 ovn_controller[148476]: 2025-12-13T08:32:31Z|00684|binding|INFO|Claiming lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 for this chassis.
Dec 13 08:32:31 compute-0 ovn_controller[148476]: 2025-12-13T08:32:31Z|00685|binding|INFO|d001c32a-bc2d-4374-9cf1-cea4a3723c66: Claiming fa:16:3e:63:10:dc 10.100.0.13
Dec 13 08:32:31 compute-0 NetworkManager[50376]: <info>  [1765614751.2971] manager: (tapd001c32a-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/306)
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.304 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:10:dc 10.100.0.13'], port_security=['fa:16:3e:63:10:dc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb8c91ff-01cb-4fd5-ab69-005313784b57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8f5cceeb-71e0-4eee-9335-64d44ec2d969', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d001c32a-bc2d-4374-9cf1-cea4a3723c66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.305 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d001c32a-bc2d-4374-9cf1-cea4a3723c66 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.309 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:32:31 compute-0 ovn_controller[148476]: 2025-12-13T08:32:31Z|00686|binding|INFO|Setting lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 ovn-installed in OVS
Dec 13 08:32:31 compute-0 ovn_controller[148476]: 2025-12-13T08:32:31Z|00687|binding|INFO|Setting lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 up in Southbound
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.319 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:31 compute-0 systemd-udevd[318016]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.331 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f94dc89-09f1-45a9-9052-e1b84ff473a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:31 compute-0 NetworkManager[50376]: <info>  [1765614751.3431] device (tapd001c32a-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:32:31 compute-0 NetworkManager[50376]: <info>  [1765614751.3441] device (tapd001c32a-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:32:31 compute-0 systemd-machined[210538]: New machine qemu-86-instance-0000004a.
Dec 13 08:32:31 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-0000004a.
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.376 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7af3cdd8-9f6d-4b4c-b94e-e4eecf85761b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.379 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0822d0b6-671d-49d9-950f-019cf8c63ada]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.413 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f51befd5-1773-49d6-86e1-f0cbb668e61f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.448 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff81c76-b002-4f8e-9a93-6214fabfc0b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727295, 'reachable_time': 29620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318030, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.468 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b53bfb5c-e5cf-426f-91cf-c1cc9d95c8c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727305, 'tstamp': 727305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318032, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727308, 'tstamp': 727308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318032, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.471 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.473 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.475 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.475 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.475 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.476 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 469 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 263 op/s
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.634149) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614751634215, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1267, "num_deletes": 253, "total_data_size": 1870054, "memory_usage": 1901392, "flush_reason": "Manual Compaction"}
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614751648568, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 1825298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38788, "largest_seqno": 40054, "table_properties": {"data_size": 1819273, "index_size": 3292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13478, "raw_average_key_size": 20, "raw_value_size": 1806879, "raw_average_value_size": 2725, "num_data_blocks": 146, "num_entries": 663, "num_filter_entries": 663, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614645, "oldest_key_time": 1765614645, "file_creation_time": 1765614751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 14512 microseconds, and 6202 cpu microseconds.
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.648627) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 1825298 bytes OK
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.648686) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.651399) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.651415) EVENT_LOG_v1 {"time_micros": 1765614751651410, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.651442) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1864244, prev total WAL file size 1864244, number of live WAL files 2.
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.652172) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(1782KB)], [86(10MB)]
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614751652480, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 12333504, "oldest_snapshot_seqno": -1}
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6452 keys, 10557422 bytes, temperature: kUnknown
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614751748021, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10557422, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10512489, "index_size": 27671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164414, "raw_average_key_size": 25, "raw_value_size": 10395008, "raw_average_value_size": 1611, "num_data_blocks": 1117, "num_entries": 6452, "num_filter_entries": 6452, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.748462) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10557422 bytes
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.750800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.9 rd, 110.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.0 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(12.5) write-amplify(5.8) OK, records in: 6974, records dropped: 522 output_compression: NoCompression
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.750840) EVENT_LOG_v1 {"time_micros": 1765614751750825, "job": 50, "event": "compaction_finished", "compaction_time_micros": 95697, "compaction_time_cpu_micros": 32593, "output_level": 6, "num_output_files": 1, "total_output_size": 10557422, "num_input_records": 6974, "num_output_records": 6452, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614751751698, "job": 50, "event": "table_file_deletion", "file_number": 88}
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614751755608, "job": 50, "event": "table_file_deletion", "file_number": 86}
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.652051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.755690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.755694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.755696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.755698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:32:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.755699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.802 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614751.8011086, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.802 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Started (Lifecycle Event)
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.829 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.836 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614751.8050423, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.836 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Paused (Lifecycle Event)
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.861 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.865 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.887 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.914 248514 DEBUG nova.compute.manager [req-304c7f5a-3b75-46c0-98d8-47475f258609 req-be1bc0f8-f6e2-4938-8704-9265d9774566 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.914 248514 DEBUG oslo_concurrency.lockutils [req-304c7f5a-3b75-46c0-98d8-47475f258609 req-be1bc0f8-f6e2-4938-8704-9265d9774566 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.914 248514 DEBUG oslo_concurrency.lockutils [req-304c7f5a-3b75-46c0-98d8-47475f258609 req-be1bc0f8-f6e2-4938-8704-9265d9774566 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.915 248514 DEBUG oslo_concurrency.lockutils [req-304c7f5a-3b75-46c0-98d8-47475f258609 req-be1bc0f8-f6e2-4938-8704-9265d9774566 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.915 248514 DEBUG nova.compute.manager [req-304c7f5a-3b75-46c0-98d8-47475f258609 req-be1bc0f8-f6e2-4938-8704-9265d9774566 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Processing event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.915 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.918 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614751.918699, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.919 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Resumed (Lifecycle Event)
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.921 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.925 248514 INFO nova.virt.libvirt.driver [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance spawned successfully.
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.925 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.951 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.962 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.970 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.971 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.972 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.973 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.974 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:31 compute-0 nova_compute[248510]: 2025-12-13 08:32:31.975 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:32:32 compute-0 nova_compute[248510]: 2025-12-13 08:32:32.009 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:32:32 compute-0 nova_compute[248510]: 2025-12-13 08:32:32.180 248514 INFO nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Took 8.72 seconds to spawn the instance on the hypervisor.
Dec 13 08:32:32 compute-0 nova_compute[248510]: 2025-12-13 08:32:32.181 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:32 compute-0 nova_compute[248510]: 2025-12-13 08:32:32.290 248514 INFO nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Took 10.60 seconds to build instance.
Dec 13 08:32:32 compute-0 nova_compute[248510]: 2025-12-13 08:32:32.317 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:32 compute-0 ceph-mon[76537]: pgmap v2058: 321 pgs: 321 active+clean; 469 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 263 op/s
Dec 13 08:32:33 compute-0 sshd-session[318075]: Invalid user ubuntu from 193.32.162.146 port 40344
Dec 13 08:32:33 compute-0 sshd-session[318075]: Connection closed by invalid user ubuntu 193.32.162.146 port 40344 [preauth]
Dec 13 08:32:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 471 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.1 MiB/s wr, 245 op/s
Dec 13 08:32:34 compute-0 ceph-mon[76537]: pgmap v2059: 321 pgs: 321 active+clean; 471 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.1 MiB/s wr, 245 op/s
Dec 13 08:32:34 compute-0 nova_compute[248510]: 2025-12-13 08:32:34.683 248514 DEBUG nova.compute.manager [req-76a313f6-41bf-4d9e-a96b-b8782f1dc956 req-8262d44c-d897-47f9-9e9b-ebbaac9d4b01 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:34 compute-0 nova_compute[248510]: 2025-12-13 08:32:34.684 248514 DEBUG oslo_concurrency.lockutils [req-76a313f6-41bf-4d9e-a96b-b8782f1dc956 req-8262d44c-d897-47f9-9e9b-ebbaac9d4b01 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:34 compute-0 nova_compute[248510]: 2025-12-13 08:32:34.684 248514 DEBUG oslo_concurrency.lockutils [req-76a313f6-41bf-4d9e-a96b-b8782f1dc956 req-8262d44c-d897-47f9-9e9b-ebbaac9d4b01 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:34 compute-0 nova_compute[248510]: 2025-12-13 08:32:34.684 248514 DEBUG oslo_concurrency.lockutils [req-76a313f6-41bf-4d9e-a96b-b8782f1dc956 req-8262d44c-d897-47f9-9e9b-ebbaac9d4b01 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:34 compute-0 nova_compute[248510]: 2025-12-13 08:32:34.685 248514 DEBUG nova.compute.manager [req-76a313f6-41bf-4d9e-a96b-b8782f1dc956 req-8262d44c-d897-47f9-9e9b-ebbaac9d4b01 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] No waiting events found dispatching network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:34 compute-0 nova_compute[248510]: 2025-12-13 08:32:34.685 248514 WARNING nova.compute.manager [req-76a313f6-41bf-4d9e-a96b-b8782f1dc956 req-8262d44c-d897-47f9-9e9b-ebbaac9d4b01 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received unexpected event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 for instance with vm_state active and task_state None.
Dec 13 08:32:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:35.077 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:35 compute-0 nova_compute[248510]: 2025-12-13 08:32:35.288 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:35 compute-0 nova_compute[248510]: 2025-12-13 08:32:35.401 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:35 compute-0 ovn_controller[148476]: 2025-12-13T08:32:35Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:ab:b9 10.100.0.5
Dec 13 08:32:35 compute-0 ovn_controller[148476]: 2025-12-13T08:32:35Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:ab:b9 10.100.0.5
Dec 13 08:32:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 500 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.0 MiB/s wr, 298 op/s
Dec 13 08:32:35 compute-0 nova_compute[248510]: 2025-12-13 08:32:35.782 248514 DEBUG nova.virt.libvirt.driver [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:32:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:32:36 compute-0 nova_compute[248510]: 2025-12-13 08:32:36.216 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:32:36 compute-0 ceph-mon[76537]: pgmap v2060: 321 pgs: 321 active+clean; 500 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.0 MiB/s wr, 298 op/s
Dec 13 08:32:36 compute-0 nova_compute[248510]: 2025-12-13 08:32:36.745 248514 INFO nova.compute.manager [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Rebuilding instance
Dec 13 08:32:37 compute-0 nova_compute[248510]: 2025-12-13 08:32:37.095 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'trusted_certs' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:37 compute-0 nova_compute[248510]: 2025-12-13 08:32:37.117 248514 DEBUG nova.compute.manager [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:37 compute-0 nova_compute[248510]: 2025-12-13 08:32:37.184 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_requests' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:37 compute-0 nova_compute[248510]: 2025-12-13 08:32:37.198 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:37 compute-0 nova_compute[248510]: 2025-12-13 08:32:37.215 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:37 compute-0 nova_compute[248510]: 2025-12-13 08:32:37.229 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'migration_context' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:37 compute-0 nova_compute[248510]: 2025-12-13 08:32:37.242 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:32:37 compute-0 nova_compute[248510]: 2025-12-13 08:32:37.246 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:32:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 500 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 139 op/s
Dec 13 08:32:38 compute-0 kernel: tap533958d1-8a (unregistering): left promiscuous mode
Dec 13 08:32:38 compute-0 NetworkManager[50376]: <info>  [1765614758.4375] device (tap533958d1-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:32:38 compute-0 nova_compute[248510]: 2025-12-13 08:32:38.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:38 compute-0 ovn_controller[148476]: 2025-12-13T08:32:38Z|00688|binding|INFO|Releasing lport 533958d1-8a74-4963-9731-40767b4bb127 from this chassis (sb_readonly=0)
Dec 13 08:32:38 compute-0 ovn_controller[148476]: 2025-12-13T08:32:38Z|00689|binding|INFO|Setting lport 533958d1-8a74-4963-9731-40767b4bb127 down in Southbound
Dec 13 08:32:38 compute-0 ovn_controller[148476]: 2025-12-13T08:32:38Z|00690|binding|INFO|Removing iface tap533958d1-8a ovn-installed in OVS
Dec 13 08:32:38 compute-0 nova_compute[248510]: 2025-12-13 08:32:38.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:38 compute-0 nova_compute[248510]: 2025-12-13 08:32:38.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:38 compute-0 kernel: tap2bdcea64-a0 (unregistering): left promiscuous mode
Dec 13 08:32:38 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000048.scope: Deactivated successfully.
Dec 13 08:32:38 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000048.scope: Consumed 14.954s CPU time.
Dec 13 08:32:38 compute-0 NetworkManager[50376]: <info>  [1765614758.5090] device (tap2bdcea64-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:32:38 compute-0 systemd-machined[210538]: Machine qemu-84-instance-00000048 terminated.
Dec 13 08:32:38 compute-0 nova_compute[248510]: 2025-12-13 08:32:38.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:38 compute-0 ovn_controller[148476]: 2025-12-13T08:32:38Z|00691|binding|INFO|Releasing lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 from this chassis (sb_readonly=1)
Dec 13 08:32:38 compute-0 ovn_controller[148476]: 2025-12-13T08:32:38Z|00692|binding|INFO|Removing iface tap2bdcea64-a0 ovn-installed in OVS
Dec 13 08:32:38 compute-0 ovn_controller[148476]: 2025-12-13T08:32:38Z|00693|if_status|INFO|Dropped 1 log messages in last 118 seconds (most recently, 118 seconds ago) due to excessive rate
Dec 13 08:32:38 compute-0 ovn_controller[148476]: 2025-12-13T08:32:38Z|00694|if_status|INFO|Not setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 down as sb is readonly
Dec 13 08:32:38 compute-0 nova_compute[248510]: 2025-12-13 08:32:38.527 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:38 compute-0 nova_compute[248510]: 2025-12-13 08:32:38.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:38 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000047.scope: Deactivated successfully.
Dec 13 08:32:38 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000047.scope: Consumed 16.649s CPU time.
Dec 13 08:32:38 compute-0 systemd-machined[210538]: Machine qemu-83-instance-00000047 terminated.
Dec 13 08:32:38 compute-0 nova_compute[248510]: 2025-12-13 08:32:38.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:38 compute-0 nova_compute[248510]: 2025-12-13 08:32:38.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:38 compute-0 ceph-mon[76537]: pgmap v2061: 321 pgs: 321 active+clean; 500 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 139 op/s
Dec 13 08:32:38 compute-0 NetworkManager[50376]: <info>  [1765614758.6996] manager: (tap2bdcea64-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Dec 13 08:32:38 compute-0 nova_compute[248510]: 2025-12-13 08:32:38.799 248514 INFO nova.virt.libvirt.driver [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance shutdown successfully after 13 seconds.
Dec 13 08:32:38 compute-0 nova_compute[248510]: 2025-12-13 08:32:38.805 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance destroyed successfully.
Dec 13 08:32:38 compute-0 nova_compute[248510]: 2025-12-13 08:32:38.806 248514 DEBUG nova.objects.instance [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'numa_topology' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:38 compute-0 ovn_controller[148476]: 2025-12-13T08:32:38Z|00695|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 down in Southbound
Dec 13 08:32:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:38.959 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:ab:b9 10.100.0.5'], port_security=['fa:16:3e:66:ab:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d34feed-2663-4e17-b951-65a37bd3a275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45afb483-a012-4442-b20c-edd0f1f0f374', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=533958d1-8a74-4963-9731-40767b4bb127) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:38.960 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 533958d1-8a74-4963-9731-40767b4bb127 in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f unbound from our chassis
Dec 13 08:32:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:38.963 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f
Dec 13 08:32:38 compute-0 nova_compute[248510]: 2025-12-13 08:32:38.976 248514 DEBUG nova.compute.manager [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:38.981 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[070fe869-5b07-4e63-8160-31b60f0ace64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.015 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4edbc03b-d2f5-453c-929b-d2d844b27c22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.019 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[96eb5a84-8a09-4727-be80-0b8e8491de44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.040 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:4d:d0 10.100.0.6'], port_security=['fa:16:3e:13:4d:d0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5edbf88-6361-407a-a0f1-c133f70b50e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2bdcea64-a01f-4a75-b664-9c9c971533a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.063 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a0550654-c774-4588-a556-ff88c4a7ce09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.081 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[27c44905-08e6-4979-ae9f-9a05d9c1dd81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318119, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.101 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[90b526af-1ddf-4154-84a8-ef214226b8c6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727416, 'tstamp': 727416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318120, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727421, 'tstamp': 727421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318120, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.103 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:39 compute-0 nova_compute[248510]: 2025-12-13 08:32:39.104 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:39 compute-0 nova_compute[248510]: 2025-12-13 08:32:39.112 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.113 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.113 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.114 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.114 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.116 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2bdcea64-a01f-4a75-b664-9c9c971533a6 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 unbound from our chassis
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.117 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:32:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.119 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[01700b5f-7403-495b-8a0a-22533f6f2591]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:39 compute-0 nova_compute[248510]: 2025-12-13 08:32:39.210 248514 DEBUG oslo_concurrency.lockutils [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:39 compute-0 nova_compute[248510]: 2025-12-13 08:32:39.217 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614744.2162104, dc076c88-fe0b-4674-ac32-fb22420b78bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:39 compute-0 nova_compute[248510]: 2025-12-13 08:32:39.219 248514 INFO nova.compute.manager [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] VM Stopped (Lifecycle Event)
Dec 13 08:32:39 compute-0 nova_compute[248510]: 2025-12-13 08:32:39.233 248514 INFO nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance shutdown successfully after 13 seconds.
Dec 13 08:32:39 compute-0 nova_compute[248510]: 2025-12-13 08:32:39.242 248514 INFO nova.virt.libvirt.driver [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance destroyed successfully.
Dec 13 08:32:39 compute-0 nova_compute[248510]: 2025-12-13 08:32:39.242 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'numa_topology' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 535 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.1 MiB/s wr, 226 op/s
Dec 13 08:32:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:32:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:32:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:32:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:32:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:32:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.143 248514 DEBUG nova.compute.manager [None req-9386d99e-eccc-4eec-bd52-f3a1048eeeac - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.149 248514 INFO nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Attempting rescue
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.150 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.156 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.157 248514 INFO nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Creating image(s)
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.185 248514 DEBUG nova.storage.rbd_utils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.190 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.273 248514 DEBUG nova.storage.rbd_utils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.296 248514 DEBUG nova.storage.rbd_utils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.301 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.403 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.408 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.409 248514 DEBUG oslo_concurrency.lockutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.410 248514 DEBUG oslo_concurrency.lockutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.410 248514 DEBUG oslo_concurrency.lockutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.434 248514 DEBUG nova.storage.rbd_utils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.438 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:40 compute-0 ceph-mon[76537]: pgmap v2062: 321 pgs: 321 active+clean; 535 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.1 MiB/s wr, 226 op/s
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.783 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.784 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'migration_context' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.885 248514 DEBUG nova.compute.manager [req-ee51b5fd-fb61-4919-8089-b662ffe04f48 req-a7be5b12-9362-405f-b1db-0d732c95aeea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-unplugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.886 248514 DEBUG oslo_concurrency.lockutils [req-ee51b5fd-fb61-4919-8089-b662ffe04f48 req-a7be5b12-9362-405f-b1db-0d732c95aeea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.886 248514 DEBUG oslo_concurrency.lockutils [req-ee51b5fd-fb61-4919-8089-b662ffe04f48 req-a7be5b12-9362-405f-b1db-0d732c95aeea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.887 248514 DEBUG oslo_concurrency.lockutils [req-ee51b5fd-fb61-4919-8089-b662ffe04f48 req-a7be5b12-9362-405f-b1db-0d732c95aeea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.887 248514 DEBUG nova.compute.manager [req-ee51b5fd-fb61-4919-8089-b662ffe04f48 req-a7be5b12-9362-405f-b1db-0d732c95aeea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] No waiting events found dispatching network-vif-unplugged-533958d1-8a74-4963-9731-40767b4bb127 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.887 248514 WARNING nova.compute.manager [req-ee51b5fd-fb61-4919-8089-b662ffe04f48 req-a7be5b12-9362-405f-b1db-0d732c95aeea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received unexpected event network-vif-unplugged-533958d1-8a74-4963-9731-40767b4bb127 for instance with vm_state stopped and task_state None.
Dec 13 08:32:40 compute-0 ovn_controller[148476]: 2025-12-13T08:32:40Z|00696|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:32:40 compute-0 ovn_controller[148476]: 2025-12-13T08:32:40Z|00697|binding|INFO|Releasing lport c8e8a31b-a5fe-4e2d-bc19-65995078988f from this chassis (sb_readonly=0)
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.918 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.920 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Start _get_guest_xml network_info=[{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1650812198-network", "vif_mac": "fa:16:3e:13:4d:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.921 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'resources' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:40 compute-0 nova_compute[248510]: 2025-12-13 08:32:40.968 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.022 248514 WARNING nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.035 248514 DEBUG nova.virt.libvirt.host [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.036 248514 DEBUG nova.virt.libvirt.host [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.039 248514 DEBUG nova.virt.libvirt.host [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.040 248514 DEBUG nova.virt.libvirt.host [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.040 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.041 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.041 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.041 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.042 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.042 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.042 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.042 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.043 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.043 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.043 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.043 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.044 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.063 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:32:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 535 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 207 op/s
Dec 13 08:32:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:41 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3564628827' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.705 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:41 compute-0 nova_compute[248510]: 2025-12-13 08:32:41.706 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3564628827' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1542106509' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:42 compute-0 nova_compute[248510]: 2025-12-13 08:32:42.387 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:42 compute-0 nova_compute[248510]: 2025-12-13 08:32:42.389 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661846961' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:43 compute-0 ceph-mon[76537]: pgmap v2063: 321 pgs: 321 active+clean; 535 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 207 op/s
Dec 13 08:32:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1542106509' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.057 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.060 248514 DEBUG nova.virt.libvirt.vif [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-607419756',display_name='tempest-ServerRescueTestJSON-server-607419756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-607419756',id=71,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-h6adjjd8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-1425963100-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:19Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=c5edbf88-6361-407a-a0f1-c133f70b50e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1650812198-network", "vif_mac": "fa:16:3e:13:4d:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.060 248514 DEBUG nova.network.os_vif_util [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1650812198-network", "vif_mac": "fa:16:3e:13:4d:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.061 248514 DEBUG nova.network.os_vif_util [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.062 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.102 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:32:43 compute-0 nova_compute[248510]:   <uuid>c5edbf88-6361-407a-a0f1-c133f70b50e9</uuid>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   <name>instance-00000047</name>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerRescueTestJSON-server-607419756</nova:name>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:32:41</nova:creationTime>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <nova:user uuid="93eec08d500a4f03afb3281e9899bd6a">tempest-ServerRescueTestJSON-1425963100-project-member</nova:user>
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <nova:project uuid="71e2453379684f0ca0563f8c370ea4a3">tempest-ServerRescueTestJSON-1425963100</nova:project>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <nova:port uuid="2bdcea64-a01f-4a75-b664-9c9c971533a6">
Dec 13 08:32:43 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <system>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <entry name="serial">c5edbf88-6361-407a-a0f1-c133f70b50e9</entry>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <entry name="uuid">c5edbf88-6361-407a-a0f1-c133f70b50e9</entry>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     </system>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   <os>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   </os>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   <features>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   </features>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue">
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c5edbf88-6361-407a-a0f1-c133f70b50e9_disk">
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <target dev="vdb" bus="virtio"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config.rescue">
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:43 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:13:4d:d0"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <target dev="tap2bdcea64-a0"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/console.log" append="off"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <video>
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     </video>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:32:43 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:32:43 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:32:43 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:32:43 compute-0 nova_compute[248510]: </domain>
Dec 13 08:32:43 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.111 248514 INFO nova.virt.libvirt.driver [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance destroyed successfully.
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.212 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.213 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.213 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.213 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No VIF found with MAC fa:16:3e:13:4d:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.214 248514 INFO nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Using config drive
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.235 248514 DEBUG nova.storage.rbd_utils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.268 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.305 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'keypairs' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 553 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.9 MiB/s wr, 211 op/s
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.651 248514 INFO nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Rebuilding instance
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.810 248514 INFO nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Creating config drive at /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config.rescue
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.815 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplhwsqjta execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.962 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplhwsqjta" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:43 compute-0 nova_compute[248510]: 2025-12-13 08:32:43.998 248514 DEBUG nova.storage.rbd_utils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.003 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config.rescue c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/661846961' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:44 compute-0 ceph-mon[76537]: pgmap v2064: 321 pgs: 321 active+clean; 553 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.9 MiB/s wr, 211 op/s
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.374 248514 DEBUG nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.375 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.375 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.376 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.376 248514 DEBUG nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] No waiting events found dispatching network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.376 248514 WARNING nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received unexpected event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 for instance with vm_state stopped and task_state rebuilding.
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.376 248514 DEBUG nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.377 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.377 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.377 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.377 248514 DEBUG nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.378 248514 WARNING nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state active and task_state rescuing.
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.378 248514 DEBUG nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.378 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.378 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.379 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.379 248514 DEBUG nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.379 248514 WARNING nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state active and task_state rescuing.
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.426 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.451 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.509 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'pci_requests' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.526 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.546 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'resources' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.562 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'migration_context' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.605 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.611 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance already shutdown.
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.617 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance destroyed successfully.
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.624 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance destroyed successfully.
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.625 248514 DEBUG nova.virt.libvirt.vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1505100715',display_name='tempest-tempest.common.compute-instance-1505100715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1505100715',id=72,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-kk2o7y6y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:42Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=5d34feed-2663-4e17-b951-65a37bd3a275,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.626 248514 DEBUG nova.network.os_vif_util [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.627 248514 DEBUG nova.network.os_vif_util [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.628 248514 DEBUG os_vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.631 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.632 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap533958d1-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.647 248514 INFO os_vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a')
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.773 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config.rescue c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.770s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.773 248514 INFO nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Deleting local config drive /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config.rescue because it was imported into RBD.
Dec 13 08:32:44 compute-0 kernel: tap2bdcea64-a0: entered promiscuous mode
Dec 13 08:32:44 compute-0 NetworkManager[50376]: <info>  [1765614764.8176] manager: (tap2bdcea64-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Dec 13 08:32:44 compute-0 ovn_controller[148476]: 2025-12-13T08:32:44Z|00698|binding|INFO|Claiming lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 for this chassis.
Dec 13 08:32:44 compute-0 ovn_controller[148476]: 2025-12-13T08:32:44Z|00699|binding|INFO|2bdcea64-a01f-4a75-b664-9c9c971533a6: Claiming fa:16:3e:13:4d:d0 10.100.0.6
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.818 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:44.826 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:4d:d0 10.100.0.6'], port_security=['fa:16:3e:13:4d:d0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5edbf88-6361-407a-a0f1-c133f70b50e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2bdcea64-a01f-4a75-b664-9c9c971533a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:44.827 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2bdcea64-a01f-4a75-b664-9c9c971533a6 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 bound to our chassis
Dec 13 08:32:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:44.829 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:32:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:44.830 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[063f0f72-2a06-4b56-876c-d465b5872341]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:44 compute-0 ovn_controller[148476]: 2025-12-13T08:32:44Z|00700|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 ovn-installed in OVS
Dec 13 08:32:44 compute-0 ovn_controller[148476]: 2025-12-13T08:32:44Z|00701|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 up in Southbound
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.837 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:44 compute-0 nova_compute[248510]: 2025-12-13 08:32:44.841 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:44 compute-0 systemd-udevd[318374]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:32:44 compute-0 systemd-machined[210538]: New machine qemu-87-instance-00000047.
Dec 13 08:32:44 compute-0 NetworkManager[50376]: <info>  [1765614764.8610] device (tap2bdcea64-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:32:44 compute-0 NetworkManager[50376]: <info>  [1765614764.8623] device (tap2bdcea64-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:32:44 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-00000047.
Dec 13 08:32:45 compute-0 nova_compute[248510]: 2025-12-13 08:32:45.405 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 581 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 212 op/s
Dec 13 08:32:45 compute-0 nova_compute[248510]: 2025-12-13 08:32:45.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:32:45 compute-0 nova_compute[248510]: 2025-12-13 08:32:45.902 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for c5edbf88-6361-407a-a0f1-c133f70b50e9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:32:45 compute-0 nova_compute[248510]: 2025-12-13 08:32:45.904 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614765.9022048, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:45 compute-0 nova_compute[248510]: 2025-12-13 08:32:45.904 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Resumed (Lifecycle Event)
Dec 13 08:32:45 compute-0 nova_compute[248510]: 2025-12-13 08:32:45.911 248514 DEBUG nova.compute.manager [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:45 compute-0 nova_compute[248510]: 2025-12-13 08:32:45.941 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:45 compute-0 nova_compute[248510]: 2025-12-13 08:32:45.945 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:45 compute-0 nova_compute[248510]: 2025-12-13 08:32:45.976 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 13 08:32:45 compute-0 nova_compute[248510]: 2025-12-13 08:32:45.977 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614765.9077804, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:45 compute-0 nova_compute[248510]: 2025-12-13 08:32:45.977 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Started (Lifecycle Event)
Dec 13 08:32:45 compute-0 nova_compute[248510]: 2025-12-13 08:32:45.999 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.003 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.582 248514 DEBUG nova.compute.manager [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.583 248514 DEBUG oslo_concurrency.lockutils [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.583 248514 DEBUG oslo_concurrency.lockutils [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.583 248514 DEBUG oslo_concurrency.lockutils [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.584 248514 DEBUG nova.compute.manager [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.584 248514 WARNING nova.compute.manager [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state rescued and task_state None.
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.584 248514 DEBUG nova.compute.manager [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.584 248514 DEBUG oslo_concurrency.lockutils [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.585 248514 DEBUG oslo_concurrency.lockutils [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.585 248514 DEBUG oslo_concurrency.lockutils [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.585 248514 DEBUG nova.compute.manager [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:46 compute-0 nova_compute[248510]: 2025-12-13 08:32:46.586 248514 WARNING nova.compute.manager [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state rescued and task_state None.
Dec 13 08:32:47 compute-0 ceph-mon[76537]: pgmap v2065: 321 pgs: 321 active+clean; 581 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 212 op/s
Dec 13 08:32:47 compute-0 nova_compute[248510]: 2025-12-13 08:32:47.394 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:32:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 581 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 641 KiB/s rd, 2.9 MiB/s wr, 114 op/s
Dec 13 08:32:47 compute-0 nova_compute[248510]: 2025-12-13 08:32:47.759 248514 INFO nova.compute.manager [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Unrescuing
Dec 13 08:32:47 compute-0 nova_compute[248510]: 2025-12-13 08:32:47.760 248514 DEBUG oslo_concurrency.lockutils [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:32:47 compute-0 nova_compute[248510]: 2025-12-13 08:32:47.760 248514 DEBUG oslo_concurrency.lockutils [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquired lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:32:47 compute-0 nova_compute[248510]: 2025-12-13 08:32:47.760 248514 DEBUG nova.network.neutron [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:32:47 compute-0 nova_compute[248510]: 2025-12-13 08:32:47.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:32:47 compute-0 nova_compute[248510]: 2025-12-13 08:32:47.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:32:47 compute-0 nova_compute[248510]: 2025-12-13 08:32:47.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:32:47 compute-0 nova_compute[248510]: 2025-12-13 08:32:47.997 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:32:47 compute-0 nova_compute[248510]: 2025-12-13 08:32:47.998 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:32:47 compute-0 nova_compute[248510]: 2025-12-13 08:32:47.998 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:32:47 compute-0 nova_compute[248510]: 2025-12-13 08:32:47.998 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:48 compute-0 ceph-mon[76537]: pgmap v2066: 321 pgs: 321 active+clean; 581 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 641 KiB/s rd, 2.9 MiB/s wr, 114 op/s
Dec 13 08:32:48 compute-0 nova_compute[248510]: 2025-12-13 08:32:48.780 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deleting instance files /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275_del
Dec 13 08:32:48 compute-0 nova_compute[248510]: 2025-12-13 08:32:48.782 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deletion of /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275_del complete
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.398 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.399 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Creating image(s)
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.427 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.455 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.486 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.490 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.566 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.567 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.569 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.569 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 547 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.2 MiB/s wr, 196 op/s
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.590 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.597 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 5d34feed-2663-4e17-b951-65a37bd3a275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.631 248514 DEBUG nova.network.neutron [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Updating instance_info_cache with network_info: [{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:32:49 compute-0 nova_compute[248510]: 2025-12-13 08:32:49.673 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.081 248514 DEBUG oslo_concurrency.lockutils [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Releasing lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.084 248514 DEBUG nova.objects.instance [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'flavor' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.336 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 5d34feed-2663-4e17-b951-65a37bd3a275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.739s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.395 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] resizing rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.424 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:50 compute-0 ovn_controller[148476]: 2025-12-13T08:32:50Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:10:dc 10.100.0.13
Dec 13 08:32:50 compute-0 ovn_controller[148476]: 2025-12-13T08:32:50Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:10:dc 10.100.0.13
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.546 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.547 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Ensure instance console log exists: /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.548 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.548 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.549 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.553 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Start _get_guest_xml network_info=[{"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.559 248514 WARNING nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.566 248514 DEBUG nova.virt.libvirt.host [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.567 248514 DEBUG nova.virt.libvirt.host [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.570 248514 DEBUG nova.virt.libvirt.host [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.571 248514 DEBUG nova.virt.libvirt.host [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.572 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.572 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.573 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.573 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.574 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.574 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.575 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.575 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.575 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.576 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.576 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.577 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:32:50 compute-0 nova_compute[248510]: 2025-12-13 08:32:50.577 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:50 compute-0 ceph-mon[76537]: pgmap v2067: 321 pgs: 321 active+clean; 547 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.2 MiB/s wr, 196 op/s
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.143 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:32:51 compute-0 kernel: tap2bdcea64-a0 (unregistering): left promiscuous mode
Dec 13 08:32:51 compute-0 NetworkManager[50376]: <info>  [1765614771.4005] device (tap2bdcea64-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:32:51 compute-0 ovn_controller[148476]: 2025-12-13T08:32:51Z|00702|binding|INFO|Releasing lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 from this chassis (sb_readonly=0)
Dec 13 08:32:51 compute-0 ovn_controller[148476]: 2025-12-13T08:32:51Z|00703|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 down in Southbound
Dec 13 08:32:51 compute-0 ovn_controller[148476]: 2025-12-13T08:32:51Z|00704|binding|INFO|Removing iface tap2bdcea64-a0 ovn-installed in OVS
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:51 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000047.scope: Deactivated successfully.
Dec 13 08:32:51 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000047.scope: Consumed 5.297s CPU time.
Dec 13 08:32:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.455 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:4d:d0 10.100.0.6'], port_security=['fa:16:3e:13:4d:d0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5edbf88-6361-407a-a0f1-c133f70b50e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2bdcea64-a01f-4a75-b664-9c9c971533a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.456 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2bdcea64-a01f-4a75-b664-9c9c971533a6 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 unbound from our chassis
Dec 13 08:32:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.458 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:32:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.459 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4a5a36-830b-4230-b8f5-8deb211455d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:51 compute-0 systemd-machined[210538]: Machine qemu-87-instance-00000047 terminated.
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.542 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.559 248514 INFO nova.virt.libvirt.driver [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance destroyed successfully.
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.561 248514 DEBUG nova.objects.instance [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'numa_topology' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 523 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Dec 13 08:32:51 compute-0 kernel: tap2bdcea64-a0: entered promiscuous mode
Dec 13 08:32:51 compute-0 systemd-udevd[318635]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:32:51 compute-0 NetworkManager[50376]: <info>  [1765614771.6838] manager: (tap2bdcea64-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Dec 13 08:32:51 compute-0 ovn_controller[148476]: 2025-12-13T08:32:51Z|00705|binding|INFO|Claiming lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 for this chassis.
Dec 13 08:32:51 compute-0 ovn_controller[148476]: 2025-12-13T08:32:51Z|00706|binding|INFO|2bdcea64-a01f-4a75-b664-9c9c971533a6: Claiming fa:16:3e:13:4d:d0 10.100.0.6
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.686 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.692 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:32:51 compute-0 NetworkManager[50376]: <info>  [1765614771.6961] device (tap2bdcea64-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:32:51 compute-0 NetworkManager[50376]: <info>  [1765614771.6969] device (tap2bdcea64-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.706 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:51 compute-0 ovn_controller[148476]: 2025-12-13T08:32:51Z|00707|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 ovn-installed in OVS
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.711 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:51 compute-0 ovn_controller[148476]: 2025-12-13T08:32:51Z|00708|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 up in Southbound
Dec 13 08:32:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.723 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:4d:d0 10.100.0.6'], port_security=['fa:16:3e:13:4d:d0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5edbf88-6361-407a-a0f1-c133f70b50e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2bdcea64-a01f-4a75-b664-9c9c971533a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.725 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2bdcea64-a01f-4a75-b664-9c9c971533a6 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 bound to our chassis
Dec 13 08:32:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.726 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:32:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.727 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7925d9c3-8eae-4ed1-bade-45f0ac49c907]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:51 compute-0 systemd-machined[210538]: New machine qemu-88-instance-00000047.
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.740 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.740 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.741 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:32:51 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-00000047.
Dec 13 08:32:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3348443458' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.781 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.812 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:51 compute-0 nova_compute[248510]: 2025-12-13 08:32:51.817 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3348443458' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.284 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for c5edbf88-6361-407a-a0f1-c133f70b50e9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.286 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614772.28383, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.287 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Resumed (Lifecycle Event)
Dec 13 08:32:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:32:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3078105784' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.449 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.451 248514 DEBUG nova.virt.libvirt.vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1505100715',display_name='tempest-tempest.common.compute-instance-1505100715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1505100715',id=72,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-kk2o7y6y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-
ServerActionsTestOtherA-1325599242-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:49Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=5d34feed-2663-4e17-b951-65a37bd3a275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.451 248514 DEBUG nova.network.os_vif_util [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.452 248514 DEBUG nova.network.os_vif_util [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.455 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:32:52 compute-0 nova_compute[248510]:   <uuid>5d34feed-2663-4e17-b951-65a37bd3a275</uuid>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   <name>instance-00000048</name>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <nova:name>tempest-tempest.common.compute-instance-1505100715</nova:name>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:32:50</nova:creationTime>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:32:52 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:32:52 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:32:52 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:32:52 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:32:52 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:32:52 compute-0 nova_compute[248510]:         <nova:user uuid="5b310bdebec646949fad4ea1821b4c3f">tempest-ServerActionsTestOtherA-1325599242-project-member</nova:user>
Dec 13 08:32:52 compute-0 nova_compute[248510]:         <nova:project uuid="b4d2999518df4b9f8ccbabe38976dc3c">tempest-ServerActionsTestOtherA-1325599242</nova:project>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:32:52 compute-0 nova_compute[248510]:         <nova:port uuid="533958d1-8a74-4963-9731-40767b4bb127">
Dec 13 08:32:52 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <system>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <entry name="serial">5d34feed-2663-4e17-b951-65a37bd3a275</entry>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <entry name="uuid">5d34feed-2663-4e17-b951-65a37bd3a275</entry>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     </system>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   <os>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   </os>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   <features>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   </features>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5d34feed-2663-4e17-b951-65a37bd3a275_disk">
Dec 13 08:32:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5d34feed-2663-4e17-b951-65a37bd3a275_disk.config">
Dec 13 08:32:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       </source>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:32:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:66:ab:b9"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <target dev="tap533958d1-8a"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/console.log" append="off"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <video>
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     </video>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:32:52 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:32:52 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:32:52 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:32:52 compute-0 nova_compute[248510]: </domain>
Dec 13 08:32:52 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.455 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Preparing to wait for external event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.455 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.456 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.456 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.457 248514 DEBUG nova.virt.libvirt.vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1505100715',display_name='tempest-tempest.common.compute-instance-1505100715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1505100715',id=72,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-kk2o7y6y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:49Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=5d34feed-2663-4e17-b951-65a37bd3a275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.457 248514 DEBUG nova.network.os_vif_util [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.458 248514 DEBUG nova.network.os_vif_util [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.458 248514 DEBUG os_vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.459 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.459 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.460 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.465 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap533958d1-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.466 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap533958d1-8a, col_values=(('external_ids', {'iface-id': '533958d1-8a74-4963-9731-40767b4bb127', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:ab:b9', 'vm-uuid': '5d34feed-2663-4e17-b951-65a37bd3a275'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:52 compute-0 NetworkManager[50376]: <info>  [1765614772.4701] manager: (tap533958d1-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.477 248514 INFO os_vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a')
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.618 248514 DEBUG nova.compute.manager [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:32:52 compute-0 nova_compute[248510]: 2025-12-13 08:32:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:32:52 compute-0 ceph-mon[76537]: pgmap v2068: 321 pgs: 321 active+clean; 523 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Dec 13 08:32:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3078105784' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:32:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 528 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Dec 13 08:32:53 compute-0 nova_compute[248510]: 2025-12-13 08:32:53.684 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614758.6838434, 5d34feed-2663-4e17-b951-65a37bd3a275 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:53 compute-0 nova_compute[248510]: 2025-12-13 08:32:53.685 248514 INFO nova.compute.manager [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Stopped (Lifecycle Event)
Dec 13 08:32:54 compute-0 sudo[318773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:32:54 compute-0 sudo[318773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:32:54 compute-0 sudo[318773]: pam_unix(sudo:session): session closed for user root
Dec 13 08:32:54 compute-0 sudo[318801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:32:54 compute-0 sudo[318801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:32:54 compute-0 podman[318799]: 2025-12-13 08:32:54.70652776 +0000 UTC m=+0.074592953 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:32:54 compute-0 podman[318798]: 2025-12-13 08:32:54.735253814 +0000 UTC m=+0.103493551 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:32:54 compute-0 podman[318797]: 2025-12-13 08:32:54.770375826 +0000 UTC m=+0.137336812 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 08:32:54 compute-0 ceph-mon[76537]: pgmap v2069: 321 pgs: 321 active+clean; 528 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.064 248514 DEBUG nova.compute.manager [req-ae46d4e3-83ab-47a1-8fc7-415d5ed1efd7 req-c28648b3-5e52-4c2c-81f1-e1d96f652583 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.065 248514 DEBUG oslo_concurrency.lockutils [req-ae46d4e3-83ab-47a1-8fc7-415d5ed1efd7 req-c28648b3-5e52-4c2c-81f1-e1d96f652583 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.066 248514 DEBUG oslo_concurrency.lockutils [req-ae46d4e3-83ab-47a1-8fc7-415d5ed1efd7 req-c28648b3-5e52-4c2c-81f1-e1d96f652583 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.066 248514 DEBUG oslo_concurrency.lockutils [req-ae46d4e3-83ab-47a1-8fc7-415d5ed1efd7 req-c28648b3-5e52-4c2c-81f1-e1d96f652583 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.066 248514 DEBUG nova.compute.manager [req-ae46d4e3-83ab-47a1-8fc7-415d5ed1efd7 req-c28648b3-5e52-4c2c-81f1-e1d96f652583 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.067 248514 WARNING nova.compute.manager [req-ae46d4e3-83ab-47a1-8fc7-415d5ed1efd7 req-c28648b3-5e52-4c2c-81f1-e1d96f652583 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state rescued and task_state unrescuing.
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.112 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.113 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.113 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.113 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.114 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.150 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.155 248514 DEBUG nova.compute.manager [None req-989d6202-c2ac-4efa-a641-3dc013fd9c98 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.160 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.165 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.165 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.166 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No VIF found with MAC fa:16:3e:66:ab:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.167 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Using config drive
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.201 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.213 248514 DEBUG nova.compute.manager [None req-989d6202-c2ac-4efa-a641-3dc013fd9c98 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.254 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.259 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614772.286051, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.259 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Started (Lifecycle Event)
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.260 248514 INFO nova.compute.manager [None req-989d6202-c2ac-4efa-a641-3dc013fd9c98 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.358 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.362 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'keypairs' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.370 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:55 compute-0 sudo[318801]: pam_unix(sudo:session): session closed for user root
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.409 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:55.413 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:55.415 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:55.417 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:55 compute-0 sudo[318952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:32:55 compute-0 sudo[318952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:32:55 compute-0 sudo[318952]: pam_unix(sudo:session): session closed for user root
Dec 13 08:32:55 compute-0 sudo[318977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Dec 13 08:32:55 compute-0 sudo[318977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:32:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 534 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.1 MiB/s wr, 285 op/s
Dec 13 08:32:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:32:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2511849649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:55 compute-0 nova_compute[248510]: 2025-12-13 08:32:55.720 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:55 compute-0 sudo[318977]: pam_unix(sudo:session): session closed for user root
Dec 13 08:32:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:32:55 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:32:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:32:55 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:32:56 compute-0 ceph-mon[76537]: pgmap v2070: 321 pgs: 321 active+clean; 534 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.1 MiB/s wr, 285 op/s
Dec 13 08:32:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2511849649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:56 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:32:56 compute-0 sudo[319024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:32:56 compute-0 sudo[319024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:32:56 compute-0 sudo[319024]: pam_unix(sudo:session): session closed for user root
Dec 13 08:32:56 compute-0 sudo[319049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- inventory --format=json-pretty --filter-for-batch
Dec 13 08:32:56 compute-0 sudo[319049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:32:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:32:56 compute-0 podman[319087]: 2025-12-13 08:32:56.39330302 +0000 UTC m=+0.025850723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:32:56 compute-0 podman[319087]: 2025-12-13 08:32:56.492608426 +0000 UTC m=+0.125156149 container create e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:32:56 compute-0 nova_compute[248510]: 2025-12-13 08:32:56.594 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Creating config drive at /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config
Dec 13 08:32:56 compute-0 nova_compute[248510]: 2025-12-13 08:32:56.605 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplbn39prn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:56 compute-0 systemd[1]: Started libpod-conmon-e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0.scope.
Dec 13 08:32:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:32:56 compute-0 podman[319087]: 2025-12-13 08:32:56.713687566 +0000 UTC m=+0.346235269 container init e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 08:32:56 compute-0 podman[319087]: 2025-12-13 08:32:56.725298965 +0000 UTC m=+0.357846648 container start e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 08:32:56 compute-0 systemd[1]: libpod-e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0.scope: Deactivated successfully.
Dec 13 08:32:56 compute-0 quizzical_visvesvaraya[319107]: 167 167
Dec 13 08:32:56 compute-0 podman[319087]: 2025-12-13 08:32:56.739654891 +0000 UTC m=+0.372202574 container attach e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 08:32:56 compute-0 conmon[319107]: conmon e3fc15f770ee55958587 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0.scope/container/memory.events
Dec 13 08:32:56 compute-0 podman[319087]: 2025-12-13 08:32:56.741866276 +0000 UTC m=+0.374413959 container died e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:32:56 compute-0 nova_compute[248510]: 2025-12-13 08:32:56.758 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplbn39prn" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:56 compute-0 nova_compute[248510]: 2025-12-13 08:32:56.785 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:32:56 compute-0 nova_compute[248510]: 2025-12-13 08:32:56.790 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bb7bd5d9ea982ce7c440cf2db7fcdd94f56bba2a8ce7d938eda21b0711740e8-merged.mount: Deactivated successfully.
Dec 13 08:32:57 compute-0 podman[319087]: 2025-12-13 08:32:57.010898957 +0000 UTC m=+0.643446660 container remove e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:32:57 compute-0 systemd[1]: libpod-conmon-e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0.scope: Deactivated successfully.
Dec 13 08:32:57 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:32:57 compute-0 podman[319166]: 2025-12-13 08:32:57.265797146 +0000 UTC m=+0.065762214 container create e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_kirch, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.306 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.308 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deleting local config drive /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config because it was imported into RBD.
Dec 13 08:32:57 compute-0 podman[319166]: 2025-12-13 08:32:57.226813558 +0000 UTC m=+0.026778646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:32:57 compute-0 systemd[1]: Started libpod-conmon-e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427.scope.
Dec 13 08:32:57 compute-0 kernel: tap533958d1-8a: entered promiscuous mode
Dec 13 08:32:57 compute-0 NetworkManager[50376]: <info>  [1765614777.3721] manager: (tap533958d1-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/311)
Dec 13 08:32:57 compute-0 ovn_controller[148476]: 2025-12-13T08:32:57Z|00709|binding|INFO|Claiming lport 533958d1-8a74-4963-9731-40767b4bb127 for this chassis.
Dec 13 08:32:57 compute-0 ovn_controller[148476]: 2025-12-13T08:32:57Z|00710|binding|INFO|533958d1-8a74-4963-9731-40767b4bb127: Claiming fa:16:3e:66:ab:b9 10.100.0.5
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18d5db5659cd996d58892e016e1a0114f85c56b4e6c64b13470a2c8eec1d01b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18d5db5659cd996d58892e016e1a0114f85c56b4e6c64b13470a2c8eec1d01b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18d5db5659cd996d58892e016e1a0114f85c56b4e6c64b13470a2c8eec1d01b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18d5db5659cd996d58892e016e1a0114f85c56b4e6c64b13470a2c8eec1d01b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:32:57 compute-0 ovn_controller[148476]: 2025-12-13T08:32:57Z|00711|binding|INFO|Setting lport 533958d1-8a74-4963-9731-40767b4bb127 ovn-installed in OVS
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:57 compute-0 systemd-machined[210538]: New machine qemu-89-instance-00000048.
Dec 13 08:32:57 compute-0 systemd-udevd[319199]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:32:57 compute-0 NetworkManager[50376]: <info>  [1765614777.4437] device (tap533958d1-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:32:57 compute-0 NetworkManager[50376]: <info>  [1765614777.4445] device (tap533958d1-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:32:57 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-00000048.
Dec 13 08:32:57 compute-0 podman[319166]: 2025-12-13 08:32:57.457427785 +0000 UTC m=+0.257392853 container init e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 08:32:57 compute-0 podman[319166]: 2025-12-13 08:32:57.466643914 +0000 UTC m=+0.266608982 container start e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_kirch, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:57 compute-0 podman[319166]: 2025-12-13 08:32:57.534825267 +0000 UTC m=+0.334790335 container attach e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_kirch, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:32:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 534 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 269 op/s
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.656 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:ab:b9 10.100.0.5'], port_security=['fa:16:3e:66:ab:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d34feed-2663-4e17-b951-65a37bd3a275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '45afb483-a012-4442-b20c-edd0f1f0f374', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=533958d1-8a74-4963-9731-40767b4bb127) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:57 compute-0 ovn_controller[148476]: 2025-12-13T08:32:57Z|00712|binding|INFO|Setting lport 533958d1-8a74-4963-9731-40767b4bb127 up in Southbound
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.660 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 533958d1-8a74-4963-9731-40767b4bb127 in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f bound to our chassis
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.662 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.680 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.681 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.685 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[40425b5c-b8ee-405c-b94b-ec7ad47a5bff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.686 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.686 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.694 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.694 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.700 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.700 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.701 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.704 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.704 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.710 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.710 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.725 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c9afc4-89ca-435b-8d7f-8d50057fceb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.729 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca15c4c-1d08-48f2-9459-e6f579a6e1d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.765 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f390372b-f2f5-4842-8b84-2926b67bf2f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.786 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6973906-16f0-4322-88c5-cec79f64fef2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319215, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.809 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[412a930c-3c2c-498c-a173-0cb89b738edf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727416, 'tstamp': 727416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319216, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727421, 'tstamp': 727421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319216, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.812 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.815 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.815 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.816 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.816 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.935 248514 DEBUG nova.compute.manager [req-ccba2215-9a1e-4d80-abdc-66fee0ccdf8c req-10f3efc6-16ac-43fd-a5a5-fcde704f7e56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.937 248514 DEBUG oslo_concurrency.lockutils [req-ccba2215-9a1e-4d80-abdc-66fee0ccdf8c req-10f3efc6-16ac-43fd-a5a5-fcde704f7e56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.937 248514 DEBUG oslo_concurrency.lockutils [req-ccba2215-9a1e-4d80-abdc-66fee0ccdf8c req-10f3efc6-16ac-43fd-a5a5-fcde704f7e56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.938 248514 DEBUG oslo_concurrency.lockutils [req-ccba2215-9a1e-4d80-abdc-66fee0ccdf8c req-10f3efc6-16ac-43fd-a5a5-fcde704f7e56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.938 248514 DEBUG nova.compute.manager [req-ccba2215-9a1e-4d80-abdc-66fee0ccdf8c req-10f3efc6-16ac-43fd-a5a5-fcde704f7e56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.938 248514 WARNING nova.compute.manager [req-ccba2215-9a1e-4d80-abdc-66fee0ccdf8c req-10f3efc6-16ac-43fd-a5a5-fcde704f7e56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state active and task_state None.
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.977 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.978 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3027MB free_disk=59.72048218920827GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.979 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:57 compute-0 nova_compute[248510]: 2025-12-13 08:32:57.979 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.093 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614778.0930796, 5d34feed-2663-4e17-b951-65a37bd3a275 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.094 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Started (Lifecycle Event)
Dec 13 08:32:58 compute-0 ceph-mon[76537]: pgmap v2071: 321 pgs: 321 active+clean; 534 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 269 op/s
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]: [
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:     {
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:         "available": false,
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:         "being_replaced": false,
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:         "ceph_device_lvm": false,
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:         "lsm_data": {},
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:         "lvs": [],
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:         "path": "/dev/sr0",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:         "rejected_reasons": [
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "Insufficient space (<5GB)",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "Has a FileSystem"
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:         ],
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:         "sys_api": {
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "actuators": null,
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "device_nodes": [
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:                 "sr0"
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             ],
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "devname": "sr0",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "human_readable_size": "482.00 KB",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "id_bus": "ata",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "model": "QEMU DVD-ROM",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "nr_requests": "2",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "parent": "/dev/sr0",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "partitions": {},
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "path": "/dev/sr0",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "removable": "1",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "rev": "2.5+",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "ro": "0",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "rotational": "1",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "sas_address": "",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "sas_device_handle": "",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "scheduler_mode": "mq-deadline",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "sectors": 0,
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "sectorsize": "2048",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "size": 493568.0,
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "support_discard": "2048",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "type": "disk",
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:             "vendor": "QEMU"
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:         }
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]:     }
Dec 13 08:32:58 compute-0 eloquent_kirch[319188]: ]
Dec 13 08:32:58 compute-0 systemd[1]: libpod-e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427.scope: Deactivated successfully.
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.373 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.377 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614778.0933862, 5d34feed-2663-4e17-b951-65a37bd3a275 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.377 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Paused (Lifecycle Event)
Dec 13 08:32:58 compute-0 podman[320131]: 2025-12-13 08:32:58.396929527 +0000 UTC m=+0.032379185 container died e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.397 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.398 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.398 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.399 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.399 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.402 248514 INFO nova.compute.manager [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Terminating instance
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.403 248514 DEBUG nova.compute.manager [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.406 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.411 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b18d5db5659cd996d58892e016e1a0114f85c56b4e6c64b13470a2c8eec1d01b-merged.mount: Deactivated successfully.
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.434 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3ced27d6-a2a8-4ce3-a7e7-494270418542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.434 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance a9c6de9d-63c0-43a5-9d6e-be356e504837 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.435 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance ce9adb21-8832-4d3e-867e-b0b49bdb6850 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.435 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c5edbf88-6361-407a-a0f1-c133f70b50e9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.435 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 5d34feed-2663-4e17-b951-65a37bd3a275 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.435 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance bb8c91ff-01cb-4fd5-ab69-005313784b57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.436 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.436 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.440 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:32:58 compute-0 kernel: tap2bdcea64-a0 (unregistering): left promiscuous mode
Dec 13 08:32:58 compute-0 NetworkManager[50376]: <info>  [1765614778.4505] device (tap2bdcea64-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:32:58 compute-0 ovn_controller[148476]: 2025-12-13T08:32:58Z|00713|binding|INFO|Releasing lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 from this chassis (sb_readonly=0)
Dec 13 08:32:58 compute-0 ovn_controller[148476]: 2025-12-13T08:32:58Z|00714|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 down in Southbound
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.453 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:58 compute-0 ovn_controller[148476]: 2025-12-13T08:32:58Z|00715|binding|INFO|Removing iface tap2bdcea64-a0 ovn-installed in OVS
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.457 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:58.463 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:4d:d0 10.100.0.6'], port_security=['fa:16:3e:13:4d:d0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5edbf88-6361-407a-a0f1-c133f70b50e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '8', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2bdcea64-a01f-4a75-b664-9c9c971533a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:32:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:58.465 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2bdcea64-a01f-4a75-b664-9c9c971533a6 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 unbound from our chassis
Dec 13 08:32:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:58.466 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:32:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:32:58.468 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[abdae7f8-9cee-4644-bd01-b3077cf791a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.479 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:58 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d00000047.scope: Deactivated successfully.
Dec 13 08:32:58 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d00000047.scope: Consumed 6.726s CPU time.
Dec 13 08:32:58 compute-0 systemd-machined[210538]: Machine qemu-88-instance-00000047 terminated.
Dec 13 08:32:58 compute-0 podman[320131]: 2025-12-13 08:32:58.510443156 +0000 UTC m=+0.145892794 container remove e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_kirch, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:32:58 compute-0 systemd[1]: libpod-conmon-e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427.scope: Deactivated successfully.
Dec 13 08:32:58 compute-0 sudo[319049]: pam_unix(sudo:session): session closed for user root
Dec 13 08:32:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:32:58 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:32:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:32:58 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:32:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:32:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:32:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:32:58 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.608 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:32:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:32:58 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:32:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:32:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:32:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:32:58 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:32:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:32:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.661 248514 INFO nova.virt.libvirt.driver [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance destroyed successfully.
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.662 248514 DEBUG nova.objects.instance [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'resources' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.684 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:32:58 compute-0 sudo[320156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:32:58 compute-0 sudo[320156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.705 248514 DEBUG nova.virt.libvirt.vif [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-607419756',display_name='tempest-ServerRescueTestJSON-server-607419756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-607419756',id=71,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-h6adjjd8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-1425963100-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:32:55Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=c5edbf88-6361-407a-a0f1-c133f70b50e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.706 248514 DEBUG nova.network.os_vif_util [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.707 248514 DEBUG nova.network.os_vif_util [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:32:58 compute-0 sudo[320156]: pam_unix(sudo:session): session closed for user root
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.708 248514 DEBUG os_vif [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.713 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bdcea64-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:32:58 compute-0 nova_compute[248510]: 2025-12-13 08:32:58.720 248514 INFO os_vif [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0')
Dec 13 08:32:58 compute-0 sudo[320203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:32:58 compute-0 sudo[320203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:32:59 compute-0 podman[320261]: 2025-12-13 08:32:59.191391897 +0000 UTC m=+0.104719592 container create 083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_aryabhata, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:32:59 compute-0 podman[320261]: 2025-12-13 08:32:59.11139391 +0000 UTC m=+0.024721595 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:32:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:32:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/677728074' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:59 compute-0 nova_compute[248510]: 2025-12-13 08:32:59.247 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:32:59 compute-0 nova_compute[248510]: 2025-12-13 08:32:59.254 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:32:59 compute-0 systemd[1]: Started libpod-conmon-083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d.scope.
Dec 13 08:32:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:32:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 534 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 291 op/s
Dec 13 08:32:59 compute-0 podman[320261]: 2025-12-13 08:32:59.662268331 +0000 UTC m=+0.575596026 container init 083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_aryabhata, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:32:59 compute-0 podman[320261]: 2025-12-13 08:32:59.67071168 +0000 UTC m=+0.584039345 container start 083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:32:59 compute-0 exciting_aryabhata[320279]: 167 167
Dec 13 08:32:59 compute-0 systemd[1]: libpod-083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d.scope: Deactivated successfully.
Dec 13 08:32:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:32:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:32:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:32:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:32:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:32:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:32:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:32:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:32:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/677728074' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:32:59 compute-0 podman[320261]: 2025-12-13 08:32:59.721373338 +0000 UTC m=+0.634701083 container attach 083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 08:32:59 compute-0 podman[320261]: 2025-12-13 08:32:59.722971788 +0000 UTC m=+0.636299473 container died 083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 08:32:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6da9fd8bc8eacf29ac568dd99aeee4338c425deaf00158873bf2e186b8913dfb-merged.mount: Deactivated successfully.
Dec 13 08:33:00 compute-0 podman[320261]: 2025-12-13 08:33:00.02019826 +0000 UTC m=+0.933525925 container remove 083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:33:00 compute-0 systemd[1]: libpod-conmon-083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d.scope: Deactivated successfully.
Dec 13 08:33:00 compute-0 nova_compute[248510]: 2025-12-13 08:33:00.179 248514 INFO nova.virt.libvirt.driver [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Deleting instance files /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9_del
Dec 13 08:33:00 compute-0 nova_compute[248510]: 2025-12-13 08:33:00.181 248514 INFO nova.virt.libvirt.driver [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Deletion of /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9_del complete
Dec 13 08:33:00 compute-0 podman[320305]: 2025-12-13 08:33:00.254851377 +0000 UTC m=+0.060215136 container create 6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 08:33:00 compute-0 systemd[1]: Started libpod-conmon-6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b.scope.
Dec 13 08:33:00 compute-0 podman[320305]: 2025-12-13 08:33:00.234913532 +0000 UTC m=+0.040277311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:33:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c52eecdc9094507c8104f170f4f5b87a9999f17e867e6bc89409d5835cad0b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c52eecdc9094507c8104f170f4f5b87a9999f17e867e6bc89409d5835cad0b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c52eecdc9094507c8104f170f4f5b87a9999f17e867e6bc89409d5835cad0b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c52eecdc9094507c8104f170f4f5b87a9999f17e867e6bc89409d5835cad0b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c52eecdc9094507c8104f170f4f5b87a9999f17e867e6bc89409d5835cad0b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:00 compute-0 podman[320305]: 2025-12-13 08:33:00.358586233 +0000 UTC m=+0.163950002 container init 6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:33:00 compute-0 podman[320305]: 2025-12-13 08:33:00.366781267 +0000 UTC m=+0.172145026 container start 6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 08:33:00 compute-0 podman[320305]: 2025-12-13 08:33:00.370992651 +0000 UTC m=+0.176356430 container attach 6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 08:33:00 compute-0 nova_compute[248510]: 2025-12-13 08:33:00.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:00 compute-0 great_colden[320322]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:33:00 compute-0 great_colden[320322]: --> All data devices are unavailable
Dec 13 08:33:00 compute-0 ceph-mon[76537]: pgmap v2072: 321 pgs: 321 active+clean; 534 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 291 op/s
Dec 13 08:33:00 compute-0 systemd[1]: libpod-6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b.scope: Deactivated successfully.
Dec 13 08:33:00 compute-0 podman[320305]: 2025-12-13 08:33:00.895472055 +0000 UTC m=+0.700835824 container died 6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c52eecdc9094507c8104f170f4f5b87a9999f17e867e6bc89409d5835cad0b6-merged.mount: Deactivated successfully.
Dec 13 08:33:01 compute-0 podman[320305]: 2025-12-13 08:33:01.132359358 +0000 UTC m=+0.937723157 container remove 6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:33:01 compute-0 systemd[1]: libpod-conmon-6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b.scope: Deactivated successfully.
Dec 13 08:33:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:33:01 compute-0 sudo[320203]: pam_unix(sudo:session): session closed for user root
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.242723) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614781242780, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 532, "num_deletes": 250, "total_data_size": 545480, "memory_usage": 555480, "flush_reason": "Manual Compaction"}
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Dec 13 08:33:01 compute-0 sudo[320356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:33:01 compute-0 sudo[320356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:33:01 compute-0 sudo[320356]: pam_unix(sudo:session): session closed for user root
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614781287500, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 526503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40055, "largest_seqno": 40586, "table_properties": {"data_size": 523564, "index_size": 911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6053, "raw_average_key_size": 16, "raw_value_size": 517704, "raw_average_value_size": 1395, "num_data_blocks": 40, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614752, "oldest_key_time": 1765614752, "file_creation_time": 1765614781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 44831 microseconds, and 2440 cpu microseconds.
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.287554) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 526503 bytes OK
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.287578) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.289533) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.289548) EVENT_LOG_v1 {"time_micros": 1765614781289543, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.289567) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 542418, prev total WAL file size 542418, number of live WAL files 2.
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.290139) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(514KB)], [89(10MB)]
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614781290370, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11083925, "oldest_snapshot_seqno": -1}
Dec 13 08:33:01 compute-0 sudo[320381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:33:01 compute-0 sudo[320381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6311 keys, 10321792 bytes, temperature: kUnknown
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614781373311, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10321792, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10277722, "index_size": 27192, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 163295, "raw_average_key_size": 25, "raw_value_size": 10162487, "raw_average_value_size": 1610, "num_data_blocks": 1077, "num_entries": 6311, "num_filter_entries": 6311, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.373647) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10321792 bytes
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.375168) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.4 rd, 124.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.1 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(40.7) write-amplify(19.6) OK, records in: 6823, records dropped: 512 output_compression: NoCompression
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.375185) EVENT_LOG_v1 {"time_micros": 1765614781375177, "job": 52, "event": "compaction_finished", "compaction_time_micros": 83092, "compaction_time_cpu_micros": 33349, "output_level": 6, "num_output_files": 1, "total_output_size": 10321792, "num_input_records": 6823, "num_output_records": 6311, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614781375375, "job": 52, "event": "table_file_deletion", "file_number": 91}
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614781377209, "job": 52, "event": "table_file_deletion", "file_number": 89}
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.289979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.377239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.377243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.377245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.377246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:33:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.377249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:33:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 535 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 224 op/s
Dec 13 08:33:01 compute-0 podman[320420]: 2025-12-13 08:33:01.717357686 +0000 UTC m=+0.072133142 container create 8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:33:01 compute-0 podman[320420]: 2025-12-13 08:33:01.67562909 +0000 UTC m=+0.030404566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:33:01 compute-0 systemd[1]: Started libpod-conmon-8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45.scope.
Dec 13 08:33:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:33:01 compute-0 kernel: tapd001c32a-bc (unregistering): left promiscuous mode
Dec 13 08:33:01 compute-0 NetworkManager[50376]: <info>  [1765614781.9666] device (tapd001c32a-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:33:01 compute-0 podman[320420]: 2025-12-13 08:33:01.976487331 +0000 UTC m=+0.331262807 container init 8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 08:33:01 compute-0 nova_compute[248510]: 2025-12-13 08:33:01.976 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:01 compute-0 ovn_controller[148476]: 2025-12-13T08:33:01Z|00716|binding|INFO|Releasing lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 from this chassis (sb_readonly=0)
Dec 13 08:33:01 compute-0 ovn_controller[148476]: 2025-12-13T08:33:01Z|00717|binding|INFO|Setting lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 down in Southbound
Dec 13 08:33:01 compute-0 ovn_controller[148476]: 2025-12-13T08:33:01Z|00718|binding|INFO|Removing iface tapd001c32a-bc ovn-installed in OVS
Dec 13 08:33:01 compute-0 nova_compute[248510]: 2025-12-13 08:33:01.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:01 compute-0 podman[320420]: 2025-12-13 08:33:01.986298265 +0000 UTC m=+0.341073721 container start 8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 08:33:01 compute-0 podman[320420]: 2025-12-13 08:33:01.991468013 +0000 UTC m=+0.346243519 container attach 8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:33:01 compute-0 systemd[1]: libpod-8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45.scope: Deactivated successfully.
Dec 13 08:33:01 compute-0 eloquent_bhaskara[320436]: 167 167
Dec 13 08:33:01 compute-0 conmon[320436]: conmon 8e614ed305013b613de1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45.scope/container/memory.events
Dec 13 08:33:01 compute-0 nova_compute[248510]: 2025-12-13 08:33:01.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:02 compute-0 podman[320445]: 2025-12-13 08:33:02.045494435 +0000 UTC m=+0.030636992 container died 8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 08:33:02 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Dec 13 08:33:02 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004a.scope: Consumed 15.901s CPU time.
Dec 13 08:33:02 compute-0 systemd-machined[210538]: Machine qemu-86-instance-0000004a terminated.
Dec 13 08:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a46cf14a40daa16884c103e54cb76efd65f460f4760bf80edfb426c00a0d706-merged.mount: Deactivated successfully.
Dec 13 08:33:02 compute-0 podman[320445]: 2025-12-13 08:33:02.209459927 +0000 UTC m=+0.194602454 container remove 8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 08:33:02 compute-0 systemd[1]: libpod-conmon-8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45.scope: Deactivated successfully.
Dec 13 08:33:02 compute-0 ceph-mon[76537]: pgmap v2073: 321 pgs: 321 active+clean; 535 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 224 op/s
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.492 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:10:dc 10.100.0.13'], port_security=['fa:16:3e:63:10:dc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb8c91ff-01cb-4fd5-ab69-005313784b57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f5cceeb-71e0-4eee-9335-64d44ec2d969', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d001c32a-bc2d-4374-9cf1-cea4a3723c66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.493 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d001c32a-bc2d-4374-9cf1-cea4a3723c66 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.495 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:33:02 compute-0 podman[320480]: 2025-12-13 08:33:02.404334036 +0000 UTC m=+0.030749504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.504 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.505 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.505 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.505 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.505 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.505 248514 WARNING nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state active and task_state deleting.
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.506 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.506 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.506 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.506 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.506 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.507 248514 WARNING nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state active and task_state deleting.
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.507 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.507 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.507 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.507 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.507 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Processing event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.508 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.508 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.508 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.508 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.508 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] No waiting events found dispatching network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.508 248514 WARNING nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received unexpected event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 for instance with vm_state stopped and task_state rebuild_spawning.
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.509 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.509 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.509 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.509 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.509 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.509 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.510 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.513 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2a8a16-40af-4059-a203-3a8a6bbf818d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.514 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614782.5146015, 5d34feed-2663-4e17-b951-65a37bd3a275 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.515 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Resumed (Lifecycle Event)
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.518 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:33:02 compute-0 podman[320480]: 2025-12-13 08:33:02.520427089 +0000 UTC m=+0.146842527 container create 45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.522 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance spawned successfully.
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.522 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.525 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.548 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[227d1804-c924-4413-ac7f-daac01a6c475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.552 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6e938092-10c9-42de-83c2-89166fd12ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.580 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbd2c29-586d-4bc5-838b-cd92700f3661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.597 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.600 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.601 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.604 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53bc3772-44f6-4ed4-b0aa-82e7f3389470]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727295, 'reachable_time': 29620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320502, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:02 compute-0 systemd[1]: Started libpod-conmon-45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a.scope.
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.611 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.611 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.612 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.612 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.613 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.613 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.622 248514 INFO nova.compute.manager [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Took 4.22 seconds to destroy the instance on the hypervisor.
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.622 248514 DEBUG oslo.service.loopingcall [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.623 248514 DEBUG nova.compute.manager [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.623 248514 DEBUG nova.network.neutron [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:33:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e6d5640372a21c1b8cef302f52a30b06f4dec59f93141f683aff1f0cf2166b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e6d5640372a21c1b8cef302f52a30b06f4dec59f93141f683aff1f0cf2166b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e6d5640372a21c1b8cef302f52a30b06f4dec59f93141f683aff1f0cf2166b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e6d5640372a21c1b8cef302f52a30b06f4dec59f93141f683aff1f0cf2166b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba47d38-2ca4-479c-a228-a86b30eb8189]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727305, 'tstamp': 727305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320505, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727308, 'tstamp': 727308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320505, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.636 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.638 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.644 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.645 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.646 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.646 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.647 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:33:02 compute-0 podman[320480]: 2025-12-13 08:33:02.687275253 +0000 UTC m=+0.313690711 container init 45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.693 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:02 compute-0 podman[320480]: 2025-12-13 08:33:02.697493407 +0000 UTC m=+0.323908845 container start 45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 08:33:02 compute-0 podman[320480]: 2025-12-13 08:33:02.701107446 +0000 UTC m=+0.327522894 container attach 45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.712 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance shutdown successfully after 25 seconds.
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.718 248514 INFO nova.virt.libvirt.driver [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance destroyed successfully.
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.723 248514 INFO nova.virt.libvirt.driver [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance destroyed successfully.
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.724 248514 DEBUG nova.virt.libvirt.vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:32:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-81806715',display_name='tempest-ServerActionsTestJSON-server-1327556776',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-81806715',id=74,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-yhg7rdps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-
member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:36Z,user_data=None,user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=bb8c91ff-01cb-4fd5-ab69-005313784b57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.724 248514 DEBUG nova.network.os_vif_util [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.725 248514 DEBUG nova.network.os_vif_util [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.725 248514 DEBUG os_vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.727 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd001c32a-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.734 248514 INFO os_vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc')
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.769 248514 INFO nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] bringing vm to original state: 'stopped'
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.862 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.862 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.863 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.867 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 13 08:33:02 compute-0 kernel: tap533958d1-8a (unregistering): left promiscuous mode
Dec 13 08:33:02 compute-0 NetworkManager[50376]: <info>  [1765614782.9045] device (tap533958d1-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:33:02 compute-0 ovn_controller[148476]: 2025-12-13T08:33:02Z|00719|binding|INFO|Releasing lport 533958d1-8a74-4963-9731-40767b4bb127 from this chassis (sb_readonly=0)
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.911 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:02 compute-0 ovn_controller[148476]: 2025-12-13T08:33:02Z|00720|binding|INFO|Setting lport 533958d1-8a74-4963-9731-40767b4bb127 down in Southbound
Dec 13 08:33:02 compute-0 ovn_controller[148476]: 2025-12-13T08:33:02Z|00721|binding|INFO|Removing iface tap533958d1-8a ovn-installed in OVS
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.920 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:ab:b9 10.100.0.5'], port_security=['fa:16:3e:66:ab:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d34feed-2663-4e17-b951-65a37bd3a275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '45afb483-a012-4442-b20c-edd0f1f0f374', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=533958d1-8a74-4963-9731-40767b4bb127) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.921 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 533958d1-8a74-4963-9731-40767b4bb127 in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f unbound from our chassis
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.923 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f
Dec 13 08:33:02 compute-0 nova_compute[248510]: 2025-12-13 08:33:02.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.939 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[276489f2-760e-4ad3-a1b8-4f7b772073dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:02 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d00000048.scope: Deactivated successfully.
Dec 13 08:33:02 compute-0 systemd-machined[210538]: Machine qemu-89-instance-00000048 terminated.
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.966 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[25baca03-4caa-4b6a-bb9a-16accdd71127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.968 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c93a577c-2412-4bd2-89e3-b806213b2990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.994 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0785f5-601f-4aa0-913c-7f322db62dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:03 compute-0 crazy_swanson[320503]: {
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:     "0": [
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:         {
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "devices": [
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "/dev/loop3"
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             ],
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_name": "ceph_lv0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_size": "21470642176",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "name": "ceph_lv0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "tags": {
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.cluster_name": "ceph",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.crush_device_class": "",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.encrypted": "0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.objectstore": "bluestore",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.osd_id": "0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.type": "block",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.vdo": "0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.with_tpm": "0"
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             },
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "type": "block",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "vg_name": "ceph_vg0"
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:         }
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:     ],
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:     "1": [
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:         {
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "devices": [
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "/dev/loop4"
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             ],
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_name": "ceph_lv1",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_size": "21470642176",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "name": "ceph_lv1",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "tags": {
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.cluster_name": "ceph",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.crush_device_class": "",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.encrypted": "0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.objectstore": "bluestore",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.osd_id": "1",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.type": "block",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.vdo": "0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.with_tpm": "0"
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             },
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "type": "block",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "vg_name": "ceph_vg1"
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:         }
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:     ],
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:     "2": [
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:         {
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "devices": [
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "/dev/loop5"
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             ],
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_name": "ceph_lv2",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_size": "21470642176",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "name": "ceph_lv2",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "tags": {
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.cluster_name": "ceph",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.crush_device_class": "",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.encrypted": "0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.objectstore": "bluestore",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.osd_id": "2",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.type": "block",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.vdo": "0",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:                 "ceph.with_tpm": "0"
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             },
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "type": "block",
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:             "vg_name": "ceph_vg2"
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:         }
Dec 13 08:33:03 compute-0 crazy_swanson[320503]:     ]
Dec 13 08:33:03 compute-0 crazy_swanson[320503]: }
Dec 13 08:33:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.017 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[36171559-8111-4fa6-af6f-0b8b6704043b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320540, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:03 compute-0 systemd[1]: libpod-45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a.scope: Deactivated successfully.
Dec 13 08:33:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.036 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa90035d-5009-44f9-82ff-1eda351bb212]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727416, 'tstamp': 727416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320541, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727421, 'tstamp': 727421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320541, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.039 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.040 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.045 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:03 compute-0 podman[320543]: 2025-12-13 08:33:03.085502543 +0000 UTC m=+0.032684143 container died 45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.115 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance destroyed successfully.
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.115 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.187 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.218 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.218 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.218 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.295 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.395 248514 DEBUG nova.compute.manager [req-b7d3fed2-93b2-4ab7-b6f4-729d369f9392 req-8766cc66-0a15-4d9d-8da2-ee48c75a0a34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-unplugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.395 248514 DEBUG oslo_concurrency.lockutils [req-b7d3fed2-93b2-4ab7-b6f4-729d369f9392 req-8766cc66-0a15-4d9d-8da2-ee48c75a0a34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.396 248514 DEBUG oslo_concurrency.lockutils [req-b7d3fed2-93b2-4ab7-b6f4-729d369f9392 req-8766cc66-0a15-4d9d-8da2-ee48c75a0a34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.396 248514 DEBUG oslo_concurrency.lockutils [req-b7d3fed2-93b2-4ab7-b6f4-729d369f9392 req-8766cc66-0a15-4d9d-8da2-ee48c75a0a34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.396 248514 DEBUG nova.compute.manager [req-b7d3fed2-93b2-4ab7-b6f4-729d369f9392 req-8766cc66-0a15-4d9d-8da2-ee48c75a0a34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] No waiting events found dispatching network-vif-unplugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.396 248514 WARNING nova.compute.manager [req-b7d3fed2-93b2-4ab7-b6f4-729d369f9392 req-8766cc66-0a15-4d9d-8da2-ee48c75a0a34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received unexpected event network-vif-unplugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 for instance with vm_state active and task_state rebuilding.
Dec 13 08:33:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 513 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 181 op/s
Dec 13 08:33:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-14e6d5640372a21c1b8cef302f52a30b06f4dec59f93141f683aff1f0cf2166b-merged.mount: Deactivated successfully.
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.911 248514 DEBUG nova.network.neutron [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.938 248514 INFO nova.compute.manager [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Took 1.32 seconds to deallocate network for instance.
Dec 13 08:33:03 compute-0 podman[320543]: 2025-12-13 08:33:03.98575373 +0000 UTC m=+0.932935310 container remove 45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.990 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:03 compute-0 nova_compute[248510]: 2025-12-13 08:33:03.990 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:03 compute-0 systemd[1]: libpod-conmon-45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a.scope: Deactivated successfully.
Dec 13 08:33:04 compute-0 sudo[320381]: pam_unix(sudo:session): session closed for user root
Dec 13 08:33:04 compute-0 sudo[320566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:33:04 compute-0 sudo[320566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:33:04 compute-0 sudo[320566]: pam_unix(sudo:session): session closed for user root
Dec 13 08:33:04 compute-0 nova_compute[248510]: 2025-12-13 08:33:04.160 248514 DEBUG oslo_concurrency.processutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:04 compute-0 sudo[320591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:33:04 compute-0 sudo[320591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:33:04 compute-0 podman[320648]: 2025-12-13 08:33:04.572191762 +0000 UTC m=+0.104339462 container create 69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lovelace, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:33:04 compute-0 podman[320648]: 2025-12-13 08:33:04.491755095 +0000 UTC m=+0.023902785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:33:04 compute-0 nova_compute[248510]: 2025-12-13 08:33:04.601 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:33:04 compute-0 nova_compute[248510]: 2025-12-13 08:33:04.602 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:33:04 compute-0 nova_compute[248510]: 2025-12-13 08:33:04.603 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:33:04 compute-0 nova_compute[248510]: 2025-12-13 08:33:04.603 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:33:04 compute-0 systemd[1]: Started libpod-conmon-69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7.scope.
Dec 13 08:33:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:33:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:33:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3911432715' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:04 compute-0 ceph-mon[76537]: pgmap v2074: 321 pgs: 321 active+clean; 513 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 181 op/s
Dec 13 08:33:04 compute-0 podman[320648]: 2025-12-13 08:33:04.93089779 +0000 UTC m=+0.463045500 container init 69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lovelace, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:33:04 compute-0 nova_compute[248510]: 2025-12-13 08:33:04.933 248514 DEBUG oslo_concurrency.processutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.772s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:04 compute-0 nova_compute[248510]: 2025-12-13 08:33:04.944 248514 DEBUG nova.compute.provider_tree [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:33:04 compute-0 podman[320648]: 2025-12-13 08:33:04.944457337 +0000 UTC m=+0.476604997 container start 69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lovelace, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:33:04 compute-0 sharp_lovelace[320666]: 167 167
Dec 13 08:33:04 compute-0 systemd[1]: libpod-69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7.scope: Deactivated successfully.
Dec 13 08:33:04 compute-0 nova_compute[248510]: 2025-12-13 08:33:04.967 248514 DEBUG nova.scheduler.client.report [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:33:04 compute-0 podman[320648]: 2025-12-13 08:33:04.990688625 +0000 UTC m=+0.522836315 container attach 69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lovelace, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:33:04 compute-0 podman[320648]: 2025-12-13 08:33:04.991313391 +0000 UTC m=+0.523461081 container died 69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 08:33:04 compute-0 nova_compute[248510]: 2025-12-13 08:33:04.995 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.031 248514 INFO nova.scheduler.client.report [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Deleted allocations for instance c5edbf88-6361-407a-a0f1-c133f70b50e9
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.120 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.233 248514 DEBUG nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.233 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.234 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.235 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.235 248514 DEBUG nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.235 248514 WARNING nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state deleted and task_state None.
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.235 248514 DEBUG nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-unplugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.235 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 DEBUG nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] No waiting events found dispatching network-vif-unplugged-533958d1-8a74-4963-9731-40767b4bb127 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 WARNING nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received unexpected event network-vif-unplugged-533958d1-8a74-4963-9731-40767b4bb127 for instance with vm_state stopped and task_state None.
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 DEBUG nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.237 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.237 248514 DEBUG nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] No waiting events found dispatching network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.237 248514 WARNING nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received unexpected event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 for instance with vm_state stopped and task_state None.
Dec 13 08:33:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4b7550db5ef0b9210bdc205ee0b9550ad95ef4d3c7ae9b3363eaffbc9a2aa9d-merged.mount: Deactivated successfully.
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.421 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:05 compute-0 podman[320648]: 2025-12-13 08:33:05.531546997 +0000 UTC m=+1.063694657 container remove 69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.538 248514 DEBUG nova.compute.manager [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.539 248514 DEBUG oslo_concurrency.lockutils [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.539 248514 DEBUG oslo_concurrency.lockutils [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.539 248514 DEBUG oslo_concurrency.lockutils [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.539 248514 DEBUG nova.compute.manager [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] No waiting events found dispatching network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.540 248514 WARNING nova.compute.manager [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received unexpected event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 for instance with vm_state active and task_state rebuilding.
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.540 248514 DEBUG nova.compute.manager [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-deleted-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:05 compute-0 systemd[1]: libpod-conmon-69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7.scope: Deactivated successfully.
Dec 13 08:33:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 395 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 165 op/s
Dec 13 08:33:05 compute-0 podman[320690]: 2025-12-13 08:33:05.75152183 +0000 UTC m=+0.026853618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:33:05 compute-0 podman[320690]: 2025-12-13 08:33:05.961386492 +0000 UTC m=+0.236718250 container create 5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.962 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.963 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.964 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.965 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.965 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.967 248514 INFO nova.compute.manager [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Terminating instance
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.968 248514 DEBUG nova.compute.manager [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.978 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deleting instance files /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57_del
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.979 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deletion of /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57_del complete
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.989 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance destroyed successfully.
Dec 13 08:33:05 compute-0 nova_compute[248510]: 2025-12-13 08:33:05.990 248514 DEBUG nova.objects.instance [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'resources' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.014 248514 DEBUG nova.virt.libvirt.vif [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1505100715',display_name='tempest-tempest.common.compute-instance-1505100715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1505100715',id=72,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:33:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-kk2o7y6y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:03Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=5d34feed-2663-4e17-b951-65a37bd3a275,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.014 248514 DEBUG nova.network.os_vif_util [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.015 248514 DEBUG nova.network.os_vif_util [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.015 248514 DEBUG os_vif [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.018 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.018 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap533958d1-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.019 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.021 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.023 248514 INFO os_vif [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a')
Dec 13 08:33:06 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3911432715' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:06 compute-0 systemd[1]: Started libpod-conmon-5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be.scope.
Dec 13 08:33:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4936bd69fac4a138aa07744c131934183e99b4ee8c320e3cca08c2853adcc5b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4936bd69fac4a138aa07744c131934183e99b4ee8c320e3cca08c2853adcc5b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4936bd69fac4a138aa07744c131934183e99b4ee8c320e3cca08c2853adcc5b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4936bd69fac4a138aa07744c131934183e99b4ee8c320e3cca08c2853adcc5b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.195 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.196 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Creating image(s)
Dec 13 08:33:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.224 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:06 compute-0 podman[320690]: 2025-12-13 08:33:06.257785642 +0000 UTC m=+0.533117440 container init 5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.261 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:06 compute-0 podman[320690]: 2025-12-13 08:33:06.26654636 +0000 UTC m=+0.541878128 container start 5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.286 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.291 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.332 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.332 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.333 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.333 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.333 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.335 248514 INFO nova.compute.manager [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Terminating instance
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.337 248514 DEBUG nova.compute.manager [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:33:06 compute-0 podman[320690]: 2025-12-13 08:33:06.369507147 +0000 UTC m=+0.644839015 container attach 5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.381 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.382 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.383 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.383 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.406 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.410 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc bb8c91ff-01cb-4fd5-ab69-005313784b57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:06 compute-0 kernel: tap7b3b1c0a-88 (unregistering): left promiscuous mode
Dec 13 08:33:06 compute-0 NetworkManager[50376]: <info>  [1765614786.5709] device (tap7b3b1c0a-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:33:06 compute-0 ovn_controller[148476]: 2025-12-13T08:33:06Z|00722|binding|INFO|Releasing lport 7b3b1c0a-882e-4f33-a582-667d018090d4 from this chassis (sb_readonly=0)
Dec 13 08:33:06 compute-0 ovn_controller[148476]: 2025-12-13T08:33:06Z|00723|binding|INFO|Setting lport 7b3b1c0a-882e-4f33-a582-667d018090d4 down in Southbound
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.591 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:06 compute-0 ovn_controller[148476]: 2025-12-13T08:33:06Z|00724|binding|INFO|Removing iface tap7b3b1c0a-88 ovn-installed in OVS
Dec 13 08:33:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:06.606 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:56:12 10.100.0.13'], port_security=['fa:16:3e:41:56:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a9c6de9d-63c0-43a5-9d6e-be356e504837', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7b3b1c0a-882e-4f33-a582-667d018090d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:06.608 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3b1c0a-882e-4f33-a582-667d018090d4 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 unbound from our chassis
Dec 13 08:33:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:06.609 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:33:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:06.610 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ceda403d-8c25-4123-b8d0-f9c37ac00905]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.613 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:06 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000044.scope: Deactivated successfully.
Dec 13 08:33:06 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000044.scope: Consumed 14.821s CPU time.
Dec 13 08:33:06 compute-0 systemd-machined[210538]: Machine qemu-82-instance-00000044 terminated.
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.782 248514 INFO nova.virt.libvirt.driver [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance destroyed successfully.
Dec 13 08:33:06 compute-0 nova_compute[248510]: 2025-12-13 08:33:06.784 248514 DEBUG nova.objects.instance [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'resources' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:07 compute-0 lvm[320910]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:33:07 compute-0 lvm[320910]: VG ceph_vg0 finished
Dec 13 08:33:07 compute-0 lvm[320912]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:33:07 compute-0 lvm[320912]: VG ceph_vg1 finished
Dec 13 08:33:07 compute-0 lvm[320914]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:33:07 compute-0 lvm[320914]: VG ceph_vg2 finished
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.139 248514 DEBUG nova.virt.libvirt.vif [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:31:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1742357064',display_name='tempest-ServerRescueTestJSON-server-1742357064',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1742357064',id=68,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:31:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-hpc6t4lx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-1425963100-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:59Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=a9c6de9d-63c0-43a5-9d6e-be356e504837,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.141 248514 DEBUG nova.network.os_vif_util [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.142 248514 DEBUG nova.network.os_vif_util [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.143 248514 DEBUG os_vif [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.146 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.146 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b3b1c0a-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.150 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.153 248514 INFO os_vif [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88')
Dec 13 08:33:07 compute-0 laughing_vaughan[320722]: {}
Dec 13 08:33:07 compute-0 systemd[1]: libpod-5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be.scope: Deactivated successfully.
Dec 13 08:33:07 compute-0 systemd[1]: libpod-5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be.scope: Consumed 1.485s CPU time.
Dec 13 08:33:07 compute-0 podman[320690]: 2025-12-13 08:33:07.209957829 +0000 UTC m=+1.485289597 container died 5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 08:33:07 compute-0 ceph-mon[76537]: pgmap v2075: 321 pgs: 321 active+clean; 395 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 165 op/s
Dec 13 08:33:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4936bd69fac4a138aa07744c131934183e99b4ee8c320e3cca08c2853adcc5b2-merged.mount: Deactivated successfully.
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.328 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc bb8c91ff-01cb-4fd5-ab69-005313784b57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.918s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:07 compute-0 podman[320690]: 2025-12-13 08:33:07.338515321 +0000 UTC m=+1.613847099 container remove 5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_vaughan, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 08:33:07 compute-0 systemd[1]: libpod-conmon-5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be.scope: Deactivated successfully.
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.392 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] resizing rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:33:07 compute-0 sudo[320591]: pam_unix(sudo:session): session closed for user root
Dec 13 08:33:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:33:07 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:33:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:33:07 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:33:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 395 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 581 KiB/s rd, 43 KiB/s wr, 65 op/s
Dec 13 08:33:07 compute-0 sudo[321004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:33:07 compute-0 sudo[321004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:33:07 compute-0 sudo[321004]: pam_unix(sudo:session): session closed for user root
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.702 248514 DEBUG nova.compute.manager [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-unplugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.704 248514 DEBUG oslo_concurrency.lockutils [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG oslo_concurrency.lockutils [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG oslo_concurrency.lockutils [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG nova.compute.manager [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-unplugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG nova.compute.manager [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-unplugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG nova.compute.manager [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG oslo_concurrency.lockutils [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG oslo_concurrency.lockutils [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.706 248514 DEBUG oslo_concurrency.lockutils [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.706 248514 DEBUG nova.compute.manager [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.706 248514 WARNING nova.compute.manager [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received unexpected event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with vm_state rescued and task_state deleting.
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.751 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.751 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Ensure instance console log exists: /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.752 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.752 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.752 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.754 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Start _get_guest_xml network_info=[{"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.759 248514 WARNING nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.764 248514 DEBUG nova.virt.libvirt.host [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.765 248514 DEBUG nova.virt.libvirt.host [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.769 248514 DEBUG nova.virt.libvirt.host [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.769 248514 DEBUG nova.virt.libvirt.host [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.770 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.770 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.771 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.771 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.771 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.771 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.771 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.771 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.772 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.772 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.772 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.772 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.772 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'vcpu_model' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:07 compute-0 nova_compute[248510]: 2025-12-13 08:33:07.800 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:33:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350825097' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.360 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.381 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.386 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:08 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:33:08 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:33:08 compute-0 ceph-mon[76537]: pgmap v2076: 321 pgs: 321 active+clean; 395 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 581 KiB/s rd, 43 KiB/s wr, 65 op/s
Dec 13 08:33:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3350825097' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.790 248514 INFO nova.virt.libvirt.driver [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deleting instance files /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275_del
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.791 248514 INFO nova.virt.libvirt.driver [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deletion of /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275_del complete
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.846 248514 INFO nova.compute.manager [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Took 2.88 seconds to destroy the instance on the hypervisor.
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.847 248514 DEBUG oslo.service.loopingcall [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.847 248514 DEBUG nova.compute.manager [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.847 248514 DEBUG nova.network.neutron [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:33:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:33:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552837108' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.984 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.986 248514 DEBUG nova.virt.libvirt.vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:32:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-81806715',display_name='tempest-ServerActionsTestJSON-server-1327556776',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-81806715',id=74,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-yhg7rdps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:33:06Z,user_data=None,user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=bb8c91ff-01cb-4fd5-ab69-005313784b57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.986 248514 DEBUG nova.network.os_vif_util [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.987 248514 DEBUG nova.network.os_vif_util [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.991 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:33:08 compute-0 nova_compute[248510]:   <uuid>bb8c91ff-01cb-4fd5-ab69-005313784b57</uuid>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   <name>instance-0000004a</name>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerActionsTestJSON-server-1327556776</nova:name>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:33:07</nova:creationTime>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:33:08 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:33:08 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:33:08 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:33:08 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:33:08 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:33:08 compute-0 nova_compute[248510]:         <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 08:33:08 compute-0 nova_compute[248510]:         <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:33:08 compute-0 nova_compute[248510]:         <nova:port uuid="d001c32a-bc2d-4374-9cf1-cea4a3723c66">
Dec 13 08:33:08 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <system>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <entry name="serial">bb8c91ff-01cb-4fd5-ab69-005313784b57</entry>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <entry name="uuid">bb8c91ff-01cb-4fd5-ab69-005313784b57</entry>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     </system>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   <os>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   </os>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   <features>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   </features>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/bb8c91ff-01cb-4fd5-ab69-005313784b57_disk">
Dec 13 08:33:08 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       </source>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:33:08 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config">
Dec 13 08:33:08 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       </source>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:33:08 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:63:10:dc"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <target dev="tapd001c32a-bc"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/console.log" append="off"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <video>
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     </video>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:33:08 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:33:08 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:33:08 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:33:08 compute-0 nova_compute[248510]: </domain>
Dec 13 08:33:08 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.993 248514 DEBUG nova.compute.manager [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Preparing to wait for external event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.993 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.993 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.993 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.994 248514 DEBUG nova.virt.libvirt.vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:32:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-81806715',display_name='tempest-ServerActionsTestJSON-server-1327556776',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-81806715',id=74,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-yhg7rdps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:33:06Z,user_data=None,user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=bb8c91ff-01cb-4fd5-ab69-005313784b57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.994 248514 DEBUG nova.network.os_vif_util [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.995 248514 DEBUG nova.network.os_vif_util [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.995 248514 DEBUG os_vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.996 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.996 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:08 compute-0 nova_compute[248510]: 2025-12-13 08:33:08.997 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.000 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd001c32a-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.001 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd001c32a-bc, col_values=(('external_ids', {'iface-id': 'd001c32a-bc2d-4374-9cf1-cea4a3723c66', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:10:dc', 'vm-uuid': 'bb8c91ff-01cb-4fd5-ab69-005313784b57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:09 compute-0 NetworkManager[50376]: <info>  [1765614789.0375] manager: (tapd001c32a-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.037 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.041 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.046 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.047 248514 INFO os_vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc')
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.124 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.125 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.125 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No VIF found with MAC fa:16:3e:63:10:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.126 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Using config drive
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.148 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.172 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'ec2_ids' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.223 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'keypairs' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:33:09
Dec 13 08:33:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:33:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:33:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'vms', 'volumes', 'default.rgw.log', 'default.rgw.meta']
Dec 13 08:33:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:33:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 344 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 1.4 MiB/s wr, 126 op/s
Dec 13 08:33:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3552837108' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.891 248514 INFO nova.virt.libvirt.driver [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Deleting instance files /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837_del
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.892 248514 INFO nova.virt.libvirt.driver [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Deletion of /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837_del complete
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.957 248514 INFO nova.compute.manager [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Took 3.62 seconds to destroy the instance on the hypervisor.
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.958 248514 DEBUG oslo.service.loopingcall [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.959 248514 DEBUG nova.compute.manager [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:33:09 compute-0 nova_compute[248510]: 2025-12-13 08:33:09.959 248514 DEBUG nova.network.neutron [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:33:10 compute-0 nova_compute[248510]: 2025-12-13 08:33:10.147 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Creating config drive at /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config
Dec 13 08:33:10 compute-0 nova_compute[248510]: 2025-12-13 08:33:10.152 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptov3s5sj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:10 compute-0 nova_compute[248510]: 2025-12-13 08:33:10.295 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptov3s5sj" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:10 compute-0 nova_compute[248510]: 2025-12-13 08:33:10.323 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:10 compute-0 nova_compute[248510]: 2025-12-13 08:33:10.329 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:10 compute-0 nova_compute[248510]: 2025-12-13 08:33:10.422 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:10 compute-0 nova_compute[248510]: 2025-12-13 08:33:10.550 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:10 compute-0 nova_compute[248510]: 2025-12-13 08:33:10.551 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deleting local config drive /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config because it was imported into RBD.
Dec 13 08:33:10 compute-0 kernel: tapd001c32a-bc: entered promiscuous mode
Dec 13 08:33:10 compute-0 NetworkManager[50376]: <info>  [1765614790.6056] manager: (tapd001c32a-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Dec 13 08:33:10 compute-0 ovn_controller[148476]: 2025-12-13T08:33:10Z|00725|binding|INFO|Claiming lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 for this chassis.
Dec 13 08:33:10 compute-0 nova_compute[248510]: 2025-12-13 08:33:10.607 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:10 compute-0 ovn_controller[148476]: 2025-12-13T08:33:10Z|00726|binding|INFO|d001c32a-bc2d-4374-9cf1-cea4a3723c66: Claiming fa:16:3e:63:10:dc 10.100.0.13
Dec 13 08:33:10 compute-0 ovn_controller[148476]: 2025-12-13T08:33:10Z|00727|binding|INFO|Setting lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 ovn-installed in OVS
Dec 13 08:33:10 compute-0 nova_compute[248510]: 2025-12-13 08:33:10.628 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:33:10 compute-0 systemd-udevd[321183]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:33:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:33:10 compute-0 systemd-machined[210538]: New machine qemu-90-instance-0000004a.
Dec 13 08:33:10 compute-0 NetworkManager[50376]: <info>  [1765614790.6521] device (tapd001c32a-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:33:10 compute-0 NetworkManager[50376]: <info>  [1765614790.6526] device (tapd001c32a-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:33:10 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-0000004a.
Dec 13 08:33:10 compute-0 ovn_controller[148476]: 2025-12-13T08:33:10Z|00728|binding|INFO|Setting lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 up in Southbound
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.665 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:10:dc 10.100.0.13'], port_security=['fa:16:3e:63:10:dc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb8c91ff-01cb-4fd5-ab69-005313784b57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8f5cceeb-71e0-4eee-9335-64d44ec2d969', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d001c32a-bc2d-4374-9cf1-cea4a3723c66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.666 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d001c32a-bc2d-4374-9cf1-cea4a3723c66 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.668 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4c8fc3-49cd-4127-bd1b-c1051dd39fc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.715 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a3375000-1912-42b2-99cb-ef2ea9246832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.718 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f27519ef-ac5a-4e7c-9239-a40f025a6fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.745 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eacbbad5-eb60-4b14-9be3-70fa8f1aa536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.765 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[26d03e3d-4f91-4314-b8c9-8bc05bda1c16]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727295, 'reachable_time': 29620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321198, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.780 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9cef977-917d-4e02-a0a3-41eceb39db8b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727305, 'tstamp': 727305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321199, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727308, 'tstamp': 727308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321199, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.782 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:10 compute-0 nova_compute[248510]: 2025-12-13 08:33:10.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:10 compute-0 nova_compute[248510]: 2025-12-13 08:33:10.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.785 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.785 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.786 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.786 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:10 compute-0 ceph-mon[76537]: pgmap v2077: 321 pgs: 321 active+clean; 344 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 1.4 MiB/s wr, 126 op/s
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.077 248514 DEBUG nova.network.neutron [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.082 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for bb8c91ff-01cb-4fd5-ab69-005313784b57 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.083 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614791.0813084, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.083 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Started (Lifecycle Event)
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.125 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.130 248514 INFO nova.compute.manager [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Took 2.28 seconds to deallocate network for instance.
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.137 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614791.0815198, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.137 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Paused (Lifecycle Event)
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.197 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.202 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.211 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.212 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.217 248514 DEBUG nova.compute.manager [req-81daf74f-1d5c-4aed-bb50-0259ac766e96 req-9ee9eb7f-64ae-4492-8737-2f71ad5baac9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-deleted-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.241 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.370 248514 DEBUG oslo_concurrency.processutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 305 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 1.8 MiB/s wr, 152 op/s
Dec 13 08:33:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:33:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423383259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.953 248514 DEBUG oslo_concurrency.processutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:11 compute-0 nova_compute[248510]: 2025-12-13 08:33:11.959 248514 DEBUG nova.compute.provider_tree [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:33:12 compute-0 nova_compute[248510]: 2025-12-13 08:33:12.126 248514 DEBUG nova.scheduler.client.report [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:33:12 compute-0 nova_compute[248510]: 2025-12-13 08:33:12.757 248514 DEBUG nova.network.neutron [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:12 compute-0 nova_compute[248510]: 2025-12-13 08:33:12.763 248514 DEBUG nova.compute.manager [req-735623e6-432f-4623-968c-b96688d045fe req-ccce2a6c-cb96-4f66-b033-47614369de63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-deleted-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:12 compute-0 nova_compute[248510]: 2025-12-13 08:33:12.763 248514 INFO nova.compute.manager [req-735623e6-432f-4623-968c-b96688d045fe req-ccce2a6c-cb96-4f66-b033-47614369de63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Neutron deleted interface 7b3b1c0a-882e-4f33-a582-667d018090d4; detaching it from the instance and deleting it from the info cache
Dec 13 08:33:12 compute-0 nova_compute[248510]: 2025-12-13 08:33:12.763 248514 DEBUG nova.network.neutron [req-735623e6-432f-4623-968c-b96688d045fe req-ccce2a6c-cb96-4f66-b033-47614369de63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:12 compute-0 nova_compute[248510]: 2025-12-13 08:33:12.788 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:12 compute-0 nova_compute[248510]: 2025-12-13 08:33:12.793 248514 INFO nova.compute.manager [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Took 2.83 seconds to deallocate network for instance.
Dec 13 08:33:12 compute-0 nova_compute[248510]: 2025-12-13 08:33:12.800 248514 DEBUG nova.compute.manager [req-735623e6-432f-4623-968c-b96688d045fe req-ccce2a6c-cb96-4f66-b033-47614369de63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Detach interface failed, port_id=7b3b1c0a-882e-4f33-a582-667d018090d4, reason: Instance a9c6de9d-63c0-43a5-9d6e-be356e504837 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 08:33:12 compute-0 nova_compute[248510]: 2025-12-13 08:33:12.834 248514 INFO nova.scheduler.client.report [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Deleted allocations for instance 5d34feed-2663-4e17-b951-65a37bd3a275
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.040 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.041 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:13 compute-0 ceph-mon[76537]: pgmap v2078: 321 pgs: 321 active+clean; 305 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 1.8 MiB/s wr, 152 op/s
Dec 13 08:33:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/423383259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.158 248514 DEBUG oslo_concurrency.processutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.351 248514 DEBUG nova.compute.manager [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.352 248514 DEBUG oslo_concurrency.lockutils [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.353 248514 DEBUG oslo_concurrency.lockutils [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.353 248514 DEBUG oslo_concurrency.lockutils [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.353 248514 DEBUG nova.compute.manager [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Processing event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.354 248514 DEBUG nova.compute.manager [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.354 248514 DEBUG oslo_concurrency.lockutils [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.355 248514 DEBUG oslo_concurrency.lockutils [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.355 248514 DEBUG oslo_concurrency.lockutils [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.355 248514 DEBUG nova.compute.manager [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] No waiting events found dispatching network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.355 248514 WARNING nova.compute.manager [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received unexpected event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 for instance with vm_state active and task_state rebuild_spawning.
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.357 248514 DEBUG nova.compute.manager [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.362 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.364 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614793.3622882, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.364 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Resumed (Lifecycle Event)
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.373 248514 INFO nova.virt.libvirt.driver [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance spawned successfully.
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.374 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.381 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.397 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.408 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.421 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.422 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.423 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.423 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.424 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.425 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.453 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.554 248514 DEBUG nova.compute.manager [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 273 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 1.8 MiB/s wr, 150 op/s
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.624 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.660 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614778.6589525, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.660 248514 INFO nova.compute.manager [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Stopped (Lifecycle Event)
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.721 248514 DEBUG nova.compute.manager [None req-04348bb6-9266-49b7-844a-5fc1bf7eaa61 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:33:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3120018909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.754 248514 DEBUG oslo_concurrency.processutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:13 compute-0 nova_compute[248510]: 2025-12-13 08:33:13.759 248514 DEBUG nova.compute.provider_tree [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:33:14 compute-0 nova_compute[248510]: 2025-12-13 08:33:14.037 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:14 compute-0 ceph-mon[76537]: pgmap v2079: 321 pgs: 321 active+clean; 273 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 1.8 MiB/s wr, 150 op/s
Dec 13 08:33:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3120018909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:14 compute-0 nova_compute[248510]: 2025-12-13 08:33:14.478 248514 DEBUG nova.scheduler.client.report [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:33:14 compute-0 nova_compute[248510]: 2025-12-13 08:33:14.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:33:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/953622266' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:33:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:33:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/953622266' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:33:15 compute-0 nova_compute[248510]: 2025-12-13 08:33:15.088 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:15 compute-0 nova_compute[248510]: 2025-12-13 08:33:15.091 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 1.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:15 compute-0 nova_compute[248510]: 2025-12-13 08:33:15.092 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:33:15 compute-0 nova_compute[248510]: 2025-12-13 08:33:15.122 248514 INFO nova.scheduler.client.report [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Deleted allocations for instance a9c6de9d-63c0-43a5-9d6e-be356e504837
Dec 13 08:33:15 compute-0 nova_compute[248510]: 2025-12-13 08:33:15.168 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/953622266' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:33:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/953622266' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:33:15 compute-0 nova_compute[248510]: 2025-12-13 08:33:15.217 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:15 compute-0 nova_compute[248510]: 2025-12-13 08:33:15.423 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 249 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 1.8 MiB/s wr, 181 op/s
Dec 13 08:33:16 compute-0 ceph-mon[76537]: pgmap v2080: 321 pgs: 321 active+clean; 249 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 1.8 MiB/s wr, 181 op/s
Dec 13 08:33:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:33:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 249 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 694 KiB/s rd, 1.8 MiB/s wr, 159 op/s
Dec 13 08:33:18 compute-0 nova_compute[248510]: 2025-12-13 08:33:18.113 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614783.1120586, 5d34feed-2663-4e17-b951-65a37bd3a275 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:18 compute-0 nova_compute[248510]: 2025-12-13 08:33:18.114 248514 INFO nova.compute.manager [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Stopped (Lifecycle Event)
Dec 13 08:33:18 compute-0 nova_compute[248510]: 2025-12-13 08:33:18.335 248514 DEBUG nova.compute.manager [None req-32937ac4-a574-4c32-a091-4b040a77c725 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:18 compute-0 ceph-mon[76537]: pgmap v2081: 321 pgs: 321 active+clean; 249 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 694 KiB/s rd, 1.8 MiB/s wr, 159 op/s
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.091 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.091 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.092 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.092 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.092 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.094 248514 INFO nova.compute.manager [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Terminating instance
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.095 248514 DEBUG nova.compute.manager [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.178 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:19 compute-0 kernel: tapd001c32a-bc (unregistering): left promiscuous mode
Dec 13 08:33:19 compute-0 NetworkManager[50376]: <info>  [1765614799.2317] device (tapd001c32a-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.240 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:19 compute-0 ovn_controller[148476]: 2025-12-13T08:33:19Z|00729|binding|INFO|Releasing lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 from this chassis (sb_readonly=0)
Dec 13 08:33:19 compute-0 ovn_controller[148476]: 2025-12-13T08:33:19Z|00730|binding|INFO|Setting lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 down in Southbound
Dec 13 08:33:19 compute-0 ovn_controller[148476]: 2025-12-13T08:33:19Z|00731|binding|INFO|Removing iface tapd001c32a-bc ovn-installed in OVS
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.253 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:10:dc 10.100.0.13'], port_security=['fa:16:3e:63:10:dc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb8c91ff-01cb-4fd5-ab69-005313784b57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8f5cceeb-71e0-4eee-9335-64d44ec2d969', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d001c32a-bc2d-4374-9cf1-cea4a3723c66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.255 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d001c32a-bc2d-4374-9cf1-cea4a3723c66 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.257 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.261 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.273 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddaf093-d8f9-4fb0-acb6-b12730dcad38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:19 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Dec 13 08:33:19 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004a.scope: Consumed 6.320s CPU time.
Dec 13 08:33:19 compute-0 systemd-machined[210538]: Machine qemu-90-instance-0000004a terminated.
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.308 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ee9e54-2cfe-4243-ac3e-b93eba764d9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.312 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[075c2fe4-0c90-41f1-b597-4796442c5c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.348 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bb9f2b-39e1-4f0f-b34a-2f94f0346c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.367 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8fe5e0-177a-4936-a08e-39d191b88387]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727295, 'reachable_time': 29620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321298, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.389 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[24563474-dfad-4deb-9773-7a8f02462746]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727305, 'tstamp': 727305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321299, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727308, 'tstamp': 727308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321299, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.392 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:19 compute-0 NetworkManager[50376]: <info>  [1765614799.3987] manager: (tapd001c32a-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.399 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.399 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.400 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.400 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.400 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.418 248514 INFO nova.virt.libvirt.driver [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance destroyed successfully.
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.418 248514 DEBUG nova.objects.instance [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.437 248514 DEBUG nova.virt.libvirt.vif [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:32:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-81806715',display_name='tempest-ServerActionsTestJSON-server-1327556776',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-81806715',id=74,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:33:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-yhg7rdps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',i
mage_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:15Z,user_data=None,user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=bb8c91ff-01cb-4fd5-ab69-005313784b57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.437 248514 DEBUG nova.network.os_vif_util [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.438 248514 DEBUG nova.network.os_vif_util [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.438 248514 DEBUG os_vif [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.440 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd001c32a-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.442 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.445 248514 INFO os_vif [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc')
Dec 13 08:33:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 249 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 202 op/s
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.717 248514 INFO nova.virt.libvirt.driver [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deleting instance files /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57_del
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.718 248514 INFO nova.virt.libvirt.driver [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deletion of /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57_del complete
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.781 248514 INFO nova.compute.manager [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Took 0.69 seconds to destroy the instance on the hypervisor.
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.782 248514 DEBUG oslo.service.loopingcall [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.783 248514 DEBUG nova.compute.manager [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:33:19 compute-0 nova_compute[248510]: 2025-12-13 08:33:19.784 248514 DEBUG nova.network.neutron [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:33:20 compute-0 nova_compute[248510]: 2025-12-13 08:33:20.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:20 compute-0 ceph-mon[76537]: pgmap v2082: 321 pgs: 321 active+clean; 249 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 202 op/s
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001877099840313598 of space, bias 1.0, pg target 0.5631299520940793 quantized to 32 (current 32)
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00066741625840538 of space, bias 1.0, pg target 0.200224877521614 quantized to 32 (current 32)
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.013637859932945e-07 of space, bias 4.0, pg target 0.0007216365431919533 quantized to 16 (current 32)
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:33:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.191 248514 DEBUG nova.network.neutron [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.218 248514 INFO nova.compute.manager [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Took 1.43 seconds to deallocate network for instance.
Dec 13 08:33:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.286 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.287 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.310 248514 DEBUG nova.compute.manager [req-95512933-be9f-4742-885a-d4b297a04e9d req-73a201fe-723d-4e9b-b990-a3e69adbbca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-unplugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.311 248514 DEBUG oslo_concurrency.lockutils [req-95512933-be9f-4742-885a-d4b297a04e9d req-73a201fe-723d-4e9b-b990-a3e69adbbca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.311 248514 DEBUG oslo_concurrency.lockutils [req-95512933-be9f-4742-885a-d4b297a04e9d req-73a201fe-723d-4e9b-b990-a3e69adbbca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.311 248514 DEBUG oslo_concurrency.lockutils [req-95512933-be9f-4742-885a-d4b297a04e9d req-73a201fe-723d-4e9b-b990-a3e69adbbca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.311 248514 DEBUG nova.compute.manager [req-95512933-be9f-4742-885a-d4b297a04e9d req-73a201fe-723d-4e9b-b990-a3e69adbbca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] No waiting events found dispatching network-vif-unplugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.311 248514 WARNING nova.compute.manager [req-95512933-be9f-4742-885a-d4b297a04e9d req-73a201fe-723d-4e9b-b990-a3e69adbbca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received unexpected event network-vif-unplugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 for instance with vm_state deleted and task_state None.
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.405 248514 DEBUG oslo_concurrency.processutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 233 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 480 KiB/s wr, 146 op/s
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.780 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614786.7785583, a9c6de9d-63c0-43a5-9d6e-be356e504837 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.781 248514 INFO nova.compute.manager [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] VM Stopped (Lifecycle Event)
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.804 248514 DEBUG nova.compute.manager [None req-69261dad-8bd9-461b-bd4d-31fe12630dda - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:33:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1598863755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.988 248514 DEBUG oslo_concurrency.processutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:21 compute-0 nova_compute[248510]: 2025-12-13 08:33:21.995 248514 DEBUG nova.compute.provider_tree [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:33:22 compute-0 nova_compute[248510]: 2025-12-13 08:33:22.026 248514 DEBUG nova.scheduler.client.report [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:33:22 compute-0 nova_compute[248510]: 2025-12-13 08:33:22.074 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:22 compute-0 nova_compute[248510]: 2025-12-13 08:33:22.167 248514 INFO nova.scheduler.client.report [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Deleted allocations for instance bb8c91ff-01cb-4fd5-ab69-005313784b57
Dec 13 08:33:22 compute-0 nova_compute[248510]: 2025-12-13 08:33:22.263 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:22 compute-0 ceph-mon[76537]: pgmap v2083: 321 pgs: 321 active+clean; 233 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 480 KiB/s wr, 146 op/s
Dec 13 08:33:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1598863755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 219 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 108 op/s
Dec 13 08:33:24 compute-0 nova_compute[248510]: 2025-12-13 08:33:24.412 248514 DEBUG nova.compute.manager [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:24 compute-0 nova_compute[248510]: 2025-12-13 08:33:24.413 248514 DEBUG oslo_concurrency.lockutils [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:24 compute-0 nova_compute[248510]: 2025-12-13 08:33:24.413 248514 DEBUG oslo_concurrency.lockutils [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:24 compute-0 nova_compute[248510]: 2025-12-13 08:33:24.413 248514 DEBUG oslo_concurrency.lockutils [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:24 compute-0 nova_compute[248510]: 2025-12-13 08:33:24.414 248514 DEBUG nova.compute.manager [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] No waiting events found dispatching network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:24 compute-0 nova_compute[248510]: 2025-12-13 08:33:24.414 248514 WARNING nova.compute.manager [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received unexpected event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 for instance with vm_state deleted and task_state None.
Dec 13 08:33:24 compute-0 nova_compute[248510]: 2025-12-13 08:33:24.414 248514 DEBUG nova.compute.manager [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-deleted-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:24 compute-0 nova_compute[248510]: 2025-12-13 08:33:24.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:24 compute-0 podman[321348]: 2025-12-13 08:33:24.988505978 +0000 UTC m=+0.073183408 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 08:33:25 compute-0 podman[321347]: 2025-12-13 08:33:25.014984176 +0000 UTC m=+0.096408885 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 13 08:33:25 compute-0 podman[321346]: 2025-12-13 08:33:25.031196039 +0000 UTC m=+0.116150146 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 08:33:25 compute-0 ceph-mon[76537]: pgmap v2084: 321 pgs: 321 active+clean; 219 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 108 op/s
Dec 13 08:33:25 compute-0 nova_compute[248510]: 2025-12-13 08:33:25.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 202 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.1 KiB/s wr, 107 op/s
Dec 13 08:33:26 compute-0 ceph-mon[76537]: pgmap v2085: 321 pgs: 321 active+clean; 202 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.1 KiB/s wr, 107 op/s
Dec 13 08:33:26 compute-0 nova_compute[248510]: 2025-12-13 08:33:26.202 248514 DEBUG oslo_concurrency.lockutils [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:26 compute-0 nova_compute[248510]: 2025-12-13 08:33:26.203 248514 DEBUG oslo_concurrency.lockutils [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:26 compute-0 nova_compute[248510]: 2025-12-13 08:33:26.203 248514 DEBUG nova.compute.manager [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:26 compute-0 nova_compute[248510]: 2025-12-13 08:33:26.209 248514 DEBUG nova.compute.manager [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 13 08:33:26 compute-0 nova_compute[248510]: 2025-12-13 08:33:26.210 248514 DEBUG nova.objects.instance [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:33:26 compute-0 nova_compute[248510]: 2025-12-13 08:33:26.249 248514 DEBUG nova.virt.libvirt.driver [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:33:26 compute-0 ovn_controller[148476]: 2025-12-13T08:33:26Z|00732|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:33:26 compute-0 ovn_controller[148476]: 2025-12-13T08:33:26Z|00733|binding|INFO|Releasing lport c8e8a31b-a5fe-4e2d-bc19-65995078988f from this chassis (sb_readonly=0)
Dec 13 08:33:26 compute-0 nova_compute[248510]: 2025-12-13 08:33:26.424 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:26 compute-0 nova_compute[248510]: 2025-12-13 08:33:26.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:27 compute-0 nova_compute[248510]: 2025-12-13 08:33:27.564 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:27 compute-0 nova_compute[248510]: 2025-12-13 08:33:27.565 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:27 compute-0 nova_compute[248510]: 2025-12-13 08:33:27.587 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:33:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 202 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.7 KiB/s wr, 70 op/s
Dec 13 08:33:27 compute-0 nova_compute[248510]: 2025-12-13 08:33:27.674 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:27 compute-0 nova_compute[248510]: 2025-12-13 08:33:27.675 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:27 compute-0 nova_compute[248510]: 2025-12-13 08:33:27.684 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:33:27 compute-0 nova_compute[248510]: 2025-12-13 08:33:27.685 248514 INFO nova.compute.claims [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:33:27 compute-0 nova_compute[248510]: 2025-12-13 08:33:27.842 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:33:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3044164758' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.446 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.453 248514 DEBUG nova.compute.provider_tree [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:33:28 compute-0 kernel: tapb5058a06-71 (unregistering): left promiscuous mode
Dec 13 08:33:28 compute-0 NetworkManager[50376]: <info>  [1765614808.4771] device (tapb5058a06-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.478 248514 DEBUG nova.scheduler.client.report [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:33:28 compute-0 ovn_controller[148476]: 2025-12-13T08:33:28Z|00734|binding|INFO|Releasing lport b5058a06-7109-4ac0-96d8-7562e66bee25 from this chassis (sb_readonly=0)
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:28 compute-0 ovn_controller[148476]: 2025-12-13T08:33:28Z|00735|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 down in Southbound
Dec 13 08:33:28 compute-0 ovn_controller[148476]: 2025-12-13T08:33:28Z|00736|binding|INFO|Removing iface tapb5058a06-71 ovn-installed in OVS
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.498 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.500 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.502 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43ee8730-a174-4859-9c95-b3ad6dc68839, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.503 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[06ef4e92-4fba-430a-bc1c-dbf73a195999]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.504 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace which is not needed anymore
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.512 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.513 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.517 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:28 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 13 08:33:28 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000003c.scope: Consumed 17.934s CPU time.
Dec 13 08:33:28 compute-0 systemd-machined[210538]: Machine qemu-79-instance-0000003c terminated.
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.578 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.579 248514 DEBUG nova.network.neutron [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.610 248514 INFO nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.638 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:33:28 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [NOTICE]   (314530) : haproxy version is 2.8.14-c23fe91
Dec 13 08:33:28 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [NOTICE]   (314530) : path to executable is /usr/sbin/haproxy
Dec 13 08:33:28 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [WARNING]  (314530) : Exiting Master process...
Dec 13 08:33:28 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [ALERT]    (314530) : Current worker (314533) exited with code 143 (Terminated)
Dec 13 08:33:28 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [WARNING]  (314530) : All workers exited. Exiting... (0)
Dec 13 08:33:28 compute-0 systemd[1]: libpod-f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce.scope: Deactivated successfully.
Dec 13 08:33:28 compute-0 podman[321454]: 2025-12-13 08:33:28.652036888 +0000 UTC m=+0.047234164 container died f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 08:33:28 compute-0 ceph-mon[76537]: pgmap v2086: 321 pgs: 321 active+clean; 202 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.7 KiB/s wr, 70 op/s
Dec 13 08:33:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3044164758' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce-userdata-shm.mount: Deactivated successfully.
Dec 13 08:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf5248c08c37cabcce8b13f5f177f9d27ffd14bb5e2bc32ec9054b3d6fc2fbb6-merged.mount: Deactivated successfully.
Dec 13 08:33:28 compute-0 podman[321454]: 2025-12-13 08:33:28.695488057 +0000 UTC m=+0.090685313 container cleanup f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:33:28 compute-0 systemd[1]: libpod-conmon-f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce.scope: Deactivated successfully.
Dec 13 08:33:28 compute-0 podman[321485]: 2025-12-13 08:33:28.766444889 +0000 UTC m=+0.047905871 container remove f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.774 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d932cab-084a-40e2-9f4b-7ca96d60e428]: (4, ('Sat Dec 13 08:33:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce)\nf362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce\nSat Dec 13 08:33:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce)\nf362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.777 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c4415d1-7d88-4d41-9969-bcef5582ecdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.778 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.780 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:28 compute-0 kernel: tap43ee8730-a0: left promiscuous mode
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.800 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.804 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f5f7802-afe6-463e-893e-d252ac8a1eb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.813 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.814 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.815 248514 INFO nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Creating image(s)
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.824 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d62b36a9-70ef-49ab-b625-c144f6133d76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.825 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79254998-3065-4ab6-b1ed-255bd818e53d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.838 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.841 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[73af01c0-6e8d-426a-913e-c61372831bee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727285, 'reachable_time': 40152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321526, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d43ee8730\x2da174\x2d4859\x2d9c95\x2db3ad6dc68839.mount: Deactivated successfully.
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.847 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:33:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.847 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4052a517-4fd2-4d47-9b6d-40f23ce3b2dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.866 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.893 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.899 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.946 248514 DEBUG nova.policy [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b310bdebec646949fad4ea1821b4c3f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.988 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.988 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.989 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:28 compute-0 nova_compute[248510]: 2025-12-13 08:33:28.990 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.016 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.021 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.102 248514 DEBUG nova.compute.manager [req-8fe51fe2-5292-4f88-9992-7f85303d5eda req-0168bafa-620a-4704-97e6-3e4bc3eeb595 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.102 248514 DEBUG oslo_concurrency.lockutils [req-8fe51fe2-5292-4f88-9992-7f85303d5eda req-0168bafa-620a-4704-97e6-3e4bc3eeb595 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.103 248514 DEBUG oslo_concurrency.lockutils [req-8fe51fe2-5292-4f88-9992-7f85303d5eda req-0168bafa-620a-4704-97e6-3e4bc3eeb595 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.103 248514 DEBUG oslo_concurrency.lockutils [req-8fe51fe2-5292-4f88-9992-7f85303d5eda req-0168bafa-620a-4704-97e6-3e4bc3eeb595 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.103 248514 DEBUG nova.compute.manager [req-8fe51fe2-5292-4f88-9992-7f85303d5eda req-0168bafa-620a-4704-97e6-3e4bc3eeb595 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.103 248514 WARNING nova.compute.manager [req-8fe51fe2-5292-4f88-9992-7f85303d5eda req-0168bafa-620a-4704-97e6-3e4bc3eeb595 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state powering-off.
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.271 248514 INFO nova.virt.libvirt.driver [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance shutdown successfully after 3 seconds.
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.280 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.281 248514 DEBUG nova.objects.instance [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.303 248514 DEBUG nova.compute.manager [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.351 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.383 248514 DEBUG oslo_concurrency.lockutils [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.419 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] resizing rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.502 248514 DEBUG nova.objects.instance [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'migration_context' on Instance uuid c98e670c-9bea-41c0-87ad-fcaba6d2be2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.520 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.521 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Ensure instance console log exists: /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.522 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.522 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.523 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 202 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.7 KiB/s wr, 71 op/s
Dec 13 08:33:29 compute-0 nova_compute[248510]: 2025-12-13 08:33:29.765 248514 DEBUG nova.network.neutron [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Successfully created port: 6b1400c2-c07a-450e-8698-6c2b60d0227e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:33:30 compute-0 nova_compute[248510]: 2025-12-13 08:33:30.431 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:30 compute-0 ceph-mon[76537]: pgmap v2087: 321 pgs: 321 active+clean; 202 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.7 KiB/s wr, 71 op/s
Dec 13 08:33:30 compute-0 nova_compute[248510]: 2025-12-13 08:33:30.903 248514 DEBUG nova.network.neutron [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Successfully updated port: 6b1400c2-c07a-450e-8698-6c2b60d0227e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:33:30 compute-0 nova_compute[248510]: 2025-12-13 08:33:30.932 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:33:30 compute-0 nova_compute[248510]: 2025-12-13 08:33:30.933 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquired lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:33:30 compute-0 nova_compute[248510]: 2025-12-13 08:33:30.933 248514 DEBUG nova.network.neutron [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.139 248514 DEBUG nova.compute.manager [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-changed-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.140 248514 DEBUG nova.compute.manager [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Refreshing instance network info cache due to event network-changed-6b1400c2-c07a-450e-8698-6c2b60d0227e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.140 248514 DEBUG oslo_concurrency.lockutils [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.189 248514 DEBUG nova.network.neutron [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:33:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.401 248514 DEBUG nova.objects.instance [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.427 248514 DEBUG oslo_concurrency.lockutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.428 248514 DEBUG oslo_concurrency.lockutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.428 248514 DEBUG nova.network.neutron [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.428 248514 DEBUG nova.objects.instance [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'info_cache' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.474 248514 DEBUG nova.compute.manager [req-893244e9-8666-4e2c-8b20-705e0531b1d9 req-c1f4ae4c-5e98-4875-a346-aafadb5df54f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.475 248514 DEBUG oslo_concurrency.lockutils [req-893244e9-8666-4e2c-8b20-705e0531b1d9 req-c1f4ae4c-5e98-4875-a346-aafadb5df54f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.475 248514 DEBUG oslo_concurrency.lockutils [req-893244e9-8666-4e2c-8b20-705e0531b1d9 req-c1f4ae4c-5e98-4875-a346-aafadb5df54f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.475 248514 DEBUG oslo_concurrency.lockutils [req-893244e9-8666-4e2c-8b20-705e0531b1d9 req-c1f4ae4c-5e98-4875-a346-aafadb5df54f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.476 248514 DEBUG nova.compute.manager [req-893244e9-8666-4e2c-8b20-705e0531b1d9 req-c1f4ae4c-5e98-4875-a346-aafadb5df54f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:31 compute-0 nova_compute[248510]: 2025-12-13 08:33:31.476 248514 WARNING nova.compute.manager [req-893244e9-8666-4e2c-8b20-705e0531b1d9 req-c1f4ae4c-5e98-4875-a346-aafadb5df54f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state stopped and task_state powering-on.
Dec 13 08:33:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 223 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 686 KiB/s wr, 29 op/s
Dec 13 08:33:32 compute-0 ceph-mon[76537]: pgmap v2088: 321 pgs: 321 active+clean; 223 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 686 KiB/s wr, 29 op/s
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.689 248514 DEBUG nova.network.neutron [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Updating instance_info_cache with network_info: [{"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.712 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Releasing lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.712 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Instance network_info: |[{"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.712 248514 DEBUG oslo_concurrency.lockutils [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.713 248514 DEBUG nova.network.neutron [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Refreshing network info cache for port 6b1400c2-c07a-450e-8698-6c2b60d0227e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.715 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Start _get_guest_xml network_info=[{"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.721 248514 WARNING nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.727 248514 DEBUG nova.virt.libvirt.host [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.727 248514 DEBUG nova.virt.libvirt.host [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.737 248514 DEBUG nova.virt.libvirt.host [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.738 248514 DEBUG nova.virt.libvirt.host [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.738 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.738 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.739 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.739 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.739 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.740 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.740 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.740 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.740 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.741 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.741 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.741 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:33:32 compute-0 nova_compute[248510]: 2025-12-13 08:33:32.744 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.288 248514 DEBUG nova.network.neutron [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:33:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3656069266' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.315 248514 DEBUG oslo_concurrency.lockutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.328 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.353 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.359 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.407 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.408 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.423 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.424 248514 DEBUG nova.objects.instance [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.439 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.444 248514 DEBUG nova.objects.instance [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.465 248514 DEBUG nova.virt.libvirt.vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.466 248514 DEBUG nova.network.os_vif_util [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.467 248514 DEBUG nova.network.os_vif_util [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.467 248514 DEBUG os_vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.470 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.470 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5058a06-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.477 248514 INFO os_vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.486 248514 DEBUG nova.virt.libvirt.driver [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start _get_guest_xml network_info=[{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.492 248514 WARNING nova.virt.libvirt.driver [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.501 248514 DEBUG nova.virt.libvirt.host [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.502 248514 DEBUG nova.virt.libvirt.host [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.507 248514 DEBUG nova.virt.libvirt.host [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.508 248514 DEBUG nova.virt.libvirt.host [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.509 248514 DEBUG nova.virt.libvirt.driver [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.510 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.511 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.511 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.511 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.511 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.511 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.512 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.512 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.512 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.513 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.513 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.513 248514 DEBUG nova.objects.instance [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.537 248514 DEBUG oslo_concurrency.processutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.593 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.595 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 245 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.7 MiB/s wr, 38 op/s
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.613 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.613 248514 INFO nova.compute.claims [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:33:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3656069266' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.853 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:33:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/331903218' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.967 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.969 248514 DEBUG nova.virt.libvirt.vif [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:33:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1353488107',display_name='tempest-ServerActionsTestOtherA-server-1353488107',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1353488107',id=75,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-9n9e56kl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsT
estOtherA-1325599242-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:33:28Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=c98e670c-9bea-41c0-87ad-fcaba6d2be2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.970 248514 DEBUG nova.network.os_vif_util [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.971 248514 DEBUG nova.network.os_vif_util [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.972 248514 DEBUG nova.objects.instance [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'pci_devices' on Instance uuid c98e670c-9bea-41c0-87ad-fcaba6d2be2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.978 248514 DEBUG nova.network.neutron [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Updated VIF entry in instance network info cache for port 6b1400c2-c07a-450e-8698-6c2b60d0227e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.979 248514 DEBUG nova.network.neutron [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Updating instance_info_cache with network_info: [{"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:33 compute-0 nova_compute[248510]: 2025-12-13 08:33:33.998 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:33:33 compute-0 nova_compute[248510]:   <uuid>c98e670c-9bea-41c0-87ad-fcaba6d2be2c</uuid>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   <name>instance-0000004b</name>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerActionsTestOtherA-server-1353488107</nova:name>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:33:32</nova:creationTime>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:33:33 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:33:33 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:33:33 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:33:33 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:33:33 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:33:33 compute-0 nova_compute[248510]:         <nova:user uuid="5b310bdebec646949fad4ea1821b4c3f">tempest-ServerActionsTestOtherA-1325599242-project-member</nova:user>
Dec 13 08:33:33 compute-0 nova_compute[248510]:         <nova:project uuid="b4d2999518df4b9f8ccbabe38976dc3c">tempest-ServerActionsTestOtherA-1325599242</nova:project>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:33:33 compute-0 nova_compute[248510]:         <nova:port uuid="6b1400c2-c07a-450e-8698-6c2b60d0227e">
Dec 13 08:33:33 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <system>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <entry name="serial">c98e670c-9bea-41c0-87ad-fcaba6d2be2c</entry>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <entry name="uuid">c98e670c-9bea-41c0-87ad-fcaba6d2be2c</entry>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     </system>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   <os>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   </os>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   <features>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   </features>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk">
Dec 13 08:33:33 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       </source>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:33:33 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk.config">
Dec 13 08:33:33 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       </source>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:33:33 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:ee:af:77"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <target dev="tap6b1400c2-c0"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/console.log" append="off"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <video>
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     </video>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:33:33 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:33:33 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:33:33 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:33:33 compute-0 nova_compute[248510]: </domain>
Dec 13 08:33:33 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.004 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Preparing to wait for external event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.005 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.005 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.006 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.007 248514 DEBUG nova.virt.libvirt.vif [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:33:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1353488107',display_name='tempest-ServerActionsTestOtherA-server-1353488107',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1353488107',id=75,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-9n9e56kl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:33:28Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=c98e670c-9bea-41c0-87ad-fcaba6d2be2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.007 248514 DEBUG nova.network.os_vif_util [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.008 248514 DEBUG nova.network.os_vif_util [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.009 248514 DEBUG os_vif [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.012 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.012 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.013 248514 DEBUG oslo_concurrency.lockutils [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.016 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.016 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b1400c2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.017 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b1400c2-c0, col_values=(('external_ids', {'iface-id': '6b1400c2-c07a-450e-8698-6c2b60d0227e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:af:77', 'vm-uuid': 'c98e670c-9bea-41c0-87ad-fcaba6d2be2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.018 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:34 compute-0 NetworkManager[50376]: <info>  [1765614814.0195] manager: (tap6b1400c2-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.022 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.024 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.025 248514 INFO os_vif [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0')
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.088 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.089 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.089 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No VIF found with MAC fa:16:3e:ee:af:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.090 248514 INFO nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Using config drive
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.112 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:33:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255405447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.172 248514 DEBUG oslo_concurrency.processutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.204 248514 DEBUG oslo_concurrency.processutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:33:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1088339276' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.417 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614799.4151325, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.418 248514 INFO nova.compute.manager [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Stopped (Lifecycle Event)
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.434 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.443 248514 DEBUG nova.compute.manager [None req-eda2318f-9fdd-404a-9040-41417c93f847 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.446 248514 DEBUG nova.compute.provider_tree [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.456 248514 INFO nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Creating config drive at /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/disk.config
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.462 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3gxmdtgw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.495 248514 DEBUG nova.scheduler.client.report [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.523 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.524 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.577 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.601 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3gxmdtgw" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.645 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.650 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/disk.config c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.690 248514 INFO nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:33:34 compute-0 ceph-mon[76537]: pgmap v2089: 321 pgs: 321 active+clean; 245 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.7 MiB/s wr, 38 op/s
Dec 13 08:33:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/331903218' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1255405447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1088339276' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.714 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.801 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.802 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.803 248514 INFO nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Creating image(s)
Dec 13 08:33:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:33:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2608325763' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.824 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.845 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.864 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.869 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.903 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/disk.config c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.906 248514 INFO nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Deleting local config drive /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/disk.config because it was imported into RBD.
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.907 248514 DEBUG oslo_concurrency.processutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.704s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.909 248514 DEBUG nova.virt.libvirt.vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.910 248514 DEBUG nova.network.os_vif_util [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.911 248514 DEBUG nova.network.os_vif_util [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.914 248514 DEBUG nova.objects.instance [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.936 248514 DEBUG nova.virt.libvirt.driver [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:33:34 compute-0 nova_compute[248510]:   <uuid>3ced27d6-a2a8-4ce3-a7e7-494270418542</uuid>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   <name>instance-0000003c</name>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerActionsTestJSON-server-286397061</nova:name>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:33:33</nova:creationTime>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:33:34 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:33:34 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:33:34 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:33:34 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:33:34 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:33:34 compute-0 nova_compute[248510]:         <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 08:33:34 compute-0 nova_compute[248510]:         <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:33:34 compute-0 nova_compute[248510]:         <nova:port uuid="b5058a06-7109-4ac0-96d8-7562e66bee25">
Dec 13 08:33:34 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <system>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <entry name="serial">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <entry name="uuid">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     </system>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   <os>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   </os>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   <features>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   </features>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk">
Dec 13 08:33:34 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       </source>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:33:34 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config">
Dec 13 08:33:34 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       </source>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:33:34 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:1f:d1:eb"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <target dev="tapb5058a06-71"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/console.log" append="off"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <video>
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     </video>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <input type="keyboard" bus="usb"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:33:34 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:33:34 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:33:34 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:33:34 compute-0 nova_compute[248510]: </domain>
Dec 13 08:33:34 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.938 248514 DEBUG nova.virt.libvirt.driver [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.939 248514 DEBUG nova.virt.libvirt.driver [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.940 248514 DEBUG nova.virt.libvirt.vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.940 248514 DEBUG nova.network.os_vif_util [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.941 248514 DEBUG nova.network.os_vif_util [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.941 248514 DEBUG os_vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.941 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.942 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.943 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.945 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.945 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5058a06-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.946 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5058a06-71, col_values=(('external_ids', {'iface-id': 'b5058a06-7109-4ac0-96d8-7562e66bee25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:d1:eb', 'vm-uuid': '3ced27d6-a2a8-4ce3-a7e7-494270418542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.947 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:34 compute-0 NetworkManager[50376]: <info>  [1765614814.9488] manager: (tapb5058a06-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.954 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.956 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:34 compute-0 kernel: tap6b1400c2-c0: entered promiscuous mode
Dec 13 08:33:34 compute-0 NetworkManager[50376]: <info>  [1765614814.9598] manager: (tap6b1400c2-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.957 248514 INFO os_vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.963 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:34 compute-0 ovn_controller[148476]: 2025-12-13T08:33:34Z|00737|binding|INFO|Claiming lport 6b1400c2-c07a-450e-8698-6c2b60d0227e for this chassis.
Dec 13 08:33:34 compute-0 ovn_controller[148476]: 2025-12-13T08:33:34Z|00738|binding|INFO|6b1400c2-c07a-450e-8698-6c2b60d0227e: Claiming fa:16:3e:ee:af:77 10.100.0.6
Dec 13 08:33:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:34.976 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:af:77 10.100.0.6'], port_security=['fa:16:3e:ee:af:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c98e670c-9bea-41c0-87ad-fcaba6d2be2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45afb483-a012-4442-b20c-edd0f1f0f374', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b1400c2-c07a-450e-8698-6c2b60d0227e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:34.978 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b1400c2-c07a-450e-8698-6c2b60d0227e in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f bound to our chassis
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.977 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.978 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.979 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:34 compute-0 nova_compute[248510]: 2025-12-13 08:33:34.979 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:34.980 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f
Dec 13 08:33:34 compute-0 ovn_controller[148476]: 2025-12-13T08:33:34Z|00739|binding|INFO|Setting lport 6b1400c2-c07a-450e-8698-6c2b60d0227e ovn-installed in OVS
Dec 13 08:33:34 compute-0 ovn_controller[148476]: 2025-12-13T08:33:34Z|00740|binding|INFO|Setting lport 6b1400c2-c07a-450e-8698-6c2b60d0227e up in Southbound
Dec 13 08:33:35 compute-0 systemd-udevd[321967]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.000 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[81b49983-a96f-427c-8704-ddf080332e9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 systemd-machined[210538]: New machine qemu-91-instance-0000004b.
Dec 13 08:33:35 compute-0 NetworkManager[50376]: <info>  [1765614815.0165] device (tap6b1400c2-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:33:35 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-0000004b.
Dec 13 08:33:35 compute-0 NetworkManager[50376]: <info>  [1765614815.0174] device (tap6b1400c2-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.066 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.071 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.082 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[65c443e4-0ef2-454d-80c8-bcfbc6900977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.086 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f60e638b-f557-47bf-bcfb-18fb3ea47bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.122 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[04490eac-5476-4907-a24b-3871e30fdf8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.142 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c11690-9e16-45b3-bdd2-ddf898ca0a2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321992, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.158 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[13c9c145-0a7b-4a3d-9363-391ea11bb05c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727416, 'tstamp': 727416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321998, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727421, 'tstamp': 727421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321998, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.161 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.164 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.170 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.171 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.171 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.172 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:35 compute-0 kernel: tapb5058a06-71: entered promiscuous mode
Dec 13 08:33:35 compute-0 NetworkManager[50376]: <info>  [1765614815.1733] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/318)
Dec 13 08:33:35 compute-0 systemd-udevd[321981]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:33:35 compute-0 ovn_controller[148476]: 2025-12-13T08:33:35Z|00741|binding|INFO|Claiming lport b5058a06-7109-4ac0-96d8-7562e66bee25 for this chassis.
Dec 13 08:33:35 compute-0 ovn_controller[148476]: 2025-12-13T08:33:35Z|00742|binding|INFO|b5058a06-7109-4ac0-96d8-7562e66bee25: Claiming fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.173 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.182 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.184 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.186 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:33:35 compute-0 NetworkManager[50376]: <info>  [1765614815.1904] device (tapb5058a06-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:33:35 compute-0 NetworkManager[50376]: <info>  [1765614815.1912] device (tapb5058a06-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:33:35 compute-0 ovn_controller[148476]: 2025-12-13T08:33:35Z|00743|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 ovn-installed in OVS
Dec 13 08:33:35 compute-0 ovn_controller[148476]: 2025-12-13T08:33:35Z|00744|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 up in Southbound
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.196 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.205 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[adbfcbeb-8d24-41f4-941f-e7dd3cdda83b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.206 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43ee8730-a1 in ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.208 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43ee8730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.209 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ec98436c-6437-4d36-9232-30f2974d2043]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.210 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4f30b058-33c5-4667-860d-cb27c80c553a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 systemd-machined[210538]: New machine qemu-92-instance-0000003c.
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.226 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8735f0-8a87-48ba-a8e2-eee967c328ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-0000003c.
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.251 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[049f3777-4654-440a-b831-b6ef22a884a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.289 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4639f5fd-6f8f-489d-8347-48b363b61b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 NetworkManager[50376]: <info>  [1765614815.2972] manager: (tap43ee8730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/319)
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.298 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe75b5f-50c1-48ed-8afa-2dfe2ec200bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.340 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b13a5f13-b06c-4009-83cf-a14d6cd6dd65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.343 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b0ce37-839f-42f6-8530-67f8ab59d2d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 NetworkManager[50376]: <info>  [1765614815.3694] device (tap43ee8730-a0): carrier: link connected
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.376 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5a282176-f98d-4dc8-8089-6a897408fd1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.395 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c79b87b4-34f9-47f7-97a4-3fa94660c1e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739258, 'reachable_time': 40636, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322057, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.416 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[473ae6a1-1c96-48bf-a973-5851c78eea0e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bbcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739258, 'tstamp': 739258}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322058, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.431 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.437 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5ec931-c27d-441e-a244-532ab5d85296]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739258, 'reachable_time': 40636, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322059, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.444 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.481 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da74336c-c477-49d2-a74b-4100bf0ee61b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.534 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] resizing rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.551 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[10710c00-dd2d-42fb-bd63-4a205b55be70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.553 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.553 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.553 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:35 compute-0 NetworkManager[50376]: <info>  [1765614815.5564] manager: (tap43ee8730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Dec 13 08:33:35 compute-0 kernel: tap43ee8730-a0: entered promiscuous mode
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.560 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:35 compute-0 ovn_controller[148476]: 2025-12-13T08:33:35Z|00745|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.578 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.579 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[410214dc-8770-4b67-818b-5b400a5857ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.580 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.581 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'env', 'PROCESS_TAG=haproxy-43ee8730-a174-4859-9c95-b3ad6dc68839', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43ee8730-a174-4859-9c95-b3ad6dc68839.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:33:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 248 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.640 248514 DEBUG nova.objects.instance [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'migration_context' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.658 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.658 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Ensure instance console log exists: /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.659 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.659 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.659 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.661 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.666 248514 WARNING nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.671 248514 DEBUG nova.virt.libvirt.host [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.672 248514 DEBUG nova.virt.libvirt.host [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.675 248514 DEBUG nova.virt.libvirt.host [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.676 248514 DEBUG nova.virt.libvirt.host [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.676 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.676 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.677 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.677 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.677 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.677 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.677 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.678 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.678 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.678 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.678 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.678 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.681 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2608325763' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.716 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614815.696059, c98e670c-9bea-41c0-87ad-fcaba6d2be2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.717 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] VM Started (Lifecycle Event)
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.748 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.752 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614815.69667, c98e670c-9bea-41c0-87ad-fcaba6d2be2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.753 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] VM Paused (Lifecycle Event)
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.781 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.785 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.812 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:33:35 compute-0 nova_compute[248510]: 2025-12-13 08:33:35.957 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.960 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:36 compute-0 podman[322225]: 2025-12-13 08:33:36.05176753 +0000 UTC m=+0.070993863 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:33:36 compute-0 podman[322225]: 2025-12-13 08:33:36.191671615 +0000 UTC m=+0.210897958 container create eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 08:33:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:33:36 compute-0 systemd[1]: Started libpod-conmon-eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959.scope.
Dec 13 08:33:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c4ee944e775e18453f7f96f287b0d219a31177c28ca489631fbbf96e995c67/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:36 compute-0 podman[322225]: 2025-12-13 08:33:36.313765257 +0000 UTC m=+0.332991580 container init eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.316 248514 DEBUG nova.compute.manager [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.319 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 3ced27d6-a2a8-4ce3-a7e7-494270418542 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.319 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614816.3178973, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.319 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)
Dec 13 08:33:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:33:36 compute-0 podman[322225]: 2025-12-13 08:33:36.321875009 +0000 UTC m=+0.341101312 container start eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true)
Dec 13 08:33:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968502624' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.326 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance rebooted successfully.
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.327 248514 DEBUG nova.compute.manager [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.343 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:36 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [NOTICE]   (322288) : New worker (322291) forked
Dec 13 08:33:36 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [NOTICE]   (322288) : Loading success.
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.352 248514 DEBUG nova.compute.manager [req-4a506106-98f9-4ae1-b51e-ad72afbd4834 req-26044286-1703-48f7-a58d-8fb86bf40a5b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.353 248514 DEBUG oslo_concurrency.lockutils [req-4a506106-98f9-4ae1-b51e-ad72afbd4834 req-26044286-1703-48f7-a58d-8fb86bf40a5b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.353 248514 DEBUG oslo_concurrency.lockutils [req-4a506106-98f9-4ae1-b51e-ad72afbd4834 req-26044286-1703-48f7-a58d-8fb86bf40a5b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.353 248514 DEBUG oslo_concurrency.lockutils [req-4a506106-98f9-4ae1-b51e-ad72afbd4834 req-26044286-1703-48f7-a58d-8fb86bf40a5b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.354 248514 DEBUG nova.compute.manager [req-4a506106-98f9-4ae1-b51e-ad72afbd4834 req-26044286-1703-48f7-a58d-8fb86bf40a5b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Processing event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.355 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.355 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.359 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.360 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.679s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.383 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.393 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:36.402 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.430 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (powering-on). Skip.
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.431 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614816.3181283, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.432 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Started (Lifecycle Event)
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.444 248514 DEBUG nova.compute.manager [req-b89358d6-4904-4a24-ba47-88b21270a8e0 req-d5114624-b951-4623-be6c-51c0e7d1adcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.444 248514 DEBUG oslo_concurrency.lockutils [req-b89358d6-4904-4a24-ba47-88b21270a8e0 req-d5114624-b951-4623-be6c-51c0e7d1adcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.445 248514 DEBUG oslo_concurrency.lockutils [req-b89358d6-4904-4a24-ba47-88b21270a8e0 req-d5114624-b951-4623-be6c-51c0e7d1adcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.445 248514 DEBUG oslo_concurrency.lockutils [req-b89358d6-4904-4a24-ba47-88b21270a8e0 req-d5114624-b951-4623-be6c-51c0e7d1adcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.446 248514 DEBUG nova.compute.manager [req-b89358d6-4904-4a24-ba47-88b21270a8e0 req-d5114624-b951-4623-be6c-51c0e7d1adcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.446 248514 WARNING nova.compute.manager [req-b89358d6-4904-4a24-ba47-88b21270a8e0 req-d5114624-b951-4623-be6c-51c0e7d1adcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.449 248514 INFO nova.virt.libvirt.driver [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Instance spawned successfully.
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.449 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.475 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.481 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.486 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.486 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.487 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.487 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.488 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.488 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.521 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614816.3589265, c98e670c-9bea-41c0-87ad-fcaba6d2be2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.521 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] VM Resumed (Lifecycle Event)
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.547 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.551 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.559 248514 INFO nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Took 7.75 seconds to spawn the instance on the hypervisor.
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.560 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.592 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.660 248514 INFO nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Took 9.01 seconds to build instance.
Dec 13 08:33:36 compute-0 nova_compute[248510]: 2025-12-13 08:33:36.686 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:36 compute-0 ceph-mon[76537]: pgmap v2090: 321 pgs: 321 active+clean; 248 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 13 08:33:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3968502624' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:33:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3284486636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:37 compute-0 nova_compute[248510]: 2025-12-13 08:33:37.004 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:37 compute-0 nova_compute[248510]: 2025-12-13 08:33:37.007 248514 DEBUG nova.objects.instance [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:37 compute-0 nova_compute[248510]: 2025-12-13 08:33:37.024 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:33:37 compute-0 nova_compute[248510]:   <uuid>84abd1d4-6b7b-459e-9783-fdc15d7e8bde</uuid>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   <name>instance-0000004c</name>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerShowV254Test-server-116887759</nova:name>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:33:35</nova:creationTime>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:33:37 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:33:37 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:33:37 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:33:37 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:33:37 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:33:37 compute-0 nova_compute[248510]:         <nova:user uuid="dfd2c0543e264c50b5b818f8b1bef249">tempest-ServerShowV254Test-1640329662-project-member</nova:user>
Dec 13 08:33:37 compute-0 nova_compute[248510]:         <nova:project uuid="94f9c66cba1c4ab683e5ee108b067558">tempest-ServerShowV254Test-1640329662</nova:project>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <system>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <entry name="serial">84abd1d4-6b7b-459e-9783-fdc15d7e8bde</entry>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <entry name="uuid">84abd1d4-6b7b-459e-9783-fdc15d7e8bde</entry>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     </system>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   <os>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   </os>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   <features>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   </features>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk">
Dec 13 08:33:37 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       </source>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:33:37 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config">
Dec 13 08:33:37 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       </source>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:33:37 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/console.log" append="off"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <video>
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     </video>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:33:37 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:33:37 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:33:37 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:33:37 compute-0 nova_compute[248510]: </domain>
Dec 13 08:33:37 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:33:37 compute-0 nova_compute[248510]: 2025-12-13 08:33:37.215 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:33:37 compute-0 nova_compute[248510]: 2025-12-13 08:33:37.215 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:33:37 compute-0 nova_compute[248510]: 2025-12-13 08:33:37.216 248514 INFO nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Using config drive
Dec 13 08:33:37 compute-0 nova_compute[248510]: 2025-12-13 08:33:37.242 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:37 compute-0 nova_compute[248510]: 2025-12-13 08:33:37.590 248514 INFO nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Creating config drive at /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config
Dec 13 08:33:37 compute-0 nova_compute[248510]: 2025-12-13 08:33:37.597 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8uf9svio execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 248 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec 13 08:33:37 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3284486636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:33:37 compute-0 nova_compute[248510]: 2025-12-13 08:33:37.737 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8uf9svio" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:37 compute-0 nova_compute[248510]: 2025-12-13 08:33:37.766 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:37 compute-0 nova_compute[248510]: 2025-12-13 08:33:37.770 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:38 compute-0 nova_compute[248510]: 2025-12-13 08:33:38.407 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:38 compute-0 nova_compute[248510]: 2025-12-13 08:33:38.409 248514 INFO nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deleting local config drive /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config because it was imported into RBD.
Dec 13 08:33:38 compute-0 systemd-machined[210538]: New machine qemu-93-instance-0000004c.
Dec 13 08:33:38 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-0000004c.
Dec 13 08:33:38 compute-0 ceph-mon[76537]: pgmap v2091: 321 pgs: 321 active+clean; 248 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.150 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614819.1500237, 84abd1d4-6b7b-459e-9783-fdc15d7e8bde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.151 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] VM Resumed (Lifecycle Event)
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.155 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.156 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.161 248514 INFO nova.virt.libvirt.driver [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance spawned successfully.
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.162 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.187 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.195 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.199 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.199 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.200 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.200 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.200 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.201 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.236 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.237 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614819.150698, 84abd1d4-6b7b-459e-9783-fdc15d7e8bde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.237 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] VM Started (Lifecycle Event)
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.281 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.284 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.292 248514 INFO nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Took 4.49 seconds to spawn the instance on the hypervisor.
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.292 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.318 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.372 248514 INFO nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Took 5.88 seconds to build instance.
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.391 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 264 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 132 op/s
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.902 248514 DEBUG nova.compute.manager [req-4cf9db59-8b04-46d3-83e5-c7cf5163771b req-9eb53be2-9135-4d8e-8ae2-d19f3ff0db42 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.902 248514 DEBUG oslo_concurrency.lockutils [req-4cf9db59-8b04-46d3-83e5-c7cf5163771b req-9eb53be2-9135-4d8e-8ae2-d19f3ff0db42 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.902 248514 DEBUG oslo_concurrency.lockutils [req-4cf9db59-8b04-46d3-83e5-c7cf5163771b req-9eb53be2-9135-4d8e-8ae2-d19f3ff0db42 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.902 248514 DEBUG oslo_concurrency.lockutils [req-4cf9db59-8b04-46d3-83e5-c7cf5163771b req-9eb53be2-9135-4d8e-8ae2-d19f3ff0db42 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.903 248514 DEBUG nova.compute.manager [req-4cf9db59-8b04-46d3-83e5-c7cf5163771b req-9eb53be2-9135-4d8e-8ae2-d19f3ff0db42 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] No waiting events found dispatching network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.903 248514 WARNING nova.compute.manager [req-4cf9db59-8b04-46d3-83e5-c7cf5163771b req-9eb53be2-9135-4d8e-8ae2-d19f3ff0db42 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received unexpected event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e for instance with vm_state active and task_state None.
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.949 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.990 248514 DEBUG nova.compute.manager [req-5e9c7ca5-10d9-4526-b343-2e8a226ed00b req-89a40027-47e2-4f82-a4d9-e9834932ba0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.991 248514 DEBUG oslo_concurrency.lockutils [req-5e9c7ca5-10d9-4526-b343-2e8a226ed00b req-89a40027-47e2-4f82-a4d9-e9834932ba0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.991 248514 DEBUG oslo_concurrency.lockutils [req-5e9c7ca5-10d9-4526-b343-2e8a226ed00b req-89a40027-47e2-4f82-a4d9-e9834932ba0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.991 248514 DEBUG oslo_concurrency.lockutils [req-5e9c7ca5-10d9-4526-b343-2e8a226ed00b req-89a40027-47e2-4f82-a4d9-e9834932ba0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.992 248514 DEBUG nova.compute.manager [req-5e9c7ca5-10d9-4526-b343-2e8a226ed00b req-89a40027-47e2-4f82-a4d9-e9834932ba0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:39 compute-0 nova_compute[248510]: 2025-12-13 08:33:39.992 248514 WARNING nova.compute.manager [req-5e9c7ca5-10d9-4526-b343-2e8a226ed00b req-89a40027-47e2-4f82-a4d9-e9834932ba0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 08:33:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:33:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:33:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:33:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:33:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:33:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:33:40 compute-0 nova_compute[248510]: 2025-12-13 08:33:40.434 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:40 compute-0 ceph-mon[76537]: pgmap v2092: 321 pgs: 321 active+clean; 264 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 132 op/s
Dec 13 08:33:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:33:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 295 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Dec 13 08:33:41 compute-0 nova_compute[248510]: 2025-12-13 08:33:41.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:33:42 compute-0 ceph-mon[76537]: pgmap v2093: 321 pgs: 321 active+clean; 295 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Dec 13 08:33:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:43.404 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 295 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.9 MiB/s wr, 223 op/s
Dec 13 08:33:43 compute-0 nova_compute[248510]: 2025-12-13 08:33:43.919 248514 DEBUG nova.compute.manager [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-changed-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:43 compute-0 nova_compute[248510]: 2025-12-13 08:33:43.919 248514 DEBUG nova.compute.manager [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Refreshing instance network info cache due to event network-changed-6b1400c2-c07a-450e-8698-6c2b60d0227e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:33:43 compute-0 nova_compute[248510]: 2025-12-13 08:33:43.919 248514 DEBUG oslo_concurrency.lockutils [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:33:43 compute-0 nova_compute[248510]: 2025-12-13 08:33:43.919 248514 DEBUG oslo_concurrency.lockutils [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:33:43 compute-0 nova_compute[248510]: 2025-12-13 08:33:43.919 248514 DEBUG nova.network.neutron [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Refreshing network info cache for port 6b1400c2-c07a-450e-8698-6c2b60d0227e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:33:44 compute-0 nova_compute[248510]: 2025-12-13 08:33:44.912 248514 INFO nova.compute.manager [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Rebuilding instance
Dec 13 08:33:44 compute-0 ceph-mon[76537]: pgmap v2094: 321 pgs: 321 active+clean; 295 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.9 MiB/s wr, 223 op/s
Dec 13 08:33:44 compute-0 nova_compute[248510]: 2025-12-13 08:33:44.954 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:45 compute-0 nova_compute[248510]: 2025-12-13 08:33:45.221 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:45 compute-0 nova_compute[248510]: 2025-12-13 08:33:45.241 248514 DEBUG nova.compute.manager [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:45 compute-0 nova_compute[248510]: 2025-12-13 08:33:45.297 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'pci_requests' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:45 compute-0 nova_compute[248510]: 2025-12-13 08:33:45.316 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:45 compute-0 nova_compute[248510]: 2025-12-13 08:33:45.328 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'resources' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:45 compute-0 nova_compute[248510]: 2025-12-13 08:33:45.339 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'migration_context' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:45 compute-0 nova_compute[248510]: 2025-12-13 08:33:45.352 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:33:45 compute-0 nova_compute[248510]: 2025-12-13 08:33:45.357 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:33:45 compute-0 nova_compute[248510]: 2025-12-13 08:33:45.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 295 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.9 MiB/s wr, 257 op/s
Dec 13 08:33:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:33:46 compute-0 nova_compute[248510]: 2025-12-13 08:33:46.688 248514 DEBUG nova.network.neutron [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Updated VIF entry in instance network info cache for port 6b1400c2-c07a-450e-8698-6c2b60d0227e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:33:46 compute-0 nova_compute[248510]: 2025-12-13 08:33:46.688 248514 DEBUG nova.network.neutron [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Updating instance_info_cache with network_info: [{"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:46 compute-0 nova_compute[248510]: 2025-12-13 08:33:46.730 248514 DEBUG oslo_concurrency.lockutils [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:33:47 compute-0 ceph-mon[76537]: pgmap v2095: 321 pgs: 321 active+clean; 295 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.9 MiB/s wr, 257 op/s
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.216 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.217 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.218 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.218 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.219 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.222 248514 INFO nova.compute.manager [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Terminating instance
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.225 248514 DEBUG nova.compute.manager [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:33:47 compute-0 kernel: tap6b1400c2-c0 (unregistering): left promiscuous mode
Dec 13 08:33:47 compute-0 NetworkManager[50376]: <info>  [1765614827.2618] device (tap6b1400c2-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:33:47 compute-0 ovn_controller[148476]: 2025-12-13T08:33:47Z|00746|binding|INFO|Releasing lport 6b1400c2-c07a-450e-8698-6c2b60d0227e from this chassis (sb_readonly=0)
Dec 13 08:33:47 compute-0 ovn_controller[148476]: 2025-12-13T08:33:47Z|00747|binding|INFO|Setting lport 6b1400c2-c07a-450e-8698-6c2b60d0227e down in Southbound
Dec 13 08:33:47 compute-0 ovn_controller[148476]: 2025-12-13T08:33:47Z|00748|binding|INFO|Removing iface tap6b1400c2-c0 ovn-installed in OVS
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.333 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:af:77 10.100.0.6'], port_security=['fa:16:3e:ee:af:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c98e670c-9bea-41c0-87ad-fcaba6d2be2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b1400c2-c07a-450e-8698-6c2b60d0227e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.335 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b1400c2-c07a-450e-8698-6c2b60d0227e in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f unbound from our chassis
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.337 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.358 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aa028625-7a5a-40de-a226-0e58d0009840]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:47 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Dec 13 08:33:47 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004b.scope: Consumed 11.415s CPU time.
Dec 13 08:33:47 compute-0 systemd-machined[210538]: Machine qemu-91-instance-0000004b terminated.
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.392 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d507412f-f2b3-4ec9-8186-a02cf90c1884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.396 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9afd2b32-f3a8-4fc3-a61f-65bfadddc3e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.426 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a71c80b0-8ee2-4957-9c0c-1484f5906cfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.452 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c432448b-6817-47e6-bc27-c52b5efc629f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322467, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.467 248514 INFO nova.virt.libvirt.driver [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Instance destroyed successfully.
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.468 248514 DEBUG nova.objects.instance [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'resources' on Instance uuid c98e670c-9bea-41c0-87ad-fcaba6d2be2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.477 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7f42b334-183e-490b-adc0-12ee1624f5bc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727416, 'tstamp': 727416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322477, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727421, 'tstamp': 727421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322477, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.480 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.481 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.483 248514 DEBUG nova.virt.libvirt.vif [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:33:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1353488107',display_name='tempest-ServerActionsTestOtherA-server-1353488107',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1353488107',id=75,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:33:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-9n9e56kl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:36Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=c98e670c-9bea-41c0-87ad-fcaba6d2be2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.484 248514 DEBUG nova.network.os_vif_util [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.484 248514 DEBUG nova.network.os_vif_util [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.485 248514 DEBUG os_vif [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.488 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b1400c2-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.493 248514 INFO os_vif [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0')
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.499 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.499 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.499 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.499 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 295 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 245 op/s
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.820 248514 DEBUG nova.objects.instance [None req-dda55a40-86ef-4341-99c9-9af150cfd938 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.865 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614827.865392, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.866 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Paused (Lifecycle Event)
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.893 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.899 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:33:47 compute-0 nova_compute[248510]: 2025-12-13 08:33:47.931 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (suspending). Skip.
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.012 248514 INFO nova.virt.libvirt.driver [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Deleting instance files /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c_del
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.014 248514 INFO nova.virt.libvirt.driver [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Deletion of /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c_del complete
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.040 248514 DEBUG nova.compute.manager [req-12798290-313a-4564-83f4-a73ed90f68f7 req-243acd7b-f00d-4612-a8c8-f2250aa16f11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-vif-unplugged-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.041 248514 DEBUG oslo_concurrency.lockutils [req-12798290-313a-4564-83f4-a73ed90f68f7 req-243acd7b-f00d-4612-a8c8-f2250aa16f11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.041 248514 DEBUG oslo_concurrency.lockutils [req-12798290-313a-4564-83f4-a73ed90f68f7 req-243acd7b-f00d-4612-a8c8-f2250aa16f11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.041 248514 DEBUG oslo_concurrency.lockutils [req-12798290-313a-4564-83f4-a73ed90f68f7 req-243acd7b-f00d-4612-a8c8-f2250aa16f11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.041 248514 DEBUG nova.compute.manager [req-12798290-313a-4564-83f4-a73ed90f68f7 req-243acd7b-f00d-4612-a8c8-f2250aa16f11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] No waiting events found dispatching network-vif-unplugged-6b1400c2-c07a-450e-8698-6c2b60d0227e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.042 248514 DEBUG nova.compute.manager [req-12798290-313a-4564-83f4-a73ed90f68f7 req-243acd7b-f00d-4612-a8c8-f2250aa16f11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-vif-unplugged-6b1400c2-c07a-450e-8698-6c2b60d0227e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.066 248514 INFO nova.compute.manager [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Took 0.84 seconds to destroy the instance on the hypervisor.
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.066 248514 DEBUG oslo.service.loopingcall [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.067 248514 DEBUG nova.compute.manager [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.067 248514 DEBUG nova.network.neutron [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.128 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.129 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.129 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:33:48 compute-0 ceph-mon[76537]: pgmap v2096: 321 pgs: 321 active+clean; 295 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 245 op/s
Dec 13 08:33:48 compute-0 kernel: tapb5058a06-71 (unregistering): left promiscuous mode
Dec 13 08:33:48 compute-0 NetworkManager[50376]: <info>  [1765614828.8532] device (tapb5058a06-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:33:48 compute-0 ovn_controller[148476]: 2025-12-13T08:33:48Z|00749|binding|INFO|Releasing lport b5058a06-7109-4ac0-96d8-7562e66bee25 from this chassis (sb_readonly=0)
Dec 13 08:33:48 compute-0 ovn_controller[148476]: 2025-12-13T08:33:48Z|00750|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 down in Southbound
Dec 13 08:33:48 compute-0 ovn_controller[148476]: 2025-12-13T08:33:48Z|00751|binding|INFO|Removing iface tapb5058a06-71 ovn-installed in OVS
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.860 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:48.868 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.864 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:48.870 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis
Dec 13 08:33:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:48.872 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43ee8730-a174-4859-9c95-b3ad6dc68839, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:33:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:48.873 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f828d7-4b88-4557-9cb3-68f52f130c75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:48.879 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace which is not needed anymore
Dec 13 08:33:48 compute-0 nova_compute[248510]: 2025-12-13 08:33:48.884 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:48 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 13 08:33:48 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000003c.scope: Consumed 12.492s CPU time.
Dec 13 08:33:48 compute-0 systemd-machined[210538]: Machine qemu-92-instance-0000003c terminated.
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.017 248514 DEBUG nova.compute.manager [None req-dda55a40-86ef-4341-99c9-9af150cfd938 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:49 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [NOTICE]   (322288) : haproxy version is 2.8.14-c23fe91
Dec 13 08:33:49 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [NOTICE]   (322288) : path to executable is /usr/sbin/haproxy
Dec 13 08:33:49 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [WARNING]  (322288) : Exiting Master process...
Dec 13 08:33:49 compute-0 systemd[1]: libpod-eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959.scope: Deactivated successfully.
Dec 13 08:33:49 compute-0 conmon[322282]: conmon eab32040dc9843d1afa4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959.scope/container/memory.events
Dec 13 08:33:49 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [ALERT]    (322288) : Current worker (322291) exited with code 143 (Terminated)
Dec 13 08:33:49 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [WARNING]  (322288) : All workers exited. Exiting... (0)
Dec 13 08:33:49 compute-0 podman[322525]: 2025-12-13 08:33:49.05581231 +0000 UTC m=+0.056481094 container died eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.164 248514 DEBUG nova.network.neutron [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.189 248514 INFO nova.compute.manager [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Took 1.12 seconds to deallocate network for instance.
Dec 13 08:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959-userdata-shm.mount: Deactivated successfully.
Dec 13 08:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9c4ee944e775e18453f7f96f287b0d219a31177c28ca489631fbbf96e995c67-merged.mount: Deactivated successfully.
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.246 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.247 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:49 compute-0 podman[322525]: 2025-12-13 08:33:49.347933214 +0000 UTC m=+0.348601998 container cleanup eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 08:33:49 compute-0 systemd[1]: libpod-conmon-eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959.scope: Deactivated successfully.
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.357 248514 DEBUG nova.compute.manager [req-32cb44e0-81f5-4930-982c-f5d51eec5d24 req-59a2b2b3-3ab6-44e5-8a52-949a4a0586ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.358 248514 DEBUG oslo_concurrency.lockutils [req-32cb44e0-81f5-4930-982c-f5d51eec5d24 req-59a2b2b3-3ab6-44e5-8a52-949a4a0586ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.358 248514 DEBUG oslo_concurrency.lockutils [req-32cb44e0-81f5-4930-982c-f5d51eec5d24 req-59a2b2b3-3ab6-44e5-8a52-949a4a0586ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.359 248514 DEBUG oslo_concurrency.lockutils [req-32cb44e0-81f5-4930-982c-f5d51eec5d24 req-59a2b2b3-3ab6-44e5-8a52-949a4a0586ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.359 248514 DEBUG nova.compute.manager [req-32cb44e0-81f5-4930-982c-f5d51eec5d24 req-59a2b2b3-3ab6-44e5-8a52-949a4a0586ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.360 248514 WARNING nova.compute.manager [req-32cb44e0-81f5-4930-982c-f5d51eec5d24 req-59a2b2b3-3ab6-44e5-8a52-949a4a0586ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state suspended and task_state None.
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.367 248514 DEBUG oslo_concurrency.processutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:49 compute-0 podman[322567]: 2025-12-13 08:33:49.422523197 +0000 UTC m=+0.050561167 container remove eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:33:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.432 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b4227b17-968c-40c7-8aef-551ec489642a]: (4, ('Sat Dec 13 08:33:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959)\neab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959\nSat Dec 13 08:33:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959)\neab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.434 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57404be3-359d-40b6-bee0-ab785be362f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.434 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:49 compute-0 kernel: tap43ee8730-a0: left promiscuous mode
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.440 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.458 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.461 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2222e729-1519-4b81-8bf9-3aafccd2e046]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.473 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[18cf5e86-3f75-451f-bf79-0ca7f4b15bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.475 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c019fa9-afca-4e30-ac29-9491b514f8ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3252d9a2-27a8-43d1-b01f-09d07a88d672]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739249, 'reachable_time': 26335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322586, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d43ee8730\x2da174\x2d4859\x2d9c95\x2db3ad6dc68839.mount: Deactivated successfully.
Dec 13 08:33:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.498 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:33:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.498 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f7f8bd-988c-433b-964f-011bb5c05948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 269 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 1.8 MiB/s wr, 287 op/s
Dec 13 08:33:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:33:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3300441500' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.953 248514 DEBUG oslo_concurrency.processutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.959 248514 DEBUG nova.compute.provider_tree [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:33:49 compute-0 nova_compute[248510]: 2025-12-13 08:33:49.980 248514 DEBUG nova.scheduler.client.report [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:33:50 compute-0 nova_compute[248510]: 2025-12-13 08:33:50.010 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:50 compute-0 nova_compute[248510]: 2025-12-13 08:33:50.035 248514 INFO nova.scheduler.client.report [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Deleted allocations for instance c98e670c-9bea-41c0-87ad-fcaba6d2be2c
Dec 13 08:33:50 compute-0 nova_compute[248510]: 2025-12-13 08:33:50.135 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:50 compute-0 nova_compute[248510]: 2025-12-13 08:33:50.144 248514 DEBUG nova.compute.manager [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:50 compute-0 nova_compute[248510]: 2025-12-13 08:33:50.145 248514 DEBUG oslo_concurrency.lockutils [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:50 compute-0 nova_compute[248510]: 2025-12-13 08:33:50.145 248514 DEBUG oslo_concurrency.lockutils [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:50 compute-0 nova_compute[248510]: 2025-12-13 08:33:50.145 248514 DEBUG oslo_concurrency.lockutils [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:50 compute-0 nova_compute[248510]: 2025-12-13 08:33:50.146 248514 DEBUG nova.compute.manager [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] No waiting events found dispatching network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:50 compute-0 nova_compute[248510]: 2025-12-13 08:33:50.146 248514 WARNING nova.compute.manager [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received unexpected event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e for instance with vm_state deleted and task_state None.
Dec 13 08:33:50 compute-0 nova_compute[248510]: 2025-12-13 08:33:50.146 248514 DEBUG nova.compute.manager [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-vif-deleted-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:50 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Dec 13 08:33:50 compute-0 nova_compute[248510]: 2025-12-13 08:33:50.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:50 compute-0 ceph-mon[76537]: pgmap v2097: 321 pgs: 321 active+clean; 269 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 1.8 MiB/s wr, 287 op/s
Dec 13 08:33:50 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3300441500' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.089 248514 INFO nova.compute.manager [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Resuming
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.090 248514 DEBUG nova.objects.instance [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.155 248514 DEBUG oslo_concurrency.lockutils [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.155 248514 DEBUG oslo_concurrency.lockutils [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.155 248514 DEBUG nova.network.neutron [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:33:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.349 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updating instance_info_cache with network_info: [{"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.372 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.372 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.442 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.442 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.443 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.443 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.443 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.444 248514 INFO nova.compute.manager [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Terminating instance
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.446 248514 DEBUG nova.compute.manager [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:33:51 compute-0 kernel: tapb2ee664d-ff (unregistering): left promiscuous mode
Dec 13 08:33:51 compute-0 NetworkManager[50376]: <info>  [1765614831.4990] device (tapb2ee664d-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:33:51 compute-0 ovn_controller[148476]: 2025-12-13T08:33:51Z|00752|binding|INFO|Releasing lport b2ee664d-ff99-4665-a5cc-70bd7aeb1546 from this chassis (sb_readonly=0)
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.506 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:51 compute-0 ovn_controller[148476]: 2025-12-13T08:33:51Z|00753|binding|INFO|Setting lport b2ee664d-ff99-4665-a5cc-70bd7aeb1546 down in Southbound
Dec 13 08:33:51 compute-0 ovn_controller[148476]: 2025-12-13T08:33:51Z|00754|binding|INFO|Removing iface tapb2ee664d-ff ovn-installed in OVS
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.509 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:51 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000045.scope: Deactivated successfully.
Dec 13 08:33:51 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000045.scope: Consumed 18.401s CPU time.
Dec 13 08:33:51 compute-0 systemd-machined[210538]: Machine qemu-80-instance-00000045 terminated.
Dec 13 08:33:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 249 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.2 MiB/s wr, 203 op/s
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.675 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.689 248514 INFO nova.virt.libvirt.driver [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Instance destroyed successfully.
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.690 248514 DEBUG nova.objects.instance [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'resources' on Instance uuid ce9adb21-8832-4d3e-867e-b0b49bdb6850 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:51.703 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:3a:53 10.100.0.3'], port_security=['fa:16:3e:63:3a:53 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ce9adb21-8832-4d3e-867e-b0b49bdb6850', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4a5bf7b-dd16-4d92-81b7-546493ad4db6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b2ee664d-ff99-4665-a5cc-70bd7aeb1546) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:51.705 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b2ee664d-ff99-4665-a5cc-70bd7aeb1546 in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f unbound from our chassis
Dec 13 08:33:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:51.707 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:33:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:51.709 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[13ef923e-abfc-4bb2-8a95-fe6ba2bf1c7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:51.709 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f namespace which is not needed anymore
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.721 248514 DEBUG nova.virt.libvirt.vif [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:31:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2099788276',display_name='tempest-ServerActionsTestOtherA-server-2099788276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2099788276',id=69,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxJgtD1FEWUw7tJ8pibGATgtrZITyeOCdRqSR73HftGqNDavcdP1XHx0prQ71D2yOjUOO7ZJAEgnPXlpVAfW1QGvCbp1snKSBX1V/4lwFnsJaGPS7QewWPvSMs5UYFhVA==',key_name='tempest-keypair-180617026',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:31:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-b9qwtikj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=ce9adb21-8832-4d3e-867e-b0b49bdb6850,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.721 248514 DEBUG nova.network.os_vif_util [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.722 248514 DEBUG nova.network.os_vif_util [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.724 248514 DEBUG os_vif [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.727 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2ee664d-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.736 248514 INFO os_vif [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff')
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.799 248514 DEBUG nova.compute.manager [req-ab615d12-0f53-48b0-b014-cdfff62139a1 req-418b5083-d697-45bc-83c6-c87f76935471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.799 248514 DEBUG oslo_concurrency.lockutils [req-ab615d12-0f53-48b0-b014-cdfff62139a1 req-418b5083-d697-45bc-83c6-c87f76935471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.800 248514 DEBUG oslo_concurrency.lockutils [req-ab615d12-0f53-48b0-b014-cdfff62139a1 req-418b5083-d697-45bc-83c6-c87f76935471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.800 248514 DEBUG oslo_concurrency.lockutils [req-ab615d12-0f53-48b0-b014-cdfff62139a1 req-418b5083-d697-45bc-83c6-c87f76935471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.800 248514 DEBUG nova.compute.manager [req-ab615d12-0f53-48b0-b014-cdfff62139a1 req-418b5083-d697-45bc-83c6-c87f76935471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:51 compute-0 nova_compute[248510]: 2025-12-13 08:33:51.800 248514 WARNING nova.compute.manager [req-ab615d12-0f53-48b0-b014-cdfff62139a1 req-418b5083-d697-45bc-83c6-c87f76935471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state suspended and task_state resuming.
Dec 13 08:33:51 compute-0 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [NOTICE]   (314684) : haproxy version is 2.8.14-c23fe91
Dec 13 08:33:51 compute-0 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [NOTICE]   (314684) : path to executable is /usr/sbin/haproxy
Dec 13 08:33:51 compute-0 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [WARNING]  (314684) : Exiting Master process...
Dec 13 08:33:51 compute-0 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [WARNING]  (314684) : Exiting Master process...
Dec 13 08:33:51 compute-0 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [ALERT]    (314684) : Current worker (314686) exited with code 143 (Terminated)
Dec 13 08:33:51 compute-0 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [WARNING]  (314684) : All workers exited. Exiting... (0)
Dec 13 08:33:51 compute-0 systemd[1]: libpod-2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a.scope: Deactivated successfully.
Dec 13 08:33:51 compute-0 conmon[314661]: conmon 2eb827e287b29525194b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a.scope/container/memory.events
Dec 13 08:33:51 compute-0 podman[322663]: 2025-12-13 08:33:51.882520447 +0000 UTC m=+0.050494255 container died 2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:33:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-abe1480fb3e22837a053f76acd0fd060e8f26ee729f63461240b6103c38efbe6-merged.mount: Deactivated successfully.
Dec 13 08:33:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a-userdata-shm.mount: Deactivated successfully.
Dec 13 08:33:51 compute-0 podman[322663]: 2025-12-13 08:33:51.93095432 +0000 UTC m=+0.098928108 container cleanup 2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 08:33:51 compute-0 systemd[1]: libpod-conmon-2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a.scope: Deactivated successfully.
Dec 13 08:33:52 compute-0 nova_compute[248510]: 2025-12-13 08:33:52.036 248514 INFO nova.virt.libvirt.driver [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Deleting instance files /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850_del
Dec 13 08:33:52 compute-0 nova_compute[248510]: 2025-12-13 08:33:52.038 248514 INFO nova.virt.libvirt.driver [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Deletion of /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850_del complete
Dec 13 08:33:52 compute-0 podman[322691]: 2025-12-13 08:33:52.042456929 +0000 UTC m=+0.074473030 container remove 2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:33:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.054 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[306b4d1e-97e2-4787-834e-b7d736127ccc]: (4, ('Sat Dec 13 08:33:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f (2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a)\n2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a\nSat Dec 13 08:33:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f (2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a)\n2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.056 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7468d828-9f13-4b59-8db1-ff096dd62ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.058 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:52 compute-0 nova_compute[248510]: 2025-12-13 08:33:52.061 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:52 compute-0 kernel: tap409fc0bb-c0: left promiscuous mode
Dec 13 08:33:52 compute-0 nova_compute[248510]: 2025-12-13 08:33:52.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.091 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4b13ce-ddee-4868-b1ac-4d4088033042]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.105 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[47519533-a880-417a-a442-f73b9a0a0470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.106 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[95cdd325-e0de-473b-8d1a-14ed6af56600]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:52 compute-0 nova_compute[248510]: 2025-12-13 08:33:52.115 248514 INFO nova.compute.manager [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Took 0.67 seconds to destroy the instance on the hypervisor.
Dec 13 08:33:52 compute-0 nova_compute[248510]: 2025-12-13 08:33:52.117 248514 DEBUG oslo.service.loopingcall [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:33:52 compute-0 nova_compute[248510]: 2025-12-13 08:33:52.119 248514 DEBUG nova.compute.manager [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:33:52 compute-0 nova_compute[248510]: 2025-12-13 08:33:52.120 248514 DEBUG nova.network.neutron [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:33:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.130 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62a28e36-5abf-4677-98a3-7f476ffe9ed9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727388, 'reachable_time': 24704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322713, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.132 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:33:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.132 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b31f1dbf-e2ca-46a4-ab3a-b2778dc481e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d409fc0bb\x2dcaf3\x2d4b90\x2d9e44\x2d83ff383dc88f.mount: Deactivated successfully.
Dec 13 08:33:52 compute-0 nova_compute[248510]: 2025-12-13 08:33:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:33:52 compute-0 ceph-mon[76537]: pgmap v2098: 321 pgs: 321 active+clean; 249 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.2 MiB/s wr, 203 op/s
Dec 13 08:33:52 compute-0 nova_compute[248510]: 2025-12-13 08:33:52.947 248514 DEBUG nova.network.neutron [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.213 248514 DEBUG oslo_concurrency.lockutils [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.221 248514 DEBUG nova.virt.libvirt.vif [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.221 248514 DEBUG nova.network.os_vif_util [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.222 248514 DEBUG nova.network.os_vif_util [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.223 248514 DEBUG os_vif [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.224 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.224 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.225 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.228 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.229 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5058a06-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.229 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5058a06-71, col_values=(('external_ids', {'iface-id': 'b5058a06-7109-4ac0-96d8-7562e66bee25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:d1:eb', 'vm-uuid': '3ced27d6-a2a8-4ce3-a7e7-494270418542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.230 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.230 248514 INFO os_vif [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.256 248514 DEBUG nova.objects.instance [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:53 compute-0 kernel: tapb5058a06-71: entered promiscuous mode
Dec 13 08:33:53 compute-0 NetworkManager[50376]: <info>  [1765614833.4121] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/321)
Dec 13 08:33:53 compute-0 systemd-udevd[322614]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.414 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:53 compute-0 ovn_controller[148476]: 2025-12-13T08:33:53Z|00755|binding|INFO|Claiming lport b5058a06-7109-4ac0-96d8-7562e66bee25 for this chassis.
Dec 13 08:33:53 compute-0 ovn_controller[148476]: 2025-12-13T08:33:53Z|00756|binding|INFO|b5058a06-7109-4ac0-96d8-7562e66bee25: Claiming fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.425 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.426 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.427 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:33:53 compute-0 NetworkManager[50376]: <info>  [1765614833.4291] device (tapb5058a06-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:33:53 compute-0 NetworkManager[50376]: <info>  [1765614833.4300] device (tapb5058a06-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.439 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[20902f1b-dde0-44dc-a9c2-a93bfa926fee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.440 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43ee8730-a1 in ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.442 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43ee8730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.443 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3d80fd-941c-42ce-8e53-e4ed810cbb76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.443 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d9c8ac-cce1-4ef6-88b4-51b6c1ca6fa5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 ovn_controller[148476]: 2025-12-13T08:33:53Z|00757|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 ovn-installed in OVS
Dec 13 08:33:53 compute-0 ovn_controller[148476]: 2025-12-13T08:33:53Z|00758|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 up in Southbound
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.446 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:53 compute-0 systemd-machined[210538]: New machine qemu-94-instance-0000003c.
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.458 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[02a4d913-674d-4eb0-b45a-3837f9c42d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-0000003c.
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.484 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3866408b-3084-497e-a135-4f1e7b74f849]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.521 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0966b1a7-239f-4562-bf6a-1fc710c5096f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 NetworkManager[50376]: <info>  [1765614833.5277] manager: (tap43ee8730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/322)
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.527 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[48964b7f-a792-4577-9536-038b88b216e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.558 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d29914ea-9b9a-472d-bf8c-42c2a222d739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.563 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6c434a-dd22-4add-b67f-5719f701a48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 NetworkManager[50376]: <info>  [1765614833.5817] device (tap43ee8730-a0): carrier: link connected
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.585 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[12e9fe47-009b-497a-9ca9-c2ad239011a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.604 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1fdf4631-150b-433d-b45e-2fef4bc32856]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741079, 'reachable_time': 39670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322758, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 228 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 402 KiB/s wr, 152 op/s
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.623 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f716781f-8c6a-4470-a900-54c8c092805a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bbcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741079, 'tstamp': 741079}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322759, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.642 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[07393a51-16f2-4a8e-b9d2-9c07065d5f9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741079, 'reachable_time': 39670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322760, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.674 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b91cab-4f3f-43d3-81fd-80c79a4b1fad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.749 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[188be0a6-8adb-40ff-8904-cac2020227a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.750 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:53 compute-0 NetworkManager[50376]: <info>  [1765614833.7550] manager: (tap43ee8730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Dec 13 08:33:53 compute-0 kernel: tap43ee8730-a0: entered promiscuous mode
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.759 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:53 compute-0 ovn_controller[148476]: 2025-12-13T08:33:53Z|00759|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.762 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.791 248514 DEBUG nova.network.neutron [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.793 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.795 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.795 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8badeb85-de21-4f41-8c31-783b1fcd9f6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.795 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.795 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.796 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.796 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.796 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:33:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.799 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'env', 'PROCESS_TAG=haproxy-43ee8730-a174-4859-9c95-b3ad6dc68839', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43ee8730-a174-4859-9c95-b3ad6dc68839.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.831 248514 INFO nova.compute.manager [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Took 1.71 seconds to deallocate network for instance.
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.909 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:53 compute-0 nova_compute[248510]: 2025-12-13 08:33:53.910 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.020 248514 DEBUG oslo_concurrency.processutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:54 compute-0 podman[322812]: 2025-12-13 08:33:54.200301708 +0000 UTC m=+0.048052365 container create e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.202 248514 DEBUG nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-vif-unplugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.203 248514 DEBUG oslo_concurrency.lockutils [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.203 248514 DEBUG oslo_concurrency.lockutils [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.204 248514 DEBUG oslo_concurrency.lockutils [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.204 248514 DEBUG nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] No waiting events found dispatching network-vif-unplugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.204 248514 WARNING nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received unexpected event network-vif-unplugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 for instance with vm_state deleted and task_state None.
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.204 248514 DEBUG nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.205 248514 DEBUG oslo_concurrency.lockutils [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.205 248514 DEBUG oslo_concurrency.lockutils [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.205 248514 DEBUG oslo_concurrency.lockutils [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.205 248514 DEBUG nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] No waiting events found dispatching network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.206 248514 WARNING nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received unexpected event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 for instance with vm_state deleted and task_state None.
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.206 248514 DEBUG nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-vif-deleted-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:54 compute-0 systemd[1]: Started libpod-conmon-e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b.scope.
Dec 13 08:33:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:33:54 compute-0 podman[322812]: 2025-12-13 08:33:54.17825588 +0000 UTC m=+0.026006567 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ce670cf208a55e7edf7d049fefbc3f51f1fb991fa118f368570ceaef6ceea3e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:33:54 compute-0 podman[322812]: 2025-12-13 08:33:54.2917981 +0000 UTC m=+0.139548757 container init e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:33:54 compute-0 podman[322812]: 2025-12-13 08:33:54.297938651 +0000 UTC m=+0.145689308 container start e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 08:33:54 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322844]: [NOTICE]   (322864) : New worker (322882) forked
Dec 13 08:33:54 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322844]: [NOTICE]   (322864) : Loading success.
Dec 13 08:33:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:33:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2021240571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.349 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:33:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644318088' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.625 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 3ced27d6-a2a8-4ce3-a7e7-494270418542 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.628 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614834.6253257, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.628 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Started (Lifecycle Event)
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.642 248514 DEBUG oslo_concurrency.processutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.647 248514 DEBUG nova.compute.provider_tree [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.678 248514 DEBUG nova.compute.manager [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.678 248514 DEBUG nova.objects.instance [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.793 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.796 248514 DEBUG nova.scheduler.client.report [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.805 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.806 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.808 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:33:54 compute-0 ceph-mon[76537]: pgmap v2099: 321 pgs: 321 active+clean; 228 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 402 KiB/s wr, 152 op/s
Dec 13 08:33:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2021240571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1644318088' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.814 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance running successfully.
Dec 13 08:33:54 compute-0 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.816 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.817 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.818 248514 DEBUG nova.virt.libvirt.guest [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.818 248514 DEBUG nova.compute.manager [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.977 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.978 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3701MB free_disk=59.888090652413666GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:33:54 compute-0 nova_compute[248510]: 2025-12-13 08:33:54.979 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.025 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (resuming). Skip.
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.025 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614834.632862, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.026 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.028 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.031 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.073 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.076 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.080 248514 INFO nova.scheduler.client.report [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Deleted allocations for instance ce9adb21-8832-4d3e-867e-b0b49bdb6850
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.161 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3ced27d6-a2a8-4ce3-a7e7-494270418542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.161 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 84abd1d4-6b7b-459e-9783-fdc15d7e8bde actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.162 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.162 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.189 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.268 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:55.414 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:55.415 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:55.416 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.433 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.442 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:55 compute-0 ovn_controller[148476]: 2025-12-13T08:33:55Z|00760|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 08:33:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 202 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 204 op/s
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.628 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:33:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2262848978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.884 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:55 compute-0 nova_compute[248510]: 2025-12-13 08:33:55.892 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:33:55 compute-0 ovn_controller[148476]: 2025-12-13T08:33:55Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 08:33:55 compute-0 podman[322930]: 2025-12-13 08:33:55.970464337 +0000 UTC m=+0.053943601 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 13 08:33:55 compute-0 podman[322929]: 2025-12-13 08:33:55.986091355 +0000 UTC m=+0.074815839 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:33:56 compute-0 podman[322928]: 2025-12-13 08:33:56.015320401 +0000 UTC m=+0.104656620 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:33:56 compute-0 nova_compute[248510]: 2025-12-13 08:33:56.086 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:33:56 compute-0 nova_compute[248510]: 2025-12-13 08:33:56.122 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:33:56 compute-0 nova_compute[248510]: 2025-12-13 08:33:56.122 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:33:56 compute-0 nova_compute[248510]: 2025-12-13 08:33:56.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:56 compute-0 ceph-mon[76537]: pgmap v2100: 321 pgs: 321 active+clean; 202 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 204 op/s
Dec 13 08:33:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2262848978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:33:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 202 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 771 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Dec 13 08:33:57 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Dec 13 08:33:57 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004c.scope: Consumed 12.701s CPU time.
Dec 13 08:33:57 compute-0 systemd-machined[210538]: Machine qemu-93-instance-0000004c terminated.
Dec 13 08:33:57 compute-0 nova_compute[248510]: 2025-12-13 08:33:57.779 248514 DEBUG nova.compute.manager [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:57 compute-0 nova_compute[248510]: 2025-12-13 08:33:57.780 248514 DEBUG oslo_concurrency.lockutils [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:57 compute-0 nova_compute[248510]: 2025-12-13 08:33:57.781 248514 DEBUG oslo_concurrency.lockutils [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:57 compute-0 nova_compute[248510]: 2025-12-13 08:33:57.781 248514 DEBUG oslo_concurrency.lockutils [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:57 compute-0 nova_compute[248510]: 2025-12-13 08:33:57.782 248514 DEBUG nova.compute.manager [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:57 compute-0 nova_compute[248510]: 2025-12-13 08:33:57.782 248514 WARNING nova.compute.manager [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 08:33:57 compute-0 nova_compute[248510]: 2025-12-13 08:33:57.783 248514 DEBUG nova.compute.manager [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:57 compute-0 nova_compute[248510]: 2025-12-13 08:33:57.783 248514 DEBUG oslo_concurrency.lockutils [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:57 compute-0 nova_compute[248510]: 2025-12-13 08:33:57.784 248514 DEBUG oslo_concurrency.lockutils [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:57 compute-0 nova_compute[248510]: 2025-12-13 08:33:57.784 248514 DEBUG oslo_concurrency.lockutils [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:57 compute-0 nova_compute[248510]: 2025-12-13 08:33:57.785 248514 DEBUG nova.compute.manager [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:57 compute-0 nova_compute[248510]: 2025-12-13 08:33:57.785 248514 WARNING nova.compute.manager [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.281 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.282 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.282 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.283 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.283 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.285 248514 INFO nova.compute.manager [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Terminating instance
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.286 248514 DEBUG nova.compute.manager [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:33:58 compute-0 kernel: tapb5058a06-71 (unregistering): left promiscuous mode
Dec 13 08:33:58 compute-0 NetworkManager[50376]: <info>  [1765614838.3251] device (tapb5058a06-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.332 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:58 compute-0 ovn_controller[148476]: 2025-12-13T08:33:58Z|00761|binding|INFO|Releasing lport b5058a06-7109-4ac0-96d8-7562e66bee25 from this chassis (sb_readonly=0)
Dec 13 08:33:58 compute-0 ovn_controller[148476]: 2025-12-13T08:33:58Z|00762|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 down in Southbound
Dec 13 08:33:58 compute-0 ovn_controller[148476]: 2025-12-13T08:33:58Z|00763|binding|INFO|Removing iface tapb5058a06-71 ovn-installed in OVS
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.334 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.344 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '14', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.345 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.347 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43ee8730-a174-4859-9c95-b3ad6dc68839, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.348 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0d806e59-e7ad-4799-b918-a521fb8b44eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.350 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace which is not needed anymore
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.354 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:58 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 13 08:33:58 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000003c.scope: Consumed 1.764s CPU time.
Dec 13 08:33:58 compute-0 systemd-machined[210538]: Machine qemu-94-instance-0000003c terminated.
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.451 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance shutdown successfully after 13 seconds.
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.458 248514 INFO nova.virt.libvirt.driver [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance destroyed successfully.
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.464 248514 INFO nova.virt.libvirt.driver [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance destroyed successfully.
Dec 13 08:33:58 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322844]: [NOTICE]   (322864) : haproxy version is 2.8.14-c23fe91
Dec 13 08:33:58 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322844]: [NOTICE]   (322864) : path to executable is /usr/sbin/haproxy
Dec 13 08:33:58 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322844]: [ALERT]    (322864) : Current worker (322882) exited with code 143 (Terminated)
Dec 13 08:33:58 compute-0 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322844]: [WARNING]  (322864) : All workers exited. Exiting... (0)
Dec 13 08:33:58 compute-0 systemd[1]: libpod-e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b.scope: Deactivated successfully.
Dec 13 08:33:58 compute-0 podman[323012]: 2025-12-13 08:33:58.500485977 +0000 UTC m=+0.058084874 container died e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:33:58 compute-0 NetworkManager[50376]: <info>  [1765614838.5058] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/324)
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.509 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.517 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.520 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.521 248514 DEBUG nova.objects.instance [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ce670cf208a55e7edf7d049fefbc3f51f1fb991fa118f368570ceaef6ceea3e-merged.mount: Deactivated successfully.
Dec 13 08:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b-userdata-shm.mount: Deactivated successfully.
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.538 248514 DEBUG nova.virt.libvirt.vif [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.539 248514 DEBUG nova.network.os_vif_util [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:33:58 compute-0 podman[323012]: 2025-12-13 08:33:58.541343312 +0000 UTC m=+0.098942209 container cleanup e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.541 248514 DEBUG nova.network.os_vif_util [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.542 248514 DEBUG os_vif [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.545 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5058a06-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.547 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:58 compute-0 systemd[1]: libpod-conmon-e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b.scope: Deactivated successfully.
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.555 248514 INFO os_vif [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')
Dec 13 08:33:58 compute-0 podman[323066]: 2025-12-13 08:33:58.621213295 +0000 UTC m=+0.052986707 container remove e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ca795ba8-a6bf-4cd7-98c1-10c227cbb9a0]: (4, ('Sat Dec 13 08:33:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b)\ne24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b\nSat Dec 13 08:33:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b)\ne24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.631 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[163274d0-2eb8-4fc3-919b-fdcf431a099f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.632 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.634 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:58 compute-0 kernel: tap43ee8730-a0: left promiscuous mode
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.653 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[96551753-cbcf-489f-a146-aaad7c2fdd26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.672 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc5fa5e-4b88-4bb1-8ef4-0d4e30f2a556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.673 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a77c5daf-7b8f-41d9-a15f-eb2afe60470b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.688 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[131f9f46-be06-4180-8725-d47f5be1c59a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741073, 'reachable_time': 15043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323100, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d43ee8730\x2da174\x2d4859\x2d9c95\x2db3ad6dc68839.mount: Deactivated successfully.
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.692 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:33:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.692 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4f7831-941b-4c2d-8674-5d93ad3b6de3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.769 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deleting instance files /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_del
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.770 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deletion of /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_del complete
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.805 248514 INFO nova.virt.libvirt.driver [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Deleting instance files /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542_del
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.806 248514 INFO nova.virt.libvirt.driver [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Deletion of /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542_del complete
Dec 13 08:33:58 compute-0 ceph-mon[76537]: pgmap v2101: 321 pgs: 321 active+clean; 202 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 771 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.903 248514 INFO nova.compute.manager [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Took 0.62 seconds to destroy the instance on the hypervisor.
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.905 248514 DEBUG oslo.service.loopingcall [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.905 248514 DEBUG nova.compute.manager [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:33:58 compute-0 nova_compute[248510]: 2025-12-13 08:33:58.905 248514 DEBUG nova.network.neutron [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.007 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.008 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Creating image(s)
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.033 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.061 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.088 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.092 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.135 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.136 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.136 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.177 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.179 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.180 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.180 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.199 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.202 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.531 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.606 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] resizing rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:33:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 153 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 2.1 MiB/s wr, 187 op/s
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.713 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.713 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Ensure instance console log exists: /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.714 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.714 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.714 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.716 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.722 248514 WARNING nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.735 248514 DEBUG nova.virt.libvirt.host [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.735 248514 DEBUG nova.virt.libvirt.host [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.739 248514 DEBUG nova.virt.libvirt.host [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.740 248514 DEBUG nova.virt.libvirt.host [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.740 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.740 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.741 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.741 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.742 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.742 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.742 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.742 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.743 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.743 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.743 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.743 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.744 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.775 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.933 248514 DEBUG nova.compute.manager [req-7b92f5c7-4f8d-4319-9a15-615e3abdbbcd req-9066bf92-3d63-42e7-b4dd-a52b677fbb7c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.934 248514 DEBUG oslo_concurrency.lockutils [req-7b92f5c7-4f8d-4319-9a15-615e3abdbbcd req-9066bf92-3d63-42e7-b4dd-a52b677fbb7c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.934 248514 DEBUG oslo_concurrency.lockutils [req-7b92f5c7-4f8d-4319-9a15-615e3abdbbcd req-9066bf92-3d63-42e7-b4dd-a52b677fbb7c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.935 248514 DEBUG oslo_concurrency.lockutils [req-7b92f5c7-4f8d-4319-9a15-615e3abdbbcd req-9066bf92-3d63-42e7-b4dd-a52b677fbb7c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.935 248514 DEBUG nova.compute.manager [req-7b92f5c7-4f8d-4319-9a15-615e3abdbbcd req-9066bf92-3d63-42e7-b4dd-a52b677fbb7c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:33:59 compute-0 nova_compute[248510]: 2025-12-13 08:33:59.935 248514 DEBUG nova.compute.manager [req-7b92f5c7-4f8d-4319-9a15-615e3abdbbcd req-9066bf92-3d63-42e7-b4dd-a52b677fbb7c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:34:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:34:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236151226' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:00 compute-0 nova_compute[248510]: 2025-12-13 08:34:00.360 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:00 compute-0 nova_compute[248510]: 2025-12-13 08:34:00.391 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:00 compute-0 nova_compute[248510]: 2025-12-13 08:34:00.395 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:00 compute-0 nova_compute[248510]: 2025-12-13 08:34:00.444 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:00 compute-0 ceph-mon[76537]: pgmap v2102: 321 pgs: 321 active+clean; 153 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 2.1 MiB/s wr, 187 op/s
Dec 13 08:34:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4236151226' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:34:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2445963606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:00 compute-0 nova_compute[248510]: 2025-12-13 08:34:00.963 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:00 compute-0 nova_compute[248510]: 2025-12-13 08:34:00.968 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:34:00 compute-0 nova_compute[248510]:   <uuid>84abd1d4-6b7b-459e-9783-fdc15d7e8bde</uuid>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   <name>instance-0000004c</name>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerShowV254Test-server-116887759</nova:name>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:33:59</nova:creationTime>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:34:00 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:34:00 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:34:00 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:34:00 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:34:00 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:34:00 compute-0 nova_compute[248510]:         <nova:user uuid="dfd2c0543e264c50b5b818f8b1bef249">tempest-ServerShowV254Test-1640329662-project-member</nova:user>
Dec 13 08:34:00 compute-0 nova_compute[248510]:         <nova:project uuid="94f9c66cba1c4ab683e5ee108b067558">tempest-ServerShowV254Test-1640329662</nova:project>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <system>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <entry name="serial">84abd1d4-6b7b-459e-9783-fdc15d7e8bde</entry>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <entry name="uuid">84abd1d4-6b7b-459e-9783-fdc15d7e8bde</entry>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     </system>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   <os>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   </os>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   <features>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   </features>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk">
Dec 13 08:34:00 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       </source>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:34:00 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config">
Dec 13 08:34:00 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       </source>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:34:00 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/console.log" append="off"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <video>
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     </video>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:34:00 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:34:00 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:34:00 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:34:00 compute-0 nova_compute[248510]: </domain>
Dec 13 08:34:00 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:34:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.326 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.327 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.328 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Using config drive
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.362 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.385 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.423 248514 DEBUG nova.network.neutron [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.450 248514 INFO nova.compute.manager [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Took 2.54 seconds to deallocate network for instance.
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.512 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.513 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.612 248514 DEBUG oslo_concurrency.processutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 96 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 2.4 MiB/s wr, 193 op/s
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.766 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Creating config drive at /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.774 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9robjiyq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.811 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:34:01 compute-0 nova_compute[248510]: 2025-12-13 08:34:01.919 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9robjiyq" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:01 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2445963606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.061 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.068 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.119 248514 DEBUG nova.compute.manager [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.119 248514 DEBUG oslo_concurrency.lockutils [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.120 248514 DEBUG oslo_concurrency.lockutils [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.120 248514 DEBUG oslo_concurrency.lockutils [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.120 248514 DEBUG nova.compute.manager [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.120 248514 WARNING nova.compute.manager [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state deleted and task_state None.
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.120 248514 DEBUG nova.compute.manager [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-deleted-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:34:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:34:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918785895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.216 248514 DEBUG oslo_concurrency.processutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.225 248514 DEBUG nova.compute.provider_tree [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.245 248514 DEBUG nova.scheduler.client.report [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.270 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.320 248514 INFO nova.scheduler.client.report [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Deleted allocations for instance 3ced27d6-a2a8-4ce3-a7e7-494270418542
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.410 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.463 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614827.4630418, c98e670c-9bea-41c0-87ad-fcaba6d2be2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.464 248514 INFO nova.compute.manager [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] VM Stopped (Lifecycle Event)
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.497 248514 DEBUG nova.compute.manager [None req-8b03e052-71b2-4f49-83fe-c6e80449fdbc - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.524 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:02 compute-0 nova_compute[248510]: 2025-12-13 08:34:02.524 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deleting local config drive /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config because it was imported into RBD.
Dec 13 08:34:02 compute-0 systemd-machined[210538]: New machine qemu-95-instance-0000004c.
Dec 13 08:34:02 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-0000004c.
Dec 13 08:34:03 compute-0 ceph-mon[76537]: pgmap v2103: 321 pgs: 321 active+clean; 96 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 2.4 MiB/s wr, 193 op/s
Dec 13 08:34:03 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2918785895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.548 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.594 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 84abd1d4-6b7b-459e-9783-fdc15d7e8bde due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.594 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614843.5941281, 84abd1d4-6b7b-459e-9783-fdc15d7e8bde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.595 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] VM Resumed (Lifecycle Event)
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.597 248514 DEBUG nova.compute.manager [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.598 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.602 248514 INFO nova.virt.libvirt.driver [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance spawned successfully.
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.602 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:34:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 69 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 587 KiB/s rd, 2.8 MiB/s wr, 218 op/s
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.625 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.630 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.631 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.631 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.632 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.632 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.632 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.637 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.667 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.668 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614843.5968723, 84abd1d4-6b7b-459e-9783-fdc15d7e8bde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.668 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] VM Started (Lifecycle Event)
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.717 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.722 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.731 248514 DEBUG nova.compute.manager [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.764 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.820 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.820 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.821 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:34:03 compute-0 nova_compute[248510]: 2025-12-13 08:34:03.915 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:04 compute-0 ceph-mon[76537]: pgmap v2104: 321 pgs: 321 active+clean; 69 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 587 KiB/s rd, 2.8 MiB/s wr, 218 op/s
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.099 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.247 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.365 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.366 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.366 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.366 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.367 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.368 248514 INFO nova.compute.manager [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Terminating instance
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.368 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "refresh_cache-84abd1d4-6b7b-459e-9783-fdc15d7e8bde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.369 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquired lock "refresh_cache-84abd1d4-6b7b-459e-9783-fdc15d7e8bde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.369 248514 DEBUG nova.network.neutron [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.445 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 852 KiB/s rd, 3.6 MiB/s wr, 239 op/s
Dec 13 08:34:05 compute-0 nova_compute[248510]: 2025-12-13 08:34:05.677 248514 DEBUG nova.network.neutron [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.082 248514 DEBUG nova.network.neutron [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.115 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Releasing lock "refresh_cache-84abd1d4-6b7b-459e-9783-fdc15d7e8bde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.115 248514 DEBUG nova.compute.manager [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:34:06 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Dec 13 08:34:06 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004c.scope: Consumed 3.420s CPU time.
Dec 13 08:34:06 compute-0 systemd-machined[210538]: Machine qemu-95-instance-0000004c terminated.
Dec 13 08:34:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.337 248514 INFO nova.virt.libvirt.driver [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance destroyed successfully.
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.338 248514 DEBUG nova.objects.instance [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'resources' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.687 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614831.6862476, ce9adb21-8832-4d3e-867e-b0b49bdb6850 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.689 248514 INFO nova.compute.manager [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] VM Stopped (Lifecycle Event)
Dec 13 08:34:06 compute-0 ceph-mon[76537]: pgmap v2105: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 852 KiB/s rd, 3.6 MiB/s wr, 239 op/s
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.714 248514 DEBUG nova.compute.manager [None req-0250c1e3-382d-49a3-8da1-aa750c5a424f - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.885 248514 INFO nova.virt.libvirt.driver [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deleting instance files /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_del
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.886 248514 INFO nova.virt.libvirt.driver [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deletion of /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_del complete
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.956 248514 INFO nova.compute.manager [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Took 0.84 seconds to destroy the instance on the hypervisor.
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.957 248514 DEBUG oslo.service.loopingcall [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.958 248514 DEBUG nova.compute.manager [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:34:06 compute-0 nova_compute[248510]: 2025-12-13 08:34:06.958 248514 DEBUG nova.network.neutron [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:34:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 545 KiB/s rd, 1.8 MiB/s wr, 173 op/s
Dec 13 08:34:07 compute-0 nova_compute[248510]: 2025-12-13 08:34:07.640 248514 DEBUG nova.network.neutron [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:34:07 compute-0 nova_compute[248510]: 2025-12-13 08:34:07.661 248514 DEBUG nova.network.neutron [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:34:07 compute-0 nova_compute[248510]: 2025-12-13 08:34:07.676 248514 INFO nova.compute.manager [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Took 0.72 seconds to deallocate network for instance.
Dec 13 08:34:07 compute-0 sudo[323491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:34:07 compute-0 nova_compute[248510]: 2025-12-13 08:34:07.734 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:07 compute-0 nova_compute[248510]: 2025-12-13 08:34:07.735 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:07 compute-0 sudo[323491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:34:07 compute-0 sudo[323491]: pam_unix(sudo:session): session closed for user root
Dec 13 08:34:07 compute-0 sudo[323516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:34:07 compute-0 sudo[323516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:34:07 compute-0 nova_compute[248510]: 2025-12-13 08:34:07.820 248514 DEBUG oslo_concurrency.processutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:34:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1408091150' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:08 compute-0 sudo[323516]: pam_unix(sudo:session): session closed for user root
Dec 13 08:34:08 compute-0 nova_compute[248510]: 2025-12-13 08:34:08.389 248514 DEBUG oslo_concurrency.processutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:34:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:34:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:34:08 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:34:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:34:08 compute-0 nova_compute[248510]: 2025-12-13 08:34:08.398 248514 DEBUG nova.compute.provider_tree [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:34:08 compute-0 nova_compute[248510]: 2025-12-13 08:34:08.433 248514 DEBUG nova.scheduler.client.report [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:34:08 compute-0 nova_compute[248510]: 2025-12-13 08:34:08.463 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:08 compute-0 nova_compute[248510]: 2025-12-13 08:34:08.496 248514 INFO nova.scheduler.client.report [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Deleted allocations for instance 84abd1d4-6b7b-459e-9783-fdc15d7e8bde
Dec 13 08:34:08 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:34:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:34:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:34:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:34:08 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:34:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:34:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:34:08 compute-0 nova_compute[248510]: 2025-12-13 08:34:08.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:08 compute-0 nova_compute[248510]: 2025-12-13 08:34:08.570 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:08 compute-0 sudo[323593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:34:08 compute-0 sudo[323593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:34:08 compute-0 sudo[323593]: pam_unix(sudo:session): session closed for user root
Dec 13 08:34:08 compute-0 sudo[323618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:34:08 compute-0 sudo[323618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:34:08 compute-0 ceph-mon[76537]: pgmap v2106: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 545 KiB/s rd, 1.8 MiB/s wr, 173 op/s
Dec 13 08:34:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1408091150' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:08 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:34:08 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:34:08 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:34:08 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:34:08 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:34:08 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:34:08 compute-0 podman[323654]: 2025-12-13 08:34:08.973798549 +0000 UTC m=+0.048172278 container create 393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Dec 13 08:34:09 compute-0 systemd[1]: Started libpod-conmon-393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0.scope.
Dec 13 08:34:09 compute-0 podman[323654]: 2025-12-13 08:34:08.953531785 +0000 UTC m=+0.027905544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:34:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:34:09 compute-0 podman[323654]: 2025-12-13 08:34:09.084763264 +0000 UTC m=+0.159137013 container init 393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:34:09 compute-0 podman[323654]: 2025-12-13 08:34:09.094123357 +0000 UTC m=+0.168497086 container start 393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:34:09 compute-0 podman[323654]: 2025-12-13 08:34:09.097377798 +0000 UTC m=+0.171751527 container attach 393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:34:09 compute-0 beautiful_morse[323670]: 167 167
Dec 13 08:34:09 compute-0 systemd[1]: libpod-393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0.scope: Deactivated successfully.
Dec 13 08:34:09 compute-0 conmon[323670]: conmon 393bfe5579ffe1c02a35 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0.scope/container/memory.events
Dec 13 08:34:09 compute-0 podman[323654]: 2025-12-13 08:34:09.103271274 +0000 UTC m=+0.177645043 container died 393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 08:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9da754c065afcd3a88d6a2aa4ff2ecb0d062243bd7ef488b8a4739c395629f07-merged.mount: Deactivated successfully.
Dec 13 08:34:09 compute-0 podman[323654]: 2025-12-13 08:34:09.150997909 +0000 UTC m=+0.225371638 container remove 393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 08:34:09 compute-0 systemd[1]: libpod-conmon-393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0.scope: Deactivated successfully.
Dec 13 08:34:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:34:09
Dec 13 08:34:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:34:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:34:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['vms', '.mgr', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control']
Dec 13 08:34:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:34:09 compute-0 podman[323693]: 2025-12-13 08:34:09.352436042 +0000 UTC m=+0.069083187 container create 9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:34:09 compute-0 systemd[1]: Started libpod-conmon-9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666.scope.
Dec 13 08:34:09 compute-0 podman[323693]: 2025-12-13 08:34:09.309725881 +0000 UTC m=+0.026373026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:34:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796d49250dc66b0c98a1fc82a3dc6473bf6450ce91334b20a3cf44458abe0d7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796d49250dc66b0c98a1fc82a3dc6473bf6450ce91334b20a3cf44458abe0d7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796d49250dc66b0c98a1fc82a3dc6473bf6450ce91334b20a3cf44458abe0d7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796d49250dc66b0c98a1fc82a3dc6473bf6450ce91334b20a3cf44458abe0d7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796d49250dc66b0c98a1fc82a3dc6473bf6450ce91334b20a3cf44458abe0d7b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:09 compute-0 podman[323693]: 2025-12-13 08:34:09.462730211 +0000 UTC m=+0.179377356 container init 9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 08:34:09 compute-0 podman[323693]: 2025-12-13 08:34:09.471783706 +0000 UTC m=+0.188430851 container start 9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:34:09 compute-0 podman[323693]: 2025-12-13 08:34:09.523831598 +0000 UTC m=+0.240478783 container attach 9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 08:34:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 49 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 257 op/s
Dec 13 08:34:09 compute-0 fervent_shamir[323709]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:34:09 compute-0 fervent_shamir[323709]: --> All data devices are unavailable
Dec 13 08:34:09 compute-0 systemd[1]: libpod-9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666.scope: Deactivated successfully.
Dec 13 08:34:09 compute-0 podman[323693]: 2025-12-13 08:34:09.984904028 +0000 UTC m=+0.701551233 container died 9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:34:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-796d49250dc66b0c98a1fc82a3dc6473bf6450ce91334b20a3cf44458abe0d7b-merged.mount: Deactivated successfully.
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:34:10 compute-0 podman[323693]: 2025-12-13 08:34:10.223638287 +0000 UTC m=+0.940285442 container remove 9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 08:34:10 compute-0 systemd[1]: libpod-conmon-9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666.scope: Deactivated successfully.
Dec 13 08:34:10 compute-0 sudo[323618]: pam_unix(sudo:session): session closed for user root
Dec 13 08:34:10 compute-0 sudo[323742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:34:10 compute-0 sudo[323742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:34:10 compute-0 sudo[323742]: pam_unix(sudo:session): session closed for user root
Dec 13 08:34:10 compute-0 sudo[323767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:34:10 compute-0 sudo[323767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:34:10 compute-0 nova_compute[248510]: 2025-12-13 08:34:10.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:34:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:34:10 compute-0 podman[323804]: 2025-12-13 08:34:10.69729279 +0000 UTC m=+0.028855368 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:34:10 compute-0 podman[323804]: 2025-12-13 08:34:10.906589918 +0000 UTC m=+0.238152466 container create a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:34:10 compute-0 ceph-mon[76537]: pgmap v2107: 321 pgs: 321 active+clean; 49 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 257 op/s
Dec 13 08:34:11 compute-0 systemd[1]: Started libpod-conmon-a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10.scope.
Dec 13 08:34:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:34:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:34:11 compute-0 podman[323804]: 2025-12-13 08:34:11.337482169 +0000 UTC m=+0.669044747 container init a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 08:34:11 compute-0 podman[323804]: 2025-12-13 08:34:11.346372869 +0000 UTC m=+0.677935437 container start a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:34:11 compute-0 bold_wright[323820]: 167 167
Dec 13 08:34:11 compute-0 systemd[1]: libpod-a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10.scope: Deactivated successfully.
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.371208) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614851371294, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 840, "num_deletes": 250, "total_data_size": 1138734, "memory_usage": 1167544, "flush_reason": "Manual Compaction"}
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614851420356, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 718803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40587, "largest_seqno": 41426, "table_properties": {"data_size": 715299, "index_size": 1284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9582, "raw_average_key_size": 20, "raw_value_size": 707776, "raw_average_value_size": 1538, "num_data_blocks": 57, "num_entries": 460, "num_filter_entries": 460, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614782, "oldest_key_time": 1765614782, "file_creation_time": 1765614851, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 49196 microseconds, and 3033 cpu microseconds.
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.420401) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 718803 bytes OK
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.420423) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.432659) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.432705) EVENT_LOG_v1 {"time_micros": 1765614851432697, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.432731) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 1134548, prev total WAL file size 1134548, number of live WAL files 2.
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.433385) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353039' seq:72057594037927935, type:22 .. '6D6772737461740031373630' seq:0, type:0; will stop at (end)
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(701KB)], [92(10079KB)]
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614851433437, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 11040595, "oldest_snapshot_seqno": -1}
Dec 13 08:34:11 compute-0 podman[323804]: 2025-12-13 08:34:11.433886922 +0000 UTC m=+0.765449480 container attach a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 08:34:11 compute-0 podman[323804]: 2025-12-13 08:34:11.434458397 +0000 UTC m=+0.766020965 container died a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6289 keys, 7976431 bytes, temperature: kUnknown
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614851517513, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 7976431, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7936493, "index_size": 23124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 163088, "raw_average_key_size": 25, "raw_value_size": 7825623, "raw_average_value_size": 1244, "num_data_blocks": 910, "num_entries": 6289, "num_filter_entries": 6289, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614851, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.517786) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 7976431 bytes
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.523222) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.2 rd, 94.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.8 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(26.5) write-amplify(11.1) OK, records in: 6771, records dropped: 482 output_compression: NoCompression
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.523246) EVENT_LOG_v1 {"time_micros": 1765614851523236, "job": 54, "event": "compaction_finished", "compaction_time_micros": 84169, "compaction_time_cpu_micros": 23661, "output_level": 6, "num_output_files": 1, "total_output_size": 7976431, "num_input_records": 6771, "num_output_records": 6289, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614851523475, "job": 54, "event": "table_file_deletion", "file_number": 94}
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614851526697, "job": 54, "event": "table_file_deletion", "file_number": 92}
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.433328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.526829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.526838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.526841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.526844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:34:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.526848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b5429439c6aa4411a2f5ff73cd2a081a27be70a895f74b871df63796a5c4be4-merged.mount: Deactivated successfully.
Dec 13 08:34:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Dec 13 08:34:11 compute-0 podman[323804]: 2025-12-13 08:34:11.796573 +0000 UTC m=+1.128135588 container remove a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:34:11 compute-0 systemd[1]: libpod-conmon-a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10.scope: Deactivated successfully.
Dec 13 08:34:12 compute-0 podman[323845]: 2025-12-13 08:34:12.013565708 +0000 UTC m=+0.051706455 container create d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_booth, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 08:34:12 compute-0 systemd[1]: Started libpod-conmon-d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919.scope.
Dec 13 08:34:12 compute-0 podman[323845]: 2025-12-13 08:34:11.990855444 +0000 UTC m=+0.028996221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:34:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac2a4082592707cc0916f5daedfccb96dafe2f0fed097f8039966fa9c9ef5ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac2a4082592707cc0916f5daedfccb96dafe2f0fed097f8039966fa9c9ef5ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac2a4082592707cc0916f5daedfccb96dafe2f0fed097f8039966fa9c9ef5ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac2a4082592707cc0916f5daedfccb96dafe2f0fed097f8039966fa9c9ef5ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:12 compute-0 podman[323845]: 2025-12-13 08:34:12.098780225 +0000 UTC m=+0.136920992 container init d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_booth, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 08:34:12 compute-0 podman[323845]: 2025-12-13 08:34:12.109237644 +0000 UTC m=+0.147378371 container start d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 08:34:12 compute-0 podman[323845]: 2025-12-13 08:34:12.112795533 +0000 UTC m=+0.150936260 container attach d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_booth, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 08:34:12 compute-0 ceph-mon[76537]: pgmap v2108: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Dec 13 08:34:12 compute-0 agitated_booth[323862]: {
Dec 13 08:34:12 compute-0 agitated_booth[323862]:     "0": [
Dec 13 08:34:12 compute-0 agitated_booth[323862]:         {
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "devices": [
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "/dev/loop3"
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             ],
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_name": "ceph_lv0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_size": "21470642176",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "name": "ceph_lv0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "tags": {
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.cluster_name": "ceph",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.crush_device_class": "",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.encrypted": "0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.objectstore": "bluestore",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.osd_id": "0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.type": "block",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.vdo": "0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.with_tpm": "0"
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             },
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "type": "block",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "vg_name": "ceph_vg0"
Dec 13 08:34:12 compute-0 agitated_booth[323862]:         }
Dec 13 08:34:12 compute-0 agitated_booth[323862]:     ],
Dec 13 08:34:12 compute-0 agitated_booth[323862]:     "1": [
Dec 13 08:34:12 compute-0 agitated_booth[323862]:         {
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "devices": [
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "/dev/loop4"
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             ],
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_name": "ceph_lv1",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_size": "21470642176",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "name": "ceph_lv1",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "tags": {
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.cluster_name": "ceph",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.crush_device_class": "",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.encrypted": "0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.objectstore": "bluestore",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.osd_id": "1",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.type": "block",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.vdo": "0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.with_tpm": "0"
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             },
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "type": "block",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "vg_name": "ceph_vg1"
Dec 13 08:34:12 compute-0 agitated_booth[323862]:         }
Dec 13 08:34:12 compute-0 agitated_booth[323862]:     ],
Dec 13 08:34:12 compute-0 agitated_booth[323862]:     "2": [
Dec 13 08:34:12 compute-0 agitated_booth[323862]:         {
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "devices": [
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "/dev/loop5"
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             ],
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_name": "ceph_lv2",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_size": "21470642176",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "name": "ceph_lv2",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "tags": {
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.cluster_name": "ceph",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.crush_device_class": "",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.encrypted": "0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.objectstore": "bluestore",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.osd_id": "2",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.type": "block",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.vdo": "0",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:                 "ceph.with_tpm": "0"
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             },
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "type": "block",
Dec 13 08:34:12 compute-0 agitated_booth[323862]:             "vg_name": "ceph_vg2"
Dec 13 08:34:12 compute-0 agitated_booth[323862]:         }
Dec 13 08:34:12 compute-0 agitated_booth[323862]:     ]
Dec 13 08:34:12 compute-0 agitated_booth[323862]: }
Dec 13 08:34:12 compute-0 systemd[1]: libpod-d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919.scope: Deactivated successfully.
Dec 13 08:34:12 compute-0 podman[323845]: 2025-12-13 08:34:12.484029111 +0000 UTC m=+0.522169838 container died d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:34:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bac2a4082592707cc0916f5daedfccb96dafe2f0fed097f8039966fa9c9ef5ce-merged.mount: Deactivated successfully.
Dec 13 08:34:12 compute-0 podman[323845]: 2025-12-13 08:34:12.528120396 +0000 UTC m=+0.566261123 container remove d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_booth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 08:34:12 compute-0 systemd[1]: libpod-conmon-d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919.scope: Deactivated successfully.
Dec 13 08:34:12 compute-0 sudo[323767]: pam_unix(sudo:session): session closed for user root
Dec 13 08:34:12 compute-0 sudo[323882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:34:12 compute-0 sudo[323882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:34:12 compute-0 sudo[323882]: pam_unix(sudo:session): session closed for user root
Dec 13 08:34:12 compute-0 sudo[323907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:34:12 compute-0 sudo[323907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:34:13 compute-0 podman[323945]: 2025-12-13 08:34:12.999578484 +0000 UTC m=+0.045778098 container create 0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:34:13 compute-0 systemd[1]: Started libpod-conmon-0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0.scope.
Dec 13 08:34:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:34:13 compute-0 podman[323945]: 2025-12-13 08:34:12.977618309 +0000 UTC m=+0.023817893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:34:13 compute-0 podman[323945]: 2025-12-13 08:34:13.082342499 +0000 UTC m=+0.128542073 container init 0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:34:13 compute-0 podman[323945]: 2025-12-13 08:34:13.090196965 +0000 UTC m=+0.136396529 container start 0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 08:34:13 compute-0 podman[323945]: 2025-12-13 08:34:13.093932877 +0000 UTC m=+0.140132541 container attach 0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 08:34:13 compute-0 loving_booth[323961]: 167 167
Dec 13 08:34:13 compute-0 systemd[1]: libpod-0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0.scope: Deactivated successfully.
Dec 13 08:34:13 compute-0 podman[323945]: 2025-12-13 08:34:13.099001913 +0000 UTC m=+0.145201477 container died 0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:34:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c3d44a51e73ab35b2fffd74f28ad81e713f69b640fa8800b4853c9676b108ec-merged.mount: Deactivated successfully.
Dec 13 08:34:13 compute-0 podman[323945]: 2025-12-13 08:34:13.147503138 +0000 UTC m=+0.193702702 container remove 0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 08:34:13 compute-0 systemd[1]: libpod-conmon-0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0.scope: Deactivated successfully.
Dec 13 08:34:13 compute-0 podman[323985]: 2025-12-13 08:34:13.349636697 +0000 UTC m=+0.054401512 container create 6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_keller, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 08:34:13 compute-0 systemd[1]: Started libpod-conmon-6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634.scope.
Dec 13 08:34:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f8401aa8143b40705c10116c80b2b74a90b6a6c0c13515f031d3bf8452a433/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f8401aa8143b40705c10116c80b2b74a90b6a6c0c13515f031d3bf8452a433/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f8401aa8143b40705c10116c80b2b74a90b6a6c0c13515f031d3bf8452a433/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f8401aa8143b40705c10116c80b2b74a90b6a6c0c13515f031d3bf8452a433/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:13 compute-0 podman[323985]: 2025-12-13 08:34:13.330845721 +0000 UTC m=+0.035610556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:34:13 compute-0 podman[323985]: 2025-12-13 08:34:13.430219959 +0000 UTC m=+0.134984804 container init 6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_keller, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:34:13 compute-0 podman[323985]: 2025-12-13 08:34:13.438011202 +0000 UTC m=+0.142776037 container start 6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:34:13 compute-0 podman[323985]: 2025-12-13 08:34:13.442307369 +0000 UTC m=+0.147072194 container attach 6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_keller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:34:13 compute-0 nova_compute[248510]: 2025-12-13 08:34:13.517 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614838.516485, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:13 compute-0 nova_compute[248510]: 2025-12-13 08:34:13.519 248514 INFO nova.compute.manager [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Stopped (Lifecycle Event)
Dec 13 08:34:13 compute-0 nova_compute[248510]: 2025-12-13 08:34:13.555 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:13 compute-0 nova_compute[248510]: 2025-12-13 08:34:13.558 248514 DEBUG nova.compute.manager [None req-e3387d21-6b2e-404e-a0ea-4e95b51f9504 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 192 op/s
Dec 13 08:34:14 compute-0 lvm[324080]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:34:14 compute-0 lvm[324079]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:34:14 compute-0 lvm[324080]: VG ceph_vg1 finished
Dec 13 08:34:14 compute-0 lvm[324079]: VG ceph_vg0 finished
Dec 13 08:34:14 compute-0 lvm[324082]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:34:14 compute-0 lvm[324082]: VG ceph_vg2 finished
Dec 13 08:34:14 compute-0 funny_keller[324001]: {}
Dec 13 08:34:14 compute-0 systemd[1]: libpod-6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634.scope: Deactivated successfully.
Dec 13 08:34:14 compute-0 systemd[1]: libpod-6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634.scope: Consumed 1.369s CPU time.
Dec 13 08:34:14 compute-0 podman[323985]: 2025-12-13 08:34:14.289442157 +0000 UTC m=+0.994207022 container died 6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:34:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-22f8401aa8143b40705c10116c80b2b74a90b6a6c0c13515f031d3bf8452a433-merged.mount: Deactivated successfully.
Dec 13 08:34:14 compute-0 podman[323985]: 2025-12-13 08:34:14.341982851 +0000 UTC m=+1.046747676 container remove 6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_keller, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 08:34:14 compute-0 systemd[1]: libpod-conmon-6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634.scope: Deactivated successfully.
Dec 13 08:34:14 compute-0 sudo[323907]: pam_unix(sudo:session): session closed for user root
Dec 13 08:34:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:34:14 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:34:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:34:14 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:34:14 compute-0 sudo[324096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:34:14 compute-0 sudo[324096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:34:14 compute-0 sudo[324096]: pam_unix(sudo:session): session closed for user root
Dec 13 08:34:14 compute-0 ceph-mon[76537]: pgmap v2109: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 192 op/s
Dec 13 08:34:14 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:34:14 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:34:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:34:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2818477851' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:34:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:34:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2818477851' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:34:15 compute-0 nova_compute[248510]: 2025-12-13 08:34:15.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 148 op/s
Dec 13 08:34:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2818477851' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:34:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2818477851' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:34:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:34:16 compute-0 ceph-mon[76537]: pgmap v2110: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 148 op/s
Dec 13 08:34:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 KiB/s wr, 98 op/s
Dec 13 08:34:18 compute-0 nova_compute[248510]: 2025-12-13 08:34:18.557 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:18 compute-0 ceph-mon[76537]: pgmap v2111: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 KiB/s wr, 98 op/s
Dec 13 08:34:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 KiB/s wr, 98 op/s
Dec 13 08:34:20 compute-0 nova_compute[248510]: 2025-12-13 08:34:20.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:20 compute-0 ceph-mon[76537]: pgmap v2112: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 KiB/s wr, 98 op/s
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.48350654070969e-06 of space, bias 1.0, pg target 0.002545051962212907 quantized to 32 (current 32)
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006672809573754347 of space, bias 1.0, pg target 0.2001842872126304 quantized to 32 (current 32)
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.983829715642068e-07 of space, bias 4.0, pg target 0.0007180595658770481 quantized to 16 (current 32)
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:34:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:34:21 compute-0 nova_compute[248510]: 2025-12-13 08:34:21.336 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614846.335488, 84abd1d4-6b7b-459e-9783-fdc15d7e8bde => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:21 compute-0 nova_compute[248510]: 2025-12-13 08:34:21.337 248514 INFO nova.compute.manager [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] VM Stopped (Lifecycle Event)
Dec 13 08:34:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:34:21 compute-0 nova_compute[248510]: 2025-12-13 08:34:21.367 248514 DEBUG nova.compute.manager [None req-ac388c50-0e24-42a7-b43d-bf4ddf87ddaa - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 341 B/s wr, 14 op/s
Dec 13 08:34:22 compute-0 ceph-mon[76537]: pgmap v2113: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 341 B/s wr, 14 op/s
Dec 13 08:34:23 compute-0 nova_compute[248510]: 2025-12-13 08:34:23.516 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:23 compute-0 nova_compute[248510]: 2025-12-13 08:34:23.516 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:23 compute-0 nova_compute[248510]: 2025-12-13 08:34:23.545 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:34:23 compute-0 nova_compute[248510]: 2025-12-13 08:34:23.559 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:23 compute-0 nova_compute[248510]: 2025-12-13 08:34:23.622 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:23 compute-0 nova_compute[248510]: 2025-12-13 08:34:23.622 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:34:23 compute-0 nova_compute[248510]: 2025-12-13 08:34:23.634 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:34:23 compute-0 nova_compute[248510]: 2025-12-13 08:34:23.635 248514 INFO nova.compute.claims [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:34:23 compute-0 nova_compute[248510]: 2025-12-13 08:34:23.765 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:34:24 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3244428344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.356 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.363 248514 DEBUG nova.compute.provider_tree [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.383 248514 DEBUG nova.scheduler.client.report [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.404 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.405 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.482 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.483 248514 DEBUG nova.network.neutron [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.515 248514 INFO nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.537 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.647 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.649 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.649 248514 INFO nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Creating image(s)
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.676 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.709 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.740 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.745 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.818 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.819 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.819 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.820 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:24 compute-0 ceph-mon[76537]: pgmap v2114: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:34:24 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3244428344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.850 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.854 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:24 compute-0 nova_compute[248510]: 2025-12-13 08:34:24.956 248514 DEBUG nova.policy [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1daf78b2d25748d183901a09e4605044', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '082f98d662a4450a9fd767ac13a76391', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:34:25 compute-0 nova_compute[248510]: 2025-12-13 08:34:25.452 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:34:25 compute-0 nova_compute[248510]: 2025-12-13 08:34:25.666 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.812s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:25 compute-0 nova_compute[248510]: 2025-12-13 08:34:25.721 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] resizing rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:34:26 compute-0 nova_compute[248510]: 2025-12-13 08:34:26.041 248514 DEBUG nova.objects.instance [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lazy-loading 'migration_context' on Instance uuid 68f568e2-917b-4b70-8345-8d04c0f5d1a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:34:26 compute-0 nova_compute[248510]: 2025-12-13 08:34:26.061 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:34:26 compute-0 nova_compute[248510]: 2025-12-13 08:34:26.062 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Ensure instance console log exists: /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:34:26 compute-0 nova_compute[248510]: 2025-12-13 08:34:26.063 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:26 compute-0 nova_compute[248510]: 2025-12-13 08:34:26.063 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:26 compute-0 nova_compute[248510]: 2025-12-13 08:34:26.064 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:34:26 compute-0 nova_compute[248510]: 2025-12-13 08:34:26.474 248514 DEBUG nova.network.neutron [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Successfully created port: 9c81d394-98c0-444d-a1f2-9b909c7e2559 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:34:26 compute-0 ceph-mon[76537]: pgmap v2115: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:34:26 compute-0 podman[324314]: 2025-12-13 08:34:26.990103432 +0000 UTC m=+0.074667696 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:34:26 compute-0 podman[324313]: 2025-12-13 08:34:26.994683875 +0000 UTC m=+0.078165132 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:34:27 compute-0 podman[324312]: 2025-12-13 08:34:27.060380297 +0000 UTC m=+0.141776932 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 08:34:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:34:28 compute-0 nova_compute[248510]: 2025-12-13 08:34:28.443 248514 DEBUG nova.network.neutron [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Successfully updated port: 9c81d394-98c0-444d-a1f2-9b909c7e2559 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:34:28 compute-0 nova_compute[248510]: 2025-12-13 08:34:28.469 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:34:28 compute-0 nova_compute[248510]: 2025-12-13 08:34:28.469 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquired lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:34:28 compute-0 nova_compute[248510]: 2025-12-13 08:34:28.470 248514 DEBUG nova.network.neutron [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:34:28 compute-0 nova_compute[248510]: 2025-12-13 08:34:28.561 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:28 compute-0 nova_compute[248510]: 2025-12-13 08:34:28.670 248514 DEBUG nova.network.neutron [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:34:29 compute-0 ceph-mon[76537]: pgmap v2116: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:34:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 72 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.066 248514 DEBUG nova.compute.manager [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-changed-9c81d394-98c0-444d-a1f2-9b909c7e2559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.067 248514 DEBUG nova.compute.manager [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Refreshing instance network info cache due to event network-changed-9c81d394-98c0-444d-a1f2-9b909c7e2559. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.067 248514 DEBUG oslo_concurrency.lockutils [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:34:30 compute-0 ceph-mon[76537]: pgmap v2117: 321 pgs: 321 active+clean; 72 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.454 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.652 248514 DEBUG nova.network.neutron [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Updating instance_info_cache with network_info: [{"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.679 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Releasing lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.679 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Instance network_info: |[{"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.680 248514 DEBUG oslo_concurrency.lockutils [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.680 248514 DEBUG nova.network.neutron [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Refreshing network info cache for port 9c81d394-98c0-444d-a1f2-9b909c7e2559 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.683 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Start _get_guest_xml network_info=[{"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.687 248514 WARNING nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.694 248514 DEBUG nova.virt.libvirt.host [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.695 248514 DEBUG nova.virt.libvirt.host [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.702 248514 DEBUG nova.virt.libvirt.host [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.703 248514 DEBUG nova.virt.libvirt.host [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.703 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.704 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.704 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.704 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.704 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.705 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.705 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.705 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.705 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.706 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.706 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.706 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:34:30 compute-0 nova_compute[248510]: 2025-12-13 08:34:30.709 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:34:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111517588' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:31 compute-0 nova_compute[248510]: 2025-12-13 08:34:31.281 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:31 compute-0 nova_compute[248510]: 2025-12-13 08:34:31.307 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:31 compute-0 nova_compute[248510]: 2025-12-13 08:34:31.311 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:34:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2111517588' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:34:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:34:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2484614483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.029 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.718s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.032 248514 DEBUG nova.virt.libvirt.vif [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:34:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-458198005',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-458198005',id=77,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='082f98d662a4450a9fd767ac13a76391',ramdisk_id='',reservation_id='r-rt0w0l4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-2087650193',owner_user_name='tempest-AttachInterfacesV270Test-2087650193-project-member'},tags=Ta
gList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:34:24Z,user_data=None,user_id='1daf78b2d25748d183901a09e4605044',uuid=68f568e2-917b-4b70-8345-8d04c0f5d1a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.032 248514 DEBUG nova.network.os_vif_util [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converting VIF {"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.033 248514 DEBUG nova.network.os_vif_util [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:28:0b,bridge_name='br-int',has_traffic_filtering=True,id=9c81d394-98c0-444d-a1f2-9b909c7e2559,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c81d394-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.035 248514 DEBUG nova.objects.instance [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lazy-loading 'pci_devices' on Instance uuid 68f568e2-917b-4b70-8345-8d04c0f5d1a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.059 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:34:32 compute-0 nova_compute[248510]:   <uuid>68f568e2-917b-4b70-8345-8d04c0f5d1a6</uuid>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   <name>instance-0000004d</name>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <nova:name>tempest-AttachInterfacesV270Test-server-458198005</nova:name>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:34:30</nova:creationTime>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:34:32 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:34:32 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:34:32 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:34:32 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:34:32 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:34:32 compute-0 nova_compute[248510]:         <nova:user uuid="1daf78b2d25748d183901a09e4605044">tempest-AttachInterfacesV270Test-2087650193-project-member</nova:user>
Dec 13 08:34:32 compute-0 nova_compute[248510]:         <nova:project uuid="082f98d662a4450a9fd767ac13a76391">tempest-AttachInterfacesV270Test-2087650193</nova:project>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:34:32 compute-0 nova_compute[248510]:         <nova:port uuid="9c81d394-98c0-444d-a1f2-9b909c7e2559">
Dec 13 08:34:32 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <system>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <entry name="serial">68f568e2-917b-4b70-8345-8d04c0f5d1a6</entry>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <entry name="uuid">68f568e2-917b-4b70-8345-8d04c0f5d1a6</entry>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     </system>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   <os>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   </os>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   <features>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   </features>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk">
Dec 13 08:34:32 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       </source>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:34:32 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk.config">
Dec 13 08:34:32 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       </source>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:34:32 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:07:28:0b"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <target dev="tap9c81d394-98"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/console.log" append="off"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <video>
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     </video>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:34:32 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:34:32 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:34:32 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:34:32 compute-0 nova_compute[248510]: </domain>
Dec 13 08:34:32 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.061 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Preparing to wait for external event network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.062 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.063 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.063 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.064 248514 DEBUG nova.virt.libvirt.vif [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:34:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-458198005',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-458198005',id=77,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='082f98d662a4450a9fd767ac13a76391',ramdisk_id='',reservation_id='r-rt0w0l4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-2087650193',owner_user_name='tempest-AttachInterfacesV270Test-2087650193-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:34:24Z,user_data=None,user_id='1daf78b2d25748d183901a09e4605044',uuid=68f568e2-917b-4b70-8345-8d04c0f5d1a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.065 248514 DEBUG nova.network.os_vif_util [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converting VIF {"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.066 248514 DEBUG nova.network.os_vif_util [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:28:0b,bridge_name='br-int',has_traffic_filtering=True,id=9c81d394-98c0-444d-a1f2-9b909c7e2559,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c81d394-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.067 248514 DEBUG os_vif [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:28:0b,bridge_name='br-int',has_traffic_filtering=True,id=9c81d394-98c0-444d-a1f2-9b909c7e2559,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c81d394-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.069 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.070 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.075 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.076 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c81d394-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.076 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c81d394-98, col_values=(('external_ids', {'iface-id': '9c81d394-98c0-444d-a1f2-9b909c7e2559', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:28:0b', 'vm-uuid': '68f568e2-917b-4b70-8345-8d04c0f5d1a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:32 compute-0 NetworkManager[50376]: <info>  [1765614872.0810] manager: (tap9c81d394-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.087 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.089 248514 INFO os_vif [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:28:0b,bridge_name='br-int',has_traffic_filtering=True,id=9c81d394-98c0-444d-a1f2-9b909c7e2559,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c81d394-98')
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.336 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.337 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.337 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] No VIF found with MAC fa:16:3e:07:28:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.339 248514 INFO nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Using config drive
Dec 13 08:34:32 compute-0 nova_compute[248510]: 2025-12-13 08:34:32.372 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:32 compute-0 ceph-mon[76537]: pgmap v2118: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:34:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2484614483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:34:33 compute-0 nova_compute[248510]: 2025-12-13 08:34:33.904 248514 DEBUG nova.network.neutron [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Updated VIF entry in instance network info cache for port 9c81d394-98c0-444d-a1f2-9b909c7e2559. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:34:33 compute-0 nova_compute[248510]: 2025-12-13 08:34:33.904 248514 DEBUG nova.network.neutron [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Updating instance_info_cache with network_info: [{"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:34:33 compute-0 nova_compute[248510]: 2025-12-13 08:34:33.940 248514 DEBUG oslo_concurrency.lockutils [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.017 248514 INFO nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Creating config drive at /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/disk.config
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.023 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvh0o5s5v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.174 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvh0o5s5v" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.199 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.203 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/disk.config 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.364 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "287069fd-8250-4b7f-92c5-74450b2e92ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.365 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.398 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/disk.config 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.398 248514 INFO nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Deleting local config drive /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/disk.config because it was imported into RBD.
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.405 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:34:34 compute-0 kernel: tap9c81d394-98: entered promiscuous mode
Dec 13 08:34:34 compute-0 NetworkManager[50376]: <info>  [1765614874.4562] manager: (tap9c81d394-98): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.457 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:34 compute-0 ovn_controller[148476]: 2025-12-13T08:34:34Z|00764|binding|INFO|Claiming lport 9c81d394-98c0-444d-a1f2-9b909c7e2559 for this chassis.
Dec 13 08:34:34 compute-0 ovn_controller[148476]: 2025-12-13T08:34:34Z|00765|binding|INFO|9c81d394-98c0-444d-a1f2-9b909c7e2559: Claiming fa:16:3e:07:28:0b 10.100.0.13
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.478 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:28:0b 10.100.0.13'], port_security=['fa:16:3e:07:28:0b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '68f568e2-917b-4b70-8345-8d04c0f5d1a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfb72597-ac7e-412b-834c-574618fe1a4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '082f98d662a4450a9fd767ac13a76391', 'neutron:revision_number': '2', 'neutron:security_group_ids': '79fb568a-41f5-487e-bdd9-c21c5553decf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=778b417f-43c9-4614-a0b9-308a3289fddb, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=9c81d394-98c0-444d-a1f2-9b909c7e2559) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.480 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 9c81d394-98c0-444d-a1f2-9b909c7e2559 in datapath dfb72597-ac7e-412b-834c-574618fe1a4c bound to our chassis
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.481 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dfb72597-ac7e-412b-834c-574618fe1a4c
Dec 13 08:34:34 compute-0 systemd-udevd[324507]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.496 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[756c0737-679c-4f55-9709-a23363da1aa2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.497 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdfb72597-a1 in ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:34:34 compute-0 systemd-machined[210538]: New machine qemu-96-instance-0000004d.
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.500 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdfb72597-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.500 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f99d37-5b4b-40ab-90a5-5d9e03c809a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.501 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[06dd563f-6e80-4f28-a6f1-01361cbb1144]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.508 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:34 compute-0 NetworkManager[50376]: <info>  [1765614874.5101] device (tap9c81d394-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:34:34 compute-0 NetworkManager[50376]: <info>  [1765614874.5107] device (tap9c81d394-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.509 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.511 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[502ba1ea-f0e0-411b-ab84-86f65d3f4222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.519 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.520 248514 INFO nova.compute.claims [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:34:34 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-0000004d.
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:34 compute-0 ovn_controller[148476]: 2025-12-13T08:34:34Z|00766|binding|INFO|Setting lport 9c81d394-98c0-444d-a1f2-9b909c7e2559 ovn-installed in OVS
Dec 13 08:34:34 compute-0 ovn_controller[148476]: 2025-12-13T08:34:34Z|00767|binding|INFO|Setting lport 9c81d394-98c0-444d-a1f2-9b909c7e2559 up in Southbound
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.543 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9640a060-b765-4d50-bbb6-f54513930479]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.581 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4492c4ca-0bd6-472b-8794-da119ff63c9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.587 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[913d7d0e-1b8b-4169-ac2b-9dd2fe8f0149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 NetworkManager[50376]: <info>  [1765614874.5882] manager: (tapdfb72597-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/327)
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.621 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[102a8b1b-016c-4868-8b5a-edd7e3676e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.624 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[305ca2c8-37d6-490b-b44f-8f0a4ff3df4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 NetworkManager[50376]: <info>  [1765614874.6476] device (tapdfb72597-a0): carrier: link connected
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.653 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a681b59e-3e11-4f84-be3d-db514f24a34e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f71afc4-e413-4e3b-83f1-c9b90921b210]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfb72597-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:fa:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745186, 'reachable_time': 24813, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324541, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.694 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.696 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc44c915-1554-4cd4-9d98-3630fef3ec28]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:fa74'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 745186, 'tstamp': 745186}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324542, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.720 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f116ccd-c153-472e-b4fb-ce152de8e60c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfb72597-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:fa:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745186, 'reachable_time': 24813, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324543, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 ceph-mon[76537]: pgmap v2119: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.763 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f21c0570-54e7-4c03-a38a-39dbd2c5be33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.820 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b859b036-f552-4da6-8597-104502c338cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.821 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfb72597-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.822 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.822 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfb72597-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:34 compute-0 kernel: tapdfb72597-a0: entered promiscuous mode
Dec 13 08:34:34 compute-0 NetworkManager[50376]: <info>  [1765614874.8249] manager: (tapdfb72597-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.827 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdfb72597-a0, col_values=(('external_ids', {'iface-id': '5ddcc36f-0a9b-4b88-89cc-ebf23344a1de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:34 compute-0 ovn_controller[148476]: 2025-12-13T08:34:34Z|00768|binding|INFO|Releasing lport 5ddcc36f-0a9b-4b88-89cc-ebf23344a1de from this chassis (sb_readonly=0)
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.830 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dfb72597-ac7e-412b-834c-574618fe1a4c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dfb72597-ac7e-412b-834c-574618fe1a4c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.831 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6adb75e6-d935-4129-830a-76a3877bca82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.832 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-dfb72597-ac7e-412b-834c-574618fe1a4c
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/dfb72597-ac7e-412b-834c-574618fe1a4c.pid.haproxy
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID dfb72597-ac7e-412b-834c-574618fe1a4c
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:34:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.834 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'env', 'PROCESS_TAG=haproxy-dfb72597-ac7e-412b-834c-574618fe1a4c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dfb72597-ac7e-412b-834c-574618fe1a4c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:34:34 compute-0 nova_compute[248510]: 2025-12-13 08:34:34.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.144 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614875.1437128, 68f568e2-917b-4b70-8345-8d04c0f5d1a6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.146 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] VM Started (Lifecycle Event)
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.178 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.187 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614875.1447654, 68f568e2-917b-4b70-8345-8d04c0f5d1a6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.188 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] VM Paused (Lifecycle Event)
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.213 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.218 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.238 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:34:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:34:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1100857933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:35 compute-0 podman[324635]: 2025-12-13 08:34:35.234532401 +0000 UTC m=+0.029065233 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.331 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.340 248514 DEBUG nova.compute.provider_tree [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.361 248514 DEBUG nova.scheduler.client.report [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.395 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.396 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:34:35 compute-0 podman[324635]: 2025-12-13 08:34:35.412608764 +0000 UTC m=+0.207141576 container create 5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.456 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.464 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.465 248514 DEBUG nova.network.neutron [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:34:35 compute-0 systemd[1]: Started libpod-conmon-5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e.scope.
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.492 248514 INFO nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:34:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f3be40f551ba5f18c4992eac5c86f2009e1b17fb3be624151e23ab2639672f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:35 compute-0 podman[324635]: 2025-12-13 08:34:35.516177276 +0000 UTC m=+0.310710108 container init 5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 08:34:35 compute-0 podman[324635]: 2025-12-13 08:34:35.522708358 +0000 UTC m=+0.317241170 container start 5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.523 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:34:35 compute-0 neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c[324652]: [NOTICE]   (324656) : New worker (324658) forked
Dec 13 08:34:35 compute-0 neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c[324652]: [NOTICE]   (324656) : Loading success.
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.626 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.627 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.628 248514 INFO nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Creating image(s)
Dec 13 08:34:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.652 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.679 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.703 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.707 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1100857933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.803 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.805 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.805 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.806 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.831 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.837 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 287069fd-8250-4b7f-92c5-74450b2e92ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:35 compute-0 nova_compute[248510]: 2025-12-13 08:34:35.873 248514 DEBUG nova.policy [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c023c9703ede4646a33bf3a4c18781e1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fec71de612e44bc68abc52770bf74e0c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:34:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:34:36 compute-0 nova_compute[248510]: 2025-12-13 08:34:36.699 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 287069fd-8250-4b7f-92c5-74450b2e92ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.863s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:36 compute-0 nova_compute[248510]: 2025-12-13 08:34:36.781 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] resizing rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:34:36 compute-0 ceph-mon[76537]: pgmap v2120: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:34:36 compute-0 nova_compute[248510]: 2025-12-13 08:34:36.960 248514 DEBUG nova.objects.instance [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lazy-loading 'migration_context' on Instance uuid 287069fd-8250-4b7f-92c5-74450b2e92ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:34:36 compute-0 nova_compute[248510]: 2025-12-13 08:34:36.979 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:34:36 compute-0 nova_compute[248510]: 2025-12-13 08:34:36.980 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Ensure instance console log exists: /var/lib/nova/instances/287069fd-8250-4b7f-92c5-74450b2e92ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:34:36 compute-0 nova_compute[248510]: 2025-12-13 08:34:36.981 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:36 compute-0 nova_compute[248510]: 2025-12-13 08:34:36.981 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:36 compute-0 nova_compute[248510]: 2025-12-13 08:34:36.982 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:37 compute-0 nova_compute[248510]: 2025-12-13 08:34:37.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:37 compute-0 nova_compute[248510]: 2025-12-13 08:34:37.427 248514 DEBUG nova.network.neutron [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Successfully created port: 65b19603-026e-4844-b71e-20239b4c9b1b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:34:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:34:39 compute-0 ceph-mon[76537]: pgmap v2121: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:34:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 126 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 39 op/s
Dec 13 08:34:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:34:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:34:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:34:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:34:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:34:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:34:40 compute-0 ceph-mon[76537]: pgmap v2122: 321 pgs: 321 active+clean; 126 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 39 op/s
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.277 248514 DEBUG nova.network.neutron [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Successfully updated port: 65b19603-026e-4844-b71e-20239b4c9b1b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.300 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "refresh_cache-287069fd-8250-4b7f-92c5-74450b2e92ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.301 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquired lock "refresh_cache-287069fd-8250-4b7f-92c5-74450b2e92ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.301 248514 DEBUG nova.network.neutron [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.458 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.618 248514 DEBUG nova.network.neutron [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:34:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:40.679 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:40.682 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.862 248514 DEBUG nova.compute.manager [req-e6d69b1f-756e-4e76-956c-b09548732b3b req-a8bd2817-1f78-4c6d-beb4-3801338c6ed6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.862 248514 DEBUG oslo_concurrency.lockutils [req-e6d69b1f-756e-4e76-956c-b09548732b3b req-a8bd2817-1f78-4c6d-beb4-3801338c6ed6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.863 248514 DEBUG oslo_concurrency.lockutils [req-e6d69b1f-756e-4e76-956c-b09548732b3b req-a8bd2817-1f78-4c6d-beb4-3801338c6ed6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.863 248514 DEBUG oslo_concurrency.lockutils [req-e6d69b1f-756e-4e76-956c-b09548732b3b req-a8bd2817-1f78-4c6d-beb4-3801338c6ed6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.863 248514 DEBUG nova.compute.manager [req-e6d69b1f-756e-4e76-956c-b09548732b3b req-a8bd2817-1f78-4c6d-beb4-3801338c6ed6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Processing event network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.864 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.867 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614880.867357, 68f568e2-917b-4b70-8345-8d04c0f5d1a6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.867 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] VM Resumed (Lifecycle Event)
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.870 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.875 248514 INFO nova.virt.libvirt.driver [-] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Instance spawned successfully.
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.875 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.915 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.925 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.926 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.926 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.927 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.927 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.928 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.933 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.985 248514 DEBUG nova.compute.manager [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Received event network-changed-65b19603-026e-4844-b71e-20239b4c9b1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.986 248514 DEBUG nova.compute.manager [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Refreshing instance network info cache due to event network-changed-65b19603-026e-4844-b71e-20239b4c9b1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.986 248514 DEBUG oslo_concurrency.lockutils [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-287069fd-8250-4b7f-92c5-74450b2e92ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:34:40 compute-0 nova_compute[248510]: 2025-12-13 08:34:40.989 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:34:41 compute-0 nova_compute[248510]: 2025-12-13 08:34:41.049 248514 INFO nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Took 16.40 seconds to spawn the instance on the hypervisor.
Dec 13 08:34:41 compute-0 nova_compute[248510]: 2025-12-13 08:34:41.050 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:41 compute-0 nova_compute[248510]: 2025-12-13 08:34:41.160 248514 INFO nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Took 17.57 seconds to build instance.
Dec 13 08:34:41 compute-0 nova_compute[248510]: 2025-12-13 08:34:41.188 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:34:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 134 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Dec 13 08:34:41 compute-0 nova_compute[248510]: 2025-12-13 08:34:41.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.084 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.768 248514 DEBUG nova.network.neutron [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Updating instance_info_cache with network_info: [{"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.781 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Acquiring lock "a393861c-2eea-4309-8914-042ec8bcc873" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.781 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.801 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Releasing lock "refresh_cache-287069fd-8250-4b7f-92c5-74450b2e92ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.801 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Instance network_info: |[{"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.801 248514 DEBUG nova.compute.manager [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.803 248514 DEBUG oslo_concurrency.lockutils [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-287069fd-8250-4b7f-92c5-74450b2e92ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.804 248514 DEBUG nova.network.neutron [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Refreshing network info cache for port 65b19603-026e-4844-b71e-20239b4c9b1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.806 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Start _get_guest_xml network_info=[{"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.811 248514 WARNING nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.817 248514 DEBUG nova.virt.libvirt.host [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.818 248514 DEBUG nova.virt.libvirt.host [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.822 248514 DEBUG nova.virt.libvirt.host [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.822 248514 DEBUG nova.virt.libvirt.host [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.823 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.823 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.823 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.823 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.824 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.824 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.824 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.824 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.824 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.825 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.825 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.825 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.828 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:42 compute-0 ceph-mon[76537]: pgmap v2123: 321 pgs: 321 active+clean; 134 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.919 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.920 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.930 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:34:42 compute-0 nova_compute[248510]: 2025-12-13 08:34:42.930 248514 INFO nova.compute.claims [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:34:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:34:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071280628' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.445 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.474 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.479 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.519 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.554 248514 DEBUG nova.compute.manager [req-b71e2da7-3b6a-40de-a2b8-4ed8fe1dad38 req-e38ff72b-df15-4c2d-870b-7d09735d18af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.555 248514 DEBUG oslo_concurrency.lockutils [req-b71e2da7-3b6a-40de-a2b8-4ed8fe1dad38 req-e38ff72b-df15-4c2d-870b-7d09735d18af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.555 248514 DEBUG oslo_concurrency.lockutils [req-b71e2da7-3b6a-40de-a2b8-4ed8fe1dad38 req-e38ff72b-df15-4c2d-870b-7d09735d18af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.556 248514 DEBUG oslo_concurrency.lockutils [req-b71e2da7-3b6a-40de-a2b8-4ed8fe1dad38 req-e38ff72b-df15-4c2d-870b-7d09735d18af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.556 248514 DEBUG nova.compute.manager [req-b71e2da7-3b6a-40de-a2b8-4ed8fe1dad38 req-e38ff72b-df15-4c2d-870b-7d09735d18af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] No waiting events found dispatching network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.556 248514 WARNING nova.compute.manager [req-b71e2da7-3b6a-40de-a2b8-4ed8fe1dad38 req-e38ff72b-df15-4c2d-870b-7d09735d18af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received unexpected event network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 for instance with vm_state active and task_state None.
Dec 13 08:34:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 134 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.749 248514 DEBUG oslo_concurrency.lockutils [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "interface-68f568e2-917b-4b70-8345-8d04c0f5d1a6-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.750 248514 DEBUG oslo_concurrency.lockutils [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "interface-68f568e2-917b-4b70-8345-8d04c0f5d1a6-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.750 248514 DEBUG nova.objects.instance [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lazy-loading 'flavor' on Instance uuid 68f568e2-917b-4b70-8345-8d04c0f5d1a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.783 248514 DEBUG nova.objects.instance [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lazy-loading 'pci_requests' on Instance uuid 68f568e2-917b-4b70-8345-8d04c0f5d1a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:34:43 compute-0 nova_compute[248510]: 2025-12-13 08:34:43.802 248514 DEBUG nova.network.neutron [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:34:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4071280628' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:34:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1068031923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:34:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088948199' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.122 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.128 248514 DEBUG nova.compute.provider_tree [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.131 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.132 248514 DEBUG nova.virt.libvirt.vif [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:34:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1887941666',display_name='tempest-ServerAddressesTestJSON-server-1887941666',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1887941666',id=78,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fec71de612e44bc68abc52770bf74e0c',ramdisk_id='',reservation_id='r-a75xsshy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1681799594',owner_user_name='tempest-ServerAddressesTestJSON-1681799594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:34:35Z,user_data=None,user_id='c023c9703ede4646a33bf3a4c18781e1',uuid=287069fd-8250-4b7f-92c5-74450b2e92ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.133 248514 DEBUG nova.network.os_vif_util [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Converting VIF {"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.134 248514 DEBUG nova.network.os_vif_util [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:23:89,bridge_name='br-int',has_traffic_filtering=True,id=65b19603-026e-4844-b71e-20239b4c9b1b,network=Network(ac80588a-2bf0-4858-a3dc-62f2f67c7bfd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65b19603-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.136 248514 DEBUG nova.objects.instance [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lazy-loading 'pci_devices' on Instance uuid 287069fd-8250-4b7f-92c5-74450b2e92ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.154 248514 DEBUG nova.scheduler.client.report [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.160 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:34:44 compute-0 nova_compute[248510]:   <uuid>287069fd-8250-4b7f-92c5-74450b2e92ca</uuid>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   <name>instance-0000004e</name>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerAddressesTestJSON-server-1887941666</nova:name>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:34:42</nova:creationTime>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:34:44 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:34:44 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:34:44 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:34:44 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:34:44 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:34:44 compute-0 nova_compute[248510]:         <nova:user uuid="c023c9703ede4646a33bf3a4c18781e1">tempest-ServerAddressesTestJSON-1681799594-project-member</nova:user>
Dec 13 08:34:44 compute-0 nova_compute[248510]:         <nova:project uuid="fec71de612e44bc68abc52770bf74e0c">tempest-ServerAddressesTestJSON-1681799594</nova:project>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:34:44 compute-0 nova_compute[248510]:         <nova:port uuid="65b19603-026e-4844-b71e-20239b4c9b1b">
Dec 13 08:34:44 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <system>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <entry name="serial">287069fd-8250-4b7f-92c5-74450b2e92ca</entry>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <entry name="uuid">287069fd-8250-4b7f-92c5-74450b2e92ca</entry>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     </system>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   <os>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   </os>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   <features>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   </features>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/287069fd-8250-4b7f-92c5-74450b2e92ca_disk">
Dec 13 08:34:44 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       </source>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:34:44 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/287069fd-8250-4b7f-92c5-74450b2e92ca_disk.config">
Dec 13 08:34:44 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       </source>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:34:44 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:12:23:89"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <target dev="tap65b19603-02"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/287069fd-8250-4b7f-92c5-74450b2e92ca/console.log" append="off"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <video>
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     </video>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:34:44 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:34:44 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:34:44 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:34:44 compute-0 nova_compute[248510]: </domain>
Dec 13 08:34:44 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.161 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Preparing to wait for external event network-vif-plugged-65b19603-026e-4844-b71e-20239b4c9b1b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.161 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "287069fd-8250-4b7f-92c5-74450b2e92ca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.161 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.162 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.162 248514 DEBUG nova.virt.libvirt.vif [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:34:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1887941666',display_name='tempest-ServerAddressesTestJSON-server-1887941666',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1887941666',id=78,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fec71de612e44bc68abc52770bf74e0c',ramdisk_id='',reservation_id='r-a75xsshy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1681799594',owner_user_name='tempest-ServerAddressesTestJSON-1681799594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:34:35Z,user_data=None,user_id='c023c9703ede4646a33bf3a4c18781e1',uuid=287069fd-8250-4b7f-92c5-74450b2e92ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.163 248514 DEBUG nova.network.os_vif_util [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Converting VIF {"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.163 248514 DEBUG nova.network.os_vif_util [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:23:89,bridge_name='br-int',has_traffic_filtering=True,id=65b19603-026e-4844-b71e-20239b4c9b1b,network=Network(ac80588a-2bf0-4858-a3dc-62f2f67c7bfd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65b19603-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.164 248514 DEBUG os_vif [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:23:89,bridge_name='br-int',has_traffic_filtering=True,id=65b19603-026e-4844-b71e-20239b4c9b1b,network=Network(ac80588a-2bf0-4858-a3dc-62f2f67c7bfd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65b19603-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.164 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.165 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.165 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.169 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65b19603-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.170 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap65b19603-02, col_values=(('external_ids', {'iface-id': '65b19603-026e-4844-b71e-20239b4c9b1b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:23:89', 'vm-uuid': '287069fd-8250-4b7f-92c5-74450b2e92ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.172 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:44 compute-0 NetworkManager[50376]: <info>  [1765614884.1731] manager: (tap65b19603-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.181 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.182 248514 INFO os_vif [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:23:89,bridge_name='br-int',has_traffic_filtering=True,id=65b19603-026e-4844-b71e-20239b4c9b1b,network=Network(ac80588a-2bf0-4858-a3dc-62f2f67c7bfd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65b19603-02')
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.218 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.219 248514 DEBUG nova.compute.manager [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.289 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.289 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.290 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] No VIF found with MAC fa:16:3e:12:23:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.290 248514 INFO nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Using config drive
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.319 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.334 248514 DEBUG nova.compute.manager [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.335 248514 DEBUG nova.network.neutron [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.357 248514 INFO nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:34:44 compute-0 rsyslogd[1002]: imjournal from <np0005558241:nova_compute>: begin to drop messages due to rate-limiting
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.383 248514 DEBUG nova.compute.manager [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.535 248514 DEBUG nova.compute.manager [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.537 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.537 248514 INFO nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Creating image(s)
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.558 248514 DEBUG nova.storage.rbd_utils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] rbd image a393861c-2eea-4309-8914-042ec8bcc873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.581 248514 DEBUG nova.storage.rbd_utils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] rbd image a393861c-2eea-4309-8914-042ec8bcc873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.609 248514 DEBUG nova.storage.rbd_utils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] rbd image a393861c-2eea-4309-8914-042ec8bcc873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.614 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.689 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.690 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.691 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.691 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.715 248514 DEBUG nova.storage.rbd_utils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] rbd image a393861c-2eea-4309-8914-042ec8bcc873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.719 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a393861c-2eea-4309-8914-042ec8bcc873_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:44 compute-0 nova_compute[248510]: 2025-12-13 08:34:44.838 248514 DEBUG nova.policy [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f0b9d8d9077a42d49daac45b8edf2a49', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e8467605fc934a9cbc34a7e2300c34ac', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:34:44 compute-0 ceph-mon[76537]: pgmap v2124: 321 pgs: 321 active+clean; 134 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Dec 13 08:34:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1068031923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3088948199' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:45 compute-0 nova_compute[248510]: 2025-12-13 08:34:45.054 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a393861c-2eea-4309-8914-042ec8bcc873_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:45 compute-0 nova_compute[248510]: 2025-12-13 08:34:45.126 248514 DEBUG nova.storage.rbd_utils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] resizing rbd image a393861c-2eea-4309-8914-042ec8bcc873_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:34:45 compute-0 nova_compute[248510]: 2025-12-13 08:34:45.211 248514 DEBUG nova.objects.instance [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lazy-loading 'migration_context' on Instance uuid a393861c-2eea-4309-8914-042ec8bcc873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:34:45 compute-0 nova_compute[248510]: 2025-12-13 08:34:45.231 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:34:45 compute-0 nova_compute[248510]: 2025-12-13 08:34:45.232 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Ensure instance console log exists: /var/lib/nova/instances/a393861c-2eea-4309-8914-042ec8bcc873/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:34:45 compute-0 nova_compute[248510]: 2025-12-13 08:34:45.232 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:45 compute-0 nova_compute[248510]: 2025-12-13 08:34:45.233 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:45 compute-0 nova_compute[248510]: 2025-12-13 08:34:45.233 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:45 compute-0 nova_compute[248510]: 2025-12-13 08:34:45.460 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 134 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Dec 13 08:34:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:45.684 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:45 compute-0 nova_compute[248510]: 2025-12-13 08:34:45.832 248514 DEBUG nova.policy [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1daf78b2d25748d183901a09e4605044', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '082f98d662a4450a9fd767ac13a76391', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:34:45 compute-0 nova_compute[248510]: 2025-12-13 08:34:45.974 248514 INFO nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Creating config drive at /var/lib/nova/instances/287069fd-8250-4b7f-92c5-74450b2e92ca/disk.config
Dec 13 08:34:45 compute-0 nova_compute[248510]: 2025-12-13 08:34:45.979 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/287069fd-8250-4b7f-92c5-74450b2e92ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplbc1o64r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:46 compute-0 nova_compute[248510]: 2025-12-13 08:34:46.143 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/287069fd-8250-4b7f-92c5-74450b2e92ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplbc1o64r" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:46 compute-0 nova_compute[248510]: 2025-12-13 08:34:46.174 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:46 compute-0 nova_compute[248510]: 2025-12-13 08:34:46.178 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/287069fd-8250-4b7f-92c5-74450b2e92ca/disk.config 287069fd-8250-4b7f-92c5-74450b2e92ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:46 compute-0 nova_compute[248510]: 2025-12-13 08:34:46.314 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/287069fd-8250-4b7f-92c5-74450b2e92ca/disk.config 287069fd-8250-4b7f-92c5-74450b2e92ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:46 compute-0 nova_compute[248510]: 2025-12-13 08:34:46.315 248514 INFO nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Deleting local config drive /var/lib/nova/instances/287069fd-8250-4b7f-92c5-74450b2e92ca/disk.config because it was imported into RBD.
Dec 13 08:34:46 compute-0 kernel: tap65b19603-02: entered promiscuous mode
Dec 13 08:34:46 compute-0 NetworkManager[50376]: <info>  [1765614886.3829] manager: (tap65b19603-02): new Tun device (/org/freedesktop/NetworkManager/Devices/330)
Dec 13 08:34:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:34:46 compute-0 ovn_controller[148476]: 2025-12-13T08:34:46Z|00769|binding|INFO|Claiming lport 65b19603-026e-4844-b71e-20239b4c9b1b for this chassis.
Dec 13 08:34:46 compute-0 ovn_controller[148476]: 2025-12-13T08:34:46Z|00770|binding|INFO|65b19603-026e-4844-b71e-20239b4c9b1b: Claiming fa:16:3e:12:23:89 10.100.0.7
Dec 13 08:34:46 compute-0 nova_compute[248510]: 2025-12-13 08:34:46.384 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.405 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:23:89 10.100.0.7'], port_security=['fa:16:3e:12:23:89 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '287069fd-8250-4b7f-92c5-74450b2e92ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fec71de612e44bc68abc52770bf74e0c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66354219-88d0-4e62-864d-287cb8dc51e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60e169ab-1841-42a7-a602-8f965141b8bc, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=65b19603-026e-4844-b71e-20239b4c9b1b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.407 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 65b19603-026e-4844-b71e-20239b4c9b1b in datapath ac80588a-2bf0-4858-a3dc-62f2f67c7bfd bound to our chassis
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.409 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac80588a-2bf0-4858-a3dc-62f2f67c7bfd
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.423 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d36198d-7c46-406c-b726-9330e9c92780]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.424 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapac80588a-21 in ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:34:46 compute-0 systemd-machined[210538]: New machine qemu-97-instance-0000004e.
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.429 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapac80588a-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.429 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[91e21ab8-a73c-49e1-90aa-15a7b423755a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.430 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e24b2201-dfe8-4fec-adc5-2e09c69f721d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-0000004e.
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.445 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9e428c-ea12-4438-87bf-2edb61ba772b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 systemd-udevd[325161]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.467 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[070c8504-b449-4c1d-9e70-4689a1ac3b19]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 nova_compute[248510]: 2025-12-13 08:34:46.473 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:46 compute-0 ovn_controller[148476]: 2025-12-13T08:34:46Z|00771|binding|INFO|Setting lport 65b19603-026e-4844-b71e-20239b4c9b1b ovn-installed in OVS
Dec 13 08:34:46 compute-0 ovn_controller[148476]: 2025-12-13T08:34:46Z|00772|binding|INFO|Setting lport 65b19603-026e-4844-b71e-20239b4c9b1b up in Southbound
Dec 13 08:34:46 compute-0 nova_compute[248510]: 2025-12-13 08:34:46.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:46 compute-0 NetworkManager[50376]: <info>  [1765614886.4852] device (tap65b19603-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:34:46 compute-0 NetworkManager[50376]: <info>  [1765614886.4861] device (tap65b19603-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.515 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9763a703-7168-4264-9630-66bbec4588aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.519 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[678bcf2a-5b76-458d-82d2-d75836c023d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 NetworkManager[50376]: <info>  [1765614886.5209] manager: (tapac80588a-20): new Veth device (/org/freedesktop/NetworkManager/Devices/331)
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.561 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[935b8eb4-27e1-443f-b1ff-7cf755e7727e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.565 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2bce9373-7c26-47c1-9bbb-2c88d533a153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 NetworkManager[50376]: <info>  [1765614886.5897] device (tapac80588a-20): carrier: link connected
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.596 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[19bf35bd-748c-49c4-8462-033a3488e1b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.615 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0631a569-236a-40a1-8fda-8fac34f41000]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac80588a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:ba:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746380, 'reachable_time': 36099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325191, 'error': None, 'target': 'ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.631 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fff6402d-693d-40dd-92fa-2bdea8a6bf47]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe89:ba26'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746380, 'tstamp': 746380}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325192, 'error': None, 'target': 'ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.650 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[436d4acd-3b0d-4503-98f6-be8f8bd77d3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac80588a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:ba:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746380, 'reachable_time': 36099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325193, 'error': None, 'target': 'ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.679 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4a63dce3-635b-4dbc-bb32-7383c82cccd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.756 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e0283bd4-c2f3-4d99-a6cf-65efc7db3d10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.757 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac80588a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.757 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.758 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac80588a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:46 compute-0 nova_compute[248510]: 2025-12-13 08:34:46.759 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:46 compute-0 NetworkManager[50376]: <info>  [1765614886.7606] manager: (tapac80588a-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Dec 13 08:34:46 compute-0 kernel: tapac80588a-20: entered promiscuous mode
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.762 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac80588a-20, col_values=(('external_ids', {'iface-id': '3aefebe5-a65b-4eb6-97c9-8b5fd3c5ffa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:46 compute-0 ovn_controller[148476]: 2025-12-13T08:34:46Z|00773|binding|INFO|Releasing lport 3aefebe5-a65b-4eb6-97c9-8b5fd3c5ffa9 from this chassis (sb_readonly=0)
Dec 13 08:34:46 compute-0 nova_compute[248510]: 2025-12-13 08:34:46.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:46 compute-0 nova_compute[248510]: 2025-12-13 08:34:46.779 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.780 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ac80588a-2bf0-4858-a3dc-62f2f67c7bfd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ac80588a-2bf0-4858-a3dc-62f2f67c7bfd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.782 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7d996206-f5ce-4adb-a0cd-0788864b210e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.782 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/ac80588a-2bf0-4858-a3dc-62f2f67c7bfd.pid.haproxy
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID ac80588a-2bf0-4858-a3dc-62f2f67c7bfd
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:34:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:46.783 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd', 'env', 'PROCESS_TAG=haproxy-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ac80588a-2bf0-4858-a3dc-62f2f67c7bfd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:34:46 compute-0 ceph-mon[76537]: pgmap v2125: 321 pgs: 321 active+clean; 134 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Dec 13 08:34:47 compute-0 nova_compute[248510]: 2025-12-13 08:34:47.086 248514 DEBUG nova.network.neutron [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Successfully created port: 5e06fac2-7381-499a-a4b0-d0006f071eae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:34:47 compute-0 podman[325225]: 2025-12-13 08:34:47.15805168 +0000 UTC m=+0.061826346 container create 265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 08:34:47 compute-0 systemd[1]: Started libpod-conmon-265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8.scope.
Dec 13 08:34:47 compute-0 podman[325225]: 2025-12-13 08:34:47.126011065 +0000 UTC m=+0.029785761 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:34:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779f5d2ba620eb50de48816b1593424aab960a23f59df3272551e22f659b9ff1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:47 compute-0 podman[325225]: 2025-12-13 08:34:47.264633767 +0000 UTC m=+0.168408493 container init 265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:34:47 compute-0 podman[325225]: 2025-12-13 08:34:47.271898138 +0000 UTC m=+0.175672824 container start 265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 08:34:47 compute-0 neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd[325241]: [NOTICE]   (325245) : New worker (325247) forked
Dec 13 08:34:47 compute-0 neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd[325241]: [NOTICE]   (325245) : Loading success.
Dec 13 08:34:47 compute-0 nova_compute[248510]: 2025-12-13 08:34:47.460 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614887.4597793, 287069fd-8250-4b7f-92c5-74450b2e92ca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:47 compute-0 nova_compute[248510]: 2025-12-13 08:34:47.462 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] VM Started (Lifecycle Event)
Dec 13 08:34:47 compute-0 nova_compute[248510]: 2025-12-13 08:34:47.494 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:47 compute-0 nova_compute[248510]: 2025-12-13 08:34:47.500 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614887.4605026, 287069fd-8250-4b7f-92c5-74450b2e92ca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:47 compute-0 nova_compute[248510]: 2025-12-13 08:34:47.501 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] VM Paused (Lifecycle Event)
Dec 13 08:34:47 compute-0 nova_compute[248510]: 2025-12-13 08:34:47.525 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:47 compute-0 nova_compute[248510]: 2025-12-13 08:34:47.532 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:34:47 compute-0 nova_compute[248510]: 2025-12-13 08:34:47.565 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:34:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 134 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Dec 13 08:34:47 compute-0 nova_compute[248510]: 2025-12-13 08:34:47.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:34:47 compute-0 nova_compute[248510]: 2025-12-13 08:34:47.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:34:47 compute-0 nova_compute[248510]: 2025-12-13 08:34:47.881 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:34:48 compute-0 ceph-mon[76537]: pgmap v2126: 321 pgs: 321 active+clean; 134 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Dec 13 08:34:48 compute-0 nova_compute[248510]: 2025-12-13 08:34:48.209 248514 DEBUG nova.network.neutron [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Updated VIF entry in instance network info cache for port 65b19603-026e-4844-b71e-20239b4c9b1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:34:48 compute-0 nova_compute[248510]: 2025-12-13 08:34:48.211 248514 DEBUG nova.network.neutron [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Updating instance_info_cache with network_info: [{"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:34:48 compute-0 nova_compute[248510]: 2025-12-13 08:34:48.377 248514 DEBUG oslo_concurrency.lockutils [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-287069fd-8250-4b7f-92c5-74450b2e92ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:34:48 compute-0 nova_compute[248510]: 2025-12-13 08:34:48.633 248514 DEBUG nova.network.neutron [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Successfully updated port: 5e06fac2-7381-499a-a4b0-d0006f071eae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:34:48 compute-0 nova_compute[248510]: 2025-12-13 08:34:48.662 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Acquiring lock "refresh_cache-a393861c-2eea-4309-8914-042ec8bcc873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:34:48 compute-0 nova_compute[248510]: 2025-12-13 08:34:48.663 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Acquired lock "refresh_cache-a393861c-2eea-4309-8914-042ec8bcc873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:34:48 compute-0 nova_compute[248510]: 2025-12-13 08:34:48.663 248514 DEBUG nova.network.neutron [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:34:48 compute-0 nova_compute[248510]: 2025-12-13 08:34:48.875 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:34:48 compute-0 nova_compute[248510]: 2025-12-13 08:34:48.917 248514 DEBUG nova.network.neutron [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Successfully created port: ca039819-f818-4e22-97a0-a5906ef33fcd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.009 248514 DEBUG nova.compute.manager [req-6236a367-db39-475d-9b74-9bd5a8a62ed2 req-511453eb-27ab-47cd-a5e8-8450d4aa6a0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Received event network-vif-plugged-65b19603-026e-4844-b71e-20239b4c9b1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.010 248514 DEBUG oslo_concurrency.lockutils [req-6236a367-db39-475d-9b74-9bd5a8a62ed2 req-511453eb-27ab-47cd-a5e8-8450d4aa6a0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "287069fd-8250-4b7f-92c5-74450b2e92ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.010 248514 DEBUG oslo_concurrency.lockutils [req-6236a367-db39-475d-9b74-9bd5a8a62ed2 req-511453eb-27ab-47cd-a5e8-8450d4aa6a0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.010 248514 DEBUG oslo_concurrency.lockutils [req-6236a367-db39-475d-9b74-9bd5a8a62ed2 req-511453eb-27ab-47cd-a5e8-8450d4aa6a0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.011 248514 DEBUG nova.compute.manager [req-6236a367-db39-475d-9b74-9bd5a8a62ed2 req-511453eb-27ab-47cd-a5e8-8450d4aa6a0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Processing event network-vif-plugged-65b19603-026e-4844-b71e-20239b4c9b1b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.012 248514 DEBUG nova.network.neutron [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.015 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.020 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614889.0202026, 287069fd-8250-4b7f-92c5-74450b2e92ca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.021 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] VM Resumed (Lifecycle Event)
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.024 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.028 248514 INFO nova.virt.libvirt.driver [-] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Instance spawned successfully.
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.028 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.052 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.057 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.067 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.068 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.069 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.069 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.070 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.070 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.270 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.331 248514 INFO nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Took 13.70 seconds to spawn the instance on the hypervisor.
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.331 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.405 248514 INFO nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Took 14.94 seconds to build instance.
Dec 13 08:34:49 compute-0 nova_compute[248510]: 2025-12-13 08:34:49.427 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 167 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 132 op/s
Dec 13 08:34:50 compute-0 nova_compute[248510]: 2025-12-13 08:34:50.462 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:50 compute-0 ceph-mon[76537]: pgmap v2127: 321 pgs: 321 active+clean; 167 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 132 op/s
Dec 13 08:34:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.387 248514 DEBUG nova.network.neutron [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Updating instance_info_cache with network_info: [{"id": "5e06fac2-7381-499a-a4b0-d0006f071eae", "address": "fa:16:3e:46:d5:27", "network": {"id": "423b50b9-fa16-4f05-a1e9-47a446c396ca", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1443061613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8467605fc934a9cbc34a7e2300c34ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e06fac2-73", "ovs_interfaceid": "5e06fac2-7381-499a-a4b0-d0006f071eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.614 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Releasing lock "refresh_cache-a393861c-2eea-4309-8914-042ec8bcc873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.616 248514 DEBUG nova.compute.manager [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Instance network_info: |[{"id": "5e06fac2-7381-499a-a4b0-d0006f071eae", "address": "fa:16:3e:46:d5:27", "network": {"id": "423b50b9-fa16-4f05-a1e9-47a446c396ca", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1443061613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8467605fc934a9cbc34a7e2300c34ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e06fac2-73", "ovs_interfaceid": "5e06fac2-7381-499a-a4b0-d0006f071eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.618 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Start _get_guest_xml network_info=[{"id": "5e06fac2-7381-499a-a4b0-d0006f071eae", "address": "fa:16:3e:46:d5:27", "network": {"id": "423b50b9-fa16-4f05-a1e9-47a446c396ca", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1443061613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8467605fc934a9cbc34a7e2300c34ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e06fac2-73", "ovs_interfaceid": "5e06fac2-7381-499a-a4b0-d0006f071eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.633 248514 WARNING nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.643 248514 DEBUG nova.virt.libvirt.host [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:34:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 181 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 143 op/s
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.645 248514 DEBUG nova.virt.libvirt.host [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.649 248514 DEBUG nova.virt.libvirt.host [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.650 248514 DEBUG nova.virt.libvirt.host [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.650 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.651 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.651 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.652 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.652 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.652 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.652 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.653 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.653 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.654 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.654 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.654 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.658 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.932 248514 DEBUG nova.network.neutron [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Successfully updated port: ca039819-f818-4e22-97a0-a5906ef33fcd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.956 248514 DEBUG oslo_concurrency.lockutils [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.957 248514 DEBUG oslo_concurrency.lockutils [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquired lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:34:51 compute-0 nova_compute[248510]: 2025-12-13 08:34:51.957 248514 DEBUG nova.network.neutron [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.156 248514 WARNING nova.network.neutron [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] dfb72597-ac7e-412b-834c-574618fe1a4c already exists in list: networks containing: ['dfb72597-ac7e-412b-834c-574618fe1a4c']. ignoring it
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.298 248514 DEBUG nova.compute.manager [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Received event network-vif-plugged-65b19603-026e-4844-b71e-20239b4c9b1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.298 248514 DEBUG oslo_concurrency.lockutils [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "287069fd-8250-4b7f-92c5-74450b2e92ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.298 248514 DEBUG oslo_concurrency.lockutils [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.299 248514 DEBUG oslo_concurrency.lockutils [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.299 248514 DEBUG nova.compute.manager [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] No waiting events found dispatching network-vif-plugged-65b19603-026e-4844-b71e-20239b4c9b1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.299 248514 WARNING nova.compute.manager [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Received unexpected event network-vif-plugged-65b19603-026e-4844-b71e-20239b4c9b1b for instance with vm_state active and task_state None.
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.300 248514 DEBUG nova.compute.manager [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Received event network-changed-5e06fac2-7381-499a-a4b0-d0006f071eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.300 248514 DEBUG nova.compute.manager [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Refreshing instance network info cache due to event network-changed-5e06fac2-7381-499a-a4b0-d0006f071eae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.300 248514 DEBUG oslo_concurrency.lockutils [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-a393861c-2eea-4309-8914-042ec8bcc873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.300 248514 DEBUG oslo_concurrency.lockutils [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-a393861c-2eea-4309-8914-042ec8bcc873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.300 248514 DEBUG nova.network.neutron [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Refreshing network info cache for port 5e06fac2-7381-499a-a4b0-d0006f071eae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:34:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:34:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3519152726' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.399 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.421 248514 DEBUG nova.storage.rbd_utils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] rbd image a393861c-2eea-4309-8914-042ec8bcc873_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:52 compute-0 nova_compute[248510]: 2025-12-13 08:34:52.426 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:52 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 13 08:34:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:34:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1590415988' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.044 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.046 248514 DEBUG nova.virt.libvirt.vif [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:34:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-308458658',display_name='tempest-ServerAddressesNegativeTestJSON-server-308458658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-308458658',id=79,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e8467605fc934a9cbc34a7e2300c34ac',ramdisk_id='',reservation_id='r-tkrurkxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-84987378',owner_user_name='tempest-ServerAddressesNegativeTestJSON-84987378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:34:44Z,user_data=None,user_id='f0b9d8d9077a42d49daac45b8edf2a49',uuid=a393861c-2eea-4309-8914-042ec8bcc873,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e06fac2-7381-499a-a4b0-d0006f071eae", "address": "fa:16:3e:46:d5:27", "network": {"id": "423b50b9-fa16-4f05-a1e9-47a446c396ca", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1443061613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8467605fc934a9cbc34a7e2300c34ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e06fac2-73", "ovs_interfaceid": "5e06fac2-7381-499a-a4b0-d0006f071eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.046 248514 DEBUG nova.network.os_vif_util [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Converting VIF {"id": "5e06fac2-7381-499a-a4b0-d0006f071eae", "address": "fa:16:3e:46:d5:27", "network": {"id": "423b50b9-fa16-4f05-a1e9-47a446c396ca", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1443061613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8467605fc934a9cbc34a7e2300c34ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e06fac2-73", "ovs_interfaceid": "5e06fac2-7381-499a-a4b0-d0006f071eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.047 248514 DEBUG nova.network.os_vif_util [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:d5:27,bridge_name='br-int',has_traffic_filtering=True,id=5e06fac2-7381-499a-a4b0-d0006f071eae,network=Network(423b50b9-fa16-4f05-a1e9-47a446c396ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e06fac2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.049 248514 DEBUG nova.objects.instance [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lazy-loading 'pci_devices' on Instance uuid a393861c-2eea-4309-8914-042ec8bcc873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:34:53 compute-0 ceph-mon[76537]: pgmap v2128: 321 pgs: 321 active+clean; 181 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 143 op/s
Dec 13 08:34:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3519152726' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.253 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:34:53 compute-0 nova_compute[248510]:   <uuid>a393861c-2eea-4309-8914-042ec8bcc873</uuid>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   <name>instance-0000004f</name>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-308458658</nova:name>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:34:51</nova:creationTime>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:34:53 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:34:53 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:34:53 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:34:53 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:34:53 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:34:53 compute-0 nova_compute[248510]:         <nova:user uuid="f0b9d8d9077a42d49daac45b8edf2a49">tempest-ServerAddressesNegativeTestJSON-84987378-project-member</nova:user>
Dec 13 08:34:53 compute-0 nova_compute[248510]:         <nova:project uuid="e8467605fc934a9cbc34a7e2300c34ac">tempest-ServerAddressesNegativeTestJSON-84987378</nova:project>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:34:53 compute-0 nova_compute[248510]:         <nova:port uuid="5e06fac2-7381-499a-a4b0-d0006f071eae">
Dec 13 08:34:53 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <system>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <entry name="serial">a393861c-2eea-4309-8914-042ec8bcc873</entry>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <entry name="uuid">a393861c-2eea-4309-8914-042ec8bcc873</entry>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     </system>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   <os>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   </os>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   <features>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   </features>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/a393861c-2eea-4309-8914-042ec8bcc873_disk">
Dec 13 08:34:53 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       </source>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:34:53 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/a393861c-2eea-4309-8914-042ec8bcc873_disk.config">
Dec 13 08:34:53 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       </source>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:34:53 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:46:d5:27"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <target dev="tap5e06fac2-73"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/a393861c-2eea-4309-8914-042ec8bcc873/console.log" append="off"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <video>
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     </video>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:34:53 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:34:53 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:34:53 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:34:53 compute-0 nova_compute[248510]: </domain>
Dec 13 08:34:53 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.256 248514 DEBUG nova.compute.manager [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Preparing to wait for external event network-vif-plugged-5e06fac2-7381-499a-a4b0-d0006f071eae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.256 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Acquiring lock "a393861c-2eea-4309-8914-042ec8bcc873-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.256 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.257 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.257 248514 DEBUG nova.virt.libvirt.vif [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:34:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-308458658',display_name='tempest-ServerAddressesNegativeTestJSON-server-308458658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-308458658',id=79,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e8467605fc934a9cbc34a7e2300c34ac',ramdisk_id='',reservation_id='r-tkrurkxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-84987378',owner_user_name='tempest-ServerAddressesNegativeTestJSON-84987378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:34:44Z,user_data=None,user_id='f0b9d8d9077a42d49daac45b8edf2a49',uuid=a393861c-2eea-4309-8914-042ec8bcc873,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e06fac2-7381-499a-a4b0-d0006f071eae", "address": "fa:16:3e:46:d5:27", "network": {"id": "423b50b9-fa16-4f05-a1e9-47a446c396ca", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1443061613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8467605fc934a9cbc34a7e2300c34ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e06fac2-73", "ovs_interfaceid": "5e06fac2-7381-499a-a4b0-d0006f071eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.258 248514 DEBUG nova.network.os_vif_util [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Converting VIF {"id": "5e06fac2-7381-499a-a4b0-d0006f071eae", "address": "fa:16:3e:46:d5:27", "network": {"id": "423b50b9-fa16-4f05-a1e9-47a446c396ca", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1443061613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8467605fc934a9cbc34a7e2300c34ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e06fac2-73", "ovs_interfaceid": "5e06fac2-7381-499a-a4b0-d0006f071eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.258 248514 DEBUG nova.network.os_vif_util [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:d5:27,bridge_name='br-int',has_traffic_filtering=True,id=5e06fac2-7381-499a-a4b0-d0006f071eae,network=Network(423b50b9-fa16-4f05-a1e9-47a446c396ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e06fac2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.259 248514 DEBUG os_vif [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:d5:27,bridge_name='br-int',has_traffic_filtering=True,id=5e06fac2-7381-499a-a4b0-d0006f071eae,network=Network(423b50b9-fa16-4f05-a1e9-47a446c396ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e06fac2-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.259 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.260 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.260 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.264 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.264 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e06fac2-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.265 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5e06fac2-73, col_values=(('external_ids', {'iface-id': '5e06fac2-7381-499a-a4b0-d0006f071eae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:d5:27', 'vm-uuid': 'a393861c-2eea-4309-8914-042ec8bcc873'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.266 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:53 compute-0 NetworkManager[50376]: <info>  [1765614893.2679] manager: (tap5e06fac2-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.269 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.275 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.276 248514 INFO os_vif [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:d5:27,bridge_name='br-int',has_traffic_filtering=True,id=5e06fac2-7381-499a-a4b0-d0006f071eae,network=Network(423b50b9-fa16-4f05-a1e9-47a446c396ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e06fac2-73')
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.520 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.521 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.522 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] No VIF found with MAC fa:16:3e:46:d5:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.523 248514 INFO nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Using config drive
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.551 248514 DEBUG nova.storage.rbd_utils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] rbd image a393861c-2eea-4309-8914-042ec8bcc873_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.582 248514 DEBUG nova.compute.manager [req-b90a357a-2d7c-4742-b6fd-2f2b6fdfcd3a req-185902eb-9d39-42ff-9528-c54f79549e72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-changed-ca039819-f818-4e22-97a0-a5906ef33fcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.583 248514 DEBUG nova.compute.manager [req-b90a357a-2d7c-4742-b6fd-2f2b6fdfcd3a req-185902eb-9d39-42ff-9528-c54f79549e72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Refreshing instance network info cache due to event network-changed-ca039819-f818-4e22-97a0-a5906ef33fcd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.583 248514 DEBUG oslo_concurrency.lockutils [req-b90a357a-2d7c-4742-b6fd-2f2b6fdfcd3a req-185902eb-9d39-42ff-9528-c54f79549e72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:34:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 181 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.815 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.815 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.816 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:34:53 compute-0 nova_compute[248510]: 2025-12-13 08:34:53.816 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1590415988' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:34:54 compute-0 ceph-mon[76537]: pgmap v2129: 321 pgs: 321 active+clean; 181 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Dec 13 08:34:54 compute-0 ovn_controller[148476]: 2025-12-13T08:34:54Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:28:0b 10.100.0.13
Dec 13 08:34:54 compute-0 ovn_controller[148476]: 2025-12-13T08:34:54Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:28:0b 10.100.0.13
Dec 13 08:34:54 compute-0 nova_compute[248510]: 2025-12-13 08:34:54.319 248514 INFO nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Creating config drive at /var/lib/nova/instances/a393861c-2eea-4309-8914-042ec8bcc873/disk.config
Dec 13 08:34:54 compute-0 nova_compute[248510]: 2025-12-13 08:34:54.326 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a393861c-2eea-4309-8914-042ec8bcc873/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdpnjoupl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:34:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/299057405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:54 compute-0 nova_compute[248510]: 2025-12-13 08:34:54.376 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:54 compute-0 nova_compute[248510]: 2025-12-13 08:34:54.475 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a393861c-2eea-4309-8914-042ec8bcc873/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdpnjoupl" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:54 compute-0 nova_compute[248510]: 2025-12-13 08:34:54.508 248514 DEBUG nova.storage.rbd_utils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] rbd image a393861c-2eea-4309-8914-042ec8bcc873_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:34:54 compute-0 nova_compute[248510]: 2025-12-13 08:34:54.513 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a393861c-2eea-4309-8914-042ec8bcc873/disk.config a393861c-2eea-4309-8914-042ec8bcc873_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:54 compute-0 nova_compute[248510]: 2025-12-13 08:34:54.894 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a393861c-2eea-4309-8914-042ec8bcc873/disk.config a393861c-2eea-4309-8914-042ec8bcc873_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:54 compute-0 nova_compute[248510]: 2025-12-13 08:34:54.896 248514 INFO nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Deleting local config drive /var/lib/nova/instances/a393861c-2eea-4309-8914-042ec8bcc873/disk.config because it was imported into RBD.
Dec 13 08:34:54 compute-0 kernel: tap5e06fac2-73: entered promiscuous mode
Dec 13 08:34:54 compute-0 NetworkManager[50376]: <info>  [1765614894.9575] manager: (tap5e06fac2-73): new Tun device (/org/freedesktop/NetworkManager/Devices/334)
Dec 13 08:34:54 compute-0 ovn_controller[148476]: 2025-12-13T08:34:54Z|00774|binding|INFO|Claiming lport 5e06fac2-7381-499a-a4b0-d0006f071eae for this chassis.
Dec 13 08:34:54 compute-0 ovn_controller[148476]: 2025-12-13T08:34:54Z|00775|binding|INFO|5e06fac2-7381-499a-a4b0-d0006f071eae: Claiming fa:16:3e:46:d5:27 10.100.0.5
Dec 13 08:34:54 compute-0 nova_compute[248510]: 2025-12-13 08:34:54.961 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:54 compute-0 systemd-udevd[325453]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:34:55 compute-0 NetworkManager[50376]: <info>  [1765614895.0056] device (tap5e06fac2-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:34:55 compute-0 NetworkManager[50376]: <info>  [1765614895.0071] device (tap5e06fac2-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:34:55 compute-0 systemd-machined[210538]: New machine qemu-98-instance-0000004f.
Dec 13 08:34:55 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-0000004f.
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.044 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:55 compute-0 ovn_controller[148476]: 2025-12-13T08:34:55Z|00776|binding|INFO|Setting lport 5e06fac2-7381-499a-a4b0-d0006f071eae ovn-installed in OVS
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.060 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/299057405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.397 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614895.3953018, a393861c-2eea-4309-8914-042ec8bcc873 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.399 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a393861c-2eea-4309-8914-042ec8bcc873] VM Started (Lifecycle Event)
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.416 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.417 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.418 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:55 compute-0 ovn_controller[148476]: 2025-12-13T08:34:55Z|00777|binding|INFO|Setting lport 5e06fac2-7381-499a-a4b0-d0006f071eae up in Southbound
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.456 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:d5:27 10.100.0.5'], port_security=['fa:16:3e:46:d5:27 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a393861c-2eea-4309-8914-042ec8bcc873', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-423b50b9-fa16-4f05-a1e9-47a446c396ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e8467605fc934a9cbc34a7e2300c34ac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f471ddb-3a4b-48d5-824f-0cf108b6a545', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bb4a7dc-ac9c-4da2-9e67-2deb7a14e733, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5e06fac2-7381-499a-a4b0-d0006f071eae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.458 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5e06fac2-7381-499a-a4b0-d0006f071eae in datapath 423b50b9-fa16-4f05-a1e9-47a446c396ca bound to our chassis
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.460 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 423b50b9-fa16-4f05-a1e9-47a446c396ca
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.465 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.479 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f426e2ec-0293-4485-9c80-d6b3ca75fb24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.480 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap423b50b9-f1 in ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.483 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap423b50b9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.483 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9b5866-a93a-4c69-9a17-7b62d6db680e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.484 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39f5af3a-e771-43b4-9e4f-a0042cdff6c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.489 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.499 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614895.3956769, a393861c-2eea-4309-8914-042ec8bcc873 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.500 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a393861c-2eea-4309-8914-042ec8bcc873] VM Paused (Lifecycle Event)
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.501 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[88e9af5f-4738-42ad-ba60-697f3a03eabd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.535 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.539 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.539 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.540 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.549 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.549 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.549 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3de000-466e-4627-b12d-860ccc283a7a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.555 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.555 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.597 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4a8a28d1-9133-448b-ba22-d4a1d3aba5b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 systemd-udevd[325458]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:34:55 compute-0 NetworkManager[50376]: <info>  [1765614895.6072] manager: (tap423b50b9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/335)
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.605 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ec464b44-0331-49ec-8ab1-b68e5f9c464d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.636 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a393861c-2eea-4309-8914-042ec8bcc873] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:34:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 192 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 175 op/s
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.659 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3d05bbc1-8917-41ed-932d-9091282244c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.664 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[be66de0f-e57c-4e7c-a07f-7d48f858913d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 NetworkManager[50376]: <info>  [1765614895.6915] device (tap423b50b9-f0): carrier: link connected
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.702 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[95f28fd0-93a4-401b-92cd-9eeefec74a8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.729 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fccbe836-83d8-449f-8f11-bb801c387d3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap423b50b9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:36:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747290, 'reachable_time': 43238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325532, 'error': None, 'target': 'ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.756 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[560c3fde-10f5-4204-b9a5-ac5114d39ddc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:3621'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747290, 'tstamp': 747290}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325533, 'error': None, 'target': 'ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.782 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b2dde0ec-ae20-4707-bf8b-bb20cd289bed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap423b50b9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:36:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747290, 'reachable_time': 43238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325534, 'error': None, 'target': 'ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.828 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3622956f-9cf6-4f16-b7a4-557738324eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.852 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.853 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3598MB free_disk=59.91934675630182GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.854 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.855 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.881 248514 DEBUG nova.network.neutron [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Updating instance_info_cache with network_info: [{"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ca039819-f818-4e22-97a0-a5906ef33fcd", "address": "fa:16:3e:1c:4b:eb", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca039819-f8", "ovs_interfaceid": "ca039819-f818-4e22-97a0-a5906ef33fcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.899 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f81bfee-7cb3-4e57-9747-64422d015fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.901 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap423b50b9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.902 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.902 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap423b50b9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.904 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:55 compute-0 NetworkManager[50376]: <info>  [1765614895.9050] manager: (tap423b50b9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Dec 13 08:34:55 compute-0 kernel: tap423b50b9-f0: entered promiscuous mode
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.907 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap423b50b9-f0, col_values=(('external_ids', {'iface-id': 'f4ca72ec-f056-44f7-8bca-4f89df55607d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:55 compute-0 ovn_controller[148476]: 2025-12-13T08:34:55Z|00778|binding|INFO|Releasing lport f4ca72ec-f056-44f7-8bca-4f89df55607d from this chassis (sb_readonly=0)
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.908 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:55 compute-0 nova_compute[248510]: 2025-12-13 08:34:55.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.925 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/423b50b9-fa16-4f05-a1e9-47a446c396ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/423b50b9-fa16-4f05-a1e9-47a446c396ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.926 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[181fbb8a-224f-40a8-b1e1-4b9c120a9291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.927 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-423b50b9-fa16-4f05-a1e9-47a446c396ca
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/423b50b9-fa16-4f05-a1e9-47a446c396ca.pid.haproxy
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 423b50b9-fa16-4f05-a1e9-47a446c396ca
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:55.929 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca', 'env', 'PROCESS_TAG=haproxy-423b50b9-fa16-4f05-a1e9-47a446c396ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/423b50b9-fa16-4f05-a1e9-47a446c396ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:34:56 compute-0 ceph-mon[76537]: pgmap v2130: 321 pgs: 321 active+clean; 192 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 175 op/s
Dec 13 08:34:56 compute-0 podman[325566]: 2025-12-13 08:34:56.337926971 +0000 UTC m=+0.048326942 container create a5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 08:34:56 compute-0 systemd[1]: Started libpod-conmon-a5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4.scope.
Dec 13 08:34:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:34:56 compute-0 podman[325566]: 2025-12-13 08:34:56.313108214 +0000 UTC m=+0.023508205 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:34:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6661a0868b700b36c29455bd3bf88828350e5b82334bbc74720098be93a0a908/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:34:56 compute-0 podman[325566]: 2025-12-13 08:34:56.428297625 +0000 UTC m=+0.138697616 container init a5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 08:34:56 compute-0 podman[325566]: 2025-12-13 08:34:56.434850458 +0000 UTC m=+0.145250429 container start a5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:34:56 compute-0 neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca[325581]: [NOTICE]   (325585) : New worker (325587) forked
Dec 13 08:34:56 compute-0 neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca[325581]: [NOTICE]   (325585) : Loading success.
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.830 248514 DEBUG oslo_concurrency.lockutils [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Releasing lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.831 248514 DEBUG oslo_concurrency.lockutils [req-b90a357a-2d7c-4742-b6fd-2f2b6fdfcd3a req-185902eb-9d39-42ff-9528-c54f79549e72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.832 248514 DEBUG nova.network.neutron [req-b90a357a-2d7c-4742-b6fd-2f2b6fdfcd3a req-185902eb-9d39-42ff-9528-c54f79549e72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Refreshing network info cache for port ca039819-f818-4e22-97a0-a5906ef33fcd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.835 248514 DEBUG nova.virt.libvirt.vif [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:34:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-458198005',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-458198005',id=77,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:34:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='082f98d662a4450a9fd767ac13a76391',ramdisk_id='',reservation_id='r-rt0w0l4h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0'
,owner_project_name='tempest-AttachInterfacesV270Test-2087650193',owner_user_name='tempest-AttachInterfacesV270Test-2087650193-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:34:41Z,user_data=None,user_id='1daf78b2d25748d183901a09e4605044',uuid=68f568e2-917b-4b70-8345-8d04c0f5d1a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca039819-f818-4e22-97a0-a5906ef33fcd", "address": "fa:16:3e:1c:4b:eb", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca039819-f8", "ovs_interfaceid": "ca039819-f818-4e22-97a0-a5906ef33fcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.835 248514 DEBUG nova.network.os_vif_util [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converting VIF {"id": "ca039819-f818-4e22-97a0-a5906ef33fcd", "address": "fa:16:3e:1c:4b:eb", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca039819-f8", "ovs_interfaceid": "ca039819-f818-4e22-97a0-a5906ef33fcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.836 248514 DEBUG nova.network.os_vif_util [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:4b:eb,bridge_name='br-int',has_traffic_filtering=True,id=ca039819-f818-4e22-97a0-a5906ef33fcd,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca039819-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.836 248514 DEBUG os_vif [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:4b:eb,bridge_name='br-int',has_traffic_filtering=True,id=ca039819-f818-4e22-97a0-a5906ef33fcd,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca039819-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.837 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.837 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.838 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.840 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca039819-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.841 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca039819-f8, col_values=(('external_ids', {'iface-id': 'ca039819-f818-4e22-97a0-a5906ef33fcd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1c:4b:eb', 'vm-uuid': '68f568e2-917b-4b70-8345-8d04c0f5d1a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:56 compute-0 NetworkManager[50376]: <info>  [1765614896.8441] manager: (tapca039819-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.844 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.850 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.851 248514 INFO os_vif [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:4b:eb,bridge_name='br-int',has_traffic_filtering=True,id=ca039819-f818-4e22-97a0-a5906ef33fcd,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca039819-f8')
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.852 248514 DEBUG nova.virt.libvirt.vif [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:34:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-458198005',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-458198005',id=77,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:34:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='082f98d662a4450a9fd767ac13a76391',ramdisk_id='',reservation_id='r-rt0w0l4h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0'
,owner_project_name='tempest-AttachInterfacesV270Test-2087650193',owner_user_name='tempest-AttachInterfacesV270Test-2087650193-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:34:41Z,user_data=None,user_id='1daf78b2d25748d183901a09e4605044',uuid=68f568e2-917b-4b70-8345-8d04c0f5d1a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca039819-f818-4e22-97a0-a5906ef33fcd", "address": "fa:16:3e:1c:4b:eb", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca039819-f8", "ovs_interfaceid": "ca039819-f818-4e22-97a0-a5906ef33fcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.853 248514 DEBUG nova.network.os_vif_util [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converting VIF {"id": "ca039819-f818-4e22-97a0-a5906ef33fcd", "address": "fa:16:3e:1c:4b:eb", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca039819-f8", "ovs_interfaceid": "ca039819-f818-4e22-97a0-a5906ef33fcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.853 248514 DEBUG nova.network.os_vif_util [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:4b:eb,bridge_name='br-int',has_traffic_filtering=True,id=ca039819-f818-4e22-97a0-a5906ef33fcd,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca039819-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.856 248514 DEBUG nova.virt.libvirt.guest [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] attach device xml: <interface type="ethernet">
Dec 13 08:34:56 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:1c:4b:eb"/>
Dec 13 08:34:56 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:34:56 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:34:56 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:34:56 compute-0 nova_compute[248510]:   <target dev="tapca039819-f8"/>
Dec 13 08:34:56 compute-0 nova_compute[248510]: </interface>
Dec 13 08:34:56 compute-0 nova_compute[248510]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 08:34:56 compute-0 kernel: tapca039819-f8: entered promiscuous mode
Dec 13 08:34:56 compute-0 ovn_controller[148476]: 2025-12-13T08:34:56Z|00779|binding|INFO|Claiming lport ca039819-f818-4e22-97a0-a5906ef33fcd for this chassis.
Dec 13 08:34:56 compute-0 ovn_controller[148476]: 2025-12-13T08:34:56Z|00780|binding|INFO|ca039819-f818-4e22-97a0-a5906ef33fcd: Claiming fa:16:3e:1c:4b:eb 10.100.0.11
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:56 compute-0 systemd-udevd[325515]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:34:56 compute-0 NetworkManager[50376]: <info>  [1765614896.8766] manager: (tapca039819-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/338)
Dec 13 08:34:56 compute-0 NetworkManager[50376]: <info>  [1765614896.8853] device (tapca039819-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:34:56 compute-0 NetworkManager[50376]: <info>  [1765614896.8866] device (tapca039819-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:34:56 compute-0 ovn_controller[148476]: 2025-12-13T08:34:56Z|00781|binding|INFO|Setting lport ca039819-f818-4e22-97a0-a5906ef33fcd ovn-installed in OVS
Dec 13 08:34:56 compute-0 ovn_controller[148476]: 2025-12-13T08:34:56Z|00782|binding|INFO|Setting lport ca039819-f818-4e22-97a0-a5906ef33fcd up in Southbound
Dec 13 08:34:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:56.889 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:4b:eb 10.100.0.11'], port_security=['fa:16:3e:1c:4b:eb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '68f568e2-917b-4b70-8345-8d04c0f5d1a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfb72597-ac7e-412b-834c-574618fe1a4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '082f98d662a4450a9fd767ac13a76391', 'neutron:revision_number': '2', 'neutron:security_group_ids': '79fb568a-41f5-487e-bdd9-c21c5553decf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=778b417f-43c9-4614-a0b9-308a3289fddb, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ca039819-f818-4e22-97a0-a5906ef33fcd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:34:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:56.891 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ca039819-f818-4e22-97a0-a5906ef33fcd in datapath dfb72597-ac7e-412b-834c-574618fe1a4c bound to our chassis
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.893 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:56.894 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dfb72597-ac7e-412b-834c-574618fe1a4c
Dec 13 08:34:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:56.912 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[97321d6f-4a19-4cec-a12e-dee5294dc87d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:56.949 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b11530cb-d6a5-46d5-8233-89490fdcd5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:56.952 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f0cdb02a-ed70-400f-91d6-7d60efdfa14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.975 248514 DEBUG nova.virt.libvirt.driver [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.976 248514 DEBUG nova.virt.libvirt.driver [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.976 248514 DEBUG nova.virt.libvirt.driver [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] No VIF found with MAC fa:16:3e:07:28:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.977 248514 DEBUG nova.virt.libvirt.driver [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] No VIF found with MAC fa:16:3e:1c:4b:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.980 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 68f568e2-917b-4b70-8345-8d04c0f5d1a6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.980 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 287069fd-8250-4b7f-92c5-74450b2e92ca actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.980 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance a393861c-2eea-4309-8914-042ec8bcc873 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.981 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:34:56 compute-0 nova_compute[248510]: 2025-12-13 08:34:56.981 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:34:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:56.991 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a36e94e1-1f0f-4e54-aa7e-04e5b940bb0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:57.007 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[37f3d0c7-f857-4487-9804-5eb5923fa07c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfb72597-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:fa:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745186, 'reachable_time': 24813, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325607, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:57.030 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c845d80a-a462-4770-89a6-dcb811cbe7ef]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdfb72597-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 745200, 'tstamp': 745200}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325608, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdfb72597-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 745202, 'tstamp': 745202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325608, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:57.032 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfb72597-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.034 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.035 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:57.035 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfb72597-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:57.036 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:34:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:57.036 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdfb72597-a0, col_values=(('external_ids', {'iface-id': '5ddcc36f-0a9b-4b88-89cc-ebf23344a1de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:57.037 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.048 248514 DEBUG nova.virt.libvirt.guest [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:34:57 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:34:57 compute-0 nova_compute[248510]:   <nova:name>tempest-AttachInterfacesV270Test-server-458198005</nova:name>
Dec 13 08:34:57 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:34:57</nova:creationTime>
Dec 13 08:34:57 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:34:57 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:34:57 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:34:57 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:34:57 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:34:57 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:34:57 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:34:57 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:34:57 compute-0 nova_compute[248510]:     <nova:user uuid="1daf78b2d25748d183901a09e4605044">tempest-AttachInterfacesV270Test-2087650193-project-member</nova:user>
Dec 13 08:34:57 compute-0 nova_compute[248510]:     <nova:project uuid="082f98d662a4450a9fd767ac13a76391">tempest-AttachInterfacesV270Test-2087650193</nova:project>
Dec 13 08:34:57 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:34:57 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:34:57 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:34:57 compute-0 nova_compute[248510]:     <nova:port uuid="9c81d394-98c0-444d-a1f2-9b909c7e2559">
Dec 13 08:34:57 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:34:57 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:34:57 compute-0 nova_compute[248510]:     <nova:port uuid="ca039819-f818-4e22-97a0-a5906ef33fcd">
Dec 13 08:34:57 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 08:34:57 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:34:57 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:34:57 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:34:57 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.114 248514 DEBUG oslo_concurrency.lockutils [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "interface-68f568e2-917b-4b70-8345-8d04c0f5d1a6-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 13.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.177 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.396 248514 DEBUG oslo_concurrency.lockutils [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "287069fd-8250-4b7f-92c5-74450b2e92ca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.397 248514 DEBUG oslo_concurrency.lockutils [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.397 248514 DEBUG oslo_concurrency.lockutils [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "287069fd-8250-4b7f-92c5-74450b2e92ca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.398 248514 DEBUG oslo_concurrency.lockutils [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.398 248514 DEBUG oslo_concurrency.lockutils [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.400 248514 INFO nova.compute.manager [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Terminating instance
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.401 248514 DEBUG nova.compute.manager [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:34:57 compute-0 kernel: tap65b19603-02 (unregistering): left promiscuous mode
Dec 13 08:34:57 compute-0 NetworkManager[50376]: <info>  [1765614897.4400] device (tap65b19603-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:34:57 compute-0 ovn_controller[148476]: 2025-12-13T08:34:57Z|00783|binding|INFO|Releasing lport 65b19603-026e-4844-b71e-20239b4c9b1b from this chassis (sb_readonly=0)
Dec 13 08:34:57 compute-0 ovn_controller[148476]: 2025-12-13T08:34:57Z|00784|binding|INFO|Setting lport 65b19603-026e-4844-b71e-20239b4c9b1b down in Southbound
Dec 13 08:34:57 compute-0 ovn_controller[148476]: 2025-12-13T08:34:57Z|00785|binding|INFO|Removing iface tap65b19603-02 ovn-installed in OVS
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.451 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:57.459 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:23:89 10.100.0.7'], port_security=['fa:16:3e:12:23:89 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '287069fd-8250-4b7f-92c5-74450b2e92ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fec71de612e44bc68abc52770bf74e0c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66354219-88d0-4e62-864d-287cb8dc51e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60e169ab-1841-42a7-a602-8f965141b8bc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=65b19603-026e-4844-b71e-20239b4c9b1b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:34:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:57.460 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 65b19603-026e-4844-b71e-20239b4c9b1b in datapath ac80588a-2bf0-4858-a3dc-62f2f67c7bfd unbound from our chassis
Dec 13 08:34:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:57.463 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac80588a-2bf0-4858-a3dc-62f2f67c7bfd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:34:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:57.464 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4dfcec-40fe-48e0-b840-4b66440d9518]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:57.464 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd namespace which is not needed anymore
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:57 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Dec 13 08:34:57 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d0000004e.scope: Consumed 9.559s CPU time.
Dec 13 08:34:57 compute-0 systemd-machined[210538]: Machine qemu-97-instance-0000004e terminated.
Dec 13 08:34:57 compute-0 podman[325634]: 2025-12-13 08:34:57.524674642 +0000 UTC m=+0.053140510 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:34:57 compute-0 podman[325633]: 2025-12-13 08:34:57.544586307 +0000 UTC m=+0.064000451 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.583 248514 DEBUG nova.network.neutron [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Updated VIF entry in instance network info cache for port 5e06fac2-7381-499a-a4b0-d0006f071eae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.584 248514 DEBUG nova.network.neutron [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Updating instance_info_cache with network_info: [{"id": "5e06fac2-7381-499a-a4b0-d0006f071eae", "address": "fa:16:3e:46:d5:27", "network": {"id": "423b50b9-fa16-4f05-a1e9-47a446c396ca", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1443061613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8467605fc934a9cbc34a7e2300c34ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e06fac2-73", "ovs_interfaceid": "5e06fac2-7381-499a-a4b0-d0006f071eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.596 248514 DEBUG nova.compute.manager [req-aaecdded-f381-4c4f-a39f-55c54dc74720 req-d4198644-65e0-4879-8ffe-3ced299e71fd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Received event network-vif-plugged-5e06fac2-7381-499a-a4b0-d0006f071eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.597 248514 DEBUG oslo_concurrency.lockutils [req-aaecdded-f381-4c4f-a39f-55c54dc74720 req-d4198644-65e0-4879-8ffe-3ced299e71fd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a393861c-2eea-4309-8914-042ec8bcc873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.597 248514 DEBUG oslo_concurrency.lockutils [req-aaecdded-f381-4c4f-a39f-55c54dc74720 req-d4198644-65e0-4879-8ffe-3ced299e71fd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.598 248514 DEBUG oslo_concurrency.lockutils [req-aaecdded-f381-4c4f-a39f-55c54dc74720 req-d4198644-65e0-4879-8ffe-3ced299e71fd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.598 248514 DEBUG nova.compute.manager [req-aaecdded-f381-4c4f-a39f-55c54dc74720 req-d4198644-65e0-4879-8ffe-3ced299e71fd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Processing event network-vif-plugged-5e06fac2-7381-499a-a4b0-d0006f071eae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.599 248514 DEBUG nova.compute.manager [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.602 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614897.6025784, a393861c-2eea-4309-8914-042ec8bcc873 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.603 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a393861c-2eea-4309-8914-042ec8bcc873] VM Resumed (Lifecycle Event)
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.605 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.612 248514 INFO nova.virt.libvirt.driver [-] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Instance spawned successfully.
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.613 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.629 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 192 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 139 op/s
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.656 248514 INFO nova.virt.libvirt.driver [-] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Instance destroyed successfully.
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.657 248514 DEBUG nova.objects.instance [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lazy-loading 'resources' on Instance uuid 287069fd-8250-4b7f-92c5-74450b2e92ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:34:57 compute-0 neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd[325241]: [NOTICE]   (325245) : haproxy version is 2.8.14-c23fe91
Dec 13 08:34:57 compute-0 neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd[325241]: [NOTICE]   (325245) : path to executable is /usr/sbin/haproxy
Dec 13 08:34:57 compute-0 neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd[325241]: [WARNING]  (325245) : Exiting Master process...
Dec 13 08:34:57 compute-0 neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd[325241]: [WARNING]  (325245) : Exiting Master process...
Dec 13 08:34:57 compute-0 neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd[325241]: [ALERT]    (325245) : Current worker (325247) exited with code 143 (Terminated)
Dec 13 08:34:57 compute-0 neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd[325241]: [WARNING]  (325245) : All workers exited. Exiting... (0)
Dec 13 08:34:57 compute-0 systemd[1]: libpod-265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8.scope: Deactivated successfully.
Dec 13 08:34:57 compute-0 podman[325629]: 2025-12-13 08:34:57.677106858 +0000 UTC m=+0.209750050 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 08:34:57 compute-0 podman[325702]: 2025-12-13 08:34:57.684239915 +0000 UTC m=+0.130970604 container died 265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.721 248514 DEBUG oslo_concurrency.lockutils [req-0aa3eeb7-43f9-4ff8-a016-82ec8aaabf1b req-758d181e-a981-41c5-83d2-863ab4620e2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-a393861c-2eea-4309-8914-042ec8bcc873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:34:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:34:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2003712307' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.804 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:34:57 compute-0 nova_compute[248510]: 2025-12-13 08:34:57.809 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:34:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2003712307' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8-userdata-shm.mount: Deactivated successfully.
Dec 13 08:34:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-779f5d2ba620eb50de48816b1593424aab960a23f59df3272551e22f659b9ff1-merged.mount: Deactivated successfully.
Dec 13 08:34:58 compute-0 podman[325702]: 2025-12-13 08:34:58.016827784 +0000 UTC m=+0.463558453 container cleanup 265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.019 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.024 248514 DEBUG nova.virt.libvirt.vif [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:34:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1887941666',display_name='tempest-ServerAddressesTestJSON-server-1887941666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1887941666',id=78,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:34:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fec71de612e44bc68abc52770bf74e0c',ramdisk_id='',reservation_id='r-a75xsshy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1681799594',owner_user_name='tempest-ServerAddressesTestJSON-1681799594-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:34:49Z,user_data=None,user_id='c023c9703ede4646a33bf3a4c18781e1',uuid=287069fd-8250-4b7f-92c5-74450b2e92ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.025 248514 DEBUG nova.network.os_vif_util [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Converting VIF {"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.026 248514 DEBUG nova.network.os_vif_util [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:23:89,bridge_name='br-int',has_traffic_filtering=True,id=65b19603-026e-4844-b71e-20239b4c9b1b,network=Network(ac80588a-2bf0-4858-a3dc-62f2f67c7bfd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65b19603-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.026 248514 DEBUG os_vif [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:23:89,bridge_name='br-int',has_traffic_filtering=True,id=65b19603-026e-4844-b71e-20239b4c9b1b,network=Network(ac80588a-2bf0-4858-a3dc-62f2f67c7bfd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65b19603-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.029 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.029 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65b19603-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.033 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.034 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.035 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.037 248514 INFO os_vif [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:23:89,bridge_name='br-int',has_traffic_filtering=True,id=65b19603-026e-4844-b71e-20239b4c9b1b,network=Network(ac80588a-2bf0-4858-a3dc-62f2f67c7bfd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65b19603-02')
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.060 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.061 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.062 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:58 compute-0 systemd[1]: libpod-conmon-265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8.scope: Deactivated successfully.
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.063 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.064 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.064 248514 DEBUG nova.virt.libvirt.driver [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.072 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.089 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.092 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.093 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:34:58 compute-0 podman[325748]: 2025-12-13 08:34:58.108498041 +0000 UTC m=+0.052938316 container remove 265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:34:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:58.114 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4d87929d-90d7-401f-9d03-daac183aeba6]: (4, ('Sat Dec 13 08:34:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd (265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8)\n265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8\nSat Dec 13 08:34:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd (265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8)\n265e9bbe309ebe9fc405c13085a702b6f623383a1638c5227aa060aee5e584d8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:58.117 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f133df4-95cc-4593-9c4d-f2a91b24b580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:58.118 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac80588a-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.121 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:58 compute-0 kernel: tapac80588a-20: left promiscuous mode
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.124 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:58.126 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc9c3c8-9dc8-472c-9f0f-c3840e30c490]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:58.143 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9feaba5-2c7c-4ffd-88a5-13b0b1b10e63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.146 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.149 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a393861c-2eea-4309-8914-042ec8bcc873] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:34:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:58.149 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf79e062-bae5-4cc2-a52c-ebd13016ef36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:58.180 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[33a98eee-0e03-4f3c-b795-c020afb94110]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746372, 'reachable_time': 15726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325781, 'error': None, 'target': 'ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:58 compute-0 systemd[1]: run-netns-ovnmeta\x2dac80588a\x2d2bf0\x2d4858\x2da3dc\x2d62f2f67c7bfd.mount: Deactivated successfully.
Dec 13 08:34:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:58.186 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ac80588a-2bf0-4858-a3dc-62f2f67c7bfd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:34:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:34:58.186 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[97e31730-9e46-49fc-8bad-992202eb8efb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.203 248514 INFO nova.compute.manager [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Took 13.67 seconds to spawn the instance on the hypervisor.
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.203 248514 DEBUG nova.compute.manager [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.334 248514 INFO nova.compute.manager [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Took 15.45 seconds to build instance.
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.349 248514 INFO nova.virt.libvirt.driver [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Deleting instance files /var/lib/nova/instances/287069fd-8250-4b7f-92c5-74450b2e92ca_del
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.351 248514 INFO nova.virt.libvirt.driver [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Deletion of /var/lib/nova/instances/287069fd-8250-4b7f-92c5-74450b2e92ca_del complete
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.481 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.647 248514 INFO nova.compute.manager [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Took 1.25 seconds to destroy the instance on the hypervisor.
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.647 248514 DEBUG oslo.service.loopingcall [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.647 248514 DEBUG nova.compute.manager [-] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:34:58 compute-0 nova_compute[248510]: 2025-12-13 08:34:58.648 248514 DEBUG nova.network.neutron [-] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:34:59 compute-0 ceph-mon[76537]: pgmap v2131: 321 pgs: 321 active+clean; 192 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 139 op/s
Dec 13 08:34:59 compute-0 ovn_controller[148476]: 2025-12-13T08:34:59Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1c:4b:eb 10.100.0.11
Dec 13 08:34:59 compute-0 ovn_controller[148476]: 2025-12-13T08:34:59Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1c:4b:eb 10.100.0.11
Dec 13 08:34:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 195 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Dec 13 08:35:00 compute-0 ceph-mon[76537]: pgmap v2132: 321 pgs: 321 active+clean; 195 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Dec 13 08:35:00 compute-0 nova_compute[248510]: 2025-12-13 08:35:00.315 248514 DEBUG nova.compute.manager [req-d9a597d2-78ee-466f-8c00-03561d252280 req-1bdbef0f-be5f-4c1f-8797-5b95117c2dac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Received event network-vif-plugged-5e06fac2-7381-499a-a4b0-d0006f071eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:00 compute-0 nova_compute[248510]: 2025-12-13 08:35:00.316 248514 DEBUG oslo_concurrency.lockutils [req-d9a597d2-78ee-466f-8c00-03561d252280 req-1bdbef0f-be5f-4c1f-8797-5b95117c2dac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a393861c-2eea-4309-8914-042ec8bcc873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:00 compute-0 nova_compute[248510]: 2025-12-13 08:35:00.316 248514 DEBUG oslo_concurrency.lockutils [req-d9a597d2-78ee-466f-8c00-03561d252280 req-1bdbef0f-be5f-4c1f-8797-5b95117c2dac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:00 compute-0 nova_compute[248510]: 2025-12-13 08:35:00.316 248514 DEBUG oslo_concurrency.lockutils [req-d9a597d2-78ee-466f-8c00-03561d252280 req-1bdbef0f-be5f-4c1f-8797-5b95117c2dac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:00 compute-0 nova_compute[248510]: 2025-12-13 08:35:00.316 248514 DEBUG nova.compute.manager [req-d9a597d2-78ee-466f-8c00-03561d252280 req-1bdbef0f-be5f-4c1f-8797-5b95117c2dac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] No waiting events found dispatching network-vif-plugged-5e06fac2-7381-499a-a4b0-d0006f071eae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:35:00 compute-0 nova_compute[248510]: 2025-12-13 08:35:00.317 248514 WARNING nova.compute.manager [req-d9a597d2-78ee-466f-8c00-03561d252280 req-1bdbef0f-be5f-4c1f-8797-5b95117c2dac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Received unexpected event network-vif-plugged-5e06fac2-7381-499a-a4b0-d0006f071eae for instance with vm_state active and task_state None.
Dec 13 08:35:00 compute-0 nova_compute[248510]: 2025-12-13 08:35:00.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:00 compute-0 nova_compute[248510]: 2025-12-13 08:35:00.770 248514 DEBUG nova.network.neutron [req-b90a357a-2d7c-4742-b6fd-2f2b6fdfcd3a req-185902eb-9d39-42ff-9528-c54f79549e72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Updated VIF entry in instance network info cache for port ca039819-f818-4e22-97a0-a5906ef33fcd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:35:00 compute-0 nova_compute[248510]: 2025-12-13 08:35:00.770 248514 DEBUG nova.network.neutron [req-b90a357a-2d7c-4742-b6fd-2f2b6fdfcd3a req-185902eb-9d39-42ff-9528-c54f79549e72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Updating instance_info_cache with network_info: [{"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ca039819-f818-4e22-97a0-a5906ef33fcd", "address": "fa:16:3e:1c:4b:eb", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca039819-f8", "ovs_interfaceid": "ca039819-f818-4e22-97a0-a5906ef33fcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:35:00 compute-0 nova_compute[248510]: 2025-12-13 08:35:00.814 248514 DEBUG oslo_concurrency.lockutils [req-b90a357a-2d7c-4742-b6fd-2f2b6fdfcd3a req-185902eb-9d39-42ff-9528-c54f79549e72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.163 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.163 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.163 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:35:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.490 248514 DEBUG oslo_concurrency.lockutils [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Acquiring lock "a393861c-2eea-4309-8914-042ec8bcc873" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.491 248514 DEBUG oslo_concurrency.lockutils [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.491 248514 DEBUG oslo_concurrency.lockutils [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Acquiring lock "a393861c-2eea-4309-8914-042ec8bcc873-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.491 248514 DEBUG oslo_concurrency.lockutils [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.492 248514 DEBUG oslo_concurrency.lockutils [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.493 248514 INFO nova.compute.manager [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Terminating instance
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.494 248514 DEBUG nova.compute.manager [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:35:01 compute-0 kernel: tap5e06fac2-73 (unregistering): left promiscuous mode
Dec 13 08:35:01 compute-0 NetworkManager[50376]: <info>  [1765614901.5349] device (tap5e06fac2-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:35:01 compute-0 ovn_controller[148476]: 2025-12-13T08:35:01Z|00786|binding|INFO|Releasing lport 5e06fac2-7381-499a-a4b0-d0006f071eae from this chassis (sb_readonly=0)
Dec 13 08:35:01 compute-0 ovn_controller[148476]: 2025-12-13T08:35:01Z|00787|binding|INFO|Setting lport 5e06fac2-7381-499a-a4b0-d0006f071eae down in Southbound
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:01 compute-0 ovn_controller[148476]: 2025-12-13T08:35:01Z|00788|binding|INFO|Removing iface tap5e06fac2-73 ovn-installed in OVS
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.561 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:01 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Dec 13 08:35:01 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d0000004f.scope: Consumed 4.223s CPU time.
Dec 13 08:35:01 compute-0 systemd-machined[210538]: Machine qemu-98-instance-0000004f terminated.
Dec 13 08:35:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 189 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.4 MiB/s wr, 214 op/s
Dec 13 08:35:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:01.702 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:d5:27 10.100.0.5'], port_security=['fa:16:3e:46:d5:27 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a393861c-2eea-4309-8914-042ec8bcc873', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-423b50b9-fa16-4f05-a1e9-47a446c396ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e8467605fc934a9cbc34a7e2300c34ac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f471ddb-3a4b-48d5-824f-0cf108b6a545', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bb4a7dc-ac9c-4da2-9e67-2deb7a14e733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5e06fac2-7381-499a-a4b0-d0006f071eae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:35:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:01.705 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5e06fac2-7381-499a-a4b0-d0006f071eae in datapath 423b50b9-fa16-4f05-a1e9-47a446c396ca unbound from our chassis
Dec 13 08:35:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:01.709 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 423b50b9-fa16-4f05-a1e9-47a446c396ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:35:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:01.710 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e57d61-247a-40db-b7c1-601e61b62136]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:01.711 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca namespace which is not needed anymore
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.800 248514 INFO nova.virt.libvirt.driver [-] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Instance destroyed successfully.
Dec 13 08:35:01 compute-0 nova_compute[248510]: 2025-12-13 08:35:01.801 248514 DEBUG nova.objects.instance [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lazy-loading 'resources' on Instance uuid a393861c-2eea-4309-8914-042ec8bcc873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:35:01 compute-0 neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca[325581]: [NOTICE]   (325585) : haproxy version is 2.8.14-c23fe91
Dec 13 08:35:01 compute-0 neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca[325581]: [NOTICE]   (325585) : path to executable is /usr/sbin/haproxy
Dec 13 08:35:01 compute-0 neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca[325581]: [WARNING]  (325585) : Exiting Master process...
Dec 13 08:35:01 compute-0 neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca[325581]: [ALERT]    (325585) : Current worker (325587) exited with code 143 (Terminated)
Dec 13 08:35:01 compute-0 neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca[325581]: [WARNING]  (325585) : All workers exited. Exiting... (0)
Dec 13 08:35:01 compute-0 systemd[1]: libpod-a5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4.scope: Deactivated successfully.
Dec 13 08:35:01 compute-0 podman[325821]: 2025-12-13 08:35:01.932559557 +0000 UTC m=+0.055297654 container died a5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:35:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4-userdata-shm.mount: Deactivated successfully.
Dec 13 08:35:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6661a0868b700b36c29455bd3bf88828350e5b82334bbc74720098be93a0a908-merged.mount: Deactivated successfully.
Dec 13 08:35:01 compute-0 podman[325821]: 2025-12-13 08:35:01.96974846 +0000 UTC m=+0.092486557 container cleanup a5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:35:01 compute-0 systemd[1]: libpod-conmon-a5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4.scope: Deactivated successfully.
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.022 248514 DEBUG nova.virt.libvirt.vif [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:34:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-308458658',display_name='tempest-ServerAddressesNegativeTestJSON-server-308458658',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-308458658',id=79,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:34:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e8467605fc934a9cbc34a7e2300c34ac',ramdisk_id='',reservation_id='r-tkrurkxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-84987378',owner_user_name='tempest-ServerAddressesNegativeTestJSON-84987378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:34:58Z,user_data=None,user_id='f0b9d8d9077a42d49daac45b8edf2a49',uuid=a393861c-2eea-4309-8914-042ec8bcc873,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e06fac2-7381-499a-a4b0-d0006f071eae", "address": "fa:16:3e:46:d5:27", "network": {"id": "423b50b9-fa16-4f05-a1e9-47a446c396ca", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1443061613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8467605fc934a9cbc34a7e2300c34ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e06fac2-73", "ovs_interfaceid": "5e06fac2-7381-499a-a4b0-d0006f071eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.023 248514 DEBUG nova.network.os_vif_util [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Converting VIF {"id": "5e06fac2-7381-499a-a4b0-d0006f071eae", "address": "fa:16:3e:46:d5:27", "network": {"id": "423b50b9-fa16-4f05-a1e9-47a446c396ca", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1443061613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8467605fc934a9cbc34a7e2300c34ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e06fac2-73", "ovs_interfaceid": "5e06fac2-7381-499a-a4b0-d0006f071eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.023 248514 DEBUG nova.network.os_vif_util [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:d5:27,bridge_name='br-int',has_traffic_filtering=True,id=5e06fac2-7381-499a-a4b0-d0006f071eae,network=Network(423b50b9-fa16-4f05-a1e9-47a446c396ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e06fac2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.024 248514 DEBUG os_vif [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:d5:27,bridge_name='br-int',has_traffic_filtering=True,id=5e06fac2-7381-499a-a4b0-d0006f071eae,network=Network(423b50b9-fa16-4f05-a1e9-47a446c396ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e06fac2-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.026 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.026 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e06fac2-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:02 compute-0 podman[325849]: 2025-12-13 08:35:02.044259691 +0000 UTC m=+0.054722280 container remove a5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.074 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.076 248514 INFO os_vif [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:d5:27,bridge_name='br-int',has_traffic_filtering=True,id=5e06fac2-7381-499a-a4b0-d0006f071eae,network=Network(423b50b9-fa16-4f05-a1e9-47a446c396ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e06fac2-73')
Dec 13 08:35:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:02.076 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[469d9627-bc85-438a-a19a-0f5a9fa34dc3]: (4, ('Sat Dec 13 08:35:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca (a5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4)\na5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4\nSat Dec 13 08:35:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca (a5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4)\na5208cf8cee52cd845b3b86655aa534e7e22d539897e2e1e7fc452f946dd28f4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:02.078 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d15b5abc-7ee0-4697-8ce0-8928af2d0492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:02.079 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap423b50b9-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:02 compute-0 kernel: tap423b50b9-f0: left promiscuous mode
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.096 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.099 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.100 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:02.105 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[125f1b23-105d-47b6-8d4f-d01833e5c292]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:02.123 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d067d57c-accb-479a-b2fd-d5ac3f39a21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:02.125 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2ed9b5-d45f-4d2d-b9ed-d982096ba6f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:02.146 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f074929-6140-4e00-b8c3-761730b52efe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747280, 'reachable_time': 16553, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325882, 'error': None, 'target': 'ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d423b50b9\x2dfa16\x2d4f05\x2da1e9\x2d47a446c396ca.mount: Deactivated successfully.
Dec 13 08:35:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:02.148 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-423b50b9-fa16-4f05-a1e9-47a446c396ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:35:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:02.148 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[fe893a6e-9c0f-4370-8603-5b75d5a8d3fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.358 248514 INFO nova.virt.libvirt.driver [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Deleting instance files /var/lib/nova/instances/a393861c-2eea-4309-8914-042ec8bcc873_del
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.359 248514 INFO nova.virt.libvirt.driver [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Deletion of /var/lib/nova/instances/a393861c-2eea-4309-8914-042ec8bcc873_del complete
Dec 13 08:35:02 compute-0 ceph-mon[76537]: pgmap v2133: 321 pgs: 321 active+clean; 189 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.4 MiB/s wr, 214 op/s
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.755 248514 INFO nova.compute.manager [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Took 1.26 seconds to destroy the instance on the hypervisor.
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.755 248514 DEBUG oslo.service.loopingcall [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.756 248514 DEBUG nova.compute.manager [-] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.756 248514 DEBUG nova.network.neutron [-] [instance: a393861c-2eea-4309-8914-042ec8bcc873] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 08:35:02 compute-0 nova_compute[248510]: 2025-12-13 08:35:02.996 248514 DEBUG nova.network.neutron [-] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.101 248514 DEBUG nova.compute.manager [req-8b2e7ab0-20b8-448b-b66b-b32af8f9e9f0 req-d8fb61e8-e607-419e-aaed-3f0a8ec56a73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Received event network-vif-unplugged-5e06fac2-7381-499a-a4b0-d0006f071eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.101 248514 DEBUG oslo_concurrency.lockutils [req-8b2e7ab0-20b8-448b-b66b-b32af8f9e9f0 req-d8fb61e8-e607-419e-aaed-3f0a8ec56a73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a393861c-2eea-4309-8914-042ec8bcc873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.101 248514 DEBUG oslo_concurrency.lockutils [req-8b2e7ab0-20b8-448b-b66b-b32af8f9e9f0 req-d8fb61e8-e607-419e-aaed-3f0a8ec56a73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.102 248514 DEBUG oslo_concurrency.lockutils [req-8b2e7ab0-20b8-448b-b66b-b32af8f9e9f0 req-d8fb61e8-e607-419e-aaed-3f0a8ec56a73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.102 248514 DEBUG nova.compute.manager [req-8b2e7ab0-20b8-448b-b66b-b32af8f9e9f0 req-d8fb61e8-e607-419e-aaed-3f0a8ec56a73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] No waiting events found dispatching network-vif-unplugged-5e06fac2-7381-499a-a4b0-d0006f071eae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.102 248514 DEBUG nova.compute.manager [req-8b2e7ab0-20b8-448b-b66b-b32af8f9e9f0 req-d8fb61e8-e607-419e-aaed-3f0a8ec56a73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Received event network-vif-unplugged-5e06fac2-7381-499a-a4b0-d0006f071eae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.105 248514 DEBUG nova.compute.manager [req-12213e37-f0ac-4b11-9db6-b5a4fdd9c099 req-085b9c18-ccc8-4de0-b18b-97c9f4ef0309 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Received event network-vif-deleted-65b19603-026e-4844-b71e-20239b4c9b1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.105 248514 INFO nova.compute.manager [req-12213e37-f0ac-4b11-9db6-b5a4fdd9c099 req-085b9c18-ccc8-4de0-b18b-97c9f4ef0309 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Neutron deleted interface 65b19603-026e-4844-b71e-20239b4c9b1b; detaching it from the instance and deleting it from the info cache
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.105 248514 DEBUG nova.network.neutron [req-12213e37-f0ac-4b11-9db6-b5a4fdd9c099 req-085b9c18-ccc8-4de0-b18b-97c9f4ef0309 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.200 248514 INFO nova.compute.manager [-] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Took 4.55 seconds to deallocate network for instance.
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.210 248514 DEBUG nova.compute.manager [req-12213e37-f0ac-4b11-9db6-b5a4fdd9c099 req-085b9c18-ccc8-4de0-b18b-97c9f4ef0309 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Detach interface failed, port_id=65b19603-026e-4844-b71e-20239b4c9b1b, reason: Instance 287069fd-8250-4b7f-92c5-74450b2e92ca could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.260 248514 DEBUG oslo_concurrency.lockutils [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.261 248514 DEBUG oslo_concurrency.lockutils [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.339 248514 DEBUG oslo_concurrency.processutils [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 151 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 229 op/s
Dec 13 08:35:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:35:03 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/970685015' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.934 248514 DEBUG oslo_concurrency.processutils [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:03 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.942 248514 DEBUG nova.compute.provider_tree [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:35:04 compute-0 nova_compute[248510]: 2025-12-13 08:35:03.999 248514 DEBUG nova.scheduler.client.report [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:35:04 compute-0 nova_compute[248510]: 2025-12-13 08:35:04.113 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:04 compute-0 nova_compute[248510]: 2025-12-13 08:35:04.113 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 08:35:04 compute-0 nova_compute[248510]: 2025-12-13 08:35:04.115 248514 DEBUG oslo_concurrency.lockutils [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:04 compute-0 nova_compute[248510]: 2025-12-13 08:35:04.277 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 08:35:04 compute-0 nova_compute[248510]: 2025-12-13 08:35:04.339 248514 INFO nova.scheduler.client.report [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Deleted allocations for instance 287069fd-8250-4b7f-92c5-74450b2e92ca
Dec 13 08:35:04 compute-0 nova_compute[248510]: 2025-12-13 08:35:04.455 248514 DEBUG oslo_concurrency.lockutils [None req-3b065d46-7808-4bb0-b101-eae483d6bfbb c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:04 compute-0 nova_compute[248510]: 2025-12-13 08:35:04.730 248514 DEBUG nova.network.neutron [-] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:35:04 compute-0 ceph-mon[76537]: pgmap v2134: 321 pgs: 321 active+clean; 151 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 229 op/s
Dec 13 08:35:04 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/970685015' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:04 compute-0 nova_compute[248510]: 2025-12-13 08:35:04.766 248514 INFO nova.compute.manager [-] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Took 2.01 seconds to deallocate network for instance.
Dec 13 08:35:04 compute-0 nova_compute[248510]: 2025-12-13 08:35:04.830 248514 DEBUG oslo_concurrency.lockutils [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:04 compute-0 nova_compute[248510]: 2025-12-13 08:35:04.831 248514 DEBUG oslo_concurrency.lockutils [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:04 compute-0 nova_compute[248510]: 2025-12-13 08:35:04.949 248514 DEBUG oslo_concurrency.processutils [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.332 248514 DEBUG nova.compute.manager [req-4b1b957e-c153-4e88-89ca-3bbd85fe2604 req-fc15954f-6d09-446d-9c81-6fae010a4791 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Received event network-vif-deleted-5e06fac2-7381-499a-a4b0-d0006f071eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.423 248514 DEBUG nova.compute.manager [req-a5f9edb9-e01a-417d-90dc-8cf58cc1d35d req-3f8e61b3-a34a-42e5-ba46-06c6cd7cf145 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Received event network-vif-plugged-5e06fac2-7381-499a-a4b0-d0006f071eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.423 248514 DEBUG oslo_concurrency.lockutils [req-a5f9edb9-e01a-417d-90dc-8cf58cc1d35d req-3f8e61b3-a34a-42e5-ba46-06c6cd7cf145 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a393861c-2eea-4309-8914-042ec8bcc873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.423 248514 DEBUG oslo_concurrency.lockutils [req-a5f9edb9-e01a-417d-90dc-8cf58cc1d35d req-3f8e61b3-a34a-42e5-ba46-06c6cd7cf145 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.424 248514 DEBUG oslo_concurrency.lockutils [req-a5f9edb9-e01a-417d-90dc-8cf58cc1d35d req-3f8e61b3-a34a-42e5-ba46-06c6cd7cf145 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.424 248514 DEBUG nova.compute.manager [req-a5f9edb9-e01a-417d-90dc-8cf58cc1d35d req-3f8e61b3-a34a-42e5-ba46-06c6cd7cf145 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] No waiting events found dispatching network-vif-plugged-5e06fac2-7381-499a-a4b0-d0006f071eae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.424 248514 WARNING nova.compute.manager [req-a5f9edb9-e01a-417d-90dc-8cf58cc1d35d req-3f8e61b3-a34a-42e5-ba46-06c6cd7cf145 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Received unexpected event network-vif-plugged-5e06fac2-7381-499a-a4b0-d0006f071eae for instance with vm_state deleted and task_state None.
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.470 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:35:05 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/319055591' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.555 248514 DEBUG oslo_concurrency.processutils [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.564 248514 DEBUG nova.compute.provider_tree [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:35:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 121 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.741 248514 DEBUG nova.scheduler.client.report [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:35:05 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/319055591' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.774 248514 DEBUG oslo_concurrency.lockutils [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:05 compute-0 nova_compute[248510]: 2025-12-13 08:35:05.847 248514 INFO nova.scheduler.client.report [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Deleted allocations for instance a393861c-2eea-4309-8914-042ec8bcc873
Dec 13 08:35:06 compute-0 nova_compute[248510]: 2025-12-13 08:35:06.010 248514 DEBUG oslo_concurrency.lockutils [None req-5d2c4410-6639-4cba-aa80-b40818f8b646 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:35:06 compute-0 ceph-mon[76537]: pgmap v2135: 321 pgs: 321 active+clean; 121 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Dec 13 08:35:07 compute-0 nova_compute[248510]: 2025-12-13 08:35:07.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 121 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 776 KiB/s wr, 156 op/s
Dec 13 08:35:08 compute-0 nova_compute[248510]: 2025-12-13 08:35:08.029 248514 DEBUG nova.compute.manager [req-621ab286-7ea5-4aa5-b329-f2a95a1f27b0 req-45f36846-99f8-414a-9bc6-ccace4436d44 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-plugged-ca039819-f818-4e22-97a0-a5906ef33fcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:08 compute-0 nova_compute[248510]: 2025-12-13 08:35:08.029 248514 DEBUG oslo_concurrency.lockutils [req-621ab286-7ea5-4aa5-b329-f2a95a1f27b0 req-45f36846-99f8-414a-9bc6-ccace4436d44 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:08 compute-0 nova_compute[248510]: 2025-12-13 08:35:08.030 248514 DEBUG oslo_concurrency.lockutils [req-621ab286-7ea5-4aa5-b329-f2a95a1f27b0 req-45f36846-99f8-414a-9bc6-ccace4436d44 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:08 compute-0 nova_compute[248510]: 2025-12-13 08:35:08.031 248514 DEBUG oslo_concurrency.lockutils [req-621ab286-7ea5-4aa5-b329-f2a95a1f27b0 req-45f36846-99f8-414a-9bc6-ccace4436d44 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:08 compute-0 nova_compute[248510]: 2025-12-13 08:35:08.031 248514 DEBUG nova.compute.manager [req-621ab286-7ea5-4aa5-b329-f2a95a1f27b0 req-45f36846-99f8-414a-9bc6-ccace4436d44 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] No waiting events found dispatching network-vif-plugged-ca039819-f818-4e22-97a0-a5906ef33fcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:35:08 compute-0 nova_compute[248510]: 2025-12-13 08:35:08.032 248514 WARNING nova.compute.manager [req-621ab286-7ea5-4aa5-b329-f2a95a1f27b0 req-45f36846-99f8-414a-9bc6-ccace4436d44 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received unexpected event network-vif-plugged-ca039819-f818-4e22-97a0-a5906ef33fcd for instance with vm_state active and task_state None.
Dec 13 08:35:08 compute-0 ceph-mon[76537]: pgmap v2136: 321 pgs: 321 active+clean; 121 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 776 KiB/s wr, 156 op/s
Dec 13 08:35:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:35:09
Dec 13 08:35:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:35:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:35:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.mgr', 'default.rgw.log', 'images', '.rgw.root']
Dec 13 08:35:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.489 248514 DEBUG oslo_concurrency.lockutils [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.490 248514 DEBUG oslo_concurrency.lockutils [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.491 248514 DEBUG oslo_concurrency.lockutils [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.491 248514 DEBUG oslo_concurrency.lockutils [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.491 248514 DEBUG oslo_concurrency.lockutils [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.493 248514 INFO nova.compute.manager [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Terminating instance
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.495 248514 DEBUG nova.compute.manager [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:35:09 compute-0 kernel: tap9c81d394-98 (unregistering): left promiscuous mode
Dec 13 08:35:09 compute-0 NetworkManager[50376]: <info>  [1765614909.5504] device (tap9c81d394-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:35:09 compute-0 ovn_controller[148476]: 2025-12-13T08:35:09Z|00789|binding|INFO|Releasing lport 9c81d394-98c0-444d-a1f2-9b909c7e2559 from this chassis (sb_readonly=0)
Dec 13 08:35:09 compute-0 ovn_controller[148476]: 2025-12-13T08:35:09Z|00790|binding|INFO|Setting lport 9c81d394-98c0-444d-a1f2-9b909c7e2559 down in Southbound
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 ovn_controller[148476]: 2025-12-13T08:35:09Z|00791|binding|INFO|Removing iface tap9c81d394-98 ovn-installed in OVS
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.559 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.575 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:28:0b 10.100.0.13'], port_security=['fa:16:3e:07:28:0b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '68f568e2-917b-4b70-8345-8d04c0f5d1a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfb72597-ac7e-412b-834c-574618fe1a4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '082f98d662a4450a9fd767ac13a76391', 'neutron:revision_number': '4', 'neutron:security_group_ids': '79fb568a-41f5-487e-bdd9-c21c5553decf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=778b417f-43c9-4614-a0b9-308a3289fddb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=9c81d394-98c0-444d-a1f2-9b909c7e2559) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.576 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 9c81d394-98c0-444d-a1f2-9b909c7e2559 in datapath dfb72597-ac7e-412b-834c-574618fe1a4c unbound from our chassis
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.577 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dfb72597-ac7e-412b-834c-574618fe1a4c
Dec 13 08:35:09 compute-0 kernel: tapca039819-f8 (unregistering): left promiscuous mode
Dec 13 08:35:09 compute-0 NetworkManager[50376]: <info>  [1765614909.5848] device (tapca039819-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:35:09 compute-0 ovn_controller[148476]: 2025-12-13T08:35:09Z|00792|binding|INFO|Releasing lport ca039819-f818-4e22-97a0-a5906ef33fcd from this chassis (sb_readonly=0)
Dec 13 08:35:09 compute-0 ovn_controller[148476]: 2025-12-13T08:35:09Z|00793|binding|INFO|Setting lport ca039819-f818-4e22-97a0-a5906ef33fcd down in Southbound
Dec 13 08:35:09 compute-0 ovn_controller[148476]: 2025-12-13T08:35:09Z|00794|binding|INFO|Removing iface tapca039819-f8 ovn-installed in OVS
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.592 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.592 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5a55536d-71a9-419f-b62d-ef03e0f0f113]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.600 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:4b:eb 10.100.0.11'], port_security=['fa:16:3e:1c:4b:eb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '68f568e2-917b-4b70-8345-8d04c0f5d1a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfb72597-ac7e-412b-834c-574618fe1a4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '082f98d662a4450a9fd767ac13a76391', 'neutron:revision_number': '4', 'neutron:security_group_ids': '79fb568a-41f5-487e-bdd9-c21c5553decf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=778b417f-43c9-4614-a0b9-308a3289fddb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ca039819-f818-4e22-97a0-a5906ef33fcd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.626 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[30de06be-ccd2-4813-8e43-627272aa5f41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.629 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d23b7070-9fe8-4645-88d8-928be73bd5b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:09 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Dec 13 08:35:09 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d0000004d.scope: Consumed 13.664s CPU time.
Dec 13 08:35:09 compute-0 systemd-machined[210538]: Machine qemu-96-instance-0000004d terminated.
Dec 13 08:35:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 121 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 776 KiB/s wr, 156 op/s
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.659 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8c30c3b6-491f-4fb7-936e-ba0e25326a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[33572ef5-7365-48a5-a129-9fba00ba5188]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfb72597-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:fa:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745186, 'reachable_time': 24813, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325946, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.692 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b835752a-38ac-43db-a418-2b1528880ea8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdfb72597-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 745200, 'tstamp': 745200}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325947, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdfb72597-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 745202, 'tstamp': 745202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325947, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.694 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfb72597-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.695 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.701 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.701 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfb72597-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.702 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.702 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdfb72597-a0, col_values=(('external_ids', {'iface-id': '5ddcc36f-0a9b-4b88-89cc-ebf23344a1de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.702 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.704 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ca039819-f818-4e22-97a0-a5906ef33fcd in datapath dfb72597-ac7e-412b-834c-574618fe1a4c unbound from our chassis
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.705 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dfb72597-ac7e-412b-834c-574618fe1a4c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.706 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2a082c77-6b1d-4224-866f-29731acde640]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.706 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c namespace which is not needed anymore
Dec 13 08:35:09 compute-0 NetworkManager[50376]: <info>  [1765614909.7235] manager: (tapca039819-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.744 248514 INFO nova.virt.libvirt.driver [-] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Instance destroyed successfully.
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.746 248514 DEBUG nova.objects.instance [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lazy-loading 'resources' on Instance uuid 68f568e2-917b-4b70-8345-8d04c0f5d1a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.802 248514 DEBUG nova.virt.libvirt.vif [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:34:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-458198005',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-458198005',id=77,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:34:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='082f98d662a4450a9fd767ac13a76391',ramdisk_id='',reservation_id='r-rt0w0l4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project
_name='tempest-AttachInterfacesV270Test-2087650193',owner_user_name='tempest-AttachInterfacesV270Test-2087650193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:34:41Z,user_data=None,user_id='1daf78b2d25748d183901a09e4605044',uuid=68f568e2-917b-4b70-8345-8d04c0f5d1a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.802 248514 DEBUG nova.network.os_vif_util [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converting VIF {"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.803 248514 DEBUG nova.network.os_vif_util [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:28:0b,bridge_name='br-int',has_traffic_filtering=True,id=9c81d394-98c0-444d-a1f2-9b909c7e2559,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c81d394-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.803 248514 DEBUG os_vif [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:28:0b,bridge_name='br-int',has_traffic_filtering=True,id=9c81d394-98c0-444d-a1f2-9b909c7e2559,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c81d394-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.805 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.805 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c81d394-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.858 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c[324652]: [NOTICE]   (324656) : haproxy version is 2.8.14-c23fe91
Dec 13 08:35:09 compute-0 neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c[324652]: [NOTICE]   (324656) : path to executable is /usr/sbin/haproxy
Dec 13 08:35:09 compute-0 neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c[324652]: [WARNING]  (324656) : Exiting Master process...
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:35:09 compute-0 neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c[324652]: [ALERT]    (324656) : Current worker (324658) exited with code 143 (Terminated)
Dec 13 08:35:09 compute-0 neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c[324652]: [WARNING]  (324656) : All workers exited. Exiting... (0)
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.863 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 systemd[1]: libpod-5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e.scope: Deactivated successfully.
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.865 248514 INFO os_vif [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:28:0b,bridge_name='br-int',has_traffic_filtering=True,id=9c81d394-98c0-444d-a1f2-9b909c7e2559,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c81d394-98')
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.866 248514 DEBUG nova.virt.libvirt.vif [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:34:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-458198005',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-458198005',id=77,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:34:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='082f98d662a4450a9fd767ac13a76391',ramdisk_id='',reservation_id='r-rt0w0l4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-2087650193',owner_user_name='tempest-AttachInterfacesV270Test-2087650193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:34:41Z,user_data=None,user_id='1daf78b2d25748d183901a09e4605044',uuid=68f568e2-917b-4b70-8345-8d04c0f5d1a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca039819-f818-4e22-97a0-a5906ef33fcd", "address": "fa:16:3e:1c:4b:eb", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca039819-f8", "ovs_interfaceid": "ca039819-f818-4e22-97a0-a5906ef33fcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.866 248514 DEBUG nova.network.os_vif_util [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converting VIF {"id": "ca039819-f818-4e22-97a0-a5906ef33fcd", "address": "fa:16:3e:1c:4b:eb", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca039819-f8", "ovs_interfaceid": "ca039819-f818-4e22-97a0-a5906ef33fcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.867 248514 DEBUG nova.network.os_vif_util [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:4b:eb,bridge_name='br-int',has_traffic_filtering=True,id=ca039819-f818-4e22-97a0-a5906ef33fcd,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca039819-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.867 248514 DEBUG os_vif [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:4b:eb,bridge_name='br-int',has_traffic_filtering=True,id=ca039819-f818-4e22-97a0-a5906ef33fcd,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca039819-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.869 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.869 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca039819-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:35:09 compute-0 podman[325987]: 2025-12-13 08:35:09.872152247 +0000 UTC m=+0.074954633 container died 5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.874 248514 INFO os_vif [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:4b:eb,bridge_name='br-int',has_traffic_filtering=True,id=ca039819-f818-4e22-97a0-a5906ef33fcd,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca039819-f8')
Dec 13 08:35:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e-userdata-shm.mount: Deactivated successfully.
Dec 13 08:35:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-26f3be40f551ba5f18c4992eac5c86f2009e1b17fb3be624151e23ab2639672f-merged.mount: Deactivated successfully.
Dec 13 08:35:09 compute-0 podman[325987]: 2025-12-13 08:35:09.906894619 +0000 UTC m=+0.109697005 container cleanup 5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 08:35:09 compute-0 systemd[1]: libpod-conmon-5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e.scope: Deactivated successfully.
Dec 13 08:35:09 compute-0 podman[326033]: 2025-12-13 08:35:09.970869448 +0000 UTC m=+0.044415704 container remove 5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.976 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9db61d3-44b0-4e82-b4ca-28e4b6e9628a]: (4, ('Sat Dec 13 08:35:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c (5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e)\n5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e\nSat Dec 13 08:35:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c (5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e)\n5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.977 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[752e04a2-6dca-4838-9598-5adc52e28e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.978 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfb72597-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:09 compute-0 kernel: tapdfb72597-a0: left promiscuous mode
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 nova_compute[248510]: 2025-12-13 08:35:09.994 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:09.997 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7d00fd-4065-4a67-84c3-1135ad80600f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:10.010 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[564bfc4f-0342-47f4-98e0-614bfa41f67a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:10.012 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b812d2d3-b3ec-4341-8e86-a9df70bdb1c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:10.030 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0d919f-7051-467a-95dc-c5532ed55ed8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745178, 'reachable_time': 25295, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326048, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:10.032 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:35:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:10.032 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd6266f-a9ec-403f-8bf2-7b5e0c79fa2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:10 compute-0 systemd[1]: run-netns-ovnmeta\x2ddfb72597\x2dac7e\x2d412b\x2d834c\x2d574618fe1a4c.mount: Deactivated successfully.
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.157 248514 INFO nova.virt.libvirt.driver [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Deleting instance files /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6_del
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.158 248514 INFO nova.virt.libvirt.driver [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Deletion of /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6_del complete
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.176 248514 DEBUG nova.compute.manager [req-fe85d118-7a37-485e-8b35-c5c9b1cebccc req-bbf986b9-a771-4a63-a87a-b9c0b82304ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-plugged-ca039819-f818-4e22-97a0-a5906ef33fcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.177 248514 DEBUG oslo_concurrency.lockutils [req-fe85d118-7a37-485e-8b35-c5c9b1cebccc req-bbf986b9-a771-4a63-a87a-b9c0b82304ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.177 248514 DEBUG oslo_concurrency.lockutils [req-fe85d118-7a37-485e-8b35-c5c9b1cebccc req-bbf986b9-a771-4a63-a87a-b9c0b82304ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.177 248514 DEBUG oslo_concurrency.lockutils [req-fe85d118-7a37-485e-8b35-c5c9b1cebccc req-bbf986b9-a771-4a63-a87a-b9c0b82304ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.178 248514 DEBUG nova.compute.manager [req-fe85d118-7a37-485e-8b35-c5c9b1cebccc req-bbf986b9-a771-4a63-a87a-b9c0b82304ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] No waiting events found dispatching network-vif-plugged-ca039819-f818-4e22-97a0-a5906ef33fcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.178 248514 WARNING nova.compute.manager [req-fe85d118-7a37-485e-8b35-c5c9b1cebccc req-bbf986b9-a771-4a63-a87a-b9c0b82304ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received unexpected event network-vif-plugged-ca039819-f818-4e22-97a0-a5906ef33fcd for instance with vm_state active and task_state deleting.
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.178 248514 DEBUG nova.compute.manager [req-fe85d118-7a37-485e-8b35-c5c9b1cebccc req-bbf986b9-a771-4a63-a87a-b9c0b82304ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-unplugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.178 248514 DEBUG oslo_concurrency.lockutils [req-fe85d118-7a37-485e-8b35-c5c9b1cebccc req-bbf986b9-a771-4a63-a87a-b9c0b82304ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.178 248514 DEBUG oslo_concurrency.lockutils [req-fe85d118-7a37-485e-8b35-c5c9b1cebccc req-bbf986b9-a771-4a63-a87a-b9c0b82304ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.179 248514 DEBUG oslo_concurrency.lockutils [req-fe85d118-7a37-485e-8b35-c5c9b1cebccc req-bbf986b9-a771-4a63-a87a-b9c0b82304ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.179 248514 DEBUG nova.compute.manager [req-fe85d118-7a37-485e-8b35-c5c9b1cebccc req-bbf986b9-a771-4a63-a87a-b9c0b82304ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] No waiting events found dispatching network-vif-unplugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.179 248514 DEBUG nova.compute.manager [req-fe85d118-7a37-485e-8b35-c5c9b1cebccc req-bbf986b9-a771-4a63-a87a-b9c0b82304ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-unplugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.227 248514 INFO nova.compute.manager [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Took 0.73 seconds to destroy the instance on the hypervisor.
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.228 248514 DEBUG oslo.service.loopingcall [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.228 248514 DEBUG nova.compute.manager [-] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.228 248514 DEBUG nova.network.neutron [-] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.395 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Acquiring lock "800e4f3f-112f-4fce-81aa-66073d601f06" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.396 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "800e4f3f-112f-4fce-81aa-66073d601f06" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.427 248514 DEBUG nova.compute.manager [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.499 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.499 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.506 248514 DEBUG nova.virt.hardware [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.507 248514 INFO nova.compute.claims [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:35:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:35:10 compute-0 nova_compute[248510]: 2025-12-13 08:35:10.657 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:10 compute-0 ceph-mon[76537]: pgmap v2137: 321 pgs: 321 active+clean; 121 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 776 KiB/s wr, 156 op/s
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.150 248514 DEBUG nova.compute.manager [req-aef88878-608f-4174-b8ac-f84718457544 req-beb394ed-39e9-464d-8467-ace9b4268d2f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-deleted-9c81d394-98c0-444d-a1f2-9b909c7e2559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.151 248514 INFO nova.compute.manager [req-aef88878-608f-4174-b8ac-f84718457544 req-beb394ed-39e9-464d-8467-ace9b4268d2f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Neutron deleted interface 9c81d394-98c0-444d-a1f2-9b909c7e2559; detaching it from the instance and deleting it from the info cache
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.151 248514 DEBUG nova.network.neutron [req-aef88878-608f-4174-b8ac-f84718457544 req-beb394ed-39e9-464d-8467-ace9b4268d2f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Updating instance_info_cache with network_info: [{"id": "ca039819-f818-4e22-97a0-a5906ef33fcd", "address": "fa:16:3e:1c:4b:eb", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca039819-f8", "ovs_interfaceid": "ca039819-f818-4e22-97a0-a5906ef33fcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.181 248514 DEBUG nova.compute.manager [req-aef88878-608f-4174-b8ac-f84718457544 req-beb394ed-39e9-464d-8467-ace9b4268d2f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Detach interface failed, port_id=9c81d394-98c0-444d-a1f2-9b909c7e2559, reason: Instance 68f568e2-917b-4b70-8345-8d04c0f5d1a6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 08:35:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:35:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507172380' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.224 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.229 248514 DEBUG nova.compute.provider_tree [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.260 248514 DEBUG nova.scheduler.client.report [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.291 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.292 248514 DEBUG nova.compute.manager [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.351 248514 DEBUG nova.compute.manager [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.351 248514 DEBUG nova.network.neutron [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.384 248514 INFO nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:35:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.410 248514 DEBUG nova.compute.manager [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.522 248514 DEBUG nova.compute.manager [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.523 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.524 248514 INFO nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Creating image(s)
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.543 248514 DEBUG nova.storage.rbd_utils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] rbd image 800e4f3f-112f-4fce-81aa-66073d601f06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.561 248514 DEBUG nova.storage.rbd_utils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] rbd image 800e4f3f-112f-4fce-81aa-66073d601f06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.580 248514 DEBUG nova.storage.rbd_utils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] rbd image 800e4f3f-112f-4fce-81aa-66073d601f06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.583 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.633 248514 DEBUG nova.network.neutron [-] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.643 248514 DEBUG nova.policy [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2c09d963a7e0425c8d8237b3fb1df9af', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6127cd8ed5243eaa8db677cecead62e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:35:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 121 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 67 KiB/s wr, 101 op/s
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.656 248514 INFO nova.compute.manager [-] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Took 1.43 seconds to deallocate network for instance.
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.657 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.663 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.664 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.664 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.686 248514 DEBUG nova.storage.rbd_utils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] rbd image 800e4f3f-112f-4fce-81aa-66073d601f06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.689 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 800e4f3f-112f-4fce-81aa-66073d601f06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:11 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/507172380' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.979 248514 DEBUG oslo_concurrency.lockutils [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:11 compute-0 nova_compute[248510]: 2025-12-13 08:35:11.980 248514 DEBUG oslo_concurrency.lockutils [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.024 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 800e4f3f-112f-4fce-81aa-66073d601f06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.089 248514 DEBUG oslo_concurrency.processutils [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.141 248514 DEBUG nova.storage.rbd_utils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] resizing rbd image 800e4f3f-112f-4fce-81aa-66073d601f06_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.219 248514 DEBUG nova.objects.instance [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lazy-loading 'migration_context' on Instance uuid 800e4f3f-112f-4fce-81aa-66073d601f06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.277 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.278 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Ensure instance console log exists: /var/lib/nova/instances/800e4f3f-112f-4fce-81aa-66073d601f06/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.280 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.282 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.282 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.410 248514 DEBUG nova.compute.manager [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.412 248514 DEBUG oslo_concurrency.lockutils [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.412 248514 DEBUG oslo_concurrency.lockutils [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.413 248514 DEBUG oslo_concurrency.lockutils [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.414 248514 DEBUG nova.compute.manager [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] No waiting events found dispatching network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.414 248514 WARNING nova.compute.manager [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received unexpected event network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 for instance with vm_state deleted and task_state None.
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.415 248514 DEBUG nova.compute.manager [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-unplugged-ca039819-f818-4e22-97a0-a5906ef33fcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.415 248514 DEBUG oslo_concurrency.lockutils [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.416 248514 DEBUG oslo_concurrency.lockutils [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.417 248514 DEBUG oslo_concurrency.lockutils [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.417 248514 DEBUG nova.compute.manager [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] No waiting events found dispatching network-vif-unplugged-ca039819-f818-4e22-97a0-a5906ef33fcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.418 248514 WARNING nova.compute.manager [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received unexpected event network-vif-unplugged-ca039819-f818-4e22-97a0-a5906ef33fcd for instance with vm_state deleted and task_state None.
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.419 248514 DEBUG nova.compute.manager [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-plugged-ca039819-f818-4e22-97a0-a5906ef33fcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.419 248514 DEBUG oslo_concurrency.lockutils [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.420 248514 DEBUG oslo_concurrency.lockutils [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.421 248514 DEBUG oslo_concurrency.lockutils [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.421 248514 DEBUG nova.compute.manager [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] No waiting events found dispatching network-vif-plugged-ca039819-f818-4e22-97a0-a5906ef33fcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.422 248514 WARNING nova.compute.manager [req-2e2b7695-1617-4524-8bff-b8bd8474dca8 req-7e526700-9a09-40b8-9f15-42da2bd4e77e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received unexpected event network-vif-plugged-ca039819-f818-4e22-97a0-a5906ef33fcd for instance with vm_state deleted and task_state None.
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.423 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:35:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1571658823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.728 248514 DEBUG oslo_concurrency.processutils [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.733 248514 DEBUG nova.compute.provider_tree [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.767 248514 DEBUG nova.scheduler.client.report [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.794 248514 DEBUG oslo_concurrency.lockutils [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:12 compute-0 ceph-mon[76537]: pgmap v2138: 321 pgs: 321 active+clean; 121 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 67 KiB/s wr, 101 op/s
Dec 13 08:35:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1571658823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.845 248514 INFO nova.scheduler.client.report [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Deleted allocations for instance 68f568e2-917b-4b70-8345-8d04c0f5d1a6
Dec 13 08:35:12 compute-0 nova_compute[248510]: 2025-12-13 08:35:12.942 248514 DEBUG oslo_concurrency.lockutils [None req-6a1062c7-9658-47f7-8dd2-e31eaad73176 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:13 compute-0 nova_compute[248510]: 2025-12-13 08:35:13.145 248514 DEBUG nova.network.neutron [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Successfully created port: 6b5ad842-5c2b-4f4d-b3b2-397b8926a77d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:35:13 compute-0 nova_compute[248510]: 2025-12-13 08:35:13.151 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614897.636034, 287069fd-8250-4b7f-92c5-74450b2e92ca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:35:13 compute-0 nova_compute[248510]: 2025-12-13 08:35:13.151 248514 INFO nova.compute.manager [-] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] VM Stopped (Lifecycle Event)
Dec 13 08:35:13 compute-0 nova_compute[248510]: 2025-12-13 08:35:13.196 248514 DEBUG nova.compute.manager [None req-228e20be-b3d4-4dcf-8627-dcc36d9499d6 - - - - - -] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:35:13 compute-0 nova_compute[248510]: 2025-12-13 08:35:13.253 248514 DEBUG nova.compute.manager [req-34f771f9-bae4-4d52-9925-b9612f92b94d req-6bb4b92b-2784-49b4-b99e-e8438b6ad606 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-deleted-ca039819-f818-4e22-97a0-a5906ef33fcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 97 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 862 KiB/s wr, 83 op/s
Dec 13 08:35:14 compute-0 sudo[326261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:35:14 compute-0 sudo[326261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:35:14 compute-0 sudo[326261]: pam_unix(sudo:session): session closed for user root
Dec 13 08:35:14 compute-0 sudo[326286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 08:35:14 compute-0 sudo[326286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:35:14 compute-0 ceph-mon[76537]: pgmap v2139: 321 pgs: 321 active+clean; 97 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 862 KiB/s wr, 83 op/s
Dec 13 08:35:14 compute-0 nova_compute[248510]: 2025-12-13 08:35:14.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:15 compute-0 sudo[326286]: pam_unix(sudo:session): session closed for user root
Dec 13 08:35:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:35:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:35:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3969571563' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:35:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:35:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3969571563' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:35:15 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:35:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:35:15 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:35:15 compute-0 sudo[326332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:35:15 compute-0 sudo[326332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:35:15 compute-0 sudo[326332]: pam_unix(sudo:session): session closed for user root
Dec 13 08:35:15 compute-0 sudo[326357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:35:15 compute-0 sudo[326357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:35:15 compute-0 nova_compute[248510]: 2025-12-13 08:35:15.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:15 compute-0 nova_compute[248510]: 2025-12-13 08:35:15.493 248514 DEBUG nova.network.neutron [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Successfully updated port: 6b5ad842-5c2b-4f4d-b3b2-397b8926a77d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:35:15 compute-0 nova_compute[248510]: 2025-12-13 08:35:15.518 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Acquiring lock "refresh_cache-800e4f3f-112f-4fce-81aa-66073d601f06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:35:15 compute-0 nova_compute[248510]: 2025-12-13 08:35:15.519 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Acquired lock "refresh_cache-800e4f3f-112f-4fce-81aa-66073d601f06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:35:15 compute-0 nova_compute[248510]: 2025-12-13 08:35:15.519 248514 DEBUG nova.network.neutron [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:35:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 88 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Dec 13 08:35:15 compute-0 nova_compute[248510]: 2025-12-13 08:35:15.666 248514 DEBUG nova.compute.manager [req-15f0b344-91d9-490d-b493-231c2ecc58be req-5075edc2-63ca-4a54-b035-82dd37649a59 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Received event network-changed-6b5ad842-5c2b-4f4d-b3b2-397b8926a77d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:15 compute-0 nova_compute[248510]: 2025-12-13 08:35:15.666 248514 DEBUG nova.compute.manager [req-15f0b344-91d9-490d-b493-231c2ecc58be req-5075edc2-63ca-4a54-b035-82dd37649a59 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Refreshing instance network info cache due to event network-changed-6b5ad842-5c2b-4f4d-b3b2-397b8926a77d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:35:15 compute-0 nova_compute[248510]: 2025-12-13 08:35:15.667 248514 DEBUG oslo_concurrency.lockutils [req-15f0b344-91d9-490d-b493-231c2ecc58be req-5075edc2-63ca-4a54-b035-82dd37649a59 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-800e4f3f-112f-4fce-81aa-66073d601f06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:35:15 compute-0 sudo[326357]: pam_unix(sudo:session): session closed for user root
Dec 13 08:35:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:35:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:35:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:35:15 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:35:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:35:15 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:35:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:35:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:35:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:35:15 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:35:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:35:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:35:15 compute-0 nova_compute[248510]: 2025-12-13 08:35:15.750 248514 DEBUG nova.network.neutron [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:35:15 compute-0 sudo[326412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:35:15 compute-0 sudo[326412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:35:15 compute-0 sudo[326412]: pam_unix(sudo:session): session closed for user root
Dec 13 08:35:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3969571563' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:35:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3969571563' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:35:15 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:35:15 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:35:15 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:35:15 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:35:15 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:35:15 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:35:15 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:35:15 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:35:15 compute-0 sudo[326437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:35:15 compute-0 sudo[326437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:35:16 compute-0 podman[326473]: 2025-12-13 08:35:16.096813498 +0000 UTC m=+0.039986004 container create 743ca4e357faab08541cadb5dc107f7e5e187d59b8a34d675b547becae90ecba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:35:16 compute-0 systemd[1]: Started libpod-conmon-743ca4e357faab08541cadb5dc107f7e5e187d59b8a34d675b547becae90ecba.scope.
Dec 13 08:35:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:35:16 compute-0 podman[326473]: 2025-12-13 08:35:16.16773767 +0000 UTC m=+0.110910206 container init 743ca4e357faab08541cadb5dc107f7e5e187d59b8a34d675b547becae90ecba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_spence, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:35:16 compute-0 podman[326473]: 2025-12-13 08:35:16.078695278 +0000 UTC m=+0.021867804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:35:16 compute-0 podman[326473]: 2025-12-13 08:35:16.17380407 +0000 UTC m=+0.116976586 container start 743ca4e357faab08541cadb5dc107f7e5e187d59b8a34d675b547becae90ecba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_spence, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 08:35:16 compute-0 podman[326473]: 2025-12-13 08:35:16.176784924 +0000 UTC m=+0.119957430 container attach 743ca4e357faab08541cadb5dc107f7e5e187d59b8a34d675b547becae90ecba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:35:16 compute-0 loving_spence[326489]: 167 167
Dec 13 08:35:16 compute-0 systemd[1]: libpod-743ca4e357faab08541cadb5dc107f7e5e187d59b8a34d675b547becae90ecba.scope: Deactivated successfully.
Dec 13 08:35:16 compute-0 podman[326473]: 2025-12-13 08:35:16.18062443 +0000 UTC m=+0.123796956 container died 743ca4e357faab08541cadb5dc107f7e5e187d59b8a34d675b547becae90ecba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_spence, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 08:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5a8db378e2a68add1cc70966ca91ea2efc3b492ddee52b78917f84c82536ca2-merged.mount: Deactivated successfully.
Dec 13 08:35:16 compute-0 podman[326473]: 2025-12-13 08:35:16.229898893 +0000 UTC m=+0.173071409 container remove 743ca4e357faab08541cadb5dc107f7e5e187d59b8a34d675b547becae90ecba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_spence, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:35:16 compute-0 systemd[1]: libpod-conmon-743ca4e357faab08541cadb5dc107f7e5e187d59b8a34d675b547becae90ecba.scope: Deactivated successfully.
Dec 13 08:35:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:35:16 compute-0 podman[326511]: 2025-12-13 08:35:16.431533071 +0000 UTC m=+0.050515506 container create 51e4ebed9a99880e5041a1de95ed0c932468cd74d9711c2f9947ad0e37c51a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ishizaka, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 08:35:16 compute-0 systemd[1]: Started libpod-conmon-51e4ebed9a99880e5041a1de95ed0c932468cd74d9711c2f9947ad0e37c51a20.scope.
Dec 13 08:35:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:35:16 compute-0 podman[326511]: 2025-12-13 08:35:16.409622157 +0000 UTC m=+0.028604622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2153e5b6001da383e9fb4c44c4e7a9af993bef060023927f82bd22279e358bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2153e5b6001da383e9fb4c44c4e7a9af993bef060023927f82bd22279e358bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2153e5b6001da383e9fb4c44c4e7a9af993bef060023927f82bd22279e358bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2153e5b6001da383e9fb4c44c4e7a9af993bef060023927f82bd22279e358bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2153e5b6001da383e9fb4c44c4e7a9af993bef060023927f82bd22279e358bd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:16 compute-0 podman[326511]: 2025-12-13 08:35:16.524290024 +0000 UTC m=+0.143272489 container init 51e4ebed9a99880e5041a1de95ed0c932468cd74d9711c2f9947ad0e37c51a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ishizaka, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 08:35:16 compute-0 podman[326511]: 2025-12-13 08:35:16.531948204 +0000 UTC m=+0.150930639 container start 51e4ebed9a99880e5041a1de95ed0c932468cd74d9711c2f9947ad0e37c51a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:35:16 compute-0 podman[326511]: 2025-12-13 08:35:16.536932208 +0000 UTC m=+0.155914643 container attach 51e4ebed9a99880e5041a1de95ed0c932468cd74d9711c2f9947ad0e37c51a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:35:16 compute-0 nova_compute[248510]: 2025-12-13 08:35:16.799 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614901.7977467, a393861c-2eea-4309-8914-042ec8bcc873 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:35:16 compute-0 nova_compute[248510]: 2025-12-13 08:35:16.801 248514 INFO nova.compute.manager [-] [instance: a393861c-2eea-4309-8914-042ec8bcc873] VM Stopped (Lifecycle Event)
Dec 13 08:35:16 compute-0 ceph-mon[76537]: pgmap v2140: 321 pgs: 321 active+clean; 88 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Dec 13 08:35:16 compute-0 serene_ishizaka[326527]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:35:16 compute-0 serene_ishizaka[326527]: --> All data devices are unavailable
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.005 248514 DEBUG nova.compute.manager [None req-dca82079-9f51-4066-941e-e9b358c44164 - - - - - -] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:35:17 compute-0 systemd[1]: libpod-51e4ebed9a99880e5041a1de95ed0c932468cd74d9711c2f9947ad0e37c51a20.scope: Deactivated successfully.
Dec 13 08:35:17 compute-0 podman[326511]: 2025-12-13 08:35:17.020749542 +0000 UTC m=+0.639731977 container died 51e4ebed9a99880e5041a1de95ed0c932468cd74d9711c2f9947ad0e37c51a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:35:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2153e5b6001da383e9fb4c44c4e7a9af993bef060023927f82bd22279e358bd-merged.mount: Deactivated successfully.
Dec 13 08:35:17 compute-0 podman[326511]: 2025-12-13 08:35:17.063844793 +0000 UTC m=+0.682827228 container remove 51e4ebed9a99880e5041a1de95ed0c932468cd74d9711c2f9947ad0e37c51a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 08:35:17 compute-0 systemd[1]: libpod-conmon-51e4ebed9a99880e5041a1de95ed0c932468cd74d9711c2f9947ad0e37c51a20.scope: Deactivated successfully.
Dec 13 08:35:17 compute-0 sudo[326437]: pam_unix(sudo:session): session closed for user root
Dec 13 08:35:17 compute-0 sudo[326557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:35:17 compute-0 sudo[326557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:35:17 compute-0 sudo[326557]: pam_unix(sudo:session): session closed for user root
Dec 13 08:35:17 compute-0 sudo[326582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:35:17 compute-0 sudo[326582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:35:17 compute-0 podman[326619]: 2025-12-13 08:35:17.511483241 +0000 UTC m=+0.041171784 container create 4bcb93dcf1504e2595772795f11aa5d689eb471b29c451a7a0b28ad9de22f46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.540 248514 DEBUG nova.network.neutron [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Updating instance_info_cache with network_info: [{"id": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "address": "fa:16:3e:35:29:00", "network": {"id": "0e44998e-0f16-46a9-a93d-1246b939f9b9", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1355695316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6127cd8ed5243eaa8db677cecead62e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5ad842-5c", "ovs_interfaceid": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:35:17 compute-0 systemd[1]: Started libpod-conmon-4bcb93dcf1504e2595772795f11aa5d689eb471b29c451a7a0b28ad9de22f46c.scope.
Dec 13 08:35:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.575 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Releasing lock "refresh_cache-800e4f3f-112f-4fce-81aa-66073d601f06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.576 248514 DEBUG nova.compute.manager [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Instance network_info: |[{"id": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "address": "fa:16:3e:35:29:00", "network": {"id": "0e44998e-0f16-46a9-a93d-1246b939f9b9", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1355695316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6127cd8ed5243eaa8db677cecead62e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5ad842-5c", "ovs_interfaceid": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.576 248514 DEBUG oslo_concurrency.lockutils [req-15f0b344-91d9-490d-b493-231c2ecc58be req-5075edc2-63ca-4a54-b035-82dd37649a59 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-800e4f3f-112f-4fce-81aa-66073d601f06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.576 248514 DEBUG nova.network.neutron [req-15f0b344-91d9-490d-b493-231c2ecc58be req-5075edc2-63ca-4a54-b035-82dd37649a59 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Refreshing network info cache for port 6b5ad842-5c2b-4f4d-b3b2-397b8926a77d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.579 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Start _get_guest_xml network_info=[{"id": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "address": "fa:16:3e:35:29:00", "network": {"id": "0e44998e-0f16-46a9-a93d-1246b939f9b9", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1355695316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6127cd8ed5243eaa8db677cecead62e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5ad842-5c", "ovs_interfaceid": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.584 248514 WARNING nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:35:17 compute-0 podman[326619]: 2025-12-13 08:35:17.587987581 +0000 UTC m=+0.117676134 container init 4bcb93dcf1504e2595772795f11aa5d689eb471b29c451a7a0b28ad9de22f46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Dec 13 08:35:17 compute-0 podman[326619]: 2025-12-13 08:35:17.494915309 +0000 UTC m=+0.024603882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.592 248514 DEBUG nova.virt.libvirt.host [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.594 248514 DEBUG nova.virt.libvirt.host [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:35:17 compute-0 podman[326619]: 2025-12-13 08:35:17.59519283 +0000 UTC m=+0.124881373 container start 4bcb93dcf1504e2595772795f11aa5d689eb471b29c451a7a0b28ad9de22f46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.599 248514 DEBUG nova.virt.libvirt.host [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:35:17 compute-0 podman[326619]: 2025-12-13 08:35:17.599811124 +0000 UTC m=+0.129499667 container attach 4bcb93dcf1504e2595772795f11aa5d689eb471b29c451a7a0b28ad9de22f46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.599 248514 DEBUG nova.virt.libvirt.host [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.600 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:35:17 compute-0 systemd[1]: libpod-4bcb93dcf1504e2595772795f11aa5d689eb471b29c451a7a0b28ad9de22f46c.scope: Deactivated successfully.
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.600 248514 DEBUG nova.virt.hardware [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.601 248514 DEBUG nova.virt.hardware [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:35:17 compute-0 silly_shirley[326634]: 167 167
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.601 248514 DEBUG nova.virt.hardware [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:35:17 compute-0 conmon[326634]: conmon 4bcb93dcf1504e259577 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bcb93dcf1504e2595772795f11aa5d689eb471b29c451a7a0b28ad9de22f46c.scope/container/memory.events
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.601 248514 DEBUG nova.virt.hardware [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.601 248514 DEBUG nova.virt.hardware [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.602 248514 DEBUG nova.virt.hardware [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.602 248514 DEBUG nova.virt.hardware [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.602 248514 DEBUG nova.virt.hardware [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:35:17 compute-0 podman[326619]: 2025-12-13 08:35:17.602628284 +0000 UTC m=+0.132316857 container died 4bcb93dcf1504e2595772795f11aa5d689eb471b29c451a7a0b28ad9de22f46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.602 248514 DEBUG nova.virt.hardware [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.602 248514 DEBUG nova.virt.hardware [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.603 248514 DEBUG nova.virt.hardware [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:35:17 compute-0 nova_compute[248510]: 2025-12-13 08:35:17.605 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c02a9bc686a00c71cca26826e52fd6dcb13b770285f8c3efa962af24b8f2b288-merged.mount: Deactivated successfully.
Dec 13 08:35:17 compute-0 podman[326619]: 2025-12-13 08:35:17.640132306 +0000 UTC m=+0.169820849 container remove 4bcb93dcf1504e2595772795f11aa5d689eb471b29c451a7a0b28ad9de22f46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shirley, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:35:17 compute-0 systemd[1]: libpod-conmon-4bcb93dcf1504e2595772795f11aa5d689eb471b29c451a7a0b28ad9de22f46c.scope: Deactivated successfully.
Dec 13 08:35:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 88 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec 13 08:35:17 compute-0 podman[326677]: 2025-12-13 08:35:17.812872826 +0000 UTC m=+0.040508337 container create ab834360f83be6ff476805bf2087c2f37c3c472c281862fd9d4e905a83cc495b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_volhard, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 08:35:17 compute-0 systemd[1]: Started libpod-conmon-ab834360f83be6ff476805bf2087c2f37c3c472c281862fd9d4e905a83cc495b.scope.
Dec 13 08:35:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acc8d515bb9a6d206477e42fed3a397a4b1bad2b0420ef402473067fe74dbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acc8d515bb9a6d206477e42fed3a397a4b1bad2b0420ef402473067fe74dbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acc8d515bb9a6d206477e42fed3a397a4b1bad2b0420ef402473067fe74dbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acc8d515bb9a6d206477e42fed3a397a4b1bad2b0420ef402473067fe74dbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:17 compute-0 podman[326677]: 2025-12-13 08:35:17.794614793 +0000 UTC m=+0.022250334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:35:17 compute-0 podman[326677]: 2025-12-13 08:35:17.894226626 +0000 UTC m=+0.121862167 container init ab834360f83be6ff476805bf2087c2f37c3c472c281862fd9d4e905a83cc495b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 08:35:17 compute-0 podman[326677]: 2025-12-13 08:35:17.90322005 +0000 UTC m=+0.130855561 container start ab834360f83be6ff476805bf2087c2f37c3c472c281862fd9d4e905a83cc495b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_volhard, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 08:35:17 compute-0 podman[326677]: 2025-12-13 08:35:17.906233675 +0000 UTC m=+0.133869216 container attach ab834360f83be6ff476805bf2087c2f37c3c472c281862fd9d4e905a83cc495b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_volhard, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:35:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]: {
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:     "0": [
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:         {
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "devices": [
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "/dev/loop3"
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             ],
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_name": "ceph_lv0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_size": "21470642176",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "name": "ceph_lv0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "tags": {
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.cluster_name": "ceph",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.crush_device_class": "",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.encrypted": "0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.objectstore": "bluestore",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.osd_id": "0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.type": "block",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.vdo": "0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.with_tpm": "0"
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             },
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "type": "block",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "vg_name": "ceph_vg0"
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:         }
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:     ],
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:     "1": [
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:         {
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "devices": [
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "/dev/loop4"
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             ],
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_name": "ceph_lv1",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_size": "21470642176",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "name": "ceph_lv1",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "tags": {
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.cluster_name": "ceph",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.crush_device_class": "",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.encrypted": "0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.objectstore": "bluestore",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.osd_id": "1",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.type": "block",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.vdo": "0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.with_tpm": "0"
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             },
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "type": "block",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "vg_name": "ceph_vg1"
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:         }
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:     ],
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:     "2": [
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:         {
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "devices": [
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "/dev/loop5"
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             ],
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_name": "ceph_lv2",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_size": "21470642176",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "name": "ceph_lv2",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "tags": {
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.cluster_name": "ceph",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.crush_device_class": "",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.encrypted": "0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.objectstore": "bluestore",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.osd_id": "2",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.type": "block",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.vdo": "0",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:                 "ceph.with_tpm": "0"
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             },
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "type": "block",
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:             "vg_name": "ceph_vg2"
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:         }
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]:     ]
Dec 13 08:35:18 compute-0 affectionate_volhard[326694]: }
Dec 13 08:35:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/711169507' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.201 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:18 compute-0 systemd[1]: libpod-ab834360f83be6ff476805bf2087c2f37c3c472c281862fd9d4e905a83cc495b.scope: Deactivated successfully.
Dec 13 08:35:18 compute-0 podman[326677]: 2025-12-13 08:35:18.210180233 +0000 UTC m=+0.437815744 container died ab834360f83be6ff476805bf2087c2f37c3c472c281862fd9d4e905a83cc495b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_volhard, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.232 248514 DEBUG nova.storage.rbd_utils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] rbd image 800e4f3f-112f-4fce-81aa-66073d601f06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.237 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-63acc8d515bb9a6d206477e42fed3a397a4b1bad2b0420ef402473067fe74dbd-merged.mount: Deactivated successfully.
Dec 13 08:35:18 compute-0 podman[326677]: 2025-12-13 08:35:18.257496598 +0000 UTC m=+0.485132109 container remove ab834360f83be6ff476805bf2087c2f37c3c472c281862fd9d4e905a83cc495b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_volhard, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 08:35:18 compute-0 systemd[1]: libpod-conmon-ab834360f83be6ff476805bf2087c2f37c3c472c281862fd9d4e905a83cc495b.scope: Deactivated successfully.
Dec 13 08:35:18 compute-0 sudo[326582]: pam_unix(sudo:session): session closed for user root
Dec 13 08:35:18 compute-0 sudo[326737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:35:18 compute-0 sudo[326737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:35:18 compute-0 sudo[326737]: pam_unix(sudo:session): session closed for user root
Dec 13 08:35:18 compute-0 sudo[326781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:35:18 compute-0 sudo[326781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:35:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:35:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168575633' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:35:18 compute-0 podman[326818]: 2025-12-13 08:35:18.696926931 +0000 UTC m=+0.021629619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.799 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.801 248514 DEBUG nova.virt.libvirt.vif [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:35:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-629210456',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-629210456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-629210456',id=80,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6127cd8ed5243eaa8db677cecead62e',ramdisk_id='',reservation_id='r-wwrj3373',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1828069126
',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1828069126-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:35:11Z,user_data=None,user_id='2c09d963a7e0425c8d8237b3fb1df9af',uuid=800e4f3f-112f-4fce-81aa-66073d601f06,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "address": "fa:16:3e:35:29:00", "network": {"id": "0e44998e-0f16-46a9-a93d-1246b939f9b9", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1355695316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6127cd8ed5243eaa8db677cecead62e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5ad842-5c", "ovs_interfaceid": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.802 248514 DEBUG nova.network.os_vif_util [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Converting VIF {"id": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "address": "fa:16:3e:35:29:00", "network": {"id": "0e44998e-0f16-46a9-a93d-1246b939f9b9", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1355695316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6127cd8ed5243eaa8db677cecead62e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5ad842-5c", "ovs_interfaceid": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.803 248514 DEBUG nova.network.os_vif_util [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:29:00,bridge_name='br-int',has_traffic_filtering=True,id=6b5ad842-5c2b-4f4d-b3b2-397b8926a77d,network=Network(0e44998e-0f16-46a9-a93d-1246b939f9b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5ad842-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.804 248514 DEBUG nova.objects.instance [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lazy-loading 'pci_devices' on Instance uuid 800e4f3f-112f-4fce-81aa-66073d601f06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:35:18 compute-0 podman[326818]: 2025-12-13 08:35:18.825551475 +0000 UTC m=+0.150254143 container create 092551e144c87aec0c89ae24a5619d910a61d307a1c4c9730bf663811bfd3089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kapitsa, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 08:35:18 compute-0 ceph-mon[76537]: pgmap v2141: 321 pgs: 321 active+clean; 88 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec 13 08:35:18 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/711169507' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:35:18 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3168575633' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:35:18 compute-0 systemd[1]: Started libpod-conmon-092551e144c87aec0c89ae24a5619d910a61d307a1c4c9730bf663811bfd3089.scope.
Dec 13 08:35:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.988 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:35:18 compute-0 nova_compute[248510]:   <uuid>800e4f3f-112f-4fce-81aa-66073d601f06</uuid>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   <name>instance-00000050</name>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-629210456</nova:name>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:35:17</nova:creationTime>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:35:18 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:35:18 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:35:18 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:35:18 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:35:18 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:35:18 compute-0 nova_compute[248510]:         <nova:user uuid="2c09d963a7e0425c8d8237b3fb1df9af">tempest-ServersNegativeTestMultiTenantJSON-1828069126-project-member</nova:user>
Dec 13 08:35:18 compute-0 nova_compute[248510]:         <nova:project uuid="a6127cd8ed5243eaa8db677cecead62e">tempest-ServersNegativeTestMultiTenantJSON-1828069126</nova:project>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:35:18 compute-0 nova_compute[248510]:         <nova:port uuid="6b5ad842-5c2b-4f4d-b3b2-397b8926a77d">
Dec 13 08:35:18 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <system>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <entry name="serial">800e4f3f-112f-4fce-81aa-66073d601f06</entry>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <entry name="uuid">800e4f3f-112f-4fce-81aa-66073d601f06</entry>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     </system>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   <os>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   </os>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   <features>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   </features>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/800e4f3f-112f-4fce-81aa-66073d601f06_disk">
Dec 13 08:35:18 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       </source>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:35:18 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/800e4f3f-112f-4fce-81aa-66073d601f06_disk.config">
Dec 13 08:35:18 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       </source>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:35:18 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:35:29:00"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <target dev="tap6b5ad842-5c"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/800e4f3f-112f-4fce-81aa-66073d601f06/console.log" append="off"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <video>
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     </video>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:35:18 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:35:18 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:35:18 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:35:18 compute-0 nova_compute[248510]: </domain>
Dec 13 08:35:18 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.988 248514 DEBUG nova.compute.manager [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Preparing to wait for external event network-vif-plugged-6b5ad842-5c2b-4f4d-b3b2-397b8926a77d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.989 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Acquiring lock "800e4f3f-112f-4fce-81aa-66073d601f06-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.989 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "800e4f3f-112f-4fce-81aa-66073d601f06-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.989 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "800e4f3f-112f-4fce-81aa-66073d601f06-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.990 248514 DEBUG nova.virt.libvirt.vif [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:35:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-629210456',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-629210456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-629210456',id=80,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6127cd8ed5243eaa8db677cecead62e',ramdisk_id='',reservation_id='r-wwrj3373',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1828069126',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1828069126-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:35:11Z,user_data=None,user_id='2c09d963a7e0425c8d8237b3fb1df9af',uuid=800e4f3f-112f-4fce-81aa-66073d601f06,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "address": "fa:16:3e:35:29:00", "network": {"id": "0e44998e-0f16-46a9-a93d-1246b939f9b9", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1355695316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6127cd8ed5243eaa8db677cecead62e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5ad842-5c", "ovs_interfaceid": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.990 248514 DEBUG nova.network.os_vif_util [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Converting VIF {"id": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "address": "fa:16:3e:35:29:00", "network": {"id": "0e44998e-0f16-46a9-a93d-1246b939f9b9", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1355695316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6127cd8ed5243eaa8db677cecead62e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5ad842-5c", "ovs_interfaceid": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.991 248514 DEBUG nova.network.os_vif_util [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:29:00,bridge_name='br-int',has_traffic_filtering=True,id=6b5ad842-5c2b-4f4d-b3b2-397b8926a77d,network=Network(0e44998e-0f16-46a9-a93d-1246b939f9b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5ad842-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.991 248514 DEBUG os_vif [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:29:00,bridge_name='br-int',has_traffic_filtering=True,id=6b5ad842-5c2b-4f4d-b3b2-397b8926a77d,network=Network(0e44998e-0f16-46a9-a93d-1246b939f9b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5ad842-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.992 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.992 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.993 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:18 compute-0 nova_compute[248510]: 2025-12-13 08:35:18.998 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b5ad842-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:19 compute-0 nova_compute[248510]: 2025-12-13 08:35:19.000 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b5ad842-5c, col_values=(('external_ids', {'iface-id': '6b5ad842-5c2b-4f4d-b3b2-397b8926a77d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:29:00', 'vm-uuid': '800e4f3f-112f-4fce-81aa-66073d601f06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:19 compute-0 NetworkManager[50376]: <info>  [1765614919.0039] manager: (tap6b5ad842-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Dec 13 08:35:19 compute-0 nova_compute[248510]: 2025-12-13 08:35:19.002 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:19 compute-0 nova_compute[248510]: 2025-12-13 08:35:19.008 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:35:19 compute-0 nova_compute[248510]: 2025-12-13 08:35:19.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:19 compute-0 nova_compute[248510]: 2025-12-13 08:35:19.014 248514 INFO os_vif [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:29:00,bridge_name='br-int',has_traffic_filtering=True,id=6b5ad842-5c2b-4f4d-b3b2-397b8926a77d,network=Network(0e44998e-0f16-46a9-a93d-1246b939f9b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5ad842-5c')
Dec 13 08:35:19 compute-0 podman[326818]: 2025-12-13 08:35:19.025628983 +0000 UTC m=+0.350331691 container init 092551e144c87aec0c89ae24a5619d910a61d307a1c4c9730bf663811bfd3089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 08:35:19 compute-0 podman[326818]: 2025-12-13 08:35:19.040334879 +0000 UTC m=+0.365037577 container start 092551e144c87aec0c89ae24a5619d910a61d307a1c4c9730bf663811bfd3089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 08:35:19 compute-0 elegant_kapitsa[326837]: 167 167
Dec 13 08:35:19 compute-0 systemd[1]: libpod-092551e144c87aec0c89ae24a5619d910a61d307a1c4c9730bf663811bfd3089.scope: Deactivated successfully.
Dec 13 08:35:19 compute-0 podman[326818]: 2025-12-13 08:35:19.109708131 +0000 UTC m=+0.434410799 container attach 092551e144c87aec0c89ae24a5619d910a61d307a1c4c9730bf663811bfd3089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kapitsa, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:35:19 compute-0 podman[326818]: 2025-12-13 08:35:19.111166368 +0000 UTC m=+0.435869046 container died 092551e144c87aec0c89ae24a5619d910a61d307a1c4c9730bf663811bfd3089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kapitsa, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 08:35:19 compute-0 nova_compute[248510]: 2025-12-13 08:35:19.216 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:35:19 compute-0 nova_compute[248510]: 2025-12-13 08:35:19.218 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:35:19 compute-0 nova_compute[248510]: 2025-12-13 08:35:19.218 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] No VIF found with MAC fa:16:3e:35:29:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:35:19 compute-0 nova_compute[248510]: 2025-12-13 08:35:19.219 248514 INFO nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Using config drive
Dec 13 08:35:19 compute-0 nova_compute[248510]: 2025-12-13 08:35:19.247 248514 DEBUG nova.storage.rbd_utils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] rbd image 800e4f3f-112f-4fce-81aa-66073d601f06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-eaa98536e0dc419d128cb6e02eca2747ea3cb1ac71d642b3e2642b5499f718a4-merged.mount: Deactivated successfully.
Dec 13 08:35:19 compute-0 podman[326818]: 2025-12-13 08:35:19.428858177 +0000 UTC m=+0.753560845 container remove 092551e144c87aec0c89ae24a5619d910a61d307a1c4c9730bf663811bfd3089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 08:35:19 compute-0 systemd[1]: libpod-conmon-092551e144c87aec0c89ae24a5619d910a61d307a1c4c9730bf663811bfd3089.scope: Deactivated successfully.
Dec 13 08:35:19 compute-0 podman[326880]: 2025-12-13 08:35:19.608614851 +0000 UTC m=+0.046884085 container create afda9acf41d02a90d2e5681837a3e3209b02eea4ab61dc58965cee48db089d8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ishizaka, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:35:19 compute-0 systemd[1]: Started libpod-conmon-afda9acf41d02a90d2e5681837a3e3209b02eea4ab61dc58965cee48db089d8a.scope.
Dec 13 08:35:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 88 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec 13 08:35:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35bb95893e18f5095affc3a282c8bd0e1c7891dd55e9bc17243854cd848c27a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:19 compute-0 podman[326880]: 2025-12-13 08:35:19.585931688 +0000 UTC m=+0.024200942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35bb95893e18f5095affc3a282c8bd0e1c7891dd55e9bc17243854cd848c27a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35bb95893e18f5095affc3a282c8bd0e1c7891dd55e9bc17243854cd848c27a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35bb95893e18f5095affc3a282c8bd0e1c7891dd55e9bc17243854cd848c27a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:19 compute-0 podman[326880]: 2025-12-13 08:35:19.692843583 +0000 UTC m=+0.131112837 container init afda9acf41d02a90d2e5681837a3e3209b02eea4ab61dc58965cee48db089d8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ishizaka, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:35:19 compute-0 podman[326880]: 2025-12-13 08:35:19.701351924 +0000 UTC m=+0.139621158 container start afda9acf41d02a90d2e5681837a3e3209b02eea4ab61dc58965cee48db089d8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:35:19 compute-0 podman[326880]: 2025-12-13 08:35:19.705664841 +0000 UTC m=+0.143934285 container attach afda9acf41d02a90d2e5681837a3e3209b02eea4ab61dc58965cee48db089d8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.277 248514 DEBUG nova.network.neutron [req-15f0b344-91d9-490d-b493-231c2ecc58be req-5075edc2-63ca-4a54-b035-82dd37649a59 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Updated VIF entry in instance network info cache for port 6b5ad842-5c2b-4f4d-b3b2-397b8926a77d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.278 248514 DEBUG nova.network.neutron [req-15f0b344-91d9-490d-b493-231c2ecc58be req-5075edc2-63ca-4a54-b035-82dd37649a59 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Updating instance_info_cache with network_info: [{"id": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "address": "fa:16:3e:35:29:00", "network": {"id": "0e44998e-0f16-46a9-a93d-1246b939f9b9", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1355695316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6127cd8ed5243eaa8db677cecead62e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5ad842-5c", "ovs_interfaceid": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.304 248514 DEBUG oslo_concurrency.lockutils [req-15f0b344-91d9-490d-b493-231c2ecc58be req-5075edc2-63ca-4a54-b035-82dd37649a59 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-800e4f3f-112f-4fce-81aa-66073d601f06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:35:20 compute-0 lvm[326975]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:35:20 compute-0 lvm[326976]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:35:20 compute-0 lvm[326976]: VG ceph_vg1 finished
Dec 13 08:35:20 compute-0 lvm[326975]: VG ceph_vg0 finished
Dec 13 08:35:20 compute-0 lvm[326978]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:35:20 compute-0 lvm[326978]: VG ceph_vg2 finished
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.412 248514 INFO nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Creating config drive at /var/lib/nova/instances/800e4f3f-112f-4fce-81aa-66073d601f06/disk.config
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.419 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/800e4f3f-112f-4fce-81aa-66073d601f06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9b4040z6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:20 compute-0 angry_ishizaka[326897]: {}
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:20 compute-0 systemd[1]: libpod-afda9acf41d02a90d2e5681837a3e3209b02eea4ab61dc58965cee48db089d8a.scope: Deactivated successfully.
Dec 13 08:35:20 compute-0 systemd[1]: libpod-afda9acf41d02a90d2e5681837a3e3209b02eea4ab61dc58965cee48db089d8a.scope: Consumed 1.302s CPU time.
Dec 13 08:35:20 compute-0 podman[326880]: 2025-12-13 08:35:20.509289967 +0000 UTC m=+0.947559201 container died afda9acf41d02a90d2e5681837a3e3209b02eea4ab61dc58965cee48db089d8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ishizaka, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:35:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-35bb95893e18f5095affc3a282c8bd0e1c7891dd55e9bc17243854cd848c27a1-merged.mount: Deactivated successfully.
Dec 13 08:35:20 compute-0 podman[326880]: 2025-12-13 08:35:20.556066209 +0000 UTC m=+0.994335443 container remove afda9acf41d02a90d2e5681837a3e3209b02eea4ab61dc58965cee48db089d8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.562 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/800e4f3f-112f-4fce-81aa-66073d601f06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9b4040z6" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:20 compute-0 systemd[1]: libpod-conmon-afda9acf41d02a90d2e5681837a3e3209b02eea4ab61dc58965cee48db089d8a.scope: Deactivated successfully.
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.590 248514 DEBUG nova.storage.rbd_utils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] rbd image 800e4f3f-112f-4fce-81aa-66073d601f06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.594 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/800e4f3f-112f-4fce-81aa-66073d601f06/disk.config 800e4f3f-112f-4fce-81aa-66073d601f06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:20 compute-0 sudo[326781]: pam_unix(sudo:session): session closed for user root
Dec 13 08:35:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:35:20 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:35:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:35:20 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:35:20 compute-0 sudo[327015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:35:20 compute-0 sudo[327015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:35:20 compute-0 sudo[327015]: pam_unix(sudo:session): session closed for user root
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.724 248514 DEBUG oslo_concurrency.processutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/800e4f3f-112f-4fce-81aa-66073d601f06/disk.config 800e4f3f-112f-4fce-81aa-66073d601f06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.726 248514 INFO nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Deleting local config drive /var/lib/nova/instances/800e4f3f-112f-4fce-81aa-66073d601f06/disk.config because it was imported into RBD.
Dec 13 08:35:20 compute-0 kernel: tap6b5ad842-5c: entered promiscuous mode
Dec 13 08:35:20 compute-0 NetworkManager[50376]: <info>  [1765614920.7870] manager: (tap6b5ad842-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/341)
Dec 13 08:35:20 compute-0 systemd-udevd[326977]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:35:20 compute-0 ovn_controller[148476]: 2025-12-13T08:35:20Z|00795|binding|INFO|Claiming lport 6b5ad842-5c2b-4f4d-b3b2-397b8926a77d for this chassis.
Dec 13 08:35:20 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 08:35:20 compute-0 ovn_controller[148476]: 2025-12-13T08:35:20Z|00796|binding|INFO|6b5ad842-5c2b-4f4d-b3b2-397b8926a77d: Claiming fa:16:3e:35:29:00 10.100.0.13
Dec 13 08:35:20 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.787 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.790 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.795 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:20 compute-0 NetworkManager[50376]: <info>  [1765614920.8019] device (tap6b5ad842-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:35:20 compute-0 NetworkManager[50376]: <info>  [1765614920.8028] device (tap6b5ad842-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:35:20 compute-0 systemd-machined[210538]: New machine qemu-99-instance-00000050.
Dec 13 08:35:20 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-00000050.
Dec 13 08:35:20 compute-0 ovn_controller[148476]: 2025-12-13T08:35:20Z|00797|binding|INFO|Setting lport 6b5ad842-5c2b-4f4d-b3b2-397b8926a77d ovn-installed in OVS
Dec 13 08:35:20 compute-0 nova_compute[248510]: 2025-12-13 08:35:20.882 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:20 compute-0 ceph-mon[76537]: pgmap v2142: 321 pgs: 321 active+clean; 88 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec 13 08:35:20 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:35:20 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:35:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:20.949 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:29:00 10.100.0.13'], port_security=['fa:16:3e:35:29:00 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '800e4f3f-112f-4fce-81aa-66073d601f06', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e44998e-0f16-46a9-a93d-1246b939f9b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6127cd8ed5243eaa8db677cecead62e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e9d5e37-b731-40d9-8201-1d4a96e4d6e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc203850-1b5a-4b40-b376-1eb552185ca4, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b5ad842-5c2b-4f4d-b3b2-397b8926a77d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:35:20 compute-0 ovn_controller[148476]: 2025-12-13T08:35:20Z|00798|binding|INFO|Setting lport 6b5ad842-5c2b-4f4d-b3b2-397b8926a77d up in Southbound
Dec 13 08:35:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:20.951 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5ad842-5c2b-4f4d-b3b2-397b8926a77d in datapath 0e44998e-0f16-46a9-a93d-1246b939f9b9 bound to our chassis
Dec 13 08:35:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:20.952 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0e44998e-0f16-46a9-a93d-1246b939f9b9
Dec 13 08:35:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:20.964 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b93c9266-4f09-4ce7-b0c2-b098bd0af967]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:20.965 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0e44998e-01 in ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:35:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:20.967 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0e44998e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:35:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:20.968 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1134d428-15cb-46a6-92ac-ab68f3947942]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:20.969 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0a7a7de0-206a-4d5c-9d4f-bfc976db1c0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035465723867255544 of space, bias 1.0, pg target 0.10639717160176664 quantized to 32 (current 32)
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667280056921076 of space, bias 1.0, pg target 0.20018401707632277 quantized to 32 (current 32)
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.981500954369343e-07 of space, bias 4.0, pg target 0.0007177801145243211 quantized to 16 (current 32)
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:35:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:35:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:20.984 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b47faf-aecc-43dc-8377-1ae3014eeefe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.013 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0be2db-0d6f-46e7-9813-d0bb141e138f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.047 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ea772c-13e1-45f5-a72a-8c43468dad48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:21 compute-0 NetworkManager[50376]: <info>  [1765614921.0622] manager: (tap0e44998e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/342)
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.061 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef5be6c-57ce-442e-aa52-e9eb97419708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.109 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec9084a-7cc9-4fa0-ac6d-1353f08d37da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.112 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[455dd7e8-74cc-4e7e-bfe7-3fb6d2d61cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:21 compute-0 NetworkManager[50376]: <info>  [1765614921.1468] device (tap0e44998e-00): carrier: link connected
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.156 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb545e6-1bc1-4134-90e0-7e7d308da5bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.178 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e04a41-b673-421b-a2f3-35e02a2d724c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e44998e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:0b:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749836, 'reachable_time': 39900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327129, 'error': None, 'target': 'ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.201 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35cee079-0eec-49cb-b119-81381ec2f176]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:b29'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749836, 'tstamp': 749836}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327141, 'error': None, 'target': 'ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.219 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5fc54e1-4cf9-467d-85fd-fa102ebc0775]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e44998e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:0b:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749836, 'reachable_time': 39900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327146, 'error': None, 'target': 'ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.255 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42793212-729e-4e7a-9474-00118dabcb73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:21 compute-0 nova_compute[248510]: 2025-12-13 08:35:21.292 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614921.291386, 800e4f3f-112f-4fce-81aa-66073d601f06 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:35:21 compute-0 nova_compute[248510]: 2025-12-13 08:35:21.292 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] VM Started (Lifecycle Event)
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.315 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[00f89d2e-4161-45fc-9333-3d304226e5d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.317 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e44998e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.318 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:35:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:21.318 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e44998e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:21 compute-0 nova_compute[248510]: 2025-12-13 08:35:21.320 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:21 compute-0 NetworkManager[50376]: <info>  [1765614921.3207] manager: (tap0e44998e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Dec 13 08:35:21 compute-0 kernel: tap0e44998e-00: entered promiscuous mode
Dec 13 08:35:21 compute-0 nova_compute[248510]: 2025-12-13 08:35:21.324 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:35:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:35:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 88 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.060 248514 DEBUG nova.compute.manager [req-262d3e97-dcda-44a2-9489-caa36350e254 req-5b1432bb-62f4-404c-b5ef-50517fee7d11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Received event network-vif-plugged-6b5ad842-5c2b-4f4d-b3b2-397b8926a77d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.061 248514 DEBUG oslo_concurrency.lockutils [req-262d3e97-dcda-44a2-9489-caa36350e254 req-5b1432bb-62f4-404c-b5ef-50517fee7d11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "800e4f3f-112f-4fce-81aa-66073d601f06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.061 248514 DEBUG oslo_concurrency.lockutils [req-262d3e97-dcda-44a2-9489-caa36350e254 req-5b1432bb-62f4-404c-b5ef-50517fee7d11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "800e4f3f-112f-4fce-81aa-66073d601f06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.062 248514 DEBUG oslo_concurrency.lockutils [req-262d3e97-dcda-44a2-9489-caa36350e254 req-5b1432bb-62f4-404c-b5ef-50517fee7d11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "800e4f3f-112f-4fce-81aa-66073d601f06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.062 248514 DEBUG nova.compute.manager [req-262d3e97-dcda-44a2-9489-caa36350e254 req-5b1432bb-62f4-404c-b5ef-50517fee7d11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Processing event network-vif-plugged-6b5ad842-5c2b-4f4d-b3b2-397b8926a77d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:22.063 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0e44998e-00, col_values=(('external_ids', {'iface-id': '1b008dfd-1b79-4b71-bccf-31d49ac426a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:22 compute-0 ovn_controller[148476]: 2025-12-13T08:35:22Z|00799|binding|INFO|Releasing lport 1b008dfd-1b79-4b71-bccf-31d49ac426a3 from this chassis (sb_readonly=0)
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.066 248514 DEBUG nova.compute.manager [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.067 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614921.2927432, 800e4f3f-112f-4fce-81aa-66073d601f06 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.068 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] VM Paused (Lifecycle Event)
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.071 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.075 248514 INFO nova.virt.libvirt.driver [-] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Instance spawned successfully.
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.075 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:22.083 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0e44998e-0f16-46a9-a93d-1246b939f9b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0e44998e-0f16-46a9-a93d-1246b939f9b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:22.083 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b80cd1d1-49ca-4463-ba04-f586592bfe13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:22.084 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-0e44998e-0f16-46a9-a93d-1246b939f9b9
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/0e44998e-0f16-46a9-a93d-1246b939f9b9.pid.haproxy
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 0e44998e-0f16-46a9-a93d-1246b939f9b9
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:35:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:22.085 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9', 'env', 'PROCESS_TAG=haproxy-0e44998e-0f16-46a9-a93d-1246b939f9b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0e44998e-0f16-46a9-a93d-1246b939f9b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.106 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.108 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.109 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.109 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.110 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.110 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.111 248514 DEBUG nova.virt.libvirt.driver [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:35:22 compute-0 ceph-mon[76537]: pgmap v2143: 321 pgs: 321 active+clean; 88 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.117 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614922.069393, 800e4f3f-112f-4fce-81aa-66073d601f06 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.117 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] VM Resumed (Lifecycle Event)
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.160 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.166 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.195 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.214 248514 INFO nova.compute.manager [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Took 10.69 seconds to spawn the instance on the hypervisor.
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.215 248514 DEBUG nova.compute.manager [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.343 248514 INFO nova.compute.manager [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Took 11.86 seconds to build instance.
Dec 13 08:35:22 compute-0 nova_compute[248510]: 2025-12-13 08:35:22.376 248514 DEBUG oslo_concurrency.lockutils [None req-1ea4deee-784d-4609-8596-fcb3a4708266 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "800e4f3f-112f-4fce-81aa-66073d601f06" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:22 compute-0 podman[327181]: 2025-12-13 08:35:22.518248308 +0000 UTC m=+0.064969495 container create 7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:35:22 compute-0 systemd[1]: Started libpod-conmon-7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a.scope.
Dec 13 08:35:22 compute-0 podman[327181]: 2025-12-13 08:35:22.481823893 +0000 UTC m=+0.028545100 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:35:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c449895f1f662a1df822e5aaf7c52ec2d39ab6bce912ebba998fe7e650419ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:35:22 compute-0 podman[327181]: 2025-12-13 08:35:22.607574026 +0000 UTC m=+0.154295243 container init 7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 08:35:22 compute-0 podman[327181]: 2025-12-13 08:35:22.618112608 +0000 UTC m=+0.164833795 container start 7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 08:35:22 compute-0 neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9[327198]: [NOTICE]   (327202) : New worker (327204) forked
Dec 13 08:35:22 compute-0 neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9[327198]: [NOTICE]   (327202) : Loading success.
Dec 13 08:35:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 88 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Dec 13 08:35:24 compute-0 nova_compute[248510]: 2025-12-13 08:35:24.002 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:24 compute-0 nova_compute[248510]: 2025-12-13 08:35:24.290 248514 DEBUG nova.compute.manager [req-dd641ea6-074f-42cd-88e8-27222c0e575c req-14491af7-13ca-4fcb-aeac-3599e0277b2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Received event network-vif-plugged-6b5ad842-5c2b-4f4d-b3b2-397b8926a77d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:24 compute-0 nova_compute[248510]: 2025-12-13 08:35:24.291 248514 DEBUG oslo_concurrency.lockutils [req-dd641ea6-074f-42cd-88e8-27222c0e575c req-14491af7-13ca-4fcb-aeac-3599e0277b2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "800e4f3f-112f-4fce-81aa-66073d601f06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:24 compute-0 nova_compute[248510]: 2025-12-13 08:35:24.292 248514 DEBUG oslo_concurrency.lockutils [req-dd641ea6-074f-42cd-88e8-27222c0e575c req-14491af7-13ca-4fcb-aeac-3599e0277b2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "800e4f3f-112f-4fce-81aa-66073d601f06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:24 compute-0 nova_compute[248510]: 2025-12-13 08:35:24.293 248514 DEBUG oslo_concurrency.lockutils [req-dd641ea6-074f-42cd-88e8-27222c0e575c req-14491af7-13ca-4fcb-aeac-3599e0277b2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "800e4f3f-112f-4fce-81aa-66073d601f06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:24 compute-0 nova_compute[248510]: 2025-12-13 08:35:24.294 248514 DEBUG nova.compute.manager [req-dd641ea6-074f-42cd-88e8-27222c0e575c req-14491af7-13ca-4fcb-aeac-3599e0277b2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] No waiting events found dispatching network-vif-plugged-6b5ad842-5c2b-4f4d-b3b2-397b8926a77d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:35:24 compute-0 nova_compute[248510]: 2025-12-13 08:35:24.294 248514 WARNING nova.compute.manager [req-dd641ea6-074f-42cd-88e8-27222c0e575c req-14491af7-13ca-4fcb-aeac-3599e0277b2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Received unexpected event network-vif-plugged-6b5ad842-5c2b-4f4d-b3b2-397b8926a77d for instance with vm_state active and task_state None.
Dec 13 08:35:24 compute-0 nova_compute[248510]: 2025-12-13 08:35:24.743 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614909.7427633, 68f568e2-917b-4b70-8345-8d04c0f5d1a6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:35:24 compute-0 nova_compute[248510]: 2025-12-13 08:35:24.744 248514 INFO nova.compute.manager [-] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] VM Stopped (Lifecycle Event)
Dec 13 08:35:24 compute-0 nova_compute[248510]: 2025-12-13 08:35:24.790 248514 DEBUG nova.compute.manager [None req-02d9ca05-190d-486e-a5b1-ea30dc4c6b90 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:35:24 compute-0 ceph-mon[76537]: pgmap v2144: 321 pgs: 321 active+clean; 88 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Dec 13 08:35:25 compute-0 nova_compute[248510]: 2025-12-13 08:35:25.524 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 980 KiB/s wr, 79 op/s
Dec 13 08:35:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:35:26 compute-0 ceph-mon[76537]: pgmap v2145: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 980 KiB/s wr, 79 op/s
Dec 13 08:35:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 47 op/s
Dec 13 08:35:27 compute-0 podman[327215]: 2025-12-13 08:35:27.969750098 +0000 UTC m=+0.050602547 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 08:35:28 compute-0 podman[327213]: 2025-12-13 08:35:28.023877493 +0000 UTC m=+0.107686436 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:35:28 compute-0 podman[327214]: 2025-12-13 08:35:28.029323758 +0000 UTC m=+0.113589892 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:35:28 compute-0 nova_compute[248510]: 2025-12-13 08:35:28.455 248514 DEBUG oslo_concurrency.lockutils [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Acquiring lock "800e4f3f-112f-4fce-81aa-66073d601f06" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:28 compute-0 nova_compute[248510]: 2025-12-13 08:35:28.455 248514 DEBUG oslo_concurrency.lockutils [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "800e4f3f-112f-4fce-81aa-66073d601f06" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:28 compute-0 nova_compute[248510]: 2025-12-13 08:35:28.456 248514 DEBUG oslo_concurrency.lockutils [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Acquiring lock "800e4f3f-112f-4fce-81aa-66073d601f06-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:28 compute-0 nova_compute[248510]: 2025-12-13 08:35:28.456 248514 DEBUG oslo_concurrency.lockutils [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "800e4f3f-112f-4fce-81aa-66073d601f06-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:28 compute-0 nova_compute[248510]: 2025-12-13 08:35:28.456 248514 DEBUG oslo_concurrency.lockutils [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "800e4f3f-112f-4fce-81aa-66073d601f06-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:28 compute-0 nova_compute[248510]: 2025-12-13 08:35:28.457 248514 INFO nova.compute.manager [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Terminating instance
Dec 13 08:35:28 compute-0 nova_compute[248510]: 2025-12-13 08:35:28.458 248514 DEBUG nova.compute.manager [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:35:28 compute-0 kernel: tap6b5ad842-5c (unregistering): left promiscuous mode
Dec 13 08:35:28 compute-0 NetworkManager[50376]: <info>  [1765614928.4936] device (tap6b5ad842-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:35:28 compute-0 ovn_controller[148476]: 2025-12-13T08:35:28Z|00800|binding|INFO|Releasing lport 6b5ad842-5c2b-4f4d-b3b2-397b8926a77d from this chassis (sb_readonly=0)
Dec 13 08:35:28 compute-0 nova_compute[248510]: 2025-12-13 08:35:28.500 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:28 compute-0 ovn_controller[148476]: 2025-12-13T08:35:28Z|00801|binding|INFO|Setting lport 6b5ad842-5c2b-4f4d-b3b2-397b8926a77d down in Southbound
Dec 13 08:35:28 compute-0 ovn_controller[148476]: 2025-12-13T08:35:28Z|00802|binding|INFO|Removing iface tap6b5ad842-5c ovn-installed in OVS
Dec 13 08:35:28 compute-0 nova_compute[248510]: 2025-12-13 08:35:28.502 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:28 compute-0 nova_compute[248510]: 2025-12-13 08:35:28.518 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:28 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000050.scope: Deactivated successfully.
Dec 13 08:35:28 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000050.scope: Consumed 6.989s CPU time.
Dec 13 08:35:28 compute-0 systemd-machined[210538]: Machine qemu-99-instance-00000050 terminated.
Dec 13 08:35:28 compute-0 nova_compute[248510]: 2025-12-13 08:35:28.698 248514 INFO nova.virt.libvirt.driver [-] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Instance destroyed successfully.
Dec 13 08:35:28 compute-0 nova_compute[248510]: 2025-12-13 08:35:28.700 248514 DEBUG nova.objects.instance [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lazy-loading 'resources' on Instance uuid 800e4f3f-112f-4fce-81aa-66073d601f06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:35:28 compute-0 ceph-mon[76537]: pgmap v2146: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 47 op/s
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.149 248514 DEBUG nova.virt.libvirt.vif [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:35:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-629210456',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-629210456',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-629210456',id=80,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:35:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6127cd8ed5243eaa8db677cecead62e',ramdisk_id='',reservation_id='r-wwrj3373',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1828069126',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1828069126-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:35:22Z,user_data=None,user_id='2c09d963a7e0425c8d8237b3fb1df9af',uuid=800e4f3f-112f-4fce-81aa-66073d601f06,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "address": "fa:16:3e:35:29:00", "network": {"id": "0e44998e-0f16-46a9-a93d-1246b939f9b9", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1355695316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6127cd8ed5243eaa8db677cecead62e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5ad842-5c", "ovs_interfaceid": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.149 248514 DEBUG nova.network.os_vif_util [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Converting VIF {"id": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "address": "fa:16:3e:35:29:00", "network": {"id": "0e44998e-0f16-46a9-a93d-1246b939f9b9", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1355695316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6127cd8ed5243eaa8db677cecead62e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5ad842-5c", "ovs_interfaceid": "6b5ad842-5c2b-4f4d-b3b2-397b8926a77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.150 248514 DEBUG nova.network.os_vif_util [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:29:00,bridge_name='br-int',has_traffic_filtering=True,id=6b5ad842-5c2b-4f4d-b3b2-397b8926a77d,network=Network(0e44998e-0f16-46a9-a93d-1246b939f9b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5ad842-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.151 248514 DEBUG os_vif [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:29:00,bridge_name='br-int',has_traffic_filtering=True,id=6b5ad842-5c2b-4f4d-b3b2-397b8926a77d,network=Network(0e44998e-0f16-46a9-a93d-1246b939f9b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5ad842-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.151 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:29:00 10.100.0.13'], port_security=['fa:16:3e:35:29:00 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '800e4f3f-112f-4fce-81aa-66073d601f06', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e44998e-0f16-46a9-a93d-1246b939f9b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6127cd8ed5243eaa8db677cecead62e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e9d5e37-b731-40d9-8201-1d4a96e4d6e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc203850-1b5a-4b40-b376-1eb552185ca4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b5ad842-5c2b-4f4d-b3b2-397b8926a77d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.153 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5ad842-5c2b-4f4d-b3b2-397b8926a77d in datapath 0e44998e-0f16-46a9-a93d-1246b939f9b9 unbound from our chassis
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.154 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e44998e-0f16-46a9-a93d-1246b939f9b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.153 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b5ad842-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.155 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.155 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d253a731-e385-41ca-94f3-2202a7ba4e80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.156 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9 namespace which is not needed anymore
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.156 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.158 248514 INFO os_vif [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:29:00,bridge_name='br-int',has_traffic_filtering=True,id=6b5ad842-5c2b-4f4d-b3b2-397b8926a77d,network=Network(0e44998e-0f16-46a9-a93d-1246b939f9b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5ad842-5c')
Dec 13 08:35:29 compute-0 neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9[327198]: [NOTICE]   (327202) : haproxy version is 2.8.14-c23fe91
Dec 13 08:35:29 compute-0 neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9[327198]: [NOTICE]   (327202) : path to executable is /usr/sbin/haproxy
Dec 13 08:35:29 compute-0 neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9[327198]: [ALERT]    (327202) : Current worker (327204) exited with code 143 (Terminated)
Dec 13 08:35:29 compute-0 neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9[327198]: [WARNING]  (327202) : All workers exited. Exiting... (0)
Dec 13 08:35:29 compute-0 systemd[1]: libpod-7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a.scope: Deactivated successfully.
Dec 13 08:35:29 compute-0 conmon[327198]: conmon 7cf859bd6b9d1870fb48 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a.scope/container/memory.events
Dec 13 08:35:29 compute-0 podman[327327]: 2025-12-13 08:35:29.328183184 +0000 UTC m=+0.057886869 container died 7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a-userdata-shm.mount: Deactivated successfully.
Dec 13 08:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c449895f1f662a1df822e5aaf7c52ec2d39ab6bce912ebba998fe7e650419ad-merged.mount: Deactivated successfully.
Dec 13 08:35:29 compute-0 podman[327327]: 2025-12-13 08:35:29.368366702 +0000 UTC m=+0.098070367 container cleanup 7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 08:35:29 compute-0 systemd[1]: libpod-conmon-7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a.scope: Deactivated successfully.
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.418 248514 INFO nova.virt.libvirt.driver [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Deleting instance files /var/lib/nova/instances/800e4f3f-112f-4fce-81aa-66073d601f06_del
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.419 248514 INFO nova.virt.libvirt.driver [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Deletion of /var/lib/nova/instances/800e4f3f-112f-4fce-81aa-66073d601f06_del complete
Dec 13 08:35:29 compute-0 podman[327358]: 2025-12-13 08:35:29.431055498 +0000 UTC m=+0.040557878 container remove 7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.436 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ef528d-5389-4c17-8265-1d27b20711ad]: (4, ('Sat Dec 13 08:35:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9 (7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a)\n7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a\nSat Dec 13 08:35:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9 (7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a)\n7cf859bd6b9d1870fb4884b49ffa84cf25e1d6f5cebd713fd6cd58f7f96a4b5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.438 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf7f020-6adc-4286-9f62-b778bfc342d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.439 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e44998e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:29 compute-0 kernel: tap0e44998e-00: left promiscuous mode
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.455 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.457 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46ba917b-40de-4d3e-a070-d6b14d538286]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.474 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7feb1b-07f4-4472-a59e-be85514eaae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.475 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f54f7717-6325-4f1e-b50d-f2e9e0a11424]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.490 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2d3589-8e4c-4ce0-9523-35f9dcfbcafc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749825, 'reachable_time': 20727, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327375, 'error': None, 'target': 'ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d0e44998e\x2d0f16\x2d46a9\x2da93d\x2d1246b939f9b9.mount: Deactivated successfully.
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.493 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0e44998e-0f16-46a9-a93d-1246b939f9b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:35:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:29.493 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[026f6511-2982-462a-86d5-daf7dd3bfe6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:35:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.905 248514 INFO nova.compute.manager [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Took 1.45 seconds to destroy the instance on the hypervisor.
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.905 248514 DEBUG oslo.service.loopingcall [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.906 248514 DEBUG nova.compute.manager [-] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:35:29 compute-0 nova_compute[248510]: 2025-12-13 08:35:29.906 248514 DEBUG nova.network.neutron [-] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:35:30 compute-0 nova_compute[248510]: 2025-12-13 08:35:30.527 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:30 compute-0 ceph-mon[76537]: pgmap v2147: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:35:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:35:31 compute-0 nova_compute[248510]: 2025-12-13 08:35:31.583 248514 DEBUG nova.network.neutron [-] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:35:31 compute-0 nova_compute[248510]: 2025-12-13 08:35:31.617 248514 INFO nova.compute.manager [-] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Took 1.71 seconds to deallocate network for instance.
Dec 13 08:35:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 78 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Dec 13 08:35:31 compute-0 nova_compute[248510]: 2025-12-13 08:35:31.680 248514 DEBUG oslo_concurrency.lockutils [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:31 compute-0 nova_compute[248510]: 2025-12-13 08:35:31.680 248514 DEBUG oslo_concurrency.lockutils [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:31 compute-0 nova_compute[248510]: 2025-12-13 08:35:31.720 248514 DEBUG nova.compute.manager [req-aa5c1567-bf7b-40c3-8fa4-adce4d16d006 req-03f811c6-2c01-4d86-b13d-d53f49ae9afe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Received event network-vif-deleted-6b5ad842-5c2b-4f4d-b3b2-397b8926a77d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:31 compute-0 nova_compute[248510]: 2025-12-13 08:35:31.780 248514 DEBUG oslo_concurrency.processutils [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:35:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2032025799' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:32 compute-0 nova_compute[248510]: 2025-12-13 08:35:32.346 248514 DEBUG oslo_concurrency.processutils [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:32 compute-0 nova_compute[248510]: 2025-12-13 08:35:32.353 248514 DEBUG nova.compute.provider_tree [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:35:32 compute-0 nova_compute[248510]: 2025-12-13 08:35:32.383 248514 DEBUG nova.scheduler.client.report [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:35:32 compute-0 nova_compute[248510]: 2025-12-13 08:35:32.444 248514 DEBUG oslo_concurrency.lockutils [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:32 compute-0 nova_compute[248510]: 2025-12-13 08:35:32.477 248514 INFO nova.scheduler.client.report [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Deleted allocations for instance 800e4f3f-112f-4fce-81aa-66073d601f06
Dec 13 08:35:32 compute-0 nova_compute[248510]: 2025-12-13 08:35:32.578 248514 DEBUG oslo_concurrency.lockutils [None req-1d8a950e-d11f-4e37-ac2b-eac3e0038b8e 2c09d963a7e0425c8d8237b3fb1df9af a6127cd8ed5243eaa8db677cecead62e - - default default] Lock "800e4f3f-112f-4fce-81aa-66073d601f06" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:32 compute-0 ceph-mon[76537]: pgmap v2148: 321 pgs: 321 active+clean; 78 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Dec 13 08:35:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2032025799' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 60 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 91 op/s
Dec 13 08:35:34 compute-0 nova_compute[248510]: 2025-12-13 08:35:34.156 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:34 compute-0 ceph-mon[76537]: pgmap v2149: 321 pgs: 321 active+clean; 60 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 91 op/s
Dec 13 08:35:35 compute-0 nova_compute[248510]: 2025-12-13 08:35:35.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 92 op/s
Dec 13 08:35:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:35:37 compute-0 ceph-mon[76537]: pgmap v2150: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 92 op/s
Dec 13 08:35:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 838 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Dec 13 08:35:38 compute-0 ceph-mon[76537]: pgmap v2151: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 838 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Dec 13 08:35:39 compute-0 nova_compute[248510]: 2025-12-13 08:35:39.159 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 838 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Dec 13 08:35:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:35:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:35:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:35:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:35:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:35:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:35:40 compute-0 nova_compute[248510]: 2025-12-13 08:35:40.529 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:40 compute-0 ceph-mon[76537]: pgmap v2152: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 838 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Dec 13 08:35:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:40.838 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:35:40 compute-0 nova_compute[248510]: 2025-12-13 08:35:40.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:40.840 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:35:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:35:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 13 08:35:42 compute-0 ceph-mon[76537]: pgmap v2153: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 13 08:35:43 compute-0 nova_compute[248510]: 2025-12-13 08:35:43.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 13 op/s
Dec 13 08:35:43 compute-0 nova_compute[248510]: 2025-12-13 08:35:43.698 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614928.695976, 800e4f3f-112f-4fce-81aa-66073d601f06 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:35:43 compute-0 nova_compute[248510]: 2025-12-13 08:35:43.698 248514 INFO nova.compute.manager [-] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] VM Stopped (Lifecycle Event)
Dec 13 08:35:43 compute-0 nova_compute[248510]: 2025-12-13 08:35:43.730 248514 DEBUG nova.compute.manager [None req-d02e948d-d67b-479e-9148-f647d38c28d7 - - - - - -] [instance: 800e4f3f-112f-4fce-81aa-66073d601f06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:35:43 compute-0 nova_compute[248510]: 2025-12-13 08:35:43.936 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:44 compute-0 nova_compute[248510]: 2025-12-13 08:35:44.161 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:44 compute-0 ceph-mon[76537]: pgmap v2154: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 13 op/s
Dec 13 08:35:45 compute-0 nova_compute[248510]: 2025-12-13 08:35:45.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 8 op/s
Dec 13 08:35:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:35:46 compute-0 ceph-mon[76537]: pgmap v2155: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 8 op/s
Dec 13 08:35:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:46.843 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:46 compute-0 sshd-session[327398]: Invalid user validator from 193.32.162.146 port 51378
Dec 13 08:35:46 compute-0 nova_compute[248510]: 2025-12-13 08:35:46.879 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Acquiring lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:46 compute-0 nova_compute[248510]: 2025-12-13 08:35:46.880 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:46 compute-0 nova_compute[248510]: 2025-12-13 08:35:46.906 248514 DEBUG nova.compute.manager [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:35:46 compute-0 sshd-session[327398]: Connection closed by invalid user validator 193.32.162.146 port 51378 [preauth]
Dec 13 08:35:47 compute-0 nova_compute[248510]: 2025-12-13 08:35:47.085 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:47 compute-0 nova_compute[248510]: 2025-12-13 08:35:47.085 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:47 compute-0 nova_compute[248510]: 2025-12-13 08:35:47.097 248514 DEBUG nova.virt.hardware [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:35:47 compute-0 nova_compute[248510]: 2025-12-13 08:35:47.098 248514 INFO nova.compute.claims [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:35:47 compute-0 nova_compute[248510]: 2025-12-13 08:35:47.326 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:35:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:35:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3787582868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:47 compute-0 nova_compute[248510]: 2025-12-13 08:35:47.898 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:47 compute-0 nova_compute[248510]: 2025-12-13 08:35:47.905 248514 DEBUG nova.compute.provider_tree [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:35:48 compute-0 nova_compute[248510]: 2025-12-13 08:35:48.384 248514 DEBUG nova.scheduler.client.report [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:35:48 compute-0 nova_compute[248510]: 2025-12-13 08:35:48.417 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:48 compute-0 nova_compute[248510]: 2025-12-13 08:35:48.631 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Acquiring lock "d6bbe489-e49f-452b-8d44-b58e9d7b331c" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:48 compute-0 nova_compute[248510]: 2025-12-13 08:35:48.631 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "d6bbe489-e49f-452b-8d44-b58e9d7b331c" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:48 compute-0 nova_compute[248510]: 2025-12-13 08:35:48.647 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "d6bbe489-e49f-452b-8d44-b58e9d7b331c" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:48 compute-0 nova_compute[248510]: 2025-12-13 08:35:48.649 248514 DEBUG nova.compute.manager [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:35:48 compute-0 nova_compute[248510]: 2025-12-13 08:35:48.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:48 compute-0 nova_compute[248510]: 2025-12-13 08:35:48.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:48 compute-0 nova_compute[248510]: 2025-12-13 08:35:48.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:35:48 compute-0 nova_compute[248510]: 2025-12-13 08:35:48.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:35:48 compute-0 ceph-mon[76537]: pgmap v2156: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:35:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3787582868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.053 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.053 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.063 248514 DEBUG nova.compute.manager [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.064 248514 DEBUG nova.network.neutron [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.140 248514 INFO nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.200 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.218 248514 DEBUG nova.compute.manager [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.544 248514 DEBUG nova.compute.manager [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.545 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.546 248514 INFO nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Creating image(s)
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.572 248514 DEBUG nova.storage.rbd_utils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] rbd image 5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.598 248514 DEBUG nova.storage.rbd_utils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] rbd image 5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.624 248514 DEBUG nova.storage.rbd_utils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] rbd image 5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.630 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.706 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.707 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.708 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.708 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.732 248514 DEBUG nova.storage.rbd_utils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] rbd image 5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:35:49 compute-0 nova_compute[248510]: 2025-12-13 08:35:49.737 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:50 compute-0 nova_compute[248510]: 2025-12-13 08:35:50.017 248514 DEBUG nova.policy [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '999a6abe49d04d79bc7b54dcddf62b9f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c59f538c3ca749a394de7ee2b55565db', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:35:50 compute-0 nova_compute[248510]: 2025-12-13 08:35:50.045 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:50 compute-0 nova_compute[248510]: 2025-12-13 08:35:50.102 248514 DEBUG nova.storage.rbd_utils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] resizing rbd image 5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:35:50 compute-0 nova_compute[248510]: 2025-12-13 08:35:50.189 248514 DEBUG nova.objects.instance [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lazy-loading 'migration_context' on Instance uuid 5c021a3b-a49a-4c58-bd59-50f128f3e70e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:35:50 compute-0 nova_compute[248510]: 2025-12-13 08:35:50.326 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:35:50 compute-0 nova_compute[248510]: 2025-12-13 08:35:50.327 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Ensure instance console log exists: /var/lib/nova/instances/5c021a3b-a49a-4c58-bd59-50f128f3e70e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:35:50 compute-0 nova_compute[248510]: 2025-12-13 08:35:50.328 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:50 compute-0 nova_compute[248510]: 2025-12-13 08:35:50.329 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:50 compute-0 nova_compute[248510]: 2025-12-13 08:35:50.329 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:50 compute-0 nova_compute[248510]: 2025-12-13 08:35:50.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:50 compute-0 ceph-mon[76537]: pgmap v2157: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:35:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:35:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:35:51 compute-0 nova_compute[248510]: 2025-12-13 08:35:51.994 248514 DEBUG nova.network.neutron [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Successfully created port: 33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:35:53 compute-0 ceph-mon[76537]: pgmap v2158: 321 pgs: 321 active+clean; 41 MiB data, 586 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.349 248514 DEBUG nova.network.neutron [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Successfully updated port: 33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.376 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Acquiring lock "refresh_cache-5c021a3b-a49a-4c58-bd59-50f128f3e70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.377 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Acquired lock "refresh_cache-5c021a3b-a49a-4c58-bd59-50f128f3e70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.377 248514 DEBUG nova.network.neutron [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.592 248514 DEBUG nova.compute.manager [req-ba901780-9728-480b-a003-03493e26fa54 req-eb1f7dcb-bf0b-45c4-b425-6922149ae455 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Received event network-changed-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.593 248514 DEBUG nova.compute.manager [req-ba901780-9728-480b-a003-03493e26fa54 req-eb1f7dcb-bf0b-45c4-b425-6922149ae455 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Refreshing instance network info cache due to event network-changed-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.593 248514 DEBUG oslo_concurrency.lockutils [req-ba901780-9728-480b-a003-03493e26fa54 req-eb1f7dcb-bf0b-45c4-b425-6922149ae455 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5c021a3b-a49a-4c58-bd59-50f128f3e70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:35:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 63 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 1.1 MiB/s wr, 2 op/s
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.697 248514 DEBUG nova.network.neutron [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.805 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.805 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:35:53 compute-0 nova_compute[248510]: 2025-12-13 08:35:53.805 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:54 compute-0 ceph-mon[76537]: pgmap v2159: 321 pgs: 321 active+clean; 63 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 1.1 MiB/s wr, 2 op/s
Dec 13 08:35:54 compute-0 nova_compute[248510]: 2025-12-13 08:35:54.203 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:35:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2820120708' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:54 compute-0 nova_compute[248510]: 2025-12-13 08:35:54.347 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:54 compute-0 nova_compute[248510]: 2025-12-13 08:35:54.493 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:35:54 compute-0 nova_compute[248510]: 2025-12-13 08:35:54.494 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3983MB free_disk=59.974865693598986GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:35:54 compute-0 nova_compute[248510]: 2025-12-13 08:35:54.494 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:54 compute-0 nova_compute[248510]: 2025-12-13 08:35:54.494 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:54 compute-0 nova_compute[248510]: 2025-12-13 08:35:54.569 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 5c021a3b-a49a-4c58-bd59-50f128f3e70e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:35:54 compute-0 nova_compute[248510]: 2025-12-13 08:35:54.570 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:35:54 compute-0 nova_compute[248510]: 2025-12-13 08:35:54.570 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:35:54 compute-0 nova_compute[248510]: 2025-12-13 08:35:54.612 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:35:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3996778717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2820120708' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:55 compute-0 nova_compute[248510]: 2025-12-13 08:35:55.210 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:55 compute-0 nova_compute[248510]: 2025-12-13 08:35:55.217 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:35:55 compute-0 nova_compute[248510]: 2025-12-13 08:35:55.246 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:35:55 compute-0 nova_compute[248510]: 2025-12-13 08:35:55.278 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:35:55 compute-0 nova_compute[248510]: 2025-12-13 08:35:55.279 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:55.416 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:55.417 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:35:55.417 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:55 compute-0 nova_compute[248510]: 2025-12-13 08:35:55.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:35:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3996778717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:35:56 compute-0 ceph-mon[76537]: pgmap v2160: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:35:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.280 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.280 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.280 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:35:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.827 248514 DEBUG nova.network.neutron [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Updating instance_info_cache with network_info: [{"id": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "address": "fa:16:3e:df:15:ba", "network": {"id": "c2492855-666e-4f18-b2f9-6400370f5a3d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1169155584-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59f538c3ca749a394de7ee2b55565db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33dfe5a6-d4", "ovs_interfaceid": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.861 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Releasing lock "refresh_cache-5c021a3b-a49a-4c58-bd59-50f128f3e70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.862 248514 DEBUG nova.compute.manager [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Instance network_info: |[{"id": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "address": "fa:16:3e:df:15:ba", "network": {"id": "c2492855-666e-4f18-b2f9-6400370f5a3d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1169155584-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59f538c3ca749a394de7ee2b55565db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33dfe5a6-d4", "ovs_interfaceid": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.862 248514 DEBUG oslo_concurrency.lockutils [req-ba901780-9728-480b-a003-03493e26fa54 req-eb1f7dcb-bf0b-45c4-b425-6922149ae455 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5c021a3b-a49a-4c58-bd59-50f128f3e70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.862 248514 DEBUG nova.network.neutron [req-ba901780-9728-480b-a003-03493e26fa54 req-eb1f7dcb-bf0b-45c4-b425-6922149ae455 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Refreshing network info cache for port 33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.865 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Start _get_guest_xml network_info=[{"id": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "address": "fa:16:3e:df:15:ba", "network": {"id": "c2492855-666e-4f18-b2f9-6400370f5a3d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1169155584-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59f538c3ca749a394de7ee2b55565db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33dfe5a6-d4", "ovs_interfaceid": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.870 248514 WARNING nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.875 248514 DEBUG nova.virt.libvirt.host [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.876 248514 DEBUG nova.virt.libvirt.host [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.887 248514 DEBUG nova.virt.libvirt.host [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.888 248514 DEBUG nova.virt.libvirt.host [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.888 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.889 248514 DEBUG nova.virt.hardware [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.889 248514 DEBUG nova.virt.hardware [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.889 248514 DEBUG nova.virt.hardware [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.889 248514 DEBUG nova.virt.hardware [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.890 248514 DEBUG nova.virt.hardware [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.890 248514 DEBUG nova.virt.hardware [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.890 248514 DEBUG nova.virt.hardware [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.890 248514 DEBUG nova.virt.hardware [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.890 248514 DEBUG nova.virt.hardware [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.891 248514 DEBUG nova.virt.hardware [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.891 248514 DEBUG nova.virt.hardware [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:35:57 compute-0 nova_compute[248510]: 2025-12-13 08:35:57.894 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:35:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/79563945' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:35:58 compute-0 nova_compute[248510]: 2025-12-13 08:35:58.462 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:58 compute-0 nova_compute[248510]: 2025-12-13 08:35:58.486 248514 DEBUG nova.storage.rbd_utils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] rbd image 5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:35:58 compute-0 nova_compute[248510]: 2025-12-13 08:35:58.490 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:35:58 compute-0 nova_compute[248510]: 2025-12-13 08:35:58.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:35:58 compute-0 podman[327695]: 2025-12-13 08:35:58.971310301 +0000 UTC m=+0.054977876 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 08:35:58 compute-0 podman[327694]: 2025-12-13 08:35:58.971602799 +0000 UTC m=+0.062148665 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 08:35:58 compute-0 ceph-mon[76537]: pgmap v2161: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:35:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/79563945' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:35:58 compute-0 podman[327693]: 2025-12-13 08:35:58.998551998 +0000 UTC m=+0.090280853 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 13 08:35:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:35:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945844995' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.174 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.684s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.176 248514 DEBUG nova.virt.libvirt.vif [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:35:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-103756279',display_name='tempest-ServerGroupTestJSON-server-103756279',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-103756279',id=81,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c59f538c3ca749a394de7ee2b55565db',ramdisk_id='',reservation_id='r-7a87438b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1272063144',owner_user_name='tempest-ServerGroupTestJSON-1272063144-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:35:49Z,user_data=None,user_id='999a6abe49d04d79bc7b54dcddf62b9f',uuid=5c021a3b-a49a-4c58-bd59-50f128f3e70e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "address": "fa:16:3e:df:15:ba", "network": {"id": "c2492855-666e-4f18-b2f9-6400370f5a3d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1169155584-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59f538c3ca749a394de7ee2b55565db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33dfe5a6-d4", "ovs_interfaceid": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.177 248514 DEBUG nova.network.os_vif_util [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Converting VIF {"id": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "address": "fa:16:3e:df:15:ba", "network": {"id": "c2492855-666e-4f18-b2f9-6400370f5a3d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1169155584-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59f538c3ca749a394de7ee2b55565db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33dfe5a6-d4", "ovs_interfaceid": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.178 248514 DEBUG nova.network.os_vif_util [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:15:ba,bridge_name='br-int',has_traffic_filtering=True,id=33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb,network=Network(c2492855-666e-4f18-b2f9-6400370f5a3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33dfe5a6-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.179 248514 DEBUG nova.objects.instance [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c021a3b-a49a-4c58-bd59-50f128f3e70e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.290 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:35:59 compute-0 nova_compute[248510]:   <uuid>5c021a3b-a49a-4c58-bd59-50f128f3e70e</uuid>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   <name>instance-00000051</name>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerGroupTestJSON-server-103756279</nova:name>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:35:57</nova:creationTime>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:35:59 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:35:59 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:35:59 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:35:59 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:35:59 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:35:59 compute-0 nova_compute[248510]:         <nova:user uuid="999a6abe49d04d79bc7b54dcddf62b9f">tempest-ServerGroupTestJSON-1272063144-project-member</nova:user>
Dec 13 08:35:59 compute-0 nova_compute[248510]:         <nova:project uuid="c59f538c3ca749a394de7ee2b55565db">tempest-ServerGroupTestJSON-1272063144</nova:project>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:35:59 compute-0 nova_compute[248510]:         <nova:port uuid="33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb">
Dec 13 08:35:59 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <system>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <entry name="serial">5c021a3b-a49a-4c58-bd59-50f128f3e70e</entry>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <entry name="uuid">5c021a3b-a49a-4c58-bd59-50f128f3e70e</entry>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     </system>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   <os>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   </os>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   <features>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   </features>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk">
Dec 13 08:35:59 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       </source>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:35:59 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk.config">
Dec 13 08:35:59 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       </source>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:35:59 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:df:15:ba"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <target dev="tap33dfe5a6-d4"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/5c021a3b-a49a-4c58-bd59-50f128f3e70e/console.log" append="off"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <video>
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     </video>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:35:59 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:35:59 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:35:59 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:35:59 compute-0 nova_compute[248510]: </domain>
Dec 13 08:35:59 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.292 248514 DEBUG nova.compute.manager [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Preparing to wait for external event network-vif-plugged-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.293 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Acquiring lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.293 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.294 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.295 248514 DEBUG nova.virt.libvirt.vif [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:35:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-103756279',display_name='tempest-ServerGroupTestJSON-server-103756279',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-103756279',id=81,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c59f538c3ca749a394de7ee2b55565db',ramdisk_id='',reservation_id='r-7a87438b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1272063144',owner_user_name='tempest-ServerGroupTestJSON-1272063144-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:35:49Z,user_data=None,user_id='999a6abe49d04d79bc7b54dcddf62b9f',uuid=5c021a3b-a49a-4c58-bd59-50f128f3e70e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "address": "fa:16:3e:df:15:ba", "network": {"id": "c2492855-666e-4f18-b2f9-6400370f5a3d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1169155584-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59f538c3ca749a394de7ee2b55565db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33dfe5a6-d4", "ovs_interfaceid": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.295 248514 DEBUG nova.network.os_vif_util [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Converting VIF {"id": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "address": "fa:16:3e:df:15:ba", "network": {"id": "c2492855-666e-4f18-b2f9-6400370f5a3d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1169155584-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59f538c3ca749a394de7ee2b55565db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33dfe5a6-d4", "ovs_interfaceid": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.296 248514 DEBUG nova.network.os_vif_util [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:15:ba,bridge_name='br-int',has_traffic_filtering=True,id=33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb,network=Network(c2492855-666e-4f18-b2f9-6400370f5a3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33dfe5a6-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.296 248514 DEBUG os_vif [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:15:ba,bridge_name='br-int',has_traffic_filtering=True,id=33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb,network=Network(c2492855-666e-4f18-b2f9-6400370f5a3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33dfe5a6-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.297 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.298 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.298 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.303 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33dfe5a6-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.304 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap33dfe5a6-d4, col_values=(('external_ids', {'iface-id': '33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:15:ba', 'vm-uuid': '5c021a3b-a49a-4c58-bd59-50f128f3e70e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:59 compute-0 NetworkManager[50376]: <info>  [1765614959.3079] manager: (tap33dfe5a6-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.308 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.312 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:35:59 compute-0 nova_compute[248510]: 2025-12-13 08:35:59.313 248514 INFO os_vif [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:15:ba,bridge_name='br-int',has_traffic_filtering=True,id=33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb,network=Network(c2492855-666e-4f18-b2f9-6400370f5a3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33dfe5a6-d4')
Dec 13 08:35:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:35:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1945844995' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:00 compute-0 nova_compute[248510]: 2025-12-13 08:36:00.243 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:36:00 compute-0 nova_compute[248510]: 2025-12-13 08:36:00.244 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:36:00 compute-0 nova_compute[248510]: 2025-12-13 08:36:00.244 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] No VIF found with MAC fa:16:3e:df:15:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:36:00 compute-0 nova_compute[248510]: 2025-12-13 08:36:00.245 248514 INFO nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Using config drive
Dec 13 08:36:00 compute-0 nova_compute[248510]: 2025-12-13 08:36:00.274 248514 DEBUG nova.storage.rbd_utils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] rbd image 5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:00 compute-0 nova_compute[248510]: 2025-12-13 08:36:00.435 248514 DEBUG nova.network.neutron [req-ba901780-9728-480b-a003-03493e26fa54 req-eb1f7dcb-bf0b-45c4-b425-6922149ae455 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Updated VIF entry in instance network info cache for port 33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:36:00 compute-0 nova_compute[248510]: 2025-12-13 08:36:00.436 248514 DEBUG nova.network.neutron [req-ba901780-9728-480b-a003-03493e26fa54 req-eb1f7dcb-bf0b-45c4-b425-6922149ae455 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Updating instance_info_cache with network_info: [{"id": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "address": "fa:16:3e:df:15:ba", "network": {"id": "c2492855-666e-4f18-b2f9-6400370f5a3d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1169155584-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59f538c3ca749a394de7ee2b55565db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33dfe5a6-d4", "ovs_interfaceid": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:36:00 compute-0 nova_compute[248510]: 2025-12-13 08:36:00.466 248514 DEBUG oslo_concurrency.lockutils [req-ba901780-9728-480b-a003-03493e26fa54 req-eb1f7dcb-bf0b-45c4-b425-6922149ae455 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5c021a3b-a49a-4c58-bd59-50f128f3e70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:36:00 compute-0 nova_compute[248510]: 2025-12-13 08:36:00.537 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:00 compute-0 nova_compute[248510]: 2025-12-13 08:36:00.881 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Acquiring lock "e6042583-fb13-4fe5-9486-82559f893be6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:00 compute-0 nova_compute[248510]: 2025-12-13 08:36:00.882 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:01 compute-0 ceph-mon[76537]: pgmap v2162: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:36:01 compute-0 nova_compute[248510]: 2025-12-13 08:36:01.037 248514 DEBUG nova.compute.manager [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:36:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:36:01 compute-0 nova_compute[248510]: 2025-12-13 08:36:01.511 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:01 compute-0 nova_compute[248510]: 2025-12-13 08:36:01.512 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:01 compute-0 nova_compute[248510]: 2025-12-13 08:36:01.523 248514 DEBUG nova.virt.hardware [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:36:01 compute-0 nova_compute[248510]: 2025-12-13 08:36:01.523 248514 INFO nova.compute.claims [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:36:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:36:01 compute-0 nova_compute[248510]: 2025-12-13 08:36:01.725 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:02 compute-0 ceph-mon[76537]: pgmap v2163: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.133 248514 INFO nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Creating config drive at /var/lib/nova/instances/5c021a3b-a49a-4c58-bd59-50f128f3e70e/disk.config
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.143 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c021a3b-a49a-4c58-bd59-50f128f3e70e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvo1bqqru execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:36:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3607647620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.294 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c021a3b-a49a-4c58-bd59-50f128f3e70e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvo1bqqru" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.328 248514 DEBUG nova.storage.rbd_utils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] rbd image 5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.333 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c021a3b-a49a-4c58-bd59-50f128f3e70e/disk.config 5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.382 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.390 248514 DEBUG nova.compute.provider_tree [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.409 248514 DEBUG nova.scheduler.client.report [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.546 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.548 248514 DEBUG nova.compute.manager [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.606 248514 DEBUG nova.compute.manager [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.607 248514 DEBUG nova.network.neutron [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.639 248514 INFO nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.669 248514 DEBUG nova.compute.manager [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.688 248514 DEBUG oslo_concurrency.processutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c021a3b-a49a-4c58-bd59-50f128f3e70e/disk.config 5c021a3b-a49a-4c58-bd59-50f128f3e70e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.690 248514 INFO nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Deleting local config drive /var/lib/nova/instances/5c021a3b-a49a-4c58-bd59-50f128f3e70e/disk.config because it was imported into RBD.
Dec 13 08:36:02 compute-0 kernel: tap33dfe5a6-d4: entered promiscuous mode
Dec 13 08:36:02 compute-0 ovn_controller[148476]: 2025-12-13T08:36:02Z|00803|binding|INFO|Claiming lport 33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb for this chassis.
Dec 13 08:36:02 compute-0 ovn_controller[148476]: 2025-12-13T08:36:02Z|00804|binding|INFO|33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb: Claiming fa:16:3e:df:15:ba 10.100.0.4
Dec 13 08:36:02 compute-0 NetworkManager[50376]: <info>  [1765614962.7598] manager: (tap33dfe5a6-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/345)
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:02 compute-0 systemd-udevd[327850]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:36:02 compute-0 NetworkManager[50376]: <info>  [1765614962.8085] device (tap33dfe5a6-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:36:02 compute-0 NetworkManager[50376]: <info>  [1765614962.8093] device (tap33dfe5a6-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.826 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:02 compute-0 ovn_controller[148476]: 2025-12-13T08:36:02Z|00805|binding|INFO|Setting lport 33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb ovn-installed in OVS
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.833 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:02 compute-0 ovn_controller[148476]: 2025-12-13T08:36:02Z|00806|binding|INFO|Setting lport 33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb up in Southbound
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.854 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:15:ba 10.100.0.4'], port_security=['fa:16:3e:df:15:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5c021a3b-a49a-4c58-bd59-50f128f3e70e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2492855-666e-4f18-b2f9-6400370f5a3d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c59f538c3ca749a394de7ee2b55565db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '085d73df-b3ad-44bf-b051-8638adce0938', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff2d87f-04db-4328-b369-252ec41bcd39, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.855 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb in datapath c2492855-666e-4f18-b2f9-6400370f5a3d bound to our chassis
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.857 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2492855-666e-4f18-b2f9-6400370f5a3d
Dec 13 08:36:02 compute-0 systemd-machined[210538]: New machine qemu-100-instance-00000051.
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.872 248514 DEBUG nova.compute.manager [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.872 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[281ed040-a2a7-4845-9f85-394f30c3bbb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.873 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc2492855-61 in ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.874 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.875 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc2492855-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.875 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c182f50-8d28-4ba5-afbc-00d1fa887bca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.874 248514 INFO nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Creating image(s)
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.876 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62462c5c-d74f-47cc-8bb0-8cb0dad446fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:02 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-00000051.
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.889 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e5517e15-3565-49d3-a562-ebe4119aa2eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.899 248514 DEBUG nova.storage.rbd_utils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] rbd image e6042583-fb13-4fe5-9486-82559f893be6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.915 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1a1021-7e37-4d51-ac94-7f6a5b09e343]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.932 248514 DEBUG nova.storage.rbd_utils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] rbd image e6042583-fb13-4fe5-9486-82559f893be6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.945 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d41316e6-f37b-4fd3-ad77-816dda8a2bf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:02 compute-0 NetworkManager[50376]: <info>  [1765614962.9556] manager: (tapc2492855-60): new Veth device (/org/freedesktop/NetworkManager/Devices/346)
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.955 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[87f6bbd4-23e2-4b3a-b38e-162dfa5bccbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.980 248514 DEBUG nova.storage.rbd_utils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] rbd image e6042583-fb13-4fe5-9486-82559f893be6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:02 compute-0 nova_compute[248510]: 2025-12-13 08:36:02.988 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.990 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9967aa88-4ffe-445a-bed9-957450ea58a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:02.994 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[55aa676a-5652-4b60-8429-60e256171c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:03 compute-0 NetworkManager[50376]: <info>  [1765614963.0250] device (tapc2492855-60): carrier: link connected
Dec 13 08:36:03 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3607647620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.031 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5cfcb6a4-e712-42fc-a7af-7382b1f1a70b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.044284) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614963044347, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1205, "num_deletes": 251, "total_data_size": 1869621, "memory_usage": 1904320, "flush_reason": "Manual Compaction"}
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.050 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[25f6e67b-ce19-4ed6-8a47-0ebbe68640b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2492855-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:8a:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754024, 'reachable_time': 21448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327941, 'error': None, 'target': 'ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614963062607, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 1813720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41427, "largest_seqno": 42631, "table_properties": {"data_size": 1808031, "index_size": 3020, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12407, "raw_average_key_size": 19, "raw_value_size": 1796560, "raw_average_value_size": 2879, "num_data_blocks": 136, "num_entries": 624, "num_filter_entries": 624, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614852, "oldest_key_time": 1765614852, "file_creation_time": 1765614963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 18524 microseconds, and 5001 cpu microseconds.
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.062 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.063 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.064 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.064 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.062803) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 1813720 bytes OK
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.062869) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.064931) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.064957) EVENT_LOG_v1 {"time_micros": 1765614963064950, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.064978) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 1864103, prev total WAL file size 1864103, number of live WAL files 2.
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.066040) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(1771KB)], [95(7789KB)]
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614963066155, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 9790151, "oldest_snapshot_seqno": -1}
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.076 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b29a03da-1b5f-44b9-a444-d14c9860a2dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:8a0e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 754024, 'tstamp': 754024}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327942, 'error': None, 'target': 'ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b66a14a-13fd-4707-a04a-23db1d4f0d5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2492855-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:8a:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754024, 'reachable_time': 21448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327960, 'error': None, 'target': 'ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.097 248514 DEBUG nova.storage.rbd_utils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] rbd image e6042583-fb13-4fe5-9486-82559f893be6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.110 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e6042583-fb13-4fe5-9486-82559f893be6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6399 keys, 8045494 bytes, temperature: kUnknown
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614963138378, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 8045494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8004981, "index_size": 23443, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16005, "raw_key_size": 166068, "raw_average_key_size": 25, "raw_value_size": 7892308, "raw_average_value_size": 1233, "num_data_blocks": 918, "num_entries": 6399, "num_filter_entries": 6399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.141 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[04ea4aa9-6ddf-42c5-a930-232b241c654d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.138606) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 8045494 bytes
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.142635) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.4 rd, 111.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.6 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(9.8) write-amplify(4.4) OK, records in: 6913, records dropped: 514 output_compression: NoCompression
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.142680) EVENT_LOG_v1 {"time_micros": 1765614963142662, "job": 56, "event": "compaction_finished", "compaction_time_micros": 72298, "compaction_time_cpu_micros": 29872, "output_level": 6, "num_output_files": 1, "total_output_size": 8045494, "num_input_records": 6913, "num_output_records": 6399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614963143563, "job": 56, "event": "table_file_deletion", "file_number": 97}
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614963145610, "job": 56, "event": "table_file_deletion", "file_number": 95}
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.065929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.145716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.145720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.145722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.145723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:36:03 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:36:03.145725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.224 248514 DEBUG nova.policy [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '71a2ea2f158b43a09b4eb1da303184d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0c8a55bacc994f02b324ecf72d6f8ae1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.226 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9262bf57-3114-4245-a6bb-c6bbb9610c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.227 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2492855-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.228 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.228 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2492855-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.230 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:03 compute-0 NetworkManager[50376]: <info>  [1765614963.2307] manager: (tapc2492855-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Dec 13 08:36:03 compute-0 kernel: tapc2492855-60: entered promiscuous mode
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.232 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.233 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2492855-60, col_values=(('external_ids', {'iface-id': '7856c4b0-be26-4a70-82cc-2374d9695144'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:03 compute-0 ovn_controller[148476]: 2025-12-13T08:36:03Z|00807|binding|INFO|Releasing lport 7856c4b0-be26-4a70-82cc-2374d9695144 from this chassis (sb_readonly=0)
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.249 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.250 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c2492855-666e-4f18-b2f9-6400370f5a3d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c2492855-666e-4f18-b2f9-6400370f5a3d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.252 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a9500a-8b27-44c0-8af9-8304d44754b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.253 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-c2492855-666e-4f18-b2f9-6400370f5a3d
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/c2492855-666e-4f18-b2f9-6400370f5a3d.pid.haproxy
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID c2492855-666e-4f18-b2f9-6400370f5a3d
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:36:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:03.254 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d', 'env', 'PROCESS_TAG=haproxy-c2492855-666e-4f18-b2f9-6400370f5a3d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c2492855-666e-4f18-b2f9-6400370f5a3d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.287 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614963.2874377, 5c021a3b-a49a-4c58-bd59-50f128f3e70e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.288 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] VM Started (Lifecycle Event)
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.325 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.331 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614963.2875438, 5c021a3b-a49a-4c58-bd59-50f128f3e70e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.332 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] VM Paused (Lifecycle Event)
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.361 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.365 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.396 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.438 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e6042583-fb13-4fe5-9486-82559f893be6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.497 248514 DEBUG nova.storage.rbd_utils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] resizing rbd image e6042583-fb13-4fe5-9486-82559f893be6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.582 248514 DEBUG nova.objects.instance [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lazy-loading 'migration_context' on Instance uuid e6042583-fb13-4fe5-9486-82559f893be6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.586 248514 DEBUG nova.compute.manager [req-185418ab-bb7e-4e76-82ba-a56772c16b40 req-75e3709c-cd6d-4bd7-899c-d19106d8d833 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Received event network-vif-plugged-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.586 248514 DEBUG oslo_concurrency.lockutils [req-185418ab-bb7e-4e76-82ba-a56772c16b40 req-75e3709c-cd6d-4bd7-899c-d19106d8d833 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.587 248514 DEBUG oslo_concurrency.lockutils [req-185418ab-bb7e-4e76-82ba-a56772c16b40 req-75e3709c-cd6d-4bd7-899c-d19106d8d833 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.587 248514 DEBUG oslo_concurrency.lockutils [req-185418ab-bb7e-4e76-82ba-a56772c16b40 req-75e3709c-cd6d-4bd7-899c-d19106d8d833 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.587 248514 DEBUG nova.compute.manager [req-185418ab-bb7e-4e76-82ba-a56772c16b40 req-75e3709c-cd6d-4bd7-899c-d19106d8d833 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Processing event network-vif-plugged-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.588 248514 DEBUG nova.compute.manager [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.591 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614963.5908983, 5c021a3b-a49a-4c58-bd59-50f128f3e70e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.591 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] VM Resumed (Lifecycle Event)
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.593 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.598 248514 INFO nova.virt.libvirt.driver [-] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Instance spawned successfully.
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.599 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.607 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.608 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Ensure instance console log exists: /var/lib/nova/instances/e6042583-fb13-4fe5-9486-82559f893be6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.608 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.609 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.609 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.615 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.622 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.627 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.628 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.628 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.629 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.629 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.630 248514 DEBUG nova.virt.libvirt.driver [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.664 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:36:03 compute-0 podman[328127]: 2025-12-13 08:36:03.6660484 +0000 UTC m=+0.058479333 container create 24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:36:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.700 248514 INFO nova.compute.manager [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Took 14.16 seconds to spawn the instance on the hypervisor.
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.702 248514 DEBUG nova.compute.manager [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:03 compute-0 systemd[1]: Started libpod-conmon-24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9.scope.
Dec 13 08:36:03 compute-0 podman[328127]: 2025-12-13 08:36:03.634778093 +0000 UTC m=+0.027209056 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:36:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df419efffded8d9f24e80c5667b31a4578e8f29f5e8a6a1d9ba921695017003d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:03 compute-0 podman[328127]: 2025-12-13 08:36:03.752015425 +0000 UTC m=+0.144446368 container init 24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:36:03 compute-0 podman[328127]: 2025-12-13 08:36:03.758195448 +0000 UTC m=+0.150626391 container start 24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:36:03 compute-0 neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d[328140]: [NOTICE]   (328144) : New worker (328146) forked
Dec 13 08:36:03 compute-0 neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d[328140]: [NOTICE]   (328144) : Loading success.
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.811 248514 INFO nova.compute.manager [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Took 16.82 seconds to build instance.
Dec 13 08:36:03 compute-0 nova_compute[248510]: 2025-12-13 08:36:03.832 248514 DEBUG oslo_concurrency.lockutils [None req-d7d85d6f-3d92-4662-96ff-e753fc0d4cc8 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:04 compute-0 ceph-mon[76537]: pgmap v2164: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec 13 08:36:04 compute-0 nova_compute[248510]: 2025-12-13 08:36:04.307 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:04 compute-0 nova_compute[248510]: 2025-12-13 08:36:04.970 248514 DEBUG nova.network.neutron [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Successfully created port: 295039e3-7d4e-443f-95b9-6461d093e3b8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:36:05 compute-0 nova_compute[248510]: 2025-12-13 08:36:05.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 121 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 13 08:36:05 compute-0 nova_compute[248510]: 2025-12-13 08:36:05.780 248514 DEBUG nova.compute.manager [req-be283097-22e7-4f16-81c7-ef310c91cda0 req-a23e9b7f-2788-431c-a324-ec8387384ff2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Received event network-vif-plugged-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:05 compute-0 nova_compute[248510]: 2025-12-13 08:36:05.781 248514 DEBUG oslo_concurrency.lockutils [req-be283097-22e7-4f16-81c7-ef310c91cda0 req-a23e9b7f-2788-431c-a324-ec8387384ff2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:05 compute-0 nova_compute[248510]: 2025-12-13 08:36:05.781 248514 DEBUG oslo_concurrency.lockutils [req-be283097-22e7-4f16-81c7-ef310c91cda0 req-a23e9b7f-2788-431c-a324-ec8387384ff2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:05 compute-0 nova_compute[248510]: 2025-12-13 08:36:05.782 248514 DEBUG oslo_concurrency.lockutils [req-be283097-22e7-4f16-81c7-ef310c91cda0 req-a23e9b7f-2788-431c-a324-ec8387384ff2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:05 compute-0 nova_compute[248510]: 2025-12-13 08:36:05.782 248514 DEBUG nova.compute.manager [req-be283097-22e7-4f16-81c7-ef310c91cda0 req-a23e9b7f-2788-431c-a324-ec8387384ff2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] No waiting events found dispatching network-vif-plugged-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:36:05 compute-0 nova_compute[248510]: 2025-12-13 08:36:05.782 248514 WARNING nova.compute.manager [req-be283097-22e7-4f16-81c7-ef310c91cda0 req-a23e9b7f-2788-431c-a324-ec8387384ff2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Received unexpected event network-vif-plugged-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb for instance with vm_state active and task_state None.
Dec 13 08:36:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.554 248514 DEBUG oslo_concurrency.lockutils [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Acquiring lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.555 248514 DEBUG oslo_concurrency.lockutils [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.555 248514 DEBUG oslo_concurrency.lockutils [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Acquiring lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.556 248514 DEBUG oslo_concurrency.lockutils [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.556 248514 DEBUG oslo_concurrency.lockutils [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.557 248514 INFO nova.compute.manager [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Terminating instance
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.559 248514 DEBUG nova.compute.manager [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:36:06 compute-0 kernel: tap33dfe5a6-d4 (unregistering): left promiscuous mode
Dec 13 08:36:06 compute-0 NetworkManager[50376]: <info>  [1765614966.5904] device (tap33dfe5a6-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:36:06 compute-0 ovn_controller[148476]: 2025-12-13T08:36:06Z|00808|binding|INFO|Releasing lport 33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb from this chassis (sb_readonly=0)
Dec 13 08:36:06 compute-0 ovn_controller[148476]: 2025-12-13T08:36:06Z|00809|binding|INFO|Setting lport 33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb down in Southbound
Dec 13 08:36:06 compute-0 ovn_controller[148476]: 2025-12-13T08:36:06Z|00810|binding|INFO|Removing iface tap33dfe5a6-d4 ovn-installed in OVS
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.599 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:06 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000051.scope: Deactivated successfully.
Dec 13 08:36:06 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000051.scope: Consumed 3.411s CPU time.
Dec 13 08:36:06 compute-0 systemd-machined[210538]: Machine qemu-100-instance-00000051 terminated.
Dec 13 08:36:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:06.770 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:15:ba 10.100.0.4'], port_security=['fa:16:3e:df:15:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5c021a3b-a49a-4c58-bd59-50f128f3e70e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2492855-666e-4f18-b2f9-6400370f5a3d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c59f538c3ca749a394de7ee2b55565db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '085d73df-b3ad-44bf-b051-8638adce0938', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff2d87f-04db-4328-b369-252ec41bcd39, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:36:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:06.772 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb in datapath c2492855-666e-4f18-b2f9-6400370f5a3d unbound from our chassis
Dec 13 08:36:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:06.774 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c2492855-666e-4f18-b2f9-6400370f5a3d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:36:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:06.775 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fd927728-5df7-493b-aefa-079a9347b132]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:06.775 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d namespace which is not needed anymore
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.796 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.809 248514 INFO nova.virt.libvirt.driver [-] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Instance destroyed successfully.
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.810 248514 DEBUG nova.objects.instance [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lazy-loading 'resources' on Instance uuid 5c021a3b-a49a-4c58-bd59-50f128f3e70e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.896 248514 DEBUG nova.virt.libvirt.vif [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:35:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-103756279',display_name='tempest-ServerGroupTestJSON-server-103756279',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-103756279',id=81,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:36:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c59f538c3ca749a394de7ee2b55565db',ramdisk_id='',reservation_id='r-7a87438b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1272063144',owner_user_name='tempest-ServerGroupTestJSON-1272063144-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:36:03Z,user_data=None,user_id='999a6abe49d04d79bc7b54dcddf62b9f',uuid=5c021a3b-a49a-4c58-bd59-50f128f3e70e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "address": "fa:16:3e:df:15:ba", "network": {"id": "c2492855-666e-4f18-b2f9-6400370f5a3d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1169155584-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59f538c3ca749a394de7ee2b55565db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33dfe5a6-d4", "ovs_interfaceid": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.897 248514 DEBUG nova.network.os_vif_util [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Converting VIF {"id": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "address": "fa:16:3e:df:15:ba", "network": {"id": "c2492855-666e-4f18-b2f9-6400370f5a3d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1169155584-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59f538c3ca749a394de7ee2b55565db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33dfe5a6-d4", "ovs_interfaceid": "33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.897 248514 DEBUG nova.network.os_vif_util [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:15:ba,bridge_name='br-int',has_traffic_filtering=True,id=33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb,network=Network(c2492855-666e-4f18-b2f9-6400370f5a3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33dfe5a6-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.898 248514 DEBUG os_vif [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:15:ba,bridge_name='br-int',has_traffic_filtering=True,id=33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb,network=Network(c2492855-666e-4f18-b2f9-6400370f5a3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33dfe5a6-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.900 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.900 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33dfe5a6-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.904 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:06 compute-0 nova_compute[248510]: 2025-12-13 08:36:06.907 248514 INFO os_vif [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:15:ba,bridge_name='br-int',has_traffic_filtering=True,id=33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb,network=Network(c2492855-666e-4f18-b2f9-6400370f5a3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33dfe5a6-d4')
Dec 13 08:36:06 compute-0 ceph-mon[76537]: pgmap v2165: 321 pgs: 321 active+clean; 121 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 13 08:36:06 compute-0 neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d[328140]: [NOTICE]   (328144) : haproxy version is 2.8.14-c23fe91
Dec 13 08:36:06 compute-0 neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d[328140]: [NOTICE]   (328144) : path to executable is /usr/sbin/haproxy
Dec 13 08:36:06 compute-0 neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d[328140]: [WARNING]  (328144) : Exiting Master process...
Dec 13 08:36:06 compute-0 neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d[328140]: [ALERT]    (328144) : Current worker (328146) exited with code 143 (Terminated)
Dec 13 08:36:06 compute-0 neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d[328140]: [WARNING]  (328144) : All workers exited. Exiting... (0)
Dec 13 08:36:06 compute-0 systemd[1]: libpod-24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9.scope: Deactivated successfully.
Dec 13 08:36:06 compute-0 podman[328189]: 2025-12-13 08:36:06.963009306 +0000 UTC m=+0.074045240 container died 24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:36:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9-userdata-shm.mount: Deactivated successfully.
Dec 13 08:36:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-df419efffded8d9f24e80c5667b31a4578e8f29f5e8a6a1d9ba921695017003d-merged.mount: Deactivated successfully.
Dec 13 08:36:07 compute-0 podman[328189]: 2025-12-13 08:36:07.082496273 +0000 UTC m=+0.193532217 container cleanup 24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 08:36:07 compute-0 systemd[1]: libpod-conmon-24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9.scope: Deactivated successfully.
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.160 248514 DEBUG nova.network.neutron [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Successfully updated port: 295039e3-7d4e-443f-95b9-6461d093e3b8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:36:07 compute-0 podman[328237]: 2025-12-13 08:36:07.176572579 +0000 UTC m=+0.056442812 container remove 24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:36:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:07.183 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ec752728-f58c-42ad-84c8-34ed7f43b8d0]: (4, ('Sat Dec 13 08:36:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d (24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9)\n24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9\nSat Dec 13 08:36:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d (24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9)\n24878b0de5ce7f06bb3ddbb58049d1501147accaae3fd5178ef1422e118bdfa9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:07.185 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79902b28-90ca-43ba-874f-6caa92e92041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:07.186 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2492855-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.188 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:07 compute-0 kernel: tapc2492855-60: left promiscuous mode
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.201 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.202 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:07.204 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7f51c6d5-527c-4cb3-b2da-f3e8dfac1937]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:07.213 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[599dfd5f-2d21-42bb-a7f4-8285ab9c9413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:07.215 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[23a973e4-04f6-4ffe-8f2b-4a9d0fae8433]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:07.231 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d2fd37-6d46-4101-80d5-900f775debbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754015, 'reachable_time': 33611, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328253, 'error': None, 'target': 'ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:07.234 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c2492855-666e-4f18-b2f9-6400370f5a3d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:36:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:07.234 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ce64a447-ecda-4954-bec2-b8673d35fe3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:07 compute-0 systemd[1]: run-netns-ovnmeta\x2dc2492855\x2d666e\x2d4f18\x2db2f9\x2d6400370f5a3d.mount: Deactivated successfully.
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.265 248514 INFO nova.virt.libvirt.driver [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Deleting instance files /var/lib/nova/instances/5c021a3b-a49a-4c58-bd59-50f128f3e70e_del
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.266 248514 INFO nova.virt.libvirt.driver [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Deletion of /var/lib/nova/instances/5c021a3b-a49a-4c58-bd59-50f128f3e70e_del complete
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.345 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Acquiring lock "refresh_cache-e6042583-fb13-4fe5-9486-82559f893be6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.346 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Acquired lock "refresh_cache-e6042583-fb13-4fe5-9486-82559f893be6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.346 248514 DEBUG nova.network.neutron [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.393 248514 INFO nova.compute.manager [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Took 0.83 seconds to destroy the instance on the hypervisor.
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.393 248514 DEBUG oslo.service.loopingcall [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.394 248514 DEBUG nova.compute.manager [-] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:36:07 compute-0 nova_compute[248510]: 2025-12-13 08:36:07.394 248514 DEBUG nova.network.neutron [-] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:36:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 121 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 444 KiB/s rd, 1.5 MiB/s wr, 34 op/s
Dec 13 08:36:08 compute-0 nova_compute[248510]: 2025-12-13 08:36:08.410 248514 DEBUG nova.network.neutron [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:36:08 compute-0 nova_compute[248510]: 2025-12-13 08:36:08.717 248514 DEBUG nova.compute.manager [req-29d85fb8-c7ca-4ddb-861c-d67e40e8e933 req-a6374025-700e-4a83-a635-a8d9b3cffa25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Received event network-changed-295039e3-7d4e-443f-95b9-6461d093e3b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:08 compute-0 nova_compute[248510]: 2025-12-13 08:36:08.717 248514 DEBUG nova.compute.manager [req-29d85fb8-c7ca-4ddb-861c-d67e40e8e933 req-a6374025-700e-4a83-a635-a8d9b3cffa25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Refreshing instance network info cache due to event network-changed-295039e3-7d4e-443f-95b9-6461d093e3b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:36:08 compute-0 nova_compute[248510]: 2025-12-13 08:36:08.718 248514 DEBUG oslo_concurrency.lockutils [req-29d85fb8-c7ca-4ddb-861c-d67e40e8e933 req-a6374025-700e-4a83-a635-a8d9b3cffa25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-e6042583-fb13-4fe5-9486-82559f893be6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:36:08 compute-0 nova_compute[248510]: 2025-12-13 08:36:08.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:36:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:36:09
Dec 13 08:36:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:36:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:36:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'images', 'cephfs.cephfs.data', 'backups']
Dec 13 08:36:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:36:09 compute-0 ceph-mon[76537]: pgmap v2166: 321 pgs: 321 active+clean; 121 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 444 KiB/s rd, 1.5 MiB/s wr, 34 op/s
Dec 13 08:36:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 104 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:36:10 compute-0 ceph-mon[76537]: pgmap v2167: 321 pgs: 321 active+clean; 104 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Dec 13 08:36:10 compute-0 nova_compute[248510]: 2025-12-13 08:36:10.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:36:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:36:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:36:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 88 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.835 248514 DEBUG nova.compute.manager [req-97c0b5cd-f4fe-4e7a-9589-4c744b7e53a8 req-b3399ddd-5364-408b-88af-769076f66752 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Received event network-vif-unplugged-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.835 248514 DEBUG oslo_concurrency.lockutils [req-97c0b5cd-f4fe-4e7a-9589-4c744b7e53a8 req-b3399ddd-5364-408b-88af-769076f66752 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.835 248514 DEBUG oslo_concurrency.lockutils [req-97c0b5cd-f4fe-4e7a-9589-4c744b7e53a8 req-b3399ddd-5364-408b-88af-769076f66752 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.836 248514 DEBUG oslo_concurrency.lockutils [req-97c0b5cd-f4fe-4e7a-9589-4c744b7e53a8 req-b3399ddd-5364-408b-88af-769076f66752 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.836 248514 DEBUG nova.compute.manager [req-97c0b5cd-f4fe-4e7a-9589-4c744b7e53a8 req-b3399ddd-5364-408b-88af-769076f66752 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] No waiting events found dispatching network-vif-unplugged-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.836 248514 DEBUG nova.compute.manager [req-97c0b5cd-f4fe-4e7a-9589-4c744b7e53a8 req-b3399ddd-5364-408b-88af-769076f66752 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Received event network-vif-unplugged-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.836 248514 DEBUG nova.compute.manager [req-97c0b5cd-f4fe-4e7a-9589-4c744b7e53a8 req-b3399ddd-5364-408b-88af-769076f66752 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Received event network-vif-plugged-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.837 248514 DEBUG oslo_concurrency.lockutils [req-97c0b5cd-f4fe-4e7a-9589-4c744b7e53a8 req-b3399ddd-5364-408b-88af-769076f66752 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.837 248514 DEBUG oslo_concurrency.lockutils [req-97c0b5cd-f4fe-4e7a-9589-4c744b7e53a8 req-b3399ddd-5364-408b-88af-769076f66752 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.837 248514 DEBUG oslo_concurrency.lockutils [req-97c0b5cd-f4fe-4e7a-9589-4c744b7e53a8 req-b3399ddd-5364-408b-88af-769076f66752 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.837 248514 DEBUG nova.compute.manager [req-97c0b5cd-f4fe-4e7a-9589-4c744b7e53a8 req-b3399ddd-5364-408b-88af-769076f66752 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] No waiting events found dispatching network-vif-plugged-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.837 248514 WARNING nova.compute.manager [req-97c0b5cd-f4fe-4e7a-9589-4c744b7e53a8 req-b3399ddd-5364-408b-88af-769076f66752 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Received unexpected event network-vif-plugged-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb for instance with vm_state active and task_state deleting.
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.895 248514 DEBUG nova.network.neutron [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Updating instance_info_cache with network_info: [{"id": "295039e3-7d4e-443f-95b9-6461d093e3b8", "address": "fa:16:3e:c0:db:37", "network": {"id": "c233f790-2413-4fec-b315-717389cc1f77", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1380420184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8a55bacc994f02b324ecf72d6f8ae1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295039e3-7d", "ovs_interfaceid": "295039e3-7d4e-443f-95b9-6461d093e3b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.904 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.991 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Releasing lock "refresh_cache-e6042583-fb13-4fe5-9486-82559f893be6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.991 248514 DEBUG nova.compute.manager [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Instance network_info: |[{"id": "295039e3-7d4e-443f-95b9-6461d093e3b8", "address": "fa:16:3e:c0:db:37", "network": {"id": "c233f790-2413-4fec-b315-717389cc1f77", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1380420184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8a55bacc994f02b324ecf72d6f8ae1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295039e3-7d", "ovs_interfaceid": "295039e3-7d4e-443f-95b9-6461d093e3b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.992 248514 DEBUG oslo_concurrency.lockutils [req-29d85fb8-c7ca-4ddb-861c-d67e40e8e933 req-a6374025-700e-4a83-a635-a8d9b3cffa25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-e6042583-fb13-4fe5-9486-82559f893be6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.992 248514 DEBUG nova.network.neutron [req-29d85fb8-c7ca-4ddb-861c-d67e40e8e933 req-a6374025-700e-4a83-a635-a8d9b3cffa25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Refreshing network info cache for port 295039e3-7d4e-443f-95b9-6461d093e3b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:36:11 compute-0 nova_compute[248510]: 2025-12-13 08:36:11.995 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Start _get_guest_xml network_info=[{"id": "295039e3-7d4e-443f-95b9-6461d093e3b8", "address": "fa:16:3e:c0:db:37", "network": {"id": "c233f790-2413-4fec-b315-717389cc1f77", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1380420184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8a55bacc994f02b324ecf72d6f8ae1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295039e3-7d", "ovs_interfaceid": "295039e3-7d4e-443f-95b9-6461d093e3b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.003 248514 WARNING nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.010 248514 DEBUG nova.virt.libvirt.host [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.011 248514 DEBUG nova.virt.libvirt.host [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.015 248514 DEBUG nova.virt.libvirt.host [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.016 248514 DEBUG nova.virt.libvirt.host [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.016 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.016 248514 DEBUG nova.virt.hardware [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.017 248514 DEBUG nova.virt.hardware [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.017 248514 DEBUG nova.virt.hardware [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.017 248514 DEBUG nova.virt.hardware [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.018 248514 DEBUG nova.virt.hardware [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.018 248514 DEBUG nova.virt.hardware [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.018 248514 DEBUG nova.virt.hardware [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.019 248514 DEBUG nova.virt.hardware [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.019 248514 DEBUG nova.virt.hardware [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.019 248514 DEBUG nova.virt.hardware [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.019 248514 DEBUG nova.virt.hardware [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.022 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.060 248514 DEBUG nova.network.neutron [-] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.096 248514 INFO nova.compute.manager [-] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Took 4.70 seconds to deallocate network for instance.
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.147 248514 DEBUG oslo_concurrency.lockutils [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.148 248514 DEBUG oslo_concurrency.lockutils [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.262 248514 DEBUG oslo_concurrency.processutils [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:36:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246317810' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.637 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.663 248514 DEBUG nova.storage.rbd_utils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] rbd image e6042583-fb13-4fe5-9486-82559f893be6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.668 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:12 compute-0 ceph-mon[76537]: pgmap v2168: 321 pgs: 321 active+clean; 88 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Dec 13 08:36:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2246317810' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:36:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/674101991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.821 248514 DEBUG oslo_concurrency.processutils [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.828 248514 DEBUG nova.compute.provider_tree [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.854 248514 DEBUG nova.scheduler.client.report [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.880 248514 DEBUG oslo_concurrency.lockutils [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:12 compute-0 nova_compute[248510]: 2025-12-13 08:36:12.936 248514 INFO nova.scheduler.client.report [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Deleted allocations for instance 5c021a3b-a49a-4c58-bd59-50f128f3e70e
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.017 248514 DEBUG oslo_concurrency.lockutils [None req-d30c38f5-40a0-422a-b46c-11efbd23bef2 999a6abe49d04d79bc7b54dcddf62b9f c59f538c3ca749a394de7ee2b55565db - - default default] Lock "5c021a3b-a49a-4c58-bd59-50f128f3e70e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:36:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2261540003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.237 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.239 248514 DEBUG nova.virt.libvirt.vif [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:35:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-772395811',display_name='tempest-ServersTestManualDisk-server-772395811',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-772395811',id=82,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHllh+bNrxKLd6OWmyEtu3okvMAsImJ3irsRW5TQfXeQ5OmWWf+5uRaN0GBizwMjAgm6PymgOHcLbrpQ6fhr2d56uE+hQMJeLTTjfvxGuHauuGOBoMSMw9/U72Yhc3pfXQ==',key_name='tempest-keypair-763525040',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c8a55bacc994f02b324ecf72d6f8ae1',ramdisk_id='',reservation_id='r-0k3qejtx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-80053177',owner_user_name='tempest-ServersTestManualDisk-80053177-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:36:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71a2ea2f158b43a09b4eb1da303184d5',uuid=e6042583-fb13-4fe5-9486-82559f893be6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "295039e3-7d4e-443f-95b9-6461d093e3b8", "address": "fa:16:3e:c0:db:37", "network": {"id": "c233f790-2413-4fec-b315-717389cc1f77", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1380420184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8a55bacc994f02b324ecf72d6f8ae1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295039e3-7d", "ovs_interfaceid": "295039e3-7d4e-443f-95b9-6461d093e3b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.239 248514 DEBUG nova.network.os_vif_util [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Converting VIF {"id": "295039e3-7d4e-443f-95b9-6461d093e3b8", "address": "fa:16:3e:c0:db:37", "network": {"id": "c233f790-2413-4fec-b315-717389cc1f77", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1380420184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8a55bacc994f02b324ecf72d6f8ae1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295039e3-7d", "ovs_interfaceid": "295039e3-7d4e-443f-95b9-6461d093e3b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.240 248514 DEBUG nova.network.os_vif_util [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:db:37,bridge_name='br-int',has_traffic_filtering=True,id=295039e3-7d4e-443f-95b9-6461d093e3b8,network=Network(c233f790-2413-4fec-b315-717389cc1f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295039e3-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.242 248514 DEBUG nova.objects.instance [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lazy-loading 'pci_devices' on Instance uuid e6042583-fb13-4fe5-9486-82559f893be6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.718 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:36:13 compute-0 nova_compute[248510]:   <uuid>e6042583-fb13-4fe5-9486-82559f893be6</uuid>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   <name>instance-00000052</name>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersTestManualDisk-server-772395811</nova:name>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:36:12</nova:creationTime>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:36:13 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:36:13 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:36:13 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:36:13 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:36:13 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:36:13 compute-0 nova_compute[248510]:         <nova:user uuid="71a2ea2f158b43a09b4eb1da303184d5">tempest-ServersTestManualDisk-80053177-project-member</nova:user>
Dec 13 08:36:13 compute-0 nova_compute[248510]:         <nova:project uuid="0c8a55bacc994f02b324ecf72d6f8ae1">tempest-ServersTestManualDisk-80053177</nova:project>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:36:13 compute-0 nova_compute[248510]:         <nova:port uuid="295039e3-7d4e-443f-95b9-6461d093e3b8">
Dec 13 08:36:13 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <system>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <entry name="serial">e6042583-fb13-4fe5-9486-82559f893be6</entry>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <entry name="uuid">e6042583-fb13-4fe5-9486-82559f893be6</entry>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     </system>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   <os>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   </os>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   <features>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   </features>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/e6042583-fb13-4fe5-9486-82559f893be6_disk">
Dec 13 08:36:13 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       </source>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:36:13 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/e6042583-fb13-4fe5-9486-82559f893be6_disk.config">
Dec 13 08:36:13 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       </source>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:36:13 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:c0:db:37"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <target dev="tap295039e3-7d"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/e6042583-fb13-4fe5-9486-82559f893be6/console.log" append="off"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <video>
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     </video>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:36:13 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:36:13 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:36:13 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:36:13 compute-0 nova_compute[248510]: </domain>
Dec 13 08:36:13 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.720 248514 DEBUG nova.compute.manager [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Preparing to wait for external event network-vif-plugged-295039e3-7d4e-443f-95b9-6461d093e3b8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.721 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Acquiring lock "e6042583-fb13-4fe5-9486-82559f893be6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.721 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.721 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.722 248514 DEBUG nova.virt.libvirt.vif [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:35:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-772395811',display_name='tempest-ServersTestManualDisk-server-772395811',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-772395811',id=82,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHllh+bNrxKLd6OWmyEtu3okvMAsImJ3irsRW5TQfXeQ5OmWWf+5uRaN0GBizwMjAgm6PymgOHcLbrpQ6fhr2d56uE+hQMJeLTTjfvxGuHauuGOBoMSMw9/U72Yhc3pfXQ==',key_name='tempest-keypair-763525040',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c8a55bacc994f02b324ecf72d6f8ae1',ramdisk_id='',reservation_id='r-0k3qejtx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-80053177',owner_user_name='tempest-ServersTestManualDisk-80053177-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:36:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71a2ea2f158b43a09b4eb1da303184d5',uuid=e6042583-fb13-4fe5-9486-82559f893be6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "295039e3-7d4e-443f-95b9-6461d093e3b8", "address": "fa:16:3e:c0:db:37", "network": {"id": "c233f790-2413-4fec-b315-717389cc1f77", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1380420184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8a55bacc994f02b324ecf72d6f8ae1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295039e3-7d", "ovs_interfaceid": "295039e3-7d4e-443f-95b9-6461d093e3b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.722 248514 DEBUG nova.network.os_vif_util [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Converting VIF {"id": "295039e3-7d4e-443f-95b9-6461d093e3b8", "address": "fa:16:3e:c0:db:37", "network": {"id": "c233f790-2413-4fec-b315-717389cc1f77", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1380420184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8a55bacc994f02b324ecf72d6f8ae1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295039e3-7d", "ovs_interfaceid": "295039e3-7d4e-443f-95b9-6461d093e3b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.723 248514 DEBUG nova.network.os_vif_util [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:db:37,bridge_name='br-int',has_traffic_filtering=True,id=295039e3-7d4e-443f-95b9-6461d093e3b8,network=Network(c233f790-2413-4fec-b315-717389cc1f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295039e3-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.723 248514 DEBUG os_vif [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:db:37,bridge_name='br-int',has_traffic_filtering=True,id=295039e3-7d4e-443f-95b9-6461d093e3b8,network=Network(c233f790-2413-4fec-b315-717389cc1f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295039e3-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.724 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.724 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.725 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.729 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap295039e3-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.729 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap295039e3-7d, col_values=(('external_ids', {'iface-id': '295039e3-7d4e-443f-95b9-6461d093e3b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:db:37', 'vm-uuid': 'e6042583-fb13-4fe5-9486-82559f893be6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.732 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:36:13 compute-0 NetworkManager[50376]: <info>  [1765614973.7324] manager: (tap295039e3-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.740 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:13 compute-0 nova_compute[248510]: 2025-12-13 08:36:13.741 248514 INFO os_vif [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:db:37,bridge_name='br-int',has_traffic_filtering=True,id=295039e3-7d4e-443f-95b9-6461d093e3b8,network=Network(c233f790-2413-4fec-b315-717389cc1f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295039e3-7d')
Dec 13 08:36:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/674101991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2261540003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:14 compute-0 ceph-mon[76537]: pgmap v2169: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Dec 13 08:36:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:36:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2418784633' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:36:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:36:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2418784633' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:36:15 compute-0 nova_compute[248510]: 2025-12-13 08:36:15.321 248514 DEBUG nova.network.neutron [req-29d85fb8-c7ca-4ddb-861c-d67e40e8e933 req-a6374025-700e-4a83-a635-a8d9b3cffa25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Updated VIF entry in instance network info cache for port 295039e3-7d4e-443f-95b9-6461d093e3b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:36:15 compute-0 nova_compute[248510]: 2025-12-13 08:36:15.321 248514 DEBUG nova.network.neutron [req-29d85fb8-c7ca-4ddb-861c-d67e40e8e933 req-a6374025-700e-4a83-a635-a8d9b3cffa25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Updating instance_info_cache with network_info: [{"id": "295039e3-7d4e-443f-95b9-6461d093e3b8", "address": "fa:16:3e:c0:db:37", "network": {"id": "c233f790-2413-4fec-b315-717389cc1f77", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1380420184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8a55bacc994f02b324ecf72d6f8ae1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295039e3-7d", "ovs_interfaceid": "295039e3-7d4e-443f-95b9-6461d093e3b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:36:15 compute-0 nova_compute[248510]: 2025-12-13 08:36:15.539 248514 DEBUG nova.compute.manager [req-104f0e9b-4abe-4b8d-91f0-de445d34f664 req-7dc5fd87-9510-46d1-a7eb-e07aed6d3694 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Received event network-vif-deleted-33dfe5a6-d4e0-4d7c-9fd0-c400a73cf2bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:15 compute-0 nova_compute[248510]: 2025-12-13 08:36:15.580 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:36:15 compute-0 nova_compute[248510]: 2025-12-13 08:36:15.580 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:36:15 compute-0 nova_compute[248510]: 2025-12-13 08:36:15.580 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] No VIF found with MAC fa:16:3e:c0:db:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:36:15 compute-0 nova_compute[248510]: 2025-12-13 08:36:15.581 248514 INFO nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Using config drive
Dec 13 08:36:15 compute-0 nova_compute[248510]: 2025-12-13 08:36:15.605 248514 DEBUG nova.storage.rbd_utils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] rbd image e6042583-fb13-4fe5-9486-82559f893be6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:15 compute-0 nova_compute[248510]: 2025-12-13 08:36:15.610 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:15 compute-0 nova_compute[248510]: 2025-12-13 08:36:15.612 248514 DEBUG oslo_concurrency.lockutils [req-29d85fb8-c7ca-4ddb-861c-d67e40e8e933 req-a6374025-700e-4a83-a635-a8d9b3cffa25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-e6042583-fb13-4fe5-9486-82559f893be6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:36:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Dec 13 08:36:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2418784633' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:36:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2418784633' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:36:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:36:16 compute-0 nova_compute[248510]: 2025-12-13 08:36:16.748 248514 INFO nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Creating config drive at /var/lib/nova/instances/e6042583-fb13-4fe5-9486-82559f893be6/disk.config
Dec 13 08:36:16 compute-0 nova_compute[248510]: 2025-12-13 08:36:16.755 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e6042583-fb13-4fe5-9486-82559f893be6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoel4jbb2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:16 compute-0 ceph-mon[76537]: pgmap v2170: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Dec 13 08:36:16 compute-0 nova_compute[248510]: 2025-12-13 08:36:16.897 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e6042583-fb13-4fe5-9486-82559f893be6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoel4jbb2" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:16 compute-0 nova_compute[248510]: 2025-12-13 08:36:16.924 248514 DEBUG nova.storage.rbd_utils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] rbd image e6042583-fb13-4fe5-9486-82559f893be6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:16 compute-0 nova_compute[248510]: 2025-12-13 08:36:16.928 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e6042583-fb13-4fe5-9486-82559f893be6/disk.config e6042583-fb13-4fe5-9486-82559f893be6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.100 248514 DEBUG oslo_concurrency.processutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e6042583-fb13-4fe5-9486-82559f893be6/disk.config e6042583-fb13-4fe5-9486-82559f893be6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.101 248514 INFO nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Deleting local config drive /var/lib/nova/instances/e6042583-fb13-4fe5-9486-82559f893be6/disk.config because it was imported into RBD.
Dec 13 08:36:17 compute-0 kernel: tap295039e3-7d: entered promiscuous mode
Dec 13 08:36:17 compute-0 NetworkManager[50376]: <info>  [1765614977.1707] manager: (tap295039e3-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/349)
Dec 13 08:36:17 compute-0 ovn_controller[148476]: 2025-12-13T08:36:17Z|00811|binding|INFO|Claiming lport 295039e3-7d4e-443f-95b9-6461d093e3b8 for this chassis.
Dec 13 08:36:17 compute-0 ovn_controller[148476]: 2025-12-13T08:36:17Z|00812|binding|INFO|295039e3-7d4e-443f-95b9-6461d093e3b8: Claiming fa:16:3e:c0:db:37 10.100.0.7
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.170 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.185 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:db:37 10.100.0.7'], port_security=['fa:16:3e:c0:db:37 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e6042583-fb13-4fe5-9486-82559f893be6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c233f790-2413-4fec-b315-717389cc1f77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c8a55bacc994f02b324ecf72d6f8ae1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc8123e6-4496-4224-a491-533f7737dbbd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ccf86f8-cdd1-459c-b09d-a4b6ca014d0e, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=295039e3-7d4e-443f-95b9-6461d093e3b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.187 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 295039e3-7d4e-443f-95b9-6461d093e3b8 in datapath c233f790-2413-4fec-b315-717389cc1f77 bound to our chassis
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.189 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c233f790-2413-4fec-b315-717389cc1f77
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.207 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b05aaa-3975-444d-b2de-435e4bc44ce3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.209 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc233f790-21 in ovnmeta-c233f790-2413-4fec-b315-717389cc1f77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.211 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc233f790-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.211 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[420fd289-f807-4b56-b5ae-b5069ac42dee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.212 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[26b9732a-c94e-4005-a1b0-0b17bb4fd219]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 systemd-udevd[328412]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:36:17 compute-0 systemd-machined[210538]: New machine qemu-101-instance-00000052.
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.226 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[5922badb-641d-4b5f-8b7f-8b9fe402428c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 NetworkManager[50376]: <info>  [1765614977.2318] device (tap295039e3-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:36:17 compute-0 NetworkManager[50376]: <info>  [1765614977.2330] device (tap295039e3-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:36:17 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-00000052.
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.235 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:17 compute-0 ovn_controller[148476]: 2025-12-13T08:36:17Z|00813|binding|INFO|Setting lport 295039e3-7d4e-443f-95b9-6461d093e3b8 ovn-installed in OVS
Dec 13 08:36:17 compute-0 ovn_controller[148476]: 2025-12-13T08:36:17Z|00814|binding|INFO|Setting lport 295039e3-7d4e-443f-95b9-6461d093e3b8 up in Southbound
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.240 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.243 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf41378-ea60-4b60-b9fb-3fc08cb0a3ce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.282 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7e76e116-59f4-47e0-b83c-e5e896c13088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.288 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ac423ed2-ed8c-4728-8b88-50c15904554b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 NetworkManager[50376]: <info>  [1765614977.2899] manager: (tapc233f790-20): new Veth device (/org/freedesktop/NetworkManager/Devices/350)
Dec 13 08:36:17 compute-0 systemd-udevd[328416]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.321 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e4bb1c22-fc9b-44cb-ad88-96276b53af9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.324 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6757db-9582-407b-8fc0-173ca2457921]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 NetworkManager[50376]: <info>  [1765614977.3496] device (tapc233f790-20): carrier: link connected
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.354 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c4810bd3-eda8-4f8a-a79b-1512638c5020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.374 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9751c3-6ac9-443a-9ba7-5380896852b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc233f790-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:2d:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755456, 'reachable_time': 42521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328445, 'error': None, 'target': 'ovnmeta-c233f790-2413-4fec-b315-717389cc1f77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.393 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf1d1ac-519f-4ac2-8ce8-b0b62e2e68c4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8a:2d52'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 755456, 'tstamp': 755456}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328446, 'error': None, 'target': 'ovnmeta-c233f790-2413-4fec-b315-717389cc1f77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.416 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f8196ffc-b783-4fe5-9932-9fbb41a696fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc233f790-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:2d:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755456, 'reachable_time': 42521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328447, 'error': None, 'target': 'ovnmeta-c233f790-2413-4fec-b315-717389cc1f77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.418 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquiring lock "7d2022b0-ae34-41c3-b775-eca57874dc3d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.418 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.450 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3ae0c1-c119-415d-b850-7131421f2b67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.451 248514 DEBUG nova.compute.manager [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.536 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.537 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.545 248514 DEBUG nova.virt.hardware [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.545 248514 INFO nova.compute.claims [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.626 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d94312-2e4d-4fc7-acff-8d4b03424fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.629 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc233f790-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.629 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.629 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc233f790-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.631 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:17 compute-0 kernel: tapc233f790-20: entered promiscuous mode
Dec 13 08:36:17 compute-0 NetworkManager[50376]: <info>  [1765614977.6322] manager: (tapc233f790-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.637 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc233f790-20, col_values=(('external_ids', {'iface-id': 'd3f200a3-5a17-4d4a-bfbc-603e0a7e6812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:17 compute-0 ovn_controller[148476]: 2025-12-13T08:36:17Z|00815|binding|INFO|Releasing lport d3f200a3-5a17-4d4a-bfbc-603e0a7e6812 from this chassis (sb_readonly=0)
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.640 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.641 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c233f790-2413-4fec-b315-717389cc1f77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c233f790-2413-4fec-b315-717389cc1f77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.642 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0fae52d4-a576-4d33-bc8b-b12b3a16370d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.643 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-c233f790-2413-4fec-b315-717389cc1f77
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/c233f790-2413-4fec-b315-717389cc1f77.pid.haproxy
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID c233f790-2413-4fec-b315-717389cc1f77
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:36:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:17.645 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c233f790-2413-4fec-b315-717389cc1f77', 'env', 'PROCESS_TAG=haproxy-c233f790-2413-4fec-b315-717389cc1f77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c233f790-2413-4fec-b315-717389cc1f77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.654 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 327 KiB/s wr, 92 op/s
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.701 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.775 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614977.774986, e6042583-fb13-4fe5-9486-82559f893be6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.776 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6042583-fb13-4fe5-9486-82559f893be6] VM Started (Lifecycle Event)
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.804 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.809 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614977.7751582, e6042583-fb13-4fe5-9486-82559f893be6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.810 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6042583-fb13-4fe5-9486-82559f893be6] VM Paused (Lifecycle Event)
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.833 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.837 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.866 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6042583-fb13-4fe5-9486-82559f893be6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.885 248514 DEBUG nova.compute.manager [req-c9eba10f-1bf9-4279-86af-46d5339d2d0c req-d4174164-4b66-43f9-8ffb-6e3cdc9ba790 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Received event network-vif-plugged-295039e3-7d4e-443f-95b9-6461d093e3b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.886 248514 DEBUG oslo_concurrency.lockutils [req-c9eba10f-1bf9-4279-86af-46d5339d2d0c req-d4174164-4b66-43f9-8ffb-6e3cdc9ba790 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e6042583-fb13-4fe5-9486-82559f893be6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.886 248514 DEBUG oslo_concurrency.lockutils [req-c9eba10f-1bf9-4279-86af-46d5339d2d0c req-d4174164-4b66-43f9-8ffb-6e3cdc9ba790 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.887 248514 DEBUG oslo_concurrency.lockutils [req-c9eba10f-1bf9-4279-86af-46d5339d2d0c req-d4174164-4b66-43f9-8ffb-6e3cdc9ba790 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.887 248514 DEBUG nova.compute.manager [req-c9eba10f-1bf9-4279-86af-46d5339d2d0c req-d4174164-4b66-43f9-8ffb-6e3cdc9ba790 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Processing event network-vif-plugged-295039e3-7d4e-443f-95b9-6461d093e3b8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.888 248514 DEBUG nova.compute.manager [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.904 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614977.9033926, e6042583-fb13-4fe5-9486-82559f893be6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.905 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6042583-fb13-4fe5-9486-82559f893be6] VM Resumed (Lifecycle Event)
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.908 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.923 248514 INFO nova.virt.libvirt.driver [-] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Instance spawned successfully.
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.924 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.937 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.944 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.957 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.958 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.959 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.960 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.961 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.962 248514 DEBUG nova.virt.libvirt.driver [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:17 compute-0 nova_compute[248510]: 2025-12-13 08:36:17.968 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e6042583-fb13-4fe5-9486-82559f893be6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:36:18 compute-0 podman[328541]: 2025-12-13 08:36:18.028549604 +0000 UTC m=+0.064826711 container create 2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:36:18 compute-0 podman[328541]: 2025-12-13 08:36:17.989161846 +0000 UTC m=+0.025439043 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:36:18 compute-0 systemd[1]: Started libpod-conmon-2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0.scope.
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.154 248514 INFO nova.compute.manager [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Took 15.28 seconds to spawn the instance on the hypervisor.
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.154 248514 DEBUG nova.compute.manager [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:36:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a7a84e49fd32eaf99e2064b4fc6e94e067df1c9e6d00ecb45004d62e706345/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:18 compute-0 podman[328541]: 2025-12-13 08:36:18.180698252 +0000 UTC m=+0.216975389 container init 2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 08:36:18 compute-0 podman[328541]: 2025-12-13 08:36:18.185639955 +0000 UTC m=+0.221917062 container start 2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:36:18 compute-0 neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77[328557]: [NOTICE]   (328561) : New worker (328563) forked
Dec 13 08:36:18 compute-0 neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77[328557]: [NOTICE]   (328561) : Loading success.
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.230 248514 INFO nova.compute.manager [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Took 16.75 seconds to build instance.
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.256 248514 DEBUG oslo_concurrency.lockutils [None req-4a755ad4-a522-40c7-9ac5-8e7c62eaddf0 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:36:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359112380' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.334 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.340 248514 DEBUG nova.compute.provider_tree [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.365 248514 DEBUG nova.scheduler.client.report [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.391 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.392 248514 DEBUG nova.compute.manager [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.454 248514 DEBUG nova.compute.manager [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.454 248514 DEBUG nova.network.neutron [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.486 248514 INFO nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.510 248514 DEBUG nova.compute.manager [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.641 248514 DEBUG nova.compute.manager [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.643 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.643 248514 INFO nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Creating image(s)
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.664 248514 DEBUG nova.storage.rbd_utils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.684 248514 DEBUG nova.storage.rbd_utils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.704 248514 DEBUG nova.storage.rbd_utils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.708 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.752 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:18 compute-0 ceph-mon[76537]: pgmap v2171: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 327 KiB/s wr, 92 op/s
Dec 13 08:36:18 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/359112380' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.802 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.804 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.805 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.805 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.832 248514 DEBUG nova.storage.rbd_utils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.838 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:18 compute-0 nova_compute[248510]: 2025-12-13 08:36:18.872 248514 DEBUG nova.policy [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a1e1f77fd6714a9bae1617c2c179169f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f34aedb6d80843b39686cb02b480702d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:36:19 compute-0 nova_compute[248510]: 2025-12-13 08:36:19.373 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:19 compute-0 nova_compute[248510]: 2025-12-13 08:36:19.421 248514 DEBUG nova.storage.rbd_utils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] resizing rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:36:19 compute-0 nova_compute[248510]: 2025-12-13 08:36:19.486 248514 DEBUG nova.objects.instance [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lazy-loading 'migration_context' on Instance uuid 7d2022b0-ae34-41c3-b775-eca57874dc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:19 compute-0 nova_compute[248510]: 2025-12-13 08:36:19.533 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:36:19 compute-0 nova_compute[248510]: 2025-12-13 08:36:19.534 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Ensure instance console log exists: /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:36:19 compute-0 nova_compute[248510]: 2025-12-13 08:36:19.535 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:19 compute-0 nova_compute[248510]: 2025-12-13 08:36:19.535 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:19 compute-0 nova_compute[248510]: 2025-12-13 08:36:19.536 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 341 KiB/s wr, 103 op/s
Dec 13 08:36:19 compute-0 nova_compute[248510]: 2025-12-13 08:36:19.743 248514 DEBUG nova.network.neutron [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Successfully created port: 02f6066e-8276-478d-a9f4-b41637656170 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:36:20 compute-0 nova_compute[248510]: 2025-12-13 08:36:20.150 248514 DEBUG nova.compute.manager [req-b0374614-6b6c-4bff-adb1-7f192394c4ca req-a1ecc912-c113-4ea8-9ee6-a37193085b7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Received event network-vif-plugged-295039e3-7d4e-443f-95b9-6461d093e3b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:20 compute-0 nova_compute[248510]: 2025-12-13 08:36:20.151 248514 DEBUG oslo_concurrency.lockutils [req-b0374614-6b6c-4bff-adb1-7f192394c4ca req-a1ecc912-c113-4ea8-9ee6-a37193085b7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e6042583-fb13-4fe5-9486-82559f893be6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:20 compute-0 nova_compute[248510]: 2025-12-13 08:36:20.152 248514 DEBUG oslo_concurrency.lockutils [req-b0374614-6b6c-4bff-adb1-7f192394c4ca req-a1ecc912-c113-4ea8-9ee6-a37193085b7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:20 compute-0 nova_compute[248510]: 2025-12-13 08:36:20.153 248514 DEBUG oslo_concurrency.lockutils [req-b0374614-6b6c-4bff-adb1-7f192394c4ca req-a1ecc912-c113-4ea8-9ee6-a37193085b7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:20 compute-0 nova_compute[248510]: 2025-12-13 08:36:20.153 248514 DEBUG nova.compute.manager [req-b0374614-6b6c-4bff-adb1-7f192394c4ca req-a1ecc912-c113-4ea8-9ee6-a37193085b7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] No waiting events found dispatching network-vif-plugged-295039e3-7d4e-443f-95b9-6461d093e3b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:36:20 compute-0 nova_compute[248510]: 2025-12-13 08:36:20.154 248514 WARNING nova.compute.manager [req-b0374614-6b6c-4bff-adb1-7f192394c4ca req-a1ecc912-c113-4ea8-9ee6-a37193085b7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Received unexpected event network-vif-plugged-295039e3-7d4e-443f-95b9-6461d093e3b8 for instance with vm_state active and task_state None.
Dec 13 08:36:20 compute-0 nova_compute[248510]: 2025-12-13 08:36:20.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:20 compute-0 sudo[328741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:36:20 compute-0 sudo[328741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:36:20 compute-0 sudo[328741]: pam_unix(sudo:session): session closed for user root
Dec 13 08:36:20 compute-0 ceph-mon[76537]: pgmap v2172: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 341 KiB/s wr, 103 op/s
Dec 13 08:36:20 compute-0 sudo[328766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:36:20 compute-0 sudo[328766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035767581008441156 of space, bias 1.0, pg target 0.10730274302532347 quantized to 32 (current 32)
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006672807089742323 of space, bias 1.0, pg target 0.2001842126922697 quantized to 32 (current 32)
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.956660834126945e-07 of space, bias 4.0, pg target 0.0007147993000952334 quantized to 16 (current 32)
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:36:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:36:21 compute-0 NetworkManager[50376]: <info>  [1765614981.0735] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Dec 13 08:36:21 compute-0 NetworkManager[50376]: <info>  [1765614981.0753] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Dec 13 08:36:21 compute-0 nova_compute[248510]: 2025-12-13 08:36:21.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:21 compute-0 nova_compute[248510]: 2025-12-13 08:36:21.156 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:21 compute-0 ovn_controller[148476]: 2025-12-13T08:36:21Z|00816|binding|INFO|Releasing lport d3f200a3-5a17-4d4a-bfbc-603e0a7e6812 from this chassis (sb_readonly=0)
Dec 13 08:36:21 compute-0 nova_compute[248510]: 2025-12-13 08:36:21.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:21 compute-0 nova_compute[248510]: 2025-12-13 08:36:21.190 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:36:21 compute-0 sudo[328766]: pam_unix(sudo:session): session closed for user root
Dec 13 08:36:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 08:36:21 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 08:36:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:36:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:36:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:36:21 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:36:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:36:21 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:36:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:36:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:36:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:36:21 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:36:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:36:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:36:21 compute-0 sudo[328821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:36:21 compute-0 sudo[328821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:36:21 compute-0 sudo[328821]: pam_unix(sudo:session): session closed for user root
Dec 13 08:36:21 compute-0 sudo[328846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:36:21 compute-0 sudo[328846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:36:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 106 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 798 KiB/s wr, 66 op/s
Dec 13 08:36:21 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 08:36:21 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:36:21 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:36:21 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:36:21 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:36:21 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:36:21 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:36:21 compute-0 nova_compute[248510]: 2025-12-13 08:36:21.807 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614966.80654, 5c021a3b-a49a-4c58-bd59-50f128f3e70e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:36:21 compute-0 nova_compute[248510]: 2025-12-13 08:36:21.808 248514 INFO nova.compute.manager [-] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] VM Stopped (Lifecycle Event)
Dec 13 08:36:21 compute-0 podman[328884]: 2025-12-13 08:36:21.892714616 +0000 UTC m=+0.047441209 container create c6876c5a2dcd170a4d4a3b2fce9cc40f2ef7cce6f3b27f176ce36861ff5ed6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:36:21 compute-0 systemd[1]: Started libpod-conmon-c6876c5a2dcd170a4d4a3b2fce9cc40f2ef7cce6f3b27f176ce36861ff5ed6df.scope.
Dec 13 08:36:21 compute-0 podman[328884]: 2025-12-13 08:36:21.870639538 +0000 UTC m=+0.025366161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:36:21 compute-0 nova_compute[248510]: 2025-12-13 08:36:21.975 248514 DEBUG nova.compute.manager [None req-bed19cc0-edd5-4641-8dca-b1af3f599ebe - - - - - -] [instance: 5c021a3b-a49a-4c58-bd59-50f128f3e70e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:36:21 compute-0 podman[328884]: 2025-12-13 08:36:21.997919668 +0000 UTC m=+0.152646281 container init c6876c5a2dcd170a4d4a3b2fce9cc40f2ef7cce6f3b27f176ce36861ff5ed6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_leakey, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 08:36:22 compute-0 podman[328884]: 2025-12-13 08:36:22.006746588 +0000 UTC m=+0.161473181 container start c6876c5a2dcd170a4d4a3b2fce9cc40f2ef7cce6f3b27f176ce36861ff5ed6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_leakey, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:36:22 compute-0 podman[328884]: 2025-12-13 08:36:22.010237094 +0000 UTC m=+0.164963717 container attach c6876c5a2dcd170a4d4a3b2fce9cc40f2ef7cce6f3b27f176ce36861ff5ed6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_leakey, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 08:36:22 compute-0 condescending_leakey[328900]: 167 167
Dec 13 08:36:22 compute-0 systemd[1]: libpod-c6876c5a2dcd170a4d4a3b2fce9cc40f2ef7cce6f3b27f176ce36861ff5ed6df.scope: Deactivated successfully.
Dec 13 08:36:22 compute-0 podman[328884]: 2025-12-13 08:36:22.016806647 +0000 UTC m=+0.171533240 container died c6876c5a2dcd170a4d4a3b2fce9cc40f2ef7cce6f3b27f176ce36861ff5ed6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b6ab5669617401a9cc9c1d2d830228a7d7d4ccafafcbb7746a9aa7849c66ffe-merged.mount: Deactivated successfully.
Dec 13 08:36:22 compute-0 podman[328884]: 2025-12-13 08:36:22.059706183 +0000 UTC m=+0.214432776 container remove c6876c5a2dcd170a4d4a3b2fce9cc40f2ef7cce6f3b27f176ce36861ff5ed6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:36:22 compute-0 systemd[1]: libpod-conmon-c6876c5a2dcd170a4d4a3b2fce9cc40f2ef7cce6f3b27f176ce36861ff5ed6df.scope: Deactivated successfully.
Dec 13 08:36:22 compute-0 podman[328924]: 2025-12-13 08:36:22.247892596 +0000 UTC m=+0.057220052 container create 85aaad4621405642a87e5e046ee41c0a01c01afa8dce3686b583a70f194401a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 08:36:22 compute-0 systemd[1]: Started libpod-conmon-85aaad4621405642a87e5e046ee41c0a01c01afa8dce3686b583a70f194401a0.scope.
Dec 13 08:36:22 compute-0 podman[328924]: 2025-12-13 08:36:22.223591273 +0000 UTC m=+0.032918759 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:36:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cca588ad2aeba859afff755e1ceaba8aad52190f6e27cd308a4fc10e6024c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cca588ad2aeba859afff755e1ceaba8aad52190f6e27cd308a4fc10e6024c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cca588ad2aeba859afff755e1ceaba8aad52190f6e27cd308a4fc10e6024c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cca588ad2aeba859afff755e1ceaba8aad52190f6e27cd308a4fc10e6024c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cca588ad2aeba859afff755e1ceaba8aad52190f6e27cd308a4fc10e6024c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:22 compute-0 podman[328924]: 2025-12-13 08:36:22.340291151 +0000 UTC m=+0.149618627 container init 85aaad4621405642a87e5e046ee41c0a01c01afa8dce3686b583a70f194401a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 08:36:22 compute-0 podman[328924]: 2025-12-13 08:36:22.350761931 +0000 UTC m=+0.160089387 container start 85aaad4621405642a87e5e046ee41c0a01c01afa8dce3686b583a70f194401a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 08:36:22 compute-0 podman[328924]: 2025-12-13 08:36:22.357121699 +0000 UTC m=+0.166449175 container attach 85aaad4621405642a87e5e046ee41c0a01c01afa8dce3686b583a70f194401a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.368 248514 DEBUG nova.compute.manager [req-45161c95-2e94-40b0-ba6f-6b9bc981a6fc req-a966b2df-3e80-4eac-a566-2fe353ab73dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Received event network-changed-295039e3-7d4e-443f-95b9-6461d093e3b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.370 248514 DEBUG nova.compute.manager [req-45161c95-2e94-40b0-ba6f-6b9bc981a6fc req-a966b2df-3e80-4eac-a566-2fe353ab73dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Refreshing instance network info cache due to event network-changed-295039e3-7d4e-443f-95b9-6461d093e3b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.371 248514 DEBUG oslo_concurrency.lockutils [req-45161c95-2e94-40b0-ba6f-6b9bc981a6fc req-a966b2df-3e80-4eac-a566-2fe353ab73dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-e6042583-fb13-4fe5-9486-82559f893be6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.371 248514 DEBUG oslo_concurrency.lockutils [req-45161c95-2e94-40b0-ba6f-6b9bc981a6fc req-a966b2df-3e80-4eac-a566-2fe353ab73dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-e6042583-fb13-4fe5-9486-82559f893be6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.372 248514 DEBUG nova.network.neutron [req-45161c95-2e94-40b0-ba6f-6b9bc981a6fc req-a966b2df-3e80-4eac-a566-2fe353ab73dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Refreshing network info cache for port 295039e3-7d4e-443f-95b9-6461d093e3b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.400 248514 DEBUG nova.network.neutron [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Successfully updated port: 02f6066e-8276-478d-a9f4-b41637656170 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.421 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquiring lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.422 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquired lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.422 248514 DEBUG nova.network.neutron [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.519 248514 DEBUG nova.compute.manager [req-535e8744-8d81-4330-80e3-edebbba35c02 req-9f7601d7-35c3-46fd-82c6-637f054ec952 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-changed-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.519 248514 DEBUG nova.compute.manager [req-535e8744-8d81-4330-80e3-edebbba35c02 req-9f7601d7-35c3-46fd-82c6-637f054ec952 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Refreshing instance network info cache due to event network-changed-02f6066e-8276-478d-a9f4-b41637656170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.520 248514 DEBUG oslo_concurrency.lockutils [req-535e8744-8d81-4330-80e3-edebbba35c02 req-9f7601d7-35c3-46fd-82c6-637f054ec952 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:36:22 compute-0 nova_compute[248510]: 2025-12-13 08:36:22.668 248514 DEBUG nova.network.neutron [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:36:22 compute-0 ceph-mon[76537]: pgmap v2173: 321 pgs: 321 active+clean; 106 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 798 KiB/s wr, 66 op/s
Dec 13 08:36:22 compute-0 quizzical_gagarin[328941]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:36:22 compute-0 quizzical_gagarin[328941]: --> All data devices are unavailable
Dec 13 08:36:22 compute-0 systemd[1]: libpod-85aaad4621405642a87e5e046ee41c0a01c01afa8dce3686b583a70f194401a0.scope: Deactivated successfully.
Dec 13 08:36:22 compute-0 podman[328924]: 2025-12-13 08:36:22.872437656 +0000 UTC m=+0.681765112 container died 85aaad4621405642a87e5e046ee41c0a01c01afa8dce3686b583a70f194401a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-30cca588ad2aeba859afff755e1ceaba8aad52190f6e27cd308a4fc10e6024c0-merged.mount: Deactivated successfully.
Dec 13 08:36:22 compute-0 podman[328924]: 2025-12-13 08:36:22.919889085 +0000 UTC m=+0.729216561 container remove 85aaad4621405642a87e5e046ee41c0a01c01afa8dce3686b583a70f194401a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 08:36:22 compute-0 systemd[1]: libpod-conmon-85aaad4621405642a87e5e046ee41c0a01c01afa8dce3686b583a70f194401a0.scope: Deactivated successfully.
Dec 13 08:36:22 compute-0 sudo[328846]: pam_unix(sudo:session): session closed for user root
Dec 13 08:36:23 compute-0 sudo[328973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:36:23 compute-0 sudo[328973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:36:23 compute-0 sudo[328973]: pam_unix(sudo:session): session closed for user root
Dec 13 08:36:23 compute-0 sudo[328998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:36:23 compute-0 sudo[328998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:36:23 compute-0 podman[329035]: 2025-12-13 08:36:23.395444405 +0000 UTC m=+0.043041750 container create 26f80df10cdce060ea2f4fabf506b7404dedf7f8c527c94bb75e4687bdbbce1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:36:23 compute-0 systemd[1]: Started libpod-conmon-26f80df10cdce060ea2f4fabf506b7404dedf7f8c527c94bb75e4687bdbbce1c.scope.
Dec 13 08:36:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:36:23 compute-0 podman[329035]: 2025-12-13 08:36:23.375715485 +0000 UTC m=+0.023312860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:36:23 compute-0 podman[329035]: 2025-12-13 08:36:23.482536407 +0000 UTC m=+0.130133812 container init 26f80df10cdce060ea2f4fabf506b7404dedf7f8c527c94bb75e4687bdbbce1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 08:36:23 compute-0 podman[329035]: 2025-12-13 08:36:23.489463369 +0000 UTC m=+0.137060714 container start 26f80df10cdce060ea2f4fabf506b7404dedf7f8c527c94bb75e4687bdbbce1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 08:36:23 compute-0 podman[329035]: 2025-12-13 08:36:23.492598837 +0000 UTC m=+0.140196232 container attach 26f80df10cdce060ea2f4fabf506b7404dedf7f8c527c94bb75e4687bdbbce1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 08:36:23 compute-0 beautiful_dhawan[329051]: 167 167
Dec 13 08:36:23 compute-0 systemd[1]: libpod-26f80df10cdce060ea2f4fabf506b7404dedf7f8c527c94bb75e4687bdbbce1c.scope: Deactivated successfully.
Dec 13 08:36:23 compute-0 podman[329035]: 2025-12-13 08:36:23.496785561 +0000 UTC m=+0.144382906 container died 26f80df10cdce060ea2f4fabf506b7404dedf7f8c527c94bb75e4687bdbbce1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_dhawan, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 08:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c59ede7ce65251559779a5c63e3a1692fded46f6718313e2a7657081c4a214b-merged.mount: Deactivated successfully.
Dec 13 08:36:23 compute-0 podman[329035]: 2025-12-13 08:36:23.543948172 +0000 UTC m=+0.191545517 container remove 26f80df10cdce060ea2f4fabf506b7404dedf7f8c527c94bb75e4687bdbbce1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_dhawan, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 08:36:23 compute-0 systemd[1]: libpod-conmon-26f80df10cdce060ea2f4fabf506b7404dedf7f8c527c94bb75e4687bdbbce1c.scope: Deactivated successfully.
Dec 13 08:36:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 123 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 87 op/s
Dec 13 08:36:23 compute-0 podman[329077]: 2025-12-13 08:36:23.703467924 +0000 UTC m=+0.041995574 container create 96eaaa78103a2b468f88eb0a508d6df53b356d55be284d20d6a2e8a8bbfdeba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ramanujan, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 08:36:23 compute-0 systemd[1]: Started libpod-conmon-96eaaa78103a2b468f88eb0a508d6df53b356d55be284d20d6a2e8a8bbfdeba3.scope.
Dec 13 08:36:23 compute-0 podman[329077]: 2025-12-13 08:36:23.683393055 +0000 UTC m=+0.021920735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:36:23 compute-0 nova_compute[248510]: 2025-12-13 08:36:23.791 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6e88899bdf2c578d596c280e26279994b785ae101838379406fdc2fd6d5edc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6e88899bdf2c578d596c280e26279994b785ae101838379406fdc2fd6d5edc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6e88899bdf2c578d596c280e26279994b785ae101838379406fdc2fd6d5edc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6e88899bdf2c578d596c280e26279994b785ae101838379406fdc2fd6d5edc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:23 compute-0 podman[329077]: 2025-12-13 08:36:23.822552271 +0000 UTC m=+0.161079931 container init 96eaaa78103a2b468f88eb0a508d6df53b356d55be284d20d6a2e8a8bbfdeba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 08:36:23 compute-0 podman[329077]: 2025-12-13 08:36:23.831941585 +0000 UTC m=+0.170469235 container start 96eaaa78103a2b468f88eb0a508d6df53b356d55be284d20d6a2e8a8bbfdeba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ramanujan, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:36:23 compute-0 podman[329077]: 2025-12-13 08:36:23.838525618 +0000 UTC m=+0.177053288 container attach 96eaaa78103a2b468f88eb0a508d6df53b356d55be284d20d6a2e8a8bbfdeba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]: {
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:     "0": [
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:         {
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "devices": [
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "/dev/loop3"
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             ],
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_name": "ceph_lv0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_size": "21470642176",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "name": "ceph_lv0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "tags": {
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.cluster_name": "ceph",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.crush_device_class": "",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.encrypted": "0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.objectstore": "bluestore",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.osd_id": "0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.type": "block",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.vdo": "0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.with_tpm": "0"
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             },
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "type": "block",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "vg_name": "ceph_vg0"
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:         }
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:     ],
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:     "1": [
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:         {
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "devices": [
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "/dev/loop4"
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             ],
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_name": "ceph_lv1",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_size": "21470642176",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "name": "ceph_lv1",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "tags": {
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.cluster_name": "ceph",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.crush_device_class": "",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.encrypted": "0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.objectstore": "bluestore",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.osd_id": "1",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.type": "block",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.vdo": "0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.with_tpm": "0"
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             },
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "type": "block",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "vg_name": "ceph_vg1"
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:         }
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:     ],
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:     "2": [
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:         {
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "devices": [
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "/dev/loop5"
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             ],
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_name": "ceph_lv2",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_size": "21470642176",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "name": "ceph_lv2",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "tags": {
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.cluster_name": "ceph",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.crush_device_class": "",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.encrypted": "0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.objectstore": "bluestore",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.osd_id": "2",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.type": "block",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.vdo": "0",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:                 "ceph.with_tpm": "0"
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             },
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "type": "block",
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:             "vg_name": "ceph_vg2"
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:         }
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]:     ]
Dec 13 08:36:24 compute-0 vibrant_ramanujan[329093]: }
Dec 13 08:36:24 compute-0 systemd[1]: libpod-96eaaa78103a2b468f88eb0a508d6df53b356d55be284d20d6a2e8a8bbfdeba3.scope: Deactivated successfully.
Dec 13 08:36:24 compute-0 podman[329077]: 2025-12-13 08:36:24.173106747 +0000 UTC m=+0.511634397 container died 96eaaa78103a2b468f88eb0a508d6df53b356d55be284d20d6a2e8a8bbfdeba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ramanujan, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e6e88899bdf2c578d596c280e26279994b785ae101838379406fdc2fd6d5edc-merged.mount: Deactivated successfully.
Dec 13 08:36:24 compute-0 podman[329077]: 2025-12-13 08:36:24.5782959 +0000 UTC m=+0.916823550 container remove 96eaaa78103a2b468f88eb0a508d6df53b356d55be284d20d6a2e8a8bbfdeba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ramanujan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 08:36:24 compute-0 sudo[328998]: pam_unix(sudo:session): session closed for user root
Dec 13 08:36:24 compute-0 systemd[1]: libpod-conmon-96eaaa78103a2b468f88eb0a508d6df53b356d55be284d20d6a2e8a8bbfdeba3.scope: Deactivated successfully.
Dec 13 08:36:24 compute-0 sudo[329116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:36:24 compute-0 sudo[329116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:36:24 compute-0 sudo[329116]: pam_unix(sudo:session): session closed for user root
Dec 13 08:36:24 compute-0 sudo[329141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:36:24 compute-0 sudo[329141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:36:24 compute-0 ceph-mon[76537]: pgmap v2174: 321 pgs: 321 active+clean; 123 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 87 op/s
Dec 13 08:36:25 compute-0 podman[329177]: 2025-12-13 08:36:25.033440072 +0000 UTC m=+0.050799953 container create 3f809a5c56c222d085d435ac3b872033e66791a7a8dc5b3efba75562446c3592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keldysh, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 08:36:25 compute-0 systemd[1]: Started libpod-conmon-3f809a5c56c222d085d435ac3b872033e66791a7a8dc5b3efba75562446c3592.scope.
Dec 13 08:36:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:36:25 compute-0 podman[329177]: 2025-12-13 08:36:25.004689668 +0000 UTC m=+0.022049589 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:36:25 compute-0 podman[329177]: 2025-12-13 08:36:25.120860193 +0000 UTC m=+0.138220114 container init 3f809a5c56c222d085d435ac3b872033e66791a7a8dc5b3efba75562446c3592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 08:36:25 compute-0 podman[329177]: 2025-12-13 08:36:25.127052346 +0000 UTC m=+0.144412237 container start 3f809a5c56c222d085d435ac3b872033e66791a7a8dc5b3efba75562446c3592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keldysh, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 08:36:25 compute-0 heuristic_keldysh[329193]: 167 167
Dec 13 08:36:25 compute-0 systemd[1]: libpod-3f809a5c56c222d085d435ac3b872033e66791a7a8dc5b3efba75562446c3592.scope: Deactivated successfully.
Dec 13 08:36:25 compute-0 conmon[329193]: conmon 3f809a5c56c222d085d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f809a5c56c222d085d435ac3b872033e66791a7a8dc5b3efba75562446c3592.scope/container/memory.events
Dec 13 08:36:25 compute-0 podman[329177]: 2025-12-13 08:36:25.178147235 +0000 UTC m=+0.195507236 container attach 3f809a5c56c222d085d435ac3b872033e66791a7a8dc5b3efba75562446c3592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 08:36:25 compute-0 podman[329177]: 2025-12-13 08:36:25.17873695 +0000 UTC m=+0.196096841 container died 3f809a5c56c222d085d435ac3b872033e66791a7a8dc5b3efba75562446c3592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keldysh, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5738d70fdb482b6bf0aee7f95debf2b3da2892229d532c3e49f25d13549657b-merged.mount: Deactivated successfully.
Dec 13 08:36:25 compute-0 podman[329177]: 2025-12-13 08:36:25.337854371 +0000 UTC m=+0.355214272 container remove 3f809a5c56c222d085d435ac3b872033e66791a7a8dc5b3efba75562446c3592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 08:36:25 compute-0 systemd[1]: libpod-conmon-3f809a5c56c222d085d435ac3b872033e66791a7a8dc5b3efba75562446c3592.scope: Deactivated successfully.
Dec 13 08:36:25 compute-0 podman[329217]: 2025-12-13 08:36:25.523254416 +0000 UTC m=+0.043933722 container create e07fcf269248f917ba7848bb414541481b19b1d47457d0b40d0e5cc7a64cbb35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 08:36:25 compute-0 systemd[1]: Started libpod-conmon-e07fcf269248f917ba7848bb414541481b19b1d47457d0b40d0e5cc7a64cbb35.scope.
Dec 13 08:36:25 compute-0 podman[329217]: 2025-12-13 08:36:25.503317201 +0000 UTC m=+0.023996527 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:36:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:36:25 compute-0 nova_compute[248510]: 2025-12-13 08:36:25.599 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a400515751b7f54e9a161cb301b2a9dc451302bdc489b33d9a81fdb73153ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a400515751b7f54e9a161cb301b2a9dc451302bdc489b33d9a81fdb73153ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a400515751b7f54e9a161cb301b2a9dc451302bdc489b33d9a81fdb73153ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a400515751b7f54e9a161cb301b2a9dc451302bdc489b33d9a81fdb73153ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:36:25 compute-0 podman[329217]: 2025-12-13 08:36:25.614176274 +0000 UTC m=+0.134855600 container init e07fcf269248f917ba7848bb414541481b19b1d47457d0b40d0e5cc7a64cbb35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 08:36:25 compute-0 podman[329217]: 2025-12-13 08:36:25.62369774 +0000 UTC m=+0.144377036 container start e07fcf269248f917ba7848bb414541481b19b1d47457d0b40d0e5cc7a64cbb35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 08:36:25 compute-0 podman[329217]: 2025-12-13 08:36:25.630159941 +0000 UTC m=+0.150839257 container attach e07fcf269248f917ba7848bb414541481b19b1d47457d0b40d0e5cc7a64cbb35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 08:36:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 134 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 08:36:26 compute-0 lvm[329309]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:36:26 compute-0 lvm[329309]: VG ceph_vg0 finished
Dec 13 08:36:26 compute-0 lvm[329312]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:36:26 compute-0 lvm[329312]: VG ceph_vg1 finished
Dec 13 08:36:26 compute-0 lvm[329314]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:36:26 compute-0 lvm[329314]: VG ceph_vg2 finished
Dec 13 08:36:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:36:26 compute-0 jovial_franklin[329233]: {}
Dec 13 08:36:26 compute-0 systemd[1]: libpod-e07fcf269248f917ba7848bb414541481b19b1d47457d0b40d0e5cc7a64cbb35.scope: Deactivated successfully.
Dec 13 08:36:26 compute-0 podman[329217]: 2025-12-13 08:36:26.511790605 +0000 UTC m=+1.032469911 container died e07fcf269248f917ba7848bb414541481b19b1d47457d0b40d0e5cc7a64cbb35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 08:36:26 compute-0 systemd[1]: libpod-e07fcf269248f917ba7848bb414541481b19b1d47457d0b40d0e5cc7a64cbb35.scope: Consumed 1.347s CPU time.
Dec 13 08:36:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-39a400515751b7f54e9a161cb301b2a9dc451302bdc489b33d9a81fdb73153ea-merged.mount: Deactivated successfully.
Dec 13 08:36:26 compute-0 podman[329217]: 2025-12-13 08:36:26.55265756 +0000 UTC m=+1.073336866 container remove e07fcf269248f917ba7848bb414541481b19b1d47457d0b40d0e5cc7a64cbb35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:36:26 compute-0 systemd[1]: libpod-conmon-e07fcf269248f917ba7848bb414541481b19b1d47457d0b40d0e5cc7a64cbb35.scope: Deactivated successfully.
Dec 13 08:36:26 compute-0 sudo[329141]: pam_unix(sudo:session): session closed for user root
Dec 13 08:36:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:36:26 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:36:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:36:26 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:36:26 compute-0 sudo[329328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:36:26 compute-0 sudo[329328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:36:26 compute-0 sudo[329328]: pam_unix(sudo:session): session closed for user root
Dec 13 08:36:26 compute-0 ceph-mon[76537]: pgmap v2175: 321 pgs: 321 active+clean; 134 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 08:36:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:36:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.007 248514 DEBUG nova.network.neutron [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updating instance_info_cache with network_info: [{"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.047 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Releasing lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.047 248514 DEBUG nova.compute.manager [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Instance network_info: |[{"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.048 248514 DEBUG oslo_concurrency.lockutils [req-535e8744-8d81-4330-80e3-edebbba35c02 req-9f7601d7-35c3-46fd-82c6-637f054ec952 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.048 248514 DEBUG nova.network.neutron [req-535e8744-8d81-4330-80e3-edebbba35c02 req-9f7601d7-35c3-46fd-82c6-637f054ec952 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Refreshing network info cache for port 02f6066e-8276-478d-a9f4-b41637656170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.051 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Start _get_guest_xml network_info=[{"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.056 248514 WARNING nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.077 248514 DEBUG nova.virt.libvirt.host [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.078 248514 DEBUG nova.virt.libvirt.host [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.085 248514 DEBUG nova.virt.libvirt.host [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.085 248514 DEBUG nova.virt.libvirt.host [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.086 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.086 248514 DEBUG nova.virt.hardware [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.087 248514 DEBUG nova.virt.hardware [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.087 248514 DEBUG nova.virt.hardware [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.087 248514 DEBUG nova.virt.hardware [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.087 248514 DEBUG nova.virt.hardware [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.087 248514 DEBUG nova.virt.hardware [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.088 248514 DEBUG nova.virt.hardware [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.088 248514 DEBUG nova.virt.hardware [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.088 248514 DEBUG nova.virt.hardware [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.088 248514 DEBUG nova.virt.hardware [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.089 248514 DEBUG nova.virt.hardware [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.091 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:27 compute-0 ovn_controller[148476]: 2025-12-13T08:36:27Z|00817|binding|INFO|Releasing lport d3f200a3-5a17-4d4a-bfbc-603e0a7e6812 from this chassis (sb_readonly=0)
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.327 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.364 248514 DEBUG nova.network.neutron [req-45161c95-2e94-40b0-ba6f-6b9bc981a6fc req-a966b2df-3e80-4eac-a566-2fe353ab73dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Updated VIF entry in instance network info cache for port 295039e3-7d4e-443f-95b9-6461d093e3b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.365 248514 DEBUG nova.network.neutron [req-45161c95-2e94-40b0-ba6f-6b9bc981a6fc req-a966b2df-3e80-4eac-a566-2fe353ab73dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Updating instance_info_cache with network_info: [{"id": "295039e3-7d4e-443f-95b9-6461d093e3b8", "address": "fa:16:3e:c0:db:37", "network": {"id": "c233f790-2413-4fec-b315-717389cc1f77", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1380420184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8a55bacc994f02b324ecf72d6f8ae1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295039e3-7d", "ovs_interfaceid": "295039e3-7d4e-443f-95b9-6461d093e3b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.407 248514 DEBUG oslo_concurrency.lockutils [req-45161c95-2e94-40b0-ba6f-6b9bc981a6fc req-a966b2df-3e80-4eac-a566-2fe353ab73dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-e6042583-fb13-4fe5-9486-82559f893be6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:36:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 134 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 08:36:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:36:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562012237' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.723 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.751 248514 DEBUG nova.storage.rbd_utils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:27 compute-0 nova_compute[248510]: 2025-12-13 08:36:27.757 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:27 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/562012237' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:36:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/961274276' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.302 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.304 248514 DEBUG nova.virt.libvirt.vif [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:36:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-100223357',display_name='tempest-ServerRescueTestJSONUnderV235-server-100223357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-100223357',id=83,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f34aedb6d80843b39686cb02b480702d',ramdisk_id='',reservation_id='r-4cwg6sq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1510317611',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1510317611-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:36:18Z,user_data=None,user_id='a1e1f77fd6714a9bae1617c2c179169f',uuid=7d2022b0-ae34-41c3-b775-eca57874dc3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.304 248514 DEBUG nova.network.os_vif_util [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Converting VIF {"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.305 248514 DEBUG nova.network.os_vif_util [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:40:19,bridge_name='br-int',has_traffic_filtering=True,id=02f6066e-8276-478d-a9f4-b41637656170,network=Network(ea547472-f70b-465a-bb9c-323fe377dc37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f6066e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.306 248514 DEBUG nova.objects.instance [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d2022b0-ae34-41c3-b775-eca57874dc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.343 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:36:28 compute-0 nova_compute[248510]:   <uuid>7d2022b0-ae34-41c3-b775-eca57874dc3d</uuid>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   <name>instance-00000053</name>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-100223357</nova:name>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:36:27</nova:creationTime>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:36:28 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:36:28 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:36:28 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:36:28 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:36:28 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:36:28 compute-0 nova_compute[248510]:         <nova:user uuid="a1e1f77fd6714a9bae1617c2c179169f">tempest-ServerRescueTestJSONUnderV235-1510317611-project-member</nova:user>
Dec 13 08:36:28 compute-0 nova_compute[248510]:         <nova:project uuid="f34aedb6d80843b39686cb02b480702d">tempest-ServerRescueTestJSONUnderV235-1510317611</nova:project>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:36:28 compute-0 nova_compute[248510]:         <nova:port uuid="02f6066e-8276-478d-a9f4-b41637656170">
Dec 13 08:36:28 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <system>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <entry name="serial">7d2022b0-ae34-41c3-b775-eca57874dc3d</entry>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <entry name="uuid">7d2022b0-ae34-41c3-b775-eca57874dc3d</entry>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     </system>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   <os>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   </os>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   <features>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   </features>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7d2022b0-ae34-41c3-b775-eca57874dc3d_disk">
Dec 13 08:36:28 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       </source>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:36:28 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.config">
Dec 13 08:36:28 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       </source>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:36:28 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:19:40:19"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <target dev="tap02f6066e-82"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/console.log" append="off"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <video>
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     </video>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:36:28 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:36:28 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:36:28 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:36:28 compute-0 nova_compute[248510]: </domain>
Dec 13 08:36:28 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.345 248514 DEBUG nova.compute.manager [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Preparing to wait for external event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.346 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquiring lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.346 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.346 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.347 248514 DEBUG nova.virt.libvirt.vif [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:36:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-100223357',display_name='tempest-ServerRescueTestJSONUnderV235-server-100223357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-100223357',id=83,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f34aedb6d80843b39686cb02b480702d',ramdisk_id='',reservation_id='r-4cwg6sq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1510317611',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1510317611-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:36:18Z,user_data=None,user_id='a1e1f77fd6714a9bae1617c2c179169f',uuid=7d2022b0-ae34-41c3-b775-eca57874dc3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.348 248514 DEBUG nova.network.os_vif_util [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Converting VIF {"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.348 248514 DEBUG nova.network.os_vif_util [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:40:19,bridge_name='br-int',has_traffic_filtering=True,id=02f6066e-8276-478d-a9f4-b41637656170,network=Network(ea547472-f70b-465a-bb9c-323fe377dc37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f6066e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.349 248514 DEBUG os_vif [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:40:19,bridge_name='br-int',has_traffic_filtering=True,id=02f6066e-8276-478d-a9f4-b41637656170,network=Network(ea547472-f70b-465a-bb9c-323fe377dc37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f6066e-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.349 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.350 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.350 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.354 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.355 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02f6066e-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.356 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02f6066e-82, col_values=(('external_ids', {'iface-id': '02f6066e-8276-478d-a9f4-b41637656170', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:40:19', 'vm-uuid': '7d2022b0-ae34-41c3-b775-eca57874dc3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.358 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:28 compute-0 NetworkManager[50376]: <info>  [1765614988.3592] manager: (tap02f6066e-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.360 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.366 248514 INFO os_vif [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:40:19,bridge_name='br-int',has_traffic_filtering=True,id=02f6066e-8276-478d-a9f4-b41637656170,network=Network(ea547472-f70b-465a-bb9c-323fe377dc37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f6066e-82')
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.427 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.427 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.427 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] No VIF found with MAC fa:16:3e:19:40:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.428 248514 INFO nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Using config drive
Dec 13 08:36:28 compute-0 nova_compute[248510]: 2025-12-13 08:36:28.451 248514 DEBUG nova.storage.rbd_utils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:28 compute-0 ceph-mon[76537]: pgmap v2176: 321 pgs: 321 active+clean; 134 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 08:36:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/961274276' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.025 248514 INFO nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Creating config drive at /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/disk.config
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.033 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpipheu8ki execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.183 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpipheu8ki" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.204 248514 DEBUG nova.storage.rbd_utils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.208 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/disk.config 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.335 248514 DEBUG oslo_concurrency.processutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/disk.config 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.337 248514 INFO nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Deleting local config drive /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/disk.config because it was imported into RBD.
Dec 13 08:36:29 compute-0 kernel: tap02f6066e-82: entered promiscuous mode
Dec 13 08:36:29 compute-0 NetworkManager[50376]: <info>  [1765614989.3833] manager: (tap02f6066e-82): new Tun device (/org/freedesktop/NetworkManager/Devices/355)
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.437 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:29 compute-0 ovn_controller[148476]: 2025-12-13T08:36:29Z|00818|binding|INFO|Claiming lport 02f6066e-8276-478d-a9f4-b41637656170 for this chassis.
Dec 13 08:36:29 compute-0 ovn_controller[148476]: 2025-12-13T08:36:29Z|00819|binding|INFO|02f6066e-8276-478d-a9f4-b41637656170: Claiming fa:16:3e:19:40:19 10.100.0.7
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.440 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:29 compute-0 ovn_controller[148476]: 2025-12-13T08:36:29Z|00820|binding|INFO|Setting lport 02f6066e-8276-478d-a9f4-b41637656170 ovn-installed in OVS
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.456 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:29 compute-0 systemd-udevd[329509]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:36:29 compute-0 systemd-machined[210538]: New machine qemu-102-instance-00000053.
Dec 13 08:36:29 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-00000053.
Dec 13 08:36:29 compute-0 NetworkManager[50376]: <info>  [1765614989.4955] device (tap02f6066e-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:36:29 compute-0 NetworkManager[50376]: <info>  [1765614989.4967] device (tap02f6066e-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:36:29 compute-0 podman[329488]: 2025-12-13 08:36:29.537107024 +0000 UTC m=+0.070125932 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Dec 13 08:36:29 compute-0 podman[329487]: 2025-12-13 08:36:29.544708343 +0000 UTC m=+0.070097832 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Dec 13 08:36:29 compute-0 podman[329486]: 2025-12-13 08:36:29.591934006 +0000 UTC m=+0.119700124 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:36:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 137 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.820 248514 DEBUG nova.network.neutron [req-535e8744-8d81-4330-80e3-edebbba35c02 req-9f7601d7-35c3-46fd-82c6-637f054ec952 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updated VIF entry in instance network info cache for port 02f6066e-8276-478d-a9f4-b41637656170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.822 248514 DEBUG nova.network.neutron [req-535e8744-8d81-4330-80e3-edebbba35c02 req-9f7601d7-35c3-46fd-82c6-637f054ec952 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updating instance_info_cache with network_info: [{"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.980 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614989.9791086, 7d2022b0-ae34-41c3-b775-eca57874dc3d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:36:29 compute-0 nova_compute[248510]: 2025-12-13 08:36:29.981 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] VM Started (Lifecycle Event)
Dec 13 08:36:30 compute-0 ovn_controller[148476]: 2025-12-13T08:36:30Z|00821|binding|INFO|Setting lport 02f6066e-8276-478d-a9f4-b41637656170 up in Southbound
Dec 13 08:36:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:30.099 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:40:19 10.100.0.7'], port_security=['fa:16:3e:19:40:19 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7d2022b0-ae34-41c3-b775-eca57874dc3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea547472-f70b-465a-bb9c-323fe377dc37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f34aedb6d80843b39686cb02b480702d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '337b4fd8-7407-482e-af42-2202a65041d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09a270ea-9706-49f6-934c-15bff6be3cec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=02f6066e-8276-478d-a9f4-b41637656170) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:36:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:30.101 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 02f6066e-8276-478d-a9f4-b41637656170 in datapath ea547472-f70b-465a-bb9c-323fe377dc37 bound to our chassis
Dec 13 08:36:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:30.102 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ea547472-f70b-465a-bb9c-323fe377dc37 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:36:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:30.104 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[89a8c8d6-3139-4f29-b485-129ef7e8755d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.129 248514 DEBUG oslo_concurrency.lockutils [req-535e8744-8d81-4330-80e3-edebbba35c02 req-9f7601d7-35c3-46fd-82c6-637f054ec952 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.132 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.138 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614989.9795198, 7d2022b0-ae34-41c3-b775-eca57874dc3d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.139 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] VM Paused (Lifecycle Event)
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.183 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.187 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.502 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.592 248514 DEBUG nova.compute.manager [req-9c9c6c18-205a-4da8-bc44-0fe1099de22f req-e6b0673a-be7d-4ef4-952e-eb0faf2bf43e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.592 248514 DEBUG oslo_concurrency.lockutils [req-9c9c6c18-205a-4da8-bc44-0fe1099de22f req-e6b0673a-be7d-4ef4-952e-eb0faf2bf43e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.592 248514 DEBUG oslo_concurrency.lockutils [req-9c9c6c18-205a-4da8-bc44-0fe1099de22f req-e6b0673a-be7d-4ef4-952e-eb0faf2bf43e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.593 248514 DEBUG oslo_concurrency.lockutils [req-9c9c6c18-205a-4da8-bc44-0fe1099de22f req-e6b0673a-be7d-4ef4-952e-eb0faf2bf43e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.593 248514 DEBUG nova.compute.manager [req-9c9c6c18-205a-4da8-bc44-0fe1099de22f req-e6b0673a-be7d-4ef4-952e-eb0faf2bf43e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Processing event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.593 248514 DEBUG nova.compute.manager [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.596 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614990.5964923, 7d2022b0-ae34-41c3-b775-eca57874dc3d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.596 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] VM Resumed (Lifecycle Event)
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.598 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.600 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.603 248514 INFO nova.virt.libvirt.driver [-] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Instance spawned successfully.
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.604 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.641 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.644 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.692 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.692 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.693 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.693 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.694 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.695 248514 DEBUG nova.virt.libvirt.driver [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.709 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.821 248514 INFO nova.compute.manager [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Took 12.18 seconds to spawn the instance on the hypervisor.
Dec 13 08:36:30 compute-0 nova_compute[248510]: 2025-12-13 08:36:30.822 248514 DEBUG nova.compute.manager [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:30 compute-0 ovn_controller[148476]: 2025-12-13T08:36:30Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:db:37 10.100.0.7
Dec 13 08:36:30 compute-0 ovn_controller[148476]: 2025-12-13T08:36:30Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:db:37 10.100.0.7
Dec 13 08:36:30 compute-0 ceph-mon[76537]: pgmap v2177: 321 pgs: 321 active+clean; 137 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Dec 13 08:36:31 compute-0 nova_compute[248510]: 2025-12-13 08:36:31.147 248514 INFO nova.compute.manager [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Took 13.64 seconds to build instance.
Dec 13 08:36:31 compute-0 nova_compute[248510]: 2025-12-13 08:36:31.173 248514 DEBUG oslo_concurrency.lockutils [None req-1655cdb7-5e9e-4e03-920b-8bb8d4cee705 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:36:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 146 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 117 op/s
Dec 13 08:36:32 compute-0 ceph-mon[76537]: pgmap v2178: 321 pgs: 321 active+clean; 146 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 117 op/s
Dec 13 08:36:33 compute-0 nova_compute[248510]: 2025-12-13 08:36:33.056 248514 DEBUG nova.compute.manager [req-a655ba2a-9f3b-43d0-8c54-d134a41e1dd1 req-c86e1520-0a27-42c4-8c65-75e2a34c9eaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:33 compute-0 nova_compute[248510]: 2025-12-13 08:36:33.056 248514 DEBUG oslo_concurrency.lockutils [req-a655ba2a-9f3b-43d0-8c54-d134a41e1dd1 req-c86e1520-0a27-42c4-8c65-75e2a34c9eaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:33 compute-0 nova_compute[248510]: 2025-12-13 08:36:33.056 248514 DEBUG oslo_concurrency.lockutils [req-a655ba2a-9f3b-43d0-8c54-d134a41e1dd1 req-c86e1520-0a27-42c4-8c65-75e2a34c9eaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:33 compute-0 nova_compute[248510]: 2025-12-13 08:36:33.057 248514 DEBUG oslo_concurrency.lockutils [req-a655ba2a-9f3b-43d0-8c54-d134a41e1dd1 req-c86e1520-0a27-42c4-8c65-75e2a34c9eaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:33 compute-0 nova_compute[248510]: 2025-12-13 08:36:33.057 248514 DEBUG nova.compute.manager [req-a655ba2a-9f3b-43d0-8c54-d134a41e1dd1 req-c86e1520-0a27-42c4-8c65-75e2a34c9eaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] No waiting events found dispatching network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:36:33 compute-0 nova_compute[248510]: 2025-12-13 08:36:33.057 248514 WARNING nova.compute.manager [req-a655ba2a-9f3b-43d0-8c54-d134a41e1dd1 req-c86e1520-0a27-42c4-8c65-75e2a34c9eaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received unexpected event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 for instance with vm_state active and task_state None.
Dec 13 08:36:33 compute-0 nova_compute[248510]: 2025-12-13 08:36:33.358 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:33 compute-0 nova_compute[248510]: 2025-12-13 08:36:33.516 248514 INFO nova.compute.manager [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Rescuing
Dec 13 08:36:33 compute-0 nova_compute[248510]: 2025-12-13 08:36:33.517 248514 DEBUG oslo_concurrency.lockutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquiring lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:36:33 compute-0 nova_compute[248510]: 2025-12-13 08:36:33.517 248514 DEBUG oslo_concurrency.lockutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquired lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:36:33 compute-0 nova_compute[248510]: 2025-12-13 08:36:33.517 248514 DEBUG nova.network.neutron [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:36:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 160 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 106 op/s
Dec 13 08:36:34 compute-0 ceph-mon[76537]: pgmap v2179: 321 pgs: 321 active+clean; 160 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 106 op/s
Dec 13 08:36:35 compute-0 nova_compute[248510]: 2025-12-13 08:36:35.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Dec 13 08:36:35 compute-0 nova_compute[248510]: 2025-12-13 08:36:35.974 248514 DEBUG nova.network.neutron [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updating instance_info_cache with network_info: [{"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:36:36 compute-0 nova_compute[248510]: 2025-12-13 08:36:36.035 248514 DEBUG oslo_concurrency.lockutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Releasing lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:36:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:36:36 compute-0 nova_compute[248510]: 2025-12-13 08:36:36.689 248514 DEBUG nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:36:37 compute-0 ceph-mon[76537]: pgmap v2180: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Dec 13 08:36:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 13 08:36:38 compute-0 nova_compute[248510]: 2025-12-13 08:36:38.023 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:38 compute-0 ceph-mon[76537]: pgmap v2181: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 13 08:36:38 compute-0 nova_compute[248510]: 2025-12-13 08:36:38.360 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Dec 13 08:36:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:36:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:36:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:36:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:36:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:36:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:36:40 compute-0 nova_compute[248510]: 2025-12-13 08:36:40.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:40 compute-0 ceph-mon[76537]: pgmap v2182: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.135 248514 DEBUG oslo_concurrency.lockutils [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Acquiring lock "e6042583-fb13-4fe5-9486-82559f893be6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.135 248514 DEBUG oslo_concurrency.lockutils [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.136 248514 DEBUG oslo_concurrency.lockutils [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Acquiring lock "e6042583-fb13-4fe5-9486-82559f893be6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.136 248514 DEBUG oslo_concurrency.lockutils [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.136 248514 DEBUG oslo_concurrency.lockutils [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.138 248514 INFO nova.compute.manager [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Terminating instance
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.139 248514 DEBUG nova.compute.manager [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:36:41 compute-0 kernel: tap295039e3-7d (unregistering): left promiscuous mode
Dec 13 08:36:41 compute-0 NetworkManager[50376]: <info>  [1765615001.3437] device (tap295039e3-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.353 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:41 compute-0 ovn_controller[148476]: 2025-12-13T08:36:41Z|00822|binding|INFO|Releasing lport 295039e3-7d4e-443f-95b9-6461d093e3b8 from this chassis (sb_readonly=0)
Dec 13 08:36:41 compute-0 ovn_controller[148476]: 2025-12-13T08:36:41Z|00823|binding|INFO|Setting lport 295039e3-7d4e-443f-95b9-6461d093e3b8 down in Southbound
Dec 13 08:36:41 compute-0 ovn_controller[148476]: 2025-12-13T08:36:41Z|00824|binding|INFO|Removing iface tap295039e3-7d ovn-installed in OVS
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:41.360 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:db:37 10.100.0.7'], port_security=['fa:16:3e:c0:db:37 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e6042583-fb13-4fe5-9486-82559f893be6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c233f790-2413-4fec-b315-717389cc1f77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c8a55bacc994f02b324ecf72d6f8ae1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc8123e6-4496-4224-a491-533f7737dbbd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ccf86f8-cdd1-459c-b09d-a4b6ca014d0e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=295039e3-7d4e-443f-95b9-6461d093e3b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:36:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:41.361 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 295039e3-7d4e-443f-95b9-6461d093e3b8 in datapath c233f790-2413-4fec-b315-717389cc1f77 unbound from our chassis
Dec 13 08:36:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:41.362 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c233f790-2413-4fec-b315-717389cc1f77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:36:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:41.364 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[379284ef-51c6-4622-9c5c-d4e078244a05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:41.365 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c233f790-2413-4fec-b315-717389cc1f77 namespace which is not needed anymore
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.383 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:41 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000052.scope: Deactivated successfully.
Dec 13 08:36:41 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000052.scope: Consumed 12.933s CPU time.
Dec 13 08:36:41 compute-0 systemd-machined[210538]: Machine qemu-101-instance-00000052 terminated.
Dec 13 08:36:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:36:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:41.469 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.586 248514 INFO nova.virt.libvirt.driver [-] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Instance destroyed successfully.
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.587 248514 DEBUG nova.objects.instance [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lazy-loading 'resources' on Instance uuid e6042583-fb13-4fe5-9486-82559f893be6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.624 248514 DEBUG nova.virt.libvirt.vif [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:35:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-772395811',display_name='tempest-ServersTestManualDisk-server-772395811',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-772395811',id=82,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHllh+bNrxKLd6OWmyEtu3okvMAsImJ3irsRW5TQfXeQ5OmWWf+5uRaN0GBizwMjAgm6PymgOHcLbrpQ6fhr2d56uE+hQMJeLTTjfvxGuHauuGOBoMSMw9/U72Yhc3pfXQ==',key_name='tempest-keypair-763525040',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:36:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0c8a55bacc994f02b324ecf72d6f8ae1',ramdisk_id='',reservation_id='r-0k3qejtx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-80053177',owner_user_name='tempest-ServersTestManualDisk-80053177-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:36:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71a2ea2f158b43a09b4eb1da303184d5',uuid=e6042583-fb13-4fe5-9486-82559f893be6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "295039e3-7d4e-443f-95b9-6461d093e3b8", "address": "fa:16:3e:c0:db:37", "network": {"id": "c233f790-2413-4fec-b315-717389cc1f77", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1380420184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8a55bacc994f02b324ecf72d6f8ae1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295039e3-7d", "ovs_interfaceid": "295039e3-7d4e-443f-95b9-6461d093e3b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.624 248514 DEBUG nova.network.os_vif_util [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Converting VIF {"id": "295039e3-7d4e-443f-95b9-6461d093e3b8", "address": "fa:16:3e:c0:db:37", "network": {"id": "c233f790-2413-4fec-b315-717389cc1f77", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1380420184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8a55bacc994f02b324ecf72d6f8ae1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295039e3-7d", "ovs_interfaceid": "295039e3-7d4e-443f-95b9-6461d093e3b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.625 248514 DEBUG nova.network.os_vif_util [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:db:37,bridge_name='br-int',has_traffic_filtering=True,id=295039e3-7d4e-443f-95b9-6461d093e3b8,network=Network(c233f790-2413-4fec-b315-717389cc1f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295039e3-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.626 248514 DEBUG os_vif [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:db:37,bridge_name='br-int',has_traffic_filtering=True,id=295039e3-7d4e-443f-95b9-6461d093e3b8,network=Network(c233f790-2413-4fec-b315-717389cc1f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295039e3-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.628 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.628 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap295039e3-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 132 op/s
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.748 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.752 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.757 248514 INFO os_vif [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:db:37,bridge_name='br-int',has_traffic_filtering=True,id=295039e3-7d4e-443f-95b9-6461d093e3b8,network=Network(c233f790-2413-4fec-b315-717389cc1f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295039e3-7d')
Dec 13 08:36:41 compute-0 nova_compute[248510]: 2025-12-13 08:36:41.844 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:41 compute-0 neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77[328557]: [NOTICE]   (328561) : haproxy version is 2.8.14-c23fe91
Dec 13 08:36:41 compute-0 neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77[328557]: [NOTICE]   (328561) : path to executable is /usr/sbin/haproxy
Dec 13 08:36:41 compute-0 neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77[328557]: [WARNING]  (328561) : Exiting Master process...
Dec 13 08:36:41 compute-0 neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77[328557]: [ALERT]    (328561) : Current worker (328563) exited with code 143 (Terminated)
Dec 13 08:36:41 compute-0 neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77[328557]: [WARNING]  (328561) : All workers exited. Exiting... (0)
Dec 13 08:36:41 compute-0 systemd[1]: libpod-2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0.scope: Deactivated successfully.
Dec 13 08:36:41 compute-0 podman[329624]: 2025-12-13 08:36:41.933518075 +0000 UTC m=+0.476608577 container died 2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 08:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0-userdata-shm.mount: Deactivated successfully.
Dec 13 08:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-98a7a84e49fd32eaf99e2064b4fc6e94e067df1c9e6d00ecb45004d62e706345-merged.mount: Deactivated successfully.
Dec 13 08:36:42 compute-0 podman[329624]: 2025-12-13 08:36:42.603760469 +0000 UTC m=+1.146850981 container cleanup 2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 08:36:42 compute-0 systemd[1]: libpod-conmon-2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0.scope: Deactivated successfully.
Dec 13 08:36:43 compute-0 ceph-mon[76537]: pgmap v2183: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 132 op/s
Dec 13 08:36:43 compute-0 podman[329680]: 2025-12-13 08:36:43.619541255 +0000 UTC m=+0.988478369 container remove 2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:36:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:43.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2a6b88-5f73-4597-b5e6-ddeccab86493]: (4, ('Sat Dec 13 08:36:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77 (2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0)\n2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0\nSat Dec 13 08:36:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c233f790-2413-4fec-b315-717389cc1f77 (2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0)\n2854a6e4e817f754e96c15839745becf34f44f89c4cbcce0ba6a838416e9b2a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:43.631 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79a9dcac-9565-4065-99d3-c95f349d5c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:43.634 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc233f790-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:43 compute-0 nova_compute[248510]: 2025-12-13 08:36:43.636 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:43 compute-0 kernel: tapc233f790-20: left promiscuous mode
Dec 13 08:36:43 compute-0 nova_compute[248510]: 2025-12-13 08:36:43.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:43 compute-0 nova_compute[248510]: 2025-12-13 08:36:43.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:43.655 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d124347a-b4fd-443a-a07d-e9fc1da9eeb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:43.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee71193-08e6-4a43-813b-685ce4873538]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:43.673 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0124b34f-42f3-4a65-81bd-b3c5f3786c4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 174 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 119 op/s
Dec 13 08:36:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:43.694 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e3de6022-2db1-4dc0-8d1d-d31d0164024b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755449, 'reachable_time': 39554, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329696, 'error': None, 'target': 'ovnmeta-c233f790-2413-4fec-b315-717389cc1f77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:43.698 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c233f790-2413-4fec-b315-717389cc1f77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:36:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:43.699 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[8bce4196-1f0f-48d7-bfa0-caed5b6e7d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dc233f790\x2d2413\x2d4fec\x2db315\x2d717389cc1f77.mount: Deactivated successfully.
Dec 13 08:36:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:43.700 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:36:44 compute-0 ceph-mon[76537]: pgmap v2184: 321 pgs: 321 active+clean; 174 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 119 op/s
Dec 13 08:36:44 compute-0 nova_compute[248510]: 2025-12-13 08:36:44.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:36:45 compute-0 nova_compute[248510]: 2025-12-13 08:36:45.609 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 149 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.9 MiB/s wr, 111 op/s
Dec 13 08:36:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:36:46 compute-0 nova_compute[248510]: 2025-12-13 08:36:46.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:46 compute-0 nova_compute[248510]: 2025-12-13 08:36:46.797 248514 DEBUG nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:36:47 compute-0 ceph-mon[76537]: pgmap v2185: 321 pgs: 321 active+clean; 149 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.9 MiB/s wr, 111 op/s
Dec 13 08:36:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 149 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 1.9 MiB/s wr, 40 op/s
Dec 13 08:36:47 compute-0 nova_compute[248510]: 2025-12-13 08:36:47.754 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Acquiring lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:47 compute-0 nova_compute[248510]: 2025-12-13 08:36:47.757 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:47 compute-0 nova_compute[248510]: 2025-12-13 08:36:47.816 248514 DEBUG nova.compute.manager [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:36:48 compute-0 nova_compute[248510]: 2025-12-13 08:36:48.195 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:48 compute-0 nova_compute[248510]: 2025-12-13 08:36:48.196 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:48 compute-0 nova_compute[248510]: 2025-12-13 08:36:48.206 248514 DEBUG nova.virt.hardware [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:36:48 compute-0 nova_compute[248510]: 2025-12-13 08:36:48.206 248514 INFO nova.compute.claims [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:36:48 compute-0 nova_compute[248510]: 2025-12-13 08:36:48.415 248514 DEBUG nova.compute.manager [req-ed911ba5-f159-4741-9500-d6c3be6b1819 req-d5c0fc62-7334-4add-b42d-8a3c9051ed0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Received event network-vif-unplugged-295039e3-7d4e-443f-95b9-6461d093e3b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:48 compute-0 nova_compute[248510]: 2025-12-13 08:36:48.416 248514 DEBUG oslo_concurrency.lockutils [req-ed911ba5-f159-4741-9500-d6c3be6b1819 req-d5c0fc62-7334-4add-b42d-8a3c9051ed0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e6042583-fb13-4fe5-9486-82559f893be6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:48 compute-0 nova_compute[248510]: 2025-12-13 08:36:48.416 248514 DEBUG oslo_concurrency.lockutils [req-ed911ba5-f159-4741-9500-d6c3be6b1819 req-d5c0fc62-7334-4add-b42d-8a3c9051ed0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:48 compute-0 nova_compute[248510]: 2025-12-13 08:36:48.417 248514 DEBUG oslo_concurrency.lockutils [req-ed911ba5-f159-4741-9500-d6c3be6b1819 req-d5c0fc62-7334-4add-b42d-8a3c9051ed0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:48 compute-0 nova_compute[248510]: 2025-12-13 08:36:48.417 248514 DEBUG nova.compute.manager [req-ed911ba5-f159-4741-9500-d6c3be6b1819 req-d5c0fc62-7334-4add-b42d-8a3c9051ed0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] No waiting events found dispatching network-vif-unplugged-295039e3-7d4e-443f-95b9-6461d093e3b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:36:48 compute-0 nova_compute[248510]: 2025-12-13 08:36:48.418 248514 DEBUG nova.compute.manager [req-ed911ba5-f159-4741-9500-d6c3be6b1819 req-d5c0fc62-7334-4add-b42d-8a3c9051ed0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Received event network-vif-unplugged-295039e3-7d4e-443f-95b9-6461d093e3b8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:36:48 compute-0 nova_compute[248510]: 2025-12-13 08:36:48.544 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:48 compute-0 nova_compute[248510]: 2025-12-13 08:36:48.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:36:48 compute-0 ceph-mon[76537]: pgmap v2186: 321 pgs: 321 active+clean; 149 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 1.9 MiB/s wr, 40 op/s
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.121 248514 INFO nova.virt.libvirt.driver [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Deleting instance files /var/lib/nova/instances/e6042583-fb13-4fe5-9486-82559f893be6_del
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.122 248514 INFO nova.virt.libvirt.driver [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Deletion of /var/lib/nova/instances/e6042583-fb13-4fe5-9486-82559f893be6_del complete
Dec 13 08:36:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:36:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/48092823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.208 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.216 248514 DEBUG nova.compute.provider_tree [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.245 248514 DEBUG nova.scheduler.client.report [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.291 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.291 248514 DEBUG nova.compute.manager [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.296 248514 INFO nova.compute.manager [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Took 8.16 seconds to destroy the instance on the hypervisor.
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.297 248514 DEBUG oslo.service.loopingcall [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.297 248514 DEBUG nova.compute.manager [-] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.297 248514 DEBUG nova.network.neutron [-] [instance: e6042583-fb13-4fe5-9486-82559f893be6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.351 248514 DEBUG nova.compute.manager [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.352 248514 DEBUG nova.network.neutron [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.375 248514 INFO nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.396 248514 DEBUG nova.compute.manager [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.505 248514 DEBUG nova.compute.manager [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.507 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.507 248514 INFO nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Creating image(s)
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.528 248514 DEBUG nova.storage.rbd_utils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] rbd image 2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.557 248514 DEBUG nova.storage.rbd_utils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] rbd image 2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.581 248514 DEBUG nova.storage.rbd_utils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] rbd image 2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.585 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.650 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.651 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.651 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.651 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.671 248514 DEBUG nova.storage.rbd_utils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] rbd image 2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.675 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 112 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.758 248514 DEBUG nova.policy [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ac27853f5d944bb9ae5b4c62d1ed69b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a79fa3fc27f544959074a09c80fd20c1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.832 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.833 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.833 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.833 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.833 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:36:49 compute-0 nova_compute[248510]: 2025-12-13 08:36:49.834 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7d2022b0-ae34-41c3-b775-eca57874dc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/48092823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:50 compute-0 nova_compute[248510]: 2025-12-13 08:36:50.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:50 compute-0 nova_compute[248510]: 2025-12-13 08:36:50.680 248514 DEBUG nova.compute.manager [req-82da1b82-67fe-47e8-918b-a7bea63d2463 req-a653924f-9770-436f-9f24-9e47073fb9c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Received event network-vif-plugged-295039e3-7d4e-443f-95b9-6461d093e3b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:50 compute-0 nova_compute[248510]: 2025-12-13 08:36:50.680 248514 DEBUG oslo_concurrency.lockutils [req-82da1b82-67fe-47e8-918b-a7bea63d2463 req-a653924f-9770-436f-9f24-9e47073fb9c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e6042583-fb13-4fe5-9486-82559f893be6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:50 compute-0 nova_compute[248510]: 2025-12-13 08:36:50.681 248514 DEBUG oslo_concurrency.lockutils [req-82da1b82-67fe-47e8-918b-a7bea63d2463 req-a653924f-9770-436f-9f24-9e47073fb9c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:50 compute-0 nova_compute[248510]: 2025-12-13 08:36:50.681 248514 DEBUG oslo_concurrency.lockutils [req-82da1b82-67fe-47e8-918b-a7bea63d2463 req-a653924f-9770-436f-9f24-9e47073fb9c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:50 compute-0 nova_compute[248510]: 2025-12-13 08:36:50.681 248514 DEBUG nova.compute.manager [req-82da1b82-67fe-47e8-918b-a7bea63d2463 req-a653924f-9770-436f-9f24-9e47073fb9c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] No waiting events found dispatching network-vif-plugged-295039e3-7d4e-443f-95b9-6461d093e3b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:36:50 compute-0 nova_compute[248510]: 2025-12-13 08:36:50.681 248514 WARNING nova.compute.manager [req-82da1b82-67fe-47e8-918b-a7bea63d2463 req-a653924f-9770-436f-9f24-9e47073fb9c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Received unexpected event network-vif-plugged-295039e3-7d4e-443f-95b9-6461d093e3b8 for instance with vm_state active and task_state deleting.
Dec 13 08:36:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:50.702 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:51 compute-0 ceph-mon[76537]: pgmap v2187: 321 pgs: 321 active+clean; 112 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Dec 13 08:36:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:36:51 compute-0 nova_compute[248510]: 2025-12-13 08:36:51.424 248514 DEBUG nova.network.neutron [-] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:36:51 compute-0 nova_compute[248510]: 2025-12-13 08:36:51.480 248514 INFO nova.compute.manager [-] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Took 2.18 seconds to deallocate network for instance.
Dec 13 08:36:51 compute-0 nova_compute[248510]: 2025-12-13 08:36:51.485 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.810s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:51 compute-0 nova_compute[248510]: 2025-12-13 08:36:51.551 248514 DEBUG nova.storage.rbd_utils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] resizing rbd image 2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:36:51 compute-0 nova_compute[248510]: 2025-12-13 08:36:51.612 248514 DEBUG oslo_concurrency.lockutils [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:51 compute-0 nova_compute[248510]: 2025-12-13 08:36:51.613 248514 DEBUG oslo_concurrency.lockutils [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 118 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Dec 13 08:36:51 compute-0 nova_compute[248510]: 2025-12-13 08:36:51.790 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:51 compute-0 nova_compute[248510]: 2025-12-13 08:36:51.821 248514 INFO nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Instance shutdown successfully after 15 seconds.
Dec 13 08:36:51 compute-0 kernel: tap02f6066e-82 (unregistering): left promiscuous mode
Dec 13 08:36:51 compute-0 NetworkManager[50376]: <info>  [1765615011.9870] device (tap02f6066e-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:36:51 compute-0 ovn_controller[148476]: 2025-12-13T08:36:51Z|00825|binding|INFO|Releasing lport 02f6066e-8276-478d-a9f4-b41637656170 from this chassis (sb_readonly=0)
Dec 13 08:36:51 compute-0 ovn_controller[148476]: 2025-12-13T08:36:51Z|00826|binding|INFO|Setting lport 02f6066e-8276-478d-a9f4-b41637656170 down in Southbound
Dec 13 08:36:51 compute-0 ovn_controller[148476]: 2025-12-13T08:36:51Z|00827|binding|INFO|Removing iface tap02f6066e-82 ovn-installed in OVS
Dec 13 08:36:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:52.009 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:40:19 10.100.0.7'], port_security=['fa:16:3e:19:40:19 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7d2022b0-ae34-41c3-b775-eca57874dc3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea547472-f70b-465a-bb9c-323fe377dc37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f34aedb6d80843b39686cb02b480702d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '337b4fd8-7407-482e-af42-2202a65041d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09a270ea-9706-49f6-934c-15bff6be3cec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=02f6066e-8276-478d-a9f4-b41637656170) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:36:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:52.011 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 02f6066e-8276-478d-a9f4-b41637656170 in datapath ea547472-f70b-465a-bb9c-323fe377dc37 unbound from our chassis
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:52.013 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ea547472-f70b-465a-bb9c-323fe377dc37 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:36:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:52.014 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3b752f-9674-4bf5-81f8-521da792c67e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.026 248514 DEBUG nova.objects.instance [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lazy-loading 'migration_context' on Instance uuid 2e7e1e09-61a5-4d7b-99f5-23665d0afde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.048 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.048 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Ensure instance console log exists: /var/lib/nova/instances/2e7e1e09-61a5-4d7b-99f5-23665d0afde1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.050 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.051 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:52 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000053.scope: Deactivated successfully.
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.051 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:52 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000053.scope: Consumed 13.109s CPU time.
Dec 13 08:36:52 compute-0 systemd-machined[210538]: Machine qemu-102-instance-00000053 terminated.
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.058 248514 DEBUG nova.network.neutron [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Successfully created port: b270c8f6-1891-4aca-abf4-66636502ddbe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.067 248514 DEBUG oslo_concurrency.processutils [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.242 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.250 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.263 248514 INFO nova.virt.libvirt.driver [-] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Instance destroyed successfully.
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.264 248514 DEBUG nova.objects.instance [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lazy-loading 'numa_topology' on Instance uuid 7d2022b0-ae34-41c3-b775-eca57874dc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.283 248514 INFO nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Attempting rescue
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.284 248514 DEBUG nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.289 248514 DEBUG nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.289 248514 INFO nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Creating image(s)
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.320 248514 DEBUG nova.storage.rbd_utils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.324 248514 DEBUG nova.objects.instance [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7d2022b0-ae34-41c3-b775-eca57874dc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.359 248514 DEBUG nova.storage.rbd_utils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.379 248514 DEBUG nova.storage.rbd_utils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.382 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:52 compute-0 ceph-mon[76537]: pgmap v2188: 321 pgs: 321 active+clean; 118 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.424 248514 DEBUG nova.compute.manager [req-56529926-5598-40ed-9b2c-c281c1846b3e req-a52c8f26-265b-4993-a1bf-3f384fd31927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-vif-unplugged-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.425 248514 DEBUG oslo_concurrency.lockutils [req-56529926-5598-40ed-9b2c-c281c1846b3e req-a52c8f26-265b-4993-a1bf-3f384fd31927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.425 248514 DEBUG oslo_concurrency.lockutils [req-56529926-5598-40ed-9b2c-c281c1846b3e req-a52c8f26-265b-4993-a1bf-3f384fd31927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.425 248514 DEBUG oslo_concurrency.lockutils [req-56529926-5598-40ed-9b2c-c281c1846b3e req-a52c8f26-265b-4993-a1bf-3f384fd31927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.426 248514 DEBUG nova.compute.manager [req-56529926-5598-40ed-9b2c-c281c1846b3e req-a52c8f26-265b-4993-a1bf-3f384fd31927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] No waiting events found dispatching network-vif-unplugged-02f6066e-8276-478d-a9f4-b41637656170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.426 248514 WARNING nova.compute.manager [req-56529926-5598-40ed-9b2c-c281c1846b3e req-a52c8f26-265b-4993-a1bf-3f384fd31927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received unexpected event network-vif-unplugged-02f6066e-8276-478d-a9f4-b41637656170 for instance with vm_state active and task_state rescuing.
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.477 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.478 248514 DEBUG oslo_concurrency.lockutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.478 248514 DEBUG oslo_concurrency.lockutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.479 248514 DEBUG oslo_concurrency.lockutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.504 248514 DEBUG nova.storage.rbd_utils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.508 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:36:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3818232300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.758 248514 DEBUG oslo_concurrency.processutils [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.691s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.765 248514 DEBUG nova.compute.provider_tree [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.889 248514 DEBUG nova.compute.manager [req-26449695-8926-4be7-923b-b3ee26942e01 req-72e173eb-b38f-4813-8b9b-090fa998c843 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Received event network-vif-deleted-295039e3-7d4e-443f-95b9-6461d093e3b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.896 248514 DEBUG nova.scheduler.client.report [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.942 248514 DEBUG oslo_concurrency.lockutils [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:52 compute-0 nova_compute[248510]: 2025-12-13 08:36:52.987 248514 INFO nova.scheduler.client.report [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Deleted allocations for instance e6042583-fb13-4fe5-9486-82559f893be6
Dec 13 08:36:53 compute-0 nova_compute[248510]: 2025-12-13 08:36:53.068 248514 DEBUG oslo_concurrency.lockutils [None req-ceb1e4d6-edde-4460-8af8-49219c2c81c9 71a2ea2f158b43a09b4eb1da303184d5 0c8a55bacc994f02b324ecf72d6f8ae1 - - default default] Lock "e6042583-fb13-4fe5-9486-82559f893be6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:53 compute-0 nova_compute[248510]: 2025-12-13 08:36:53.186 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updating instance_info_cache with network_info: [{"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:36:53 compute-0 nova_compute[248510]: 2025-12-13 08:36:53.253 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:36:53 compute-0 nova_compute[248510]: 2025-12-13 08:36:53.254 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:36:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 139 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.6 MiB/s wr, 90 op/s
Dec 13 08:36:53 compute-0 nova_compute[248510]: 2025-12-13 08:36:53.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:36:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3818232300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.410 248514 DEBUG nova.network.neutron [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Successfully updated port: b270c8f6-1891-4aca-abf4-66636502ddbe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.717 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Acquiring lock "refresh_cache-2e7e1e09-61a5-4d7b-99f5-23665d0afde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.717 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Acquired lock "refresh_cache-2e7e1e09-61a5-4d7b-99f5-23665d0afde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.717 248514 DEBUG nova.network.neutron [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.820 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.821 248514 DEBUG nova.objects.instance [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lazy-loading 'migration_context' on Instance uuid 7d2022b0-ae34-41c3-b775-eca57874dc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.842 248514 DEBUG nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.842 248514 DEBUG nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Start _get_guest_xml network_info=[{"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "vif_mac": "fa:16:3e:19:40:19"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.843 248514 DEBUG nova.objects.instance [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lazy-loading 'resources' on Instance uuid 7d2022b0-ae34-41c3-b775-eca57874dc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.865 248514 WARNING nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.873 248514 DEBUG nova.virt.libvirt.host [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.874 248514 DEBUG nova.virt.libvirt.host [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.878 248514 DEBUG nova.virt.libvirt.host [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.879 248514 DEBUG nova.virt.libvirt.host [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.879 248514 DEBUG nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.879 248514 DEBUG nova.virt.hardware [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.880 248514 DEBUG nova.virt.hardware [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.880 248514 DEBUG nova.virt.hardware [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.881 248514 DEBUG nova.virt.hardware [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.881 248514 DEBUG nova.virt.hardware [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.881 248514 DEBUG nova.virt.hardware [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.881 248514 DEBUG nova.virt.hardware [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.882 248514 DEBUG nova.virt.hardware [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.882 248514 DEBUG nova.virt.hardware [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.882 248514 DEBUG nova.virt.hardware [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.883 248514 DEBUG nova.virt.hardware [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.883 248514 DEBUG nova.objects.instance [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7d2022b0-ae34-41c3-b775-eca57874dc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.900 248514 DEBUG nova.compute.manager [req-20c369a4-12e3-4735-98c2-ba50f1216d59 req-6448fa0f-afaf-4cef-8020-ca6223dbb241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.901 248514 DEBUG oslo_concurrency.lockutils [req-20c369a4-12e3-4735-98c2-ba50f1216d59 req-6448fa0f-afaf-4cef-8020-ca6223dbb241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.901 248514 DEBUG oslo_concurrency.lockutils [req-20c369a4-12e3-4735-98c2-ba50f1216d59 req-6448fa0f-afaf-4cef-8020-ca6223dbb241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.902 248514 DEBUG oslo_concurrency.lockutils [req-20c369a4-12e3-4735-98c2-ba50f1216d59 req-6448fa0f-afaf-4cef-8020-ca6223dbb241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.902 248514 DEBUG nova.compute.manager [req-20c369a4-12e3-4735-98c2-ba50f1216d59 req-6448fa0f-afaf-4cef-8020-ca6223dbb241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] No waiting events found dispatching network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.902 248514 WARNING nova.compute.manager [req-20c369a4-12e3-4735-98c2-ba50f1216d59 req-6448fa0f-afaf-4cef-8020-ca6223dbb241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received unexpected event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 for instance with vm_state active and task_state rescuing.
Dec 13 08:36:54 compute-0 nova_compute[248510]: 2025-12-13 08:36:54.923 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:54 compute-0 ceph-mon[76537]: pgmap v2189: 321 pgs: 321 active+clean; 139 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.6 MiB/s wr, 90 op/s
Dec 13 08:36:55 compute-0 nova_compute[248510]: 2025-12-13 08:36:55.004 248514 DEBUG nova.network.neutron [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:36:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:55.418 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:55.418 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:55.419 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:36:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2866438134' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:55 compute-0 nova_compute[248510]: 2025-12-13 08:36:55.478 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:55 compute-0 nova_compute[248510]: 2025-12-13 08:36:55.480 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:55 compute-0 nova_compute[248510]: 2025-12-13 08:36:55.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 186 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Dec 13 08:36:55 compute-0 nova_compute[248510]: 2025-12-13 08:36:55.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:36:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2866438134' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.053 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.054 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.054 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.055 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.055 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:36:56 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/902088388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.337 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.857s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.338 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.585 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615001.5838773, e6042583-fb13-4fe5-9486-82559f893be6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.586 248514 INFO nova.compute.manager [-] [instance: e6042583-fb13-4fe5-9486-82559f893be6] VM Stopped (Lifecycle Event)
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.617 248514 DEBUG nova.compute.manager [None req-12784842-f184-4edb-9df4-deb827a2fc2c - - - - - -] [instance: e6042583-fb13-4fe5-9486-82559f893be6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:36:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:36:56 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4180652669' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.731 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.793 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.799 248514 DEBUG nova.network.neutron [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Updating instance_info_cache with network_info: [{"id": "b270c8f6-1891-4aca-abf4-66636502ddbe", "address": "fa:16:3e:bb:05:2d", "network": {"id": "70b21d89-4836-4553-a91f-f95dc859aef6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-371727464-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79fa3fc27f544959074a09c80fd20c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb270c8f6-18", "ovs_interfaceid": "b270c8f6-1891-4aca-abf4-66636502ddbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.831 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Releasing lock "refresh_cache-2e7e1e09-61a5-4d7b-99f5-23665d0afde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.831 248514 DEBUG nova.compute.manager [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Instance network_info: |[{"id": "b270c8f6-1891-4aca-abf4-66636502ddbe", "address": "fa:16:3e:bb:05:2d", "network": {"id": "70b21d89-4836-4553-a91f-f95dc859aef6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-371727464-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79fa3fc27f544959074a09c80fd20c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb270c8f6-18", "ovs_interfaceid": "b270c8f6-1891-4aca-abf4-66636502ddbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.834 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Start _get_guest_xml network_info=[{"id": "b270c8f6-1891-4aca-abf4-66636502ddbe", "address": "fa:16:3e:bb:05:2d", "network": {"id": "70b21d89-4836-4553-a91f-f95dc859aef6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-371727464-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79fa3fc27f544959074a09c80fd20c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb270c8f6-18", "ovs_interfaceid": "b270c8f6-1891-4aca-abf4-66636502ddbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.840 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.841 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.841 248514 WARNING nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.852 248514 DEBUG nova.virt.libvirt.host [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.853 248514 DEBUG nova.virt.libvirt.host [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.858 248514 DEBUG nova.virt.libvirt.host [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.859 248514 DEBUG nova.virt.libvirt.host [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.859 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.860 248514 DEBUG nova.virt.hardware [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.860 248514 DEBUG nova.virt.hardware [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.860 248514 DEBUG nova.virt.hardware [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.860 248514 DEBUG nova.virt.hardware [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.861 248514 DEBUG nova.virt.hardware [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.861 248514 DEBUG nova.virt.hardware [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.861 248514 DEBUG nova.virt.hardware [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.861 248514 DEBUG nova.virt.hardware [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.862 248514 DEBUG nova.virt.hardware [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.862 248514 DEBUG nova.virt.hardware [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.862 248514 DEBUG nova.virt.hardware [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.866 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:36:56 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1553780559' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.919 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.921 248514 DEBUG nova.virt.libvirt.vif [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:36:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-100223357',display_name='tempest-ServerRescueTestJSONUnderV235-server-100223357',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-100223357',id=83,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:36:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f34aedb6d80843b39686cb02b480702d',ramdisk_id='',reservation_id='r-4cwg6sq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1510317611',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1510317611-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:36:31Z,user_data=None,user_id='a1e1f77fd6714a9bae1617c2c179169f',uuid=7d2022b0-ae34-41c3-b775-eca57874dc3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "vif_mac": "fa:16:3e:19:40:19"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.921 248514 DEBUG nova.network.os_vif_util [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Converting VIF {"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "vif_mac": "fa:16:3e:19:40:19"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.922 248514 DEBUG nova.network.os_vif_util [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:40:19,bridge_name='br-int',has_traffic_filtering=True,id=02f6066e-8276-478d-a9f4-b41637656170,network=Network(ea547472-f70b-465a-bb9c-323fe377dc37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f6066e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.924 248514 DEBUG nova.objects.instance [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d2022b0-ae34-41c3-b775-eca57874dc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.954 248514 DEBUG nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:36:56 compute-0 nova_compute[248510]:   <uuid>7d2022b0-ae34-41c3-b775-eca57874dc3d</uuid>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   <name>instance-00000053</name>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-100223357</nova:name>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:36:54</nova:creationTime>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <nova:user uuid="a1e1f77fd6714a9bae1617c2c179169f">tempest-ServerRescueTestJSONUnderV235-1510317611-project-member</nova:user>
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <nova:project uuid="f34aedb6d80843b39686cb02b480702d">tempest-ServerRescueTestJSONUnderV235-1510317611</nova:project>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <nova:port uuid="02f6066e-8276-478d-a9f4-b41637656170">
Dec 13 08:36:56 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <system>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <entry name="serial">7d2022b0-ae34-41c3-b775-eca57874dc3d</entry>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <entry name="uuid">7d2022b0-ae34-41c3-b775-eca57874dc3d</entry>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     </system>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   <os>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   </os>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   <features>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   </features>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.rescue">
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       </source>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7d2022b0-ae34-41c3-b775-eca57874dc3d_disk">
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       </source>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <target dev="vdb" bus="virtio"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.config.rescue">
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       </source>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:36:56 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:19:40:19"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <target dev="tap02f6066e-82"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/console.log" append="off"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <video>
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     </video>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:36:56 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:36:56 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:36:56 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:36:56 compute-0 nova_compute[248510]: </domain>
Dec 13 08:36:56 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:36:56 compute-0 nova_compute[248510]: 2025-12-13 08:36:56.963 248514 INFO nova.virt.libvirt.driver [-] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Instance destroyed successfully.
Dec 13 08:36:57 compute-0 ceph-mon[76537]: pgmap v2190: 321 pgs: 321 active+clean; 186 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Dec 13 08:36:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/902088388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4180652669' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1553780559' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.050 248514 DEBUG nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.051 248514 DEBUG nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.051 248514 DEBUG nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.051 248514 DEBUG nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] No VIF found with MAC fa:16:3e:19:40:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.052 248514 INFO nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Using config drive
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.073 248514 DEBUG nova.storage.rbd_utils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.097 248514 DEBUG nova.objects.instance [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7d2022b0-ae34-41c3-b775-eca57874dc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.104 248514 DEBUG nova.compute.manager [req-5153740d-8dab-4ab2-af4c-23cc0b94f26d req-bde08b67-1089-4beb-902e-d9792e71388d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Received event network-changed-b270c8f6-1891-4aca-abf4-66636502ddbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.104 248514 DEBUG nova.compute.manager [req-5153740d-8dab-4ab2-af4c-23cc0b94f26d req-bde08b67-1089-4beb-902e-d9792e71388d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Refreshing instance network info cache due to event network-changed-b270c8f6-1891-4aca-abf4-66636502ddbe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.105 248514 DEBUG oslo_concurrency.lockutils [req-5153740d-8dab-4ab2-af4c-23cc0b94f26d req-bde08b67-1089-4beb-902e-d9792e71388d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e7e1e09-61a5-4d7b-99f5-23665d0afde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.105 248514 DEBUG oslo_concurrency.lockutils [req-5153740d-8dab-4ab2-af4c-23cc0b94f26d req-bde08b67-1089-4beb-902e-d9792e71388d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e7e1e09-61a5-4d7b-99f5-23665d0afde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.105 248514 DEBUG nova.network.neutron [req-5153740d-8dab-4ab2-af4c-23cc0b94f26d req-bde08b67-1089-4beb-902e-d9792e71388d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Refreshing network info cache for port b270c8f6-1891-4aca-abf4-66636502ddbe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.145 248514 DEBUG nova.objects.instance [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lazy-loading 'keypairs' on Instance uuid 7d2022b0-ae34-41c3-b775-eca57874dc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.164 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.165 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3895MB free_disk=59.91513375286013GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.165 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.165 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.250 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 7d2022b0-ae34-41c3-b775-eca57874dc3d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.250 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2e7e1e09-61a5-4d7b-99f5-23665d0afde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.251 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.251 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.275 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.313 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.313 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.327 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.363 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.427 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:36:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3873773344' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.473 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.496 248514 DEBUG nova.storage.rbd_utils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] rbd image 2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:57 compute-0 nova_compute[248510]: 2025-12-13 08:36:57.500 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 186 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 233 KiB/s rd, 2.6 MiB/s wr, 81 op/s
Dec 13 08:36:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:36:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/779431303' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.006 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.014 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.038 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.074 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.074 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:36:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/915312604' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.195 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.198 248514 DEBUG nova.virt.libvirt.vif [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:36:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-379002819',display_name='tempest-ServerMetadataTestJSON-server-379002819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-379002819',id=84,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a79fa3fc27f544959074a09c80fd20c1',ramdisk_id='',reservation_id='r-gy7asnk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-416071345',owner_user_name='tempest-ServerMetadataTestJSON-416071345-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:36:49Z,user_data=None,user_id='1ac27853f5d944bb9ae5b4c62d1ed69b',uuid=2e7e1e09-61a5-4d7b-99f5-23665d0afde1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b270c8f6-1891-4aca-abf4-66636502ddbe", "address": "fa:16:3e:bb:05:2d", "network": {"id": "70b21d89-4836-4553-a91f-f95dc859aef6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-371727464-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79fa3fc27f544959074a09c80fd20c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb270c8f6-18", "ovs_interfaceid": "b270c8f6-1891-4aca-abf4-66636502ddbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.199 248514 DEBUG nova.network.os_vif_util [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Converting VIF {"id": "b270c8f6-1891-4aca-abf4-66636502ddbe", "address": "fa:16:3e:bb:05:2d", "network": {"id": "70b21d89-4836-4553-a91f-f95dc859aef6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-371727464-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79fa3fc27f544959074a09c80fd20c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb270c8f6-18", "ovs_interfaceid": "b270c8f6-1891-4aca-abf4-66636502ddbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.200 248514 DEBUG nova.network.os_vif_util [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:05:2d,bridge_name='br-int',has_traffic_filtering=True,id=b270c8f6-1891-4aca-abf4-66636502ddbe,network=Network(70b21d89-4836-4553-a91f-f95dc859aef6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb270c8f6-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.201 248514 DEBUG nova.objects.instance [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e7e1e09-61a5-4d7b-99f5-23665d0afde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:36:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3873773344' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/779431303' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.328 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:36:58 compute-0 nova_compute[248510]:   <uuid>2e7e1e09-61a5-4d7b-99f5-23665d0afde1</uuid>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   <name>instance-00000054</name>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerMetadataTestJSON-server-379002819</nova:name>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:36:56</nova:creationTime>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:36:58 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:36:58 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:36:58 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:36:58 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:36:58 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:36:58 compute-0 nova_compute[248510]:         <nova:user uuid="1ac27853f5d944bb9ae5b4c62d1ed69b">tempest-ServerMetadataTestJSON-416071345-project-member</nova:user>
Dec 13 08:36:58 compute-0 nova_compute[248510]:         <nova:project uuid="a79fa3fc27f544959074a09c80fd20c1">tempest-ServerMetadataTestJSON-416071345</nova:project>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:36:58 compute-0 nova_compute[248510]:         <nova:port uuid="b270c8f6-1891-4aca-abf4-66636502ddbe">
Dec 13 08:36:58 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <system>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <entry name="serial">2e7e1e09-61a5-4d7b-99f5-23665d0afde1</entry>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <entry name="uuid">2e7e1e09-61a5-4d7b-99f5-23665d0afde1</entry>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     </system>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   <os>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   </os>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   <features>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   </features>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk">
Dec 13 08:36:58 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       </source>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:36:58 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk.config">
Dec 13 08:36:58 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       </source>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:36:58 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:bb:05:2d"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <target dev="tapb270c8f6-18"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/2e7e1e09-61a5-4d7b-99f5-23665d0afde1/console.log" append="off"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <video>
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     </video>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:36:58 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:36:58 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:36:58 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:36:58 compute-0 nova_compute[248510]: </domain>
Dec 13 08:36:58 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.330 248514 DEBUG nova.compute.manager [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Preparing to wait for external event network-vif-plugged-b270c8f6-1891-4aca-abf4-66636502ddbe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.330 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Acquiring lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.331 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.331 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.332 248514 DEBUG nova.virt.libvirt.vif [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:36:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-379002819',display_name='tempest-ServerMetadataTestJSON-server-379002819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-379002819',id=84,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a79fa3fc27f544959074a09c80fd20c1',ramdisk_id='',reservation_id='r-gy7asnk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-416071345',owner_user_name='tempest-ServerMetadataTestJSON-416071345-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:36:49Z,user_data=None,user_id='1ac27853f5d944bb9ae5b4c62d1ed69b',uuid=2e7e1e09-61a5-4d7b-99f5-23665d0afde1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b270c8f6-1891-4aca-abf4-66636502ddbe", "address": "fa:16:3e:bb:05:2d", "network": {"id": "70b21d89-4836-4553-a91f-f95dc859aef6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-371727464-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79fa3fc27f544959074a09c80fd20c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb270c8f6-18", "ovs_interfaceid": "b270c8f6-1891-4aca-abf4-66636502ddbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.332 248514 DEBUG nova.network.os_vif_util [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Converting VIF {"id": "b270c8f6-1891-4aca-abf4-66636502ddbe", "address": "fa:16:3e:bb:05:2d", "network": {"id": "70b21d89-4836-4553-a91f-f95dc859aef6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-371727464-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79fa3fc27f544959074a09c80fd20c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb270c8f6-18", "ovs_interfaceid": "b270c8f6-1891-4aca-abf4-66636502ddbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.333 248514 DEBUG nova.network.os_vif_util [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:05:2d,bridge_name='br-int',has_traffic_filtering=True,id=b270c8f6-1891-4aca-abf4-66636502ddbe,network=Network(70b21d89-4836-4553-a91f-f95dc859aef6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb270c8f6-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.333 248514 DEBUG os_vif [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:05:2d,bridge_name='br-int',has_traffic_filtering=True,id=b270c8f6-1891-4aca-abf4-66636502ddbe,network=Network(70b21d89-4836-4553-a91f-f95dc859aef6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb270c8f6-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.334 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.335 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.335 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.338 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.338 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb270c8f6-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.339 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb270c8f6-18, col_values=(('external_ids', {'iface-id': 'b270c8f6-1891-4aca-abf4-66636502ddbe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:05:2d', 'vm-uuid': '2e7e1e09-61a5-4d7b-99f5-23665d0afde1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:36:58 compute-0 NetworkManager[50376]: <info>  [1765615018.3414] manager: (tapb270c8f6-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.342 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.349 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.352 248514 INFO os_vif [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:05:2d,bridge_name='br-int',has_traffic_filtering=True,id=b270c8f6-1891-4aca-abf4-66636502ddbe,network=Network(70b21d89-4836-4553-a91f-f95dc859aef6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb270c8f6-18')
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.471 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.472 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.472 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] No VIF found with MAC fa:16:3e:bb:05:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.473 248514 INFO nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Using config drive
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.496 248514 DEBUG nova.storage.rbd_utils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] rbd image 2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.523 248514 INFO nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Creating config drive at /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/disk.config.rescue
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.528 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpivdhq4r6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.680 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpivdhq4r6" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.708 248514 DEBUG nova.storage.rbd_utils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] rbd image 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:58 compute-0 nova_compute[248510]: 2025-12-13 08:36:58.713 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/disk.config.rescue 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.075 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.076 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.076 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.076 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.239 248514 INFO nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Creating config drive at /var/lib/nova/instances/2e7e1e09-61a5-4d7b-99f5-23665d0afde1/disk.config
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.248 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e7e1e09-61a5-4d7b-99f5-23665d0afde1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaz0op8_q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.392 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e7e1e09-61a5-4d7b-99f5-23665d0afde1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaz0op8_q" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.474 248514 DEBUG nova.storage.rbd_utils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] rbd image 2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.477 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2e7e1e09-61a5-4d7b-99f5-23665d0afde1/disk.config 2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:36:59 compute-0 ceph-mon[76537]: pgmap v2191: 321 pgs: 321 active+clean; 186 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 233 KiB/s rd, 2.6 MiB/s wr, 81 op/s
Dec 13 08:36:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/915312604' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.623 248514 DEBUG oslo_concurrency.processutils [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/disk.config.rescue 7d2022b0-ae34-41c3-b775-eca57874dc3d_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.910s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.625 248514 INFO nova.virt.libvirt.driver [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Deleting local config drive /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d/disk.config.rescue because it was imported into RBD.
Dec 13 08:36:59 compute-0 NetworkManager[50376]: <info>  [1765615019.6914] manager: (tap02f6066e-82): new Tun device (/org/freedesktop/NetworkManager/Devices/357)
Dec 13 08:36:59 compute-0 kernel: tap02f6066e-82: entered promiscuous mode
Dec 13 08:36:59 compute-0 ovn_controller[148476]: 2025-12-13T08:36:59Z|00828|binding|INFO|Claiming lport 02f6066e-8276-478d-a9f4-b41637656170 for this chassis.
Dec 13 08:36:59 compute-0 ovn_controller[148476]: 2025-12-13T08:36:59Z|00829|binding|INFO|02f6066e-8276-478d-a9f4-b41637656170: Claiming fa:16:3e:19:40:19 10.100.0.7
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.696 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 213 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 3.8 MiB/s wr, 93 op/s
Dec 13 08:36:59 compute-0 ovn_controller[148476]: 2025-12-13T08:36:59Z|00830|binding|INFO|Setting lport 02f6066e-8276-478d-a9f4-b41637656170 ovn-installed in OVS
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:59 compute-0 nova_compute[248510]: 2025-12-13 08:36:59.724 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:36:59 compute-0 systemd-udevd[330349]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:36:59 compute-0 NetworkManager[50376]: <info>  [1765615019.7722] device (tap02f6066e-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:36:59 compute-0 NetworkManager[50376]: <info>  [1765615019.7731] device (tap02f6066e-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:36:59 compute-0 podman[330323]: 2025-12-13 08:36:59.804950871 +0000 UTC m=+0.069276801 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 13 08:36:59 compute-0 ovn_controller[148476]: 2025-12-13T08:36:59Z|00831|binding|INFO|Setting lport 02f6066e-8276-478d-a9f4-b41637656170 up in Southbound
Dec 13 08:36:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:59.806 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:40:19 10.100.0.7'], port_security=['fa:16:3e:19:40:19 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7d2022b0-ae34-41c3-b775-eca57874dc3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea547472-f70b-465a-bb9c-323fe377dc37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f34aedb6d80843b39686cb02b480702d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '337b4fd8-7407-482e-af42-2202a65041d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09a270ea-9706-49f6-934c-15bff6be3cec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=02f6066e-8276-478d-a9f4-b41637656170) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:36:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:59.807 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 02f6066e-8276-478d-a9f4-b41637656170 in datapath ea547472-f70b-465a-bb9c-323fe377dc37 bound to our chassis
Dec 13 08:36:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:59.808 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ea547472-f70b-465a-bb9c-323fe377dc37 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:36:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:36:59.808 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b7521d9d-725b-48d9-91d8-7bc427628f3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:36:59 compute-0 podman[330321]: 2025-12-13 08:36:59.81296861 +0000 UTC m=+0.077821694 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 08:36:59 compute-0 systemd-machined[210538]: New machine qemu-103-instance-00000053.
Dec 13 08:36:59 compute-0 podman[330319]: 2025-12-13 08:36:59.83389357 +0000 UTC m=+0.098587990 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:36:59 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-00000053.
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.167 248514 DEBUG nova.compute.manager [req-e07da514-1ccb-44a5-86e6-89ed52b8e388 req-a265ac36-c696-4ec1-8969-ac2161a8733c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.168 248514 DEBUG oslo_concurrency.lockutils [req-e07da514-1ccb-44a5-86e6-89ed52b8e388 req-a265ac36-c696-4ec1-8969-ac2161a8733c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.169 248514 DEBUG oslo_concurrency.lockutils [req-e07da514-1ccb-44a5-86e6-89ed52b8e388 req-a265ac36-c696-4ec1-8969-ac2161a8733c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.169 248514 DEBUG oslo_concurrency.lockutils [req-e07da514-1ccb-44a5-86e6-89ed52b8e388 req-a265ac36-c696-4ec1-8969-ac2161a8733c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.169 248514 DEBUG nova.compute.manager [req-e07da514-1ccb-44a5-86e6-89ed52b8e388 req-a265ac36-c696-4ec1-8969-ac2161a8733c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] No waiting events found dispatching network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.170 248514 WARNING nova.compute.manager [req-e07da514-1ccb-44a5-86e6-89ed52b8e388 req-a265ac36-c696-4ec1-8969-ac2161a8733c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received unexpected event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 for instance with vm_state active and task_state rescuing.
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.342 248514 DEBUG nova.network.neutron [req-5153740d-8dab-4ab2-af4c-23cc0b94f26d req-bde08b67-1089-4beb-902e-d9792e71388d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Updated VIF entry in instance network info cache for port b270c8f6-1891-4aca-abf4-66636502ddbe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.343 248514 DEBUG nova.network.neutron [req-5153740d-8dab-4ab2-af4c-23cc0b94f26d req-bde08b67-1089-4beb-902e-d9792e71388d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Updating instance_info_cache with network_info: [{"id": "b270c8f6-1891-4aca-abf4-66636502ddbe", "address": "fa:16:3e:bb:05:2d", "network": {"id": "70b21d89-4836-4553-a91f-f95dc859aef6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-371727464-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79fa3fc27f544959074a09c80fd20c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb270c8f6-18", "ovs_interfaceid": "b270c8f6-1891-4aca-abf4-66636502ddbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.367 248514 DEBUG oslo_concurrency.lockutils [req-5153740d-8dab-4ab2-af4c-23cc0b94f26d req-bde08b67-1089-4beb-902e-d9792e71388d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e7e1e09-61a5-4d7b-99f5-23665d0afde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.518 248514 DEBUG oslo_concurrency.processutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2e7e1e09-61a5-4d7b-99f5-23665d0afde1/disk.config 2e7e1e09-61a5-4d7b-99f5-23665d0afde1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.519 248514 INFO nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Deleting local config drive /var/lib/nova/instances/2e7e1e09-61a5-4d7b-99f5-23665d0afde1/disk.config because it was imported into RBD.
Dec 13 08:37:00 compute-0 ceph-mon[76537]: pgmap v2192: 321 pgs: 321 active+clean; 213 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 3.8 MiB/s wr, 93 op/s
Dec 13 08:37:00 compute-0 kernel: tapb270c8f6-18: entered promiscuous mode
Dec 13 08:37:00 compute-0 systemd-udevd[330351]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:37:00 compute-0 ovn_controller[148476]: 2025-12-13T08:37:00Z|00832|binding|INFO|Claiming lport b270c8f6-1891-4aca-abf4-66636502ddbe for this chassis.
Dec 13 08:37:00 compute-0 ovn_controller[148476]: 2025-12-13T08:37:00Z|00833|binding|INFO|b270c8f6-1891-4aca-abf4-66636502ddbe: Claiming fa:16:3e:bb:05:2d 10.100.0.14
Dec 13 08:37:00 compute-0 NetworkManager[50376]: <info>  [1765615020.5750] manager: (tapb270c8f6-18): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.582 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:05:2d 10.100.0.14'], port_security=['fa:16:3e:bb:05:2d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2e7e1e09-61a5-4d7b-99f5-23665d0afde1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b21d89-4836-4553-a91f-f95dc859aef6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79fa3fc27f544959074a09c80fd20c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '61dfca8f-7310-431a-84b1-8a81d32b112f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a4fd5b26-8e64-421d-81b0-a9918830587f, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b270c8f6-1891-4aca-abf4-66636502ddbe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.583 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b270c8f6-1891-4aca-abf4-66636502ddbe in datapath 70b21d89-4836-4553-a91f-f95dc859aef6 bound to our chassis
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.584 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 70b21d89-4836-4553-a91f-f95dc859aef6
Dec 13 08:37:00 compute-0 NetworkManager[50376]: <info>  [1765615020.5910] device (tapb270c8f6-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:37:00 compute-0 NetworkManager[50376]: <info>  [1765615020.5922] device (tapb270c8f6-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:37:00 compute-0 ovn_controller[148476]: 2025-12-13T08:37:00Z|00834|binding|INFO|Setting lport b270c8f6-1891-4aca-abf4-66636502ddbe ovn-installed in OVS
Dec 13 08:37:00 compute-0 ovn_controller[148476]: 2025-12-13T08:37:00Z|00835|binding|INFO|Setting lport b270c8f6-1891-4aca-abf4-66636502ddbe up in Southbound
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.597 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc2a519-85f5-4cdd-8272-a6cff282929b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.598 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap70b21d89-41 in ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.600 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap70b21d89-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.601 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78783fc7-1a43-4785-947a-6155bb41fcc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.602 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78965c49-1b7d-4175-8738-2d4194980204]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.614 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[785232b0-3947-4ad8-bcba-4374b5d60f50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 systemd-machined[210538]: New machine qemu-104-instance-00000054.
Dec 13 08:37:00 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-00000054.
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.631 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[797e502e-5ede-4efc-8fef-3cbba3e1c408]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.660 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2a769547-5d1f-489a-a193-73f98503d998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.663 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:00 compute-0 NetworkManager[50376]: <info>  [1765615020.6679] manager: (tap70b21d89-40): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.667 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[504bca12-b0d1-401a-a2af-44a1a7e00cfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.676 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 7d2022b0-ae34-41c3-b775-eca57874dc3d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.677 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615020.675499, 7d2022b0-ae34-41c3-b775-eca57874dc3d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.677 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] VM Resumed (Lifecycle Event)
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.681 248514 DEBUG nova.compute.manager [None req-7dccebf4-1d6c-48c8-8cb8-f7d2e3b9b8a2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.703 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.703 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4cceb8b1-1de1-408a-8752-07a75933ecce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.706 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbb8a04-7487-444c-ad31-a61b32e9d164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.706 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:37:00 compute-0 NetworkManager[50376]: <info>  [1765615020.7298] device (tap70b21d89-40): carrier: link connected
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.733 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[31e49918-ee94-413b-99ff-af342a6ab576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.748 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f09fced-4bc9-4129-ab12-5227c0bbf213]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap70b21d89-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:a3:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759794, 'reachable_time': 15234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330505, 'error': None, 'target': 'ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.751 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.751 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615020.6762059, 7d2022b0-ae34-41c3-b775-eca57874dc3d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.751 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] VM Started (Lifecycle Event)
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.770 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0720985c-5909-41a4-a9b2-f8fc34a05e18]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:a370'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 759794, 'tstamp': 759794}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330506, 'error': None, 'target': 'ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.786 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.790 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.794 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a0ba19-0a99-46e2-8de8-8710c9a70c3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap70b21d89-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:a3:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759794, 'reachable_time': 15234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330507, 'error': None, 'target': 'ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.833 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb22aaf7-826e-40b3-b19c-a1da1cb8d213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[47b08385-f725-40a5-b3a1-1f7bd3cc5fbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.917 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70b21d89-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.917 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.918 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70b21d89-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:00 compute-0 kernel: tap70b21d89-40: entered promiscuous mode
Dec 13 08:37:00 compute-0 NetworkManager[50376]: <info>  [1765615020.9203] manager: (tap70b21d89-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.922 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap70b21d89-40, col_values=(('external_ids', {'iface-id': '506d5671-0efa-4dd3-81d1-72332bf6a6c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:00 compute-0 ovn_controller[148476]: 2025-12-13T08:37:00Z|00836|binding|INFO|Releasing lport 506d5671-0efa-4dd3-81d1-72332bf6a6c4 from this chassis (sb_readonly=0)
Dec 13 08:37:00 compute-0 nova_compute[248510]: 2025-12-13 08:37:00.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.940 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/70b21d89-4836-4553-a91f-f95dc859aef6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/70b21d89-4836-4553-a91f-f95dc859aef6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.940 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1cd892-6a0d-4b04-9b0c-1f5969ca4ecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.942 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-70b21d89-4836-4553-a91f-f95dc859aef6
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/70b21d89-4836-4553-a91f-f95dc859aef6.pid.haproxy
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 70b21d89-4836-4553-a91f-f95dc859aef6
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:37:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:00.943 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6', 'env', 'PROCESS_TAG=haproxy-70b21d89-4836-4553-a91f-f95dc859aef6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/70b21d89-4836-4553-a91f-f95dc859aef6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.242 248514 DEBUG nova.compute.manager [req-930f5301-fc98-4061-9932-d2aadbc082a6 req-e0268ae0-d4a2-447a-8681-3e880930fc2c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Received event network-vif-plugged-b270c8f6-1891-4aca-abf4-66636502ddbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.244 248514 DEBUG oslo_concurrency.lockutils [req-930f5301-fc98-4061-9932-d2aadbc082a6 req-e0268ae0-d4a2-447a-8681-3e880930fc2c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.244 248514 DEBUG oslo_concurrency.lockutils [req-930f5301-fc98-4061-9932-d2aadbc082a6 req-e0268ae0-d4a2-447a-8681-3e880930fc2c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.244 248514 DEBUG oslo_concurrency.lockutils [req-930f5301-fc98-4061-9932-d2aadbc082a6 req-e0268ae0-d4a2-447a-8681-3e880930fc2c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.245 248514 DEBUG nova.compute.manager [req-930f5301-fc98-4061-9932-d2aadbc082a6 req-e0268ae0-d4a2-447a-8681-3e880930fc2c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Processing event network-vif-plugged-b270c8f6-1891-4aca-abf4-66636502ddbe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:37:01 compute-0 podman[330557]: 2025-12-13 08:37:01.296185183 +0000 UTC m=+0.024213022 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:37:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:37:01 compute-0 podman[330557]: 2025-12-13 08:37:01.443587784 +0000 UTC m=+0.171615603 container create f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.514 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615021.514544, 2e7e1e09-61a5-4d7b-99f5-23665d0afde1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.516 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] VM Started (Lifecycle Event)
Dec 13 08:37:01 compute-0 ovn_controller[148476]: 2025-12-13T08:37:01Z|00837|binding|INFO|Releasing lport 506d5671-0efa-4dd3-81d1-72332bf6a6c4 from this chassis (sb_readonly=0)
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.522 248514 DEBUG nova.compute.manager [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.529 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:37:01 compute-0 systemd[1]: Started libpod-conmon-f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2.scope.
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.542 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.551 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Instance spawned successfully.
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.552 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.561 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:37:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a2a44c1c9ed191744a557f3fd46f02bf99a3c068776a090d78379194f89655/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:01 compute-0 ovn_controller[148476]: 2025-12-13T08:37:01Z|00838|binding|INFO|Releasing lport 506d5671-0efa-4dd3-81d1-72332bf6a6c4 from this chassis (sb_readonly=0)
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.663 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.665 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.666 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.667 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.667 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.668 248514 DEBUG nova.virt.libvirt.driver [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:37:01 compute-0 podman[330557]: 2025-12-13 08:37:01.688392503 +0000 UTC m=+0.416420372 container init f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:37:01 compute-0 podman[330557]: 2025-12-13 08:37:01.694571957 +0000 UTC m=+0.422599786 container start f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 08:37:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 213 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 104 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.712 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.713 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615021.5154107, 2e7e1e09-61a5-4d7b-99f5-23665d0afde1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.713 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] VM Paused (Lifecycle Event)
Dec 13 08:37:01 compute-0 neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6[330596]: [NOTICE]   (330600) : New worker (330602) forked
Dec 13 08:37:01 compute-0 neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6[330596]: [NOTICE]   (330600) : Loading success.
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.750 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.754 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615021.5321267, 2e7e1e09-61a5-4d7b-99f5-23665d0afde1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.754 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] VM Resumed (Lifecycle Event)
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.778 248514 INFO nova.compute.manager [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Took 12.27 seconds to spawn the instance on the hypervisor.
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.779 248514 DEBUG nova.compute.manager [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.785 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:01 compute-0 nova_compute[248510]: 2025-12-13 08:37:01.796 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:37:02 compute-0 nova_compute[248510]: 2025-12-13 08:37:02.016 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:37:02 compute-0 nova_compute[248510]: 2025-12-13 08:37:02.066 248514 INFO nova.compute.manager [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Took 13.91 seconds to build instance.
Dec 13 08:37:02 compute-0 nova_compute[248510]: 2025-12-13 08:37:02.142 248514 DEBUG oslo_concurrency.lockutils [None req-66a09d21-acf3-4f75-86be-692ac2cbc39a 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:02 compute-0 nova_compute[248510]: 2025-12-13 08:37:02.791 248514 DEBUG nova.compute.manager [req-7866a1a3-15bb-4825-8c67-a1af3757115e req-75f9f51c-e54f-4379-86d0-5366ec8287be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:02 compute-0 nova_compute[248510]: 2025-12-13 08:37:02.791 248514 DEBUG oslo_concurrency.lockutils [req-7866a1a3-15bb-4825-8c67-a1af3757115e req-75f9f51c-e54f-4379-86d0-5366ec8287be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:02 compute-0 nova_compute[248510]: 2025-12-13 08:37:02.792 248514 DEBUG oslo_concurrency.lockutils [req-7866a1a3-15bb-4825-8c67-a1af3757115e req-75f9f51c-e54f-4379-86d0-5366ec8287be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:02 compute-0 nova_compute[248510]: 2025-12-13 08:37:02.792 248514 DEBUG oslo_concurrency.lockutils [req-7866a1a3-15bb-4825-8c67-a1af3757115e req-75f9f51c-e54f-4379-86d0-5366ec8287be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:02 compute-0 nova_compute[248510]: 2025-12-13 08:37:02.792 248514 DEBUG nova.compute.manager [req-7866a1a3-15bb-4825-8c67-a1af3757115e req-75f9f51c-e54f-4379-86d0-5366ec8287be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] No waiting events found dispatching network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:37:02 compute-0 nova_compute[248510]: 2025-12-13 08:37:02.792 248514 WARNING nova.compute.manager [req-7866a1a3-15bb-4825-8c67-a1af3757115e req-75f9f51c-e54f-4379-86d0-5366ec8287be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received unexpected event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 for instance with vm_state rescued and task_state None.
Dec 13 08:37:02 compute-0 ceph-mon[76537]: pgmap v2193: 321 pgs: 321 active+clean; 213 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 104 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Dec 13 08:37:03 compute-0 nova_compute[248510]: 2025-12-13 08:37:03.338 248514 DEBUG nova.compute.manager [req-28de0371-49cb-497d-b6c4-42d968ecec7d req-d52acacd-cf55-4011-bd8a-3c9e6501dd3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Received event network-vif-plugged-b270c8f6-1891-4aca-abf4-66636502ddbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:03 compute-0 nova_compute[248510]: 2025-12-13 08:37:03.339 248514 DEBUG oslo_concurrency.lockutils [req-28de0371-49cb-497d-b6c4-42d968ecec7d req-d52acacd-cf55-4011-bd8a-3c9e6501dd3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:03 compute-0 nova_compute[248510]: 2025-12-13 08:37:03.339 248514 DEBUG oslo_concurrency.lockutils [req-28de0371-49cb-497d-b6c4-42d968ecec7d req-d52acacd-cf55-4011-bd8a-3c9e6501dd3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:03 compute-0 nova_compute[248510]: 2025-12-13 08:37:03.340 248514 DEBUG oslo_concurrency.lockutils [req-28de0371-49cb-497d-b6c4-42d968ecec7d req-d52acacd-cf55-4011-bd8a-3c9e6501dd3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:03 compute-0 nova_compute[248510]: 2025-12-13 08:37:03.340 248514 DEBUG nova.compute.manager [req-28de0371-49cb-497d-b6c4-42d968ecec7d req-d52acacd-cf55-4011-bd8a-3c9e6501dd3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] No waiting events found dispatching network-vif-plugged-b270c8f6-1891-4aca-abf4-66636502ddbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:37:03 compute-0 nova_compute[248510]: 2025-12-13 08:37:03.340 248514 WARNING nova.compute.manager [req-28de0371-49cb-497d-b6c4-42d968ecec7d req-d52acacd-cf55-4011-bd8a-3c9e6501dd3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Received unexpected event network-vif-plugged-b270c8f6-1891-4aca-abf4-66636502ddbe for instance with vm_state active and task_state None.
Dec 13 08:37:03 compute-0 nova_compute[248510]: 2025-12-13 08:37:03.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 119 op/s
Dec 13 08:37:04 compute-0 nova_compute[248510]: 2025-12-13 08:37:04.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:37:04 compute-0 ceph-mon[76537]: pgmap v2194: 321 pgs: 321 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 119 op/s
Dec 13 08:37:05 compute-0 nova_compute[248510]: 2025-12-13 08:37:05.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 193 op/s
Dec 13 08:37:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:37:06 compute-0 nova_compute[248510]: 2025-12-13 08:37:06.750 248514 DEBUG nova.compute.manager [req-c63dda32-949d-457e-be12-b1374af94f76 req-5ff7ce8f-a399-402a-9a65-f5f8b5c04084 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-changed-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:06 compute-0 nova_compute[248510]: 2025-12-13 08:37:06.752 248514 DEBUG nova.compute.manager [req-c63dda32-949d-457e-be12-b1374af94f76 req-5ff7ce8f-a399-402a-9a65-f5f8b5c04084 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Refreshing instance network info cache due to event network-changed-02f6066e-8276-478d-a9f4-b41637656170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:37:06 compute-0 nova_compute[248510]: 2025-12-13 08:37:06.753 248514 DEBUG oslo_concurrency.lockutils [req-c63dda32-949d-457e-be12-b1374af94f76 req-5ff7ce8f-a399-402a-9a65-f5f8b5c04084 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:37:06 compute-0 nova_compute[248510]: 2025-12-13 08:37:06.754 248514 DEBUG oslo_concurrency.lockutils [req-c63dda32-949d-457e-be12-b1374af94f76 req-5ff7ce8f-a399-402a-9a65-f5f8b5c04084 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:37:06 compute-0 nova_compute[248510]: 2025-12-13 08:37:06.754 248514 DEBUG nova.network.neutron [req-c63dda32-949d-457e-be12-b1374af94f76 req-5ff7ce8f-a399-402a-9a65-f5f8b5c04084 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Refreshing network info cache for port 02f6066e-8276-478d-a9f4-b41637656170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:37:07 compute-0 ceph-mon[76537]: pgmap v2195: 321 pgs: 321 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 193 op/s
Dec 13 08:37:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 161 op/s
Dec 13 08:37:08 compute-0 nova_compute[248510]: 2025-12-13 08:37:08.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:08 compute-0 nova_compute[248510]: 2025-12-13 08:37:08.906 248514 DEBUG nova.network.neutron [req-c63dda32-949d-457e-be12-b1374af94f76 req-5ff7ce8f-a399-402a-9a65-f5f8b5c04084 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updated VIF entry in instance network info cache for port 02f6066e-8276-478d-a9f4-b41637656170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:37:08 compute-0 nova_compute[248510]: 2025-12-13 08:37:08.907 248514 DEBUG nova.network.neutron [req-c63dda32-949d-457e-be12-b1374af94f76 req-5ff7ce8f-a399-402a-9a65-f5f8b5c04084 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updating instance_info_cache with network_info: [{"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:09 compute-0 ceph-mon[76537]: pgmap v2196: 321 pgs: 321 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 161 op/s
Dec 13 08:37:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:37:09
Dec 13 08:37:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:37:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:37:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'backups', 'images', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data']
Dec 13 08:37:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.638 248514 DEBUG nova.compute.manager [req-d23e4062-b4d0-433b-84ba-5a9ca6226785 req-e290abd3-7ca6-4643-89ed-8e5fac269ded 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-changed-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.638 248514 DEBUG nova.compute.manager [req-d23e4062-b4d0-433b-84ba-5a9ca6226785 req-e290abd3-7ca6-4643-89ed-8e5fac269ded 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Refreshing instance network info cache due to event network-changed-02f6066e-8276-478d-a9f4-b41637656170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.638 248514 DEBUG oslo_concurrency.lockutils [req-d23e4062-b4d0-433b-84ba-5a9ca6226785 req-e290abd3-7ca6-4643-89ed-8e5fac269ded 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.659 248514 DEBUG oslo_concurrency.lockutils [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Acquiring lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.660 248514 DEBUG oslo_concurrency.lockutils [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.660 248514 DEBUG oslo_concurrency.lockutils [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Acquiring lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.660 248514 DEBUG oslo_concurrency.lockutils [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.661 248514 DEBUG oslo_concurrency.lockutils [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.662 248514 INFO nova.compute.manager [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Terminating instance
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.663 248514 DEBUG nova.compute.manager [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.664 248514 DEBUG oslo_concurrency.lockutils [req-c63dda32-949d-457e-be12-b1374af94f76 req-5ff7ce8f-a399-402a-9a65-f5f8b5c04084 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.665 248514 DEBUG oslo_concurrency.lockutils [req-d23e4062-b4d0-433b-84ba-5a9ca6226785 req-e290abd3-7ca6-4643-89ed-8e5fac269ded 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.665 248514 DEBUG nova.network.neutron [req-d23e4062-b4d0-433b-84ba-5a9ca6226785 req-e290abd3-7ca6-4643-89ed-8e5fac269ded 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Refreshing network info cache for port 02f6066e-8276-478d-a9f4-b41637656170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:37:09 compute-0 kernel: tapb270c8f6-18 (unregistering): left promiscuous mode
Dec 13 08:37:09 compute-0 NetworkManager[50376]: <info>  [1765615029.7068] device (tapb270c8f6-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:37:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 161 op/s
Dec 13 08:37:09 compute-0 ovn_controller[148476]: 2025-12-13T08:37:09Z|00839|binding|INFO|Releasing lport b270c8f6-1891-4aca-abf4-66636502ddbe from this chassis (sb_readonly=0)
Dec 13 08:37:09 compute-0 ovn_controller[148476]: 2025-12-13T08:37:09Z|00840|binding|INFO|Setting lport b270c8f6-1891-4aca-abf4-66636502ddbe down in Southbound
Dec 13 08:37:09 compute-0 ovn_controller[148476]: 2025-12-13T08:37:09Z|00841|binding|INFO|Removing iface tapb270c8f6-18 ovn-installed in OVS
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:09.750 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:05:2d 10.100.0.14'], port_security=['fa:16:3e:bb:05:2d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2e7e1e09-61a5-4d7b-99f5-23665d0afde1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b21d89-4836-4553-a91f-f95dc859aef6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79fa3fc27f544959074a09c80fd20c1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '61dfca8f-7310-431a-84b1-8a81d32b112f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a4fd5b26-8e64-421d-81b0-a9918830587f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b270c8f6-1891-4aca-abf4-66636502ddbe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:37:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:09.752 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b270c8f6-1891-4aca-abf4-66636502ddbe in datapath 70b21d89-4836-4553-a91f-f95dc859aef6 unbound from our chassis
Dec 13 08:37:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:09.754 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 70b21d89-4836-4553-a91f-f95dc859aef6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:37:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:09.755 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d55f1988-909f-428a-ace9-5f9daed59367]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:09.755 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6 namespace which is not needed anymore
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:09 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000054.scope: Deactivated successfully.
Dec 13 08:37:09 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000054.scope: Consumed 9.010s CPU time.
Dec 13 08:37:09 compute-0 systemd-machined[210538]: Machine qemu-104-instance-00000054 terminated.
Dec 13 08:37:09 compute-0 neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6[330596]: [NOTICE]   (330600) : haproxy version is 2.8.14-c23fe91
Dec 13 08:37:09 compute-0 neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6[330596]: [NOTICE]   (330600) : path to executable is /usr/sbin/haproxy
Dec 13 08:37:09 compute-0 neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6[330596]: [WARNING]  (330600) : Exiting Master process...
Dec 13 08:37:09 compute-0 neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6[330596]: [WARNING]  (330600) : Exiting Master process...
Dec 13 08:37:09 compute-0 neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6[330596]: [ALERT]    (330600) : Current worker (330602) exited with code 143 (Terminated)
Dec 13 08:37:09 compute-0 neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6[330596]: [WARNING]  (330600) : All workers exited. Exiting... (0)
Dec 13 08:37:09 compute-0 systemd[1]: libpod-f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2.scope: Deactivated successfully.
Dec 13 08:37:09 compute-0 conmon[330596]: conmon f0fa7eee5c007353dd6f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2.scope/container/memory.events
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.916 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Instance destroyed successfully.
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.919 248514 DEBUG nova.objects.instance [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lazy-loading 'resources' on Instance uuid 2e7e1e09-61a5-4d7b-99f5-23665d0afde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:37:09 compute-0 podman[330634]: 2025-12-13 08:37:09.927711837 +0000 UTC m=+0.073200509 container died f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.946 248514 DEBUG nova.virt.libvirt.vif [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:36:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-379002819',display_name='tempest-ServerMetadataTestJSON-server-379002819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-379002819',id=84,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:37:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a79fa3fc27f544959074a09c80fd20c1',ramdisk_id='',reservation_id='r-gy7asnk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio
',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-416071345',owner_user_name='tempest-ServerMetadataTestJSON-416071345-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:37:07Z,user_data=None,user_id='1ac27853f5d944bb9ae5b4c62d1ed69b',uuid=2e7e1e09-61a5-4d7b-99f5-23665d0afde1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b270c8f6-1891-4aca-abf4-66636502ddbe", "address": "fa:16:3e:bb:05:2d", "network": {"id": "70b21d89-4836-4553-a91f-f95dc859aef6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-371727464-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79fa3fc27f544959074a09c80fd20c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb270c8f6-18", "ovs_interfaceid": "b270c8f6-1891-4aca-abf4-66636502ddbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.946 248514 DEBUG nova.network.os_vif_util [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Converting VIF {"id": "b270c8f6-1891-4aca-abf4-66636502ddbe", "address": "fa:16:3e:bb:05:2d", "network": {"id": "70b21d89-4836-4553-a91f-f95dc859aef6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-371727464-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79fa3fc27f544959074a09c80fd20c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb270c8f6-18", "ovs_interfaceid": "b270c8f6-1891-4aca-abf4-66636502ddbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.947 248514 DEBUG nova.network.os_vif_util [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:05:2d,bridge_name='br-int',has_traffic_filtering=True,id=b270c8f6-1891-4aca-abf4-66636502ddbe,network=Network(70b21d89-4836-4553-a91f-f95dc859aef6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb270c8f6-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.947 248514 DEBUG os_vif [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:05:2d,bridge_name='br-int',has_traffic_filtering=True,id=b270c8f6-1891-4aca-abf4-66636502ddbe,network=Network(70b21d89-4836-4553-a91f-f95dc859aef6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb270c8f6-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.951 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.951 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb270c8f6-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.955 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:09 compute-0 nova_compute[248510]: 2025-12-13 08:37:09.959 248514 INFO os_vif [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:05:2d,bridge_name='br-int',has_traffic_filtering=True,id=b270c8f6-1891-4aca-abf4-66636502ddbe,network=Network(70b21d89-4836-4553-a91f-f95dc859aef6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb270c8f6-18')
Dec 13 08:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2-userdata-shm.mount: Deactivated successfully.
Dec 13 08:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4a2a44c1c9ed191744a557f3fd46f02bf99a3c068776a090d78379194f89655-merged.mount: Deactivated successfully.
Dec 13 08:37:09 compute-0 podman[330634]: 2025-12-13 08:37:09.97618248 +0000 UTC m=+0.121671152 container cleanup f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Dec 13 08:37:09 compute-0 systemd[1]: libpod-conmon-f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2.scope: Deactivated successfully.
Dec 13 08:37:10 compute-0 podman[330685]: 2025-12-13 08:37:10.054632288 +0000 UTC m=+0.053900769 container remove f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 08:37:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:10.062 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b19c717c-e9e3-4388-a68a-4b763d57fec2]: (4, ('Sat Dec 13 08:37:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6 (f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2)\nf0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2\nSat Dec 13 08:37:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6 (f0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2)\nf0fa7eee5c007353dd6f282381257c2bac7724d253df546f05603d12362f97d2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:10.066 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c272428c-8327-457b-a407-ac8f4fc75c71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:10.067 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70b21d89-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:10 compute-0 nova_compute[248510]: 2025-12-13 08:37:10.069 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:10 compute-0 kernel: tap70b21d89-40: left promiscuous mode
Dec 13 08:37:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:10.082 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea47aa2e-73d5-464e-9c43-570b076add89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:10 compute-0 nova_compute[248510]: 2025-12-13 08:37:10.095 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:10.099 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7d00ab-77e3-451b-919a-2a8e423e96b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:10.100 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8d8e13-f692-409f-80e8-fd7e1dfd7a87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:37:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:10.119 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea17893d-ad93-4de5-a0d2-dff0f9bb0dc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759787, 'reachable_time': 35928, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330701, 'error': None, 'target': 'ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:10.121 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-70b21d89-4836-4553-a91f-f95dc859aef6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:37:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:10.122 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc2db69-e72a-4a3d-aa4e-7f564546cac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d70b21d89\x2d4836\x2d4553\x2da91f\x2df95dc859aef6.mount: Deactivated successfully.
Dec 13 08:37:10 compute-0 nova_compute[248510]: 2025-12-13 08:37:10.247 248514 INFO nova.virt.libvirt.driver [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Deleting instance files /var/lib/nova/instances/2e7e1e09-61a5-4d7b-99f5-23665d0afde1_del
Dec 13 08:37:10 compute-0 nova_compute[248510]: 2025-12-13 08:37:10.251 248514 INFO nova.virt.libvirt.driver [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Deletion of /var/lib/nova/instances/2e7e1e09-61a5-4d7b-99f5-23665d0afde1_del complete
Dec 13 08:37:10 compute-0 nova_compute[248510]: 2025-12-13 08:37:10.327 248514 INFO nova.compute.manager [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Took 0.66 seconds to destroy the instance on the hypervisor.
Dec 13 08:37:10 compute-0 nova_compute[248510]: 2025-12-13 08:37:10.327 248514 DEBUG oslo.service.loopingcall [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:37:10 compute-0 nova_compute[248510]: 2025-12-13 08:37:10.328 248514 DEBUG nova.compute.manager [-] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:37:10 compute-0 nova_compute[248510]: 2025-12-13 08:37:10.328 248514 DEBUG nova.network.neutron [-] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:37:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:37:10 compute-0 nova_compute[248510]: 2025-12-13 08:37:10.669 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:11 compute-0 ceph-mon[76537]: pgmap v2197: 321 pgs: 321 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 161 op/s
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.326 248514 DEBUG nova.network.neutron [-] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.368 248514 INFO nova.compute.manager [-] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Took 1.04 seconds to deallocate network for instance.
Dec 13 08:37:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.418 248514 DEBUG oslo_concurrency.lockutils [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.419 248514 DEBUG oslo_concurrency.lockutils [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.456 248514 DEBUG nova.compute.manager [req-925931c8-6139-4ce0-9a9c-b6f54d9e271e req-1d1b4e53-65c6-46f5-80bd-87603bec1060 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Received event network-vif-deleted-b270c8f6-1891-4aca-abf4-66636502ddbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.503 248514 DEBUG oslo_concurrency.processutils [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.607 248514 DEBUG nova.network.neutron [req-d23e4062-b4d0-433b-84ba-5a9ca6226785 req-e290abd3-7ca6-4643-89ed-8e5fac269ded 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updated VIF entry in instance network info cache for port 02f6066e-8276-478d-a9f4-b41637656170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.608 248514 DEBUG nova.network.neutron [req-d23e4062-b4d0-433b-84ba-5a9ca6226785 req-e290abd3-7ca6-4643-89ed-8e5fac269ded 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updating instance_info_cache with network_info: [{"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.634 248514 DEBUG oslo_concurrency.lockutils [req-d23e4062-b4d0-433b-84ba-5a9ca6226785 req-e290abd3-7ca6-4643-89ed-8e5fac269ded 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:37:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 149 op/s
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.921 248514 DEBUG nova.compute.manager [req-47248fe5-3fb5-4118-8afd-48ef4d43992b req-123ad01c-1f35-4543-94e5-310c78f9976b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Received event network-vif-unplugged-b270c8f6-1891-4aca-abf4-66636502ddbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.921 248514 DEBUG oslo_concurrency.lockutils [req-47248fe5-3fb5-4118-8afd-48ef4d43992b req-123ad01c-1f35-4543-94e5-310c78f9976b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.922 248514 DEBUG oslo_concurrency.lockutils [req-47248fe5-3fb5-4118-8afd-48ef4d43992b req-123ad01c-1f35-4543-94e5-310c78f9976b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.922 248514 DEBUG oslo_concurrency.lockutils [req-47248fe5-3fb5-4118-8afd-48ef4d43992b req-123ad01c-1f35-4543-94e5-310c78f9976b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.922 248514 DEBUG nova.compute.manager [req-47248fe5-3fb5-4118-8afd-48ef4d43992b req-123ad01c-1f35-4543-94e5-310c78f9976b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] No waiting events found dispatching network-vif-unplugged-b270c8f6-1891-4aca-abf4-66636502ddbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.922 248514 WARNING nova.compute.manager [req-47248fe5-3fb5-4118-8afd-48ef4d43992b req-123ad01c-1f35-4543-94e5-310c78f9976b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Received unexpected event network-vif-unplugged-b270c8f6-1891-4aca-abf4-66636502ddbe for instance with vm_state deleted and task_state None.
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.922 248514 DEBUG nova.compute.manager [req-47248fe5-3fb5-4118-8afd-48ef4d43992b req-123ad01c-1f35-4543-94e5-310c78f9976b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Received event network-vif-plugged-b270c8f6-1891-4aca-abf4-66636502ddbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.923 248514 DEBUG oslo_concurrency.lockutils [req-47248fe5-3fb5-4118-8afd-48ef4d43992b req-123ad01c-1f35-4543-94e5-310c78f9976b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.923 248514 DEBUG oslo_concurrency.lockutils [req-47248fe5-3fb5-4118-8afd-48ef4d43992b req-123ad01c-1f35-4543-94e5-310c78f9976b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.924 248514 DEBUG oslo_concurrency.lockutils [req-47248fe5-3fb5-4118-8afd-48ef4d43992b req-123ad01c-1f35-4543-94e5-310c78f9976b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.924 248514 DEBUG nova.compute.manager [req-47248fe5-3fb5-4118-8afd-48ef4d43992b req-123ad01c-1f35-4543-94e5-310c78f9976b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] No waiting events found dispatching network-vif-plugged-b270c8f6-1891-4aca-abf4-66636502ddbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:37:11 compute-0 nova_compute[248510]: 2025-12-13 08:37:11.924 248514 WARNING nova.compute.manager [req-47248fe5-3fb5-4118-8afd-48ef4d43992b req-123ad01c-1f35-4543-94e5-310c78f9976b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Received unexpected event network-vif-plugged-b270c8f6-1891-4aca-abf4-66636502ddbe for instance with vm_state deleted and task_state None.
Dec 13 08:37:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:37:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232566639' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:37:12 compute-0 nova_compute[248510]: 2025-12-13 08:37:12.114 248514 DEBUG oslo_concurrency.processutils [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:12 compute-0 nova_compute[248510]: 2025-12-13 08:37:12.119 248514 DEBUG nova.compute.provider_tree [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:37:12 compute-0 nova_compute[248510]: 2025-12-13 08:37:12.149 248514 DEBUG nova.scheduler.client.report [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:37:12 compute-0 nova_compute[248510]: 2025-12-13 08:37:12.182 248514 DEBUG oslo_concurrency.lockutils [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:12 compute-0 nova_compute[248510]: 2025-12-13 08:37:12.238 248514 INFO nova.scheduler.client.report [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Deleted allocations for instance 2e7e1e09-61a5-4d7b-99f5-23665d0afde1
Dec 13 08:37:12 compute-0 nova_compute[248510]: 2025-12-13 08:37:12.336 248514 DEBUG oslo_concurrency.lockutils [None req-d30fe2ed-4f33-4f89-aca0-f34d2ccc0d20 1ac27853f5d944bb9ae5b4c62d1ed69b a79fa3fc27f544959074a09c80fd20c1 - - default default] Lock "2e7e1e09-61a5-4d7b-99f5-23665d0afde1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:13 compute-0 ceph-mon[76537]: pgmap v2198: 321 pgs: 321 active+clean; 214 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 149 op/s
Dec 13 08:37:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2232566639' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:37:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 195 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 14 KiB/s wr, 168 op/s
Dec 13 08:37:14 compute-0 ceph-mon[76537]: pgmap v2199: 321 pgs: 321 active+clean; 195 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 14 KiB/s wr, 168 op/s
Dec 13 08:37:14 compute-0 nova_compute[248510]: 2025-12-13 08:37:14.389 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:14 compute-0 NetworkManager[50376]: <info>  [1765615034.3910] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Dec 13 08:37:14 compute-0 NetworkManager[50376]: <info>  [1765615034.3919] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Dec 13 08:37:14 compute-0 nova_compute[248510]: 2025-12-13 08:37:14.397 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:14 compute-0 nova_compute[248510]: 2025-12-13 08:37:14.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:37:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/705405797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:37:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:37:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/705405797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:37:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/705405797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:37:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/705405797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:37:15 compute-0 nova_compute[248510]: 2025-12-13 08:37:15.104 248514 DEBUG nova.compute.manager [req-9e3a1dee-22d5-4768-8617-e00e6f7e1faf req-8d07131a-dbf1-4261-83fd-724e469d06a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-changed-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:15 compute-0 nova_compute[248510]: 2025-12-13 08:37:15.104 248514 DEBUG nova.compute.manager [req-9e3a1dee-22d5-4768-8617-e00e6f7e1faf req-8d07131a-dbf1-4261-83fd-724e469d06a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Refreshing instance network info cache due to event network-changed-02f6066e-8276-478d-a9f4-b41637656170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:37:15 compute-0 nova_compute[248510]: 2025-12-13 08:37:15.104 248514 DEBUG oslo_concurrency.lockutils [req-9e3a1dee-22d5-4768-8617-e00e6f7e1faf req-8d07131a-dbf1-4261-83fd-724e469d06a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:37:15 compute-0 nova_compute[248510]: 2025-12-13 08:37:15.104 248514 DEBUG oslo_concurrency.lockutils [req-9e3a1dee-22d5-4768-8617-e00e6f7e1faf req-8d07131a-dbf1-4261-83fd-724e469d06a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:37:15 compute-0 nova_compute[248510]: 2025-12-13 08:37:15.104 248514 DEBUG nova.network.neutron [req-9e3a1dee-22d5-4768-8617-e00e6f7e1faf req-8d07131a-dbf1-4261-83fd-724e469d06a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Refreshing network info cache for port 02f6066e-8276-478d-a9f4-b41637656170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:37:15 compute-0 nova_compute[248510]: 2025-12-13 08:37:15.264 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:15 compute-0 nova_compute[248510]: 2025-12-13 08:37:15.671 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 12 KiB/s wr, 162 op/s
Dec 13 08:37:16 compute-0 ceph-mon[76537]: pgmap v2200: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 12 KiB/s wr, 162 op/s
Dec 13 08:37:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:37:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 12 KiB/s wr, 85 op/s
Dec 13 08:37:18 compute-0 ceph-mon[76537]: pgmap v2201: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 12 KiB/s wr, 85 op/s
Dec 13 08:37:18 compute-0 nova_compute[248510]: 2025-12-13 08:37:18.869 248514 DEBUG nova.network.neutron [req-9e3a1dee-22d5-4768-8617-e00e6f7e1faf req-8d07131a-dbf1-4261-83fd-724e469d06a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updated VIF entry in instance network info cache for port 02f6066e-8276-478d-a9f4-b41637656170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:37:18 compute-0 nova_compute[248510]: 2025-12-13 08:37:18.870 248514 DEBUG nova.network.neutron [req-9e3a1dee-22d5-4768-8617-e00e6f7e1faf req-8d07131a-dbf1-4261-83fd-724e469d06a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updating instance_info_cache with network_info: [{"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:18 compute-0 nova_compute[248510]: 2025-12-13 08:37:18.897 248514 DEBUG oslo_concurrency.lockutils [req-9e3a1dee-22d5-4768-8617-e00e6f7e1faf req-8d07131a-dbf1-4261-83fd-724e469d06a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:37:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 629 KiB/s rd, 22 KiB/s wr, 85 op/s
Dec 13 08:37:20 compute-0 nova_compute[248510]: 2025-12-13 08:37:20.001 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:20 compute-0 nova_compute[248510]: 2025-12-13 08:37:20.180 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:20 compute-0 nova_compute[248510]: 2025-12-13 08:37:20.673 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:20 compute-0 ceph-mon[76537]: pgmap v2202: 321 pgs: 321 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 629 KiB/s rd, 22 KiB/s wr, 85 op/s
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001115922281392316 of space, bias 1.0, pg target 0.3347766844176948 quantized to 32 (current 32)
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006672713318288408 of space, bias 1.0, pg target 0.20018139954865222 quantized to 32 (current 32)
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.956350332623915e-07 of space, bias 4.0, pg target 0.0007147620399148699 quantized to 16 (current 32)
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:37:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:37:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:37:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 629 KiB/s rd, 22 KiB/s wr, 85 op/s
Dec 13 08:37:22 compute-0 nova_compute[248510]: 2025-12-13 08:37:22.653 248514 DEBUG nova.compute.manager [req-080e7079-13b1-456a-8b38-5e800d22588f req-d45d274c-acf3-4949-aaa7-045a7814ca2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-changed-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:22 compute-0 nova_compute[248510]: 2025-12-13 08:37:22.654 248514 DEBUG nova.compute.manager [req-080e7079-13b1-456a-8b38-5e800d22588f req-d45d274c-acf3-4949-aaa7-045a7814ca2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Refreshing instance network info cache due to event network-changed-02f6066e-8276-478d-a9f4-b41637656170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:37:22 compute-0 nova_compute[248510]: 2025-12-13 08:37:22.654 248514 DEBUG oslo_concurrency.lockutils [req-080e7079-13b1-456a-8b38-5e800d22588f req-d45d274c-acf3-4949-aaa7-045a7814ca2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:37:22 compute-0 nova_compute[248510]: 2025-12-13 08:37:22.654 248514 DEBUG oslo_concurrency.lockutils [req-080e7079-13b1-456a-8b38-5e800d22588f req-d45d274c-acf3-4949-aaa7-045a7814ca2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:37:22 compute-0 nova_compute[248510]: 2025-12-13 08:37:22.654 248514 DEBUG nova.network.neutron [req-080e7079-13b1-456a-8b38-5e800d22588f req-d45d274c-acf3-4949-aaa7-045a7814ca2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Refreshing network info cache for port 02f6066e-8276-478d-a9f4-b41637656170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:37:22 compute-0 ceph-mon[76537]: pgmap v2203: 321 pgs: 321 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 629 KiB/s rd, 22 KiB/s wr, 85 op/s
Dec 13 08:37:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 629 KiB/s rd, 22 KiB/s wr, 85 op/s
Dec 13 08:37:24 compute-0 nova_compute[248510]: 2025-12-13 08:37:24.376 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:24 compute-0 nova_compute[248510]: 2025-12-13 08:37:24.377 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:24 compute-0 nova_compute[248510]: 2025-12-13 08:37:24.402 248514 DEBUG nova.compute.manager [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:37:24 compute-0 nova_compute[248510]: 2025-12-13 08:37:24.563 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:24 compute-0 nova_compute[248510]: 2025-12-13 08:37:24.563 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:24 compute-0 nova_compute[248510]: 2025-12-13 08:37:24.573 248514 DEBUG nova.virt.hardware [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:37:24 compute-0 nova_compute[248510]: 2025-12-13 08:37:24.573 248514 INFO nova.compute.claims [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:37:24 compute-0 nova_compute[248510]: 2025-12-13 08:37:24.741 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:24 compute-0 ceph-mon[76537]: pgmap v2204: 321 pgs: 321 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 629 KiB/s rd, 22 KiB/s wr, 85 op/s
Dec 13 08:37:24 compute-0 nova_compute[248510]: 2025-12-13 08:37:24.903 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615029.9013834, 2e7e1e09-61a5-4d7b-99f5-23665d0afde1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:37:24 compute-0 nova_compute[248510]: 2025-12-13 08:37:24.903 248514 INFO nova.compute.manager [-] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] VM Stopped (Lifecycle Event)
Dec 13 08:37:24 compute-0 nova_compute[248510]: 2025-12-13 08:37:24.935 248514 DEBUG nova.compute.manager [None req-9d2d3f38-cfe2-446d-b3e8-426e7481c158 - - - - - -] [instance: 2e7e1e09-61a5-4d7b-99f5-23665d0afde1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:37:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3840205699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.357 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.364 248514 DEBUG nova.compute.provider_tree [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.404 248514 DEBUG nova.scheduler.client.report [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.433 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.434 248514 DEBUG nova.compute.manager [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.523 248514 DEBUG nova.compute.manager [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.524 248514 DEBUG nova.network.neutron [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.558 248514 INFO nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.597 248514 DEBUG nova.compute.manager [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.667 248514 DEBUG nova.network.neutron [req-080e7079-13b1-456a-8b38-5e800d22588f req-d45d274c-acf3-4949-aaa7-045a7814ca2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updated VIF entry in instance network info cache for port 02f6066e-8276-478d-a9f4-b41637656170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.667 248514 DEBUG nova.network.neutron [req-080e7079-13b1-456a-8b38-5e800d22588f req-d45d274c-acf3-4949-aaa7-045a7814ca2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updating instance_info_cache with network_info: [{"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.709 248514 DEBUG oslo_concurrency.lockutils [req-080e7079-13b1-456a-8b38-5e800d22588f req-d45d274c-acf3-4949-aaa7-045a7814ca2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7d2022b0-ae34-41c3-b775-eca57874dc3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:37:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 443 KiB/s rd, 21 KiB/s wr, 64 op/s
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.746 248514 DEBUG nova.compute.manager [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.748 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.748 248514 INFO nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Creating image(s)
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.770 248514 DEBUG nova.storage.rbd_utils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.791 248514 DEBUG nova.storage.rbd_utils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.810 248514 DEBUG nova.storage.rbd_utils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.814 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.898 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.899 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.900 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.900 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3840205699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.927 248514 DEBUG nova.storage.rbd_utils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:25 compute-0 nova_compute[248510]: 2025-12-13 08:37:25.932 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:37:26 compute-0 nova_compute[248510]: 2025-12-13 08:37:26.461 248514 DEBUG nova.policy [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7507939da64e4320a1c6f389d0fc9045', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.468527) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615046468594, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 955, "num_deletes": 256, "total_data_size": 1346254, "memory_usage": 1371336, "flush_reason": "Manual Compaction"}
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615046513093, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 1322362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42632, "largest_seqno": 43586, "table_properties": {"data_size": 1317688, "index_size": 2262, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10283, "raw_average_key_size": 19, "raw_value_size": 1308205, "raw_average_value_size": 2454, "num_data_blocks": 101, "num_entries": 533, "num_filter_entries": 533, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614964, "oldest_key_time": 1765614964, "file_creation_time": 1765615046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 44732 microseconds, and 8832 cpu microseconds.
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.513244) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 1322362 bytes OK
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.513283) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.543096) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.543164) EVENT_LOG_v1 {"time_micros": 1765615046543152, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.543200) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1341641, prev total WAL file size 1341641, number of live WAL files 2.
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.544242) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353039' seq:72057594037927935, type:22 .. '6C6F676D0031373631' seq:0, type:0; will stop at (end)
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(1291KB)], [98(7856KB)]
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615046544361, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 9367856, "oldest_snapshot_seqno": -1}
Dec 13 08:37:26 compute-0 nova_compute[248510]: 2025-12-13 08:37:26.582 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6408 keys, 9240695 bytes, temperature: kUnknown
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615046609743, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 9240695, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9198367, "index_size": 25194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 167207, "raw_average_key_size": 26, "raw_value_size": 9083875, "raw_average_value_size": 1417, "num_data_blocks": 990, "num_entries": 6408, "num_filter_entries": 6408, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.610002) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 9240695 bytes
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.613441) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.1 rd, 141.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.7 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(14.1) write-amplify(7.0) OK, records in: 6932, records dropped: 524 output_compression: NoCompression
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.613463) EVENT_LOG_v1 {"time_micros": 1765615046613453, "job": 58, "event": "compaction_finished", "compaction_time_micros": 65450, "compaction_time_cpu_micros": 22916, "output_level": 6, "num_output_files": 1, "total_output_size": 9240695, "num_input_records": 6932, "num_output_records": 6408, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615046613895, "job": 58, "event": "table_file_deletion", "file_number": 100}
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615046616185, "job": 58, "event": "table_file_deletion", "file_number": 98}
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.544124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.616238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.616244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.616247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.616252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:37:26 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:37:26.616256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:37:26 compute-0 nova_compute[248510]: 2025-12-13 08:37:26.649 248514 DEBUG nova.storage.rbd_utils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] resizing rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:37:26 compute-0 sudo[330897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:37:26 compute-0 sudo[330897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:37:26 compute-0 sudo[330897]: pam_unix(sudo:session): session closed for user root
Dec 13 08:37:26 compute-0 sudo[330922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 08:37:26 compute-0 sudo[330922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:37:26 compute-0 nova_compute[248510]: 2025-12-13 08:37:26.853 248514 DEBUG oslo_concurrency.lockutils [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquiring lock "7d2022b0-ae34-41c3-b775-eca57874dc3d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:26 compute-0 nova_compute[248510]: 2025-12-13 08:37:26.854 248514 DEBUG oslo_concurrency.lockutils [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:26 compute-0 nova_compute[248510]: 2025-12-13 08:37:26.855 248514 DEBUG oslo_concurrency.lockutils [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquiring lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:26 compute-0 nova_compute[248510]: 2025-12-13 08:37:26.855 248514 DEBUG oslo_concurrency.lockutils [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:26 compute-0 nova_compute[248510]: 2025-12-13 08:37:26.855 248514 DEBUG oslo_concurrency.lockutils [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:26 compute-0 nova_compute[248510]: 2025-12-13 08:37:26.857 248514 INFO nova.compute.manager [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Terminating instance
Dec 13 08:37:26 compute-0 nova_compute[248510]: 2025-12-13 08:37:26.858 248514 DEBUG nova.compute.manager [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:37:27 compute-0 ceph-mon[76537]: pgmap v2205: 321 pgs: 321 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 443 KiB/s rd, 21 KiB/s wr, 64 op/s
Dec 13 08:37:27 compute-0 kernel: tap02f6066e-82 (unregistering): left promiscuous mode
Dec 13 08:37:27 compute-0 NetworkManager[50376]: <info>  [1765615047.0947] device (tap02f6066e-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:37:27 compute-0 ovn_controller[148476]: 2025-12-13T08:37:27Z|00842|binding|INFO|Releasing lport 02f6066e-8276-478d-a9f4-b41637656170 from this chassis (sb_readonly=0)
Dec 13 08:37:27 compute-0 ovn_controller[148476]: 2025-12-13T08:37:27Z|00843|binding|INFO|Setting lport 02f6066e-8276-478d-a9f4-b41637656170 down in Southbound
Dec 13 08:37:27 compute-0 ovn_controller[148476]: 2025-12-13T08:37:27Z|00844|binding|INFO|Removing iface tap02f6066e-82 ovn-installed in OVS
Dec 13 08:37:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:27.112 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:40:19 10.100.0.7'], port_security=['fa:16:3e:19:40:19 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7d2022b0-ae34-41c3-b775-eca57874dc3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea547472-f70b-465a-bb9c-323fe377dc37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f34aedb6d80843b39686cb02b480702d', 'neutron:revision_number': '8', 'neutron:security_group_ids': '337b4fd8-7407-482e-af42-2202a65041d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09a270ea-9706-49f6-934c-15bff6be3cec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=02f6066e-8276-478d-a9f4-b41637656170) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:37:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:27.114 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 02f6066e-8276-478d-a9f4-b41637656170 in datapath ea547472-f70b-465a-bb9c-323fe377dc37 unbound from our chassis
Dec 13 08:37:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:27.115 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ea547472-f70b-465a-bb9c-323fe377dc37 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 08:37:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:27.117 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4179953a-48e0-4f9f-bfc7-87a7a38025e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.123 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.135 248514 DEBUG nova.objects.instance [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:37:27 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000053.scope: Deactivated successfully.
Dec 13 08:37:27 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000053.scope: Consumed 13.275s CPU time.
Dec 13 08:37:27 compute-0 systemd-machined[210538]: Machine qemu-103-instance-00000053 terminated.
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.155 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.155 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Ensure instance console log exists: /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.156 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.156 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.156 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.292 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.296 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.309 248514 INFO nova.virt.libvirt.driver [-] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Instance destroyed successfully.
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.309 248514 DEBUG nova.objects.instance [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lazy-loading 'resources' on Instance uuid 7d2022b0-ae34-41c3-b775-eca57874dc3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.324 248514 DEBUG nova.virt.libvirt.vif [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:36:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-100223357',display_name='tempest-ServerRescueTestJSONUnderV235-server-100223357',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-100223357',id=83,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:37:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f34aedb6d80843b39686cb02b480702d',ramdisk_id='',reservation_id='r-4cwg6sq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1510317611',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1510317611-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:37:00Z,user_data=None,user_id='a1e1f77fd6714a9bae1617c2c179169f',uuid=7d2022b0-ae34-41c3-b775-eca57874dc3d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.324 248514 DEBUG nova.network.os_vif_util [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Converting VIF {"id": "02f6066e-8276-478d-a9f4-b41637656170", "address": "fa:16:3e:19:40:19", "network": {"id": "ea547472-f70b-465a-bb9c-323fe377dc37", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2018581525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f34aedb6d80843b39686cb02b480702d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f6066e-82", "ovs_interfaceid": "02f6066e-8276-478d-a9f4-b41637656170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.325 248514 DEBUG nova.network.os_vif_util [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:40:19,bridge_name='br-int',has_traffic_filtering=True,id=02f6066e-8276-478d-a9f4-b41637656170,network=Network(ea547472-f70b-465a-bb9c-323fe377dc37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f6066e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.326 248514 DEBUG os_vif [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:40:19,bridge_name='br-int',has_traffic_filtering=True,id=02f6066e-8276-478d-a9f4-b41637656170,network=Network(ea547472-f70b-465a-bb9c-323fe377dc37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f6066e-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.328 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.328 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02f6066e-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.330 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.334 248514 INFO os_vif [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:40:19,bridge_name='br-int',has_traffic_filtering=True,id=02f6066e-8276-478d-a9f4-b41637656170,network=Network(ea547472-f70b-465a-bb9c-323fe377dc37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f6066e-82')
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.434 248514 DEBUG nova.compute.manager [req-567cf03a-3bfd-45ca-9735-ce24da18f629 req-3ae39eea-cc47-455c-ab93-b60a8dce6d18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-vif-unplugged-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.435 248514 DEBUG oslo_concurrency.lockutils [req-567cf03a-3bfd-45ca-9735-ce24da18f629 req-3ae39eea-cc47-455c-ab93-b60a8dce6d18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.435 248514 DEBUG oslo_concurrency.lockutils [req-567cf03a-3bfd-45ca-9735-ce24da18f629 req-3ae39eea-cc47-455c-ab93-b60a8dce6d18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.436 248514 DEBUG oslo_concurrency.lockutils [req-567cf03a-3bfd-45ca-9735-ce24da18f629 req-3ae39eea-cc47-455c-ab93-b60a8dce6d18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.436 248514 DEBUG nova.compute.manager [req-567cf03a-3bfd-45ca-9735-ce24da18f629 req-3ae39eea-cc47-455c-ab93-b60a8dce6d18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] No waiting events found dispatching network-vif-unplugged-02f6066e-8276-478d-a9f4-b41637656170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.436 248514 DEBUG nova.compute.manager [req-567cf03a-3bfd-45ca-9735-ce24da18f629 req-3ae39eea-cc47-455c-ab93-b60a8dce6d18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-vif-unplugged-02f6066e-8276-478d-a9f4-b41637656170 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:37:27 compute-0 podman[331013]: 2025-12-13 08:37:27.538990304 +0000 UTC m=+0.310489332 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:37:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 10 KiB/s wr, 0 op/s
Dec 13 08:37:27 compute-0 podman[331013]: 2025-12-13 08:37:27.756088076 +0000 UTC m=+0.527587064 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:37:27 compute-0 nova_compute[248510]: 2025-12-13 08:37:27.757 248514 DEBUG nova.network.neutron [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Successfully created port: ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:37:28 compute-0 nova_compute[248510]: 2025-12-13 08:37:28.459 248514 INFO nova.virt.libvirt.driver [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Deleting instance files /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d_del
Dec 13 08:37:28 compute-0 nova_compute[248510]: 2025-12-13 08:37:28.460 248514 INFO nova.virt.libvirt.driver [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Deletion of /var/lib/nova/instances/7d2022b0-ae34-41c3-b775-eca57874dc3d_del complete
Dec 13 08:37:28 compute-0 nova_compute[248510]: 2025-12-13 08:37:28.622 248514 INFO nova.compute.manager [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Took 1.76 seconds to destroy the instance on the hypervisor.
Dec 13 08:37:28 compute-0 nova_compute[248510]: 2025-12-13 08:37:28.624 248514 DEBUG oslo.service.loopingcall [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:37:28 compute-0 nova_compute[248510]: 2025-12-13 08:37:28.624 248514 DEBUG nova.compute.manager [-] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:37:28 compute-0 nova_compute[248510]: 2025-12-13 08:37:28.625 248514 DEBUG nova.network.neutron [-] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:37:28 compute-0 sudo[330922]: pam_unix(sudo:session): session closed for user root
Dec 13 08:37:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:37:28 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:37:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:37:28 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:37:28 compute-0 sudo[331231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:37:28 compute-0 sudo[331231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:37:28 compute-0 sudo[331231]: pam_unix(sudo:session): session closed for user root
Dec 13 08:37:28 compute-0 sudo[331256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:37:28 compute-0 sudo[331256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:37:29 compute-0 ceph-mon[76537]: pgmap v2206: 321 pgs: 321 active+clean; 169 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 10 KiB/s wr, 0 op/s
Dec 13 08:37:29 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:37:29 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:37:29 compute-0 sudo[331256]: pam_unix(sudo:session): session closed for user root
Dec 13 08:37:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:37:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:37:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:37:29 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:37:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:37:29 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:37:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:37:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:37:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:37:29 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:37:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:37:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:37:29 compute-0 sudo[331312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:37:29 compute-0 sudo[331312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:37:29 compute-0 sudo[331312]: pam_unix(sudo:session): session closed for user root
Dec 13 08:37:29 compute-0 sudo[331337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:37:29 compute-0 sudo[331337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:37:29 compute-0 nova_compute[248510]: 2025-12-13 08:37:29.567 248514 DEBUG nova.compute.manager [req-4b3a1141-79ab-479c-9b30-a14108d9c5de req-381bfcce-e2b3-4960-b1cb-2690dcbbbb55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:29 compute-0 nova_compute[248510]: 2025-12-13 08:37:29.567 248514 DEBUG oslo_concurrency.lockutils [req-4b3a1141-79ab-479c-9b30-a14108d9c5de req-381bfcce-e2b3-4960-b1cb-2690dcbbbb55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:29 compute-0 nova_compute[248510]: 2025-12-13 08:37:29.568 248514 DEBUG oslo_concurrency.lockutils [req-4b3a1141-79ab-479c-9b30-a14108d9c5de req-381bfcce-e2b3-4960-b1cb-2690dcbbbb55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:29 compute-0 nova_compute[248510]: 2025-12-13 08:37:29.568 248514 DEBUG oslo_concurrency.lockutils [req-4b3a1141-79ab-479c-9b30-a14108d9c5de req-381bfcce-e2b3-4960-b1cb-2690dcbbbb55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:29 compute-0 nova_compute[248510]: 2025-12-13 08:37:29.568 248514 DEBUG nova.compute.manager [req-4b3a1141-79ab-479c-9b30-a14108d9c5de req-381bfcce-e2b3-4960-b1cb-2690dcbbbb55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] No waiting events found dispatching network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:37:29 compute-0 nova_compute[248510]: 2025-12-13 08:37:29.569 248514 WARNING nova.compute.manager [req-4b3a1141-79ab-479c-9b30-a14108d9c5de req-381bfcce-e2b3-4960-b1cb-2690dcbbbb55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received unexpected event network-vif-plugged-02f6066e-8276-478d-a9f4-b41637656170 for instance with vm_state rescued and task_state deleting.
Dec 13 08:37:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 189 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 19 op/s
Dec 13 08:37:29 compute-0 podman[331374]: 2025-12-13 08:37:29.757712023 +0000 UTC m=+0.049244104 container create 39432f3d2488d69a0a2ac14323172742944598b7c0d722b889726c5a83d314a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:37:29 compute-0 systemd[1]: Started libpod-conmon-39432f3d2488d69a0a2ac14323172742944598b7c0d722b889726c5a83d314a7.scope.
Dec 13 08:37:29 compute-0 podman[331374]: 2025-12-13 08:37:29.730579969 +0000 UTC m=+0.022112070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:37:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:37:29 compute-0 podman[331374]: 2025-12-13 08:37:29.861628714 +0000 UTC m=+0.153160815 container init 39432f3d2488d69a0a2ac14323172742944598b7c0d722b889726c5a83d314a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:37:29 compute-0 podman[331374]: 2025-12-13 08:37:29.872290118 +0000 UTC m=+0.163822199 container start 39432f3d2488d69a0a2ac14323172742944598b7c0d722b889726c5a83d314a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kepler, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:37:29 compute-0 systemd[1]: libpod-39432f3d2488d69a0a2ac14323172742944598b7c0d722b889726c5a83d314a7.scope: Deactivated successfully.
Dec 13 08:37:29 compute-0 sharp_kepler[331390]: 167 167
Dec 13 08:37:29 compute-0 conmon[331390]: conmon 39432f3d2488d69a0a2a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39432f3d2488d69a0a2ac14323172742944598b7c0d722b889726c5a83d314a7.scope/container/memory.events
Dec 13 08:37:29 compute-0 podman[331374]: 2025-12-13 08:37:29.885687881 +0000 UTC m=+0.177219972 container attach 39432f3d2488d69a0a2ac14323172742944598b7c0d722b889726c5a83d314a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kepler, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 08:37:29 compute-0 podman[331374]: 2025-12-13 08:37:29.887775953 +0000 UTC m=+0.179308034 container died 39432f3d2488d69a0a2ac14323172742944598b7c0d722b889726c5a83d314a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:37:29 compute-0 podman[331393]: 2025-12-13 08:37:29.923625023 +0000 UTC m=+0.079345401 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 13 08:37:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d95db2890552ab2a3fe73718f721f5304440e4eb0949e11729550bf6994e15d0-merged.mount: Deactivated successfully.
Dec 13 08:37:29 compute-0 nova_compute[248510]: 2025-12-13 08:37:29.964 248514 DEBUG nova.network.neutron [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Successfully updated port: ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:37:29 compute-0 podman[331374]: 2025-12-13 08:37:29.974320122 +0000 UTC m=+0.265852223 container remove 39432f3d2488d69a0a2ac14323172742944598b7c0d722b889726c5a83d314a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kepler, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:37:29 compute-0 nova_compute[248510]: 2025-12-13 08:37:29.987 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:37:29 compute-0 nova_compute[248510]: 2025-12-13 08:37:29.987 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:37:29 compute-0 nova_compute[248510]: 2025-12-13 08:37:29.988 248514 DEBUG nova.network.neutron [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:37:29 compute-0 systemd[1]: libpod-conmon-39432f3d2488d69a0a2ac14323172742944598b7c0d722b889726c5a83d314a7.scope: Deactivated successfully.
Dec 13 08:37:30 compute-0 podman[331392]: 2025-12-13 08:37:30.019398402 +0000 UTC m=+0.168497456 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:37:30 compute-0 podman[331418]: 2025-12-13 08:37:30.063896077 +0000 UTC m=+0.160691032 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 13 08:37:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:37:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:37:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:37:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:37:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:37:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:37:30 compute-0 ceph-mon[76537]: pgmap v2207: 321 pgs: 321 active+clean; 189 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 19 op/s
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.092 248514 DEBUG nova.network.neutron [-] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.111 248514 DEBUG nova.compute.manager [req-f0fa4d49-3ccd-40ae-9129-0bdcbc831c19 req-fe60620f-a456-4151-b85a-ac304718fa43 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-changed-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.112 248514 DEBUG nova.compute.manager [req-f0fa4d49-3ccd-40ae-9129-0bdcbc831c19 req-fe60620f-a456-4151-b85a-ac304718fa43 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Refreshing instance network info cache due to event network-changed-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.112 248514 DEBUG oslo_concurrency.lockutils [req-f0fa4d49-3ccd-40ae-9129-0bdcbc831c19 req-fe60620f-a456-4151-b85a-ac304718fa43 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.113 248514 INFO nova.compute.manager [-] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Took 1.49 seconds to deallocate network for instance.
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.163 248514 DEBUG oslo_concurrency.lockutils [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.164 248514 DEBUG oslo_concurrency.lockutils [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:30 compute-0 podman[331477]: 2025-12-13 08:37:30.179257062 +0000 UTC m=+0.080596053 container create add21393cf40813c60313dddc435ff78cfabf77bc706a02cd5619d87ea379e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 08:37:30 compute-0 podman[331477]: 2025-12-13 08:37:30.122563314 +0000 UTC m=+0.023902325 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:37:30 compute-0 systemd[1]: Started libpod-conmon-add21393cf40813c60313dddc435ff78cfabf77bc706a02cd5619d87ea379e7d.scope.
Dec 13 08:37:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e7928fe8e5ec743e67a30324cd8fa8afcb4703ce7a4b80d940c81628658930/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e7928fe8e5ec743e67a30324cd8fa8afcb4703ce7a4b80d940c81628658930/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e7928fe8e5ec743e67a30324cd8fa8afcb4703ce7a4b80d940c81628658930/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e7928fe8e5ec743e67a30324cd8fa8afcb4703ce7a4b80d940c81628658930/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e7928fe8e5ec743e67a30324cd8fa8afcb4703ce7a4b80d940c81628658930/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.282 248514 DEBUG oslo_concurrency.processutils [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.313 248514 DEBUG nova.network.neutron [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:37:30 compute-0 podman[331477]: 2025-12-13 08:37:30.315594447 +0000 UTC m=+0.216933499 container init add21393cf40813c60313dddc435ff78cfabf77bc706a02cd5619d87ea379e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 08:37:30 compute-0 podman[331477]: 2025-12-13 08:37:30.325477533 +0000 UTC m=+0.226816534 container start add21393cf40813c60313dddc435ff78cfabf77bc706a02cd5619d87ea379e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 08:37:30 compute-0 podman[331477]: 2025-12-13 08:37:30.341171773 +0000 UTC m=+0.242510784 container attach add21393cf40813c60313dddc435ff78cfabf77bc706a02cd5619d87ea379e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:37:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376969826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:37:30 compute-0 cranky_fermat[331494]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:37:30 compute-0 cranky_fermat[331494]: --> All data devices are unavailable
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.825 248514 DEBUG oslo_concurrency.processutils [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.834 248514 DEBUG nova.compute.provider_tree [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:37:30 compute-0 systemd[1]: libpod-add21393cf40813c60313dddc435ff78cfabf77bc706a02cd5619d87ea379e7d.scope: Deactivated successfully.
Dec 13 08:37:30 compute-0 podman[331477]: 2025-12-13 08:37:30.85446846 +0000 UTC m=+0.755807451 container died add21393cf40813c60313dddc435ff78cfabf77bc706a02cd5619d87ea379e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.856 248514 DEBUG nova.scheduler.client.report [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.895 248514 DEBUG oslo_concurrency.lockutils [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8e7928fe8e5ec743e67a30324cd8fa8afcb4703ce7a4b80d940c81628658930-merged.mount: Deactivated successfully.
Dec 13 08:37:30 compute-0 nova_compute[248510]: 2025-12-13 08:37:30.933 248514 INFO nova.scheduler.client.report [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Deleted allocations for instance 7d2022b0-ae34-41c3-b775-eca57874dc3d
Dec 13 08:37:31 compute-0 podman[331477]: 2025-12-13 08:37:31.021438647 +0000 UTC m=+0.922777638 container remove add21393cf40813c60313dddc435ff78cfabf77bc706a02cd5619d87ea379e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.040 248514 DEBUG oslo_concurrency.lockutils [None req-002f79ca-912f-4170-86fd-2dfe6ad952c2 a1e1f77fd6714a9bae1617c2c179169f f34aedb6d80843b39686cb02b480702d - - default default] Lock "7d2022b0-ae34-41c3-b775-eca57874dc3d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:31 compute-0 systemd[1]: libpod-conmon-add21393cf40813c60313dddc435ff78cfabf77bc706a02cd5619d87ea379e7d.scope: Deactivated successfully.
Dec 13 08:37:31 compute-0 sudo[331337]: pam_unix(sudo:session): session closed for user root
Dec 13 08:37:31 compute-0 sudo[331548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:37:31 compute-0 sudo[331548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:37:31 compute-0 sudo[331548]: pam_unix(sudo:session): session closed for user root
Dec 13 08:37:31 compute-0 sudo[331573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:37:31 compute-0 sudo[331573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.276 248514 DEBUG nova.network.neutron [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.305 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.306 248514 DEBUG nova.compute.manager [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance network_info: |[{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.306 248514 DEBUG oslo_concurrency.lockutils [req-f0fa4d49-3ccd-40ae-9129-0bdcbc831c19 req-fe60620f-a456-4151-b85a-ac304718fa43 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.306 248514 DEBUG nova.network.neutron [req-f0fa4d49-3ccd-40ae-9129-0bdcbc831c19 req-fe60620f-a456-4151-b85a-ac304718fa43 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Refreshing network info cache for port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.309 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Start _get_guest_xml network_info=[{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.314 248514 WARNING nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.321 248514 DEBUG nova.virt.libvirt.host [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.322 248514 DEBUG nova.virt.libvirt.host [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.330 248514 DEBUG nova.virt.libvirt.host [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.331 248514 DEBUG nova.virt.libvirt.host [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.331 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.332 248514 DEBUG nova.virt.hardware [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.332 248514 DEBUG nova.virt.hardware [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.332 248514 DEBUG nova.virt.hardware [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.332 248514 DEBUG nova.virt.hardware [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.332 248514 DEBUG nova.virt.hardware [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.332 248514 DEBUG nova.virt.hardware [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.333 248514 DEBUG nova.virt.hardware [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.333 248514 DEBUG nova.virt.hardware [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.333 248514 DEBUG nova.virt.hardware [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.333 248514 DEBUG nova.virt.hardware [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.333 248514 DEBUG nova.virt.hardware [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.336 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3376969826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:37:31 compute-0 podman[331614]: 2025-12-13 08:37:31.462866959 +0000 UTC m=+0.041860680 container create 2f75584ead089524cdeccbdc093c72a5f4b2d4439f6a4af0f16e9350dbdd211f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 08:37:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:37:31 compute-0 systemd[1]: Started libpod-conmon-2f75584ead089524cdeccbdc093c72a5f4b2d4439f6a4af0f16e9350dbdd211f.scope.
Dec 13 08:37:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:37:31 compute-0 podman[331614]: 2025-12-13 08:37:31.441687153 +0000 UTC m=+0.020680904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:37:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 145 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Dec 13 08:37:31 compute-0 podman[331614]: 2025-12-13 08:37:31.732448714 +0000 UTC m=+0.311442455 container init 2f75584ead089524cdeccbdc093c72a5f4b2d4439f6a4af0f16e9350dbdd211f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:37:31 compute-0 podman[331614]: 2025-12-13 08:37:31.74194616 +0000 UTC m=+0.320939881 container start 2f75584ead089524cdeccbdc093c72a5f4b2d4439f6a4af0f16e9350dbdd211f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_black, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 08:37:31 compute-0 podman[331614]: 2025-12-13 08:37:31.747508828 +0000 UTC m=+0.326502569 container attach 2f75584ead089524cdeccbdc093c72a5f4b2d4439f6a4af0f16e9350dbdd211f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_black, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:37:31 compute-0 stoic_black[331649]: 167 167
Dec 13 08:37:31 compute-0 podman[331614]: 2025-12-13 08:37:31.749133778 +0000 UTC m=+0.328127509 container died 2f75584ead089524cdeccbdc093c72a5f4b2d4439f6a4af0f16e9350dbdd211f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_black, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:37:31 compute-0 systemd[1]: libpod-2f75584ead089524cdeccbdc093c72a5f4b2d4439f6a4af0f16e9350dbdd211f.scope: Deactivated successfully.
Dec 13 08:37:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9c126d12d192a777e675629732b10ba4156d6f20b1660b3bab84944ff97557c-merged.mount: Deactivated successfully.
Dec 13 08:37:31 compute-0 podman[331614]: 2025-12-13 08:37:31.799353225 +0000 UTC m=+0.378346946 container remove 2f75584ead089524cdeccbdc093c72a5f4b2d4439f6a4af0f16e9350dbdd211f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:37:31 compute-0 systemd[1]: libpod-conmon-2f75584ead089524cdeccbdc093c72a5f4b2d4439f6a4af0f16e9350dbdd211f.scope: Deactivated successfully.
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.878 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:37:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172215206' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.923 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.944 248514 DEBUG nova.storage.rbd_utils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:31 compute-0 nova_compute[248510]: 2025-12-13 08:37:31.948 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:31 compute-0 podman[331682]: 2025-12-13 08:37:31.984116514 +0000 UTC m=+0.039468311 container create 37bcf7b1a3462c2c8c3764676110eff96c162a3afa3b7677de8da5a6f1097bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_pascal, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 08:37:32 compute-0 systemd[1]: Started libpod-conmon-37bcf7b1a3462c2c8c3764676110eff96c162a3afa3b7677de8da5a6f1097bb4.scope.
Dec 13 08:37:32 compute-0 podman[331682]: 2025-12-13 08:37:31.964925287 +0000 UTC m=+0.020277114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:37:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7792d5ea8d51524f4e5c78e00beaad76adf2c0bd702410d05ef6a64cd277f9f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7792d5ea8d51524f4e5c78e00beaad76adf2c0bd702410d05ef6a64cd277f9f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7792d5ea8d51524f4e5c78e00beaad76adf2c0bd702410d05ef6a64cd277f9f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7792d5ea8d51524f4e5c78e00beaad76adf2c0bd702410d05ef6a64cd277f9f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:32 compute-0 podman[331682]: 2025-12-13 08:37:32.081641246 +0000 UTC m=+0.136993043 container init 37bcf7b1a3462c2c8c3764676110eff96c162a3afa3b7677de8da5a6f1097bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_pascal, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:37:32 compute-0 podman[331682]: 2025-12-13 08:37:32.088835054 +0000 UTC m=+0.144186841 container start 37bcf7b1a3462c2c8c3764676110eff96c162a3afa3b7677de8da5a6f1097bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 08:37:32 compute-0 podman[331682]: 2025-12-13 08:37:32.092356422 +0000 UTC m=+0.147708239 container attach 37bcf7b1a3462c2c8c3764676110eff96c162a3afa3b7677de8da5a6f1097bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_pascal, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.277 248514 DEBUG nova.compute.manager [req-45420ff6-614a-4ce1-b0ae-e759988cbfcd req-91e8c102-d7a2-4e91-afec-1044e0500cbe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Received event network-vif-deleted-02f6066e-8276-478d-a9f4-b41637656170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:32 compute-0 nervous_pascal[331711]: {
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:     "0": [
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:         {
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "devices": [
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "/dev/loop3"
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             ],
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_name": "ceph_lv0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_size": "21470642176",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "name": "ceph_lv0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "tags": {
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.cluster_name": "ceph",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.crush_device_class": "",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.encrypted": "0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.objectstore": "bluestore",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.osd_id": "0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.type": "block",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.vdo": "0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.with_tpm": "0"
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             },
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "type": "block",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "vg_name": "ceph_vg0"
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:         }
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:     ],
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:     "1": [
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:         {
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "devices": [
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "/dev/loop4"
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             ],
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_name": "ceph_lv1",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_size": "21470642176",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "name": "ceph_lv1",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "tags": {
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.cluster_name": "ceph",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.crush_device_class": "",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.encrypted": "0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.objectstore": "bluestore",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.osd_id": "1",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.type": "block",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.vdo": "0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.with_tpm": "0"
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             },
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "type": "block",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "vg_name": "ceph_vg1"
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:         }
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:     ],
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:     "2": [
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:         {
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "devices": [
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "/dev/loop5"
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             ],
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_name": "ceph_lv2",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_size": "21470642176",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "name": "ceph_lv2",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "tags": {
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.cluster_name": "ceph",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.crush_device_class": "",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.encrypted": "0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.objectstore": "bluestore",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.osd_id": "2",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.type": "block",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.vdo": "0",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:                 "ceph.with_tpm": "0"
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             },
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "type": "block",
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:             "vg_name": "ceph_vg2"
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:         }
Dec 13 08:37:32 compute-0 nervous_pascal[331711]:     ]
Dec 13 08:37:32 compute-0 nervous_pascal[331711]: }
Dec 13 08:37:32 compute-0 systemd[1]: libpod-37bcf7b1a3462c2c8c3764676110eff96c162a3afa3b7677de8da5a6f1097bb4.scope: Deactivated successfully.
Dec 13 08:37:32 compute-0 conmon[331711]: conmon 37bcf7b1a3462c2c8c37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37bcf7b1a3462c2c8c3764676110eff96c162a3afa3b7677de8da5a6f1097bb4.scope/container/memory.events
Dec 13 08:37:32 compute-0 podman[331682]: 2025-12-13 08:37:32.395391137 +0000 UTC m=+0.450742964 container died 37bcf7b1a3462c2c8c3764676110eff96c162a3afa3b7677de8da5a6f1097bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_pascal, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:37:32 compute-0 ceph-mon[76537]: pgmap v2208: 321 pgs: 321 active+clean; 145 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Dec 13 08:37:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1172215206' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:37:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7792d5ea8d51524f4e5c78e00beaad76adf2c0bd702410d05ef6a64cd277f9f3-merged.mount: Deactivated successfully.
Dec 13 08:37:32 compute-0 podman[331682]: 2025-12-13 08:37:32.446105377 +0000 UTC m=+0.501457174 container remove 37bcf7b1a3462c2c8c3764676110eff96c162a3afa3b7677de8da5a6f1097bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_pascal, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:37:32 compute-0 systemd[1]: libpod-conmon-37bcf7b1a3462c2c8c3764676110eff96c162a3afa3b7677de8da5a6f1097bb4.scope: Deactivated successfully.
Dec 13 08:37:32 compute-0 sudo[331573]: pam_unix(sudo:session): session closed for user root
Dec 13 08:37:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:37:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3161494066' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.519 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.521 248514 DEBUG nova.virt.libvirt.vif [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:37:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-647049604',display_name='tempest-ServerActionsTestOtherB-server-647049604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-647049604',id=85,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMCBc3yf7DLFWm969JJ3AJvRq1SqBawRmsOjScixeqlFSyjq4/Kpbcw0olzxybOT1DbERtB0mKMV4pquo3M97LIG1LWOqbG4HPkmobMKh41xqoYhtSOyaVVjlfwlnNokPA==',key_name='tempest-keypair-2123324397',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-ys1cli8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:37:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7507939da64e4320a1c6f389d0fc9045',uuid=9b486227-b98c-4393-9a3c-aae3e3c419a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.521 248514 DEBUG nova.network.os_vif_util [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.523 248514 DEBUG nova.network.os_vif_util [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.524 248514 DEBUG nova.objects.instance [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:37:32 compute-0 sudo[331754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.546 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:37:32 compute-0 nova_compute[248510]:   <uuid>9b486227-b98c-4393-9a3c-aae3e3c419a8</uuid>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   <name>instance-00000055</name>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerActionsTestOtherB-server-647049604</nova:name>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:37:31</nova:creationTime>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:37:32 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:37:32 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:37:32 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:37:32 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:37:32 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:37:32 compute-0 nova_compute[248510]:         <nova:user uuid="7507939da64e4320a1c6f389d0fc9045">tempest-ServerActionsTestOtherB-1515133862-project-member</nova:user>
Dec 13 08:37:32 compute-0 nova_compute[248510]:         <nova:project uuid="f0aee359fbaa484eb7ead3f81eef51e7">tempest-ServerActionsTestOtherB-1515133862</nova:project>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:37:32 compute-0 nova_compute[248510]:         <nova:port uuid="ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01">
Dec 13 08:37:32 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <system>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <entry name="serial">9b486227-b98c-4393-9a3c-aae3e3c419a8</entry>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <entry name="uuid">9b486227-b98c-4393-9a3c-aae3e3c419a8</entry>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     </system>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   <os>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   </os>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   <features>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   </features>
Dec 13 08:37:32 compute-0 sudo[331754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:37:32 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk">
Dec 13 08:37:32 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       </source>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:37:32 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config">
Dec 13 08:37:32 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       </source>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:37:32 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:e0:82:3a"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <target dev="tapea5aafe7-a7"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/console.log" append="off"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <video>
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     </video>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 sudo[331754]: pam_unix(sudo:session): session closed for user root
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:37:32 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:37:32 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:37:32 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:37:32 compute-0 nova_compute[248510]: </domain>
Dec 13 08:37:32 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.548 248514 DEBUG nova.compute.manager [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Preparing to wait for external event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.549 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.549 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.549 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.550 248514 DEBUG nova.virt.libvirt.vif [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:37:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-647049604',display_name='tempest-ServerActionsTestOtherB-server-647049604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-647049604',id=85,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMCBc3yf7DLFWm969JJ3AJvRq1SqBawRmsOjScixeqlFSyjq4/Kpbcw0olzxybOT1DbERtB0mKMV4pquo3M97LIG1LWOqbG4HPkmobMKh41xqoYhtSOyaVVjlfwlnNokPA==',key_name='tempest-keypair-2123324397',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-ys1cli8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:37:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7507939da64e4320a1c6f389d0fc9045',uuid=9b486227-b98c-4393-9a3c-aae3e3c419a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.551 248514 DEBUG nova.network.os_vif_util [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.551 248514 DEBUG nova.network.os_vif_util [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.552 248514 DEBUG os_vif [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.553 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.553 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.557 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.557 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea5aafe7-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.558 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea5aafe7-a7, col_values=(('external_ids', {'iface-id': 'ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:82:3a', 'vm-uuid': '9b486227-b98c-4393-9a3c-aae3e3c419a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.559 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:32 compute-0 NetworkManager[50376]: <info>  [1765615052.5603] manager: (tapea5aafe7-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.561 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.566 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.568 248514 INFO os_vif [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7')
Dec 13 08:37:32 compute-0 sudo[331781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:37:32 compute-0 sudo[331781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.648 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.649 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.649 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No VIF found with MAC fa:16:3e:e0:82:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.649 248514 INFO nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Using config drive
Dec 13 08:37:32 compute-0 nova_compute[248510]: 2025-12-13 08:37:32.671 248514 DEBUG nova.storage.rbd_utils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:32 compute-0 podman[331837]: 2025-12-13 08:37:32.871224853 +0000 UTC m=+0.035751499 container create 340b44470ee10fe1ec22448841210437dd872cea45cf3168ae991fce85086ca5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:37:32 compute-0 systemd[1]: Started libpod-conmon-340b44470ee10fe1ec22448841210437dd872cea45cf3168ae991fce85086ca5.scope.
Dec 13 08:37:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:37:32 compute-0 podman[331837]: 2025-12-13 08:37:32.942810671 +0000 UTC m=+0.107337347 container init 340b44470ee10fe1ec22448841210437dd872cea45cf3168ae991fce85086ca5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 08:37:32 compute-0 podman[331837]: 2025-12-13 08:37:32.948708318 +0000 UTC m=+0.113234964 container start 340b44470ee10fe1ec22448841210437dd872cea45cf3168ae991fce85086ca5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_lichterman, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 08:37:32 compute-0 podman[331837]: 2025-12-13 08:37:32.854948939 +0000 UTC m=+0.019475585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:37:32 compute-0 podman[331837]: 2025-12-13 08:37:32.952143043 +0000 UTC m=+0.116669689 container attach 340b44470ee10fe1ec22448841210437dd872cea45cf3168ae991fce85086ca5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:37:32 compute-0 interesting_lichterman[331853]: 167 167
Dec 13 08:37:32 compute-0 systemd[1]: libpod-340b44470ee10fe1ec22448841210437dd872cea45cf3168ae991fce85086ca5.scope: Deactivated successfully.
Dec 13 08:37:32 compute-0 conmon[331853]: conmon 340b44470ee10fe1ec22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-340b44470ee10fe1ec22448841210437dd872cea45cf3168ae991fce85086ca5.scope/container/memory.events
Dec 13 08:37:32 compute-0 podman[331837]: 2025-12-13 08:37:32.955720742 +0000 UTC m=+0.120247388 container died 340b44470ee10fe1ec22448841210437dd872cea45cf3168ae991fce85086ca5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:37:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2615309d1e48df8eb13583b0a2508473ac895965f2f1a04acaeb9747c55812a2-merged.mount: Deactivated successfully.
Dec 13 08:37:32 compute-0 podman[331837]: 2025-12-13 08:37:32.996378131 +0000 UTC m=+0.160904777 container remove 340b44470ee10fe1ec22448841210437dd872cea45cf3168ae991fce85086ca5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:37:33 compute-0 systemd[1]: libpod-conmon-340b44470ee10fe1ec22448841210437dd872cea45cf3168ae991fce85086ca5.scope: Deactivated successfully.
Dec 13 08:37:33 compute-0 nova_compute[248510]: 2025-12-13 08:37:33.083 248514 DEBUG nova.network.neutron [req-f0fa4d49-3ccd-40ae-9129-0bdcbc831c19 req-fe60620f-a456-4151-b85a-ac304718fa43 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updated VIF entry in instance network info cache for port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:37:33 compute-0 nova_compute[248510]: 2025-12-13 08:37:33.084 248514 DEBUG nova.network.neutron [req-f0fa4d49-3ccd-40ae-9129-0bdcbc831c19 req-fe60620f-a456-4151-b85a-ac304718fa43 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:33 compute-0 nova_compute[248510]: 2025-12-13 08:37:33.110 248514 DEBUG oslo_concurrency.lockutils [req-f0fa4d49-3ccd-40ae-9129-0bdcbc831c19 req-fe60620f-a456-4151-b85a-ac304718fa43 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:37:33 compute-0 podman[331877]: 2025-12-13 08:37:33.160387224 +0000 UTC m=+0.046128316 container create 7f078787b2f072117fbe53471dda85b0a5b62f70c13848107a4a433189f8b25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mcnulty, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:37:33 compute-0 systemd[1]: Started libpod-conmon-7f078787b2f072117fbe53471dda85b0a5b62f70c13848107a4a433189f8b25e.scope.
Dec 13 08:37:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:37:33 compute-0 podman[331877]: 2025-12-13 08:37:33.142543931 +0000 UTC m=+0.028285043 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548c86b638f28c23dff6547e5bd983ec11c477f8b5b29529302496ec78e6db50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548c86b638f28c23dff6547e5bd983ec11c477f8b5b29529302496ec78e6db50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548c86b638f28c23dff6547e5bd983ec11c477f8b5b29529302496ec78e6db50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548c86b638f28c23dff6547e5bd983ec11c477f8b5b29529302496ec78e6db50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:33 compute-0 podman[331877]: 2025-12-13 08:37:33.251768604 +0000 UTC m=+0.137509696 container init 7f078787b2f072117fbe53471dda85b0a5b62f70c13848107a4a433189f8b25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:37:33 compute-0 podman[331877]: 2025-12-13 08:37:33.258093121 +0000 UTC m=+0.143834213 container start 7f078787b2f072117fbe53471dda85b0a5b62f70c13848107a4a433189f8b25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mcnulty, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Dec 13 08:37:33 compute-0 podman[331877]: 2025-12-13 08:37:33.261544147 +0000 UTC m=+0.147285259 container attach 7f078787b2f072117fbe53471dda85b0a5b62f70c13848107a4a433189f8b25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mcnulty, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 08:37:33 compute-0 nova_compute[248510]: 2025-12-13 08:37:33.273 248514 INFO nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Creating config drive at /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config
Dec 13 08:37:33 compute-0 nova_compute[248510]: 2025-12-13 08:37:33.279 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntplrunm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3161494066' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:37:33 compute-0 nova_compute[248510]: 2025-12-13 08:37:33.428 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntplrunm" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:33 compute-0 nova_compute[248510]: 2025-12-13 08:37:33.453 248514 DEBUG nova.storage.rbd_utils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:33 compute-0 nova_compute[248510]: 2025-12-13 08:37:33.457 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:33 compute-0 nova_compute[248510]: 2025-12-13 08:37:33.588 248514 DEBUG oslo_concurrency.processutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:33 compute-0 nova_compute[248510]: 2025-12-13 08:37:33.589 248514 INFO nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deleting local config drive /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config because it was imported into RBD.
Dec 13 08:37:33 compute-0 NetworkManager[50376]: <info>  [1765615053.6417] manager: (tapea5aafe7-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/364)
Dec 13 08:37:33 compute-0 kernel: tapea5aafe7-a7: entered promiscuous mode
Dec 13 08:37:33 compute-0 ovn_controller[148476]: 2025-12-13T08:37:33Z|00845|binding|INFO|Claiming lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for this chassis.
Dec 13 08:37:33 compute-0 nova_compute[248510]: 2025-12-13 08:37:33.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:33 compute-0 ovn_controller[148476]: 2025-12-13T08:37:33Z|00846|binding|INFO|ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01: Claiming fa:16:3e:e0:82:3a 10.100.0.3
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.658 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:82:3a 10.100.0.3'], port_security=['fa:16:3e:e0:82:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b486227-b98c-4393-9a3c-aae3e3c419a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '35769c0b-1e0e-43bc-832c-d54c65a53a36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.659 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 in datapath 369f7528-6571-47b6-a030-5281647e1eac bound to our chassis
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.661 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac
Dec 13 08:37:33 compute-0 ovn_controller[148476]: 2025-12-13T08:37:33Z|00847|binding|INFO|Setting lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 ovn-installed in OVS
Dec 13 08:37:33 compute-0 ovn_controller[148476]: 2025-12-13T08:37:33Z|00848|binding|INFO|Setting lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 up in Southbound
Dec 13 08:37:33 compute-0 nova_compute[248510]: 2025-12-13 08:37:33.672 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.679 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d266c2ea-00fb-45bc-9b53-e50c6104021c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.684 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap369f7528-61 in ovnmeta-369f7528-6571-47b6-a030-5281647e1eac namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.686 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap369f7528-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[87e7717f-27e9-497e-9806-1c94d501e0d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.687 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35280051-0656-4603-9fa8-96380557797a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 systemd-machined[210538]: New machine qemu-105-instance-00000055.
Dec 13 08:37:33 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-00000055.
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.699 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[6925adff-8734-415d-853f-3d2614eedcaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 systemd-udevd[331989]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:37:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 88 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Dec 13 08:37:33 compute-0 NetworkManager[50376]: <info>  [1765615053.7205] device (tapea5aafe7-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:37:33 compute-0 NetworkManager[50376]: <info>  [1765615053.7213] device (tapea5aafe7-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.728 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[569d4662-d3b5-408b-9d25-a785f83ce61f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.758 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d4ea63-6959-46e4-b9e1-4490302e5da7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.765 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f0709606-a65d-43d3-99a0-85a2b197bdcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 NetworkManager[50376]: <info>  [1765615053.7666] manager: (tap369f7528-60): new Veth device (/org/freedesktop/NetworkManager/Devices/365)
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.807 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[187887cc-df97-429e-9d92-600dafd9d37a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.811 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[13cc7754-8ba3-4837-8f2f-716f4b8c6c74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 NetworkManager[50376]: <info>  [1765615053.8359] device (tap369f7528-60): carrier: link connected
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.841 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a4c91a-7744-486e-8205-9f123b1f0dee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.859 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e28b0e4e-f13a-4cac-a7c0-a4640c29cf90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 21848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332049, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.875 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[924b4655-a152-4f3d-bc3c-7b2a282a37f9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe29:b141'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763105, 'tstamp': 763105}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332052, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.896 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[83ea61e3-9238-4702-883e-4dc4737657a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 21848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332053, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:33.931 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f541a91-962e-47da-8cf3-3791b0d8a9cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:33 compute-0 lvm[332060]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:37:33 compute-0 lvm[332060]: VG ceph_vg0 finished
Dec 13 08:37:33 compute-0 lvm[332070]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:37:33 compute-0 lvm[332070]: VG ceph_vg1 finished
Dec 13 08:37:33 compute-0 lvm[332082]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:37:33 compute-0 lvm[332082]: VG ceph_vg2 finished
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:34.001 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aef7c22a-a444-45c5-ae16-d33d79d4b696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:34.003 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:34.003 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:34.004 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.005 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:34 compute-0 NetworkManager[50376]: <info>  [1765615054.0065] manager: (tap369f7528-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Dec 13 08:37:34 compute-0 kernel: tap369f7528-60: entered promiscuous mode
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.008 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:34.014 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:34 compute-0 ovn_controller[148476]: 2025-12-13T08:37:34Z|00849|binding|INFO|Releasing lport 6c33e57f-b143-4ed7-a271-dc33d6e81057 from this chassis (sb_readonly=0)
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.015 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:34.017 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/369f7528-6571-47b6-a030-5281647e1eac.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/369f7528-6571-47b6-a030-5281647e1eac.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:34.018 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52ca04d9-42fd-4465-a2e8-cc60e08e0c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:34.019 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-369f7528-6571-47b6-a030-5281647e1eac
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/369f7528-6571-47b6-a030-5281647e1eac.pid.haproxy
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 369f7528-6571-47b6-a030-5281647e1eac
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:37:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:34.020 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'env', 'PROCESS_TAG=haproxy-369f7528-6571-47b6-a030-5281647e1eac', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/369f7528-6571-47b6-a030-5281647e1eac.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.031 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:34 compute-0 silly_mcnulty[331894]: {}
Dec 13 08:37:34 compute-0 systemd[1]: libpod-7f078787b2f072117fbe53471dda85b0a5b62f70c13848107a4a433189f8b25e.scope: Deactivated successfully.
Dec 13 08:37:34 compute-0 podman[331877]: 2025-12-13 08:37:34.125359089 +0000 UTC m=+1.011100181 container died 7f078787b2f072117fbe53471dda85b0a5b62f70c13848107a4a433189f8b25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mcnulty, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 08:37:34 compute-0 systemd[1]: libpod-7f078787b2f072117fbe53471dda85b0a5b62f70c13848107a4a433189f8b25e.scope: Consumed 1.291s CPU time.
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.147 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615054.1474829, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.148 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Started (Lifecycle Event)
Dec 13 08:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-548c86b638f28c23dff6547e5bd983ec11c477f8b5b29529302496ec78e6db50-merged.mount: Deactivated successfully.
Dec 13 08:37:34 compute-0 podman[331877]: 2025-12-13 08:37:34.17132014 +0000 UTC m=+1.057061222 container remove 7f078787b2f072117fbe53471dda85b0a5b62f70c13848107a4a433189f8b25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mcnulty, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.179 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.190 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615054.149994, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.190 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Paused (Lifecycle Event)
Dec 13 08:37:34 compute-0 systemd[1]: libpod-conmon-7f078787b2f072117fbe53471dda85b0a5b62f70c13848107a4a433189f8b25e.scope: Deactivated successfully.
Dec 13 08:37:34 compute-0 sudo[331781]: pam_unix(sudo:session): session closed for user root
Dec 13 08:37:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.223 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:34 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.228 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:37:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:37:34 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.250 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:37:34 compute-0 sudo[332127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:37:34 compute-0 sudo[332127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:37:34 compute-0 sudo[332127]: pam_unix(sudo:session): session closed for user root
Dec 13 08:37:34 compute-0 podman[332174]: 2025-12-13 08:37:34.397079306 +0000 UTC m=+0.041351247 container create 2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 08:37:34 compute-0 ceph-mon[76537]: pgmap v2209: 321 pgs: 321 active+clean; 88 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Dec 13 08:37:34 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:37:34 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:37:34 compute-0 systemd[1]: Started libpod-conmon-2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6.scope.
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.439 248514 DEBUG nova.compute.manager [req-9e0c0991-56a6-479a-b298-533a12f0e9c7 req-b9fa21c6-ddbf-4691-a046-315db6ab1a81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.439 248514 DEBUG oslo_concurrency.lockutils [req-9e0c0991-56a6-479a-b298-533a12f0e9c7 req-b9fa21c6-ddbf-4691-a046-315db6ab1a81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.440 248514 DEBUG oslo_concurrency.lockutils [req-9e0c0991-56a6-479a-b298-533a12f0e9c7 req-b9fa21c6-ddbf-4691-a046-315db6ab1a81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.440 248514 DEBUG oslo_concurrency.lockutils [req-9e0c0991-56a6-479a-b298-533a12f0e9c7 req-b9fa21c6-ddbf-4691-a046-315db6ab1a81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.440 248514 DEBUG nova.compute.manager [req-9e0c0991-56a6-479a-b298-533a12f0e9c7 req-b9fa21c6-ddbf-4691-a046-315db6ab1a81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Processing event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.441 248514 DEBUG nova.compute.manager [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.445 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615054.444879, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.445 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Resumed (Lifecycle Event)
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.447 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.450 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance spawned successfully.
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.451 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:37:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c47fe0120125029b14bb1c725f8596f8bf16f2a2dd657933ca7281e9bf15dff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.470 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:34 compute-0 podman[332174]: 2025-12-13 08:37:34.375514641 +0000 UTC m=+0.019786602 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.475 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:37:34 compute-0 podman[332174]: 2025-12-13 08:37:34.478853657 +0000 UTC m=+0.123125618 container init 2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.479 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.480 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.480 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.480 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.481 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.481 248514 DEBUG nova.virt.libvirt.driver [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:37:34 compute-0 podman[332174]: 2025-12-13 08:37:34.483846011 +0000 UTC m=+0.128117952 container start 2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:37:34 compute-0 neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac[332189]: [NOTICE]   (332193) : New worker (332195) forked
Dec 13 08:37:34 compute-0 neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac[332189]: [NOTICE]   (332193) : Loading success.
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.514 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.552 248514 INFO nova.compute.manager [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Took 8.81 seconds to spawn the instance on the hypervisor.
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.552 248514 DEBUG nova.compute.manager [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.644 248514 INFO nova.compute.manager [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Took 10.13 seconds to build instance.
Dec 13 08:37:34 compute-0 nova_compute[248510]: 2025-12-13 08:37:34.676 248514 DEBUG oslo_concurrency.lockutils [None req-3abed4e7-ec3c-482a-876c-e19ba1165123 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.299s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 88 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 13 08:37:35 compute-0 nova_compute[248510]: 2025-12-13 08:37:35.722 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:37:36 compute-0 nova_compute[248510]: 2025-12-13 08:37:36.588 248514 DEBUG nova.compute.manager [req-f79256a3-f4a6-4d35-965f-8adf11dc3bc5 req-1eeab569-2e1e-46e4-b796-4ad79c6d7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:36 compute-0 nova_compute[248510]: 2025-12-13 08:37:36.588 248514 DEBUG oslo_concurrency.lockutils [req-f79256a3-f4a6-4d35-965f-8adf11dc3bc5 req-1eeab569-2e1e-46e4-b796-4ad79c6d7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:36 compute-0 nova_compute[248510]: 2025-12-13 08:37:36.588 248514 DEBUG oslo_concurrency.lockutils [req-f79256a3-f4a6-4d35-965f-8adf11dc3bc5 req-1eeab569-2e1e-46e4-b796-4ad79c6d7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:36 compute-0 nova_compute[248510]: 2025-12-13 08:37:36.589 248514 DEBUG oslo_concurrency.lockutils [req-f79256a3-f4a6-4d35-965f-8adf11dc3bc5 req-1eeab569-2e1e-46e4-b796-4ad79c6d7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:36 compute-0 nova_compute[248510]: 2025-12-13 08:37:36.589 248514 DEBUG nova.compute.manager [req-f79256a3-f4a6-4d35-965f-8adf11dc3bc5 req-1eeab569-2e1e-46e4-b796-4ad79c6d7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] No waiting events found dispatching network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:37:36 compute-0 nova_compute[248510]: 2025-12-13 08:37:36.589 248514 WARNING nova.compute.manager [req-f79256a3-f4a6-4d35-965f-8adf11dc3bc5 req-1eeab569-2e1e-46e4-b796-4ad79c6d7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received unexpected event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for instance with vm_state active and task_state None.
Dec 13 08:37:36 compute-0 ceph-mon[76537]: pgmap v2210: 321 pgs: 321 active+clean; 88 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 13 08:37:37 compute-0 nova_compute[248510]: 2025-12-13 08:37:37.602 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:37 compute-0 ovn_controller[148476]: 2025-12-13T08:37:37Z|00850|binding|INFO|Releasing lport 6c33e57f-b143-4ed7-a271-dc33d6e81057 from this chassis (sb_readonly=0)
Dec 13 08:37:37 compute-0 nova_compute[248510]: 2025-12-13 08:37:37.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 88 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 13 08:37:37 compute-0 ovn_controller[148476]: 2025-12-13T08:37:37Z|00851|binding|INFO|Releasing lport 6c33e57f-b143-4ed7-a271-dc33d6e81057 from this chassis (sb_readonly=0)
Dec 13 08:37:37 compute-0 nova_compute[248510]: 2025-12-13 08:37:37.739 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:38 compute-0 NetworkManager[50376]: <info>  [1765615058.5507] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Dec 13 08:37:38 compute-0 nova_compute[248510]: 2025-12-13 08:37:38.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:38 compute-0 NetworkManager[50376]: <info>  [1765615058.5518] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Dec 13 08:37:38 compute-0 ovn_controller[148476]: 2025-12-13T08:37:38Z|00852|binding|INFO|Releasing lport 6c33e57f-b143-4ed7-a271-dc33d6e81057 from this chassis (sb_readonly=0)
Dec 13 08:37:38 compute-0 ovn_controller[148476]: 2025-12-13T08:37:38Z|00853|binding|INFO|Releasing lport 6c33e57f-b143-4ed7-a271-dc33d6e81057 from this chassis (sb_readonly=0)
Dec 13 08:37:38 compute-0 nova_compute[248510]: 2025-12-13 08:37:38.675 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:38 compute-0 ceph-mon[76537]: pgmap v2211: 321 pgs: 321 active+clean; 88 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 13 08:37:39 compute-0 nova_compute[248510]: 2025-12-13 08:37:39.586 248514 DEBUG nova.compute.manager [req-af3f3677-8535-4f75-b595-8419b91db5bd req-4f960f16-9c56-4d89-8a64-7a065a483341 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-changed-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:39 compute-0 nova_compute[248510]: 2025-12-13 08:37:39.586 248514 DEBUG nova.compute.manager [req-af3f3677-8535-4f75-b595-8419b91db5bd req-4f960f16-9c56-4d89-8a64-7a065a483341 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Refreshing instance network info cache due to event network-changed-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:37:39 compute-0 nova_compute[248510]: 2025-12-13 08:37:39.587 248514 DEBUG oslo_concurrency.lockutils [req-af3f3677-8535-4f75-b595-8419b91db5bd req-4f960f16-9c56-4d89-8a64-7a065a483341 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:37:39 compute-0 nova_compute[248510]: 2025-12-13 08:37:39.587 248514 DEBUG oslo_concurrency.lockutils [req-af3f3677-8535-4f75-b595-8419b91db5bd req-4f960f16-9c56-4d89-8a64-7a065a483341 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:37:39 compute-0 nova_compute[248510]: 2025-12-13 08:37:39.587 248514 DEBUG nova.network.neutron [req-af3f3677-8535-4f75-b595-8419b91db5bd req-4f960f16-9c56-4d89-8a64-7a065a483341 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Refreshing network info cache for port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:37:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Dec 13 08:37:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:37:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:37:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:37:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:37:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:37:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:37:40 compute-0 nova_compute[248510]: 2025-12-13 08:37:40.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:40 compute-0 ceph-mon[76537]: pgmap v2212: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Dec 13 08:37:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:37:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 194 KiB/s wr, 136 op/s
Dec 13 08:37:42 compute-0 nova_compute[248510]: 2025-12-13 08:37:42.309 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615047.3075807, 7d2022b0-ae34-41c3-b775-eca57874dc3d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:37:42 compute-0 nova_compute[248510]: 2025-12-13 08:37:42.309 248514 INFO nova.compute.manager [-] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] VM Stopped (Lifecycle Event)
Dec 13 08:37:42 compute-0 nova_compute[248510]: 2025-12-13 08:37:42.346 248514 DEBUG nova.compute.manager [None req-7b201d49-2458-46ff-8979-c4b598197a4b - - - - - -] [instance: 7d2022b0-ae34-41c3-b775-eca57874dc3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:42 compute-0 nova_compute[248510]: 2025-12-13 08:37:42.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:42 compute-0 nova_compute[248510]: 2025-12-13 08:37:42.762 248514 DEBUG nova.network.neutron [req-af3f3677-8535-4f75-b595-8419b91db5bd req-4f960f16-9c56-4d89-8a64-7a065a483341 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updated VIF entry in instance network info cache for port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:37:42 compute-0 nova_compute[248510]: 2025-12-13 08:37:42.762 248514 DEBUG nova.network.neutron [req-af3f3677-8535-4f75-b595-8419b91db5bd req-4f960f16-9c56-4d89-8a64-7a065a483341 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:42 compute-0 nova_compute[248510]: 2025-12-13 08:37:42.791 248514 DEBUG oslo_concurrency.lockutils [req-af3f3677-8535-4f75-b595-8419b91db5bd req-4f960f16-9c56-4d89-8a64-7a065a483341 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:37:42 compute-0 ceph-mon[76537]: pgmap v2213: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 194 KiB/s wr, 136 op/s
Dec 13 08:37:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 91 op/s
Dec 13 08:37:43 compute-0 nova_compute[248510]: 2025-12-13 08:37:43.745 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:43 compute-0 nova_compute[248510]: 2025-12-13 08:37:43.745 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:43 compute-0 nova_compute[248510]: 2025-12-13 08:37:43.769 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:37:43 compute-0 rsyslogd[1002]: imjournal: 5487 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 13 08:37:43 compute-0 nova_compute[248510]: 2025-12-13 08:37:43.869 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:43 compute-0 nova_compute[248510]: 2025-12-13 08:37:43.870 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:43 compute-0 nova_compute[248510]: 2025-12-13 08:37:43.881 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:37:43 compute-0 nova_compute[248510]: 2025-12-13 08:37:43.881 248514 INFO nova.compute.claims [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.049 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:37:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/967052416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:37:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:44.589 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.590 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.591 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:44.591 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.596 248514 DEBUG nova.compute.provider_tree [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.650 248514 DEBUG nova.scheduler.client.report [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.682 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.683 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.737 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.737 248514 DEBUG nova.network.neutron [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.767 248514 INFO nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.789 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:37:44 compute-0 ceph-mon[76537]: pgmap v2214: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 91 op/s
Dec 13 08:37:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/967052416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.917 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.918 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.919 248514 INFO nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Creating image(s)
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.943 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.967 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.985 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:44 compute-0 nova_compute[248510]: 2025-12-13 08:37:44.989 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.080 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.082 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.083 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.084 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.115 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.121 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 fa9180aa-8387-44cb-9e70-535f0652e390_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.153 248514 DEBUG nova.policy [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be7c161a509a4392b0b424b31178f424', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4497dd13cb734db4999e9c01823dc0fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.606 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 fa9180aa-8387-44cb-9e70-535f0652e390_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.674 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] resizing rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:37:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.757 248514 DEBUG nova.objects.instance [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lazy-loading 'migration_context' on Instance uuid fa9180aa-8387-44cb-9e70-535f0652e390 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.776 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.777 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Ensure instance console log exists: /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.777 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.777 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:45 compute-0 nova_compute[248510]: 2025-12-13 08:37:45.778 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:37:46 compute-0 nova_compute[248510]: 2025-12-13 08:37:46.716 248514 DEBUG nova.network.neutron [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Successfully created port: 3b4d29b5-8158-4612-8f60-a6a57327d01c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:37:46 compute-0 ceph-mon[76537]: pgmap v2215: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 13 08:37:47 compute-0 nova_compute[248510]: 2025-12-13 08:37:47.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:47 compute-0 ovn_controller[148476]: 2025-12-13T08:37:47Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:82:3a 10.100.0.3
Dec 13 08:37:47 compute-0 ovn_controller[148476]: 2025-12-13T08:37:47Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:82:3a 10.100.0.3
Dec 13 08:37:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 64 op/s
Dec 13 08:37:48 compute-0 nova_compute[248510]: 2025-12-13 08:37:48.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:37:48 compute-0 ceph-mon[76537]: pgmap v2216: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 64 op/s
Dec 13 08:37:49 compute-0 nova_compute[248510]: 2025-12-13 08:37:49.361 248514 DEBUG nova.network.neutron [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Successfully updated port: 3b4d29b5-8158-4612-8f60-a6a57327d01c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:37:49 compute-0 nova_compute[248510]: 2025-12-13 08:37:49.380 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "refresh_cache-fa9180aa-8387-44cb-9e70-535f0652e390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:37:49 compute-0 nova_compute[248510]: 2025-12-13 08:37:49.381 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquired lock "refresh_cache-fa9180aa-8387-44cb-9e70-535f0652e390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:37:49 compute-0 nova_compute[248510]: 2025-12-13 08:37:49.381 248514 DEBUG nova.network.neutron [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:37:49 compute-0 nova_compute[248510]: 2025-12-13 08:37:49.663 248514 DEBUG nova.compute.manager [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-changed-3b4d29b5-8158-4612-8f60-a6a57327d01c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:37:49 compute-0 nova_compute[248510]: 2025-12-13 08:37:49.664 248514 DEBUG nova.compute.manager [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Refreshing instance network info cache due to event network-changed-3b4d29b5-8158-4612-8f60-a6a57327d01c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:37:49 compute-0 nova_compute[248510]: 2025-12-13 08:37:49.664 248514 DEBUG oslo_concurrency.lockutils [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fa9180aa-8387-44cb-9e70-535f0652e390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:37:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 133 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 121 op/s
Dec 13 08:37:49 compute-0 nova_compute[248510]: 2025-12-13 08:37:49.754 248514 DEBUG nova.network.neutron [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:37:50 compute-0 nova_compute[248510]: 2025-12-13 08:37:50.285 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:50.594 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:50 compute-0 nova_compute[248510]: 2025-12-13 08:37:50.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:50 compute-0 nova_compute[248510]: 2025-12-13 08:37:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:37:50 compute-0 nova_compute[248510]: 2025-12-13 08:37:50.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:37:50 compute-0 nova_compute[248510]: 2025-12-13 08:37:50.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:37:50 compute-0 nova_compute[248510]: 2025-12-13 08:37:50.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 08:37:50 compute-0 ceph-mon[76537]: pgmap v2217: 321 pgs: 321 active+clean; 133 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 121 op/s
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.214 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.215 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.215 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.216 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.267 248514 DEBUG nova.network.neutron [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Updating instance_info_cache with network_info: [{"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.296 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Releasing lock "refresh_cache-fa9180aa-8387-44cb-9e70-535f0652e390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.296 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Instance network_info: |[{"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.297 248514 DEBUG oslo_concurrency.lockutils [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fa9180aa-8387-44cb-9e70-535f0652e390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.298 248514 DEBUG nova.network.neutron [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Refreshing network info cache for port 3b4d29b5-8158-4612-8f60-a6a57327d01c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.305 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Start _get_guest_xml network_info=[{"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.311 248514 WARNING nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.325 248514 DEBUG nova.virt.libvirt.host [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.325 248514 DEBUG nova.virt.libvirt.host [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.331 248514 DEBUG nova.virt.libvirt.host [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.332 248514 DEBUG nova.virt.libvirt.host [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.333 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.333 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.333 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.334 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.334 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.334 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.334 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.335 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.335 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.335 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.335 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.336 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.340 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:37:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 479 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Dec 13 08:37:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:37:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/112773468' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.962 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/112773468' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.990 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:51 compute-0 nova_compute[248510]: 2025-12-13 08:37:51.993 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:37:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305898004' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.570 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.572 248514 DEBUG nova.virt.libvirt.vif [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:37:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-354314434',display_name='tempest-ServerPasswordTestJSON-server-354314434',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-354314434',id=86,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4497dd13cb734db4999e9c01823dc0fc',ramdisk_id='',reservation_id='r-dolnm7xr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1689122570',owner_user_name='tempest-ServerPasswordTestJSON-1689122570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:37:44Z,user_data=None,user_id='be7c161a509a4392b0b424b31178f424',uuid=fa9180aa-8387-44cb-9e70-535f0652e390,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.572 248514 DEBUG nova.network.os_vif_util [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Converting VIF {"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.573 248514 DEBUG nova.network.os_vif_util [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.574 248514 DEBUG nova.objects.instance [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lazy-loading 'pci_devices' on Instance uuid fa9180aa-8387-44cb-9e70-535f0652e390 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.590 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:37:52 compute-0 nova_compute[248510]:   <uuid>fa9180aa-8387-44cb-9e70-535f0652e390</uuid>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   <name>instance-00000056</name>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerPasswordTestJSON-server-354314434</nova:name>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:37:51</nova:creationTime>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:37:52 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:37:52 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:37:52 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:37:52 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:37:52 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:37:52 compute-0 nova_compute[248510]:         <nova:user uuid="be7c161a509a4392b0b424b31178f424">tempest-ServerPasswordTestJSON-1689122570-project-member</nova:user>
Dec 13 08:37:52 compute-0 nova_compute[248510]:         <nova:project uuid="4497dd13cb734db4999e9c01823dc0fc">tempest-ServerPasswordTestJSON-1689122570</nova:project>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:37:52 compute-0 nova_compute[248510]:         <nova:port uuid="3b4d29b5-8158-4612-8f60-a6a57327d01c">
Dec 13 08:37:52 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <system>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <entry name="serial">fa9180aa-8387-44cb-9e70-535f0652e390</entry>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <entry name="uuid">fa9180aa-8387-44cb-9e70-535f0652e390</entry>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     </system>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   <os>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   </os>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   <features>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   </features>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/fa9180aa-8387-44cb-9e70-535f0652e390_disk">
Dec 13 08:37:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       </source>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:37:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/fa9180aa-8387-44cb-9e70-535f0652e390_disk.config">
Dec 13 08:37:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       </source>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:37:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:07:47:7f"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <target dev="tap3b4d29b5-81"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/console.log" append="off"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <video>
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     </video>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:37:52 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:37:52 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:37:52 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:37:52 compute-0 nova_compute[248510]: </domain>
Dec 13 08:37:52 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.591 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Preparing to wait for external event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.591 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.592 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.592 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.593 248514 DEBUG nova.virt.libvirt.vif [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:37:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-354314434',display_name='tempest-ServerPasswordTestJSON-server-354314434',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-354314434',id=86,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4497dd13cb734db4999e9c01823dc0fc',ramdisk_id='',reservation_id='r-dolnm7xr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1689122570',owner_user_name='tempest-ServerPasswordTestJSON-1689122570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:37:44Z,user_data=None,user_id='be7c161a509a4392b0b424b31178f424',uuid=fa9180aa-8387-44cb-9e70-535f0652e390,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.593 248514 DEBUG nova.network.os_vif_util [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Converting VIF {"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.593 248514 DEBUG nova.network.os_vif_util [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.594 248514 DEBUG os_vif [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.595 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.595 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.598 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b4d29b5-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.598 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b4d29b5-81, col_values=(('external_ids', {'iface-id': '3b4d29b5-8158-4612-8f60-a6a57327d01c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:47:7f', 'vm-uuid': 'fa9180aa-8387-44cb-9e70-535f0652e390'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.599 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:52 compute-0 NetworkManager[50376]: <info>  [1765615072.6008] manager: (tap3b4d29b5-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.602 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.609 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.610 248514 INFO os_vif [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81')
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.669 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.669 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.669 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] No VIF found with MAC fa:16:3e:07:47:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.670 248514 INFO nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Using config drive
Dec 13 08:37:52 compute-0 nova_compute[248510]: 2025-12-13 08:37:52.697 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:52 compute-0 ceph-mon[76537]: pgmap v2218: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 479 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Dec 13 08:37:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2305898004' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:37:53 compute-0 nova_compute[248510]: 2025-12-13 08:37:53.339 248514 INFO nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Creating config drive at /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/disk.config
Dec 13 08:37:53 compute-0 nova_compute[248510]: 2025-12-13 08:37:53.344 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53veqlx4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:53 compute-0 nova_compute[248510]: 2025-12-13 08:37:53.487 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53veqlx4" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:53 compute-0 nova_compute[248510]: 2025-12-13 08:37:53.515 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:37:53 compute-0 nova_compute[248510]: 2025-12-13 08:37:53.518 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/disk.config fa9180aa-8387-44cb-9e70-535f0652e390_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:53 compute-0 nova_compute[248510]: 2025-12-13 08:37:53.657 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/disk.config fa9180aa-8387-44cb-9e70-535f0652e390_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:53 compute-0 nova_compute[248510]: 2025-12-13 08:37:53.658 248514 INFO nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Deleting local config drive /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/disk.config because it was imported into RBD.
Dec 13 08:37:53 compute-0 kernel: tap3b4d29b5-81: entered promiscuous mode
Dec 13 08:37:53 compute-0 NetworkManager[50376]: <info>  [1765615073.7116] manager: (tap3b4d29b5-81): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Dec 13 08:37:53 compute-0 ovn_controller[148476]: 2025-12-13T08:37:53Z|00854|binding|INFO|Claiming lport 3b4d29b5-8158-4612-8f60-a6a57327d01c for this chassis.
Dec 13 08:37:53 compute-0 ovn_controller[148476]: 2025-12-13T08:37:53Z|00855|binding|INFO|3b4d29b5-8158-4612-8f60-a6a57327d01c: Claiming fa:16:3e:07:47:7f 10.100.0.11
Dec 13 08:37:53 compute-0 nova_compute[248510]: 2025-12-13 08:37:53.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.720 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:47:7f 10.100.0.11'], port_security=['fa:16:3e:07:47:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fa9180aa-8387-44cb-9e70-535f0652e390', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4497dd13cb734db4999e9c01823dc0fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82b41d33-9cc9-41ca-a7e4-ff901fea742b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1169a38-50f8-4ffe-a49f-5841c99ba14f, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b4d29b5-8158-4612-8f60-a6a57327d01c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.722 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b4d29b5-8158-4612-8f60-a6a57327d01c in datapath b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 bound to our chassis
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.724 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b0588191-fbf6-4ff4-8e6f-2ba53eda08f1
Dec 13 08:37:53 compute-0 ovn_controller[148476]: 2025-12-13T08:37:53Z|00856|binding|INFO|Setting lport 3b4d29b5-8158-4612-8f60-a6a57327d01c ovn-installed in OVS
Dec 13 08:37:53 compute-0 ovn_controller[148476]: 2025-12-13T08:37:53Z|00857|binding|INFO|Setting lport 3b4d29b5-8158-4612-8f60-a6a57327d01c up in Southbound
Dec 13 08:37:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Dec 13 08:37:53 compute-0 nova_compute[248510]: 2025-12-13 08:37:53.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.736 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[701de2c0-4316-4f90-a72e-a2e02ed6c15c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.737 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb0588191-f1 in ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.739 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb0588191-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.739 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f764fa34-4007-48d1-8522-e7eeca0e0634]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.740 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aff6e5b1-34d0-4ca9-900f-569422e129a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.753 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa4b131-3c10-40a7-8345-0c45954f5316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 systemd-machined[210538]: New machine qemu-106-instance-00000056.
Dec 13 08:37:53 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-00000056.
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.770 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[102aff35-16fd-427d-ad5c-a0e1474c72ec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 systemd-udevd[332533]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:37:53 compute-0 NetworkManager[50376]: <info>  [1765615073.7928] device (tap3b4d29b5-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:37:53 compute-0 NetworkManager[50376]: <info>  [1765615073.7934] device (tap3b4d29b5-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.798 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd24c2e-70b3-437d-a03d-0c37140da830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.803 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f761b2b0-7ec8-423b-84d1-d99c5493dcfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 systemd-udevd[332536]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:37:53 compute-0 NetworkManager[50376]: <info>  [1765615073.8048] manager: (tapb0588191-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.833 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9348087e-31a8-42e1-b1a0-a6a94903d413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.836 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[026ed689-1e9b-409a-8deb-e0872afa9557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 NetworkManager[50376]: <info>  [1765615073.8637] device (tapb0588191-f0): carrier: link connected
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.871 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[08701887-d5b7-4ac2-852e-818359e80a55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.892 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c993e89-f292-4f3f-a9dd-2b49450e09fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0588191-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:00:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765107, 'reachable_time': 31885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332562, 'error': None, 'target': 'ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.909 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f1c212-3a29-4e70-8b25-1e24649d7954]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 765107, 'tstamp': 765107}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332563, 'error': None, 'target': 'ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.928 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[499307b2-b434-40bb-b16b-64afb43a89c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0588191-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:00:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765107, 'reachable_time': 31885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332564, 'error': None, 'target': 'ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.965 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aff49922-9306-44df-b934-04b8ce7f6cc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.042 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[230afc5b-3292-4230-b0bb-02bdfc6dd0f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.044 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0588191-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.044 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.045 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0588191-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:54 compute-0 NetworkManager[50376]: <info>  [1765615074.0476] manager: (tapb0588191-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.047 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:54 compute-0 kernel: tapb0588191-f0: entered promiscuous mode
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.050 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.052 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb0588191-f0, col_values=(('external_ids', {'iface-id': '4ccc34c2-c568-4745-9202-7dc43ff5cdee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.053 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:54 compute-0 ovn_controller[148476]: 2025-12-13T08:37:54Z|00858|binding|INFO|Releasing lport 4ccc34c2-c568-4745-9202-7dc43ff5cdee from this chassis (sb_readonly=0)
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.069 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b0588191-fbf6-4ff4-8e6f-2ba53eda08f1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b0588191-fbf6-4ff4-8e6f-2ba53eda08f1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.070 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2bccaa2e-f4bb-4f69-b415-a130a8fde0fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.071 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/b0588191-fbf6-4ff4-8e6f-2ba53eda08f1.pid.haproxy
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID b0588191-fbf6-4ff4-8e6f-2ba53eda08f1
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:37:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.071 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'env', 'PROCESS_TAG=haproxy-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b0588191-fbf6-4ff4-8e6f-2ba53eda08f1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.133 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615074.132913, fa9180aa-8387-44cb-9e70-535f0652e390 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.133 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] VM Started (Lifecycle Event)
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.249 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.262 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615074.1332505, fa9180aa-8387-44cb-9e70-535f0652e390 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.262 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] VM Paused (Lifecycle Event)
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.287 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.291 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.318 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:37:54 compute-0 podman[332638]: 2025-12-13 08:37:54.466733408 +0000 UTC m=+0.050946216 container create 064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.474 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.479 248514 DEBUG nova.network.neutron [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Updated VIF entry in instance network info cache for port 3b4d29b5-8158-4612-8f60-a6a57327d01c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.480 248514 DEBUG nova.network.neutron [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Updating instance_info_cache with network_info: [{"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:37:54 compute-0 systemd[1]: Started libpod-conmon-064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac.scope.
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.514 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.514 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.515 248514 DEBUG oslo_concurrency.lockutils [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fa9180aa-8387-44cb-9e70-535f0652e390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:37:54 compute-0 nova_compute[248510]: 2025-12-13 08:37:54.516 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:37:54 compute-0 podman[332638]: 2025-12-13 08:37:54.439781899 +0000 UTC m=+0.023994727 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:37:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5316c9592aac0df80f7827a4e0b5f1ce24a99c5a9f3cbdd4429ab127d248b430/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:37:54 compute-0 podman[332638]: 2025-12-13 08:37:54.581422266 +0000 UTC m=+0.165635094 container init 064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 08:37:54 compute-0 podman[332638]: 2025-12-13 08:37:54.587313713 +0000 UTC m=+0.171526521 container start 064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:37:54 compute-0 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [NOTICE]   (332657) : New worker (332659) forked
Dec 13 08:37:54 compute-0 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [NOTICE]   (332657) : Loading success.
Dec 13 08:37:54 compute-0 ceph-mon[76537]: pgmap v2219: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Dec 13 08:37:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:55.418 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:55.419 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:37:55.420 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 13 08:37:55 compute-0 nova_compute[248510]: 2025-12-13 08:37:55.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:55 compute-0 nova_compute[248510]: 2025-12-13 08:37:55.978 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:37:56 compute-0 nova_compute[248510]: 2025-12-13 08:37:56.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:37:56 compute-0 nova_compute[248510]: 2025-12-13 08:37:56.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:56 compute-0 nova_compute[248510]: 2025-12-13 08:37:56.805 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:56 compute-0 nova_compute[248510]: 2025-12-13 08:37:56.805 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:56 compute-0 nova_compute[248510]: 2025-12-13 08:37:56.805 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:37:56 compute-0 nova_compute[248510]: 2025-12-13 08:37:56.806 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:56 compute-0 ceph-mon[76537]: pgmap v2220: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 13 08:37:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:37:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143835827' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.381 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.472 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.473 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.477 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.477 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.720 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.722 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3714MB free_disk=59.921302248723805GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.722 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.723 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:37:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.816 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9b486227-b98c-4393-9a3c-aae3e3c419a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.816 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance fa9180aa-8387-44cb-9e70-535f0652e390 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.816 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.816 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:37:57 compute-0 nova_compute[248510]: 2025-12-13 08:37:57.894 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:37:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4143835827' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:37:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:37:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776613182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:37:58 compute-0 nova_compute[248510]: 2025-12-13 08:37:58.452 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:37:58 compute-0 nova_compute[248510]: 2025-12-13 08:37:58.459 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:37:58 compute-0 nova_compute[248510]: 2025-12-13 08:37:58.479 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:37:58 compute-0 nova_compute[248510]: 2025-12-13 08:37:58.514 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:37:58 compute-0 nova_compute[248510]: 2025-12-13 08:37:58.515 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:37:59 compute-0 ceph-mon[76537]: pgmap v2221: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 13 08:37:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/776613182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:37:59 compute-0 nova_compute[248510]: 2025-12-13 08:37:59.515 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:37:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Dec 13 08:37:59 compute-0 nova_compute[248510]: 2025-12-13 08:37:59.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:37:59 compute-0 nova_compute[248510]: 2025-12-13 08:37:59.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.467 248514 DEBUG nova.compute.manager [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.511 248514 INFO nova.compute.manager [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] instance snapshotting
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.512 248514 DEBUG nova.objects.instance [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'flavor' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.734 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.841 248514 INFO nova.virt.libvirt.driver [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Beginning live snapshot process
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.920 248514 DEBUG nova.compute.manager [req-cc26685b-7995-410e-abda-01551f004f33 req-51ef4106-681d-4636-a509-8ed62ec555dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.920 248514 DEBUG oslo_concurrency.lockutils [req-cc26685b-7995-410e-abda-01551f004f33 req-51ef4106-681d-4636-a509-8ed62ec555dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.920 248514 DEBUG oslo_concurrency.lockutils [req-cc26685b-7995-410e-abda-01551f004f33 req-51ef4106-681d-4636-a509-8ed62ec555dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.921 248514 DEBUG oslo_concurrency.lockutils [req-cc26685b-7995-410e-abda-01551f004f33 req-51ef4106-681d-4636-a509-8ed62ec555dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.921 248514 DEBUG nova.compute.manager [req-cc26685b-7995-410e-abda-01551f004f33 req-51ef4106-681d-4636-a509-8ed62ec555dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Processing event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.922 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.927 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615080.9268777, fa9180aa-8387-44cb-9e70-535f0652e390 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.927 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] VM Resumed (Lifecycle Event)
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.930 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.947 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.948 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.972 248514 INFO nova.virt.libvirt.driver [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Instance spawned successfully.
Dec 13 08:38:00 compute-0 nova_compute[248510]: 2025-12-13 08:38:00.972 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:38:00 compute-0 podman[332715]: 2025-12-13 08:38:00.992250404 +0000 UTC m=+0.060465642 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 08:38:01 compute-0 podman[332714]: 2025-12-13 08:38:01.021015978 +0000 UTC m=+0.103023869 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.024 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.027 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.035 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.035 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.035 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.036 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.036 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.037 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:01 compute-0 podman[332713]: 2025-12-13 08:38:01.040256716 +0000 UTC m=+0.119830947 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.051 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.090 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:38:01 compute-0 ceph-mon[76537]: pgmap v2222: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.297 248514 INFO nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Took 16.38 seconds to spawn the instance on the hypervisor.
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.297 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.307 248514 DEBUG nova.virt.libvirt.imagebackend [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.315 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.315 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.323 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.323 248514 INFO nova.compute.claims [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.399 248514 INFO nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Took 17.57 seconds to build instance.
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.459 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.546 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:01 compute-0 nova_compute[248510]: 2025-12-13 08:38:01.616 248514 DEBUG nova.storage.rbd_utils [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(4de1c3c190d94cda92595db338170aee) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:38:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Dec 13 08:38:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Dec 13 08:38:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Dec 13 08:38:02 compute-0 ceph-mon[76537]: pgmap v2223: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Dec 13 08:38:02 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.148 248514 DEBUG nova.storage.rbd_utils [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] cloning vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk@4de1c3c190d94cda92595db338170aee to images/bd9a009b-216b-49be-81a8-600047037026 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:38:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:38:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030052450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.207 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.213 248514 DEBUG nova.compute.provider_tree [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.245 248514 DEBUG nova.storage.rbd_utils [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] flattening images/bd9a009b-216b-49be-81a8-600047037026 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.286 248514 DEBUG nova.scheduler.client.report [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.334 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.335 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.400 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.401 248514 DEBUG nova.network.neutron [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.426 248514 INFO nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.456 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.574 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.577 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.580 248514 INFO nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Creating image(s)
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.600 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.623 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.655 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.660 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.692 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.696 248514 DEBUG nova.policy [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '302853b7f91745baadf52361a0a7d535', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c968daf58e624ebda00676d79f6bde96', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.739 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.739 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.740 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.740 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.758 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.761 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 6fb6c605-344c-4ed9-806d-96964b0474f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:02 compute-0 nova_compute[248510]: 2025-12-13 08:38:02.801 248514 DEBUG nova.storage.rbd_utils [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] removing snapshot(4de1c3c190d94cda92595db338170aee) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.061 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 6fb6c605-344c-4ed9-806d-96964b0474f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.085 248514 DEBUG nova.compute.manager [req-410f8055-5b23-4d3e-976e-b29b091ee14f req-a964e031-bbe0-4374-97dd-5f4bb04cbe49 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.085 248514 DEBUG oslo_concurrency.lockutils [req-410f8055-5b23-4d3e-976e-b29b091ee14f req-a964e031-bbe0-4374-97dd-5f4bb04cbe49 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.085 248514 DEBUG oslo_concurrency.lockutils [req-410f8055-5b23-4d3e-976e-b29b091ee14f req-a964e031-bbe0-4374-97dd-5f4bb04cbe49 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.086 248514 DEBUG oslo_concurrency.lockutils [req-410f8055-5b23-4d3e-976e-b29b091ee14f req-a964e031-bbe0-4374-97dd-5f4bb04cbe49 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.086 248514 DEBUG nova.compute.manager [req-410f8055-5b23-4d3e-976e-b29b091ee14f req-a964e031-bbe0-4374-97dd-5f4bb04cbe49 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] No waiting events found dispatching network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.086 248514 WARNING nova.compute.manager [req-410f8055-5b23-4d3e-976e-b29b091ee14f req-a964e031-bbe0-4374-97dd-5f4bb04cbe49 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received unexpected event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c for instance with vm_state active and task_state None.
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.112 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] resizing rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:38:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Dec 13 08:38:03 compute-0 ceph-mon[76537]: osdmap e228: 3 total, 3 up, 3 in
Dec 13 08:38:03 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2030052450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Dec 13 08:38:03 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.160 248514 DEBUG nova.storage.rbd_utils [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(snap) on rbd image(bd9a009b-216b-49be-81a8-600047037026) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.241 248514 DEBUG nova.objects.instance [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'migration_context' on Instance uuid 6fb6c605-344c-4ed9-806d-96964b0474f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.265 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.266 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Ensure instance console log exists: /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.267 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.267 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.267 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 199 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 51 op/s
Dec 13 08:38:03 compute-0 nova_compute[248510]: 2025-12-13 08:38:03.773 248514 DEBUG nova.network.neutron [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Successfully created port: 3b8730f4-e225-4c0a-bf95-708c9c122a4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.088 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.089 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.117 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:38:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Dec 13 08:38:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Dec 13 08:38:04 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Dec 13 08:38:04 compute-0 ceph-mon[76537]: osdmap e229: 3 total, 3 up, 3 in
Dec 13 08:38:04 compute-0 ceph-mon[76537]: pgmap v2226: 321 pgs: 321 active+clean; 199 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 51 op/s
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.248 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.249 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.261 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.262 248514 INFO nova.compute.claims [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.423 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.503 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.503 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.504 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.504 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.504 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.506 248514 INFO nova.compute.manager [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Terminating instance
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.507 248514 DEBUG nova.compute.manager [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:38:04 compute-0 kernel: tap3b4d29b5-81 (unregistering): left promiscuous mode
Dec 13 08:38:04 compute-0 NetworkManager[50376]: <info>  [1765615084.5427] device (tap3b4d29b5-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:38:04 compute-0 ovn_controller[148476]: 2025-12-13T08:38:04Z|00859|binding|INFO|Releasing lport 3b4d29b5-8158-4612-8f60-a6a57327d01c from this chassis (sb_readonly=0)
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:04 compute-0 ovn_controller[148476]: 2025-12-13T08:38:04Z|00860|binding|INFO|Setting lport 3b4d29b5-8158-4612-8f60-a6a57327d01c down in Southbound
Dec 13 08:38:04 compute-0 ovn_controller[148476]: 2025-12-13T08:38:04Z|00861|binding|INFO|Removing iface tap3b4d29b5-81 ovn-installed in OVS
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.553 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.558 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:47:7f 10.100.0.11'], port_security=['fa:16:3e:07:47:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fa9180aa-8387-44cb-9e70-535f0652e390', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4497dd13cb734db4999e9c01823dc0fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82b41d33-9cc9-41ca-a7e4-ff901fea742b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1169a38-50f8-4ffe-a49f-5841c99ba14f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b4d29b5-8158-4612-8f60-a6a57327d01c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.560 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b4d29b5-8158-4612-8f60-a6a57327d01c in datapath b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 unbound from our chassis
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.563 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.564 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4449b9ea-4d84-4b4d-8693-77d5af4c1b5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.565 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 namespace which is not needed anymore
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.567 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:04 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000056.scope: Deactivated successfully.
Dec 13 08:38:04 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000056.scope: Consumed 4.023s CPU time.
Dec 13 08:38:04 compute-0 systemd-machined[210538]: Machine qemu-106-instance-00000056 terminated.
Dec 13 08:38:04 compute-0 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [NOTICE]   (332657) : haproxy version is 2.8.14-c23fe91
Dec 13 08:38:04 compute-0 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [NOTICE]   (332657) : path to executable is /usr/sbin/haproxy
Dec 13 08:38:04 compute-0 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [WARNING]  (332657) : Exiting Master process...
Dec 13 08:38:04 compute-0 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [ALERT]    (332657) : Current worker (332659) exited with code 143 (Terminated)
Dec 13 08:38:04 compute-0 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [WARNING]  (332657) : All workers exited. Exiting... (0)
Dec 13 08:38:04 compute-0 systemd[1]: libpod-064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac.scope: Deactivated successfully.
Dec 13 08:38:04 compute-0 podman[333149]: 2025-12-13 08:38:04.711570489 +0000 UTC m=+0.040072767 container died 064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5316c9592aac0df80f7827a4e0b5f1ce24a99c5a9f3cbdd4429ab127d248b430-merged.mount: Deactivated successfully.
Dec 13 08:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac-userdata-shm.mount: Deactivated successfully.
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.747 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.752 248514 INFO nova.virt.libvirt.driver [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Instance destroyed successfully.
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.753 248514 DEBUG nova.objects.instance [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lazy-loading 'resources' on Instance uuid fa9180aa-8387-44cb-9e70-535f0652e390 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:04 compute-0 podman[333149]: 2025-12-13 08:38:04.757232322 +0000 UTC m=+0.085734610 container cleanup 064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.775 248514 DEBUG nova.virt.libvirt.vif [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:37:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-354314434',display_name='tempest-ServerPasswordTestJSON-server-354314434',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-354314434',id=86,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:38:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4497dd13cb734db4999e9c01823dc0fc',ramdisk_id='',reservation_id='r-dolnm7xr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1689122570',owner_user_name='tempest-ServerPasswordTestJSON-1689122570-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:38:04Z,user_data=None,user_id='be7c161a509a4392b0b424b31178f424',uuid=fa9180aa-8387-44cb-9e70-535f0652e390,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.776 248514 DEBUG nova.network.os_vif_util [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Converting VIF {"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:38:04 compute-0 systemd[1]: libpod-conmon-064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac.scope: Deactivated successfully.
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.777 248514 DEBUG nova.network.os_vif_util [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.777 248514 DEBUG os_vif [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.779 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.779 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b4d29b5-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.781 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.785 248514 INFO os_vif [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81')
Dec 13 08:38:04 compute-0 podman[333191]: 2025-12-13 08:38:04.837162797 +0000 UTC m=+0.044483495 container remove 064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.843 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64e45504-ec32-4457-ac65-f58d4ce805b4]: (4, ('Sat Dec 13 08:38:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 (064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac)\n064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac\nSat Dec 13 08:38:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 (064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac)\n064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.845 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[672a7193-c261-48a4-90c1-93377de0015a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.846 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0588191-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.848 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:04 compute-0 kernel: tapb0588191-f0: left promiscuous mode
Dec 13 08:38:04 compute-0 nova_compute[248510]: 2025-12-13 08:38:04.863 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.866 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db41451e-2bfe-4cb4-88c6-70f2762e6c25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.882 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[461cd323-6d2a-4c54-a443-542900ba44df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.883 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75b17f18-d9e1-4010-bca1-92aebfcfecd6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.900 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb7f520d-a208-40a7-848a-e1a107f2023e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765100, 'reachable_time': 24787, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333223, 'error': None, 'target': 'ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.902 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:38:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.903 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e45f4e06-f3da-4673-904a-2da79dd9e07e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:04 compute-0 systemd[1]: run-netns-ovnmeta\x2db0588191\x2dfbf6\x2d4ff4\x2d8e6f\x2d2ba53eda08f1.mount: Deactivated successfully.
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.003 248514 DEBUG nova.network.neutron [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Successfully updated port: 3b8730f4-e225-4c0a-bf95-708c9c122a4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.034 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "refresh_cache-6fb6c605-344c-4ed9-806d-96964b0474f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.035 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquired lock "refresh_cache-6fb6c605-344c-4ed9-806d-96964b0474f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.035 248514 DEBUG nova.network.neutron [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.047 248514 INFO nova.virt.libvirt.driver [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Deleting instance files /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390_del
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.048 248514 INFO nova.virt.libvirt.driver [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Deletion of /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390_del complete
Dec 13 08:38:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:38:05 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255841871' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.073 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.078 248514 DEBUG nova.compute.provider_tree [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.097 248514 DEBUG nova.scheduler.client.report [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.103 248514 INFO nova.compute.manager [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Took 0.60 seconds to destroy the instance on the hypervisor.
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.104 248514 DEBUG oslo.service.loopingcall [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.104 248514 DEBUG nova.compute.manager [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.104 248514 DEBUG nova.network.neutron [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.127 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.127 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:38:05 compute-0 ceph-mon[76537]: osdmap e230: 3 total, 3 up, 3 in
Dec 13 08:38:05 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1255841871' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.196 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.196 248514 DEBUG nova.network.neutron [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.218 248514 INFO nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.236 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.351 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.353 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.354 248514 INFO nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Creating image(s)
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.380 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.404 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.425 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.429 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.461 248514 DEBUG nova.network.neutron [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.471 248514 DEBUG nova.compute.manager [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-vif-unplugged-3b4d29b5-8158-4612-8f60-a6a57327d01c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.471 248514 DEBUG oslo_concurrency.lockutils [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.471 248514 DEBUG oslo_concurrency.lockutils [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.472 248514 DEBUG oslo_concurrency.lockutils [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.472 248514 DEBUG nova.compute.manager [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] No waiting events found dispatching network-vif-unplugged-3b4d29b5-8158-4612-8f60-a6a57327d01c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.472 248514 DEBUG nova.compute.manager [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-vif-unplugged-3b4d29b5-8158-4612-8f60-a6a57327d01c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.472 248514 DEBUG nova.compute.manager [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-changed-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.472 248514 DEBUG nova.compute.manager [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Refreshing instance network info cache due to event network-changed-3b8730f4-e225-4c0a-bf95-708c9c122a4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.473 248514 DEBUG oslo_concurrency.lockutils [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-6fb6c605-344c-4ed9-806d-96964b0474f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.502 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.502 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.503 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.503 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.521 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.524 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 88cb43c1-f01b-4098-84ea-d372176a0e20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.718 248514 DEBUG nova.policy [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '302853b7f91745baadf52361a0a7d535', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c968daf58e624ebda00676d79f6bde96', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:38:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 270 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 9.5 MiB/s wr, 332 op/s
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.926 248514 INFO nova.virt.libvirt.driver [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Snapshot image upload complete
Dec 13 08:38:05 compute-0 nova_compute[248510]: 2025-12-13 08:38:05.927 248514 INFO nova.compute.manager [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Took 5.38 seconds to snapshot the instance on the hypervisor.
Dec 13 08:38:06 compute-0 ceph-mon[76537]: pgmap v2228: 321 pgs: 321 active+clean; 270 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 9.5 MiB/s wr, 332 op/s
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.194 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 88cb43c1-f01b-4098-84ea-d372176a0e20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.282 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] resizing rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.405 248514 DEBUG nova.objects.instance [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'migration_context' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.414 248514 DEBUG nova.compute.manager [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Dec 13 08:38:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.488 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.488 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Ensure instance console log exists: /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.489 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.489 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.490 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.728 248514 DEBUG nova.network.neutron [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.754 248514 INFO nova.compute.manager [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Took 1.65 seconds to deallocate network for instance.
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.836 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:06 compute-0 nova_compute[248510]: 2025-12-13 08:38:06.836 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.024 248514 DEBUG oslo_concurrency.processutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.467 248514 DEBUG nova.network.neutron [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Updating instance_info_cache with network_info: [{"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:38:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:38:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392605823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.570 248514 DEBUG oslo_concurrency.processutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.578 248514 DEBUG nova.compute.provider_tree [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:38:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2392605823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.693 248514 DEBUG nova.compute.manager [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.694 248514 DEBUG oslo_concurrency.lockutils [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.695 248514 DEBUG oslo_concurrency.lockutils [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.695 248514 DEBUG oslo_concurrency.lockutils [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.695 248514 DEBUG nova.compute.manager [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] No waiting events found dispatching network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.695 248514 WARNING nova.compute.manager [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received unexpected event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c for instance with vm_state deleted and task_state None.
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.696 248514 DEBUG nova.compute.manager [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-vif-deleted-3b4d29b5-8158-4612-8f60-a6a57327d01c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.698 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Releasing lock "refresh_cache-6fb6c605-344c-4ed9-806d-96964b0474f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.698 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Instance network_info: |[{"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.699 248514 DEBUG oslo_concurrency.lockutils [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-6fb6c605-344c-4ed9-806d-96964b0474f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.699 248514 DEBUG nova.network.neutron [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Refreshing network info cache for port 3b8730f4-e225-4c0a-bf95-708c9c122a4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.702 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Start _get_guest_xml network_info=[{"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.706 248514 DEBUG nova.scheduler.client.report [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.713 248514 WARNING nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.718 248514 DEBUG nova.virt.libvirt.host [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.719 248514 DEBUG nova.virt.libvirt.host [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.724 248514 DEBUG nova.virt.libvirt.host [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.725 248514 DEBUG nova.virt.libvirt.host [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.725 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.725 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.726 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.726 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.727 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.727 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.727 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.727 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.727 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.728 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.728 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.728 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:38:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 270 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 9.5 MiB/s wr, 332 op/s
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.732 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.772 248514 DEBUG nova.network.neutron [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Successfully created port: 7527f90e-f037-4a94-a011-f952b6e72722 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.782 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.823 248514 INFO nova.scheduler.client.report [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Deleted allocations for instance fa9180aa-8387-44cb-9e70-535f0652e390
Dec 13 08:38:07 compute-0 nova_compute[248510]: 2025-12-13 08:38:07.931 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241218707' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.311 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.332 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.337 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:08 compute-0 ceph-mon[76537]: pgmap v2229: 321 pgs: 321 active+clean; 270 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 9.5 MiB/s wr, 332 op/s
Dec 13 08:38:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2241218707' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.624 248514 DEBUG nova.compute.manager [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.665 248514 INFO nova.compute.manager [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] instance snapshotting
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.666 248514 DEBUG nova.objects.instance [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'flavor' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1701116610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.935 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.936 248514 DEBUG nova.virt.libvirt.vif [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:37:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1394190380',display_name='tempest-ServerRescueNegativeTestJSON-server-1394190380',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1394190380',id=87,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-689d01nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='te
mpest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:02Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=6fb6c605-344c-4ed9-806d-96964b0474f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.936 248514 DEBUG nova.network.os_vif_util [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.937 248514 DEBUG nova.network.os_vif_util [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.938 248514 DEBUG nova.objects.instance [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6fb6c605-344c-4ed9-806d-96964b0474f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.960 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:38:08 compute-0 nova_compute[248510]:   <uuid>6fb6c605-344c-4ed9-806d-96964b0474f9</uuid>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   <name>instance-00000057</name>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1394190380</nova:name>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:38:07</nova:creationTime>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:38:08 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:38:08 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:38:08 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:38:08 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:38:08 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:38:08 compute-0 nova_compute[248510]:         <nova:user uuid="302853b7f91745baadf52361a0a7d535">tempest-ServerRescueNegativeTestJSON-1030815648-project-member</nova:user>
Dec 13 08:38:08 compute-0 nova_compute[248510]:         <nova:project uuid="c968daf58e624ebda00676d79f6bde96">tempest-ServerRescueNegativeTestJSON-1030815648</nova:project>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:38:08 compute-0 nova_compute[248510]:         <nova:port uuid="3b8730f4-e225-4c0a-bf95-708c9c122a4f">
Dec 13 08:38:08 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <system>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <entry name="serial">6fb6c605-344c-4ed9-806d-96964b0474f9</entry>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <entry name="uuid">6fb6c605-344c-4ed9-806d-96964b0474f9</entry>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     </system>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   <os>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   </os>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   <features>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   </features>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/6fb6c605-344c-4ed9-806d-96964b0474f9_disk">
Dec 13 08:38:08 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:08 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/6fb6c605-344c-4ed9-806d-96964b0474f9_disk.config">
Dec 13 08:38:08 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:08 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:e7:9a:26"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <target dev="tap3b8730f4-e2"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/console.log" append="off"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <video>
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     </video>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:38:08 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:38:08 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:38:08 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:38:08 compute-0 nova_compute[248510]: </domain>
Dec 13 08:38:08 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.961 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Preparing to wait for external event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.961 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.962 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.962 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.963 248514 DEBUG nova.virt.libvirt.vif [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:37:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1394190380',display_name='tempest-ServerRescueNegativeTestJSON-server-1394190380',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1394190380',id=87,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-689d01nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:02Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=6fb6c605-344c-4ed9-806d-96964b0474f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.963 248514 DEBUG nova.network.os_vif_util [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.963 248514 DEBUG nova.network.os_vif_util [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.964 248514 DEBUG os_vif [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.964 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.965 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.965 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.969 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.970 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b8730f4-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.970 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b8730f4-e2, col_values=(('external_ids', {'iface-id': '3b8730f4-e225-4c0a-bf95-708c9c122a4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:9a:26', 'vm-uuid': '6fb6c605-344c-4ed9-806d-96964b0474f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:08 compute-0 NetworkManager[50376]: <info>  [1765615088.9723] manager: (tap3b8730f4-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.974 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.976 248514 INFO nova.virt.libvirt.driver [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Beginning live snapshot process
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.982 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:08 compute-0 nova_compute[248510]: 2025-12-13 08:38:08.983 248514 INFO os_vif [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2')
Dec 13 08:38:09 compute-0 nova_compute[248510]: 2025-12-13 08:38:09.116 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:09 compute-0 nova_compute[248510]: 2025-12-13 08:38:09.116 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:09 compute-0 nova_compute[248510]: 2025-12-13 08:38:09.116 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No VIF found with MAC fa:16:3e:e7:9a:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:38:09 compute-0 nova_compute[248510]: 2025-12-13 08:38:09.117 248514 INFO nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Using config drive
Dec 13 08:38:09 compute-0 nova_compute[248510]: 2025-12-13 08:38:09.139 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:09 compute-0 nova_compute[248510]: 2025-12-13 08:38:09.188 248514 DEBUG nova.virt.libvirt.imagebackend [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:38:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:38:09
Dec 13 08:38:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:38:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:38:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'vms', 'volumes', 'backups']
Dec 13 08:38:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:38:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1701116610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:09 compute-0 nova_compute[248510]: 2025-12-13 08:38:09.727 248514 DEBUG nova.storage.rbd_utils [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(c2d837b896cf41d2a900caaedc6b42bf) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:38:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 291 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 11 MiB/s wr, 302 op/s
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.219 248514 DEBUG nova.network.neutron [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Successfully updated port: 7527f90e-f037-4a94-a011-f952b6e72722 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.248 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.249 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquired lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.249 248514 DEBUG nova.network.neutron [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.396 248514 DEBUG nova.compute.manager [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-changed-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.397 248514 DEBUG nova.compute.manager [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Refreshing instance network info cache due to event network-changed-7527f90e-f037-4a94-a011-f952b6e72722. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.397 248514 DEBUG oslo_concurrency.lockutils [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.594 248514 INFO nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Creating config drive at /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/disk.config
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.601 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb1nkvx5o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Dec 13 08:38:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Dec 13 08:38:10 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Dec 13 08:38:10 compute-0 ceph-mon[76537]: pgmap v2230: 321 pgs: 321 active+clean; 291 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 11 MiB/s wr, 302 op/s
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.642 248514 DEBUG nova.network.neutron [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.649 248514 DEBUG nova.network.neutron [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Updated VIF entry in instance network info cache for port 3b8730f4-e225-4c0a-bf95-708c9c122a4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.650 248514 DEBUG nova.network.neutron [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Updating instance_info_cache with network_info: [{"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.670 248514 DEBUG nova.storage.rbd_utils [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] cloning vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk@c2d837b896cf41d2a900caaedc6b42bf to images/54456b38-e7f9-43d3-a9dc-39087af61485 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:38:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.705 248514 DEBUG oslo_concurrency.lockutils [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-6fb6c605-344c-4ed9-806d-96964b0474f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.745 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb1nkvx5o" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.776 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.781 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/disk.config 6fb6c605-344c-4ed9-806d-96964b0474f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.822 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:38:10 compute-0 nova_compute[248510]: 2025-12-13 08:38:10.841 248514 DEBUG nova.storage.rbd_utils [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] flattening images/54456b38-e7f9-43d3-a9dc-39087af61485 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.114 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/disk.config 6fb6c605-344c-4ed9-806d-96964b0474f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.115 248514 INFO nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Deleting local config drive /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/disk.config because it was imported into RBD.
Dec 13 08:38:11 compute-0 kernel: tap3b8730f4-e2: entered promiscuous mode
Dec 13 08:38:11 compute-0 NetworkManager[50376]: <info>  [1765615091.1621] manager: (tap3b8730f4-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Dec 13 08:38:11 compute-0 ovn_controller[148476]: 2025-12-13T08:38:11Z|00862|binding|INFO|Claiming lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f for this chassis.
Dec 13 08:38:11 compute-0 ovn_controller[148476]: 2025-12-13T08:38:11Z|00863|binding|INFO|3b8730f4-e225-4c0a-bf95-708c9c122a4f: Claiming fa:16:3e:e7:9a:26 10.100.0.6
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.163 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:11 compute-0 ovn_controller[148476]: 2025-12-13T08:38:11Z|00864|binding|INFO|Setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f ovn-installed in OVS
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.181 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:9a:26 10.100.0.6'], port_security=['fa:16:3e:e7:9a:26 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6fb6c605-344c-4ed9-806d-96964b0474f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b8730f4-e225-4c0a-bf95-708c9c122a4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:38:11 compute-0 ovn_controller[148476]: 2025-12-13T08:38:11Z|00865|binding|INFO|Setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f up in Southbound
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.183 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b8730f4-e225-4c0a-bf95-708c9c122a4f in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 bound to our chassis
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.181 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.183 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.185 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f249527d-f9e6-43ce-a178-f71fc1d38891
Dec 13 08:38:11 compute-0 systemd-udevd[333657]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.196 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d454df-17e2-4374-919d-cd4bb0877447]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.197 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf249527d-f1 in ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.199 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf249527d-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.199 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d137f9ac-27d4-4116-953c-38f2d76f3e25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.200 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1425ceff-bcf5-4b8f-b4a6-8daa865aa041]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 systemd-machined[210538]: New machine qemu-107-instance-00000057.
Dec 13 08:38:11 compute-0 NetworkManager[50376]: <info>  [1765615091.2077] device (tap3b8730f4-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:38:11 compute-0 NetworkManager[50376]: <info>  [1765615091.2086] device (tap3b8730f4-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:38:11 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-00000057.
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.211 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[9203f856-5441-4900-91db-f27860424ea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.226 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[750b0465-ca13-43b2-b76b-48a44840596a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.255 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[731c8516-4026-4ca8-8dc5-1725a7c807ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.260 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbb41b8-1a62-414f-b722-9c6670ff16f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 NetworkManager[50376]: <info>  [1765615091.2609] manager: (tapf249527d-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/375)
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.297 248514 DEBUG nova.storage.rbd_utils [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] removing snapshot(c2d837b896cf41d2a900caaedc6b42bf) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.303 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[47d2a422-2e5e-4a19-b18c-02ed97a34160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.306 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7ae40b-a82e-4e9d-a23d-0a6de0602ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 NetworkManager[50376]: <info>  [1765615091.3285] device (tapf249527d-f0): carrier: link connected
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.332 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0154d072-2b6c-4e1c-83c6-2ee57b85645f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.349 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[91ef3c13-b95d-45dd-93e6-fc7c9d880c9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf249527d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:d2:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766854, 'reachable_time': 22354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333707, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.366 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[131f2321-ed77-4e34-a603-1b9e5e690d0a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:d2c4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766854, 'tstamp': 766854}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333708, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.385 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af3c0be6-7d92-4988-a8e3-5b16733c3f5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf249527d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:d2:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766854, 'reachable_time': 22354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333709, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.420 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5b0d51-cba2-4bb5-9a1d-93e061e4f6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.473 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c71ccb06-39d5-4297-b318-7ff9c0cceff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.474 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf249527d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.475 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.475 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf249527d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:11 compute-0 NetworkManager[50376]: <info>  [1765615091.4782] manager: (tapf249527d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:11 compute-0 kernel: tapf249527d-f0: entered promiscuous mode
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.482 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf249527d-f0, col_values=(('external_ids', {'iface-id': '238a7791-e0e6-4f94-b696-bd1f6886a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:38:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Dec 13 08:38:11 compute-0 ovn_controller[148476]: 2025-12-13T08:38:11Z|00866|binding|INFO|Releasing lport 238a7791-e0e6-4f94-b696-bd1f6886a564 from this chassis (sb_readonly=0)
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.485 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f249527d-f9e6-43ce-a178-f71fc1d38891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f249527d-f9e6-43ce-a178-f71fc1d38891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.485 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68dbe151-e804-4535-8fdb-7d5cea1170fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.486 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-f249527d-f9e6-43ce-a178-f71fc1d38891
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/f249527d-f9e6-43ce-a178-f71fc1d38891.pid.haproxy
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID f249527d-f9e6-43ce-a178-f71fc1d38891
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:38:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.487 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'env', 'PROCESS_TAG=haproxy-f249527d-f9e6-43ce-a178-f71fc1d38891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f249527d-f9e6-43ce-a178-f71fc1d38891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:38:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:11 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.501 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.519 248514 DEBUG nova.storage.rbd_utils [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(snap) on rbd image(54456b38-e7f9-43d3-a9dc-39087af61485) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:38:11 compute-0 ceph-mon[76537]: osdmap e231: 3 total, 3 up, 3 in
Dec 13 08:38:11 compute-0 ceph-mon[76537]: osdmap e232: 3 total, 3 up, 3 in
Dec 13 08:38:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 292 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.3 MiB/s wr, 261 op/s
Dec 13 08:38:11 compute-0 podman[333758]: 2025-12-13 08:38:11.875018073 +0000 UTC m=+0.060168806 container create a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.924 248514 DEBUG nova.compute.manager [req-183ddfb5-ea7a-4517-b643-aecee202569d req-3d17a553-e96e-4748-8145-b6dda46ec102 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.926 248514 DEBUG oslo_concurrency.lockutils [req-183ddfb5-ea7a-4517-b643-aecee202569d req-3d17a553-e96e-4748-8145-b6dda46ec102 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.927 248514 DEBUG oslo_concurrency.lockutils [req-183ddfb5-ea7a-4517-b643-aecee202569d req-3d17a553-e96e-4748-8145-b6dda46ec102 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.927 248514 DEBUG oslo_concurrency.lockutils [req-183ddfb5-ea7a-4517-b643-aecee202569d req-3d17a553-e96e-4748-8145-b6dda46ec102 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:11 compute-0 nova_compute[248510]: 2025-12-13 08:38:11.927 248514 DEBUG nova.compute.manager [req-183ddfb5-ea7a-4517-b643-aecee202569d req-3d17a553-e96e-4748-8145-b6dda46ec102 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Processing event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:38:11 compute-0 podman[333758]: 2025-12-13 08:38:11.838711991 +0000 UTC m=+0.023862774 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:38:11 compute-0 systemd[1]: Started libpod-conmon-a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441.scope.
Dec 13 08:38:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1630a06e863d3fc2328692d0f3a97579d4a2f822414283bd74ab2bb685b53f54/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:11 compute-0 podman[333758]: 2025-12-13 08:38:11.988381538 +0000 UTC m=+0.173532251 container init a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 08:38:11 compute-0 podman[333758]: 2025-12-13 08:38:11.993345091 +0000 UTC m=+0.178495784 container start a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:38:12 compute-0 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [NOTICE]   (333778) : New worker (333780) forked
Dec 13 08:38:12 compute-0 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [NOTICE]   (333778) : Loading success.
Dec 13 08:38:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Dec 13 08:38:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Dec 13 08:38:12 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Dec 13 08:38:12 compute-0 ceph-mon[76537]: pgmap v2233: 321 pgs: 321 active+clean; 292 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.3 MiB/s wr, 261 op/s
Dec 13 08:38:12 compute-0 ceph-mon[76537]: osdmap e233: 3 total, 3 up, 3 in
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.718 248514 DEBUG nova.network.neutron [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Updating instance_info_cache with network_info: [{"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.966 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Releasing lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.966 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance network_info: |[{"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.966 248514 DEBUG oslo_concurrency.lockutils [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.967 248514 DEBUG nova.network.neutron [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Refreshing network info cache for port 7527f90e-f037-4a94-a011-f952b6e72722 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.970 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Start _get_guest_xml network_info=[{"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.976 248514 WARNING nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.983 248514 DEBUG nova.virt.libvirt.host [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.983 248514 DEBUG nova.virt.libvirt.host [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.989 248514 DEBUG nova.virt.libvirt.host [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.989 248514 DEBUG nova.virt.libvirt.host [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.990 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.990 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.991 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.991 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.991 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.991 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.992 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.992 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.992 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.992 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.993 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.993 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:38:12 compute-0 nova_compute[248510]: 2025-12-13 08:38:12.996 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.041 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615093.0412018, 6fb6c605-344c-4ed9-806d-96964b0474f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.042 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] VM Started (Lifecycle Event)
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.044 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.049 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.053 248514 INFO nova.virt.libvirt.driver [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Instance spawned successfully.
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.053 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.072 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.081 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.081 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.082 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.082 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.083 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.083 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.092 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.120 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.121 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615093.0438383, 6fb6c605-344c-4ed9-806d-96964b0474f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.121 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] VM Paused (Lifecycle Event)
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.192 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.196 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615093.046616, 6fb6c605-344c-4ed9-806d-96964b0474f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.196 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] VM Resumed (Lifecycle Event)
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.208 248514 INFO nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Took 10.63 seconds to spawn the instance on the hypervisor.
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.208 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.243 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.253 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.280 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.295 248514 INFO nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Took 12.00 seconds to build instance.
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.318 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/161677277' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.630 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/161677277' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.673 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.680 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 329 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.6 MiB/s wr, 160 op/s
Dec 13 08:38:13 compute-0 nova_compute[248510]: 2025-12-13 08:38:13.973 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398371592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.299 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.301 248514 DEBUG nova.virt.libvirt.vif [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:38:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1217173991',display_name='tempest-ServerRescueNegativeTestJSON-server-1217173991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1217173991',id=88,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-axcrlu30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:05Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=88cb43c1-f01b-4098-84ea-d372176a0e20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.301 248514 DEBUG nova.network.os_vif_util [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.302 248514 DEBUG nova.network.os_vif_util [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.303 248514 DEBUG nova.objects.instance [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'pci_devices' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.407 248514 DEBUG nova.compute.manager [req-7562e53d-4832-4ea8-b923-8a78d46924a8 req-43f0f719-13bc-4900-860f-48e4aa392b5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.408 248514 DEBUG oslo_concurrency.lockutils [req-7562e53d-4832-4ea8-b923-8a78d46924a8 req-43f0f719-13bc-4900-860f-48e4aa392b5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.408 248514 DEBUG oslo_concurrency.lockutils [req-7562e53d-4832-4ea8-b923-8a78d46924a8 req-43f0f719-13bc-4900-860f-48e4aa392b5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.408 248514 DEBUG oslo_concurrency.lockutils [req-7562e53d-4832-4ea8-b923-8a78d46924a8 req-43f0f719-13bc-4900-860f-48e4aa392b5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.409 248514 DEBUG nova.compute.manager [req-7562e53d-4832-4ea8-b923-8a78d46924a8 req-43f0f719-13bc-4900-860f-48e4aa392b5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.409 248514 WARNING nova.compute.manager [req-7562e53d-4832-4ea8-b923-8a78d46924a8 req-43f0f719-13bc-4900-860f-48e4aa392b5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received unexpected event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with vm_state active and task_state None.
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.500 248514 INFO nova.virt.libvirt.driver [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Snapshot image upload complete
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.501 248514 INFO nova.compute.manager [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Took 5.81 seconds to snapshot the instance on the hypervisor.
Dec 13 08:38:14 compute-0 ceph-mon[76537]: pgmap v2235: 321 pgs: 321 active+clean; 329 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.6 MiB/s wr, 160 op/s
Dec 13 08:38:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3398371592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.855 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:38:14 compute-0 nova_compute[248510]:   <uuid>88cb43c1-f01b-4098-84ea-d372176a0e20</uuid>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   <name>instance-00000058</name>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1217173991</nova:name>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:38:12</nova:creationTime>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:38:14 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:38:14 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:38:14 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:38:14 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:38:14 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:38:14 compute-0 nova_compute[248510]:         <nova:user uuid="302853b7f91745baadf52361a0a7d535">tempest-ServerRescueNegativeTestJSON-1030815648-project-member</nova:user>
Dec 13 08:38:14 compute-0 nova_compute[248510]:         <nova:project uuid="c968daf58e624ebda00676d79f6bde96">tempest-ServerRescueNegativeTestJSON-1030815648</nova:project>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:38:14 compute-0 nova_compute[248510]:         <nova:port uuid="7527f90e-f037-4a94-a011-f952b6e72722">
Dec 13 08:38:14 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <system>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <entry name="serial">88cb43c1-f01b-4098-84ea-d372176a0e20</entry>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <entry name="uuid">88cb43c1-f01b-4098-84ea-d372176a0e20</entry>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     </system>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   <os>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   </os>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   <features>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   </features>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/88cb43c1-f01b-4098-84ea-d372176a0e20_disk">
Dec 13 08:38:14 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:14 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config">
Dec 13 08:38:14 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:14 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:f6:f9:b9"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <target dev="tap7527f90e-f0"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/console.log" append="off"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <video>
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     </video>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:38:14 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:38:14 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:38:14 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:38:14 compute-0 nova_compute[248510]: </domain>
Dec 13 08:38:14 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.857 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Preparing to wait for external event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.858 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.858 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.859 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.860 248514 DEBUG nova.virt.libvirt.vif [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:38:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1217173991',display_name='tempest-ServerRescueNegativeTestJSON-server-1217173991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1217173991',id=88,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-axcrlu30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:05Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=88cb43c1-f01b-4098-84ea-d372176a0e20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.860 248514 DEBUG nova.network.os_vif_util [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.861 248514 DEBUG nova.network.os_vif_util [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.861 248514 DEBUG os_vif [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.862 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.863 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.863 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.866 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.866 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7527f90e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.867 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7527f90e-f0, col_values=(('external_ids', {'iface-id': '7527f90e-f037-4a94-a011-f952b6e72722', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:f9:b9', 'vm-uuid': '88cb43c1-f01b-4098-84ea-d372176a0e20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.868 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:14 compute-0 NetworkManager[50376]: <info>  [1765615094.8693] manager: (tap7527f90e-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.875 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:14 compute-0 nova_compute[248510]: 2025-12-13 08:38:14.876 248514 INFO os_vif [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0')
Dec 13 08:38:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:38:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1377857341' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:38:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:38:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1377857341' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:38:15 compute-0 nova_compute[248510]: 2025-12-13 08:38:15.152 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:15 compute-0 nova_compute[248510]: 2025-12-13 08:38:15.152 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:15 compute-0 nova_compute[248510]: 2025-12-13 08:38:15.152 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No VIF found with MAC fa:16:3e:f6:f9:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:38:15 compute-0 nova_compute[248510]: 2025-12-13 08:38:15.153 248514 INFO nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Using config drive
Dec 13 08:38:15 compute-0 nova_compute[248510]: 2025-12-13 08:38:15.172 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1377857341' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:38:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1377857341' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:38:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 372 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 8.8 MiB/s wr, 329 op/s
Dec 13 08:38:15 compute-0 nova_compute[248510]: 2025-12-13 08:38:15.740 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:15 compute-0 nova_compute[248510]: 2025-12-13 08:38:15.887 248514 DEBUG nova.compute.manager [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Dec 13 08:38:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:38:16 compute-0 nova_compute[248510]: 2025-12-13 08:38:16.666 248514 INFO nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Creating config drive at /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config
Dec 13 08:38:16 compute-0 nova_compute[248510]: 2025-12-13 08:38:16.672 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeca5gtue execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:16 compute-0 ceph-mon[76537]: pgmap v2236: 321 pgs: 321 active+clean; 372 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 8.8 MiB/s wr, 329 op/s
Dec 13 08:38:16 compute-0 nova_compute[248510]: 2025-12-13 08:38:16.830 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeca5gtue" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:16 compute-0 ovn_controller[148476]: 2025-12-13T08:38:16Z|00867|binding|INFO|Releasing lport 6c33e57f-b143-4ed7-a271-dc33d6e81057 from this chassis (sb_readonly=0)
Dec 13 08:38:16 compute-0 ovn_controller[148476]: 2025-12-13T08:38:16Z|00868|binding|INFO|Releasing lport 238a7791-e0e6-4f94-b696-bd1f6886a564 from this chassis (sb_readonly=0)
Dec 13 08:38:16 compute-0 nova_compute[248510]: 2025-12-13 08:38:16.879 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:16 compute-0 nova_compute[248510]: 2025-12-13 08:38:16.890 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:16 compute-0 nova_compute[248510]: 2025-12-13 08:38:16.931 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.047 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.048 248514 INFO nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Deleting local config drive /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config because it was imported into RBD.
Dec 13 08:38:17 compute-0 kernel: tap7527f90e-f0: entered promiscuous mode
Dec 13 08:38:17 compute-0 NetworkManager[50376]: <info>  [1765615097.1046] manager: (tap7527f90e-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.106 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:17 compute-0 ovn_controller[148476]: 2025-12-13T08:38:17Z|00869|binding|INFO|Claiming lport 7527f90e-f037-4a94-a011-f952b6e72722 for this chassis.
Dec 13 08:38:17 compute-0 ovn_controller[148476]: 2025-12-13T08:38:17Z|00870|binding|INFO|7527f90e-f037-4a94-a011-f952b6e72722: Claiming fa:16:3e:f6:f9:b9 10.100.0.7
Dec 13 08:38:17 compute-0 ovn_controller[148476]: 2025-12-13T08:38:17Z|00871|binding|INFO|Setting lport 7527f90e-f037-4a94-a011-f952b6e72722 ovn-installed in OVS
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.124 248514 DEBUG nova.network.neutron [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Updated VIF entry in instance network info cache for port 7527f90e-f037-4a94-a011-f952b6e72722. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.125 248514 DEBUG nova.network.neutron [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Updating instance_info_cache with network_info: [{"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.127 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:17 compute-0 ovn_controller[148476]: 2025-12-13T08:38:17Z|00872|binding|INFO|Setting lport 7527f90e-f037-4a94-a011-f952b6e72722 up in Southbound
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.130 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f9:b9 10.100.0.7'], port_security=['fa:16:3e:f6:f9:b9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '88cb43c1-f01b-4098-84ea-d372176a0e20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7527f90e-f037-4a94-a011-f952b6e72722) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.132 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7527f90e-f037-4a94-a011-f952b6e72722 in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 bound to our chassis
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.134 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f249527d-f9e6-43ce-a178-f71fc1d38891
Dec 13 08:38:17 compute-0 systemd-udevd[333968]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.157 248514 DEBUG oslo_concurrency.lockutils [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:38:17 compute-0 systemd-machined[210538]: New machine qemu-108-instance-00000058.
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.158 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[20261668-c7ae-464a-b63b-20347ec3e595]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:17 compute-0 NetworkManager[50376]: <info>  [1765615097.1650] device (tap7527f90e-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:38:17 compute-0 NetworkManager[50376]: <info>  [1765615097.1662] device (tap7527f90e-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:38:17 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-00000058.
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.205 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[95e05d14-8958-4853-b869-84fed2b96213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.208 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0fd6aa-7de0-4916-a61d-640f3ccfaf19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.236 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7e951b-2c83-4461-926b-49cdfc0a97f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.253 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fcba9291-7acb-4ee5-a8e5-2e8df3449436]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf249527d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:d2:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766854, 'reachable_time': 22354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333981, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.270 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a99f37ed-dec8-4adb-8e12-c15d4ea5c413]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766865, 'tstamp': 766865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333982, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766868, 'tstamp': 766868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333982, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.272 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf249527d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.275 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf249527d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.275 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.275 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf249527d-f0, col_values=(('external_ids', {'iface-id': '238a7791-e0e6-4f94-b696-bd1f6886a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.276 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.491 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615097.490849, 88cb43c1-f01b-4098-84ea-d372176a0e20 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.492 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] VM Started (Lifecycle Event)
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.540 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.545 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615097.49206, 88cb43c1-f01b-4098-84ea-d372176a0e20 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.545 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] VM Paused (Lifecycle Event)
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.586 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.591 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.616 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.622 248514 DEBUG nova.compute.manager [req-93cfa849-f3ab-4a86-ac96-de4adfddd09f req-1a736295-0831-4b62-856b-766a84577140 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.622 248514 DEBUG oslo_concurrency.lockutils [req-93cfa849-f3ab-4a86-ac96-de4adfddd09f req-1a736295-0831-4b62-856b-766a84577140 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.623 248514 DEBUG oslo_concurrency.lockutils [req-93cfa849-f3ab-4a86-ac96-de4adfddd09f req-1a736295-0831-4b62-856b-766a84577140 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.623 248514 DEBUG oslo_concurrency.lockutils [req-93cfa849-f3ab-4a86-ac96-de4adfddd09f req-1a736295-0831-4b62-856b-766a84577140 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.623 248514 DEBUG nova.compute.manager [req-93cfa849-f3ab-4a86-ac96-de4adfddd09f req-1a736295-0831-4b62-856b-766a84577140 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Processing event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.624 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.628 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615097.628413, 88cb43c1-f01b-4098-84ea-d372176a0e20 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.629 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] VM Resumed (Lifecycle Event)
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.631 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.637 248514 INFO nova.virt.libvirt.driver [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance spawned successfully.
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.638 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.718 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.718 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.719 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.719 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.720 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.720 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 372 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 6.6 MiB/s wr, 203 op/s
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.787 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.797 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.889 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.952 248514 INFO nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Took 12.60 seconds to spawn the instance on the hypervisor.
Dec 13 08:38:17 compute-0 nova_compute[248510]: 2025-12-13 08:38:17.953 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:18 compute-0 nova_compute[248510]: 2025-12-13 08:38:18.344 248514 INFO nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Took 14.12 seconds to build instance.
Dec 13 08:38:18 compute-0 nova_compute[248510]: 2025-12-13 08:38:18.415 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:18 compute-0 ceph-mon[76537]: pgmap v2237: 321 pgs: 321 active+clean; 372 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 6.6 MiB/s wr, 203 op/s
Dec 13 08:38:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 372 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 5.7 MiB/s wr, 268 op/s
Dec 13 08:38:19 compute-0 nova_compute[248510]: 2025-12-13 08:38:19.751 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615084.7509744, fa9180aa-8387-44cb-9e70-535f0652e390 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:19 compute-0 nova_compute[248510]: 2025-12-13 08:38:19.753 248514 INFO nova.compute.manager [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] VM Stopped (Lifecycle Event)
Dec 13 08:38:19 compute-0 nova_compute[248510]: 2025-12-13 08:38:19.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:20 compute-0 nova_compute[248510]: 2025-12-13 08:38:20.324 248514 DEBUG nova.compute.manager [None req-7b783003-2c4c-42f2-a2cf-a81cd6384099 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:20 compute-0 nova_compute[248510]: 2025-12-13 08:38:20.589 248514 DEBUG nova.compute.manager [req-cf768c1a-9b18-4697-8758-c601ba650e11 req-3f0b5cd7-e8ed-40fa-8798-027d0a13bd65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:20 compute-0 nova_compute[248510]: 2025-12-13 08:38:20.589 248514 DEBUG oslo_concurrency.lockutils [req-cf768c1a-9b18-4697-8758-c601ba650e11 req-3f0b5cd7-e8ed-40fa-8798-027d0a13bd65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:20 compute-0 nova_compute[248510]: 2025-12-13 08:38:20.590 248514 DEBUG oslo_concurrency.lockutils [req-cf768c1a-9b18-4697-8758-c601ba650e11 req-3f0b5cd7-e8ed-40fa-8798-027d0a13bd65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:20 compute-0 nova_compute[248510]: 2025-12-13 08:38:20.590 248514 DEBUG oslo_concurrency.lockutils [req-cf768c1a-9b18-4697-8758-c601ba650e11 req-3f0b5cd7-e8ed-40fa-8798-027d0a13bd65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:20 compute-0 nova_compute[248510]: 2025-12-13 08:38:20.590 248514 DEBUG nova.compute.manager [req-cf768c1a-9b18-4697-8758-c601ba650e11 req-3f0b5cd7-e8ed-40fa-8798-027d0a13bd65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:38:20 compute-0 nova_compute[248510]: 2025-12-13 08:38:20.591 248514 WARNING nova.compute.manager [req-cf768c1a-9b18-4697-8758-c601ba650e11 req-3f0b5cd7-e8ed-40fa-8798-027d0a13bd65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received unexpected event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with vm_state active and task_state None.
Dec 13 08:38:20 compute-0 nova_compute[248510]: 2025-12-13 08:38:20.616 248514 DEBUG nova.compute.manager [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:20 compute-0 nova_compute[248510]: 2025-12-13 08:38:20.673 248514 INFO nova.compute.manager [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] instance snapshotting
Dec 13 08:38:20 compute-0 nova_compute[248510]: 2025-12-13 08:38:20.675 248514 DEBUG nova.objects.instance [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'flavor' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:20 compute-0 nova_compute[248510]: 2025-12-13 08:38:20.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:20 compute-0 ceph-mon[76537]: pgmap v2238: 321 pgs: 321 active+clean; 372 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 5.7 MiB/s wr, 268 op/s
Dec 13 08:38:20 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014668542161819687 of space, bias 1.0, pg target 0.4400562648545906 quantized to 32 (current 32)
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0021822167660036987 of space, bias 1.0, pg target 0.6546650298011096 quantized to 32 (current 32)
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.947345789036047e-07 of space, bias 4.0, pg target 0.0007136814946843256 quantized to 16 (current 32)
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:38:21 compute-0 nova_compute[248510]: 2025-12-13 08:38:21.473 248514 INFO nova.compute.manager [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Rescuing
Dec 13 08:38:21 compute-0 nova_compute[248510]: 2025-12-13 08:38:21.474 248514 DEBUG oslo_concurrency.lockutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:38:21 compute-0 nova_compute[248510]: 2025-12-13 08:38:21.474 248514 DEBUG oslo_concurrency.lockutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquired lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:38:21 compute-0 nova_compute[248510]: 2025-12-13 08:38:21.474 248514 DEBUG nova.network.neutron [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:38:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:38:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Dec 13 08:38:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Dec 13 08:38:21 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Dec 13 08:38:21 compute-0 nova_compute[248510]: 2025-12-13 08:38:21.645 248514 INFO nova.virt.libvirt.driver [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Beginning live snapshot process
Dec 13 08:38:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 372 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 5.1 MiB/s wr, 269 op/s
Dec 13 08:38:21 compute-0 nova_compute[248510]: 2025-12-13 08:38:21.998 248514 DEBUG nova.virt.libvirt.imagebackend [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:38:22 compute-0 ceph-mon[76537]: osdmap e234: 3 total, 3 up, 3 in
Dec 13 08:38:22 compute-0 ceph-mon[76537]: pgmap v2240: 321 pgs: 321 active+clean; 372 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 5.1 MiB/s wr, 269 op/s
Dec 13 08:38:22 compute-0 nova_compute[248510]: 2025-12-13 08:38:22.561 248514 DEBUG nova.storage.rbd_utils [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(e1b63fbb027f475faf9cba72882a99bc) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:38:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Dec 13 08:38:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Dec 13 08:38:23 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Dec 13 08:38:23 compute-0 nova_compute[248510]: 2025-12-13 08:38:23.544 248514 DEBUG nova.storage.rbd_utils [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] cloning vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk@e1b63fbb027f475faf9cba72882a99bc to images/8b0d2f78-fa72-484b-b0f0-842d1746a5ea clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:38:23 compute-0 nova_compute[248510]: 2025-12-13 08:38:23.618 248514 DEBUG nova.storage.rbd_utils [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] flattening images/8b0d2f78-fa72-484b-b0f0-842d1746a5ea flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:38:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 372 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 22 KiB/s wr, 168 op/s
Dec 13 08:38:24 compute-0 nova_compute[248510]: 2025-12-13 08:38:24.114 248514 DEBUG nova.storage.rbd_utils [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] removing snapshot(e1b63fbb027f475faf9cba72882a99bc) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:38:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Dec 13 08:38:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Dec 13 08:38:24 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Dec 13 08:38:24 compute-0 ceph-mon[76537]: osdmap e235: 3 total, 3 up, 3 in
Dec 13 08:38:24 compute-0 ceph-mon[76537]: pgmap v2242: 321 pgs: 321 active+clean; 372 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 22 KiB/s wr, 168 op/s
Dec 13 08:38:24 compute-0 nova_compute[248510]: 2025-12-13 08:38:24.539 248514 DEBUG nova.storage.rbd_utils [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(snap) on rbd image(8b0d2f78-fa72-484b-b0f0-842d1746a5ea) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:38:24 compute-0 nova_compute[248510]: 2025-12-13 08:38:24.722 248514 DEBUG nova.network.neutron [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Updating instance_info_cache with network_info: [{"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:38:24 compute-0 nova_compute[248510]: 2025-12-13 08:38:24.831 248514 DEBUG oslo_concurrency.lockutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Releasing lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:38:24 compute-0 nova_compute[248510]: 2025-12-13 08:38:24.873 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Dec 13 08:38:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Dec 13 08:38:25 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Dec 13 08:38:25 compute-0 ceph-mon[76537]: osdmap e236: 3 total, 3 up, 3 in
Dec 13 08:38:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 429 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 9.7 MiB/s wr, 313 op/s
Dec 13 08:38:25 compute-0 nova_compute[248510]: 2025-12-13 08:38:25.744 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:25 compute-0 nova_compute[248510]: 2025-12-13 08:38:25.767 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:38:25 compute-0 ovn_controller[148476]: 2025-12-13T08:38:25Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:9a:26 10.100.0.6
Dec 13 08:38:25 compute-0 ovn_controller[148476]: 2025-12-13T08:38:25Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:9a:26 10.100.0.6
Dec 13 08:38:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:38:26 compute-0 ceph-mon[76537]: osdmap e237: 3 total, 3 up, 3 in
Dec 13 08:38:26 compute-0 ceph-mon[76537]: pgmap v2245: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 429 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 9.7 MiB/s wr, 313 op/s
Dec 13 08:38:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 429 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.9 MiB/s wr, 221 op/s
Dec 13 08:38:28 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Dec 13 08:38:28 compute-0 nova_compute[248510]: 2025-12-13 08:38:28.398 248514 INFO nova.virt.libvirt.driver [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Snapshot image upload complete
Dec 13 08:38:28 compute-0 nova_compute[248510]: 2025-12-13 08:38:28.399 248514 INFO nova.compute.manager [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Took 7.69 seconds to snapshot the instance on the hypervisor.
Dec 13 08:38:28 compute-0 ceph-mon[76537]: pgmap v2246: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 429 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.9 MiB/s wr, 221 op/s
Dec 13 08:38:28 compute-0 nova_compute[248510]: 2025-12-13 08:38:28.890 248514 DEBUG nova.compute.manager [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Dec 13 08:38:28 compute-0 nova_compute[248510]: 2025-12-13 08:38:28.890 248514 DEBUG nova.compute.manager [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Dec 13 08:38:28 compute-0 nova_compute[248510]: 2025-12-13 08:38:28.890 248514 DEBUG nova.compute.manager [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deleting image bd9a009b-216b-49be-81a8-600047037026 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Dec 13 08:38:29 compute-0 nova_compute[248510]: 2025-12-13 08:38:29.270 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "0d52c1df-d252-4012-b05c-40737f1089bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:29 compute-0 nova_compute[248510]: 2025-12-13 08:38:29.270 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0d52c1df-d252-4012-b05c-40737f1089bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:29 compute-0 nova_compute[248510]: 2025-12-13 08:38:29.289 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:38:29 compute-0 nova_compute[248510]: 2025-12-13 08:38:29.397 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:29 compute-0 nova_compute[248510]: 2025-12-13 08:38:29.398 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:29 compute-0 nova_compute[248510]: 2025-12-13 08:38:29.406 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:38:29 compute-0 nova_compute[248510]: 2025-12-13 08:38:29.407 248514 INFO nova.compute.claims [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:38:29 compute-0 nova_compute[248510]: 2025-12-13 08:38:29.613 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:29 compute-0 ovn_controller[148476]: 2025-12-13T08:38:29Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:f9:b9 10.100.0.7
Dec 13 08:38:29 compute-0 ovn_controller[148476]: 2025-12-13T08:38:29Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:f9:b9 10.100.0.7
Dec 13 08:38:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 479 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 12 MiB/s wr, 277 op/s
Dec 13 08:38:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Dec 13 08:38:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Dec 13 08:38:29 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Dec 13 08:38:29 compute-0 nova_compute[248510]: 2025-12-13 08:38:29.877 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:38:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564059498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.215 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.221 248514 DEBUG nova.compute.provider_tree [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.256 248514 DEBUG nova.scheduler.client.report [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.280 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.281 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.338 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.416 248514 INFO nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.462 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.620 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.621 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.622 248514 INFO nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Creating image(s)
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.642 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.664 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.688 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.692 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.746 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.770 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.771 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.771 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.771 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.793 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:30 compute-0 nova_compute[248510]: 2025-12-13 08:38:30.798 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0d52c1df-d252-4012-b05c-40737f1089bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:30 compute-0 ceph-mon[76537]: pgmap v2247: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 479 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 12 MiB/s wr, 277 op/s
Dec 13 08:38:30 compute-0 ceph-mon[76537]: osdmap e238: 3 total, 3 up, 3 in
Dec 13 08:38:30 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2564059498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.099 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0d52c1df-d252-4012-b05c-40737f1089bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.153 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] resizing rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.221 248514 DEBUG nova.objects.instance [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d52c1df-d252-4012-b05c-40737f1089bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.231 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "0187165c-81b1-43b8-81f5-05e847fe1fa6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.231 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0187165c-81b1-43b8-81f5-05e847fe1fa6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.239 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.239 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Ensure instance console log exists: /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.239 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.240 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.240 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.241 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.246 248514 WARNING nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.251 248514 DEBUG nova.virt.libvirt.host [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.251 248514 DEBUG nova.virt.libvirt.host [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.254 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.257 248514 DEBUG nova.virt.libvirt.host [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.258 248514 DEBUG nova.virt.libvirt.host [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.258 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.258 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.259 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.259 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.259 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.259 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.260 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.260 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.260 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.260 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.261 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.261 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.263 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.352 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.352 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.361 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.361 248514 INFO nova.compute.claims [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:38:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.603 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 494 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 8.4 MiB/s wr, 253 op/s
Dec 13 08:38:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697837179' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.815 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/697837179' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.844 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:31 compute-0 nova_compute[248510]: 2025-12-13 08:38:31.849 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:31 compute-0 podman[334417]: 2025-12-13 08:38:31.986918085 +0000 UTC m=+0.068994554 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:38:32 compute-0 podman[334418]: 2025-12-13 08:38:32.005156898 +0000 UTC m=+0.083570536 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:38:32 compute-0 podman[334416]: 2025-12-13 08:38:32.016533901 +0000 UTC m=+0.097446731 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Dec 13 08:38:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:38:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3368706835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.235 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.241 248514 DEBUG nova.compute.provider_tree [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.378 248514 DEBUG nova.scheduler.client.report [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.417 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.419 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:38:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3392828137' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.483 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.495 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.496 248514 DEBUG nova.objects.instance [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d52c1df-d252-4012-b05c-40737f1089bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.509 248514 INFO nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.517 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:38:32 compute-0 nova_compute[248510]:   <uuid>0d52c1df-d252-4012-b05c-40737f1089bb</uuid>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   <name>instance-00000059</name>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerShowV247Test-server-990588052</nova:name>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:38:31</nova:creationTime>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:38:32 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:38:32 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:38:32 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:38:32 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:38:32 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:38:32 compute-0 nova_compute[248510]:         <nova:user uuid="0e8471eedc0e4ae0a028132802bc1967">tempest-ServerShowV247Test-2063492104-project-member</nova:user>
Dec 13 08:38:32 compute-0 nova_compute[248510]:         <nova:project uuid="c4b40eb3fb314c44867a54f3ba244ec1">tempest-ServerShowV247Test-2063492104</nova:project>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <system>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <entry name="serial">0d52c1df-d252-4012-b05c-40737f1089bb</entry>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <entry name="uuid">0d52c1df-d252-4012-b05c-40737f1089bb</entry>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     </system>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   <os>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   </os>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   <features>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   </features>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/0d52c1df-d252-4012-b05c-40737f1089bb_disk">
Dec 13 08:38:32 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:32 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/0d52c1df-d252-4012-b05c-40737f1089bb_disk.config">
Dec 13 08:38:32 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:32 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/console.log" append="off"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <video>
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     </video>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:38:32 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:38:32 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:38:32 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:38:32 compute-0 nova_compute[248510]: </domain>
Dec 13 08:38:32 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.532 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.656 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.658 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.658 248514 INFO nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Creating image(s)
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.677 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.701 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.721 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.725 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.787 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.788 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.788 248514 INFO nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Using config drive
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.808 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.815 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.815 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.816 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.816 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:32 compute-0 ceph-mon[76537]: pgmap v2249: 321 pgs: 321 active+clean; 494 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 8.4 MiB/s wr, 253 op/s
Dec 13 08:38:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3368706835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3392828137' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.962 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:32 compute-0 nova_compute[248510]: 2025-12-13 08:38:32.966 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:33 compute-0 nova_compute[248510]: 2025-12-13 08:38:33.344 248514 INFO nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Creating config drive at /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/disk.config
Dec 13 08:38:33 compute-0 nova_compute[248510]: 2025-12-13 08:38:33.349 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1sgfnms execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:33 compute-0 nova_compute[248510]: 2025-12-13 08:38:33.509 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1sgfnms" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:33 compute-0 nova_compute[248510]: 2025-12-13 08:38:33.541 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:33 compute-0 nova_compute[248510]: 2025-12-13 08:38:33.546 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/disk.config 0d52c1df-d252-4012-b05c-40737f1089bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 489 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 7.3 MiB/s wr, 234 op/s
Dec 13 08:38:33 compute-0 nova_compute[248510]: 2025-12-13 08:38:33.837 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.871s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:33 compute-0 nova_compute[248510]: 2025-12-13 08:38:33.881 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/disk.config 0d52c1df-d252-4012-b05c-40737f1089bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:33 compute-0 nova_compute[248510]: 2025-12-13 08:38:33.882 248514 INFO nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Deleting local config drive /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/disk.config because it was imported into RBD.
Dec 13 08:38:33 compute-0 nova_compute[248510]: 2025-12-13 08:38:33.918 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] resizing rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:38:33 compute-0 systemd-machined[210538]: New machine qemu-109-instance-00000059.
Dec 13 08:38:33 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-00000059.
Dec 13 08:38:33 compute-0 nova_compute[248510]: 2025-12-13 08:38:33.984 248514 DEBUG nova.objects.instance [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.008 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.008 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Ensure instance console log exists: /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.009 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.009 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.010 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.011 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.017 248514 WARNING nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.021 248514 DEBUG nova.virt.libvirt.host [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.022 248514 DEBUG nova.virt.libvirt.host [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.024 248514 DEBUG nova.virt.libvirt.host [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.025 248514 DEBUG nova.virt.libvirt.host [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.025 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.025 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.026 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.026 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.026 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.027 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.027 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.027 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.027 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.028 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.028 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.028 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.031 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.359 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615114.3590355, 0d52c1df-d252-4012-b05c-40737f1089bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.360 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] VM Resumed (Lifecycle Event)
Dec 13 08:38:34 compute-0 sudo[334800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.371 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.372 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:38:34 compute-0 sudo[334800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.376 248514 INFO nova.virt.libvirt.driver [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Instance spawned successfully.
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.376 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:38:34 compute-0 sudo[334800]: pam_unix(sudo:session): session closed for user root
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.396 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.403 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.407 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.408 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.408 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.409 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.409 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.409 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:34 compute-0 sudo[334826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:38:34 compute-0 sudo[334826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.444 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.445 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615114.3593073, 0d52c1df-d252-4012-b05c-40737f1089bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.445 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] VM Started (Lifecycle Event)
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.486 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.495 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.503 248514 INFO nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Took 3.88 seconds to spawn the instance on the hypervisor.
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.504 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.537 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:38:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732251910' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.575 248514 INFO nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Took 5.22 seconds to build instance.
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.593 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.621 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.629 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.665 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0d52c1df-d252-4012-b05c-40737f1089bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:34 compute-0 nova_compute[248510]: 2025-12-13 08:38:34.882 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Dec 13 08:38:34 compute-0 ceph-mon[76537]: pgmap v2250: 321 pgs: 321 active+clean; 489 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 7.3 MiB/s wr, 234 op/s
Dec 13 08:38:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1732251910' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Dec 13 08:38:35 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Dec 13 08:38:35 compute-0 sudo[334826]: pam_unix(sudo:session): session closed for user root
Dec 13 08:38:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:38:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:38:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:38:35 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:38:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:38:35 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:38:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:38:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:38:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:38:35 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:38:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:38:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:38:35 compute-0 sudo[334921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:38:35 compute-0 sudo[334921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:38:35 compute-0 sudo[334921]: pam_unix(sudo:session): session closed for user root
Dec 13 08:38:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3605667858' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:35 compute-0 nova_compute[248510]: 2025-12-13 08:38:35.256 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:35 compute-0 nova_compute[248510]: 2025-12-13 08:38:35.258 248514 DEBUG nova.objects.instance [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:35 compute-0 sudo[334948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:38:35 compute-0 sudo[334948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:38:35 compute-0 nova_compute[248510]: 2025-12-13 08:38:35.486 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:38:35 compute-0 nova_compute[248510]:   <uuid>0187165c-81b1-43b8-81f5-05e847fe1fa6</uuid>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   <name>instance-0000005a</name>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerShowV247Test-server-1806406257</nova:name>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:38:34</nova:creationTime>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:38:35 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:38:35 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:38:35 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:38:35 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:38:35 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:38:35 compute-0 nova_compute[248510]:         <nova:user uuid="0e8471eedc0e4ae0a028132802bc1967">tempest-ServerShowV247Test-2063492104-project-member</nova:user>
Dec 13 08:38:35 compute-0 nova_compute[248510]:         <nova:project uuid="c4b40eb3fb314c44867a54f3ba244ec1">tempest-ServerShowV247Test-2063492104</nova:project>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <system>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <entry name="serial">0187165c-81b1-43b8-81f5-05e847fe1fa6</entry>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <entry name="uuid">0187165c-81b1-43b8-81f5-05e847fe1fa6</entry>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     </system>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   <os>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   </os>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   <features>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   </features>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/0187165c-81b1-43b8-81f5-05e847fe1fa6_disk">
Dec 13 08:38:35 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:35 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config">
Dec 13 08:38:35 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:35 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/console.log" append="off"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <video>
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     </video>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:38:35 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:38:35 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:38:35 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:38:35 compute-0 nova_compute[248510]: </domain>
Dec 13 08:38:35 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:38:35 compute-0 podman[334988]: 2025-12-13 08:38:35.538784491 +0000 UTC m=+0.021922655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:38:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 511 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 11 MiB/s wr, 326 op/s
Dec 13 08:38:35 compute-0 nova_compute[248510]: 2025-12-13 08:38:35.747 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:35 compute-0 podman[334988]: 2025-12-13 08:38:35.824450406 +0000 UTC m=+0.307588550 container create 3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_snyder, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:38:35 compute-0 nova_compute[248510]: 2025-12-13 08:38:35.852 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:35 compute-0 nova_compute[248510]: 2025-12-13 08:38:35.853 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:35 compute-0 nova_compute[248510]: 2025-12-13 08:38:35.854 248514 INFO nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Using config drive
Dec 13 08:38:35 compute-0 nova_compute[248510]: 2025-12-13 08:38:35.883 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:36 compute-0 ceph-mon[76537]: osdmap e239: 3 total, 3 up, 3 in
Dec 13 08:38:36 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:38:36 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:38:36 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:38:36 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:38:36 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:38:36 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:38:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3605667858' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:36 compute-0 nova_compute[248510]: 2025-12-13 08:38:36.030 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:38:36 compute-0 systemd[1]: Started libpod-conmon-3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8.scope.
Dec 13 08:38:36 compute-0 nova_compute[248510]: 2025-12-13 08:38:36.076 248514 INFO nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Creating config drive at /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config
Dec 13 08:38:36 compute-0 nova_compute[248510]: 2025-12-13 08:38:36.082 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkmt80305 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:38:36 compute-0 nova_compute[248510]: 2025-12-13 08:38:36.229 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkmt80305" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:36 compute-0 podman[334988]: 2025-12-13 08:38:36.260174886 +0000 UTC m=+0.743313080 container init 3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_snyder, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:38:36 compute-0 podman[334988]: 2025-12-13 08:38:36.270168384 +0000 UTC m=+0.753306538 container start 3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_snyder, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:38:36 compute-0 festive_snyder[335022]: 167 167
Dec 13 08:38:36 compute-0 systemd[1]: libpod-3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8.scope: Deactivated successfully.
Dec 13 08:38:36 compute-0 nova_compute[248510]: 2025-12-13 08:38:36.293 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:36 compute-0 nova_compute[248510]: 2025-12-13 08:38:36.298 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:36 compute-0 podman[334988]: 2025-12-13 08:38:36.309331767 +0000 UTC m=+0.792469931 container attach 3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_snyder, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:38:36 compute-0 podman[334988]: 2025-12-13 08:38:36.309777408 +0000 UTC m=+0.792915572 container died 3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_snyder, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 08:38:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:38:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Dec 13 08:38:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Dec 13 08:38:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-df3a950504ea75905efcbb2541771f6d9d73773bdda7d80eef6ce72f2d8f6f35-merged.mount: Deactivated successfully.
Dec 13 08:38:36 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Dec 13 08:38:37 compute-0 podman[334988]: 2025-12-13 08:38:37.161637642 +0000 UTC m=+1.644775786 container remove 3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 08:38:37 compute-0 systemd[1]: libpod-conmon-3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8.scope: Deactivated successfully.
Dec 13 08:38:37 compute-0 ceph-mon[76537]: pgmap v2252: 321 pgs: 321 active+clean; 511 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 11 MiB/s wr, 326 op/s
Dec 13 08:38:37 compute-0 ceph-mon[76537]: osdmap e240: 3 total, 3 up, 3 in
Dec 13 08:38:37 compute-0 podman[335085]: 2025-12-13 08:38:37.339314445 +0000 UTC m=+0.025933185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:38:37 compute-0 podman[335085]: 2025-12-13 08:38:37.566757804 +0000 UTC m=+0.253376544 container create f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 08:38:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 511 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 6.3 MiB/s wr, 172 op/s
Dec 13 08:38:37 compute-0 nova_compute[248510]: 2025-12-13 08:38:37.836 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:37 compute-0 nova_compute[248510]: 2025-12-13 08:38:37.838 248514 INFO nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deleting local config drive /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config because it was imported into RBD.
Dec 13 08:38:37 compute-0 systemd[1]: Started libpod-conmon-f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a.scope.
Dec 13 08:38:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f6f28ea7988bd928640a610f81d226df3c4123c8dbabe56cc718efedf03727/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f6f28ea7988bd928640a610f81d226df3c4123c8dbabe56cc718efedf03727/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f6f28ea7988bd928640a610f81d226df3c4123c8dbabe56cc718efedf03727/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f6f28ea7988bd928640a610f81d226df3c4123c8dbabe56cc718efedf03727/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f6f28ea7988bd928640a610f81d226df3c4123c8dbabe56cc718efedf03727/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:37 compute-0 systemd-machined[210538]: New machine qemu-110-instance-0000005a.
Dec 13 08:38:37 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-0000005a.
Dec 13 08:38:37 compute-0 podman[335085]: 2025-12-13 08:38:37.968479041 +0000 UTC m=+0.655097791 container init f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mahavira, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 08:38:37 compute-0 podman[335085]: 2025-12-13 08:38:37.978644124 +0000 UTC m=+0.665262844 container start f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mahavira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 08:38:37 compute-0 podman[335085]: 2025-12-13 08:38:37.989394441 +0000 UTC m=+0.676013161 container attach f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:38:38 compute-0 ceph-mon[76537]: pgmap v2254: 321 pgs: 321 active+clean; 511 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 6.3 MiB/s wr, 172 op/s
Dec 13 08:38:38 compute-0 gifted_mahavira[335104]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:38:38 compute-0 gifted_mahavira[335104]: --> All data devices are unavailable
Dec 13 08:38:38 compute-0 systemd[1]: libpod-f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a.scope: Deactivated successfully.
Dec 13 08:38:38 compute-0 podman[335085]: 2025-12-13 08:38:38.538095717 +0000 UTC m=+1.224714437 container died f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mahavira, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:38:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-99f6f28ea7988bd928640a610f81d226df3c4123c8dbabe56cc718efedf03727-merged.mount: Deactivated successfully.
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.713 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615118.7073314, 0187165c-81b1-43b8-81f5-05e847fe1fa6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.716 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] VM Resumed (Lifecycle Event)
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.720 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.722 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:38:38 compute-0 podman[335085]: 2025-12-13 08:38:38.723546813 +0000 UTC m=+1.410165543 container remove f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mahavira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.728 248514 INFO nova.virt.libvirt.driver [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance spawned successfully.
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.730 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:38:38 compute-0 systemd[1]: libpod-conmon-f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a.scope: Deactivated successfully.
Dec 13 08:38:38 compute-0 sudo[334948]: pam_unix(sudo:session): session closed for user root
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.784 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.792 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.792 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.794 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.794 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.795 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.796 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.803 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:38 compute-0 sudo[335190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:38:38 compute-0 sudo[335190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:38:38 compute-0 sudo[335190]: pam_unix(sudo:session): session closed for user root
Dec 13 08:38:38 compute-0 kernel: tap7527f90e-f0 (unregistering): left promiscuous mode
Dec 13 08:38:38 compute-0 NetworkManager[50376]: <info>  [1765615118.8535] device (tap7527f90e-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.860 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.860 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615118.7079344, 0187165c-81b1-43b8-81f5-05e847fe1fa6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.861 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] VM Started (Lifecycle Event)
Dec 13 08:38:38 compute-0 ovn_controller[148476]: 2025-12-13T08:38:38Z|00873|binding|INFO|Releasing lport 7527f90e-f037-4a94-a011-f952b6e72722 from this chassis (sb_readonly=0)
Dec 13 08:38:38 compute-0 ovn_controller[148476]: 2025-12-13T08:38:38Z|00874|binding|INFO|Setting lport 7527f90e-f037-4a94-a011-f952b6e72722 down in Southbound
Dec 13 08:38:38 compute-0 ovn_controller[148476]: 2025-12-13T08:38:38Z|00875|binding|INFO|Removing iface tap7527f90e-f0 ovn-installed in OVS
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.868 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.880 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f9:b9 10.100.0.7'], port_security=['fa:16:3e:f6:f9:b9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '88cb43c1-f01b-4098-84ea-d372176a0e20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7527f90e-f037-4a94-a011-f952b6e72722) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:38:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.881 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7527f90e-f037-4a94-a011-f952b6e72722 in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 unbound from our chassis
Dec 13 08:38:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.883 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f249527d-f9e6-43ce-a178-f71fc1d38891
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.901 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[179f9efb-02cd-4b2e-8853-4e91ac9b727f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:38 compute-0 sudo[335217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:38:38 compute-0 sudo[335217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.918 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.925 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:38 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000058.scope: Deactivated successfully.
Dec 13 08:38:38 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000058.scope: Consumed 12.032s CPU time.
Dec 13 08:38:38 compute-0 systemd-machined[210538]: Machine qemu-108-instance-00000058 terminated.
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.938 248514 INFO nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Took 6.28 seconds to spawn the instance on the hypervisor.
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.939 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.939 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9e888ab3-de0c-4630-8d19-5399325b136d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.946 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fac61326-7d56-4cec-a568-e19ce4b05f70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:38 compute-0 nova_compute[248510]: 2025-12-13 08:38:38.951 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:38:38 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.981 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a962a4fb-b489-40ef-8dd3-cf86bdd7a9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.005 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8563d4f2-dd24-41e4-a1d7-a90e4e7cd883]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf249527d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:d2:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766854, 'reachable_time': 22354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335250, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.027 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2d51e569-9731-44c9-9e1c-67e96867f383]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766865, 'tstamp': 766865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335251, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766868, 'tstamp': 766868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335251, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.031 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf249527d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.031 248514 INFO nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Took 7.71 seconds to build instance.
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.034 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.039 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.043 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf249527d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.043 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:38:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf249527d-f0, col_values=(('external_ids', {'iface-id': '238a7791-e0e6-4f94-b696-bd1f6886a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.055 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0187165c-81b1-43b8-81f5-05e847fe1fa6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.111 248514 INFO nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance shutdown successfully after 13 seconds.
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.117 248514 INFO nova.virt.libvirt.driver [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance destroyed successfully.
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.117 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'numa_topology' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.137 248514 INFO nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Attempting rescue
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.138 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.143 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.144 248514 INFO nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Creating image(s)
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.164 248514 DEBUG nova.storage.rbd_utils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.173 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.239 248514 DEBUG nova.storage.rbd_utils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.265 248514 DEBUG nova.storage.rbd_utils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.272 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.307 248514 DEBUG nova.compute.manager [req-d5b24a6a-4f94-4041-8beb-20c124d2431d req-dae07ef8-1039-47eb-ae9e-97f7863fce6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-unplugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.309 248514 DEBUG oslo_concurrency.lockutils [req-d5b24a6a-4f94-4041-8beb-20c124d2431d req-dae07ef8-1039-47eb-ae9e-97f7863fce6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.310 248514 DEBUG oslo_concurrency.lockutils [req-d5b24a6a-4f94-4041-8beb-20c124d2431d req-dae07ef8-1039-47eb-ae9e-97f7863fce6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.311 248514 DEBUG oslo_concurrency.lockutils [req-d5b24a6a-4f94-4041-8beb-20c124d2431d req-dae07ef8-1039-47eb-ae9e-97f7863fce6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.311 248514 DEBUG nova.compute.manager [req-d5b24a6a-4f94-4041-8beb-20c124d2431d req-dae07ef8-1039-47eb-ae9e-97f7863fce6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-unplugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.312 248514 WARNING nova.compute.manager [req-d5b24a6a-4f94-4041-8beb-20c124d2431d req-dae07ef8-1039-47eb-ae9e-97f7863fce6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received unexpected event network-vif-unplugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with vm_state active and task_state rescuing.
Dec 13 08:38:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Dec 13 08:38:39 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.365 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.366 248514 DEBUG oslo_concurrency.lockutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.367 248514 DEBUG oslo_concurrency.lockutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.368 248514 DEBUG oslo_concurrency.lockutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.396 248514 DEBUG nova.storage.rbd_utils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.400 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:39 compute-0 podman[335350]: 2025-12-13 08:38:39.539982539 +0000 UTC m=+0.097060732 container create df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lumiere, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 08:38:39 compute-0 podman[335350]: 2025-12-13 08:38:39.474221156 +0000 UTC m=+0.031299369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:38:39 compute-0 systemd[1]: Started libpod-conmon-df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820.scope.
Dec 13 08:38:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:38:39 compute-0 podman[335350]: 2025-12-13 08:38:39.671542666 +0000 UTC m=+0.228620859 container init df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 08:38:39 compute-0 podman[335350]: 2025-12-13 08:38:39.682143639 +0000 UTC m=+0.239221832 container start df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lumiere, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:38:39 compute-0 eloquent_lumiere[335384]: 167 167
Dec 13 08:38:39 compute-0 systemd[1]: libpod-df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820.scope: Deactivated successfully.
Dec 13 08:38:39 compute-0 podman[335350]: 2025-12-13 08:38:39.710247527 +0000 UTC m=+0.267325750 container attach df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:38:39 compute-0 podman[335350]: 2025-12-13 08:38:39.711372895 +0000 UTC m=+0.268451088 container died df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:38:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 473 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.6 MiB/s wr, 261 op/s
Dec 13 08:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b2e97aec0366d44b2018c10784c3bd5ae3e323c75c1156fef2c3b9bb1c6f5d3-merged.mount: Deactivated successfully.
Dec 13 08:38:39 compute-0 nova_compute[248510]: 2025-12-13 08:38:39.885 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:40 compute-0 podman[335350]: 2025-12-13 08:38:40.07234128 +0000 UTC m=+0.629419473 container remove df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lumiere, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:38:40 compute-0 systemd[1]: libpod-conmon-df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820.scope: Deactivated successfully.
Dec 13 08:38:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:38:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:38:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:38:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:38:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:38:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.135 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.735s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.136 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'migration_context' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.166 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.168 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Start _get_guest_xml network_info=[{"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "vif_mac": "fa:16:3e:f6:f9:b9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.169 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'resources' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.214 248514 WARNING nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.220 248514 DEBUG nova.virt.libvirt.host [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.221 248514 DEBUG nova.virt.libvirt.host [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.232 248514 DEBUG nova.virt.libvirt.host [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.232 248514 DEBUG nova.virt.libvirt.host [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.233 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.233 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.234 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.234 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.235 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.235 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.235 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.236 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.236 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.236 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.237 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.237 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.237 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.261 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:40 compute-0 podman[335409]: 2025-12-13 08:38:40.345127014 +0000 UTC m=+0.093982565 container create 10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:38:40 compute-0 ceph-mon[76537]: osdmap e241: 3 total, 3 up, 3 in
Dec 13 08:38:40 compute-0 ceph-mon[76537]: pgmap v2256: 321 pgs: 321 active+clean; 473 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.6 MiB/s wr, 261 op/s
Dec 13 08:38:40 compute-0 podman[335409]: 2025-12-13 08:38:40.282912899 +0000 UTC m=+0.031768480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:38:40 compute-0 systemd[1]: Started libpod-conmon-10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652.scope.
Dec 13 08:38:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da147e85234dd9aa360e25f142be6ee74ecd0ebc5efa85cb0135f62bab008e9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da147e85234dd9aa360e25f142be6ee74ecd0ebc5efa85cb0135f62bab008e9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da147e85234dd9aa360e25f142be6ee74ecd0ebc5efa85cb0135f62bab008e9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da147e85234dd9aa360e25f142be6ee74ecd0ebc5efa85cb0135f62bab008e9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:40 compute-0 podman[335409]: 2025-12-13 08:38:40.484664439 +0000 UTC m=+0.233520020 container init 10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 08:38:40 compute-0 podman[335409]: 2025-12-13 08:38:40.491320824 +0000 UTC m=+0.240176385 container start 10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 08:38:40 compute-0 podman[335409]: 2025-12-13 08:38:40.4996143 +0000 UTC m=+0.248469861 container attach 10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wozniak, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771877666' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.817 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:40 compute-0 nova_compute[248510]: 2025-12-13 08:38:40.818 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:40 compute-0 practical_wozniak[335445]: {
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:     "0": [
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:         {
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "devices": [
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "/dev/loop3"
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             ],
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_name": "ceph_lv0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_size": "21470642176",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "name": "ceph_lv0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "tags": {
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.cluster_name": "ceph",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.crush_device_class": "",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.encrypted": "0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.objectstore": "bluestore",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.osd_id": "0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.type": "block",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.vdo": "0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.with_tpm": "0"
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             },
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "type": "block",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "vg_name": "ceph_vg0"
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:         }
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:     ],
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:     "1": [
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:         {
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "devices": [
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "/dev/loop4"
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             ],
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_name": "ceph_lv1",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_size": "21470642176",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "name": "ceph_lv1",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "tags": {
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.cluster_name": "ceph",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.crush_device_class": "",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.encrypted": "0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.objectstore": "bluestore",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.osd_id": "1",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.type": "block",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.vdo": "0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.with_tpm": "0"
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             },
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "type": "block",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "vg_name": "ceph_vg1"
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:         }
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:     ],
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:     "2": [
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:         {
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "devices": [
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "/dev/loop5"
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             ],
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_name": "ceph_lv2",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_size": "21470642176",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "name": "ceph_lv2",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "tags": {
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.cluster_name": "ceph",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.crush_device_class": "",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.encrypted": "0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.objectstore": "bluestore",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.osd_id": "2",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.type": "block",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.vdo": "0",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:                 "ceph.with_tpm": "0"
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             },
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "type": "block",
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:             "vg_name": "ceph_vg2"
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:         }
Dec 13 08:38:40 compute-0 practical_wozniak[335445]:     ]
Dec 13 08:38:40 compute-0 practical_wozniak[335445]: }
Dec 13 08:38:40 compute-0 systemd[1]: libpod-10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652.scope: Deactivated successfully.
Dec 13 08:38:40 compute-0 podman[335409]: 2025-12-13 08:38:40.87010134 +0000 UTC m=+0.618956901 container died 10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wozniak, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 08:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-da147e85234dd9aa360e25f142be6ee74ecd0ebc5efa85cb0135f62bab008e9e-merged.mount: Deactivated successfully.
Dec 13 08:38:40 compute-0 podman[335409]: 2025-12-13 08:38:40.943974005 +0000 UTC m=+0.692829566 container remove 10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wozniak, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 08:38:40 compute-0 systemd[1]: libpod-conmon-10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652.scope: Deactivated successfully.
Dec 13 08:38:40 compute-0 sudo[335217]: pam_unix(sudo:session): session closed for user root
Dec 13 08:38:41 compute-0 sudo[335490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:38:41 compute-0 sudo[335490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:38:41 compute-0 sudo[335490]: pam_unix(sudo:session): session closed for user root
Dec 13 08:38:41 compute-0 sudo[335515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:38:41 compute-0 sudo[335515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:38:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:41 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1823226581' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:41 compute-0 nova_compute[248510]: 2025-12-13 08:38:41.416 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:41 compute-0 nova_compute[248510]: 2025-12-13 08:38:41.418 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:41 compute-0 podman[335551]: 2025-12-13 08:38:41.383539631 +0000 UTC m=+0.021845164 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:38:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/771877666' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1823226581' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:41 compute-0 podman[335551]: 2025-12-13 08:38:41.566799642 +0000 UTC m=+0.205105155 container create 5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chebyshev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 08:38:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:38:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 455 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.6 MiB/s wr, 228 op/s
Dec 13 08:38:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:41 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4050967200' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:41 compute-0 nova_compute[248510]: 2025-12-13 08:38:41.961 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:41 compute-0 nova_compute[248510]: 2025-12-13 08:38:41.963 248514 DEBUG nova.virt.libvirt.vif [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:38:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1217173991',display_name='tempest-ServerRescueNegativeTestJSON-server-1217173991',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1217173991',id=88,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:38:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-axcrlu30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:18Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=88cb43c1-f01b-4098-84ea-d372176a0e20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "vif_mac": "fa:16:3e:f6:f9:b9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:38:41 compute-0 nova_compute[248510]: 2025-12-13 08:38:41.963 248514 DEBUG nova.network.os_vif_util [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "vif_mac": "fa:16:3e:f6:f9:b9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:38:41 compute-0 nova_compute[248510]: 2025-12-13 08:38:41.964 248514 DEBUG nova.network.os_vif_util [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:38:41 compute-0 nova_compute[248510]: 2025-12-13 08:38:41.965 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'pci_devices' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:41 compute-0 systemd[1]: Started libpod-conmon-5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb.scope.
Dec 13 08:38:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:38:42 compute-0 podman[335551]: 2025-12-13 08:38:42.055377845 +0000 UTC m=+0.693683358 container init 5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chebyshev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 08:38:42 compute-0 podman[335551]: 2025-12-13 08:38:42.062479121 +0000 UTC m=+0.700784634 container start 5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:38:42 compute-0 focused_chebyshev[335588]: 167 167
Dec 13 08:38:42 compute-0 systemd[1]: libpod-5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb.scope: Deactivated successfully.
Dec 13 08:38:42 compute-0 podman[335551]: 2025-12-13 08:38:42.099255085 +0000 UTC m=+0.737560598 container attach 5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 08:38:42 compute-0 podman[335551]: 2025-12-13 08:38:42.099519081 +0000 UTC m=+0.737824594 container died 5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.145 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:38:42 compute-0 nova_compute[248510]:   <uuid>88cb43c1-f01b-4098-84ea-d372176a0e20</uuid>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   <name>instance-00000058</name>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1217173991</nova:name>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:38:40</nova:creationTime>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <nova:user uuid="302853b7f91745baadf52361a0a7d535">tempest-ServerRescueNegativeTestJSON-1030815648-project-member</nova:user>
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <nova:project uuid="c968daf58e624ebda00676d79f6bde96">tempest-ServerRescueNegativeTestJSON-1030815648</nova:project>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <nova:port uuid="7527f90e-f037-4a94-a011-f952b6e72722">
Dec 13 08:38:42 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <system>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <entry name="serial">88cb43c1-f01b-4098-84ea-d372176a0e20</entry>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <entry name="uuid">88cb43c1-f01b-4098-84ea-d372176a0e20</entry>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     </system>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   <os>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   </os>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   <features>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   </features>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue">
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/88cb43c1-f01b-4098-84ea-d372176a0e20_disk">
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <target dev="vdb" bus="virtio"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config.rescue">
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:42 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:f6:f9:b9"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <target dev="tap7527f90e-f0"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/console.log" append="off"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <video>
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     </video>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:38:42 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:38:42 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:38:42 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:38:42 compute-0 nova_compute[248510]: </domain>
Dec 13 08:38:42 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.159 248514 INFO nova.virt.libvirt.driver [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance destroyed successfully.
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.230 248514 INFO nova.compute.manager [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Rebuilding instance
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.318 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.318 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.319 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.319 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No VIF found with MAC fa:16:3e:f6:f9:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.319 248514 INFO nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Using config drive
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.343 248514 DEBUG nova.storage.rbd_utils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.353 248514 DEBUG nova.compute.manager [req-f861c44a-3b38-4883-8b63-35383d6edabd req-25a1a3ec-f320-488f-a44c-e446d90f43f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.354 248514 DEBUG oslo_concurrency.lockutils [req-f861c44a-3b38-4883-8b63-35383d6edabd req-25a1a3ec-f320-488f-a44c-e446d90f43f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.354 248514 DEBUG oslo_concurrency.lockutils [req-f861c44a-3b38-4883-8b63-35383d6edabd req-25a1a3ec-f320-488f-a44c-e446d90f43f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.354 248514 DEBUG oslo_concurrency.lockutils [req-f861c44a-3b38-4883-8b63-35383d6edabd req-25a1a3ec-f320-488f-a44c-e446d90f43f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.354 248514 DEBUG nova.compute.manager [req-f861c44a-3b38-4883-8b63-35383d6edabd req-25a1a3ec-f320-488f-a44c-e446d90f43f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.355 248514 WARNING nova.compute.manager [req-f861c44a-3b38-4883-8b63-35383d6edabd req-25a1a3ec-f320-488f-a44c-e446d90f43f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received unexpected event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with vm_state active and task_state rescuing.
Dec 13 08:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e734d26709e43dba86abeaf0044ea193bc6dffd26e94601e451ecb6546ec2746-merged.mount: Deactivated successfully.
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.404 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.438 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'keypairs' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:42 compute-0 ceph-mon[76537]: pgmap v2257: 321 pgs: 321 active+clean; 455 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.6 MiB/s wr, 228 op/s
Dec 13 08:38:42 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4050967200' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:42 compute-0 podman[335551]: 2025-12-13 08:38:42.674209943 +0000 UTC m=+1.312515456 container remove 5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chebyshev, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:38:42 compute-0 systemd[1]: libpod-conmon-5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb.scope: Deactivated successfully.
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.820 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.845 248514 DEBUG nova.compute.manager [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.901 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.923 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:42 compute-0 podman[335634]: 2025-12-13 08:38:42.848103332 +0000 UTC m=+0.031184596 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.953 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'resources' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.967 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.986 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:38:42 compute-0 nova_compute[248510]: 2025-12-13 08:38:42.989 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:38:43 compute-0 podman[335634]: 2025-12-13 08:38:43.030937442 +0000 UTC m=+0.214018676 container create e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hofstadter, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 08:38:43 compute-0 systemd[1]: Started libpod-conmon-e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e.scope.
Dec 13 08:38:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4366aef7e89853f4ab417813372af5e842a5aadac9a6ac87e294ccc9d86c2efd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4366aef7e89853f4ab417813372af5e842a5aadac9a6ac87e294ccc9d86c2efd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4366aef7e89853f4ab417813372af5e842a5aadac9a6ac87e294ccc9d86c2efd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4366aef7e89853f4ab417813372af5e842a5aadac9a6ac87e294ccc9d86c2efd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:38:43 compute-0 nova_compute[248510]: 2025-12-13 08:38:43.288 248514 INFO nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Creating config drive at /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config.rescue
Dec 13 08:38:43 compute-0 podman[335634]: 2025-12-13 08:38:43.291638446 +0000 UTC m=+0.474719710 container init e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hofstadter, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:38:43 compute-0 nova_compute[248510]: 2025-12-13 08:38:43.299 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi5g0lf52 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:43 compute-0 podman[335634]: 2025-12-13 08:38:43.300354433 +0000 UTC m=+0.483435667 container start e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 08:38:43 compute-0 podman[335634]: 2025-12-13 08:38:43.326312208 +0000 UTC m=+0.509393462 container attach e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hofstadter, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 08:38:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 439 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.2 MiB/s wr, 248 op/s
Dec 13 08:38:44 compute-0 lvm[335730]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:38:44 compute-0 lvm[335731]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:38:44 compute-0 lvm[335731]: VG ceph_vg1 finished
Dec 13 08:38:44 compute-0 lvm[335730]: VG ceph_vg0 finished
Dec 13 08:38:44 compute-0 lvm[335733]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:38:44 compute-0 lvm[335733]: VG ceph_vg2 finished
Dec 13 08:38:44 compute-0 strange_hofstadter[335649]: {}
Dec 13 08:38:44 compute-0 systemd[1]: libpod-e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e.scope: Deactivated successfully.
Dec 13 08:38:44 compute-0 systemd[1]: libpod-e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e.scope: Consumed 1.364s CPU time.
Dec 13 08:38:44 compute-0 podman[335736]: 2025-12-13 08:38:44.180712346 +0000 UTC m=+0.023235028 container died e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4366aef7e89853f4ab417813372af5e842a5aadac9a6ac87e294ccc9d86c2efd-merged.mount: Deactivated successfully.
Dec 13 08:38:44 compute-0 podman[335736]: 2025-12-13 08:38:44.233102036 +0000 UTC m=+0.075624728 container remove e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hofstadter, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 08:38:44 compute-0 systemd[1]: libpod-conmon-e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e.scope: Deactivated successfully.
Dec 13 08:38:44 compute-0 sudo[335515]: pam_unix(sudo:session): session closed for user root
Dec 13 08:38:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:38:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:38:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:38:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:38:44 compute-0 sudo[335751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:38:44 compute-0 sudo[335751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:38:44 compute-0 nova_compute[248510]: 2025-12-13 08:38:44.354 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi5g0lf52" returned: 0 in 1.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:44 compute-0 sudo[335751]: pam_unix(sudo:session): session closed for user root
Dec 13 08:38:44 compute-0 nova_compute[248510]: 2025-12-13 08:38:44.376 248514 DEBUG nova.storage.rbd_utils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:44 compute-0 nova_compute[248510]: 2025-12-13 08:38:44.380 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config.rescue 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:44 compute-0 nova_compute[248510]: 2025-12-13 08:38:44.507 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config.rescue 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:44 compute-0 nova_compute[248510]: 2025-12-13 08:38:44.508 248514 INFO nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Deleting local config drive /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config.rescue because it was imported into RBD.
Dec 13 08:38:44 compute-0 kernel: tap7527f90e-f0: entered promiscuous mode
Dec 13 08:38:44 compute-0 systemd-udevd[335728]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:38:44 compute-0 NetworkManager[50376]: <info>  [1765615124.5829] manager: (tap7527f90e-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/379)
Dec 13 08:38:44 compute-0 nova_compute[248510]: 2025-12-13 08:38:44.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:44 compute-0 ovn_controller[148476]: 2025-12-13T08:38:44Z|00876|binding|INFO|Claiming lport 7527f90e-f037-4a94-a011-f952b6e72722 for this chassis.
Dec 13 08:38:44 compute-0 ovn_controller[148476]: 2025-12-13T08:38:44Z|00877|binding|INFO|7527f90e-f037-4a94-a011-f952b6e72722: Claiming fa:16:3e:f6:f9:b9 10.100.0.7
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.595 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f9:b9 10.100.0.7'], port_security=['fa:16:3e:f6:f9:b9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '88cb43c1-f01b-4098-84ea-d372176a0e20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '5', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7527f90e-f037-4a94-a011-f952b6e72722) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.597 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7527f90e-f037-4a94-a011-f952b6e72722 in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 bound to our chassis
Dec 13 08:38:44 compute-0 NetworkManager[50376]: <info>  [1765615124.5978] device (tap7527f90e-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:38:44 compute-0 NetworkManager[50376]: <info>  [1765615124.5988] device (tap7527f90e-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.599 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f249527d-f9e6-43ce-a178-f71fc1d38891
Dec 13 08:38:44 compute-0 ovn_controller[148476]: 2025-12-13T08:38:44Z|00878|binding|INFO|Setting lport 7527f90e-f037-4a94-a011-f952b6e72722 ovn-installed in OVS
Dec 13 08:38:44 compute-0 ovn_controller[148476]: 2025-12-13T08:38:44Z|00879|binding|INFO|Setting lport 7527f90e-f037-4a94-a011-f952b6e72722 up in Southbound
Dec 13 08:38:44 compute-0 nova_compute[248510]: 2025-12-13 08:38:44.610 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:44 compute-0 nova_compute[248510]: 2025-12-13 08:38:44.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.623 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62aed008-e1b9-47d2-9abf-c73458753945]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:44 compute-0 systemd-machined[210538]: New machine qemu-111-instance-00000058.
Dec 13 08:38:44 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-00000058.
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.655 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2a19b711-9c95-4bba-9830-31b0dd7b683d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.660 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eacc9ea9-2ff3-48c8-8dc0-96d87d394cb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.692 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[51990ac6-e592-4a2f-b3b2-5d4177dfcebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.710 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcee6e8-0a06-4521-bc4f-dce691625cb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf249527d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:d2:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766854, 'reachable_time': 22354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335835, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.736 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bd815f8d-0ddd-4277-86a6-d3a5c7402c53]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766865, 'tstamp': 766865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335838, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766868, 'tstamp': 766868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335838, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.737 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf249527d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:44 compute-0 nova_compute[248510]: 2025-12-13 08:38:44.739 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:44 compute-0 nova_compute[248510]: 2025-12-13 08:38:44.740 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.741 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf249527d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.741 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.741 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf249527d-f0, col_values=(('external_ids', {'iface-id': '238a7791-e0e6-4f94-b696-bd1f6886a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.742 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:38:44 compute-0 ceph-mon[76537]: pgmap v2258: 321 pgs: 321 active+clean; 439 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.2 MiB/s wr, 248 op/s
Dec 13 08:38:44 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:38:44 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:38:44 compute-0 nova_compute[248510]: 2025-12-13 08:38:44.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.253 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 88cb43c1-f01b-4098-84ea-d372176a0e20 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.254 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615125.2533755, 88cb43c1-f01b-4098-84ea-d372176a0e20 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.255 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] VM Resumed (Lifecycle Event)
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.260 248514 DEBUG nova.compute.manager [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.357 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.360 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 418 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.4 MiB/s wr, 266 op/s
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.751 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.798 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.798 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615125.254314, 88cb43c1-f01b-4098-84ea-d372176a0e20 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.799 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] VM Started (Lifecycle Event)
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.881 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.885 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:45 compute-0 nova_compute[248510]: 2025-12-13 08:38:45.961 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 13 08:38:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:38:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Dec 13 08:38:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Dec 13 08:38:46 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Dec 13 08:38:46 compute-0 nova_compute[248510]: 2025-12-13 08:38:46.649 248514 DEBUG nova.compute.manager [req-ed5656db-b671-4abd-aedd-7ee6aa687b00 req-ae00445d-ad36-4dcb-823e-cc9b6ad77fed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:46 compute-0 nova_compute[248510]: 2025-12-13 08:38:46.650 248514 DEBUG oslo_concurrency.lockutils [req-ed5656db-b671-4abd-aedd-7ee6aa687b00 req-ae00445d-ad36-4dcb-823e-cc9b6ad77fed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:46 compute-0 nova_compute[248510]: 2025-12-13 08:38:46.651 248514 DEBUG oslo_concurrency.lockutils [req-ed5656db-b671-4abd-aedd-7ee6aa687b00 req-ae00445d-ad36-4dcb-823e-cc9b6ad77fed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:46 compute-0 nova_compute[248510]: 2025-12-13 08:38:46.651 248514 DEBUG oslo_concurrency.lockutils [req-ed5656db-b671-4abd-aedd-7ee6aa687b00 req-ae00445d-ad36-4dcb-823e-cc9b6ad77fed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:46 compute-0 nova_compute[248510]: 2025-12-13 08:38:46.651 248514 DEBUG nova.compute.manager [req-ed5656db-b671-4abd-aedd-7ee6aa687b00 req-ae00445d-ad36-4dcb-823e-cc9b6ad77fed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:38:46 compute-0 nova_compute[248510]: 2025-12-13 08:38:46.651 248514 WARNING nova.compute.manager [req-ed5656db-b671-4abd-aedd-7ee6aa687b00 req-ae00445d-ad36-4dcb-823e-cc9b6ad77fed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received unexpected event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with vm_state rescued and task_state None.
Dec 13 08:38:46 compute-0 nova_compute[248510]: 2025-12-13 08:38:46.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:38:46 compute-0 ceph-mon[76537]: pgmap v2259: 321 pgs: 321 active+clean; 418 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.4 MiB/s wr, 266 op/s
Dec 13 08:38:46 compute-0 ceph-mon[76537]: osdmap e242: 3 total, 3 up, 3 in
Dec 13 08:38:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:46.923 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:38:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:46.924 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:38:46 compute-0 nova_compute[248510]: 2025-12-13 08:38:46.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 418 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Dec 13 08:38:48 compute-0 ovn_controller[148476]: 2025-12-13T08:38:48Z|00880|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 13 08:38:48 compute-0 ceph-mon[76537]: pgmap v2261: 321 pgs: 321 active+clean; 418 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Dec 13 08:38:48 compute-0 nova_compute[248510]: 2025-12-13 08:38:48.827 248514 DEBUG nova.compute.manager [req-1c0c7024-f1bc-4433-b1bd-cdac04f200c8 req-fcefe544-9f80-4212-9f35-b8e3562396ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:48 compute-0 nova_compute[248510]: 2025-12-13 08:38:48.827 248514 DEBUG oslo_concurrency.lockutils [req-1c0c7024-f1bc-4433-b1bd-cdac04f200c8 req-fcefe544-9f80-4212-9f35-b8e3562396ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:48 compute-0 nova_compute[248510]: 2025-12-13 08:38:48.827 248514 DEBUG oslo_concurrency.lockutils [req-1c0c7024-f1bc-4433-b1bd-cdac04f200c8 req-fcefe544-9f80-4212-9f35-b8e3562396ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:48 compute-0 nova_compute[248510]: 2025-12-13 08:38:48.828 248514 DEBUG oslo_concurrency.lockutils [req-1c0c7024-f1bc-4433-b1bd-cdac04f200c8 req-fcefe544-9f80-4212-9f35-b8e3562396ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:48 compute-0 nova_compute[248510]: 2025-12-13 08:38:48.828 248514 DEBUG nova.compute.manager [req-1c0c7024-f1bc-4433-b1bd-cdac04f200c8 req-fcefe544-9f80-4212-9f35-b8e3562396ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:38:48 compute-0 nova_compute[248510]: 2025-12-13 08:38:48.828 248514 WARNING nova.compute.manager [req-1c0c7024-f1bc-4433-b1bd-cdac04f200c8 req-fcefe544-9f80-4212-9f35-b8e3562396ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received unexpected event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with vm_state rescued and task_state None.
Dec 13 08:38:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 436 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 268 op/s
Dec 13 08:38:49 compute-0 nova_compute[248510]: 2025-12-13 08:38:49.890 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:50 compute-0 nova_compute[248510]: 2025-12-13 08:38:50.521 248514 INFO nova.compute.manager [None req-3b87bcea-62e9-4416-a763-2cb56083df2d 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Pausing
Dec 13 08:38:50 compute-0 nova_compute[248510]: 2025-12-13 08:38:50.522 248514 DEBUG nova.objects.instance [None req-3b87bcea-62e9-4416-a763-2cb56083df2d 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'flavor' on Instance uuid 6fb6c605-344c-4ed9-806d-96964b0474f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:50 compute-0 nova_compute[248510]: 2025-12-13 08:38:50.555 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615130.5549407, 6fb6c605-344c-4ed9-806d-96964b0474f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:50 compute-0 nova_compute[248510]: 2025-12-13 08:38:50.555 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] VM Paused (Lifecycle Event)
Dec 13 08:38:50 compute-0 nova_compute[248510]: 2025-12-13 08:38:50.557 248514 DEBUG nova.compute.manager [None req-3b87bcea-62e9-4416-a763-2cb56083df2d 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:50 compute-0 nova_compute[248510]: 2025-12-13 08:38:50.601 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:50 compute-0 nova_compute[248510]: 2025-12-13 08:38:50.608 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:50 compute-0 nova_compute[248510]: 2025-12-13 08:38:50.753 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:50 compute-0 nova_compute[248510]: 2025-12-13 08:38:50.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:38:50 compute-0 nova_compute[248510]: 2025-12-13 08:38:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:38:50 compute-0 nova_compute[248510]: 2025-12-13 08:38:50.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:38:50 compute-0 nova_compute[248510]: 2025-12-13 08:38:50.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:38:50 compute-0 ceph-mon[76537]: pgmap v2262: 321 pgs: 321 active+clean; 436 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 268 op/s
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.148 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.148 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.148 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.148 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.676 248514 INFO nova.compute.manager [None req-33feb07c-a376-4c65-a100-6bc40bbe0a95 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Unpausing
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.677 248514 DEBUG nova.objects.instance [None req-33feb07c-a376-4c65-a100-6bc40bbe0a95 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'flavor' on Instance uuid 6fb6c605-344c-4ed9-806d-96964b0474f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.706 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615131.7060363, 6fb6c605-344c-4ed9-806d-96964b0474f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.707 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] VM Resumed (Lifecycle Event)
Dec 13 08:38:51 compute-0 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.710 248514 DEBUG nova.virt.libvirt.guest [None req-33feb07c-a376-4c65-a100-6bc40bbe0a95 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.711 248514 DEBUG nova.compute.manager [None req-33feb07c-a376-4c65-a100-6bc40bbe0a95 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.734 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.736 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:38:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 451 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 257 op/s
Dec 13 08:38:51 compute-0 nova_compute[248510]: 2025-12-13 08:38:51.761 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] During sync_power_state the instance has a pending task (unpausing). Skip.
Dec 13 08:38:52 compute-0 nova_compute[248510]: 2025-12-13 08:38:52.651 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:38:52 compute-0 nova_compute[248510]: 2025-12-13 08:38:52.670 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:38:52 compute-0 nova_compute[248510]: 2025-12-13 08:38:52.671 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:38:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Dec 13 08:38:52 compute-0 ceph-mon[76537]: pgmap v2263: 321 pgs: 321 active+clean; 451 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 257 op/s
Dec 13 08:38:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Dec 13 08:38:52 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Dec 13 08:38:53 compute-0 nova_compute[248510]: 2025-12-13 08:38:53.033 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:38:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 467 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.3 MiB/s wr, 276 op/s
Dec 13 08:38:53 compute-0 nova_compute[248510]: 2025-12-13 08:38:53.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:38:53 compute-0 ceph-mon[76537]: osdmap e243: 3 total, 3 up, 3 in
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.044 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.044 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.137 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.231 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.232 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.232 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.232 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.232 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.233 248514 INFO nova.compute.manager [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Terminating instance
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.234 248514 DEBUG nova.compute.manager [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.247 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.247 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.254 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.254 248514 INFO nova.compute.claims [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:38:54 compute-0 kernel: tap7527f90e-f0 (unregistering): left promiscuous mode
Dec 13 08:38:54 compute-0 NetworkManager[50376]: <info>  [1765615134.2769] device (tap7527f90e-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.290 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:54 compute-0 ovn_controller[148476]: 2025-12-13T08:38:54Z|00881|binding|INFO|Releasing lport 7527f90e-f037-4a94-a011-f952b6e72722 from this chassis (sb_readonly=0)
Dec 13 08:38:54 compute-0 ovn_controller[148476]: 2025-12-13T08:38:54Z|00882|binding|INFO|Setting lport 7527f90e-f037-4a94-a011-f952b6e72722 down in Southbound
Dec 13 08:38:54 compute-0 ovn_controller[148476]: 2025-12-13T08:38:54Z|00883|binding|INFO|Removing iface tap7527f90e-f0 ovn-installed in OVS
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.292 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.300 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f9:b9 10.100.0.7'], port_security=['fa:16:3e:f6:f9:b9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '88cb43c1-f01b-4098-84ea-d372176a0e20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '6', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7527f90e-f037-4a94-a011-f952b6e72722) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.301 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7527f90e-f037-4a94-a011-f952b6e72722 in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 unbound from our chassis
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.302 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f249527d-f9e6-43ce-a178-f71fc1d38891
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.309 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.320 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[87c44d16-a827-4fa4-86cb-4e98fa3ebd18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.347 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[118d0996-9b78-47f7-afef-d6e8ef016cb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.349 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b2953401-58ab-4851-a619-6f958bd05c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:54 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000058.scope: Deactivated successfully.
Dec 13 08:38:54 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000058.scope: Consumed 9.713s CPU time.
Dec 13 08:38:54 compute-0 systemd-machined[210538]: Machine qemu-111-instance-00000058 terminated.
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.378 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[19331db5-c74d-4981-9e93-0ed8a0b5f15e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.395 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e02352af-9955-4215-8d3a-a8e8c94bb729]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf249527d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:d2:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766854, 'reachable_time': 22354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335911, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.412 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f15d4f2-45bb-4666-92fc-1450c5fb9ba9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766865, 'tstamp': 766865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335912, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766868, 'tstamp': 766868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335912, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.413 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf249527d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.419 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.419 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf249527d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.419 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.420 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf249527d-f0, col_values=(('external_ids', {'iface-id': '238a7791-e0e6-4f94-b696-bd1f6886a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.420 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.474 248514 INFO nova.virt.libvirt.driver [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance destroyed successfully.
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.474 248514 DEBUG nova.objects.instance [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'resources' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.492 248514 DEBUG nova.virt.libvirt.vif [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:38:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1217173991',display_name='tempest-ServerRescueNegativeTestJSON-server-1217173991',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1217173991',id=88,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:38:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-axcrlu30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:38:45Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=88cb43c1-f01b-4098-84ea-d372176a0e20,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.493 248514 DEBUG nova.network.os_vif_util [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.494 248514 DEBUG nova.network.os_vif_util [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.494 248514 DEBUG os_vif [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.495 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.495 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7527f90e-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.500 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.502 248514 INFO os_vif [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0')
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.543 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:54 compute-0 ceph-mon[76537]: pgmap v2265: 321 pgs: 321 active+clean; 467 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.3 MiB/s wr, 276 op/s
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.950 248514 INFO nova.virt.libvirt.driver [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Deleting instance files /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20_del
Dec 13 08:38:54 compute-0 nova_compute[248510]: 2025-12-13 08:38:54.951 248514 INFO nova.virt.libvirt.driver [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Deletion of /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20_del complete
Dec 13 08:38:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:38:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243695784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.106 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.111 248514 DEBUG nova.compute.provider_tree [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:38:55 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Dec 13 08:38:55 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005a.scope: Consumed 13.384s CPU time.
Dec 13 08:38:55 compute-0 systemd-machined[210538]: Machine qemu-110-instance-0000005a terminated.
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.327 248514 DEBUG nova.scheduler.client.report [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.386 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.387 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.396 248514 INFO nova.compute.manager [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Took 1.16 seconds to destroy the instance on the hypervisor.
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.397 248514 DEBUG oslo.service.loopingcall [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.397 248514 DEBUG nova.compute.manager [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.397 248514 DEBUG nova.network.neutron [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:38:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:55.419 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:55.419 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:55.420 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.499 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.500 248514 DEBUG nova.network.neutron [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.568 248514 INFO nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.630 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:38:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 460 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.6 MiB/s wr, 284 op/s
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:55 compute-0 nova_compute[248510]: 2025-12-13 08:38:55.828 248514 DEBUG nova.policy [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7507939da64e4320a1c6f389d0fc9045', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:38:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Dec 13 08:38:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3243695784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Dec 13 08:38:55 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Dec 13 08:38:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:55.927 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.081 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.083 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.083 248514 INFO nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Creating image(s)
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.104 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.128 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.152 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.156 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.199 248514 DEBUG nova.compute.manager [req-4981d1c5-10be-4bd5-b55a-fb71f249cfd8 req-f6f8bfb2-0be3-4fb7-a213-b90390a07cee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-unplugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.200 248514 DEBUG oslo_concurrency.lockutils [req-4981d1c5-10be-4bd5-b55a-fb71f249cfd8 req-f6f8bfb2-0be3-4fb7-a213-b90390a07cee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.200 248514 DEBUG oslo_concurrency.lockutils [req-4981d1c5-10be-4bd5-b55a-fb71f249cfd8 req-f6f8bfb2-0be3-4fb7-a213-b90390a07cee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.200 248514 DEBUG oslo_concurrency.lockutils [req-4981d1c5-10be-4bd5-b55a-fb71f249cfd8 req-f6f8bfb2-0be3-4fb7-a213-b90390a07cee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.201 248514 DEBUG nova.compute.manager [req-4981d1c5-10be-4bd5-b55a-fb71f249cfd8 req-f6f8bfb2-0be3-4fb7-a213-b90390a07cee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-unplugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.201 248514 DEBUG nova.compute.manager [req-4981d1c5-10be-4bd5-b55a-fb71f249cfd8 req-f6f8bfb2-0be3-4fb7-a213-b90390a07cee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-unplugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.204 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance shutdown successfully after 13 seconds.
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.213 248514 INFO nova.virt.libvirt.driver [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance destroyed successfully.
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.219 248514 INFO nova.virt.libvirt.driver [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance destroyed successfully.
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.240 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.241 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.241 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.242 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.263 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.267 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 658e5f04-399b-4a8a-8680-5ae9717949c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.574 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 658e5f04-399b-4a8a-8680-5ae9717949c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.631 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deleting instance files /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6_del
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.632 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deletion of /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6_del complete
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.640 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] resizing rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.735 248514 DEBUG nova.objects.instance [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 658e5f04-399b-4a8a-8680-5ae9717949c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.776 248514 DEBUG nova.network.neutron [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.778 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.778 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Ensure instance console log exists: /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.779 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.779 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.779 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.871 248514 INFO nova.compute.manager [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Took 1.47 seconds to deallocate network for instance.
Dec 13 08:38:56 compute-0 ceph-mon[76537]: pgmap v2266: 321 pgs: 321 active+clean; 460 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.6 MiB/s wr, 284 op/s
Dec 13 08:38:56 compute-0 ceph-mon[76537]: osdmap e244: 3 total, 3 up, 3 in
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.940 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.941 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Creating image(s)
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.967 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:56 compute-0 nova_compute[248510]: 2025-12-13 08:38:56.990 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.015 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.019 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.056 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.057 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.096 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.097 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.098 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.098 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.117 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.120 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.285 248514 DEBUG oslo_concurrency.processutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.454 248514 DEBUG nova.network.neutron [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Successfully created port: f767e871-4f9e-414e-a61d-c70cffe80128 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:38:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 460 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.6 MiB/s wr, 185 op/s
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.779 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.846 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] resizing rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:38:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Dec 13 08:38:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Dec 13 08:38:57 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.931 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.932 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Ensure instance console log exists: /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.932 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.933 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.933 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.934 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:38:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:38:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505906422' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.939 248514 WARNING nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.960 248514 DEBUG oslo_concurrency.processutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.961 248514 DEBUG nova.virt.libvirt.host [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.961 248514 DEBUG nova.virt.libvirt.host [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.965 248514 DEBUG nova.compute.provider_tree [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.967 248514 DEBUG nova.virt.libvirt.host [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.968 248514 DEBUG nova.virt.libvirt.host [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.968 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.968 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.969 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.969 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.969 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.969 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.969 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.970 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.970 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.970 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.970 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.970 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.971 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.989 248514 DEBUG nova.scheduler.client.report [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:38:57 compute-0 nova_compute[248510]: 2025-12-13 08:38:57.996 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.065 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.070 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.070 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.071 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.071 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.165 248514 INFO nova.scheduler.client.report [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Deleted allocations for instance 88cb43c1-f01b-4098-84ea-d372176a0e20
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.262 248514 DEBUG nova.compute.manager [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.262 248514 DEBUG oslo_concurrency.lockutils [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.263 248514 DEBUG oslo_concurrency.lockutils [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.263 248514 DEBUG oslo_concurrency.lockutils [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.263 248514 DEBUG nova.compute.manager [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.264 248514 WARNING nova.compute.manager [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received unexpected event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with vm_state deleted and task_state None.
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.264 248514 DEBUG nova.compute.manager [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-deleted-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.269 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.356 248514 DEBUG nova.network.neutron [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Successfully updated port: f767e871-4f9e-414e-a61d-c70cffe80128 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.372 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.372 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquired lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.372 248514 DEBUG nova.network.neutron [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.493 248514 DEBUG nova.compute.manager [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-changed-f767e871-4f9e-414e-a61d-c70cffe80128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.493 248514 DEBUG nova.compute.manager [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Refreshing instance network info cache due to event network-changed-f767e871-4f9e-414e-a61d-c70cffe80128. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.493 248514 DEBUG oslo_concurrency.lockutils [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:38:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3833616363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.574 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.608 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.615 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:38:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421621160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.660 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.672 248514 DEBUG nova.network.neutron [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.691 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.691 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.692 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.692 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.692 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.695 248514 INFO nova.compute.manager [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Terminating instance
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.696 248514 DEBUG nova.compute.manager [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:38:58 compute-0 kernel: tap3b8730f4-e2 (unregistering): left promiscuous mode
Dec 13 08:38:58 compute-0 NetworkManager[50376]: <info>  [1765615138.7482] device (tap3b8730f4-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00884|binding|INFO|Releasing lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f from this chassis (sb_readonly=0)
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00885|binding|INFO|Setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f down in Southbound
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00886|binding|INFO|Removing iface tap3b8730f4-e2 ovn-installed in OVS
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.767 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:9a:26 10.100.0.6'], port_security=['fa:16:3e:e7:9a:26 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6fb6c605-344c-4ed9-806d-96964b0474f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b8730f4-e225-4c0a-bf95-708c9c122a4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:38:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.769 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b8730f4-e225-4c0a-bf95-708c9c122a4f in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 unbound from our chassis
Dec 13 08:38:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.771 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f249527d-f9e6-43ce-a178-f71fc1d38891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:38:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.772 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec09ef2-7583-4fc1-b1a4-8510d67d563e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.773 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891 namespace which is not needed anymore
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.775 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:58 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000057.scope: Deactivated successfully.
Dec 13 08:38:58 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000057.scope: Consumed 15.149s CPU time.
Dec 13 08:38:58 compute-0 systemd-machined[210538]: Machine qemu-107-instance-00000057 terminated.
Dec 13 08:38:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Dec 13 08:38:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Dec 13 08:38:58 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Dec 13 08:38:58 compute-0 ceph-mon[76537]: pgmap v2268: 321 pgs: 321 active+clean; 460 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.6 MiB/s wr, 185 op/s
Dec 13 08:38:58 compute-0 ceph-mon[76537]: osdmap e245: 3 total, 3 up, 3 in
Dec 13 08:38:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1505906422' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3833616363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2421621160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:38:58 compute-0 kernel: tap3b8730f4-e2: entered promiscuous mode
Dec 13 08:38:58 compute-0 systemd-udevd[336427]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:38:58 compute-0 kernel: tap3b8730f4-e2 (unregistering): left promiscuous mode
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00887|binding|INFO|Claiming lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f for this chassis.
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00888|binding|INFO|3b8730f4-e225-4c0a-bf95-708c9c122a4f: Claiming fa:16:3e:e7:9a:26 10.100.0.6
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.918 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.930 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:9a:26 10.100.0.6'], port_security=['fa:16:3e:e7:9a:26 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6fb6c605-344c-4ed9-806d-96964b0474f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b8730f4-e225-4c0a-bf95-708c9c122a4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:38:58 compute-0 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [NOTICE]   (333778) : haproxy version is 2.8.14-c23fe91
Dec 13 08:38:58 compute-0 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [NOTICE]   (333778) : path to executable is /usr/sbin/haproxy
Dec 13 08:38:58 compute-0 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [WARNING]  (333778) : Exiting Master process...
Dec 13 08:38:58 compute-0 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [WARNING]  (333778) : Exiting Master process...
Dec 13 08:38:58 compute-0 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [ALERT]    (333778) : Current worker (333780) exited with code 143 (Terminated)
Dec 13 08:38:58 compute-0 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [WARNING]  (333778) : All workers exited. Exiting... (0)
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.941 248514 INFO nova.virt.libvirt.driver [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Instance destroyed successfully.
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.942 248514 DEBUG nova.objects.instance [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'resources' on Instance uuid 6fb6c605-344c-4ed9-806d-96964b0474f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:58 compute-0 systemd[1]: libpod-a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441.scope: Deactivated successfully.
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.948 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00889|binding|INFO|Setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f ovn-installed in OVS
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00890|binding|INFO|Setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f up in Southbound
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00891|binding|INFO|Releasing lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f from this chassis (sb_readonly=1)
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00892|if_status|INFO|Dropped 4 log messages in last 381 seconds (most recently, 381 seconds ago) due to excessive rate
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00893|if_status|INFO|Not setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f down as sb is readonly
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00894|binding|INFO|Removing iface tap3b8730f4-e2 ovn-installed in OVS
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00895|binding|INFO|Releasing lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f from this chassis (sb_readonly=0)
Dec 13 08:38:58 compute-0 ovn_controller[148476]: 2025-12-13T08:38:58Z|00896|binding|INFO|Setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f down in Southbound
Dec 13 08:38:58 compute-0 podman[336447]: 2025-12-13 08:38:58.95959103 +0000 UTC m=+0.076170283 container died a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.965 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.972 248514 DEBUG nova.virt.libvirt.vif [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:37:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1394190380',display_name='tempest-ServerRescueNegativeTestJSON-server-1394190380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1394190380',id=87,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:38:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-689d01nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:38:51Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=6fb6c605-344c-4ed9-806d-96964b0474f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.973 248514 DEBUG nova.network.os_vif_util [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.974 248514 DEBUG nova.network.os_vif_util [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.975 248514 DEBUG os_vif [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:38:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.975 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:9a:26 10.100.0.6'], port_security=['fa:16:3e:e7:9a:26 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6fb6c605-344c-4ed9-806d-96964b0474f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b8730f4-e225-4c0a-bf95-708c9c122a4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.977 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.977 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b8730f4-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.984 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441-userdata-shm.mount: Deactivated successfully.
Dec 13 08:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1630a06e863d3fc2328692d0f3a97579d4a2f822414283bd74ab2bb685b53f54-merged.mount: Deactivated successfully.
Dec 13 08:38:58 compute-0 nova_compute[248510]: 2025-12-13 08:38:58.992 248514 INFO os_vif [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2')
Dec 13 08:38:59 compute-0 podman[336447]: 2025-12-13 08:38:59.006475224 +0000 UTC m=+0.123054477 container cleanup a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.011 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.011 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:38:59 compute-0 systemd[1]: libpod-conmon-a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441.scope: Deactivated successfully.
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.017 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.018 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.023 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.023 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:38:59 compute-0 podman[336493]: 2025-12-13 08:38:59.076292178 +0000 UTC m=+0.047708326 container remove a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.081 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[66fea91a-c5b7-43c0-b2e0-c7b920c4580f]: (4, ('Sat Dec 13 08:38:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891 (a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441)\na833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441\nSat Dec 13 08:38:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891 (a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441)\na833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.084 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[442f5feb-1dde-4313-a783-f71f268db8b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.086 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf249527d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.089 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:59 compute-0 kernel: tapf249527d-f0: left promiscuous mode
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.125 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.129 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a097e4-fb29-4c1b-bac3-00ed3efb605c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.141 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a895af-9bb2-4dcc-863e-15216b363103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.145 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db6e3c38-6411-4a55-bb6d-21daccdc347f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.165 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[18a09cd6-a366-4720-be16-de1764e54653]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766846, 'reachable_time': 37032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336513, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:59 compute-0 systemd[1]: run-netns-ovnmeta\x2df249527d\x2df9e6\x2d43ce\x2da178\x2df71fc1d38891.mount: Deactivated successfully.
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.170 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.171 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc4ee7d-2951-474b-a615-a2d570183d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.172 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b8730f4-e225-4c0a-bf95-708c9c122a4f in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 unbound from our chassis
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.173 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f249527d-f9e6-43ce-a178-f71fc1d38891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.174 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[76892199-5ea3-4d97-ab47-6d8d491983d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.174 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b8730f4-e225-4c0a-bf95-708c9c122a4f in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 unbound from our chassis
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.175 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f249527d-f9e6-43ce-a178-f71fc1d38891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:38:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.175 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09ab73ae-ab57-40ed-ae03-95ccc94fa585]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:38:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:38:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/185825765' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.259 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.261 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:38:59 compute-0 nova_compute[248510]:   <uuid>0187165c-81b1-43b8-81f5-05e847fe1fa6</uuid>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   <name>instance-0000005a</name>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerShowV247Test-server-1806406257</nova:name>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:38:57</nova:creationTime>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:38:59 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:38:59 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:38:59 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:38:59 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:38:59 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:38:59 compute-0 nova_compute[248510]:         <nova:user uuid="0e8471eedc0e4ae0a028132802bc1967">tempest-ServerShowV247Test-2063492104-project-member</nova:user>
Dec 13 08:38:59 compute-0 nova_compute[248510]:         <nova:project uuid="c4b40eb3fb314c44867a54f3ba244ec1">tempest-ServerShowV247Test-2063492104</nova:project>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <system>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <entry name="serial">0187165c-81b1-43b8-81f5-05e847fe1fa6</entry>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <entry name="uuid">0187165c-81b1-43b8-81f5-05e847fe1fa6</entry>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     </system>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   <os>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   </os>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   <features>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   </features>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/0187165c-81b1-43b8-81f5-05e847fe1fa6_disk">
Dec 13 08:38:59 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:59 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config">
Dec 13 08:38:59 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       </source>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:38:59 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/console.log" append="off"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <video>
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     </video>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:38:59 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:38:59 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:38:59 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:38:59 compute-0 nova_compute[248510]: </domain>
Dec 13 08:38:59 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.294 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.295 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3201MB free_disk=59.752280401065946GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.296 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.296 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.321 248514 INFO nova.virt.libvirt.driver [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Deleting instance files /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9_del
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.322 248514 INFO nova.virt.libvirt.driver [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Deletion of /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9_del complete
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.328 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.328 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.328 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Using config drive
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.346 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.396 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.435 248514 INFO nova.compute.manager [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Took 0.74 seconds to destroy the instance on the hypervisor.
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.435 248514 DEBUG oslo.service.loopingcall [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.436 248514 DEBUG nova.compute.manager [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.436 248514 DEBUG nova.network.neutron [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.439 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'keypairs' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.444 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9b486227-b98c-4393-9a3c-aae3e3c419a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.444 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 6fb6c605-344c-4ed9-806d-96964b0474f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.445 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 0d52c1df-d252-4012-b05c-40737f1089bb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.445 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 0187165c-81b1-43b8-81f5-05e847fe1fa6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.445 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 658e5f04-399b-4a8a-8680-5ae9717949c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.445 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.446 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.582 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 377 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 446 KiB/s rd, 7.8 MiB/s wr, 304 op/s
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.879 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Creating config drive at /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config
Dec 13 08:38:59 compute-0 nova_compute[248510]: 2025-12-13 08:38:59.884 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8z49aori execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:38:59 compute-0 ceph-mon[76537]: osdmap e246: 3 total, 3 up, 3 in
Dec 13 08:38:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/185825765' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.026 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8z49aori" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.057 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.061 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.095 248514 DEBUG nova.network.neutron [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Updating instance_info_cache with network_info: [{"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.129 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Releasing lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.130 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Instance network_info: |[{"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.130 248514 DEBUG oslo_concurrency.lockutils [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.130 248514 DEBUG nova.network.neutron [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Refreshing network info cache for port f767e871-4f9e-414e-a61d-c70cffe80128 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.134 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Start _get_guest_xml network_info=[{"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:39:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:39:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527932293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.140 248514 WARNING nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.148 248514 DEBUG nova.virt.libvirt.host [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.149 248514 DEBUG nova.virt.libvirt.host [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.155 248514 DEBUG nova.virt.libvirt.host [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.155 248514 DEBUG nova.virt.libvirt.host [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.156 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.156 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.157 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.157 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.157 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.157 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.158 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.158 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.158 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.158 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.159 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.159 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.163 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.193 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.201 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.237 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.281 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.282 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.417 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.418 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deleting local config drive /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config because it was imported into RBD.
Dec 13 08:39:00 compute-0 systemd-machined[210538]: New machine qemu-112-instance-0000005a.
Dec 13 08:39:00 compute-0 systemd[1]: Started Virtual Machine qemu-112-instance-0000005a.
Dec 13 08:39:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:39:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2674907218' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.769 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.790 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.794 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.910 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 0187165c-81b1-43b8-81f5-05e847fe1fa6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.911 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615140.9099798, 0187165c-81b1-43b8-81f5-05e847fe1fa6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.911 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] VM Resumed (Lifecycle Event)
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.915 248514 DEBUG nova.compute.manager [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.915 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:39:00 compute-0 ceph-mon[76537]: pgmap v2271: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 377 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 446 KiB/s rd, 7.8 MiB/s wr, 304 op/s
Dec 13 08:39:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2527932293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2674907218' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.920 248514 INFO nova.virt.libvirt.driver [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance spawned successfully.
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.921 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.979 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.986 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.990 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.991 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.991 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.992 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.992 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:00 compute-0 nova_compute[248510]: 2025-12-13 08:39:00.993 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.047 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.048 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615140.9167469, 0187165c-81b1-43b8-81f5-05e847fe1fa6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.048 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] VM Started (Lifecycle Event)
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.116 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.119 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.155 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.166 248514 DEBUG nova.compute.manager [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.195 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-unplugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.196 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.196 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.197 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.197 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-unplugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.197 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-unplugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.197 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.198 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.198 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.198 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.198 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.199 248514 WARNING nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received unexpected event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with vm_state active and task_state deleting.
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.199 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.199 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.199 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.200 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.200 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.200 248514 WARNING nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received unexpected event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with vm_state active and task_state deleting.
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.200 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.201 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.201 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.201 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.201 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.202 248514 WARNING nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received unexpected event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with vm_state active and task_state deleting.
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.202 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-unplugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.202 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.202 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.203 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.203 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-unplugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.203 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-unplugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.260 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.261 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.261 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.364 248514 DEBUG nova.network.neutron [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.395 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.409 248514 INFO nova.compute.manager [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Took 1.97 seconds to deallocate network for instance.
Dec 13 08:39:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:39:01 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269903454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.481 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.482 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.491 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.697s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.493 248514 DEBUG nova.virt.libvirt.vif [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:38:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2058674229',display_name='tempest-ServerActionsTestOtherB-server-2058674229',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2058674229',id=91,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-hq6btaus',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsT
estOtherB-1515133862-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:55Z,user_data=None,user_id='7507939da64e4320a1c6f389d0fc9045',uuid=658e5f04-399b-4a8a-8680-5ae9717949c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.493 248514 DEBUG nova.network.os_vif_util [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.494 248514 DEBUG nova.network.os_vif_util [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.496 248514 DEBUG nova.objects.instance [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 658e5f04-399b-4a8a-8680-5ae9717949c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.511 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:39:01 compute-0 nova_compute[248510]:   <uuid>658e5f04-399b-4a8a-8680-5ae9717949c0</uuid>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   <name>instance-0000005b</name>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerActionsTestOtherB-server-2058674229</nova:name>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:39:00</nova:creationTime>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:39:01 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:39:01 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:39:01 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:39:01 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:39:01 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:39:01 compute-0 nova_compute[248510]:         <nova:user uuid="7507939da64e4320a1c6f389d0fc9045">tempest-ServerActionsTestOtherB-1515133862-project-member</nova:user>
Dec 13 08:39:01 compute-0 nova_compute[248510]:         <nova:project uuid="f0aee359fbaa484eb7ead3f81eef51e7">tempest-ServerActionsTestOtherB-1515133862</nova:project>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:39:01 compute-0 nova_compute[248510]:         <nova:port uuid="f767e871-4f9e-414e-a61d-c70cffe80128">
Dec 13 08:39:01 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <system>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <entry name="serial">658e5f04-399b-4a8a-8680-5ae9717949c0</entry>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <entry name="uuid">658e5f04-399b-4a8a-8680-5ae9717949c0</entry>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     </system>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   <os>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   </os>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   <features>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   </features>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/658e5f04-399b-4a8a-8680-5ae9717949c0_disk">
Dec 13 08:39:01 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       </source>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:39:01 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/658e5f04-399b-4a8a-8680-5ae9717949c0_disk.config">
Dec 13 08:39:01 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       </source>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:39:01 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:96:d2:10"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <target dev="tapf767e871-4f"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/console.log" append="off"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <video>
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     </video>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:39:01 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:39:01 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:39:01 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:39:01 compute-0 nova_compute[248510]: </domain>
Dec 13 08:39:01 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.518 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Preparing to wait for external event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.518 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.519 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.519 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.520 248514 DEBUG nova.virt.libvirt.vif [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:38:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2058674229',display_name='tempest-ServerActionsTestOtherB-server-2058674229',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2058674229',id=91,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-hq6btaus',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-Serv
erActionsTestOtherB-1515133862-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:55Z,user_data=None,user_id='7507939da64e4320a1c6f389d0fc9045',uuid=658e5f04-399b-4a8a-8680-5ae9717949c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.520 248514 DEBUG nova.network.os_vif_util [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.521 248514 DEBUG nova.network.os_vif_util [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.522 248514 DEBUG os_vif [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.524 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.524 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.528 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf767e871-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.529 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf767e871-4f, col_values=(('external_ids', {'iface-id': 'f767e871-4f9e-414e-a61d-c70cffe80128', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:d2:10', 'vm-uuid': '658e5f04-399b-4a8a-8680-5ae9717949c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:01 compute-0 NetworkManager[50376]: <info>  [1765615141.5323] manager: (tapf767e871-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.537 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.538 248514 INFO os_vif [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f')
Dec 13 08:39:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.602 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.602 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.603 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No VIF found with MAC fa:16:3e:96:d2:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.603 248514 INFO nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Using config drive
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.623 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:01 compute-0 nova_compute[248510]: 2025-12-13 08:39:01.695 248514 DEBUG oslo_concurrency.processutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 350 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 284 KiB/s rd, 7.2 MiB/s wr, 410 op/s
Dec 13 08:39:01 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3269903454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.188 248514 INFO nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Creating config drive at /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/disk.config
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.198 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp690hln9a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:39:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524472740' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.283 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.284 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.284 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.285 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.299 248514 DEBUG oslo_concurrency.processutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.305 248514 DEBUG nova.compute.provider_tree [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.344 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp690hln9a" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.371 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.374 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/disk.config 658e5f04-399b-4a8a-8680-5ae9717949c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.451 248514 DEBUG nova.scheduler.client.report [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.492 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/disk.config 658e5f04-399b-4a8a-8680-5ae9717949c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.493 248514 INFO nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Deleting local config drive /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/disk.config because it was imported into RBD.
Dec 13 08:39:02 compute-0 kernel: tapf767e871-4f: entered promiscuous mode
Dec 13 08:39:02 compute-0 NetworkManager[50376]: <info>  [1765615142.5405] manager: (tapf767e871-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:02 compute-0 ovn_controller[148476]: 2025-12-13T08:39:02Z|00897|binding|INFO|Claiming lport f767e871-4f9e-414e-a61d-c70cffe80128 for this chassis.
Dec 13 08:39:02 compute-0 ovn_controller[148476]: 2025-12-13T08:39:02Z|00898|binding|INFO|f767e871-4f9e-414e-a61d-c70cffe80128: Claiming fa:16:3e:96:d2:10 10.100.0.4
Dec 13 08:39:02 compute-0 ovn_controller[148476]: 2025-12-13T08:39:02Z|00899|binding|INFO|Setting lport f767e871-4f9e-414e-a61d-c70cffe80128 ovn-installed in OVS
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.567 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:02 compute-0 systemd-machined[210538]: New machine qemu-113-instance-0000005b.
Dec 13 08:39:02 compute-0 systemd[1]: Started Virtual Machine qemu-113-instance-0000005b.
Dec 13 08:39:02 compute-0 systemd-udevd[336837]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:39:02 compute-0 NetworkManager[50376]: <info>  [1765615142.6298] device (tapf767e871-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:39:02 compute-0 NetworkManager[50376]: <info>  [1765615142.6318] device (tapf767e871-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:39:02 compute-0 podman[336809]: 2025-12-13 08:39:02.681992651 +0000 UTC m=+0.107052659 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 08:39:02 compute-0 podman[336810]: 2025-12-13 08:39:02.699576678 +0000 UTC m=+0.112660349 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:39:02 compute-0 podman[336808]: 2025-12-13 08:39:02.708492079 +0000 UTC m=+0.135079235 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 08:39:02 compute-0 ovn_controller[148476]: 2025-12-13T08:39:02Z|00900|binding|INFO|Setting lport f767e871-4f9e-414e-a61d-c70cffe80128 up in Southbound
Dec 13 08:39:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.882 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:d2:10 10.100.0.4'], port_security=['fa:16:3e:96:d2:10 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '658e5f04-399b-4a8a-8680-5ae9717949c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77183472-893b-4c33-ab3e-e88f01770ed8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f767e871-4f9e-414e-a61d-c70cffe80128) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:39:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.883 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f767e871-4f9e-414e-a61d-c70cffe80128 in datapath 369f7528-6571-47b6-a030-5281647e1eac bound to our chassis
Dec 13 08:39:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.885 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.909 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.911 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[057690bf-00c8-4cbc-97e1-25d2f9a467c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:02 compute-0 ceph-mon[76537]: pgmap v2272: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 350 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 284 KiB/s rd, 7.2 MiB/s wr, 410 op/s
Dec 13 08:39:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1524472740' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:02 compute-0 nova_compute[248510]: 2025-12-13 08:39:02.939 248514 INFO nova.scheduler.client.report [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Deleted allocations for instance 6fb6c605-344c-4ed9-806d-96964b0474f9
Dec 13 08:39:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.945 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a636c15d-8cfa-4c2c-bed1-44372008ed7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.948 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[48a19ed5-1d2a-451d-b302-1c6a170da622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.975 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a08efb6a-5096-437a-b74d-1dbb36f8d77e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.997 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3566b6ef-6bfd-411f-907e-881ba1368fe5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 21848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336886, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:03.017 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ba331a-7d18-4e03-b6cf-f3cfc68b2ca3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763117, 'tstamp': 763117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336887, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763121, 'tstamp': 763121}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336887, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:03.019 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.020 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.021 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:03.021 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:03.021 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:39:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:03.022 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:03.022 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.034 248514 DEBUG nova.network.neutron [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Updated VIF entry in instance network info cache for port f767e871-4f9e-414e-a61d-c70cffe80128. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.035 248514 DEBUG nova.network.neutron [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Updating instance_info_cache with network_info: [{"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.234 248514 DEBUG oslo_concurrency.lockutils [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.368 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.659 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615143.6589673, 658e5f04-399b-4a8a-8680-5ae9717949c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.660 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] VM Started (Lifecycle Event)
Dec 13 08:39:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 315 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.5 MiB/s wr, 409 op/s
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.961 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "0187165c-81b1-43b8-81f5-05e847fe1fa6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.961 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0187165c-81b1-43b8-81f5-05e847fe1fa6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.962 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "0187165c-81b1-43b8-81f5-05e847fe1fa6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.962 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0187165c-81b1-43b8-81f5-05e847fe1fa6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.962 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0187165c-81b1-43b8-81f5-05e847fe1fa6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.963 248514 INFO nova.compute.manager [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Terminating instance
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.964 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "refresh_cache-0187165c-81b1-43b8-81f5-05e847fe1fa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.964 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquired lock "refresh_cache-0187165c-81b1-43b8-81f5-05e847fe1fa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.964 248514 DEBUG nova.network.neutron [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.969 248514 DEBUG nova.compute.manager [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.969 248514 DEBUG oslo_concurrency.lockutils [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.969 248514 DEBUG oslo_concurrency.lockutils [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.969 248514 DEBUG oslo_concurrency.lockutils [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.970 248514 DEBUG nova.compute.manager [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.970 248514 WARNING nova.compute.manager [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received unexpected event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with vm_state deleted and task_state None.
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.970 248514 DEBUG nova.compute.manager [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-deleted-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.971 248514 DEBUG nova.compute.manager [req-30cea5a4-5382-4445-ba66-5e7999094afc req-e5d2229b-9092-48fb-bb3c-35847acc41e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.971 248514 DEBUG oslo_concurrency.lockutils [req-30cea5a4-5382-4445-ba66-5e7999094afc req-e5d2229b-9092-48fb-bb3c-35847acc41e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.971 248514 DEBUG oslo_concurrency.lockutils [req-30cea5a4-5382-4445-ba66-5e7999094afc req-e5d2229b-9092-48fb-bb3c-35847acc41e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.972 248514 DEBUG oslo_concurrency.lockutils [req-30cea5a4-5382-4445-ba66-5e7999094afc req-e5d2229b-9092-48fb-bb3c-35847acc41e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.972 248514 DEBUG nova.compute.manager [req-30cea5a4-5382-4445-ba66-5e7999094afc req-e5d2229b-9092-48fb-bb3c-35847acc41e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Processing event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.973 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.976 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.979 248514 INFO nova.virt.libvirt.driver [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Instance spawned successfully.
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.980 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.987 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:03 compute-0 nova_compute[248510]: 2025-12-13 08:39:03.995 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.077 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.078 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.078 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.079 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.079 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.079 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.082 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.082 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615143.6591413, 658e5f04-399b-4a8a-8680-5ae9717949c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.083 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] VM Paused (Lifecycle Event)
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.162 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.165 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615143.9764216, 658e5f04-399b-4a8a-8680-5ae9717949c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.166 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] VM Resumed (Lifecycle Event)
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.455 248514 INFO nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Took 8.37 seconds to spawn the instance on the hypervisor.
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.455 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:04 compute-0 ceph-mon[76537]: pgmap v2273: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 315 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.5 MiB/s wr, 409 op/s
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.493 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.496 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.676 248514 INFO nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Took 10.46 seconds to build instance.
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.693 248514 DEBUG nova.network.neutron [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:39:04 compute-0 nova_compute[248510]: 2025-12-13 08:39:04.701 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:05 compute-0 nova_compute[248510]: 2025-12-13 08:39:05.451 248514 DEBUG nova.network.neutron [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:05 compute-0 nova_compute[248510]: 2025-12-13 08:39:05.474 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Releasing lock "refresh_cache-0187165c-81b1-43b8-81f5-05e847fe1fa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:39:05 compute-0 nova_compute[248510]: 2025-12-13 08:39:05.474 248514 DEBUG nova.compute.manager [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:39:05 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Dec 13 08:39:05 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005a.scope: Consumed 5.042s CPU time.
Dec 13 08:39:05 compute-0 systemd-machined[210538]: Machine qemu-112-instance-0000005a terminated.
Dec 13 08:39:05 compute-0 nova_compute[248510]: 2025-12-13 08:39:05.696 248514 INFO nova.virt.libvirt.driver [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance destroyed successfully.
Dec 13 08:39:05 compute-0 nova_compute[248510]: 2025-12-13 08:39:05.697 248514 DEBUG nova.objects.instance [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'resources' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:39:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.4 MiB/s wr, 492 op/s
Dec 13 08:39:05 compute-0 nova_compute[248510]: 2025-12-13 08:39:05.759 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:05 compute-0 nova_compute[248510]: 2025-12-13 08:39:05.817 248514 INFO nova.compute.manager [None req-53f112dd-3570-4f28-844f-fd194a4e95f5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Get console output
Dec 13 08:39:05 compute-0 nova_compute[248510]: 2025-12-13 08:39:05.956 248514 INFO nova.virt.libvirt.driver [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deleting instance files /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6_del
Dec 13 08:39:05 compute-0 nova_compute[248510]: 2025-12-13 08:39:05.957 248514 INFO nova.virt.libvirt.driver [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deletion of /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6_del complete
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.043 248514 INFO nova.compute.manager [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Took 0.57 seconds to destroy the instance on the hypervisor.
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.044 248514 DEBUG oslo.service.loopingcall [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.044 248514 DEBUG nova.compute.manager [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.044 248514 DEBUG nova.network.neutron [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:39:06 compute-0 sshd-session[336947]: Invalid user node from 193.32.162.146 port 34202
Dec 13 08:39:06 compute-0 sshd-session[336947]: Connection closed by invalid user node 193.32.162.146 port 34202 [preauth]
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:39:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.634 248514 DEBUG nova.network.neutron [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:39:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Dec 13 08:39:06 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:39:06 compute-0 ceph-mon[76537]: pgmap v2274: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.4 MiB/s wr, 492 op/s
Dec 13 08:39:06 compute-0 ceph-mon[76537]: osdmap e247: 3 total, 3 up, 3 in
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.903 248514 DEBUG nova.compute.manager [req-b7374614-711c-4531-aa25-b0624776ea84 req-aa4b62d2-1c67-4de6-b342-d4e3e5784b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.904 248514 DEBUG oslo_concurrency.lockutils [req-b7374614-711c-4531-aa25-b0624776ea84 req-aa4b62d2-1c67-4de6-b342-d4e3e5784b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.904 248514 DEBUG oslo_concurrency.lockutils [req-b7374614-711c-4531-aa25-b0624776ea84 req-aa4b62d2-1c67-4de6-b342-d4e3e5784b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.905 248514 DEBUG oslo_concurrency.lockutils [req-b7374614-711c-4531-aa25-b0624776ea84 req-aa4b62d2-1c67-4de6-b342-d4e3e5784b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.905 248514 DEBUG nova.compute.manager [req-b7374614-711c-4531-aa25-b0624776ea84 req-aa4b62d2-1c67-4de6-b342-d4e3e5784b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] No waiting events found dispatching network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.905 248514 WARNING nova.compute.manager [req-b7374614-711c-4531-aa25-b0624776ea84 req-aa4b62d2-1c67-4de6-b342-d4e3e5784b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received unexpected event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 for instance with vm_state active and task_state None.
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.918 248514 DEBUG nova.network.neutron [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:06 compute-0 nova_compute[248510]: 2025-12-13 08:39:06.944 248514 INFO nova.compute.manager [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Took 0.90 seconds to deallocate network for instance.
Dec 13 08:39:07 compute-0 nova_compute[248510]: 2025-12-13 08:39:07.005 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:07 compute-0 nova_compute[248510]: 2025-12-13 08:39:07.006 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:07 compute-0 nova_compute[248510]: 2025-12-13 08:39:07.164 248514 DEBUG oslo_concurrency.processutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:39:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2141122604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:07 compute-0 nova_compute[248510]: 2025-12-13 08:39:07.745 248514 DEBUG oslo_concurrency.processutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:07 compute-0 nova_compute[248510]: 2025-12-13 08:39:07.751 248514 DEBUG nova.compute.provider_tree [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:39:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.4 MiB/s wr, 339 op/s
Dec 13 08:39:07 compute-0 nova_compute[248510]: 2025-12-13 08:39:07.779 248514 DEBUG nova.scheduler.client.report [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:39:07 compute-0 nova_compute[248510]: 2025-12-13 08:39:07.815 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2141122604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:07 compute-0 nova_compute[248510]: 2025-12-13 08:39:07.875 248514 INFO nova.scheduler.client.report [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Deleted allocations for instance 0187165c-81b1-43b8-81f5-05e847fe1fa6
Dec 13 08:39:07 compute-0 nova_compute[248510]: 2025-12-13 08:39:07.995 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0187165c-81b1-43b8-81f5-05e847fe1fa6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Dec 13 08:39:08 compute-0 ceph-mon[76537]: pgmap v2276: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.4 MiB/s wr, 339 op/s
Dec 13 08:39:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Dec 13 08:39:08 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Dec 13 08:39:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:39:09
Dec 13 08:39:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:39:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:39:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'vms']
Dec 13 08:39:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:39:09 compute-0 nova_compute[248510]: 2025-12-13 08:39:09.471 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615134.469909, 88cb43c1-f01b-4098-84ea-d372176a0e20 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:09 compute-0 nova_compute[248510]: 2025-12-13 08:39:09.472 248514 INFO nova.compute.manager [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] VM Stopped (Lifecycle Event)
Dec 13 08:39:09 compute-0 nova_compute[248510]: 2025-12-13 08:39:09.494 248514 DEBUG nova.compute.manager [None req-a355d750-857c-4b61-a3bf-4e5cd5976a5a - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 259 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 42 KiB/s wr, 279 op/s
Dec 13 08:39:09 compute-0 nova_compute[248510]: 2025-12-13 08:39:09.851 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "0d52c1df-d252-4012-b05c-40737f1089bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:09 compute-0 nova_compute[248510]: 2025-12-13 08:39:09.852 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0d52c1df-d252-4012-b05c-40737f1089bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:09 compute-0 nova_compute[248510]: 2025-12-13 08:39:09.852 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "0d52c1df-d252-4012-b05c-40737f1089bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:09 compute-0 nova_compute[248510]: 2025-12-13 08:39:09.853 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0d52c1df-d252-4012-b05c-40737f1089bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:09 compute-0 nova_compute[248510]: 2025-12-13 08:39:09.853 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0d52c1df-d252-4012-b05c-40737f1089bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:09 compute-0 nova_compute[248510]: 2025-12-13 08:39:09.854 248514 INFO nova.compute.manager [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Terminating instance
Dec 13 08:39:09 compute-0 nova_compute[248510]: 2025-12-13 08:39:09.855 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "refresh_cache-0d52c1df-d252-4012-b05c-40737f1089bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:39:09 compute-0 nova_compute[248510]: 2025-12-13 08:39:09.855 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquired lock "refresh_cache-0d52c1df-d252-4012-b05c-40737f1089bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:39:09 compute-0 nova_compute[248510]: 2025-12-13 08:39:09.855 248514 DEBUG nova.network.neutron [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:39:09 compute-0 ceph-mon[76537]: osdmap e248: 3 total, 3 up, 3 in
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:39:10 compute-0 nova_compute[248510]: 2025-12-13 08:39:10.139 248514 DEBUG nova.network.neutron [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:39:10 compute-0 ovn_controller[148476]: 2025-12-13T08:39:10Z|00901|binding|INFO|Releasing lport 6c33e57f-b143-4ed7-a271-dc33d6e81057 from this chassis (sb_readonly=0)
Dec 13 08:39:10 compute-0 nova_compute[248510]: 2025-12-13 08:39:10.251 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:10 compute-0 nova_compute[248510]: 2025-12-13 08:39:10.516 248514 DEBUG nova.network.neutron [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:10 compute-0 nova_compute[248510]: 2025-12-13 08:39:10.546 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Releasing lock "refresh_cache-0d52c1df-d252-4012-b05c-40737f1089bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:39:10 compute-0 nova_compute[248510]: 2025-12-13 08:39:10.547 248514 DEBUG nova.compute.manager [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:39:10 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000059.scope: Deactivated successfully.
Dec 13 08:39:10 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000059.scope: Consumed 13.035s CPU time.
Dec 13 08:39:10 compute-0 systemd-machined[210538]: Machine qemu-109-instance-00000059 terminated.
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:39:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:39:10 compute-0 nova_compute[248510]: 2025-12-13 08:39:10.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:10 compute-0 nova_compute[248510]: 2025-12-13 08:39:10.767 248514 INFO nova.virt.libvirt.driver [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Instance destroyed successfully.
Dec 13 08:39:10 compute-0 nova_compute[248510]: 2025-12-13 08:39:10.768 248514 DEBUG nova.objects.instance [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'resources' on Instance uuid 0d52c1df-d252-4012-b05c-40737f1089bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:39:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Dec 13 08:39:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Dec 13 08:39:10 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Dec 13 08:39:10 compute-0 ceph-mon[76537]: pgmap v2278: 321 pgs: 321 active+clean; 259 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 42 KiB/s wr, 279 op/s
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.057 248514 INFO nova.virt.libvirt.driver [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Deleting instance files /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb_del
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.058 248514 INFO nova.virt.libvirt.driver [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Deletion of /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb_del complete
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.121 248514 INFO nova.compute.manager [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Took 0.57 seconds to destroy the instance on the hypervisor.
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.122 248514 DEBUG oslo.service.loopingcall [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.122 248514 DEBUG nova.compute.manager [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.122 248514 DEBUG nova.network.neutron [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.300 248514 DEBUG nova.network.neutron [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.325 248514 DEBUG nova.network.neutron [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.356 248514 INFO nova.compute.manager [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Took 0.23 seconds to deallocate network for instance.
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.423 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.424 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.534 248514 DEBUG oslo_concurrency.processutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:11 compute-0 nova_compute[248510]: 2025-12-13 08:39:11.572 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:39:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.7 KiB/s wr, 180 op/s
Dec 13 08:39:11 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Dec 13 08:39:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:11.993235) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:39:11 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Dec 13 08:39:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615151993278, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1485, "num_deletes": 257, "total_data_size": 2146999, "memory_usage": 2177232, "flush_reason": "Manual Compaction"}
Dec 13 08:39:11 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Dec 13 08:39:11 compute-0 ceph-mon[76537]: osdmap e249: 3 total, 3 up, 3 in
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615152009289, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2084922, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43587, "largest_seqno": 45071, "table_properties": {"data_size": 2077834, "index_size": 4099, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15447, "raw_average_key_size": 20, "raw_value_size": 2063406, "raw_average_value_size": 2758, "num_data_blocks": 181, "num_entries": 748, "num_filter_entries": 748, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615046, "oldest_key_time": 1765615046, "file_creation_time": 1765615151, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 16355 microseconds, and 5543 cpu microseconds.
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.009587) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2084922 bytes OK
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.009687) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.011688) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.011702) EVENT_LOG_v1 {"time_micros": 1765615152011698, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.011718) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2140307, prev total WAL file size 2140307, number of live WAL files 2.
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.012997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2036KB)], [101(9024KB)]
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615152013060, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11325617, "oldest_snapshot_seqno": -1}
Dec 13 08:39:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:39:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3344372783' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6627 keys, 9594797 bytes, temperature: kUnknown
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615152085172, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9594797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9550445, "index_size": 26704, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 172670, "raw_average_key_size": 26, "raw_value_size": 9431515, "raw_average_value_size": 1423, "num_data_blocks": 1048, "num_entries": 6627, "num_filter_entries": 6627, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615152, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.085464) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9594797 bytes
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.087022) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.8 rd, 132.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.8 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(10.0) write-amplify(4.6) OK, records in: 7156, records dropped: 529 output_compression: NoCompression
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.087041) EVENT_LOG_v1 {"time_micros": 1765615152087032, "job": 60, "event": "compaction_finished", "compaction_time_micros": 72211, "compaction_time_cpu_micros": 24607, "output_level": 6, "num_output_files": 1, "total_output_size": 9594797, "num_input_records": 7156, "num_output_records": 6627, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615152087512, "job": 60, "event": "table_file_deletion", "file_number": 103}
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615152089009, "job": 60, "event": "table_file_deletion", "file_number": 101}
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.012892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.089059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.089080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.089082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.089084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:39:12 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.089085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:39:12 compute-0 nova_compute[248510]: 2025-12-13 08:39:12.104 248514 DEBUG oslo_concurrency.processutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:12 compute-0 nova_compute[248510]: 2025-12-13 08:39:12.110 248514 DEBUG nova.compute.provider_tree [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:39:12 compute-0 nova_compute[248510]: 2025-12-13 08:39:12.135 248514 DEBUG nova.scheduler.client.report [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:39:12 compute-0 nova_compute[248510]: 2025-12-13 08:39:12.170 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:12 compute-0 nova_compute[248510]: 2025-12-13 08:39:12.205 248514 INFO nova.scheduler.client.report [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Deleted allocations for instance 0d52c1df-d252-4012-b05c-40737f1089bb
Dec 13 08:39:12 compute-0 nova_compute[248510]: 2025-12-13 08:39:12.324 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0d52c1df-d252-4012-b05c-40737f1089bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:12 compute-0 ceph-mon[76537]: pgmap v2280: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.7 KiB/s wr, 180 op/s
Dec 13 08:39:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3344372783' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 234 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.8 KiB/s wr, 179 op/s
Dec 13 08:39:13 compute-0 nova_compute[248510]: 2025-12-13 08:39:13.939 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615138.9386199, 6fb6c605-344c-4ed9-806d-96964b0474f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:13 compute-0 nova_compute[248510]: 2025-12-13 08:39:13.940 248514 INFO nova.compute.manager [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] VM Stopped (Lifecycle Event)
Dec 13 08:39:13 compute-0 nova_compute[248510]: 2025-12-13 08:39:13.969 248514 DEBUG nova.compute.manager [None req-8c8d1fa7-6f98-40e2-8ffe-12c4933c9474 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:15 compute-0 ceph-mon[76537]: pgmap v2281: 321 pgs: 321 active+clean; 234 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.8 KiB/s wr, 179 op/s
Dec 13 08:39:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:39:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2352041844' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:39:15 compute-0 nova_compute[248510]: 2025-12-13 08:39:15.053 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:15 compute-0 nova_compute[248510]: 2025-12-13 08:39:15.053 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:39:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2352041844' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:39:15 compute-0 nova_compute[248510]: 2025-12-13 08:39:15.082 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:39:15 compute-0 nova_compute[248510]: 2025-12-13 08:39:15.191 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:15 compute-0 nova_compute[248510]: 2025-12-13 08:39:15.191 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:15 compute-0 nova_compute[248510]: 2025-12-13 08:39:15.201 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:39:15 compute-0 nova_compute[248510]: 2025-12-13 08:39:15.202 248514 INFO nova.compute.claims [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:39:15 compute-0 nova_compute[248510]: 2025-12-13 08:39:15.411 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 169 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 311 KiB/s wr, 218 op/s
Dec 13 08:39:15 compute-0 nova_compute[248510]: 2025-12-13 08:39:15.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:39:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2448708107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.068 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.075 248514 DEBUG nova.compute.provider_tree [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:39:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2352041844' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:39:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2352041844' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.108 248514 DEBUG nova.scheduler.client.report [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.137 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.138 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.229 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.229 248514 DEBUG nova.network.neutron [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.317 248514 INFO nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.341 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:39:16 compute-0 ovn_controller[148476]: 2025-12-13T08:39:16Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:d2:10 10.100.0.4
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.461 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:39:16 compute-0 ovn_controller[148476]: 2025-12-13T08:39:16Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:d2:10 10.100.0.4
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.462 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.462 248514 INFO nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Creating image(s)
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.482 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.506 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.528 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.532 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.625 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.626 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.627 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.627 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.652 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.656 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:39:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Dec 13 08:39:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Dec 13 08:39:16 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Dec 13 08:39:16 compute-0 nova_compute[248510]: 2025-12-13 08:39:16.695 248514 DEBUG nova.policy [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7507939da64e4320a1c6f389d0fc9045', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:39:17 compute-0 nova_compute[248510]: 2025-12-13 08:39:17.009 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:17 compute-0 nova_compute[248510]: 2025-12-13 08:39:17.066 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] resizing rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:39:17 compute-0 ceph-mon[76537]: pgmap v2282: 321 pgs: 321 active+clean; 169 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 311 KiB/s wr, 218 op/s
Dec 13 08:39:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2448708107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:17 compute-0 ceph-mon[76537]: osdmap e250: 3 total, 3 up, 3 in
Dec 13 08:39:17 compute-0 nova_compute[248510]: 2025-12-13 08:39:17.137 248514 DEBUG nova.objects.instance [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:39:17 compute-0 nova_compute[248510]: 2025-12-13 08:39:17.161 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:39:17 compute-0 nova_compute[248510]: 2025-12-13 08:39:17.162 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Ensure instance console log exists: /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:39:17 compute-0 nova_compute[248510]: 2025-12-13 08:39:17.162 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:17 compute-0 nova_compute[248510]: 2025-12-13 08:39:17.162 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:17 compute-0 nova_compute[248510]: 2025-12-13 08:39:17.163 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 169 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 245 KiB/s rd, 309 KiB/s wr, 123 op/s
Dec 13 08:39:18 compute-0 nova_compute[248510]: 2025-12-13 08:39:18.464 248514 DEBUG nova.network.neutron [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Successfully created port: ac5a6aec-ff77-4185-a9cb-f95e9ee9461a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:39:19 compute-0 ceph-mon[76537]: pgmap v2284: 321 pgs: 321 active+clean; 169 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 245 KiB/s rd, 309 KiB/s wr, 123 op/s
Dec 13 08:39:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 219 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 4.0 MiB/s wr, 140 op/s
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.772 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.773 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.773 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.774 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.774 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.774 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.933 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.970 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.970 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Image id 0ed20320-9c25-4108-ad76-64b3cb3500ce yields fingerprint 7e19890462cb757da298333dcef0801755c35301 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.970 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] image 0ed20320-9c25-4108-ad76-64b3cb3500ce at (/var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301): checking
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.971 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] image 0ed20320-9c25-4108-ad76-64b3cb3500ce at (/var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.973 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.973 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] 9b486227-b98c-4393-9a3c-aae3e3c419a8 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.973 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] 658e5f04-399b-4a8a-8680-5ae9717949c0 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.974 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.974 248514 WARNING nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.974 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Active base files: /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.974 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Removable base files: /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.975 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.975 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.975 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.975 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec 13 08:39:19 compute-0 nova_compute[248510]: 2025-12-13 08:39:19.976 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Dec 13 08:39:20 compute-0 nova_compute[248510]: 2025-12-13 08:39:20.694 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615145.692605, 0187165c-81b1-43b8-81f5-05e847fe1fa6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:20 compute-0 nova_compute[248510]: 2025-12-13 08:39:20.694 248514 INFO nova.compute.manager [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] VM Stopped (Lifecycle Event)
Dec 13 08:39:20 compute-0 nova_compute[248510]: 2025-12-13 08:39:20.719 248514 DEBUG nova.compute.manager [None req-b98b473b-1560-416c-bc4f-862fbc8a59a9 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:20 compute-0 nova_compute[248510]: 2025-12-13 08:39:20.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016849448052577893 of space, bias 1.0, pg target 0.5054834415773368 quantized to 32 (current 32)
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675473055647338 of space, bias 1.0, pg target 0.20026419166942014 quantized to 32 (current 32)
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.955729329617855e-07 of space, bias 4.0, pg target 0.0007146875195541426 quantized to 16 (current 32)
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:39:21 compute-0 ceph-mon[76537]: pgmap v2285: 321 pgs: 321 active+clean; 219 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 4.0 MiB/s wr, 140 op/s
Dec 13 08:39:21 compute-0 nova_compute[248510]: 2025-12-13 08:39:21.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:21 compute-0 nova_compute[248510]: 2025-12-13 08:39:21.588 248514 DEBUG nova.network.neutron [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Successfully updated port: ac5a6aec-ff77-4185-a9cb-f95e9ee9461a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:39:21 compute-0 nova_compute[248510]: 2025-12-13 08:39:21.616 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:39:21 compute-0 nova_compute[248510]: 2025-12-13 08:39:21.616 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquired lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:39:21 compute-0 nova_compute[248510]: 2025-12-13 08:39:21.616 248514 DEBUG nova.network.neutron [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:39:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:39:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 526 KiB/s rd, 4.7 MiB/s wr, 167 op/s
Dec 13 08:39:22 compute-0 nova_compute[248510]: 2025-12-13 08:39:22.083 248514 DEBUG nova.compute.manager [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received event network-changed-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:22 compute-0 nova_compute[248510]: 2025-12-13 08:39:22.084 248514 DEBUG nova.compute.manager [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Refreshing instance network info cache due to event network-changed-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:39:22 compute-0 nova_compute[248510]: 2025-12-13 08:39:22.084 248514 DEBUG oslo_concurrency.lockutils [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:39:22 compute-0 ceph-mon[76537]: pgmap v2286: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 526 KiB/s rd, 4.7 MiB/s wr, 167 op/s
Dec 13 08:39:22 compute-0 nova_compute[248510]: 2025-12-13 08:39:22.218 248514 DEBUG nova.network.neutron [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:39:22 compute-0 nova_compute[248510]: 2025-12-13 08:39:22.289 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 4.7 MiB/s wr, 149 op/s
Dec 13 08:39:24 compute-0 ceph-mon[76537]: pgmap v2287: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 4.7 MiB/s wr, 149 op/s
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.031 248514 DEBUG nova.network.neutron [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Updating instance_info_cache with network_info: [{"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.067 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Releasing lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.068 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance network_info: |[{"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.069 248514 DEBUG oslo_concurrency.lockutils [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.070 248514 DEBUG nova.network.neutron [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Refreshing network info cache for port ac5a6aec-ff77-4185-a9cb-f95e9ee9461a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.073 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Start _get_guest_xml network_info=[{"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.077 248514 WARNING nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.084 248514 DEBUG nova.virt.libvirt.host [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.085 248514 DEBUG nova.virt.libvirt.host [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.093 248514 DEBUG nova.virt.libvirt.host [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.095 248514 DEBUG nova.virt.libvirt.host [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.096 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.096 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.097 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.097 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.098 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.098 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.098 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.099 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.099 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.099 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.100 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.100 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.104 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:39:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2031110307' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.746 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 456 KiB/s rd, 4.5 MiB/s wr, 101 op/s
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.766 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.769 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.800 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615150.765342, 0d52c1df-d252-4012-b05c-40737f1089bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.801 248514 INFO nova.compute.manager [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] VM Stopped (Lifecycle Event)
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2031110307' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:25 compute-0 nova_compute[248510]: 2025-12-13 08:39:25.830 248514 DEBUG nova.compute.manager [None req-77d26965-f904-4a80-aef8-45990654c32e - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:39:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886587860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.336 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.338 248514 DEBUG nova.virt.libvirt.vif [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:39:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1902812623',display_name='tempest-ServerActionsTestOtherB-server-1902812623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1902812623',id=92,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-8adwm7jj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:39:16Z,user_data=None,user_id='7507939da64e4320a1c6f389d0fc9045',uuid=2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.339 248514 DEBUG nova.network.os_vif_util [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.340 248514 DEBUG nova.network.os_vif_util [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.342 248514 DEBUG nova.objects.instance [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.467 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:39:26 compute-0 nova_compute[248510]:   <uuid>2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be</uuid>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   <name>instance-0000005c</name>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerActionsTestOtherB-server-1902812623</nova:name>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:39:25</nova:creationTime>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:39:26 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:39:26 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:39:26 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:39:26 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:39:26 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:39:26 compute-0 nova_compute[248510]:         <nova:user uuid="7507939da64e4320a1c6f389d0fc9045">tempest-ServerActionsTestOtherB-1515133862-project-member</nova:user>
Dec 13 08:39:26 compute-0 nova_compute[248510]:         <nova:project uuid="f0aee359fbaa484eb7ead3f81eef51e7">tempest-ServerActionsTestOtherB-1515133862</nova:project>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:39:26 compute-0 nova_compute[248510]:         <nova:port uuid="ac5a6aec-ff77-4185-a9cb-f95e9ee9461a">
Dec 13 08:39:26 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <system>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <entry name="serial">2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be</entry>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <entry name="uuid">2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be</entry>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     </system>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   <os>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   </os>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   <features>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   </features>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk">
Dec 13 08:39:26 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       </source>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:39:26 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk.config">
Dec 13 08:39:26 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       </source>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:39:26 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:1f:ee:c9"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <target dev="tapac5a6aec-ff"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/console.log" append="off"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <video>
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     </video>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:39:26 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:39:26 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:39:26 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:39:26 compute-0 nova_compute[248510]: </domain>
Dec 13 08:39:26 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.469 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Preparing to wait for external event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.470 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.470 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.470 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.471 248514 DEBUG nova.virt.libvirt.vif [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:39:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1902812623',display_name='tempest-ServerActionsTestOtherB-server-1902812623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1902812623',id=92,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-8adwm7jj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:39:16Z,user_data=None,user_id='7507939da64e4320a1c6f389d0fc9045',uuid=2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.472 248514 DEBUG nova.network.os_vif_util [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.472 248514 DEBUG nova.network.os_vif_util [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.473 248514 DEBUG os_vif [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.474 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.474 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.479 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac5a6aec-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.480 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac5a6aec-ff, col_values=(('external_ids', {'iface-id': 'ac5a6aec-ff77-4185-a9cb-f95e9ee9461a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:ee:c9', 'vm-uuid': '2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.482 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:26 compute-0 NetworkManager[50376]: <info>  [1765615166.4835] manager: (tapac5a6aec-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.484 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.491 248514 INFO os_vif [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff')
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.577 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.578 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.578 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No VIF found with MAC fa:16:3e:1f:ee:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.579 248514 INFO nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Using config drive
Dec 13 08:39:26 compute-0 nova_compute[248510]: 2025-12-13 08:39:26.602 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:39:26 compute-0 ceph-mon[76537]: pgmap v2288: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 456 KiB/s rd, 4.5 MiB/s wr, 101 op/s
Dec 13 08:39:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3886587860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:27 compute-0 nova_compute[248510]: 2025-12-13 08:39:27.406 248514 INFO nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Creating config drive at /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/disk.config
Dec 13 08:39:27 compute-0 nova_compute[248510]: 2025-12-13 08:39:27.417 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4s1bd4bs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:27 compute-0 nova_compute[248510]: 2025-12-13 08:39:27.586 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4s1bd4bs" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:27 compute-0 nova_compute[248510]: 2025-12-13 08:39:27.617 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:27 compute-0 nova_compute[248510]: 2025-12-13 08:39:27.624 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/disk.config 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 4.0 MiB/s wr, 91 op/s
Dec 13 08:39:27 compute-0 nova_compute[248510]: 2025-12-13 08:39:27.765 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/disk.config 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:27 compute-0 nova_compute[248510]: 2025-12-13 08:39:27.766 248514 INFO nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Deleting local config drive /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/disk.config because it was imported into RBD.
Dec 13 08:39:27 compute-0 NetworkManager[50376]: <info>  [1765615167.8235] manager: (tapac5a6aec-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/383)
Dec 13 08:39:27 compute-0 kernel: tapac5a6aec-ff: entered promiscuous mode
Dec 13 08:39:27 compute-0 ovn_controller[148476]: 2025-12-13T08:39:27Z|00902|binding|INFO|Claiming lport ac5a6aec-ff77-4185-a9cb-f95e9ee9461a for this chassis.
Dec 13 08:39:27 compute-0 nova_compute[248510]: 2025-12-13 08:39:27.825 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:27 compute-0 ovn_controller[148476]: 2025-12-13T08:39:27Z|00903|binding|INFO|ac5a6aec-ff77-4185-a9cb-f95e9ee9461a: Claiming fa:16:3e:1f:ee:c9 10.100.0.13
Dec 13 08:39:27 compute-0 nova_compute[248510]: 2025-12-13 08:39:27.844 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:27 compute-0 ovn_controller[148476]: 2025-12-13T08:39:27Z|00904|binding|INFO|Setting lport ac5a6aec-ff77-4185-a9cb-f95e9ee9461a ovn-installed in OVS
Dec 13 08:39:27 compute-0 nova_compute[248510]: 2025-12-13 08:39:27.847 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:27 compute-0 systemd-udevd[337343]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:39:27 compute-0 systemd-machined[210538]: New machine qemu-114-instance-0000005c.
Dec 13 08:39:27 compute-0 NetworkManager[50376]: <info>  [1765615167.8675] device (tapac5a6aec-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:39:27 compute-0 NetworkManager[50376]: <info>  [1765615167.8680] device (tapac5a6aec-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:39:27 compute-0 systemd[1]: Started Virtual Machine qemu-114-instance-0000005c.
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.051 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:ee:c9 10.100.0.13'], port_security=['fa:16:3e:1f:ee:c9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77183472-893b-4c33-ab3e-e88f01770ed8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:39:28 compute-0 ovn_controller[148476]: 2025-12-13T08:39:28Z|00905|binding|INFO|Setting lport ac5a6aec-ff77-4185-a9cb-f95e9ee9461a up in Southbound
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.053 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ac5a6aec-ff77-4185-a9cb-f95e9ee9461a in datapath 369f7528-6571-47b6-a030-5281647e1eac bound to our chassis
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.055 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.071 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64181ff1-0f91-463d-8f98-2a17fe7bc3b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.117 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c23e4d4b-c745-4f5e-8140-b8dc08d66cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.120 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fc439d7b-dc04-4bd3-920f-5b9029e2645d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.148 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[50747038-9244-412f-b9e5-836b488c7ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.166 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2dddcd82-e552-4bc8-8859-3482a4440908]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 34886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337358, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.180 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ad939c4f-943d-4af3-b869-d130b6f1e4b8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763117, 'tstamp': 763117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337359, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763121, 'tstamp': 763121}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337359, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.182 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:28 compute-0 nova_compute[248510]: 2025-12-13 08:39:28.184 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.185 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.186 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.186 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.186 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:39:28 compute-0 nova_compute[248510]: 2025-12-13 08:39:28.364 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615168.3643646, 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:28 compute-0 nova_compute[248510]: 2025-12-13 08:39:28.365 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] VM Started (Lifecycle Event)
Dec 13 08:39:28 compute-0 nova_compute[248510]: 2025-12-13 08:39:28.405 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:28 compute-0 nova_compute[248510]: 2025-12-13 08:39:28.409 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615168.3645506, 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:28 compute-0 nova_compute[248510]: 2025-12-13 08:39:28.409 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] VM Paused (Lifecycle Event)
Dec 13 08:39:28 compute-0 nova_compute[248510]: 2025-12-13 08:39:28.450 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:28 compute-0 nova_compute[248510]: 2025-12-13 08:39:28.453 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:39:28 compute-0 nova_compute[248510]: 2025-12-13 08:39:28.484 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:39:28 compute-0 nova_compute[248510]: 2025-12-13 08:39:28.625 248514 DEBUG nova.network.neutron [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Updated VIF entry in instance network info cache for port ac5a6aec-ff77-4185-a9cb-f95e9ee9461a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:39:28 compute-0 nova_compute[248510]: 2025-12-13 08:39:28.625 248514 DEBUG nova.network.neutron [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Updating instance_info_cache with network_info: [{"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:28 compute-0 nova_compute[248510]: 2025-12-13 08:39:28.658 248514 DEBUG oslo_concurrency.lockutils [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:39:28 compute-0 ceph-mon[76537]: pgmap v2289: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 4.0 MiB/s wr, 91 op/s
Dec 13 08:39:29 compute-0 nova_compute[248510]: 2025-12-13 08:39:29.504 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 3.7 MiB/s wr, 90 op/s
Dec 13 08:39:30 compute-0 nova_compute[248510]: 2025-12-13 08:39:30.815 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:30 compute-0 ceph-mon[76537]: pgmap v2290: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 3.7 MiB/s wr, 90 op/s
Dec 13 08:39:31 compute-0 nova_compute[248510]: 2025-12-13 08:39:31.483 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:39:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Dec 13 08:39:32 compute-0 ceph-mon[76537]: pgmap v2291: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Dec 13 08:39:32 compute-0 podman[337404]: 2025-12-13 08:39:32.974309738 +0000 UTC m=+0.058981866 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 13 08:39:32 compute-0 podman[337403]: 2025-12-13 08:39:32.997432062 +0000 UTC m=+0.076430679 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 08:39:33 compute-0 podman[337402]: 2025-12-13 08:39:33.011305636 +0000 UTC m=+0.101075661 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 08:39:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 25 KiB/s wr, 10 op/s
Dec 13 08:39:34 compute-0 ceph-mon[76537]: pgmap v2292: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 25 KiB/s wr, 10 op/s
Dec 13 08:39:34 compute-0 nova_compute[248510]: 2025-12-13 08:39:34.970 248514 DEBUG nova.compute.manager [req-0ae57db2-272d-4414-9f1d-5e9823a3ed0d req-f2a81742-6ccb-4019-8b00-9a8a6b746d74 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:34 compute-0 nova_compute[248510]: 2025-12-13 08:39:34.970 248514 DEBUG oslo_concurrency.lockutils [req-0ae57db2-272d-4414-9f1d-5e9823a3ed0d req-f2a81742-6ccb-4019-8b00-9a8a6b746d74 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:34 compute-0 nova_compute[248510]: 2025-12-13 08:39:34.971 248514 DEBUG oslo_concurrency.lockutils [req-0ae57db2-272d-4414-9f1d-5e9823a3ed0d req-f2a81742-6ccb-4019-8b00-9a8a6b746d74 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:34 compute-0 nova_compute[248510]: 2025-12-13 08:39:34.971 248514 DEBUG oslo_concurrency.lockutils [req-0ae57db2-272d-4414-9f1d-5e9823a3ed0d req-f2a81742-6ccb-4019-8b00-9a8a6b746d74 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:34 compute-0 nova_compute[248510]: 2025-12-13 08:39:34.971 248514 DEBUG nova.compute.manager [req-0ae57db2-272d-4414-9f1d-5e9823a3ed0d req-f2a81742-6ccb-4019-8b00-9a8a6b746d74 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Processing event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:39:34 compute-0 nova_compute[248510]: 2025-12-13 08:39:34.972 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:39:34 compute-0 nova_compute[248510]: 2025-12-13 08:39:34.974 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615174.974791, 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:34 compute-0 nova_compute[248510]: 2025-12-13 08:39:34.975 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] VM Resumed (Lifecycle Event)
Dec 13 08:39:34 compute-0 nova_compute[248510]: 2025-12-13 08:39:34.976 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:39:34 compute-0 nova_compute[248510]: 2025-12-13 08:39:34.979 248514 INFO nova.virt.libvirt.driver [-] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance spawned successfully.
Dec 13 08:39:34 compute-0 nova_compute[248510]: 2025-12-13 08:39:34.979 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.077 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.081 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.081 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.082 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.082 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.083 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.083 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.087 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.138 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.195 248514 INFO nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Took 18.73 seconds to spawn the instance on the hypervisor.
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.195 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.284 248514 INFO nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Took 20.12 seconds to build instance.
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.427 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.445 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.445 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.472 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.587 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.588 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.597 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.597 248514 INFO nova.compute.claims [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:39:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 9 op/s
Dec 13 08:39:35 compute-0 nova_compute[248510]: 2025-12-13 08:39:35.815 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:36 compute-0 nova_compute[248510]: 2025-12-13 08:39:36.136 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:36 compute-0 nova_compute[248510]: 2025-12-13 08:39:36.485 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:39:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3273581284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:39:36 compute-0 nova_compute[248510]: 2025-12-13 08:39:36.683 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:36 compute-0 nova_compute[248510]: 2025-12-13 08:39:36.690 248514 DEBUG nova.compute.provider_tree [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:39:36 compute-0 nova_compute[248510]: 2025-12-13 08:39:36.762 248514 DEBUG nova.scheduler.client.report [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:39:36 compute-0 nova_compute[248510]: 2025-12-13 08:39:36.794 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:36 compute-0 nova_compute[248510]: 2025-12-13 08:39:36.795 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:39:36 compute-0 ceph-mon[76537]: pgmap v2293: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 9 op/s
Dec 13 08:39:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3273581284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:36 compute-0 nova_compute[248510]: 2025-12-13 08:39:36.936 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:39:36 compute-0 nova_compute[248510]: 2025-12-13 08:39:36.936 248514 DEBUG nova.network.neutron [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:39:36 compute-0 nova_compute[248510]: 2025-12-13 08:39:36.991 248514 INFO nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.024 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.128 248514 DEBUG nova.compute.manager [req-fb0a8cf8-08d9-41de-9772-a817c9892a8f req-d2febb84-d6c1-4552-a5d6-3043107dcbe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.130 248514 DEBUG oslo_concurrency.lockutils [req-fb0a8cf8-08d9-41de-9772-a817c9892a8f req-d2febb84-d6c1-4552-a5d6-3043107dcbe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.131 248514 DEBUG oslo_concurrency.lockutils [req-fb0a8cf8-08d9-41de-9772-a817c9892a8f req-d2febb84-d6c1-4552-a5d6-3043107dcbe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.131 248514 DEBUG oslo_concurrency.lockutils [req-fb0a8cf8-08d9-41de-9772-a817c9892a8f req-d2febb84-d6c1-4552-a5d6-3043107dcbe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.132 248514 DEBUG nova.compute.manager [req-fb0a8cf8-08d9-41de-9772-a817c9892a8f req-d2febb84-d6c1-4552-a5d6-3043107dcbe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] No waiting events found dispatching network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.133 248514 WARNING nova.compute.manager [req-fb0a8cf8-08d9-41de-9772-a817c9892a8f req-d2febb84-d6c1-4552-a5d6-3043107dcbe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received unexpected event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a for instance with vm_state active and task_state pausing.
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.189 248514 INFO nova.compute.manager [None req-a437fb84-64ec-418f-aa6e-6d859119238c 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Pausing
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.191 248514 DEBUG nova.objects.instance [None req-a437fb84-64ec-418f-aa6e-6d859119238c 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'flavor' on Instance uuid 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.237 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.239 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.239 248514 INFO nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Creating image(s)
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.260 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.282 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.308 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.312 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.359 248514 DEBUG nova.policy [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ab92f76b5ae549d8bae02bb7911221d6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '791fc8cca65d4bfd9d9f2f19018d60fb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.367 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615177.3676865, 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.368 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] VM Paused (Lifecycle Event)
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.370 248514 DEBUG nova.compute.manager [None req-a437fb84-64ec-418f-aa6e-6d859119238c 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.403 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.407 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.411 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.412 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.413 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.413 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.438 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.443 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 70a00398-fa02-482c-a33e-f190895c8d28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.793 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 70a00398-fa02-482c-a33e-f190895c8d28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.858 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] resizing rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:39:37 compute-0 nova_compute[248510]: 2025-12-13 08:39:37.941 248514 DEBUG nova.objects.instance [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lazy-loading 'migration_context' on Instance uuid 70a00398-fa02-482c-a33e-f190895c8d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:39:38 compute-0 nova_compute[248510]: 2025-12-13 08:39:38.021 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:39:38 compute-0 nova_compute[248510]: 2025-12-13 08:39:38.022 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Ensure instance console log exists: /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:39:38 compute-0 nova_compute[248510]: 2025-12-13 08:39:38.023 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:38 compute-0 nova_compute[248510]: 2025-12-13 08:39:38.023 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:38 compute-0 nova_compute[248510]: 2025-12-13 08:39:38.024 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:38 compute-0 ceph-mon[76537]: pgmap v2294: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Dec 13 08:39:39 compute-0 nova_compute[248510]: 2025-12-13 08:39:39.107 248514 DEBUG nova.network.neutron [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Successfully created port: 004ae3d0-5827-4393-953d-aa704915956b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:39:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 272 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 53 op/s
Dec 13 08:39:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:39:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:39:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:39:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:39:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:39:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:39:40 compute-0 nova_compute[248510]: 2025-12-13 08:39:40.844 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:41 compute-0 ceph-mon[76537]: pgmap v2295: 321 pgs: 321 active+clean; 272 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 53 op/s
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.488 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.526 248514 DEBUG nova.network.neutron [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Successfully updated port: 004ae3d0-5827-4393-953d-aa704915956b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.554 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "refresh_cache-70a00398-fa02-482c-a33e-f190895c8d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.554 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquired lock "refresh_cache-70a00398-fa02-482c-a33e-f190895c8d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.555 248514 DEBUG nova.network.neutron [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:39:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:39:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 293 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.813 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.813 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.814 248514 INFO nova.compute.manager [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Shelving
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.854 248514 DEBUG nova.compute.manager [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-changed-004ae3d0-5827-4393-953d-aa704915956b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.854 248514 DEBUG nova.compute.manager [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Refreshing instance network info cache due to event network-changed-004ae3d0-5827-4393-953d-aa704915956b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.855 248514 DEBUG oslo_concurrency.lockutils [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-70a00398-fa02-482c-a33e-f190895c8d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:39:41 compute-0 kernel: tapac5a6aec-ff (unregistering): left promiscuous mode
Dec 13 08:39:41 compute-0 NetworkManager[50376]: <info>  [1765615181.8668] device (tapac5a6aec-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:39:41 compute-0 ovn_controller[148476]: 2025-12-13T08:39:41Z|00906|binding|INFO|Releasing lport ac5a6aec-ff77-4185-a9cb-f95e9ee9461a from this chassis (sb_readonly=0)
Dec 13 08:39:41 compute-0 ovn_controller[148476]: 2025-12-13T08:39:41Z|00907|binding|INFO|Setting lport ac5a6aec-ff77-4185-a9cb-f95e9ee9461a down in Southbound
Dec 13 08:39:41 compute-0 ovn_controller[148476]: 2025-12-13T08:39:41Z|00908|binding|INFO|Removing iface tapac5a6aec-ff ovn-installed in OVS
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.941 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:41.947 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:ee:c9 10.100.0.13'], port_security=['fa:16:3e:1f:ee:c9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77183472-893b-4c33-ab3e-e88f01770ed8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:39:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:41.948 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ac5a6aec-ff77-4185-a9cb-f95e9ee9461a in datapath 369f7528-6571-47b6-a030-5281647e1eac unbound from our chassis
Dec 13 08:39:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:41.949 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac
Dec 13 08:39:41 compute-0 nova_compute[248510]: 2025-12-13 08:39:41.958 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:41.967 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b9947fee-f03b-4f85-a711-492b4bb46b50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:41 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Dec 13 08:39:41 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005c.scope: Consumed 2.927s CPU time.
Dec 13 08:39:41 compute-0 systemd-machined[210538]: Machine qemu-114-instance-0000005c terminated.
Dec 13 08:39:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:41.996 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[862e1021-4b15-4051-a830-16640b8a6016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:41.999 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[50c28bc3-fc9a-4981-a96d-e623bb58dc4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:42 compute-0 nova_compute[248510]: 2025-12-13 08:39:42.009 248514 DEBUG nova.network.neutron [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:39:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.025 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[31b9fa0b-ad35-403e-843b-0f3f5314fa76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.044 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[51e53e98-948e-4d6e-89ed-84310ccbc7ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 34886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337661, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:42 compute-0 nova_compute[248510]: 2025-12-13 08:39:42.055 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:42 compute-0 nova_compute[248510]: 2025-12-13 08:39:42.059 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.062 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94203b55-d89d-4272-b378-c4b2d5dee372]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763117, 'tstamp': 763117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337664, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763121, 'tstamp': 763121}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337664, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.063 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:42 compute-0 nova_compute[248510]: 2025-12-13 08:39:42.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:42 compute-0 nova_compute[248510]: 2025-12-13 08:39:42.069 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.069 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.069 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:39:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.070 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.070 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:39:42 compute-0 nova_compute[248510]: 2025-12-13 08:39:42.072 248514 INFO nova.virt.libvirt.driver [-] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance destroyed successfully.
Dec 13 08:39:42 compute-0 nova_compute[248510]: 2025-12-13 08:39:42.072 248514 DEBUG nova.objects.instance [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:39:42 compute-0 ceph-mon[76537]: pgmap v2296: 321 pgs: 321 active+clean; 293 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Dec 13 08:39:43 compute-0 nova_compute[248510]: 2025-12-13 08:39:43.147 248514 INFO nova.virt.libvirt.driver [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Beginning cold snapshot process
Dec 13 08:39:43 compute-0 nova_compute[248510]: 2025-12-13 08:39:43.346 248514 DEBUG nova.virt.libvirt.imagebackend [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:39:43 compute-0 nova_compute[248510]: 2025-12-13 08:39:43.738 248514 DEBUG nova.storage.rbd_utils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(5f10e4dbf0b14f0fa5e1d5c60a24ded4) on rbd image(2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:39:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 13 08:39:43 compute-0 nova_compute[248510]: 2025-12-13 08:39:43.861 248514 DEBUG nova.network.neutron [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Updating instance_info_cache with network_info: [{"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.090 248514 DEBUG nova.compute.manager [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received event network-vif-unplugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.091 248514 DEBUG oslo_concurrency.lockutils [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.091 248514 DEBUG oslo_concurrency.lockutils [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.091 248514 DEBUG oslo_concurrency.lockutils [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.091 248514 DEBUG nova.compute.manager [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] No waiting events found dispatching network-vif-unplugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.091 248514 WARNING nova.compute.manager [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received unexpected event network-vif-unplugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a for instance with vm_state paused and task_state shelving_image_uploading.
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.092 248514 DEBUG nova.compute.manager [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.092 248514 DEBUG oslo_concurrency.lockutils [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.092 248514 DEBUG oslo_concurrency.lockutils [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.092 248514 DEBUG oslo_concurrency.lockutils [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.092 248514 DEBUG nova.compute.manager [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] No waiting events found dispatching network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.093 248514 WARNING nova.compute.manager [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received unexpected event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a for instance with vm_state paused and task_state shelving_image_uploading.
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.096 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Releasing lock "refresh_cache-70a00398-fa02-482c-a33e-f190895c8d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.097 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Instance network_info: |[{"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.097 248514 DEBUG oslo_concurrency.lockutils [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-70a00398-fa02-482c-a33e-f190895c8d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.097 248514 DEBUG nova.network.neutron [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Refreshing network info cache for port 004ae3d0-5827-4393-953d-aa704915956b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.100 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Start _get_guest_xml network_info=[{"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.105 248514 WARNING nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.112 248514 DEBUG nova.virt.libvirt.host [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.113 248514 DEBUG nova.virt.libvirt.host [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.116 248514 DEBUG nova.virt.libvirt.host [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.116 248514 DEBUG nova.virt.libvirt.host [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.116 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.117 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.117 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.117 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.117 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.117 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.118 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.118 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.118 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.118 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.118 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.119 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.121 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:44 compute-0 sudo[337745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:39:44 compute-0 sudo[337745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:39:44 compute-0 sudo[337745]: pam_unix(sudo:session): session closed for user root
Dec 13 08:39:44 compute-0 sudo[337770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:39:44 compute-0 sudo[337770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:39:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:39:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571528689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.713 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.740 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:44 compute-0 nova_compute[248510]: 2025-12-13 08:39:44.746 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:45 compute-0 sudo[337770]: pam_unix(sudo:session): session closed for user root
Dec 13 08:39:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Dec 13 08:39:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:39:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:39:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:39:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:39:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:39:45 compute-0 ceph-mon[76537]: pgmap v2297: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 13 08:39:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3571528689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:39:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3109038917' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.511 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.765s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.513 248514 DEBUG nova.virt.libvirt.vif [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-764535591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-764535591',id=93,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='791fc8cca65d4bfd9d9f2f19018d60fb',ramdisk_id='',reservation_id='r-m1z94rmm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1097359454',owner_user_name='tempest-ServerTagsTestJSON-1097359454-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:39:37Z,user_data=None,user_id='ab92f76b5ae549d8bae02bb7911221d6',uuid=70a00398-fa02-482c-a33e-f190895c8d28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.513 248514 DEBUG nova.network.os_vif_util [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Converting VIF {"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.514 248514 DEBUG nova.network.os_vif_util [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.515 248514 DEBUG nova.objects.instance [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lazy-loading 'pci_devices' on Instance uuid 70a00398-fa02-482c-a33e-f190895c8d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.540 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:39:45 compute-0 nova_compute[248510]:   <uuid>70a00398-fa02-482c-a33e-f190895c8d28</uuid>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   <name>instance-0000005d</name>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerTagsTestJSON-server-764535591</nova:name>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:39:44</nova:creationTime>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:39:45 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:39:45 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:39:45 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:39:45 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:39:45 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:39:45 compute-0 nova_compute[248510]:         <nova:user uuid="ab92f76b5ae549d8bae02bb7911221d6">tempest-ServerTagsTestJSON-1097359454-project-member</nova:user>
Dec 13 08:39:45 compute-0 nova_compute[248510]:         <nova:project uuid="791fc8cca65d4bfd9d9f2f19018d60fb">tempest-ServerTagsTestJSON-1097359454</nova:project>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:39:45 compute-0 nova_compute[248510]:         <nova:port uuid="004ae3d0-5827-4393-953d-aa704915956b">
Dec 13 08:39:45 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <system>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <entry name="serial">70a00398-fa02-482c-a33e-f190895c8d28</entry>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <entry name="uuid">70a00398-fa02-482c-a33e-f190895c8d28</entry>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     </system>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   <os>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   </os>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   <features>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   </features>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/70a00398-fa02-482c-a33e-f190895c8d28_disk">
Dec 13 08:39:45 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       </source>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:39:45 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/70a00398-fa02-482c-a33e-f190895c8d28_disk.config">
Dec 13 08:39:45 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       </source>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:39:45 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:b0:7b:f4"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <target dev="tap004ae3d0-58"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/console.log" append="off"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <video>
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     </video>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:39:45 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:39:45 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:39:45 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:39:45 compute-0 nova_compute[248510]: </domain>
Dec 13 08:39:45 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.541 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Preparing to wait for external event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.541 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.541 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.542 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.542 248514 DEBUG nova.virt.libvirt.vif [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-764535591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-764535591',id=93,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='791fc8cca65d4bfd9d9f2f19018d60fb',ramdisk_id='',reservation_id='r-m1z94rmm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1097359454',owner_user_name='tempest-ServerTagsTestJSON-1097359454-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:39:37Z,user_data=None,user_id='ab92f76b5ae549d8bae02bb7911221d6',uuid=70a00398-fa02-482c-a33e-f190895c8d28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.542 248514 DEBUG nova.network.os_vif_util [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Converting VIF {"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.543 248514 DEBUG nova.network.os_vif_util [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.543 248514 DEBUG os_vif [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.544 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.545 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.548 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.548 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap004ae3d0-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.549 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap004ae3d0-58, col_values=(('external_ids', {'iface-id': '004ae3d0-5827-4393-953d-aa704915956b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:7b:f4', 'vm-uuid': '70a00398-fa02-482c-a33e-f190895c8d28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:39:45 compute-0 NetworkManager[50376]: <info>  [1765615185.5528] manager: (tap004ae3d0-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.559 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.560 248514 INFO os_vif [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58')
Dec 13 08:39:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:39:45 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Dec 13 08:39:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:39:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:39:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:39:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:39:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:39:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:39:45 compute-0 sudo[337870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:39:45 compute-0 sudo[337870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:39:45 compute-0 sudo[337870]: pam_unix(sudo:session): session closed for user root
Dec 13 08:39:45 compute-0 sudo[337895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:39:45 compute-0 sudo[337895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.761 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.762 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.762 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] No VIF found with MAC fa:16:3e:b0:7b:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.762 248514 INFO nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Using config drive
Dec 13 08:39:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.785 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:45 compute-0 nova_compute[248510]: 2025-12-13 08:39:45.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:46 compute-0 podman[337950]: 2025-12-13 08:39:46.064665848 +0000 UTC m=+0.040966669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:39:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:39:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:39:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3109038917' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:39:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:39:46 compute-0 ceph-mon[76537]: osdmap e251: 3 total, 3 up, 3 in
Dec 13 08:39:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:39:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:39:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:39:46 compute-0 ceph-mon[76537]: pgmap v2299: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 13 08:39:46 compute-0 podman[337950]: 2025-12-13 08:39:46.381898486 +0000 UTC m=+0.358199227 container create af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hofstadter, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:39:46 compute-0 nova_compute[248510]: 2025-12-13 08:39:46.481 248514 DEBUG nova.storage.rbd_utils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] cloning vms/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk@5f10e4dbf0b14f0fa5e1d5c60a24ded4 to images/764d0410-b25b-4414-843f-9f74e17a2d49 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:39:46 compute-0 systemd[1]: Started libpod-conmon-af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c.scope.
Dec 13 08:39:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:39:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:39:46 compute-0 podman[337950]: 2025-12-13 08:39:46.814521279 +0000 UTC m=+0.790822060 container init af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hofstadter, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:39:46 compute-0 podman[337950]: 2025-12-13 08:39:46.821268407 +0000 UTC m=+0.797569158 container start af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hofstadter, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:39:46 compute-0 jovial_hofstadter[337973]: 167 167
Dec 13 08:39:46 compute-0 systemd[1]: libpod-af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c.scope: Deactivated successfully.
Dec 13 08:39:46 compute-0 podman[337950]: 2025-12-13 08:39:46.84072447 +0000 UTC m=+0.817025311 container attach af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hofstadter, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 08:39:46 compute-0 podman[337950]: 2025-12-13 08:39:46.842232888 +0000 UTC m=+0.818533649 container died af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hofstadter, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 08:39:46 compute-0 nova_compute[248510]: 2025-12-13 08:39:46.853 248514 INFO nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Creating config drive at /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/disk.config
Dec 13 08:39:46 compute-0 nova_compute[248510]: 2025-12-13 08:39:46.858 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf3u11fmb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6a68ac05c552a44b21e6e055d43115fe9a26b1dc6215dc5ad7b5f0d95ce6a48-merged.mount: Deactivated successfully.
Dec 13 08:39:46 compute-0 nova_compute[248510]: 2025-12-13 08:39:46.992 248514 DEBUG nova.storage.rbd_utils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] flattening images/764d0410-b25b-4414-843f-9f74e17a2d49 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:39:47 compute-0 podman[337950]: 2025-12-13 08:39:47.009537052 +0000 UTC m=+0.985837783 container remove af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hofstadter, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:39:47 compute-0 systemd[1]: libpod-conmon-af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c.scope: Deactivated successfully.
Dec 13 08:39:47 compute-0 nova_compute[248510]: 2025-12-13 08:39:47.038 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf3u11fmb" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:47 compute-0 nova_compute[248510]: 2025-12-13 08:39:47.073 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:39:47 compute-0 nova_compute[248510]: 2025-12-13 08:39:47.077 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/disk.config 70a00398-fa02-482c-a33e-f190895c8d28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:47 compute-0 nova_compute[248510]: 2025-12-13 08:39:47.211 248514 DEBUG nova.network.neutron [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Updated VIF entry in instance network info cache for port 004ae3d0-5827-4393-953d-aa704915956b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:39:47 compute-0 nova_compute[248510]: 2025-12-13 08:39:47.212 248514 DEBUG nova.network.neutron [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Updating instance_info_cache with network_info: [{"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:47 compute-0 nova_compute[248510]: 2025-12-13 08:39:47.232 248514 DEBUG oslo_concurrency.lockutils [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-70a00398-fa02-482c-a33e-f190895c8d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:39:47 compute-0 podman[338065]: 2025-12-13 08:39:47.248199599 +0000 UTC m=+0.094669792 container create 075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_feynman, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 08:39:47 compute-0 podman[338065]: 2025-12-13 08:39:47.17896584 +0000 UTC m=+0.025436063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:39:47 compute-0 systemd[1]: Started libpod-conmon-075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d.scope.
Dec 13 08:39:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:39:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d724d3b2b9147e965b80f78b9d0c6b485909289fe1e5dd38841710bada7708/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d724d3b2b9147e965b80f78b9d0c6b485909289fe1e5dd38841710bada7708/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d724d3b2b9147e965b80f78b9d0c6b485909289fe1e5dd38841710bada7708/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d724d3b2b9147e965b80f78b9d0c6b485909289fe1e5dd38841710bada7708/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d724d3b2b9147e965b80f78b9d0c6b485909289fe1e5dd38841710bada7708/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 13 08:39:47 compute-0 nova_compute[248510]: 2025-12-13 08:39:47.977 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:39:48 compute-0 podman[338065]: 2025-12-13 08:39:48.042443784 +0000 UTC m=+0.888914027 container init 075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:39:48 compute-0 podman[338065]: 2025-12-13 08:39:48.050609377 +0000 UTC m=+0.897079570 container start 075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_feynman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.092 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:39:48 compute-0 nova_compute[248510]: 2025-12-13 08:39:48.093 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.093 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:39:48 compute-0 podman[338065]: 2025-12-13 08:39:48.373204928 +0000 UTC m=+1.219675141 container attach 075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_feynman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 08:39:48 compute-0 ceph-mon[76537]: pgmap v2300: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 13 08:39:48 compute-0 nova_compute[248510]: 2025-12-13 08:39:48.583 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/disk.config 70a00398-fa02-482c-a33e-f190895c8d28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:48 compute-0 nova_compute[248510]: 2025-12-13 08:39:48.584 248514 INFO nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Deleting local config drive /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/disk.config because it was imported into RBD.
Dec 13 08:39:48 compute-0 nova_compute[248510]: 2025-12-13 08:39:48.600 248514 DEBUG nova.storage.rbd_utils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] removing snapshot(5f10e4dbf0b14f0fa5e1d5c60a24ded4) on rbd image(2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:39:48 compute-0 kernel: tap004ae3d0-58: entered promiscuous mode
Dec 13 08:39:48 compute-0 NetworkManager[50376]: <info>  [1765615188.6428] manager: (tap004ae3d0-58): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Dec 13 08:39:48 compute-0 ovn_controller[148476]: 2025-12-13T08:39:48Z|00909|binding|INFO|Claiming lport 004ae3d0-5827-4393-953d-aa704915956b for this chassis.
Dec 13 08:39:48 compute-0 ovn_controller[148476]: 2025-12-13T08:39:48Z|00910|binding|INFO|004ae3d0-5827-4393-953d-aa704915956b: Claiming fa:16:3e:b0:7b:f4 10.100.0.4
Dec 13 08:39:48 compute-0 nova_compute[248510]: 2025-12-13 08:39:48.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.655 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:7b:f4 10.100.0.4'], port_security=['fa:16:3e:b0:7b:f4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '70a00398-fa02-482c-a33e-f190895c8d28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '791fc8cca65d4bfd9d9f2f19018d60fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '480cd559-e8d2-47d2-86f5-7cc9a5743178', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e0ff201-d141-4a6c-ab2a-8538bea7e081, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=004ae3d0-5827-4393-953d-aa704915956b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.656 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 004ae3d0-5827-4393-953d-aa704915956b in datapath 6c5b1dd4-5a39-41f9-a98f-1f901a1328cf bound to our chassis
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.657 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c5b1dd4-5a39-41f9-a98f-1f901a1328cf
Dec 13 08:39:48 compute-0 ovn_controller[148476]: 2025-12-13T08:39:48Z|00911|binding|INFO|Setting lport 004ae3d0-5827-4393-953d-aa704915956b ovn-installed in OVS
Dec 13 08:39:48 compute-0 ovn_controller[148476]: 2025-12-13T08:39:48Z|00912|binding|INFO|Setting lport 004ae3d0-5827-4393-953d-aa704915956b up in Southbound
Dec 13 08:39:48 compute-0 nova_compute[248510]: 2025-12-13 08:39:48.670 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.670 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[de7d369f-fcad-40af-9fa4-684ae8b845b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.671 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c5b1dd4-51 in ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.673 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c5b1dd4-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.673 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8654e7b-5bc5-40a6-a9b4-b1ebc3ea9828]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 nova_compute[248510]: 2025-12-13 08:39:48.676 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.677 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2773a0f6-f50e-4650-b34b-aa77050ec541]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 systemd-udevd[338154]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:39:48 compute-0 systemd-machined[210538]: New machine qemu-115-instance-0000005d.
Dec 13 08:39:48 compute-0 NetworkManager[50376]: <info>  [1765615188.6907] device (tap004ae3d0-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:39:48 compute-0 NetworkManager[50376]: <info>  [1765615188.6913] device (tap004ae3d0-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.689 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[5479c4a2-f6c4-4fcc-82c3-198fa67290c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 systemd[1]: Started Virtual Machine qemu-115-instance-0000005d.
Dec 13 08:39:48 compute-0 eager_feynman[338097]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:39:48 compute-0 eager_feynman[338097]: --> All data devices are unavailable
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.714 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2444f8-8018-4137-9dd7-bb113d48477d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 systemd[1]: libpod-075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d.scope: Deactivated successfully.
Dec 13 08:39:48 compute-0 podman[338065]: 2025-12-13 08:39:48.730758207 +0000 UTC m=+1.577228410 container died 075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.744 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3421fc3f-1ed8-4fcc-a52b-6cc979faa064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 NetworkManager[50376]: <info>  [1765615188.7522] manager: (tap6c5b1dd4-50): new Veth device (/org/freedesktop/NetworkManager/Devices/386)
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.752 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[65c4ee34-db3b-4707-89a7-04f8b1bb5e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-55d724d3b2b9147e965b80f78b9d0c6b485909289fe1e5dd38841710bada7708-merged.mount: Deactivated successfully.
Dec 13 08:39:48 compute-0 podman[338065]: 2025-12-13 08:39:48.779175549 +0000 UTC m=+1.625645742 container remove 075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:39:48 compute-0 systemd[1]: libpod-conmon-075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d.scope: Deactivated successfully.
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.794 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4f53a573-0bae-47a4-b422-7b1178b1b6c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.798 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[09d825e4-65d2-4e12-b644-c2f2e6375418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 sudo[337895]: pam_unix(sudo:session): session closed for user root
Dec 13 08:39:48 compute-0 NetworkManager[50376]: <info>  [1765615188.8249] device (tap6c5b1dd4-50): carrier: link connected
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.831 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c272d29f-0413-43f6-8820-27b4700ae2b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.847 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[43bf0cd5-3220-4d94-b7ec-36378c2a7016]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c5b1dd4-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:fd:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776604, 'reachable_time': 23218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338205, 'error': None, 'target': 'ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.863 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ae65dce2-9104-4e3b-a963-c7b48b011a43]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:fd77'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 776604, 'tstamp': 776604}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338219, 'error': None, 'target': 'ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.882 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed52c01-f0b2-47c6-8d0f-222d10e541ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c5b1dd4-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:fd:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776604, 'reachable_time': 23218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338225, 'error': None, 'target': 'ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 sudo[338199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:39:48 compute-0 sudo[338199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:39:48 compute-0 sudo[338199]: pam_unix(sudo:session): session closed for user root
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.925 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a78f4dd1-8d0f-44e5-95cb-088b8a0d4399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 sudo[338229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:39:48 compute-0 sudo[338229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.986 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3c83b3-f0da-4c83-b7a8-334ce5ef5859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.988 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c5b1dd4-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.988 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.989 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c5b1dd4-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:48 compute-0 nova_compute[248510]: 2025-12-13 08:39:48.990 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:48 compute-0 kernel: tap6c5b1dd4-50: entered promiscuous mode
Dec 13 08:39:48 compute-0 NetworkManager[50376]: <info>  [1765615188.9927] manager: (tap6c5b1dd4-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Dec 13 08:39:48 compute-0 nova_compute[248510]: 2025-12-13 08:39:48.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c5b1dd4-50, col_values=(('external_ids', {'iface-id': 'b1e704a1-ed75-41ef-b655-bc56c3a0d60e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:48 compute-0 nova_compute[248510]: 2025-12-13 08:39:48.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:48 compute-0 ovn_controller[148476]: 2025-12-13T08:39:48Z|00913|binding|INFO|Releasing lport b1e704a1-ed75-41ef-b655-bc56c3a0d60e from this chassis (sb_readonly=0)
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.017 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:49.018 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c5b1dd4-5a39-41f9-a98f-1f901a1328cf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c5b1dd4-5a39-41f9-a98f-1f901a1328cf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:49.019 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a72565f6-9dad-425d-b061-98de22e34955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:49.020 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/6c5b1dd4-5a39-41f9-a98f-1f901a1328cf.pid.haproxy
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 6c5b1dd4-5a39-41f9-a98f-1f901a1328cf
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:39:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:49.023 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'env', 'PROCESS_TAG=haproxy-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c5b1dd4-5a39-41f9-a98f-1f901a1328cf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.240 248514 DEBUG nova.compute.manager [req-dd1f6388-5d43-4ad6-a305-bbb09a85fd91 req-83610825-db66-4ce0-8678-5e1d3c9b5dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.240 248514 DEBUG oslo_concurrency.lockutils [req-dd1f6388-5d43-4ad6-a305-bbb09a85fd91 req-83610825-db66-4ce0-8678-5e1d3c9b5dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.240 248514 DEBUG oslo_concurrency.lockutils [req-dd1f6388-5d43-4ad6-a305-bbb09a85fd91 req-83610825-db66-4ce0-8678-5e1d3c9b5dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.241 248514 DEBUG oslo_concurrency.lockutils [req-dd1f6388-5d43-4ad6-a305-bbb09a85fd91 req-83610825-db66-4ce0-8678-5e1d3c9b5dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.241 248514 DEBUG nova.compute.manager [req-dd1f6388-5d43-4ad6-a305-bbb09a85fd91 req-83610825-db66-4ce0-8678-5e1d3c9b5dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Processing event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:39:49 compute-0 podman[338277]: 2025-12-13 08:39:49.254149765 +0000 UTC m=+0.038634101 container create c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hofstadter, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:39:49 compute-0 systemd[1]: Started libpod-conmon-c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c.scope.
Dec 13 08:39:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:39:49 compute-0 podman[338277]: 2025-12-13 08:39:49.330154952 +0000 UTC m=+0.114639298 container init c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 08:39:49 compute-0 podman[338277]: 2025-12-13 08:39:49.236224509 +0000 UTC m=+0.020708835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:39:49 compute-0 podman[338277]: 2025-12-13 08:39:49.337061534 +0000 UTC m=+0.121545860 container start c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hofstadter, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 08:39:49 compute-0 podman[338277]: 2025-12-13 08:39:49.341272888 +0000 UTC m=+0.125757234 container attach c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hofstadter, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:39:49 compute-0 exciting_hofstadter[338309]: 167 167
Dec 13 08:39:49 compute-0 systemd[1]: libpod-c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c.scope: Deactivated successfully.
Dec 13 08:39:49 compute-0 podman[338277]: 2025-12-13 08:39:49.345433031 +0000 UTC m=+0.129917357 container died c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hofstadter, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:39:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3273eec25cc77b6dd7c90bd62fc36041eb13640fbd996f106f9a30af5a1c27a-merged.mount: Deactivated successfully.
Dec 13 08:39:49 compute-0 podman[338277]: 2025-12-13 08:39:49.390437769 +0000 UTC m=+0.174922095 container remove c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 08:39:49 compute-0 systemd[1]: libpod-conmon-c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c.scope: Deactivated successfully.
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.444 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615189.4445612, 70a00398-fa02-482c-a33e-f190895c8d28 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.445 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] VM Started (Lifecycle Event)
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.447 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.451 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.456 248514 INFO nova.virt.libvirt.driver [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Instance spawned successfully.
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.457 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:39:49 compute-0 podman[338374]: 2025-12-13 08:39:49.458799137 +0000 UTC m=+0.061060808 container create e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.476 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.482 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.484 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.484 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.485 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.485 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.485 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.486 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:39:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Dec 13 08:39:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Dec 13 08:39:49 compute-0 systemd[1]: Started libpod-conmon-e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb.scope.
Dec 13 08:39:49 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.519 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.520 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615189.4471893, 70a00398-fa02-482c-a33e-f190895c8d28 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.520 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] VM Paused (Lifecycle Event)
Dec 13 08:39:49 compute-0 podman[338374]: 2025-12-13 08:39:49.421124041 +0000 UTC m=+0.023385742 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:39:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.548 248514 DEBUG nova.storage.rbd_utils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(snap) on rbd image(764d0410-b25b-4414-843f-9f74e17a2d49) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7c4b47c7bcc837568d4b5d03309c58f78909de53f1978dfe18a183e466a4e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:49 compute-0 podman[338374]: 2025-12-13 08:39:49.571362562 +0000 UTC m=+0.173624273 container init e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:39:49 compute-0 podman[338374]: 2025-12-13 08:39:49.579066773 +0000 UTC m=+0.181328444 container start e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.588 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.590 248514 INFO nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Took 12.35 seconds to spawn the instance on the hypervisor.
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.590 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.595 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615189.4495208, 70a00398-fa02-482c-a33e-f190895c8d28 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.596 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] VM Resumed (Lifecycle Event)
Dec 13 08:39:49 compute-0 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [NOTICE]   (338430) : New worker (338432) forked
Dec 13 08:39:49 compute-0 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [NOTICE]   (338430) : Loading success.
Dec 13 08:39:49 compute-0 podman[338400]: 2025-12-13 08:39:49.618343929 +0000 UTC m=+0.060539215 container create 9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.630 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.633 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:39:49 compute-0 systemd[1]: Started libpod-conmon-9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0.scope.
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.664 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.677 248514 INFO nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Took 14.12 seconds to build instance.
Dec 13 08:39:49 compute-0 podman[338400]: 2025-12-13 08:39:49.592317363 +0000 UTC m=+0.034512659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:39:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:39:49 compute-0 nova_compute[248510]: 2025-12-13 08:39:49.700 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2600ad51e38076428193a4f4df35ff6c1755749176ffe170315af33ae1e3ffbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2600ad51e38076428193a4f4df35ff6c1755749176ffe170315af33ae1e3ffbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2600ad51e38076428193a4f4df35ff6c1755749176ffe170315af33ae1e3ffbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2600ad51e38076428193a4f4df35ff6c1755749176ffe170315af33ae1e3ffbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:49 compute-0 podman[338400]: 2025-12-13 08:39:49.725954401 +0000 UTC m=+0.168149707 container init 9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bohr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:39:49 compute-0 podman[338400]: 2025-12-13 08:39:49.732427952 +0000 UTC m=+0.174623238 container start 9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 08:39:49 compute-0 podman[338400]: 2025-12-13 08:39:49.737119699 +0000 UTC m=+0.179314985 container attach 9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bohr, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 08:39:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 332 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 68 op/s
Dec 13 08:39:50 compute-0 charming_bohr[338446]: {
Dec 13 08:39:50 compute-0 charming_bohr[338446]:     "0": [
Dec 13 08:39:50 compute-0 charming_bohr[338446]:         {
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "devices": [
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "/dev/loop3"
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             ],
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_name": "ceph_lv0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_size": "21470642176",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "name": "ceph_lv0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "tags": {
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.cluster_name": "ceph",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.crush_device_class": "",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.encrypted": "0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.objectstore": "bluestore",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.osd_id": "0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.type": "block",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.vdo": "0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.with_tpm": "0"
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             },
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "type": "block",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "vg_name": "ceph_vg0"
Dec 13 08:39:50 compute-0 charming_bohr[338446]:         }
Dec 13 08:39:50 compute-0 charming_bohr[338446]:     ],
Dec 13 08:39:50 compute-0 charming_bohr[338446]:     "1": [
Dec 13 08:39:50 compute-0 charming_bohr[338446]:         {
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "devices": [
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "/dev/loop4"
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             ],
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_name": "ceph_lv1",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_size": "21470642176",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "name": "ceph_lv1",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "tags": {
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.cluster_name": "ceph",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.crush_device_class": "",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.encrypted": "0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.objectstore": "bluestore",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.osd_id": "1",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.type": "block",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.vdo": "0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.with_tpm": "0"
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             },
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "type": "block",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "vg_name": "ceph_vg1"
Dec 13 08:39:50 compute-0 charming_bohr[338446]:         }
Dec 13 08:39:50 compute-0 charming_bohr[338446]:     ],
Dec 13 08:39:50 compute-0 charming_bohr[338446]:     "2": [
Dec 13 08:39:50 compute-0 charming_bohr[338446]:         {
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "devices": [
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "/dev/loop5"
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             ],
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_name": "ceph_lv2",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_size": "21470642176",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "name": "ceph_lv2",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "tags": {
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.cluster_name": "ceph",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.crush_device_class": "",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.encrypted": "0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.objectstore": "bluestore",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.osd_id": "2",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.type": "block",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.vdo": "0",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:                 "ceph.with_tpm": "0"
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             },
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "type": "block",
Dec 13 08:39:50 compute-0 charming_bohr[338446]:             "vg_name": "ceph_vg2"
Dec 13 08:39:50 compute-0 charming_bohr[338446]:         }
Dec 13 08:39:50 compute-0 charming_bohr[338446]:     ]
Dec 13 08:39:50 compute-0 charming_bohr[338446]: }
Dec 13 08:39:50 compute-0 systemd[1]: libpod-9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0.scope: Deactivated successfully.
Dec 13 08:39:50 compute-0 conmon[338446]: conmon 9e23823f88c288c2deb0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0.scope/container/memory.events
Dec 13 08:39:50 compute-0 podman[338400]: 2025-12-13 08:39:50.080130657 +0000 UTC m=+0.522325973 container died 9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2600ad51e38076428193a4f4df35ff6c1755749176ffe170315af33ae1e3ffbe-merged.mount: Deactivated successfully.
Dec 13 08:39:50 compute-0 podman[338400]: 2025-12-13 08:39:50.13017992 +0000 UTC m=+0.572375206 container remove 9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 08:39:50 compute-0 systemd[1]: libpod-conmon-9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0.scope: Deactivated successfully.
Dec 13 08:39:50 compute-0 sudo[338229]: pam_unix(sudo:session): session closed for user root
Dec 13 08:39:50 compute-0 sudo[338467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:39:50 compute-0 sudo[338467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:39:50 compute-0 sudo[338467]: pam_unix(sudo:session): session closed for user root
Dec 13 08:39:50 compute-0 sudo[338492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:39:50 compute-0 sudo[338492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:39:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Dec 13 08:39:50 compute-0 nova_compute[248510]: 2025-12-13 08:39:50.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Dec 13 08:39:50 compute-0 ceph-mon[76537]: osdmap e252: 3 total, 3 up, 3 in
Dec 13 08:39:50 compute-0 ceph-mon[76537]: pgmap v2302: 321 pgs: 321 active+clean; 332 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 68 op/s
Dec 13 08:39:50 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Dec 13 08:39:50 compute-0 podman[338529]: 2025-12-13 08:39:50.649868316 +0000 UTC m=+0.053983602 container create 88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_wing, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 08:39:50 compute-0 systemd[1]: Started libpod-conmon-88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412.scope.
Dec 13 08:39:50 compute-0 podman[338529]: 2025-12-13 08:39:50.625321456 +0000 UTC m=+0.029436772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:39:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:39:50 compute-0 podman[338529]: 2025-12-13 08:39:50.745846329 +0000 UTC m=+0.149961645 container init 88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_wing, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 08:39:50 compute-0 podman[338529]: 2025-12-13 08:39:50.75392154 +0000 UTC m=+0.158036826 container start 88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_wing, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:39:50 compute-0 podman[338529]: 2025-12-13 08:39:50.758232107 +0000 UTC m=+0.162347533 container attach 88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_wing, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:39:50 compute-0 romantic_wing[338545]: 167 167
Dec 13 08:39:50 compute-0 systemd[1]: libpod-88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412.scope: Deactivated successfully.
Dec 13 08:39:50 compute-0 podman[338529]: 2025-12-13 08:39:50.763161759 +0000 UTC m=+0.167277065 container died 88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 08:39:50 compute-0 nova_compute[248510]: 2025-12-13 08:39:50.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdde3396188335eb6dbd89cf1e4f56051c67d6068b35bcf917d12fcd2dd91be7-merged.mount: Deactivated successfully.
Dec 13 08:39:50 compute-0 podman[338529]: 2025-12-13 08:39:50.808358572 +0000 UTC m=+0.212473858 container remove 88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:39:50 compute-0 systemd[1]: libpod-conmon-88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412.scope: Deactivated successfully.
Dec 13 08:39:50 compute-0 nova_compute[248510]: 2025-12-13 08:39:50.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:50 compute-0 podman[338568]: 2025-12-13 08:39:50.989271065 +0000 UTC m=+0.041926983 container create 15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:39:51 compute-0 systemd[1]: Started libpod-conmon-15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27.scope.
Dec 13 08:39:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e807f731104d8fb8f9798c1bb9330d40381422564891f2dac416f29eac2a623/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e807f731104d8fb8f9798c1bb9330d40381422564891f2dac416f29eac2a623/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:51 compute-0 podman[338568]: 2025-12-13 08:39:50.971107273 +0000 UTC m=+0.023763211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e807f731104d8fb8f9798c1bb9330d40381422564891f2dac416f29eac2a623/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e807f731104d8fb8f9798c1bb9330d40381422564891f2dac416f29eac2a623/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:39:51 compute-0 podman[338568]: 2025-12-13 08:39:51.080647354 +0000 UTC m=+0.133303292 container init 15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:39:51 compute-0 podman[338568]: 2025-12-13 08:39:51.088731745 +0000 UTC m=+0.141387663 container start 15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:39:51 compute-0 podman[338568]: 2025-12-13 08:39:51.092258502 +0000 UTC m=+0.144914440 container attach 15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:39:51 compute-0 nova_compute[248510]: 2025-12-13 08:39:51.479 248514 DEBUG nova.compute.manager [req-8f6de1e3-85e1-4f5f-99a1-9e55c02e0544 req-26be639b-f0bd-4e78-95d2-0c6a1152b9dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:51 compute-0 nova_compute[248510]: 2025-12-13 08:39:51.480 248514 DEBUG oslo_concurrency.lockutils [req-8f6de1e3-85e1-4f5f-99a1-9e55c02e0544 req-26be639b-f0bd-4e78-95d2-0c6a1152b9dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:51 compute-0 nova_compute[248510]: 2025-12-13 08:39:51.480 248514 DEBUG oslo_concurrency.lockutils [req-8f6de1e3-85e1-4f5f-99a1-9e55c02e0544 req-26be639b-f0bd-4e78-95d2-0c6a1152b9dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:51 compute-0 nova_compute[248510]: 2025-12-13 08:39:51.480 248514 DEBUG oslo_concurrency.lockutils [req-8f6de1e3-85e1-4f5f-99a1-9e55c02e0544 req-26be639b-f0bd-4e78-95d2-0c6a1152b9dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:51 compute-0 nova_compute[248510]: 2025-12-13 08:39:51.480 248514 DEBUG nova.compute.manager [req-8f6de1e3-85e1-4f5f-99a1-9e55c02e0544 req-26be639b-f0bd-4e78-95d2-0c6a1152b9dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] No waiting events found dispatching network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:51 compute-0 nova_compute[248510]: 2025-12-13 08:39:51.480 248514 WARNING nova.compute.manager [req-8f6de1e3-85e1-4f5f-99a1-9e55c02e0544 req-26be639b-f0bd-4e78-95d2-0c6a1152b9dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received unexpected event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b for instance with vm_state active and task_state None.
Dec 13 08:39:51 compute-0 ceph-mon[76537]: osdmap e253: 3 total, 3 up, 3 in
Dec 13 08:39:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:39:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 86 op/s
Dec 13 08:39:51 compute-0 lvm[338662]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:39:51 compute-0 lvm[338665]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:39:51 compute-0 lvm[338662]: VG ceph_vg0 finished
Dec 13 08:39:51 compute-0 lvm[338665]: VG ceph_vg2 finished
Dec 13 08:39:51 compute-0 lvm[338664]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:39:51 compute-0 lvm[338664]: VG ceph_vg1 finished
Dec 13 08:39:51 compute-0 pensive_morse[338584]: {}
Dec 13 08:39:51 compute-0 systemd[1]: libpod-15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27.scope: Deactivated successfully.
Dec 13 08:39:51 compute-0 podman[338568]: 2025-12-13 08:39:51.909911328 +0000 UTC m=+0.962567276 container died 15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 08:39:51 compute-0 systemd[1]: libpod-15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27.scope: Consumed 1.368s CPU time.
Dec 13 08:39:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e807f731104d8fb8f9798c1bb9330d40381422564891f2dac416f29eac2a623-merged.mount: Deactivated successfully.
Dec 13 08:39:51 compute-0 podman[338568]: 2025-12-13 08:39:51.958962306 +0000 UTC m=+1.011618214 container remove 15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 13 08:39:51 compute-0 systemd[1]: libpod-conmon-15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27.scope: Deactivated successfully.
Dec 13 08:39:52 compute-0 sudo[338492]: pam_unix(sudo:session): session closed for user root
Dec 13 08:39:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:39:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:39:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:39:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:39:52 compute-0 sudo[338680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:39:52 compute-0 sudo[338680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:39:52 compute-0 sudo[338680]: pam_unix(sudo:session): session closed for user root
Dec 13 08:39:52 compute-0 nova_compute[248510]: 2025-12-13 08:39:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:39:52 compute-0 nova_compute[248510]: 2025-12-13 08:39:52.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:39:52 compute-0 nova_compute[248510]: 2025-12-13 08:39:52.895 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:39:53 compute-0 ceph-mon[76537]: pgmap v2304: 321 pgs: 321 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 86 op/s
Dec 13 08:39:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:39:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:39:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.7 MiB/s wr, 134 op/s
Dec 13 08:39:54 compute-0 nova_compute[248510]: 2025-12-13 08:39:54.198 248514 INFO nova.virt.libvirt.driver [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Snapshot image upload complete
Dec 13 08:39:54 compute-0 nova_compute[248510]: 2025-12-13 08:39:54.199 248514 DEBUG nova.compute.manager [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:54 compute-0 ceph-mon[76537]: pgmap v2305: 321 pgs: 321 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.7 MiB/s wr, 134 op/s
Dec 13 08:39:54 compute-0 nova_compute[248510]: 2025-12-13 08:39:54.604 248514 INFO nova.compute.manager [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Shelve offloading
Dec 13 08:39:54 compute-0 nova_compute[248510]: 2025-12-13 08:39:54.610 248514 INFO nova.virt.libvirt.driver [-] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance destroyed successfully.
Dec 13 08:39:54 compute-0 nova_compute[248510]: 2025-12-13 08:39:54.611 248514 DEBUG nova.compute.manager [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:54 compute-0 nova_compute[248510]: 2025-12-13 08:39:54.613 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:39:54 compute-0 nova_compute[248510]: 2025-12-13 08:39:54.613 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquired lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:39:54 compute-0 nova_compute[248510]: 2025-12-13 08:39:54.613 248514 DEBUG nova.network.neutron [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:39:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.097 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.419 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.421 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.422 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.556 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.693 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.693 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.693 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.694 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.694 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.695 248514 INFO nova.compute.manager [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Terminating instance
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.696 248514 DEBUG nova.compute.manager [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:39:55 compute-0 kernel: tap004ae3d0-58 (unregistering): left promiscuous mode
Dec 13 08:39:55 compute-0 NetworkManager[50376]: <info>  [1765615195.7448] device (tap004ae3d0-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:55 compute-0 ovn_controller[148476]: 2025-12-13T08:39:55Z|00914|binding|INFO|Releasing lport 004ae3d0-5827-4393-953d-aa704915956b from this chassis (sb_readonly=0)
Dec 13 08:39:55 compute-0 ovn_controller[148476]: 2025-12-13T08:39:55Z|00915|binding|INFO|Setting lport 004ae3d0-5827-4393-953d-aa704915956b down in Southbound
Dec 13 08:39:55 compute-0 ovn_controller[148476]: 2025-12-13T08:39:55Z|00916|binding|INFO|Removing iface tap004ae3d0-58 ovn-installed in OVS
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.767 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:39:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 195 op/s
Dec 13 08:39:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.780 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:7b:f4 10.100.0.4'], port_security=['fa:16:3e:b0:7b:f4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '70a00398-fa02-482c-a33e-f190895c8d28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '791fc8cca65d4bfd9d9f2f19018d60fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '480cd559-e8d2-47d2-86f5-7cc9a5743178', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e0ff201-d141-4a6c-ab2a-8538bea7e081, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=004ae3d0-5827-4393-953d-aa704915956b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:39:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.781 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 004ae3d0-5827-4393-953d-aa704915956b in datapath 6c5b1dd4-5a39-41f9-a98f-1f901a1328cf unbound from our chassis
Dec 13 08:39:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.783 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:39:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.785 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[518d9150-f100-4f46-a1c7-7bea45b6ba93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.786 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf namespace which is not needed anymore
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:55 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Dec 13 08:39:55 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005d.scope: Consumed 7.154s CPU time.
Dec 13 08:39:55 compute-0 systemd-machined[210538]: Machine qemu-115-instance-0000005d terminated.
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.849 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.926 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:55 compute-0 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [NOTICE]   (338430) : haproxy version is 2.8.14-c23fe91
Dec 13 08:39:55 compute-0 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [NOTICE]   (338430) : path to executable is /usr/sbin/haproxy
Dec 13 08:39:55 compute-0 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [WARNING]  (338430) : Exiting Master process...
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:55 compute-0 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [ALERT]    (338430) : Current worker (338432) exited with code 143 (Terminated)
Dec 13 08:39:55 compute-0 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [WARNING]  (338430) : All workers exited. Exiting... (0)
Dec 13 08:39:55 compute-0 systemd[1]: libpod-e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb.scope: Deactivated successfully.
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.943 248514 INFO nova.virt.libvirt.driver [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Instance destroyed successfully.
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.944 248514 DEBUG nova.objects.instance [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lazy-loading 'resources' on Instance uuid 70a00398-fa02-482c-a33e-f190895c8d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:39:55 compute-0 podman[338731]: 2025-12-13 08:39:55.946452669 +0000 UTC m=+0.050989037 container died e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.958 248514 DEBUG nova.virt.libvirt.vif [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-764535591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-764535591',id=93,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:39:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='791fc8cca65d4bfd9d9f2f19018d60fb',ramdisk_id='',reservation_id='r-m1z94rmm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-1097359454',owner_user_name='tempest-ServerTagsTestJSON-1097359454-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:39:49Z,user_data=None,user_id='ab92f76b5ae549d8bae02bb7911221d6',uuid=70a00398-fa02-482c-a33e-f190895c8d28,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.958 248514 DEBUG nova.network.os_vif_util [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Converting VIF {"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.959 248514 DEBUG nova.network.os_vif_util [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.959 248514 DEBUG os_vif [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.961 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.962 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap004ae3d0-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.963 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.966 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.968 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:55 compute-0 nova_compute[248510]: 2025-12-13 08:39:55.970 248514 INFO os_vif [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58')
Dec 13 08:39:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb-userdata-shm.mount: Deactivated successfully.
Dec 13 08:39:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d7c4b47c7bcc837568d4b5d03309c58f78909de53f1978dfe18a183e466a4e2-merged.mount: Deactivated successfully.
Dec 13 08:39:55 compute-0 podman[338731]: 2025-12-13 08:39:55.994344309 +0000 UTC m=+0.098880667 container cleanup e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 08:39:56 compute-0 systemd[1]: libpod-conmon-e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb.scope: Deactivated successfully.
Dec 13 08:39:56 compute-0 podman[338785]: 2025-12-13 08:39:56.083877432 +0000 UTC m=+0.062627146 container remove e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:39:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.090 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[92261bfe-e782-49c2-a1c6-6c84286b07e4]: (4, ('Sat Dec 13 08:39:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf (e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb)\ne243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb\nSat Dec 13 08:39:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf (e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb)\ne243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.092 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f2039ab3-9697-4eaa-9600-f8deab0867cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.093 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c5b1dd4-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.095 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:56 compute-0 kernel: tap6c5b1dd4-50: left promiscuous mode
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.119 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d4bf7765-05b7-4813-9f20-4f52e50d1b05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.132 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71daf30b-8761-45e5-8d7a-9543ded72dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.133 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09193c50-6b10-41e3-bf1e-52527b27d70b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.152 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2abc9b2a-62de-49fa-b0d7-5eb2645459aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776595, 'reachable_time': 29728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338808, 'error': None, 'target': 'ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c5b1dd4\x2d5a39\x2d41f9\x2da98f\x2d1f901a1328cf.mount: Deactivated successfully.
Dec 13 08:39:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.157 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:39:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.157 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[9642dff8-076b-4e69-9c33-f18defec409b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.246 248514 INFO nova.virt.libvirt.driver [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Deleting instance files /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28_del
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.247 248514 INFO nova.virt.libvirt.driver [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Deletion of /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28_del complete
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.288 248514 DEBUG nova.compute.manager [req-c8752636-70f4-4da3-9a6f-7190302aed3d req-b80573d0-2bb3-4b6f-9be4-24cc850225ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-vif-unplugged-004ae3d0-5827-4393-953d-aa704915956b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.289 248514 DEBUG oslo_concurrency.lockutils [req-c8752636-70f4-4da3-9a6f-7190302aed3d req-b80573d0-2bb3-4b6f-9be4-24cc850225ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.289 248514 DEBUG oslo_concurrency.lockutils [req-c8752636-70f4-4da3-9a6f-7190302aed3d req-b80573d0-2bb3-4b6f-9be4-24cc850225ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.290 248514 DEBUG oslo_concurrency.lockutils [req-c8752636-70f4-4da3-9a6f-7190302aed3d req-b80573d0-2bb3-4b6f-9be4-24cc850225ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.290 248514 DEBUG nova.compute.manager [req-c8752636-70f4-4da3-9a6f-7190302aed3d req-b80573d0-2bb3-4b6f-9be4-24cc850225ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] No waiting events found dispatching network-vif-unplugged-004ae3d0-5827-4393-953d-aa704915956b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.290 248514 DEBUG nova.compute.manager [req-c8752636-70f4-4da3-9a6f-7190302aed3d req-b80573d0-2bb3-4b6f-9be4-24cc850225ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-vif-unplugged-004ae3d0-5827-4393-953d-aa704915956b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.327 248514 INFO nova.compute.manager [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Took 0.63 seconds to destroy the instance on the hypervisor.
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.328 248514 DEBUG oslo.service.loopingcall [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.328 248514 DEBUG nova.compute.manager [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:39:56 compute-0 nova_compute[248510]: 2025-12-13 08:39:56.328 248514 DEBUG nova.network.neutron [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:39:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:39:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Dec 13 08:39:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Dec 13 08:39:56 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Dec 13 08:39:56 compute-0 ceph-mon[76537]: pgmap v2306: 321 pgs: 321 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 195 op/s
Dec 13 08:39:56 compute-0 ceph-mon[76537]: osdmap e254: 3 total, 3 up, 3 in
Dec 13 08:39:57 compute-0 nova_compute[248510]: 2025-12-13 08:39:57.071 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615182.0693662, 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:39:57 compute-0 nova_compute[248510]: 2025-12-13 08:39:57.072 248514 INFO nova.compute.manager [-] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] VM Stopped (Lifecycle Event)
Dec 13 08:39:57 compute-0 nova_compute[248510]: 2025-12-13 08:39:57.098 248514 DEBUG nova.compute.manager [None req-77d70a0e-a256-4cc1-b7b9-4d611b8cac16 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:39:57 compute-0 nova_compute[248510]: 2025-12-13 08:39:57.103 248514 DEBUG nova.compute.manager [None req-77d70a0e-a256-4cc1-b7b9-4d611b8cac16 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:39:57 compute-0 nova_compute[248510]: 2025-12-13 08:39:57.124 248514 INFO nova.compute.manager [None req-77d70a0e-a256-4cc1-b7b9-4d611b8cac16 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Dec 13 08:39:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 723 KiB/s wr, 138 op/s
Dec 13 08:39:58 compute-0 nova_compute[248510]: 2025-12-13 08:39:58.192 248514 DEBUG nova.network.neutron [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Updating instance_info_cache with network_info: [{"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:58 compute-0 nova_compute[248510]: 2025-12-13 08:39:58.212 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Releasing lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:39:58 compute-0 nova_compute[248510]: 2025-12-13 08:39:58.524 248514 DEBUG nova.compute.manager [req-2c7683eb-f9c4-4615-9107-a01a84eec28b req-787c7991-32b2-4688-870b-7d94cb304592 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:39:58 compute-0 nova_compute[248510]: 2025-12-13 08:39:58.524 248514 DEBUG oslo_concurrency.lockutils [req-2c7683eb-f9c4-4615-9107-a01a84eec28b req-787c7991-32b2-4688-870b-7d94cb304592 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:58 compute-0 nova_compute[248510]: 2025-12-13 08:39:58.524 248514 DEBUG oslo_concurrency.lockutils [req-2c7683eb-f9c4-4615-9107-a01a84eec28b req-787c7991-32b2-4688-870b-7d94cb304592 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:58 compute-0 nova_compute[248510]: 2025-12-13 08:39:58.525 248514 DEBUG oslo_concurrency.lockutils [req-2c7683eb-f9c4-4615-9107-a01a84eec28b req-787c7991-32b2-4688-870b-7d94cb304592 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:58 compute-0 nova_compute[248510]: 2025-12-13 08:39:58.525 248514 DEBUG nova.compute.manager [req-2c7683eb-f9c4-4615-9107-a01a84eec28b req-787c7991-32b2-4688-870b-7d94cb304592 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] No waiting events found dispatching network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:39:58 compute-0 nova_compute[248510]: 2025-12-13 08:39:58.525 248514 WARNING nova.compute.manager [req-2c7683eb-f9c4-4615-9107-a01a84eec28b req-787c7991-32b2-4688-870b-7d94cb304592 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received unexpected event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b for instance with vm_state active and task_state deleting.
Dec 13 08:39:58 compute-0 nova_compute[248510]: 2025-12-13 08:39:58.776 248514 DEBUG nova.network.neutron [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:39:58 compute-0 nova_compute[248510]: 2025-12-13 08:39:58.811 248514 INFO nova.compute.manager [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Took 2.48 seconds to deallocate network for instance.
Dec 13 08:39:58 compute-0 ceph-mon[76537]: pgmap v2308: 321 pgs: 321 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 723 KiB/s wr, 138 op/s
Dec 13 08:39:58 compute-0 nova_compute[248510]: 2025-12-13 08:39:58.865 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:58 compute-0 nova_compute[248510]: 2025-12-13 08:39:58.866 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.003 248514 DEBUG oslo_concurrency.processutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:39:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/298707572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.598 248514 DEBUG oslo_concurrency.processutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.607 248514 DEBUG nova.compute.provider_tree [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.643 248514 DEBUG nova.scheduler.client.report [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.673 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.700 248514 INFO nova.scheduler.client.report [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Deleted allocations for instance 70a00398-fa02-482c-a33e-f190895c8d28
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:39:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 313 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.2 KiB/s wr, 146 op/s
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.813 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.814 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.815 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:39:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/298707572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.864 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.873 248514 INFO nova.virt.libvirt.driver [-] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance destroyed successfully.
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.875 248514 DEBUG nova.objects.instance [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'resources' on Instance uuid 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.896 248514 DEBUG nova.virt.libvirt.vif [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:39:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1902812623',display_name='tempest-ServerActionsTestOtherB-server-1902812623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1902812623',id=92,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:39:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-8adwm7jj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member',shelved_at='2025-12-13T08:39:54.199494',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='764d0410-b25b-4414-843f-9f74e17a2d49'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:39:43Z,user_data=None,user_id='7507939da64e4320a1c6f389d0fc9045',uuid=2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.897 248514 DEBUG nova.network.os_vif_util [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.898 248514 DEBUG nova.network.os_vif_util [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.899 248514 DEBUG os_vif [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.902 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.903 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac5a6aec-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.908 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:39:59 compute-0 nova_compute[248510]: 2025-12-13 08:39:59.914 248514 INFO os_vif [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff')
Dec 13 08:40:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:40:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602304390' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.421 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.521 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.521 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.524 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.524 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.527 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.527 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.706 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.708 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3405MB free_disk=59.87046145275235GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.708 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.708 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.791 248514 DEBUG nova.compute.manager [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-vif-deleted-004ae3d0-5827-4393-953d-aa704915956b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.792 248514 DEBUG nova.compute.manager [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received event network-changed-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.792 248514 DEBUG nova.compute.manager [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Refreshing instance network info cache due to event network-changed-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.792 248514 DEBUG oslo_concurrency.lockutils [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.792 248514 DEBUG oslo_concurrency.lockutils [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.793 248514 DEBUG nova.network.neutron [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Refreshing network info cache for port ac5a6aec-ff77-4185-a9cb-f95e9ee9461a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.806 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9b486227-b98c-4393-9a3c-aae3e3c419a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 658e5f04-399b-4a8a-8680-5ae9717949c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.851 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:00 compute-0 ceph-mon[76537]: pgmap v2309: 321 pgs: 321 active+clean; 313 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.2 KiB/s wr, 146 op/s
Dec 13 08:40:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1602304390' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:40:00 compute-0 nova_compute[248510]: 2025-12-13 08:40:00.908 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:40:01 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3083906428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:40:01 compute-0 nova_compute[248510]: 2025-12-13 08:40:01.466 248514 INFO nova.virt.libvirt.driver [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Deleting instance files /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_del
Dec 13 08:40:01 compute-0 nova_compute[248510]: 2025-12-13 08:40:01.467 248514 INFO nova.virt.libvirt.driver [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Deletion of /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_del complete
Dec 13 08:40:01 compute-0 nova_compute[248510]: 2025-12-13 08:40:01.480 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:01 compute-0 nova_compute[248510]: 2025-12-13 08:40:01.485 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:40:01 compute-0 nova_compute[248510]: 2025-12-13 08:40:01.508 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:40:01 compute-0 nova_compute[248510]: 2025-12-13 08:40:01.550 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:40:01 compute-0 nova_compute[248510]: 2025-12-13 08:40:01.550 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:01 compute-0 nova_compute[248510]: 2025-12-13 08:40:01.608 248514 INFO nova.scheduler.client.report [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Deleted allocations for instance 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be
Dec 13 08:40:01 compute-0 nova_compute[248510]: 2025-12-13 08:40:01.684 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:01 compute-0 nova_compute[248510]: 2025-12-13 08:40:01.684 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:40:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 144 op/s
Dec 13 08:40:01 compute-0 nova_compute[248510]: 2025-12-13 08:40:01.792 248514 DEBUG oslo_concurrency.processutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:01 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3083906428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:40:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:40:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771570506' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:40:02 compute-0 nova_compute[248510]: 2025-12-13 08:40:02.408 248514 DEBUG oslo_concurrency.processutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:02 compute-0 nova_compute[248510]: 2025-12-13 08:40:02.415 248514 DEBUG nova.compute.provider_tree [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:40:02 compute-0 nova_compute[248510]: 2025-12-13 08:40:02.439 248514 DEBUG nova.scheduler.client.report [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:40:02 compute-0 nova_compute[248510]: 2025-12-13 08:40:02.477 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:02 compute-0 nova_compute[248510]: 2025-12-13 08:40:02.552 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 20.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:02 compute-0 nova_compute[248510]: 2025-12-13 08:40:02.964 248514 DEBUG nova.network.neutron [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Updated VIF entry in instance network info cache for port ac5a6aec-ff77-4185-a9cb-f95e9ee9461a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:40:02 compute-0 nova_compute[248510]: 2025-12-13 08:40:02.965 248514 DEBUG nova.network.neutron [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Updating instance_info_cache with network_info: [{"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": null, "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:40:02 compute-0 nova_compute[248510]: 2025-12-13 08:40:02.998 248514 DEBUG oslo_concurrency.lockutils [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:40:03 compute-0 ceph-mon[76537]: pgmap v2310: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 144 op/s
Dec 13 08:40:03 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/771570506' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:40:03 compute-0 nova_compute[248510]: 2025-12-13 08:40:03.550 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:03 compute-0 nova_compute[248510]: 2025-12-13 08:40:03.550 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:03 compute-0 nova_compute[248510]: 2025-12-13 08:40:03.551 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:40:03 compute-0 ovn_controller[148476]: 2025-12-13T08:40:03Z|00917|binding|INFO|Releasing lport 6c33e57f-b143-4ed7-a271-dc33d6e81057 from this chassis (sb_readonly=0)
Dec 13 08:40:03 compute-0 nova_compute[248510]: 2025-12-13 08:40:03.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 284 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.2 KiB/s wr, 102 op/s
Dec 13 08:40:03 compute-0 podman[338919]: 2025-12-13 08:40:03.996984454 +0000 UTC m=+0.079557467 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 13 08:40:04 compute-0 podman[338917]: 2025-12-13 08:40:04.004943002 +0000 UTC m=+0.086892419 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:40:04 compute-0 podman[338918]: 2025-12-13 08:40:04.004978263 +0000 UTC m=+0.085611078 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:40:04 compute-0 nova_compute[248510]: 2025-12-13 08:40:04.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:05 compute-0 ceph-mon[76537]: pgmap v2311: 321 pgs: 321 active+clean; 284 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.2 KiB/s wr, 102 op/s
Dec 13 08:40:05 compute-0 nova_compute[248510]: 2025-12-13 08:40:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 12 KiB/s wr, 74 op/s
Dec 13 08:40:05 compute-0 nova_compute[248510]: 2025-12-13 08:40:05.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:40:06 compute-0 ceph-mon[76537]: pgmap v2312: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 12 KiB/s wr, 74 op/s
Dec 13 08:40:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 11 KiB/s wr, 67 op/s
Dec 13 08:40:07 compute-0 nova_compute[248510]: 2025-12-13 08:40:07.957 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:07 compute-0 nova_compute[248510]: 2025-12-13 08:40:07.958 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:07 compute-0 nova_compute[248510]: 2025-12-13 08:40:07.958 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 08:40:09 compute-0 ceph-mon[76537]: pgmap v2313: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 11 KiB/s wr, 67 op/s
Dec 13 08:40:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:40:09
Dec 13 08:40:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:40:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:40:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'default.rgw.log', 'backups', 'default.rgw.control', 'vms', 'volumes']
Dec 13 08:40:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:40:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 10 KiB/s wr, 61 op/s
Dec 13 08:40:09 compute-0 nova_compute[248510]: 2025-12-13 08:40:09.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:40:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:40:10 compute-0 nova_compute[248510]: 2025-12-13 08:40:10.920 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:10 compute-0 nova_compute[248510]: 2025-12-13 08:40:10.941 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615195.9397519, 70a00398-fa02-482c-a33e-f190895c8d28 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:40:10 compute-0 nova_compute[248510]: 2025-12-13 08:40:10.942 248514 INFO nova.compute.manager [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] VM Stopped (Lifecycle Event)
Dec 13 08:40:10 compute-0 ceph-mon[76537]: pgmap v2314: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 10 KiB/s wr, 61 op/s
Dec 13 08:40:11 compute-0 nova_compute[248510]: 2025-12-13 08:40:11.027 248514 DEBUG nova.compute.manager [None req-74174837-704d-49e6-9cf0-233c092090c9 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:40:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 9.9 KiB/s wr, 35 op/s
Dec 13 08:40:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:40:11 compute-0 nova_compute[248510]: 2025-12-13 08:40:11.919 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:11 compute-0 nova_compute[248510]: 2025-12-13 08:40:11.919 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:11 compute-0 nova_compute[248510]: 2025-12-13 08:40:11.919 248514 INFO nova.compute.manager [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Shelving
Dec 13 08:40:11 compute-0 nova_compute[248510]: 2025-12-13 08:40:11.945 248514 DEBUG nova.virt.libvirt.driver [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:40:12 compute-0 ceph-mon[76537]: pgmap v2315: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 9.9 KiB/s wr, 35 op/s
Dec 13 08:40:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 13 08:40:13 compute-0 nova_compute[248510]: 2025-12-13 08:40:13.801 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:14 compute-0 kernel: tapea5aafe7-a7 (unregistering): left promiscuous mode
Dec 13 08:40:14 compute-0 NetworkManager[50376]: <info>  [1765615214.4369] device (tapea5aafe7-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:40:14 compute-0 nova_compute[248510]: 2025-12-13 08:40:14.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:14 compute-0 ovn_controller[148476]: 2025-12-13T08:40:14Z|00918|binding|INFO|Releasing lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 from this chassis (sb_readonly=0)
Dec 13 08:40:14 compute-0 ovn_controller[148476]: 2025-12-13T08:40:14Z|00919|binding|INFO|Setting lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 down in Southbound
Dec 13 08:40:14 compute-0 ovn_controller[148476]: 2025-12-13T08:40:14Z|00920|binding|INFO|Removing iface tapea5aafe7-a7 ovn-installed in OVS
Dec 13 08:40:14 compute-0 nova_compute[248510]: 2025-12-13 08:40:14.446 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.453 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:82:3a 10.100.0.3'], port_security=['fa:16:3e:e0:82:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b486227-b98c-4393-9a3c-aae3e3c419a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '35769c0b-1e0e-43bc-832c-d54c65a53a36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.454 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 in datapath 369f7528-6571-47b6-a030-5281647e1eac unbound from our chassis
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.456 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac
Dec 13 08:40:14 compute-0 nova_compute[248510]: 2025-12-13 08:40:14.461 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.478 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0149cac-e8e6-4e49-af00-07aeb0edd33b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:14 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000055.scope: Deactivated successfully.
Dec 13 08:40:14 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000055.scope: Consumed 18.396s CPU time.
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.509 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f4a5f3-7ea2-43f9-a2df-98bbdbfa1aeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:14 compute-0 systemd-machined[210538]: Machine qemu-105-instance-00000055 terminated.
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.513 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dae2f18f-2cfe-45ef-b110-eb42b82ade61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.541 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[74915ce2-c23d-49f1-8346-9a6d8c0ddf6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.561 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a66b3529-4531-4c54-8a5a-930648d8b8c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 34886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338992, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.579 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af967b49-d845-4970-ba4b-4af782b465a7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763117, 'tstamp': 763117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338993, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763121, 'tstamp': 763121}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338993, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.581 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:14 compute-0 nova_compute[248510]: 2025-12-13 08:40:14.582 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:14 compute-0 nova_compute[248510]: 2025-12-13 08:40:14.587 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.588 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.589 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.589 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.589 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:40:14 compute-0 nova_compute[248510]: 2025-12-13 08:40:14.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:14 compute-0 nova_compute[248510]: 2025-12-13 08:40:14.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 08:40:14 compute-0 nova_compute[248510]: 2025-12-13 08:40:14.838 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 08:40:14 compute-0 nova_compute[248510]: 2025-12-13 08:40:14.912 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:14 compute-0 nova_compute[248510]: 2025-12-13 08:40:14.964 248514 INFO nova.virt.libvirt.driver [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance shutdown successfully after 3 seconds.
Dec 13 08:40:14 compute-0 nova_compute[248510]: 2025-12-13 08:40:14.969 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance destroyed successfully.
Dec 13 08:40:14 compute-0 nova_compute[248510]: 2025-12-13 08:40:14.969 248514 DEBUG nova.objects.instance [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:15 compute-0 nova_compute[248510]: 2025-12-13 08:40:15.090 248514 DEBUG nova.compute.manager [req-708fd4db-aee2-46bd-ae74-e0d0be4aa784 req-17c8062b-5982-40e4-aa4a-d8e880140db2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-unplugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:40:15 compute-0 nova_compute[248510]: 2025-12-13 08:40:15.091 248514 DEBUG oslo_concurrency.lockutils [req-708fd4db-aee2-46bd-ae74-e0d0be4aa784 req-17c8062b-5982-40e4-aa4a-d8e880140db2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:15 compute-0 nova_compute[248510]: 2025-12-13 08:40:15.091 248514 DEBUG oslo_concurrency.lockutils [req-708fd4db-aee2-46bd-ae74-e0d0be4aa784 req-17c8062b-5982-40e4-aa4a-d8e880140db2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:15 compute-0 nova_compute[248510]: 2025-12-13 08:40:15.091 248514 DEBUG oslo_concurrency.lockutils [req-708fd4db-aee2-46bd-ae74-e0d0be4aa784 req-17c8062b-5982-40e4-aa4a-d8e880140db2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:15 compute-0 nova_compute[248510]: 2025-12-13 08:40:15.091 248514 DEBUG nova.compute.manager [req-708fd4db-aee2-46bd-ae74-e0d0be4aa784 req-17c8062b-5982-40e4-aa4a-d8e880140db2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] No waiting events found dispatching network-vif-unplugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:40:15 compute-0 nova_compute[248510]: 2025-12-13 08:40:15.092 248514 WARNING nova.compute.manager [req-708fd4db-aee2-46bd-ae74-e0d0be4aa784 req-17c8062b-5982-40e4-aa4a-d8e880140db2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received unexpected event network-vif-unplugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for instance with vm_state active and task_state shelving.
Dec 13 08:40:15 compute-0 ceph-mon[76537]: pgmap v2316: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 13 08:40:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:40:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2771698614' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:40:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:40:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2771698614' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:40:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 17 KiB/s wr, 19 op/s
Dec 13 08:40:15 compute-0 nova_compute[248510]: 2025-12-13 08:40:15.829 248514 INFO nova.virt.libvirt.driver [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Beginning cold snapshot process
Dec 13 08:40:16 compute-0 nova_compute[248510]: 2025-12-13 08:40:16.015 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:16 compute-0 nova_compute[248510]: 2025-12-13 08:40:16.023 248514 DEBUG nova.virt.libvirt.imagebackend [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:40:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2771698614' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:40:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2771698614' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:40:16 compute-0 ceph-mon[76537]: pgmap v2317: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 17 KiB/s wr, 19 op/s
Dec 13 08:40:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:40:16 compute-0 nova_compute[248510]: 2025-12-13 08:40:16.909 248514 DEBUG nova.storage.rbd_utils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(687ede9d59134649b503496d80eb8703) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:40:17 compute-0 nova_compute[248510]: 2025-12-13 08:40:17.340 248514 DEBUG nova.compute.manager [req-86118057-169d-4afd-9ec4-039a79e6b9ef req-641b403e-c5ce-4a86-becd-1640dcdae7e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:40:17 compute-0 nova_compute[248510]: 2025-12-13 08:40:17.340 248514 DEBUG oslo_concurrency.lockutils [req-86118057-169d-4afd-9ec4-039a79e6b9ef req-641b403e-c5ce-4a86-becd-1640dcdae7e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:17 compute-0 nova_compute[248510]: 2025-12-13 08:40:17.340 248514 DEBUG oslo_concurrency.lockutils [req-86118057-169d-4afd-9ec4-039a79e6b9ef req-641b403e-c5ce-4a86-becd-1640dcdae7e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:17 compute-0 nova_compute[248510]: 2025-12-13 08:40:17.341 248514 DEBUG oslo_concurrency.lockutils [req-86118057-169d-4afd-9ec4-039a79e6b9ef req-641b403e-c5ce-4a86-becd-1640dcdae7e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:17 compute-0 nova_compute[248510]: 2025-12-13 08:40:17.341 248514 DEBUG nova.compute.manager [req-86118057-169d-4afd-9ec4-039a79e6b9ef req-641b403e-c5ce-4a86-becd-1640dcdae7e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] No waiting events found dispatching network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:40:17 compute-0 nova_compute[248510]: 2025-12-13 08:40:17.341 248514 WARNING nova.compute.manager [req-86118057-169d-4afd-9ec4-039a79e6b9ef req-641b403e-c5ce-4a86-becd-1640dcdae7e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received unexpected event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for instance with vm_state active and task_state shelving_image_uploading.
Dec 13 08:40:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Dec 13 08:40:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Dec 13 08:40:17 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Dec 13 08:40:17 compute-0 nova_compute[248510]: 2025-12-13 08:40:17.456 248514 DEBUG nova.storage.rbd_utils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] cloning vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk@687ede9d59134649b503496d80eb8703 to images/f328af0c-ddf7-42aa-aa98-c602105202af clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:40:17 compute-0 nova_compute[248510]: 2025-12-13 08:40:17.541 248514 DEBUG nova.storage.rbd_utils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] flattening images/f328af0c-ddf7-42aa-aa98-c602105202af flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:40:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 10 KiB/s wr, 2 op/s
Dec 13 08:40:17 compute-0 nova_compute[248510]: 2025-12-13 08:40:17.999 248514 DEBUG nova.storage.rbd_utils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] removing snapshot(687ede9d59134649b503496d80eb8703) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:40:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Dec 13 08:40:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Dec 13 08:40:18 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Dec 13 08:40:18 compute-0 ceph-mon[76537]: osdmap e255: 3 total, 3 up, 3 in
Dec 13 08:40:18 compute-0 ceph-mon[76537]: pgmap v2319: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 10 KiB/s wr, 2 op/s
Dec 13 08:40:18 compute-0 nova_compute[248510]: 2025-12-13 08:40:18.448 248514 DEBUG nova.storage.rbd_utils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(snap) on rbd image(f328af0c-ddf7-42aa-aa98-c602105202af) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:40:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Dec 13 08:40:19 compute-0 ceph-mon[76537]: osdmap e256: 3 total, 3 up, 3 in
Dec 13 08:40:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Dec 13 08:40:19 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Dec 13 08:40:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 285 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.1 MiB/s wr, 66 op/s
Dec 13 08:40:19 compute-0 nova_compute[248510]: 2025-12-13 08:40:19.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:20 compute-0 ceph-mon[76537]: osdmap e257: 3 total, 3 up, 3 in
Dec 13 08:40:20 compute-0 ceph-mon[76537]: pgmap v2322: 321 pgs: 321 active+clean; 285 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.1 MiB/s wr, 66 op/s
Dec 13 08:40:20 compute-0 nova_compute[248510]: 2025-12-13 08:40:20.973 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001530852208824031 of space, bias 1.0, pg target 0.4592556626472093 quantized to 32 (current 32)
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014091162288174185 of space, bias 1.0, pg target 0.4227348686452256 quantized to 32 (current 32)
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.938651746951207e-07 of space, bias 4.0, pg target 0.0007126382096341448 quantized to 16 (current 32)
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:40:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 320 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 146 op/s
Dec 13 08:40:22 compute-0 nova_compute[248510]: 2025-12-13 08:40:22.342 248514 INFO nova.virt.libvirt.driver [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Snapshot image upload complete
Dec 13 08:40:22 compute-0 nova_compute[248510]: 2025-12-13 08:40:22.342 248514 DEBUG nova.compute.manager [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:40:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:40:22 compute-0 ceph-mon[76537]: pgmap v2323: 321 pgs: 321 active+clean; 320 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 146 op/s
Dec 13 08:40:22 compute-0 nova_compute[248510]: 2025-12-13 08:40:22.425 248514 INFO nova.compute.manager [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Shelve offloading
Dec 13 08:40:22 compute-0 nova_compute[248510]: 2025-12-13 08:40:22.434 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance destroyed successfully.
Dec 13 08:40:22 compute-0 nova_compute[248510]: 2025-12-13 08:40:22.435 248514 DEBUG nova.compute.manager [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:40:22 compute-0 nova_compute[248510]: 2025-12-13 08:40:22.438 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:40:22 compute-0 nova_compute[248510]: 2025-12-13 08:40:22.438 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:40:22 compute-0 nova_compute[248510]: 2025-12-13 08:40:22.438 248514 DEBUG nova.network.neutron [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:40:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.3 MiB/s wr, 146 op/s
Dec 13 08:40:23 compute-0 nova_compute[248510]: 2025-12-13 08:40:23.800 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:24 compute-0 ceph-mon[76537]: pgmap v2324: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.3 MiB/s wr, 146 op/s
Dec 13 08:40:24 compute-0 nova_compute[248510]: 2025-12-13 08:40:24.915 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 129 op/s
Dec 13 08:40:25 compute-0 nova_compute[248510]: 2025-12-13 08:40:25.975 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:26 compute-0 ceph-mon[76537]: pgmap v2325: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 129 op/s
Dec 13 08:40:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:40:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Dec 13 08:40:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Dec 13 08:40:27 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Dec 13 08:40:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 78 op/s
Dec 13 08:40:28 compute-0 nova_compute[248510]: 2025-12-13 08:40:28.238 248514 DEBUG nova.network.neutron [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:40:28 compute-0 ceph-mon[76537]: osdmap e258: 3 total, 3 up, 3 in
Dec 13 08:40:28 compute-0 ceph-mon[76537]: pgmap v2327: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 78 op/s
Dec 13 08:40:28 compute-0 nova_compute[248510]: 2025-12-13 08:40:28.734 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:40:29 compute-0 nova_compute[248510]: 2025-12-13 08:40:29.679 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615214.6782806, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:40:29 compute-0 nova_compute[248510]: 2025-12-13 08:40:29.680 248514 INFO nova.compute.manager [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Stopped (Lifecycle Event)
Dec 13 08:40:29 compute-0 nova_compute[248510]: 2025-12-13 08:40:29.729 248514 DEBUG nova.compute.manager [None req-41c58ff6-cc5f-49f7-ae18-ea71ff4c1d42 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:40:29 compute-0 nova_compute[248510]: 2025-12-13 08:40:29.732 248514 DEBUG nova.compute.manager [None req-41c58ff6-cc5f-49f7-ae18-ea71ff4c1d42 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:40:29 compute-0 nova_compute[248510]: 2025-12-13 08:40:29.766 248514 INFO nova.compute.manager [None req-41c58ff6-cc5f-49f7-ae18-ea71ff4c1d42 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Dec 13 08:40:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 65 op/s
Dec 13 08:40:29 compute-0 nova_compute[248510]: 2025-12-13 08:40:29.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:30 compute-0 nova_compute[248510]: 2025-12-13 08:40:30.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:30 compute-0 ceph-mon[76537]: pgmap v2328: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 65 op/s
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.051 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.059 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance destroyed successfully.
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.059 248514 DEBUG nova.objects.instance [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'resources' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.076 248514 DEBUG nova.virt.libvirt.vif [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:37:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-647049604',display_name='tempest-ServerActionsTestOtherB-server-647049604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-647049604',id=85,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMCBc3yf7DLFWm969JJ3AJvRq1SqBawRmsOjScixeqlFSyjq4/Kpbcw0olzxybOT1DbERtB0mKMV4pquo3M97LIG1LWOqbG4HPkmobMKh41xqoYhtSOyaVVjlfwlnNokPA==',key_name='tempest-keypair-2123324397',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:37:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-ys1cli8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member',shelved_at='2025-12-13T08:40:22.342504',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f328af0c-ddf7-42aa-aa98-c602105202af'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:40:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7507939da64e4320a1c6f389d0fc9045',uuid=9b486227-b98c-4393-9a3c-aae3e3c419a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.076 248514 DEBUG nova.network.os_vif_util [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.077 248514 DEBUG nova.network.os_vif_util [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.078 248514 DEBUG os_vif [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.080 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea5aafe7-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.083 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.085 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.087 248514 INFO os_vif [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7')
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.346 248514 DEBUG nova.compute.manager [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-changed-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.346 248514 DEBUG nova.compute.manager [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Refreshing instance network info cache due to event network-changed-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.346 248514 DEBUG oslo_concurrency.lockutils [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.346 248514 DEBUG oslo_concurrency.lockutils [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.347 248514 DEBUG nova.network.neutron [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Refreshing network info cache for port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.379 248514 INFO nova.virt.libvirt.driver [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deleting instance files /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8_del
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.379 248514 INFO nova.virt.libvirt.driver [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deletion of /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8_del complete
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.501 248514 INFO nova.scheduler.client.report [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Deleted allocations for instance 9b486227-b98c-4393-9a3c-aae3e3c419a8
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.561 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.562 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:31 compute-0 nova_compute[248510]: 2025-12-13 08:40:31.619 248514 DEBUG oslo_concurrency.processutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 7.8 KiB/s wr, 15 op/s
Dec 13 08:40:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:40:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3382742547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:40:32 compute-0 nova_compute[248510]: 2025-12-13 08:40:32.186 248514 DEBUG oslo_concurrency.processutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:32 compute-0 nova_compute[248510]: 2025-12-13 08:40:32.194 248514 DEBUG nova.compute.provider_tree [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:40:32 compute-0 nova_compute[248510]: 2025-12-13 08:40:32.216 248514 DEBUG nova.scheduler.client.report [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:40:32 compute-0 nova_compute[248510]: 2025-12-13 08:40:32.247 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:32 compute-0 nova_compute[248510]: 2025-12-13 08:40:32.302 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 20.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:40:32 compute-0 ceph-mon[76537]: pgmap v2329: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 7.8 KiB/s wr, 15 op/s
Dec 13 08:40:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3382742547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:40:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 298 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 9.8 KiB/s rd, 716 B/s wr, 13 op/s
Dec 13 08:40:34 compute-0 nova_compute[248510]: 2025-12-13 08:40:34.359 248514 DEBUG nova.network.neutron [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updated VIF entry in instance network info cache for port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:40:34 compute-0 nova_compute[248510]: 2025-12-13 08:40:34.360 248514 DEBUG nova.network.neutron [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": null, "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:40:34 compute-0 nova_compute[248510]: 2025-12-13 08:40:34.394 248514 DEBUG oslo_concurrency.lockutils [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:40:34 compute-0 ceph-mon[76537]: pgmap v2330: 321 pgs: 321 active+clean; 298 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 9.8 KiB/s rd, 716 B/s wr, 13 op/s
Dec 13 08:40:34 compute-0 podman[339191]: 2025-12-13 08:40:34.976095546 +0000 UTC m=+0.048546234 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 08:40:34 compute-0 podman[339190]: 2025-12-13 08:40:34.983339946 +0000 UTC m=+0.062483199 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:40:35 compute-0 podman[339189]: 2025-12-13 08:40:35.003042053 +0000 UTC m=+0.082115904 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:40:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 246 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 13 08:40:36 compute-0 nova_compute[248510]: 2025-12-13 08:40:36.054 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:36 compute-0 nova_compute[248510]: 2025-12-13 08:40:36.084 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:36 compute-0 nova_compute[248510]: 2025-12-13 08:40:36.261 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:36 compute-0 nova_compute[248510]: 2025-12-13 08:40:36.262 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:36 compute-0 nova_compute[248510]: 2025-12-13 08:40:36.294 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:40:36 compute-0 nova_compute[248510]: 2025-12-13 08:40:36.433 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:36 compute-0 nova_compute[248510]: 2025-12-13 08:40:36.434 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:36 compute-0 nova_compute[248510]: 2025-12-13 08:40:36.445 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:40:36 compute-0 nova_compute[248510]: 2025-12-13 08:40:36.445 248514 INFO nova.compute.claims [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:40:36 compute-0 nova_compute[248510]: 2025-12-13 08:40:36.604 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:40:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256936633' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.262 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.270 248514 DEBUG nova.compute.provider_tree [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.293 248514 DEBUG nova.scheduler.client.report [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:40:37 compute-0 ceph-mon[76537]: pgmap v2331: 321 pgs: 321 active+clean; 246 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.432 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.433 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.484 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.484 248514 DEBUG nova.network.neutron [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.514 248514 INFO nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.539 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:40:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.746 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.748 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.748 248514 INFO nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Creating image(s)
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.780 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:40:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 246 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.809 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.836 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.841 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.947 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.948 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.949 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.950 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.976 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:40:37 compute-0 nova_compute[248510]: 2025-12-13 08:40:37.980 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:38 compute-0 nova_compute[248510]: 2025-12-13 08:40:38.215 248514 DEBUG nova.policy [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:40:38 compute-0 nova_compute[248510]: 2025-12-13 08:40:38.291 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:38 compute-0 nova_compute[248510]: 2025-12-13 08:40:38.332 248514 WARNING nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Dec 13 08:40:38 compute-0 nova_compute[248510]: 2025-12-13 08:40:38.332 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 658e5f04-399b-4a8a-8680-5ae9717949c0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 08:40:38 compute-0 nova_compute[248510]: 2025-12-13 08:40:38.333 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 08:40:38 compute-0 nova_compute[248510]: 2025-12-13 08:40:38.334 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:38 compute-0 nova_compute[248510]: 2025-12-13 08:40:38.335 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:38 compute-0 nova_compute[248510]: 2025-12-13 08:40:38.336 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:38 compute-0 nova_compute[248510]: 2025-12-13 08:40:38.372 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3256936633' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:40:38 compute-0 ceph-mon[76537]: pgmap v2332: 321 pgs: 321 active+clean; 246 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 13 08:40:39 compute-0 nova_compute[248510]: 2025-12-13 08:40:39.002 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:39 compute-0 nova_compute[248510]: 2025-12-13 08:40:39.003 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:39 compute-0 nova_compute[248510]: 2025-12-13 08:40:39.003 248514 INFO nova.compute.manager [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Unshelving
Dec 13 08:40:39 compute-0 nova_compute[248510]: 2025-12-13 08:40:39.107 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:39 compute-0 nova_compute[248510]: 2025-12-13 08:40:39.108 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:39 compute-0 nova_compute[248510]: 2025-12-13 08:40:39.112 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:39 compute-0 nova_compute[248510]: 2025-12-13 08:40:39.131 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:39 compute-0 nova_compute[248510]: 2025-12-13 08:40:39.146 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:40:39 compute-0 nova_compute[248510]: 2025-12-13 08:40:39.147 248514 INFO nova.compute.claims [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:40:39 compute-0 nova_compute[248510]: 2025-12-13 08:40:39.321 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:39 compute-0 nova_compute[248510]: 2025-12-13 08:40:39.709 248514 DEBUG nova.network.neutron [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Successfully created port: eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:40:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 248 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 93 KiB/s wr, 28 op/s
Dec 13 08:40:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:40:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2681529047' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:40:40 compute-0 nova_compute[248510]: 2025-12-13 08:40:40.062 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:40 compute-0 nova_compute[248510]: 2025-12-13 08:40:40.068 248514 DEBUG nova.compute.provider_tree [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:40:40 compute-0 nova_compute[248510]: 2025-12-13 08:40:40.087 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:40:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:40:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:40:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:40:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:40:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:40:40 compute-0 nova_compute[248510]: 2025-12-13 08:40:40.171 248514 DEBUG nova.scheduler.client.report [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:40:40 compute-0 nova_compute[248510]: 2025-12-13 08:40:40.202 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:40:40 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2681529047' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:40:40 compute-0 nova_compute[248510]: 2025-12-13 08:40:40.334 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:40 compute-0 nova_compute[248510]: 2025-12-13 08:40:40.821 248514 INFO nova.network.neutron [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 13 08:40:41 compute-0 nova_compute[248510]: 2025-12-13 08:40:41.055 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:41 compute-0 nova_compute[248510]: 2025-12-13 08:40:41.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:41 compute-0 nova_compute[248510]: 2025-12-13 08:40:41.314 248514 DEBUG nova.objects.instance [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:41 compute-0 nova_compute[248510]: 2025-12-13 08:40:41.340 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:40:41 compute-0 nova_compute[248510]: 2025-12-13 08:40:41.340 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Ensure instance console log exists: /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:40:41 compute-0 nova_compute[248510]: 2025-12-13 08:40:41.340 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:41 compute-0 nova_compute[248510]: 2025-12-13 08:40:41.341 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:41 compute-0 nova_compute[248510]: 2025-12-13 08:40:41.341 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:41 compute-0 ceph-mon[76537]: pgmap v2333: 321 pgs: 321 active+clean; 248 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 93 KiB/s wr, 28 op/s
Dec 13 08:40:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 272 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 711 KiB/s wr, 42 op/s
Dec 13 08:40:42 compute-0 nova_compute[248510]: 2025-12-13 08:40:42.386 248514 DEBUG nova.network.neutron [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Successfully updated port: eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:40:42 compute-0 nova_compute[248510]: 2025-12-13 08:40:42.406 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:40:42 compute-0 nova_compute[248510]: 2025-12-13 08:40:42.407 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:40:42 compute-0 nova_compute[248510]: 2025-12-13 08:40:42.407 248514 DEBUG nova.network.neutron [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:40:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:40:42 compute-0 ceph-mon[76537]: pgmap v2334: 321 pgs: 321 active+clean; 272 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 711 KiB/s wr, 42 op/s
Dec 13 08:40:43 compute-0 nova_compute[248510]: 2025-12-13 08:40:43.104 248514 DEBUG nova.network.neutron [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:40:43 compute-0 nova_compute[248510]: 2025-12-13 08:40:43.140 248514 DEBUG nova.compute.manager [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-changed-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:40:43 compute-0 nova_compute[248510]: 2025-12-13 08:40:43.140 248514 DEBUG nova.compute.manager [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Refreshing instance network info cache due to event network-changed-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:40:43 compute-0 nova_compute[248510]: 2025-12-13 08:40:43.141 248514 DEBUG oslo_concurrency.lockutils [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:40:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 283 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1020 KiB/s wr, 43 op/s
Dec 13 08:40:44 compute-0 ceph-mon[76537]: pgmap v2335: 321 pgs: 321 active+clean; 283 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1020 KiB/s wr, 43 op/s
Dec 13 08:40:44 compute-0 nova_compute[248510]: 2025-12-13 08:40:44.767 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:40:44 compute-0 nova_compute[248510]: 2025-12-13 08:40:44.768 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:40:44 compute-0 nova_compute[248510]: 2025-12-13 08:40:44.768 248514 DEBUG nova.network.neutron [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:40:45 compute-0 nova_compute[248510]: 2025-12-13 08:40:45.458 248514 DEBUG nova.compute.manager [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-changed-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:40:45 compute-0 nova_compute[248510]: 2025-12-13 08:40:45.458 248514 DEBUG nova.compute.manager [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Refreshing instance network info cache due to event network-changed-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:40:45 compute-0 nova_compute[248510]: 2025-12-13 08:40:45.459 248514 DEBUG oslo_concurrency.lockutils [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:40:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 292 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.057 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.696 248514 DEBUG nova.network.neutron [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updating instance_info_cache with network_info: [{"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.758 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.759 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Instance network_info: |[{"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.759 248514 DEBUG oslo_concurrency.lockutils [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.759 248514 DEBUG nova.network.neutron [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Refreshing network info cache for port eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.764 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Start _get_guest_xml network_info=[{"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.769 248514 WARNING nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.776 248514 DEBUG nova.virt.libvirt.host [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.776 248514 DEBUG nova.virt.libvirt.host [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.781 248514 DEBUG nova.virt.libvirt.host [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.781 248514 DEBUG nova.virt.libvirt.host [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.782 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.782 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.782 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.783 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.783 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.783 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.783 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.784 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.784 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.784 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.784 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.785 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:40:46 compute-0 nova_compute[248510]: 2025-12-13 08:40:46.787 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:47 compute-0 ceph-mon[76537]: pgmap v2336: 321 pgs: 321 active+clean; 292 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Dec 13 08:40:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:40:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897695458' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:40:47 compute-0 nova_compute[248510]: 2025-12-13 08:40:47.355 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:47 compute-0 nova_compute[248510]: 2025-12-13 08:40:47.384 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:40:47 compute-0 nova_compute[248510]: 2025-12-13 08:40:47.390 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 292 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:40:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:40:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:40:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2285801360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:40:47 compute-0 nova_compute[248510]: 2025-12-13 08:40:47.985 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:47 compute-0 nova_compute[248510]: 2025-12-13 08:40:47.986 248514 DEBUG nova.virt.libvirt.vif [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:40:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-694899991',display_name='tempest-₡-694899991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--694899991',id=94,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-vc3q6lf7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=N
one,updated_at=2025-12-13T08:40:37Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:40:47 compute-0 nova_compute[248510]: 2025-12-13 08:40:47.987 248514 DEBUG nova.network.os_vif_util [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:40:47 compute-0 nova_compute[248510]: 2025-12-13 08:40:47.988 248514 DEBUG nova.network.os_vif_util [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:40:47 compute-0 nova_compute[248510]: 2025-12-13 08:40:47.989 248514 DEBUG nova.objects.instance [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.007 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:40:48 compute-0 nova_compute[248510]:   <uuid>9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c</uuid>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   <name>instance-0000005e</name>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <nova:name>tempest-₡-694899991</nova:name>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:40:46</nova:creationTime>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:40:48 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:40:48 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:40:48 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:40:48 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:40:48 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:40:48 compute-0 nova_compute[248510]:         <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 08:40:48 compute-0 nova_compute[248510]:         <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:40:48 compute-0 nova_compute[248510]:         <nova:port uuid="eacb4de9-7daa-4e47-a0b7-b0a986dc50f1">
Dec 13 08:40:48 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <system>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <entry name="serial">9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c</entry>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <entry name="uuid">9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c</entry>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     </system>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   <os>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   </os>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   <features>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   </features>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk">
Dec 13 08:40:48 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       </source>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:40:48 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk.config">
Dec 13 08:40:48 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       </source>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:40:48 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:b3:83:5f"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <target dev="tapeacb4de9-7d"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/console.log" append="off"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <video>
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     </video>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:40:48 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:40:48 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:40:48 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:40:48 compute-0 nova_compute[248510]: </domain>
Dec 13 08:40:48 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.008 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Preparing to wait for external event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.009 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.009 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.010 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.010 248514 DEBUG nova.virt.libvirt.vif [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:40:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-694899991',display_name='tempest-₡-694899991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--694899991',id=94,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-vc3q6lf7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trust
ed_certs=None,updated_at=2025-12-13T08:40:37Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.011 248514 DEBUG nova.network.os_vif_util [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.011 248514 DEBUG nova.network.os_vif_util [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.012 248514 DEBUG os_vif [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.012 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.013 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.013 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.016 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.016 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeacb4de9-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.017 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeacb4de9-7d, col_values=(('external_ids', {'iface-id': 'eacb4de9-7daa-4e47-a0b7-b0a986dc50f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:83:5f', 'vm-uuid': '9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.018 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:48 compute-0 NetworkManager[50376]: <info>  [1765615248.0196] manager: (tapeacb4de9-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.021 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.023 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.023 248514 INFO os_vif [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d')
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.202 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.203 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.203 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:b3:83:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.203 248514 INFO nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Using config drive
Dec 13 08:40:48 compute-0 nova_compute[248510]: 2025-12-13 08:40:48.228 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:40:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2897695458' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:40:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2285801360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.260 248514 INFO nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Creating config drive at /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/disk.config
Dec 13 08:40:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:49.265 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.266 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbq35jnxb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:49.266 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:40:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:40:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 45K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1524 writes, 7100 keys, 1524 commit groups, 1.0 writes per commit group, ingest: 9.94 MB, 0.02 MB/s
                                           Interval WAL: 1523 writes, 1523 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     16.7      3.36              0.19        30    0.112       0      0       0.0       0.0
                                             L6      1/0    9.15 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3     70.0     58.5      4.09              0.72        29    0.141    164K    16K       0.0       0.0
                                            Sum      1/0    9.15 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3     38.4     39.7      7.45              0.92        59    0.126    164K    16K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7     93.7     92.4      0.66              0.20        12    0.055     41K   3083       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     70.0     58.5      4.09              0.72        29    0.141    164K    16K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     16.8      3.35              0.19        29    0.116       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.055, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.29 GB write, 0.07 MB/s write, 0.28 GB read, 0.07 MB/s read, 7.5 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 33.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000724 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2101,32.40 MB,10.6574%) FilterBlock(60,456.05 KB,0.146499%) IndexBlock(60,792.34 KB,0.254531%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.332 248514 DEBUG nova.network.neutron [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.372 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.374 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.375 248514 INFO nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Creating image(s)
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.395 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.398 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.401 248514 DEBUG oslo_concurrency.lockutils [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.401 248514 DEBUG nova.network.neutron [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Refreshing network info cache for port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.413 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbq35jnxb" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.434 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.437 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/disk.config 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.493 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:40:49 compute-0 ceph-mon[76537]: pgmap v2337: 321 pgs: 321 active+clean; 292 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.513 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.517 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9013f4a5ad7d819d713aea28bc14bb02d0de3469" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.517 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9013f4a5ad7d819d713aea28bc14bb02d0de3469" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 292 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:40:49 compute-0 nova_compute[248510]: 2025-12-13 08:40:49.817 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:50 compute-0 nova_compute[248510]: 2025-12-13 08:40:50.651 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/disk.config 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:50 compute-0 nova_compute[248510]: 2025-12-13 08:40:50.652 248514 INFO nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Deleting local config drive /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/disk.config because it was imported into RBD.
Dec 13 08:40:50 compute-0 kernel: tapeacb4de9-7d: entered promiscuous mode
Dec 13 08:40:50 compute-0 NetworkManager[50376]: <info>  [1765615250.7237] manager: (tapeacb4de9-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Dec 13 08:40:50 compute-0 ovn_controller[148476]: 2025-12-13T08:40:50Z|00921|binding|INFO|Claiming lport eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 for this chassis.
Dec 13 08:40:50 compute-0 ovn_controller[148476]: 2025-12-13T08:40:50Z|00922|binding|INFO|eacb4de9-7daa-4e47-a0b7-b0a986dc50f1: Claiming fa:16:3e:b3:83:5f 10.100.0.5
Dec 13 08:40:50 compute-0 nova_compute[248510]: 2025-12-13 08:40:50.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:50 compute-0 ovn_controller[148476]: 2025-12-13T08:40:50Z|00923|binding|INFO|Setting lport eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 ovn-installed in OVS
Dec 13 08:40:50 compute-0 nova_compute[248510]: 2025-12-13 08:40:50.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:50 compute-0 systemd-udevd[339650]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:40:50 compute-0 systemd-machined[210538]: New machine qemu-116-instance-0000005e.
Dec 13 08:40:50 compute-0 ovn_controller[148476]: 2025-12-13T08:40:50Z|00924|binding|INFO|Setting lport eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 up in Southbound
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.767 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:83:5f 10.100.0.5'], port_security=['fa:16:3e:b3:83:5f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.769 158419 INFO neutron.agent.ovn.metadata.agent [-] Port eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis
Dec 13 08:40:50 compute-0 NetworkManager[50376]: <info>  [1765615250.7709] device (tapeacb4de9-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:40:50 compute-0 NetworkManager[50376]: <info>  [1765615250.7719] device (tapeacb4de9-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.772 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:40:50 compute-0 systemd[1]: Started Virtual Machine qemu-116-instance-0000005e.
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.786 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6e3c83-39c6-4e51-99e3-e574b91762c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.787 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f018c93-d1 in ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.789 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f018c93-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.789 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[318ae8bb-b169-44c0-bf13-11732bb4bcda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.790 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[145df534-0e4b-4286-b07b-1762d98613cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.802 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae4af74-3ba8-44bf-8236-f56dffdef1c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.831 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[701b6df4-da94-4d1a-817b-39b2468b4f9d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.860 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[69a8c840-f262-4901-b7fc-c0b200242bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.864 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[88a179ed-a9c3-4ac0-9087-841ae0e2e20c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:50 compute-0 systemd-udevd[339653]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:40:50 compute-0 NetworkManager[50376]: <info>  [1765615250.8661] manager: (tap8f018c93-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/390)
Dec 13 08:40:50 compute-0 nova_compute[248510]: 2025-12-13 08:40:50.877 248514 DEBUG nova.virt.libvirt.imagebackend [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/f328af0c-ddf7-42aa-aa98-c602105202af/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/f328af0c-ddf7-42aa-aa98-c602105202af/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.899 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2b426b4b-4e51-49be-bb30-83a37363fbc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.902 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9c56f5bd-da09-4280-94ba-826380f336f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:50 compute-0 NetworkManager[50376]: <info>  [1765615250.9269] device (tap8f018c93-d0): carrier: link connected
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.932 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[61744960-09c4-43c7-901a-c7d70b3bcdad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.950 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[56dfe86b-a185-487b-8975-761cb3d6fe24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339706, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.964 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[685147c0-df74-43e9-8a46-406a330c6e3f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:fc06'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782814, 'tstamp': 782814}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339707, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.986 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2a92f857-83be-42ee-b514-30776ee134c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339708, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.017 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[697a6994-69a6-45aa-9b46-002ee4413fb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.035 248514 DEBUG nova.virt.libvirt.imagebackend [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Selected location: {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/f328af0c-ddf7-42aa-aa98-c602105202af/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.036 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] cloning images/f328af0c-ddf7-42aa-aa98-c602105202af@snap to None/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.076 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71439b13-1f74-407b-8dfc-40cfb8d04c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:51 compute-0 ceph-mon[76537]: pgmap v2338: 321 pgs: 321 active+clean; 292 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.077 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.078 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.078 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:51 compute-0 kernel: tap8f018c93-d0: entered promiscuous mode
Dec 13 08:40:51 compute-0 NetworkManager[50376]: <info>  [1765615251.0810] manager: (tap8f018c93-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.082 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.085 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:51 compute-0 ovn_controller[148476]: 2025-12-13T08:40:51Z|00925|binding|INFO|Releasing lport 0f8d26a1-147d-466a-8164-8d2036166124 from this chassis (sb_readonly=0)
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.087 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f018c93-df47-4a6c-acdb-f508a51f75b3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f018c93-df47-4a6c-acdb-f508a51f75b3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.088 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e12f883e-e138-474c-b210-d6e95137ff87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.089 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/8f018c93-df47-4a6c-acdb-f508a51f75b3.pid.haproxy
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:40:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.090 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'env', 'PROCESS_TAG=haproxy-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f018c93-df47-4a6c-acdb-f508a51f75b3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.102 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.255 248514 DEBUG nova.network.neutron [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updated VIF entry in instance network info cache for port eacb4de9-7daa-4e47-a0b7-b0a986dc50f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.257 248514 DEBUG nova.network.neutron [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updating instance_info_cache with network_info: [{"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.281 248514 DEBUG oslo_concurrency.lockutils [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:40:51 compute-0 podman[339816]: 2025-12-13 08:40:51.433651184 +0000 UTC m=+0.022805399 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.601 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615251.6009712, 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.602 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] VM Started (Lifecycle Event)
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.632 248514 DEBUG nova.compute.manager [req-bb42b9f2-ef2e-4ab7-a7cf-06ade69bc3e6 req-65c2f735-f367-4561-ae55-530783aa285d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.633 248514 DEBUG oslo_concurrency.lockutils [req-bb42b9f2-ef2e-4ab7-a7cf-06ade69bc3e6 req-65c2f735-f367-4561-ae55-530783aa285d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.633 248514 DEBUG oslo_concurrency.lockutils [req-bb42b9f2-ef2e-4ab7-a7cf-06ade69bc3e6 req-65c2f735-f367-4561-ae55-530783aa285d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.634 248514 DEBUG oslo_concurrency.lockutils [req-bb42b9f2-ef2e-4ab7-a7cf-06ade69bc3e6 req-65c2f735-f367-4561-ae55-530783aa285d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.634 248514 DEBUG nova.compute.manager [req-bb42b9f2-ef2e-4ab7-a7cf-06ade69bc3e6 req-65c2f735-f367-4561-ae55-530783aa285d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Processing event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.635 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.640 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.641 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.644 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.647 248514 INFO nova.virt.libvirt.driver [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Instance spawned successfully.
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.647 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.705 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.705 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615251.60113, 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.705 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] VM Paused (Lifecycle Event)
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.710 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.710 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.711 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.711 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.711 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.712 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 292 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 28 op/s
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.834 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.838 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615251.6406531, 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.839 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] VM Resumed (Lifecycle Event)
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.874 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.877 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.882 248514 INFO nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Took 14.14 seconds to spawn the instance on the hypervisor.
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.883 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.901 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:40:51 compute-0 podman[339816]: 2025-12-13 08:40:51.919943328 +0000 UTC m=+0.509097523 container create 009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.961 248514 INFO nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Took 15.55 seconds to build instance.
Dec 13 08:40:51 compute-0 nova_compute[248510]: 2025-12-13 08:40:51.990 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9013f4a5ad7d819d713aea28bc14bb02d0de3469" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:52 compute-0 systemd[1]: Started libpod-conmon-009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6.scope.
Dec 13 08:40:52 compute-0 nova_compute[248510]: 2025-12-13 08:40:52.029 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:52 compute-0 nova_compute[248510]: 2025-12-13 08:40:52.031 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 13.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:52 compute-0 nova_compute[248510]: 2025-12-13 08:40:52.031 248514 INFO nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:40:52 compute-0 nova_compute[248510]: 2025-12-13 08:40:52.031 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfb9bdf13c872c9b470ea097dce2795d900c75827e3e75f827321fc99371a11/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:40:52 compute-0 sudo[339890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:40:52 compute-0 sudo[339890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:40:52 compute-0 sudo[339890]: pam_unix(sudo:session): session closed for user root
Dec 13 08:40:52 compute-0 podman[339816]: 2025-12-13 08:40:52.196955783 +0000 UTC m=+0.786109978 container init 009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:40:52 compute-0 podman[339816]: 2025-12-13 08:40:52.203390821 +0000 UTC m=+0.792545016 container start 009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 08:40:52 compute-0 sudo[339915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:40:52 compute-0 sudo[339915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:40:52 compute-0 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [NOTICE]   (339943) : New worker (339953) forked
Dec 13 08:40:52 compute-0 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [NOTICE]   (339943) : Loading success.
Dec 13 08:40:52 compute-0 nova_compute[248510]: 2025-12-13 08:40:52.253 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:52 compute-0 ceph-mon[76537]: pgmap v2339: 321 pgs: 321 active+clean; 292 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 28 op/s
Dec 13 08:40:52 compute-0 nova_compute[248510]: 2025-12-13 08:40:52.432 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] flattening vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:40:52 compute-0 sudo[339915]: pam_unix(sudo:session): session closed for user root
Dec 13 08:40:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:40:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:40:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:40:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:40:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:40:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:40:53 compute-0 nova_compute[248510]: 2025-12-13 08:40:53.020 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:53 compute-0 nova_compute[248510]: 2025-12-13 08:40:53.329 248514 DEBUG nova.network.neutron [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updated VIF entry in instance network info cache for port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:40:53 compute-0 nova_compute[248510]: 2025-12-13 08:40:53.330 248514 DEBUG nova.network.neutron [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:40:53 compute-0 nova_compute[248510]: 2025-12-13 08:40:53.372 248514 DEBUG oslo_concurrency.lockutils [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:40:53 compute-0 nova_compute[248510]: 2025-12-13 08:40:53.785 248514 DEBUG nova.compute.manager [req-ad8a1197-dfaf-471b-8fa8-f0521ad6df1f req-bf65eeb2-bcda-4472-aa96-9c52dd5ac800 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:40:53 compute-0 nova_compute[248510]: 2025-12-13 08:40:53.785 248514 DEBUG oslo_concurrency.lockutils [req-ad8a1197-dfaf-471b-8fa8-f0521ad6df1f req-bf65eeb2-bcda-4472-aa96-9c52dd5ac800 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:53 compute-0 nova_compute[248510]: 2025-12-13 08:40:53.786 248514 DEBUG oslo_concurrency.lockutils [req-ad8a1197-dfaf-471b-8fa8-f0521ad6df1f req-bf65eeb2-bcda-4472-aa96-9c52dd5ac800 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:53 compute-0 nova_compute[248510]: 2025-12-13 08:40:53.786 248514 DEBUG oslo_concurrency.lockutils [req-ad8a1197-dfaf-471b-8fa8-f0521ad6df1f req-bf65eeb2-bcda-4472-aa96-9c52dd5ac800 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:53 compute-0 nova_compute[248510]: 2025-12-13 08:40:53.786 248514 DEBUG nova.compute.manager [req-ad8a1197-dfaf-471b-8fa8-f0521ad6df1f req-bf65eeb2-bcda-4472-aa96-9c52dd5ac800 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] No waiting events found dispatching network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:40:53 compute-0 nova_compute[248510]: 2025-12-13 08:40:53.786 248514 WARNING nova.compute.manager [req-ad8a1197-dfaf-471b-8fa8-f0521ad6df1f req-bf65eeb2-bcda-4472-aa96-9c52dd5ac800 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received unexpected event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 for instance with vm_state active and task_state None.
Dec 13 08:40:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 293 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 963 KiB/s rd, 1.1 MiB/s wr, 31 op/s
Dec 13 08:40:53 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:40:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:40:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:40:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:40:54 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:40:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:40:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:40:54 compute-0 sudo[340028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:40:54 compute-0 sudo[340028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:40:54 compute-0 sudo[340028]: pam_unix(sudo:session): session closed for user root
Dec 13 08:40:54 compute-0 sudo[340053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:40:54 compute-0 sudo[340053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:40:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:40:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:40:54 compute-0 podman[340089]: 2025-12-13 08:40:54.587901196 +0000 UTC m=+0.029850214 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:40:54 compute-0 nova_compute[248510]: 2025-12-13 08:40:54.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:54 compute-0 nova_compute[248510]: 2025-12-13 08:40:54.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:40:54 compute-0 nova_compute[248510]: 2025-12-13 08:40:54.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:40:55 compute-0 podman[340089]: 2025-12-13 08:40:55.049184814 +0000 UTC m=+0.491133732 container create 33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 08:40:55 compute-0 nova_compute[248510]: 2025-12-13 08:40:55.174 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:40:55 compute-0 nova_compute[248510]: 2025-12-13 08:40:55.175 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:40:55 compute-0 nova_compute[248510]: 2025-12-13 08:40:55.175 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:40:55 compute-0 nova_compute[248510]: 2025-12-13 08:40:55.175 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 658e5f04-399b-4a8a-8680-5ae9717949c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:55.268 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:55.420 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:55.420 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:40:55.421 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:55 compute-0 systemd[1]: Started libpod-conmon-33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b.scope.
Dec 13 08:40:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:40:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 318 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.3 MiB/s wr, 141 op/s
Dec 13 08:40:56 compute-0 nova_compute[248510]: 2025-12-13 08:40:56.060 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:56 compute-0 ceph-mon[76537]: pgmap v2340: 321 pgs: 321 active+clean; 293 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 963 KiB/s rd, 1.1 MiB/s wr, 31 op/s
Dec 13 08:40:56 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:40:56 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:40:56 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:40:56 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:40:56 compute-0 podman[340089]: 2025-12-13 08:40:56.889971769 +0000 UTC m=+2.331920707 container init 33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_stonebraker, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 08:40:56 compute-0 podman[340089]: 2025-12-13 08:40:56.904990293 +0000 UTC m=+2.346939211 container start 33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_stonebraker, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:40:56 compute-0 wizardly_stonebraker[340106]: 167 167
Dec 13 08:40:56 compute-0 systemd[1]: libpod-33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b.scope: Deactivated successfully.
Dec 13 08:40:56 compute-0 conmon[340106]: conmon 33ac59d649209929289d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b.scope/container/memory.events
Dec 13 08:40:57 compute-0 podman[340089]: 2025-12-13 08:40:57.301672236 +0000 UTC m=+2.743621154 container attach 33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_stonebraker, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 08:40:57 compute-0 podman[340089]: 2025-12-13 08:40:57.303532415 +0000 UTC m=+2.745481343 container died 33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_stonebraker, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:40:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 318 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.5 MiB/s wr, 130 op/s
Dec 13 08:40:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:40:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe49bf938c9072578a61d4995a6e04bbfcd819c3ced6ba46b9dff0db01fbffd5-merged.mount: Deactivated successfully.
Dec 13 08:40:57 compute-0 ceph-mon[76537]: pgmap v2341: 321 pgs: 321 active+clean; 318 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.3 MiB/s wr, 141 op/s
Dec 13 08:40:57 compute-0 nova_compute[248510]: 2025-12-13 08:40:57.959 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Updating instance_info_cache with network_info: [{"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:40:57 compute-0 nova_compute[248510]: 2025-12-13 08:40:57.997 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Image rbd:vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Dec 13 08:40:57 compute-0 nova_compute[248510]: 2025-12-13 08:40:57.998 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:40:57 compute-0 nova_compute[248510]: 2025-12-13 08:40:57.998 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Ensure instance console log exists: /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:40:57 compute-0 nova_compute[248510]: 2025-12-13 08:40:57.999 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:57 compute-0 nova_compute[248510]: 2025-12-13 08:40:57.999 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:57 compute-0 nova_compute[248510]: 2025-12-13 08:40:57.999 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.001 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Start _get_guest_xml network_info=[{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:40:11Z,direct_url=<?>,disk_format='raw',id=f328af0c-ddf7-42aa-aa98-c602105202af,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-647049604-shelved',owner='f0aee359fbaa484eb7ead3f81eef51e7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:40:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.005 248514 WARNING nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.010 248514 DEBUG nova.virt.libvirt.host [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.010 248514 DEBUG nova.virt.libvirt.host [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.013 248514 DEBUG nova.virt.libvirt.host [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.013 248514 DEBUG nova.virt.libvirt.host [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.014 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.014 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:40:11Z,direct_url=<?>,disk_format='raw',id=f328af0c-ddf7-42aa-aa98-c602105202af,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-647049604-shelved',owner='f0aee359fbaa484eb7ead3f81eef51e7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:40:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.014 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.014 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.015 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.015 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.015 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.015 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.016 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.016 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.016 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.016 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.016 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.024 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.034 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.070 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.073 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.075 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:58 compute-0 podman[340089]: 2025-12-13 08:40:58.257220106 +0000 UTC m=+3.699169024 container remove 33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 13 08:40:58 compute-0 systemd[1]: libpod-conmon-33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b.scope: Deactivated successfully.
Dec 13 08:40:58 compute-0 podman[340150]: 2025-12-13 08:40:58.416810641 +0000 UTC m=+0.025552931 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:40:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:40:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2287762503' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.617 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.643 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:40:58 compute-0 nova_compute[248510]: 2025-12-13 08:40:58.648 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:40:58 compute-0 podman[340150]: 2025-12-13 08:40:58.684534613 +0000 UTC m=+0.293276873 container create 50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_buck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Dec 13 08:40:58 compute-0 systemd[1]: Started libpod-conmon-50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e.scope.
Dec 13 08:40:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffa0d71fe3ca389f759e4dfea784340475e0310391daff34bed5caca2492032/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffa0d71fe3ca389f759e4dfea784340475e0310391daff34bed5caca2492032/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffa0d71fe3ca389f759e4dfea784340475e0310391daff34bed5caca2492032/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffa0d71fe3ca389f759e4dfea784340475e0310391daff34bed5caca2492032/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffa0d71fe3ca389f759e4dfea784340475e0310391daff34bed5caca2492032/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:40:58 compute-0 podman[340150]: 2025-12-13 08:40:58.871497466 +0000 UTC m=+0.480239756 container init 50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_buck, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:40:58 compute-0 podman[340150]: 2025-12-13 08:40:58.880339978 +0000 UTC m=+0.489082238 container start 50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_buck, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:40:58 compute-0 ceph-mon[76537]: pgmap v2342: 321 pgs: 321 active+clean; 318 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.5 MiB/s wr, 130 op/s
Dec 13 08:40:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2287762503' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:40:58 compute-0 podman[340150]: 2025-12-13 08:40:58.924761033 +0000 UTC m=+0.533503313 container attach 50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_buck, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:40:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:40:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201357498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.257 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.263 248514 DEBUG nova.virt.libvirt.vif [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:37:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-647049604',display_name='tempest-ServerActionsTestOtherB-server-647049604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-647049604',id=85,image_ref='f328af0c-ddf7-42aa-aa98-c602105202af',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-2123324397',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:37:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-ys1cli8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member',shelved_at='2025-12-13T08:40:22.342504',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f328af0c-ddf7-42aa-aa98-c602105202af'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:40:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7507939da64e4320a1c6f389d0fc9045',uuid=9b486227-b98c-4393-9a3c-aae3e3c419a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.265 248514 DEBUG nova.network.os_vif_util [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.267 248514 DEBUG nova.network.os_vif_util [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.269 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.296 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:40:59 compute-0 nova_compute[248510]:   <uuid>9b486227-b98c-4393-9a3c-aae3e3c419a8</uuid>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   <name>instance-00000055</name>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerActionsTestOtherB-server-647049604</nova:name>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:40:58</nova:creationTime>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:40:59 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:40:59 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:40:59 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:40:59 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:40:59 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:40:59 compute-0 nova_compute[248510]:         <nova:user uuid="7507939da64e4320a1c6f389d0fc9045">tempest-ServerActionsTestOtherB-1515133862-project-member</nova:user>
Dec 13 08:40:59 compute-0 nova_compute[248510]:         <nova:project uuid="f0aee359fbaa484eb7ead3f81eef51e7">tempest-ServerActionsTestOtherB-1515133862</nova:project>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="f328af0c-ddf7-42aa-aa98-c602105202af"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:40:59 compute-0 nova_compute[248510]:         <nova:port uuid="ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01">
Dec 13 08:40:59 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <system>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <entry name="serial">9b486227-b98c-4393-9a3c-aae3e3c419a8</entry>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <entry name="uuid">9b486227-b98c-4393-9a3c-aae3e3c419a8</entry>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     </system>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   <os>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   </os>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   <features>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   </features>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk">
Dec 13 08:40:59 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       </source>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:40:59 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config">
Dec 13 08:40:59 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       </source>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:40:59 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:e0:82:3a"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <target dev="tapea5aafe7-a7"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/console.log" append="off"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <video>
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     </video>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <input type="keyboard" bus="usb"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:40:59 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:40:59 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:40:59 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:40:59 compute-0 nova_compute[248510]: </domain>
Dec 13 08:40:59 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.312 248514 DEBUG nova.compute.manager [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Preparing to wait for external event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.313 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.314 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.314 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.316 248514 DEBUG nova.virt.libvirt.vif [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:37:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-647049604',display_name='tempest-ServerActionsTestOtherB-server-647049604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-647049604',id=85,image_ref='f328af0c-ddf7-42aa-aa98-c602105202af',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-2123324397',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:37:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-ys1cli8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member',shelved_at='2025-12-13T08:40:22.342504',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f328af0c-ddf7-42aa-aa98-c602105202af'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:40:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7507939da64e4320a1c6f389d0fc9045',uuid=9b486227-b98c-4393-9a3c-aae3e3c419a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.317 248514 DEBUG nova.network.os_vif_util [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.319 248514 DEBUG nova.network.os_vif_util [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.320 248514 DEBUG os_vif [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.322 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.322 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.326 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea5aafe7-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.327 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea5aafe7-a7, col_values=(('external_ids', {'iface-id': 'ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:82:3a', 'vm-uuid': '9b486227-b98c-4393-9a3c-aae3e3c419a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:59 compute-0 NetworkManager[50376]: <info>  [1765615259.3302] manager: (tapea5aafe7-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.339 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.341 248514 INFO os_vif [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7')
Dec 13 08:40:59 compute-0 laughing_buck[340188]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:40:59 compute-0 laughing_buck[340188]: --> All data devices are unavailable
Dec 13 08:40:59 compute-0 systemd[1]: libpod-50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e.scope: Deactivated successfully.
Dec 13 08:40:59 compute-0 podman[340150]: 2025-12-13 08:40:59.401115216 +0000 UTC m=+1.009857486 container died 50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.439 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.440 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.440 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No VIF found with MAC fa:16:3e:e0:82:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.441 248514 INFO nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Using config drive
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.458 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.478 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-cffa0d71fe3ca389f759e4dfea784340475e0310391daff34bed5caca2492032-merged.mount: Deactivated successfully.
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.538 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'keypairs' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:40:59 compute-0 podman[340150]: 2025-12-13 08:40:59.616091404 +0000 UTC m=+1.224833664 container remove 50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_buck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 08:40:59 compute-0 systemd[1]: libpod-conmon-50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e.scope: Deactivated successfully.
Dec 13 08:40:59 compute-0 sudo[340053]: pam_unix(sudo:session): session closed for user root
Dec 13 08:40:59 compute-0 sudo[340261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:40:59 compute-0 sudo[340261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:40:59 compute-0 sudo[340261]: pam_unix(sudo:session): session closed for user root
Dec 13 08:40:59 compute-0 nova_compute[248510]: 2025-12-13 08:40:59.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:40:59 compute-0 sudo[340286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:40:59 compute-0 sudo[340286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:40:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 354 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 141 op/s
Dec 13 08:41:00 compute-0 podman[340322]: 2025-12-13 08:41:00.076010755 +0000 UTC m=+0.031190089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:41:00 compute-0 podman[340322]: 2025-12-13 08:41:00.288507107 +0000 UTC m=+0.243686411 container create 006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:41:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1201357498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:00 compute-0 systemd[1]: Started libpod-conmon-006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d.scope.
Dec 13 08:41:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:41:00 compute-0 podman[340322]: 2025-12-13 08:41:00.390547943 +0000 UTC m=+0.345727267 container init 006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:41:00 compute-0 podman[340322]: 2025-12-13 08:41:00.398579174 +0000 UTC m=+0.353758478 container start 006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:41:00 compute-0 goofy_shockley[340340]: 167 167
Dec 13 08:41:00 compute-0 systemd[1]: libpod-006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d.scope: Deactivated successfully.
Dec 13 08:41:00 compute-0 podman[340322]: 2025-12-13 08:41:00.404354655 +0000 UTC m=+0.359533989 container attach 006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:41:00 compute-0 podman[340322]: 2025-12-13 08:41:00.404630743 +0000 UTC m=+0.359810057 container died 006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shockley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 08:41:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-cab5a099aed85c6e0b8768e9af6cb597d8a61f4e310b1812c48718425f6dc349-merged.mount: Deactivated successfully.
Dec 13 08:41:00 compute-0 podman[340322]: 2025-12-13 08:41:00.447008144 +0000 UTC m=+0.402187448 container remove 006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 08:41:00 compute-0 systemd[1]: libpod-conmon-006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d.scope: Deactivated successfully.
Dec 13 08:41:00 compute-0 nova_compute[248510]: 2025-12-13 08:41:00.627 248514 INFO nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Creating config drive at /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config
Dec 13 08:41:00 compute-0 nova_compute[248510]: 2025-12-13 08:41:00.636 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8bbe45o2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:00 compute-0 podman[340364]: 2025-12-13 08:41:00.661553891 +0000 UTC m=+0.069753131 container create 954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_rosalind, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:41:00 compute-0 podman[340364]: 2025-12-13 08:41:00.617735931 +0000 UTC m=+0.025935241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:41:00 compute-0 systemd[1]: Started libpod-conmon-954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704.scope.
Dec 13 08:41:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:41:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dc6fbcf9aba334eaa344dd8dde57fee2f89c81fbb36648aa78e3b1f384428e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:41:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dc6fbcf9aba334eaa344dd8dde57fee2f89c81fbb36648aa78e3b1f384428e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:41:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dc6fbcf9aba334eaa344dd8dde57fee2f89c81fbb36648aa78e3b1f384428e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:41:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dc6fbcf9aba334eaa344dd8dde57fee2f89c81fbb36648aa78e3b1f384428e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:41:00 compute-0 podman[340364]: 2025-12-13 08:41:00.77895431 +0000 UTC m=+0.187153580 container init 954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 08:41:00 compute-0 podman[340364]: 2025-12-13 08:41:00.791576471 +0000 UTC m=+0.199775721 container start 954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:41:00 compute-0 podman[340364]: 2025-12-13 08:41:00.794946939 +0000 UTC m=+0.203146209 container attach 954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_rosalind, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:41:00 compute-0 nova_compute[248510]: 2025-12-13 08:41:00.804 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8bbe45o2" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:00 compute-0 nova_compute[248510]: 2025-12-13 08:41:00.831 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:00 compute-0 nova_compute[248510]: 2025-12-13 08:41:00.834 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:00 compute-0 nova_compute[248510]: 2025-12-13 08:41:00.982 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:00 compute-0 nova_compute[248510]: 2025-12-13 08:41:00.983 248514 INFO nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deleting local config drive /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config because it was imported into RBD.
Dec 13 08:41:01 compute-0 kernel: tapea5aafe7-a7: entered promiscuous mode
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.040 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:01 compute-0 ovn_controller[148476]: 2025-12-13T08:41:01Z|00926|binding|INFO|Claiming lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for this chassis.
Dec 13 08:41:01 compute-0 ovn_controller[148476]: 2025-12-13T08:41:01Z|00927|binding|INFO|ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01: Claiming fa:16:3e:e0:82:3a 10.100.0.3
Dec 13 08:41:01 compute-0 NetworkManager[50376]: <info>  [1765615261.0506] manager: (tapea5aafe7-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Dec 13 08:41:01 compute-0 ovn_controller[148476]: 2025-12-13T08:41:01Z|00928|binding|INFO|Setting lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 ovn-installed in OVS
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.069 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:01 compute-0 systemd-udevd[340441]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:41:01 compute-0 systemd-machined[210538]: New machine qemu-117-instance-00000055.
Dec 13 08:41:01 compute-0 NetworkManager[50376]: <info>  [1765615261.0980] device (tapea5aafe7-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:41:01 compute-0 NetworkManager[50376]: <info>  [1765615261.0988] device (tapea5aafe7-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:41:01 compute-0 systemd[1]: Started Virtual Machine qemu-117-instance-00000055.
Dec 13 08:41:01 compute-0 ovn_controller[148476]: 2025-12-13T08:41:01Z|00929|binding|INFO|Setting lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 up in Southbound
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.105 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:82:3a 10.100.0.3'], port_security=['fa:16:3e:e0:82:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b486227-b98c-4393-9a3c-aae3e3c419a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '7', 'neutron:security_group_ids': '35769c0b-1e0e-43bc-832c-d54c65a53a36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.107 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 in datapath 369f7528-6571-47b6-a030-5281647e1eac bound to our chassis
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.109 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac
Dec 13 08:41:01 compute-0 loving_rosalind[340383]: {
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:     "0": [
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:         {
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "devices": [
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "/dev/loop3"
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             ],
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_name": "ceph_lv0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_size": "21470642176",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "name": "ceph_lv0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "tags": {
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.cluster_name": "ceph",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.crush_device_class": "",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.encrypted": "0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.objectstore": "bluestore",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.osd_id": "0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.type": "block",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.vdo": "0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.with_tpm": "0"
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             },
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "type": "block",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "vg_name": "ceph_vg0"
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:         }
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:     ],
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:     "1": [
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:         {
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "devices": [
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "/dev/loop4"
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             ],
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_name": "ceph_lv1",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_size": "21470642176",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "name": "ceph_lv1",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "tags": {
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.cluster_name": "ceph",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.crush_device_class": "",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.encrypted": "0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.objectstore": "bluestore",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.osd_id": "1",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.type": "block",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.vdo": "0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.with_tpm": "0"
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             },
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "type": "block",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "vg_name": "ceph_vg1"
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:         }
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:     ],
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:     "2": [
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:         {
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "devices": [
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "/dev/loop5"
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             ],
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_name": "ceph_lv2",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_size": "21470642176",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "name": "ceph_lv2",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "tags": {
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.cluster_name": "ceph",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.crush_device_class": "",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.encrypted": "0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.objectstore": "bluestore",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.osd_id": "2",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.type": "block",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.vdo": "0",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:                 "ceph.with_tpm": "0"
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             },
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "type": "block",
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:             "vg_name": "ceph_vg2"
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:         }
Dec 13 08:41:01 compute-0 loving_rosalind[340383]:     ]
Dec 13 08:41:01 compute-0 loving_rosalind[340383]: }
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.128 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68916ac6-63dc-4c98-ac5c-012d2f500348]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:01 compute-0 podman[340364]: 2025-12-13 08:41:01.171004441 +0000 UTC m=+0.579203711 container died 954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_rosalind, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.171 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d77d5eb4-8b3b-465c-8536-2c423735ed56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:01 compute-0 systemd[1]: libpod-954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704.scope: Deactivated successfully.
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.174 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[646cc3aa-d601-460b-80da-f76e5efd02f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dc6fbcf9aba334eaa344dd8dde57fee2f89c81fbb36648aa78e3b1f384428e0-merged.mount: Deactivated successfully.
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.218 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2f5920-33e6-42db-b42e-a93acf033d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:01 compute-0 podman[340364]: 2025-12-13 08:41:01.225789738 +0000 UTC m=+0.633988988 container remove 954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:41:01 compute-0 systemd[1]: libpod-conmon-954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704.scope: Deactivated successfully.
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.248 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d0eae443-11b0-4838-a140-474ea57f001c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 34886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340468, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.266 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[90a7a7a4-7ef7-4168-a0f6-68de4af2c23a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763117, 'tstamp': 763117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340469, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763121, 'tstamp': 763121}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340469, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.268 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.270 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.271 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.272 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.272 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.272 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.272 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:01 compute-0 sudo[340286]: pam_unix(sudo:session): session closed for user root
Dec 13 08:41:01 compute-0 ceph-mon[76537]: pgmap v2343: 321 pgs: 321 active+clean; 354 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 141 op/s
Dec 13 08:41:01 compute-0 sudo[340470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:41:01 compute-0 sudo[340470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:41:01 compute-0 sudo[340470]: pam_unix(sudo:session): session closed for user root
Dec 13 08:41:01 compute-0 sudo[340510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:41:01 compute-0 sudo[340510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.536 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615261.5351875, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.537 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Started (Lifecycle Event)
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.594 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.599 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615261.5353959, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.599 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Paused (Lifecycle Event)
Dec 13 08:41:01 compute-0 podman[340575]: 2025-12-13 08:41:01.7211681 +0000 UTC m=+0.039223910 container create f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.762 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:01 compute-0 systemd[1]: Started libpod-conmon-f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac.scope.
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.771 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:41:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:41:01 compute-0 podman[340575]: 2025-12-13 08:41:01.70514541 +0000 UTC m=+0.023201270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:41:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 372 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.813 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.815 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.815 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.815 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:01 compute-0 podman[340575]: 2025-12-13 08:41:01.825527397 +0000 UTC m=+0.143583237 container init f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shirley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 08:41:01 compute-0 podman[340575]: 2025-12-13 08:41:01.836091714 +0000 UTC m=+0.154147534 container start f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 08:41:01 compute-0 podman[340575]: 2025-12-13 08:41:01.841027283 +0000 UTC m=+0.159083103 container attach f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shirley, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 08:41:01 compute-0 stoic_shirley[340591]: 167 167
Dec 13 08:41:01 compute-0 systemd[1]: libpod-f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac.scope: Deactivated successfully.
Dec 13 08:41:01 compute-0 podman[340575]: 2025-12-13 08:41:01.844854924 +0000 UTC m=+0.162910744 container died f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 13 08:41:01 compute-0 nova_compute[248510]: 2025-12-13 08:41:01.854 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:41:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4a726975b97b3b1a6e8f6824650558b134928b5f564be8bd9d4d6ad49ba1b02-merged.mount: Deactivated successfully.
Dec 13 08:41:01 compute-0 podman[340575]: 2025-12-13 08:41:01.926451144 +0000 UTC m=+0.244506964 container remove f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:41:01 compute-0 systemd[1]: libpod-conmon-f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac.scope: Deactivated successfully.
Dec 13 08:41:02 compute-0 podman[340634]: 2025-12-13 08:41:02.169150179 +0000 UTC m=+0.079382543 container create 82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 08:41:02 compute-0 podman[340634]: 2025-12-13 08:41:02.115885812 +0000 UTC m=+0.026118176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:41:02 compute-0 systemd[1]: Started libpod-conmon-82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a.scope.
Dec 13 08:41:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7824ed3fc95ef9cb80422daf27631a37749c75184aae74d3eebf6d0ccca20b53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7824ed3fc95ef9cb80422daf27631a37749c75184aae74d3eebf6d0ccca20b53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7824ed3fc95ef9cb80422daf27631a37749c75184aae74d3eebf6d0ccca20b53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7824ed3fc95ef9cb80422daf27631a37749c75184aae74d3eebf6d0ccca20b53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:41:02 compute-0 podman[340634]: 2025-12-13 08:41:02.288367455 +0000 UTC m=+0.198599819 container init 82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_panini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 08:41:02 compute-0 ceph-mon[76537]: pgmap v2344: 321 pgs: 321 active+clean; 372 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Dec 13 08:41:02 compute-0 podman[340634]: 2025-12-13 08:41:02.298802589 +0000 UTC m=+0.209034953 container start 82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:41:02 compute-0 podman[340634]: 2025-12-13 08:41:02.302120006 +0000 UTC m=+0.212352390 container attach 82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_panini, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.397 248514 DEBUG nova.compute.manager [req-0b04eb6b-6083-4606-a391-d0870878f47b req-b6dbc1a8-2ee7-4766-bfd2-e843b60b0ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.399 248514 DEBUG oslo_concurrency.lockutils [req-0b04eb6b-6083-4606-a391-d0870878f47b req-b6dbc1a8-2ee7-4766-bfd2-e843b60b0ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.399 248514 DEBUG oslo_concurrency.lockutils [req-0b04eb6b-6083-4606-a391-d0870878f47b req-b6dbc1a8-2ee7-4766-bfd2-e843b60b0ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.400 248514 DEBUG oslo_concurrency.lockutils [req-0b04eb6b-6083-4606-a391-d0870878f47b req-b6dbc1a8-2ee7-4766-bfd2-e843b60b0ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.400 248514 DEBUG nova.compute.manager [req-0b04eb6b-6083-4606-a391-d0870878f47b req-b6dbc1a8-2ee7-4766-bfd2-e843b60b0ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Processing event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.405 248514 DEBUG nova.compute.manager [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.410 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615262.4102623, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.411 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Resumed (Lifecycle Event)
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.433 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.440 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance spawned successfully.
Dec 13 08:41:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:41:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221487140' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.475 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.623 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.628 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.666 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.677 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.678 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.683 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.683 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.689 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.689 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:41:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.938 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.942 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3257MB free_disk=59.87570957839489GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.942 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:02 compute-0 nova_compute[248510]: 2025-12-13 08:41:02.942 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:03 compute-0 lvm[340729]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:41:03 compute-0 lvm[340729]: VG ceph_vg0 finished
Dec 13 08:41:03 compute-0 lvm[340731]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:41:03 compute-0 lvm[340731]: VG ceph_vg1 finished
Dec 13 08:41:03 compute-0 lvm[340734]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:41:03 compute-0 lvm[340734]: VG ceph_vg2 finished
Dec 13 08:41:03 compute-0 nova_compute[248510]: 2025-12-13 08:41:03.155 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 658e5f04-399b-4a8a-8680-5ae9717949c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:41:03 compute-0 nova_compute[248510]: 2025-12-13 08:41:03.156 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:41:03 compute-0 nova_compute[248510]: 2025-12-13 08:41:03.156 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9b486227-b98c-4393-9a3c-aae3e3c419a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:41:03 compute-0 nova_compute[248510]: 2025-12-13 08:41:03.156 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:41:03 compute-0 nova_compute[248510]: 2025-12-13 08:41:03.157 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:41:03 compute-0 suspicious_panini[340651]: {}
Dec 13 08:41:03 compute-0 systemd[1]: libpod-82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a.scope: Deactivated successfully.
Dec 13 08:41:03 compute-0 systemd[1]: libpod-82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a.scope: Consumed 1.435s CPU time.
Dec 13 08:41:03 compute-0 conmon[340651]: conmon 82c2b06c04589e96dbe0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a.scope/container/memory.events
Dec 13 08:41:03 compute-0 podman[340634]: 2025-12-13 08:41:03.253322482 +0000 UTC m=+1.163554856 container died 82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_panini, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:41:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7824ed3fc95ef9cb80422daf27631a37749c75184aae74d3eebf6d0ccca20b53-merged.mount: Deactivated successfully.
Dec 13 08:41:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Dec 13 08:41:03 compute-0 podman[340634]: 2025-12-13 08:41:03.310335727 +0000 UTC m=+1.220568091 container remove 82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:41:03 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/221487140' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Dec 13 08:41:03 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Dec 13 08:41:03 compute-0 systemd[1]: libpod-conmon-82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a.scope: Deactivated successfully.
Dec 13 08:41:03 compute-0 sudo[340510]: pam_unix(sudo:session): session closed for user root
Dec 13 08:41:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:41:03 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:41:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:41:03 compute-0 nova_compute[248510]: 2025-12-13 08:41:03.397 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:03 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:41:03 compute-0 nova_compute[248510]: 2025-12-13 08:41:03.456 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:03 compute-0 nova_compute[248510]: 2025-12-13 08:41:03.457 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:03 compute-0 sudo[340748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:41:03 compute-0 sudo[340748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:41:03 compute-0 sudo[340748]: pam_unix(sudo:session): session closed for user root
Dec 13 08:41:03 compute-0 nova_compute[248510]: 2025-12-13 08:41:03.512 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:41:03 compute-0 nova_compute[248510]: 2025-12-13 08:41:03.665 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 372 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 4.7 MiB/s wr, 160 op/s
Dec 13 08:41:03 compute-0 nova_compute[248510]: 2025-12-13 08:41:03.850 248514 DEBUG nova.compute.manager [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:41:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1833703554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.063 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.070 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.150 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.166 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 25.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.195 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.195 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.195 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.204 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.205 248514 INFO nova.compute.claims [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:41:04 compute-0 ceph-mon[76537]: osdmap e259: 3 total, 3 up, 3 in
Dec 13 08:41:04 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:41:04 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:41:04 compute-0 ceph-mon[76537]: pgmap v2346: 321 pgs: 321 active+clean; 372 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 4.7 MiB/s wr, 160 op/s
Dec 13 08:41:04 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1833703554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.330 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.533 248514 DEBUG nova.compute.manager [req-72308b1d-fd0d-45e7-815d-d173cd9881e8 req-2917cc2e-49bd-4616-a82b-c0565d19438b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.533 248514 DEBUG oslo_concurrency.lockutils [req-72308b1d-fd0d-45e7-815d-d173cd9881e8 req-2917cc2e-49bd-4616-a82b-c0565d19438b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.533 248514 DEBUG oslo_concurrency.lockutils [req-72308b1d-fd0d-45e7-815d-d173cd9881e8 req-2917cc2e-49bd-4616-a82b-c0565d19438b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.534 248514 DEBUG oslo_concurrency.lockutils [req-72308b1d-fd0d-45e7-815d-d173cd9881e8 req-2917cc2e-49bd-4616-a82b-c0565d19438b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.534 248514 DEBUG nova.compute.manager [req-72308b1d-fd0d-45e7-815d-d173cd9881e8 req-2917cc2e-49bd-4616-a82b-c0565d19438b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] No waiting events found dispatching network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.534 248514 WARNING nova.compute.manager [req-72308b1d-fd0d-45e7-815d-d173cd9881e8 req-2917cc2e-49bd-4616-a82b-c0565d19438b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received unexpected event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for instance with vm_state active and task_state None.
Dec 13 08:41:04 compute-0 nova_compute[248510]: 2025-12-13 08:41:04.617 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:41:05 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2993607602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.263 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.271 248514 DEBUG nova.compute.provider_tree [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.301 248514 DEBUG nova.scheduler.client.report [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.327 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.328 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:41:05 compute-0 ovn_controller[148476]: 2025-12-13T08:41:05Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:83:5f 10.100.0.5
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.475 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:41:05 compute-0 ovn_controller[148476]: 2025-12-13T08:41:05Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:83:5f 10.100.0.5
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.476 248514 DEBUG nova.network.neutron [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.558 248514 INFO nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:41:05 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2993607602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.640 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:41:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 323 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 179 op/s
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.890 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.891 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.891 248514 INFO nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Creating image(s)
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.928 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:05 compute-0 nova_compute[248510]: 2025-12-13 08:41:05.969 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:05 compute-0 podman[340825]: 2025-12-13 08:41:05.991575994 +0000 UTC m=+0.073363515 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:41:06 compute-0 podman[340826]: 2025-12-13 08:41:06.003393534 +0000 UTC m=+0.082016662 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 13 08:41:06 compute-0 nova_compute[248510]: 2025-12-13 08:41:06.004 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:06 compute-0 nova_compute[248510]: 2025-12-13 08:41:06.014 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:06 compute-0 podman[340817]: 2025-12-13 08:41:06.042315284 +0000 UTC m=+0.124505186 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:41:06 compute-0 nova_compute[248510]: 2025-12-13 08:41:06.069 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:06 compute-0 nova_compute[248510]: 2025-12-13 08:41:06.102 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:06 compute-0 nova_compute[248510]: 2025-12-13 08:41:06.103 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:06 compute-0 nova_compute[248510]: 2025-12-13 08:41:06.104 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:06 compute-0 nova_compute[248510]: 2025-12-13 08:41:06.104 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:06 compute-0 nova_compute[248510]: 2025-12-13 08:41:06.124 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:06 compute-0 nova_compute[248510]: 2025-12-13 08:41:06.127 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f5485560-d9b8-44ef-9425-57a45ac866af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:06 compute-0 nova_compute[248510]: 2025-12-13 08:41:06.356 248514 DEBUG nova.policy [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:41:06 compute-0 ceph-mon[76537]: pgmap v2347: 321 pgs: 321 active+clean; 323 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 179 op/s
Dec 13 08:41:07 compute-0 nova_compute[248510]: 2025-12-13 08:41:07.069 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f5485560-d9b8-44ef-9425-57a45ac866af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.942s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:07 compute-0 nova_compute[248510]: 2025-12-13 08:41:07.135 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:41:07 compute-0 nova_compute[248510]: 2025-12-13 08:41:07.193 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:41:07 compute-0 nova_compute[248510]: 2025-12-13 08:41:07.194 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:41:07 compute-0 nova_compute[248510]: 2025-12-13 08:41:07.713 248514 DEBUG nova.objects.instance [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid f5485560-d9b8-44ef-9425-57a45ac866af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:41:07 compute-0 nova_compute[248510]: 2025-12-13 08:41:07.737 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:41:07 compute-0 nova_compute[248510]: 2025-12-13 08:41:07.737 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Ensure instance console log exists: /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:41:07 compute-0 nova_compute[248510]: 2025-12-13 08:41:07.738 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:07 compute-0 nova_compute[248510]: 2025-12-13 08:41:07.738 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:07 compute-0 nova_compute[248510]: 2025-12-13 08:41:07.739 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 323 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 179 op/s
Dec 13 08:41:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:41:08 compute-0 nova_compute[248510]: 2025-12-13 08:41:08.434 248514 DEBUG nova.network.neutron [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Successfully created port: 685c5b77-ceb9-42c7-86cb-933b381677ba _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:41:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Dec 13 08:41:09 compute-0 ceph-mon[76537]: pgmap v2348: 321 pgs: 321 active+clean; 323 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 179 op/s
Dec 13 08:41:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Dec 13 08:41:09 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Dec 13 08:41:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:41:09
Dec 13 08:41:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:41:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:41:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr']
Dec 13 08:41:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:41:09 compute-0 nova_compute[248510]: 2025-12-13 08:41:09.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:09 compute-0 nova_compute[248510]: 2025-12-13 08:41:09.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:41:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 350 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.6 MiB/s wr, 277 op/s
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:41:10 compute-0 ceph-mon[76537]: osdmap e260: 3 total, 3 up, 3 in
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:41:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:41:11 compute-0 nova_compute[248510]: 2025-12-13 08:41:11.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:11 compute-0 ceph-mon[76537]: pgmap v2350: 321 pgs: 321 active+clean; 350 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.6 MiB/s wr, 277 op/s
Dec 13 08:41:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 364 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.5 MiB/s wr, 262 op/s
Dec 13 08:41:11 compute-0 nova_compute[248510]: 2025-12-13 08:41:11.838 248514 DEBUG nova.network.neutron [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Successfully updated port: 685c5b77-ceb9-42c7-86cb-933b381677ba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:41:11 compute-0 nova_compute[248510]: 2025-12-13 08:41:11.942 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-f5485560-d9b8-44ef-9425-57a45ac866af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:41:11 compute-0 nova_compute[248510]: 2025-12-13 08:41:11.942 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-f5485560-d9b8-44ef-9425-57a45ac866af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:41:11 compute-0 nova_compute[248510]: 2025-12-13 08:41:11.942 248514 DEBUG nova.network.neutron [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:41:12 compute-0 nova_compute[248510]: 2025-12-13 08:41:12.029 248514 DEBUG nova.compute.manager [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-changed-685c5b77-ceb9-42c7-86cb-933b381677ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:12 compute-0 nova_compute[248510]: 2025-12-13 08:41:12.030 248514 DEBUG nova.compute.manager [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Refreshing instance network info cache due to event network-changed-685c5b77-ceb9-42c7-86cb-933b381677ba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:41:12 compute-0 nova_compute[248510]: 2025-12-13 08:41:12.030 248514 DEBUG oslo_concurrency.lockutils [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f5485560-d9b8-44ef-9425-57a45ac866af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:41:12 compute-0 nova_compute[248510]: 2025-12-13 08:41:12.299 248514 DEBUG nova.network.neutron [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:41:12 compute-0 ceph-mon[76537]: pgmap v2351: 321 pgs: 321 active+clean; 364 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.5 MiB/s wr, 262 op/s
Dec 13 08:41:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:41:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Dec 13 08:41:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Dec 13 08:41:12 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Dec 13 08:41:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 353 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 596 KiB/s rd, 4.3 MiB/s wr, 117 op/s
Dec 13 08:41:13 compute-0 ceph-mon[76537]: osdmap e261: 3 total, 3 up, 3 in
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.161 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.161 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.162 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.162 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.162 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.163 248514 INFO nova.compute.manager [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Terminating instance
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.164 248514 DEBUG nova.compute.manager [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:14 compute-0 kernel: tapf767e871-4f (unregistering): left promiscuous mode
Dec 13 08:41:14 compute-0 NetworkManager[50376]: <info>  [1765615274.3580] device (tapf767e871-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:14 compute-0 ovn_controller[148476]: 2025-12-13T08:41:14Z|00930|binding|INFO|Releasing lport f767e871-4f9e-414e-a61d-c70cffe80128 from this chassis (sb_readonly=0)
Dec 13 08:41:14 compute-0 ovn_controller[148476]: 2025-12-13T08:41:14Z|00931|binding|INFO|Setting lport f767e871-4f9e-414e-a61d-c70cffe80128 down in Southbound
Dec 13 08:41:14 compute-0 ovn_controller[148476]: 2025-12-13T08:41:14Z|00932|binding|INFO|Removing iface tapf767e871-4f ovn-installed in OVS
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.370 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.388 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:14 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.415 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:d2:10 10.100.0.4'], port_security=['fa:16:3e:96:d2:10 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '658e5f04-399b-4a8a-8680-5ae9717949c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77183472-893b-4c33-ab3e-e88f01770ed8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f767e871-4f9e-414e-a61d-c70cffe80128) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:41:14 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005b.scope: Consumed 17.854s CPU time.
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.416 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f767e871-4f9e-414e-a61d-c70cffe80128 in datapath 369f7528-6571-47b6-a030-5281647e1eac unbound from our chassis
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.418 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac
Dec 13 08:41:14 compute-0 systemd-machined[210538]: Machine qemu-113-instance-0000005b terminated.
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.435 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb10d85-ab0a-45c4-b14b-4602c556cd06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.471 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[65d8db0c-e4cc-438e-8b7f-5f47426b7faf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.474 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1f0b04-1b0e-49cc-8d84-ff9eeff857fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.502 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fda9e149-121f-49df-9cda-ebfb624f0484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.521 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c86546f9-ffe5-413f-b1e8-6be8503865fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 34886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341058, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[88ad8d02-d5d3-4126-8fa9-0f9a2b9d3f64]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763117, 'tstamp': 763117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341059, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763121, 'tstamp': 763121}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341059, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.575 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.583 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.582 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.583 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.584 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.584 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.616 248514 INFO nova.virt.libvirt.driver [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Instance destroyed successfully.
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.617 248514 DEBUG nova.objects.instance [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'resources' on Instance uuid 658e5f04-399b-4a8a-8680-5ae9717949c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.645 248514 DEBUG nova.virt.libvirt.vif [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:38:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2058674229',display_name='tempest-ServerActionsTestOtherB-server-2058674229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2058674229',id=91,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:39:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-hq6btaus',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:39:04Z,user_data=None,user_id='7507939da64e4320a1c6f389d0fc9045',uuid=658e5f04-399b-4a8a-8680-5ae9717949c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.645 248514 DEBUG nova.network.os_vif_util [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.646 248514 DEBUG nova.network.os_vif_util [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.646 248514 DEBUG os_vif [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.648 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.648 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf767e871-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.655 248514 INFO os_vif [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f')
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.738 248514 DEBUG nova.network.neutron [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Updating instance_info_cache with network_info: [{"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.764 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-f5485560-d9b8-44ef-9425-57a45ac866af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.765 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Instance network_info: |[{"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.765 248514 DEBUG oslo_concurrency.lockutils [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f5485560-d9b8-44ef-9425-57a45ac866af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.766 248514 DEBUG nova.network.neutron [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Refreshing network info cache for port 685c5b77-ceb9-42c7-86cb-933b381677ba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.769 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Start _get_guest_xml network_info=[{"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.774 248514 WARNING nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.785 248514 DEBUG nova.virt.libvirt.host [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.787 248514 DEBUG nova.virt.libvirt.host [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.792 248514 DEBUG nova.virt.libvirt.host [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.792 248514 DEBUG nova.virt.libvirt.host [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.793 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.793 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.794 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.794 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.794 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.795 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.795 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.795 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.796 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.796 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.796 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.797 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:41:14 compute-0 nova_compute[248510]: 2025-12-13 08:41:14.800 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:14 compute-0 ceph-mon[76537]: pgmap v2353: 321 pgs: 321 active+clean; 353 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 596 KiB/s rd, 4.3 MiB/s wr, 117 op/s
Dec 13 08:41:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:41:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/673103078' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:41:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:41:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/673103078' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:41:15 compute-0 ovn_controller[148476]: 2025-12-13T08:41:15Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:82:3a 10.100.0.3
Dec 13 08:41:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:41:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640700649' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:15 compute-0 nova_compute[248510]: 2025-12-13 08:41:15.468 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:15 compute-0 nova_compute[248510]: 2025-12-13 08:41:15.498 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:15 compute-0 nova_compute[248510]: 2025-12-13 08:41:15.504 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 326 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 737 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Dec 13 08:41:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:41:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695515673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.077 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.079 248514 DEBUG nova.virt.libvirt.vif [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:41:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1093776704',display_name='tempest-ServersTestJSON-server-1093776704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1093776704',id=95,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-8z05o414',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:41:05Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=f5485560-d9b8-44ef-9425-57a45ac866af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.080 248514 DEBUG nova.network.os_vif_util [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.081 248514 DEBUG nova.network.os_vif_util [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.082 248514 DEBUG nova.objects.instance [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid f5485560-d9b8-44ef-9425-57a45ac866af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.102 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:41:16 compute-0 nova_compute[248510]:   <uuid>f5485560-d9b8-44ef-9425-57a45ac866af</uuid>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   <name>instance-0000005f</name>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersTestJSON-server-1093776704</nova:name>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:41:14</nova:creationTime>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:41:16 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:41:16 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:41:16 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:41:16 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:41:16 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:41:16 compute-0 nova_compute[248510]:         <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 08:41:16 compute-0 nova_compute[248510]:         <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:41:16 compute-0 nova_compute[248510]:         <nova:port uuid="685c5b77-ceb9-42c7-86cb-933b381677ba">
Dec 13 08:41:16 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <system>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <entry name="serial">f5485560-d9b8-44ef-9425-57a45ac866af</entry>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <entry name="uuid">f5485560-d9b8-44ef-9425-57a45ac866af</entry>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     </system>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   <os>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   </os>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   <features>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   </features>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/f5485560-d9b8-44ef-9425-57a45ac866af_disk">
Dec 13 08:41:16 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       </source>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:41:16 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/f5485560-d9b8-44ef-9425-57a45ac866af_disk.config">
Dec 13 08:41:16 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       </source>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:41:16 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:c7:03:3c"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <target dev="tap685c5b77-ce"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/console.log" append="off"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <video>
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     </video>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:41:16 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:41:16 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:41:16 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:41:16 compute-0 nova_compute[248510]: </domain>
Dec 13 08:41:16 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.104 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Preparing to wait for external event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.104 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.105 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.106 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.107 248514 DEBUG nova.virt.libvirt.vif [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:41:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1093776704',display_name='tempest-ServersTestJSON-server-1093776704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1093776704',id=95,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-8z05o414',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:41:05Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=f5485560-d9b8-44ef-9425-57a45ac866af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.107 248514 DEBUG nova.network.os_vif_util [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.108 248514 DEBUG nova.network.os_vif_util [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.108 248514 DEBUG os_vif [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.109 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.109 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.110 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.113 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap685c5b77-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.114 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap685c5b77-ce, col_values=(('external_ids', {'iface-id': '685c5b77-ceb9-42c7-86cb-933b381677ba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:03:3c', 'vm-uuid': 'f5485560-d9b8-44ef-9425-57a45ac866af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:16 compute-0 NetworkManager[50376]: <info>  [1765615276.1171] manager: (tap685c5b77-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.117 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.121 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.123 248514 INFO os_vif [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce')
Dec 13 08:41:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/673103078' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:41:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/673103078' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:41:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/640700649' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.586 248514 DEBUG nova.compute.manager [req-c9ce9b6f-9654-4aec-a69d-25a5c47c7a65 req-0eebe3bb-6b1d-4a6a-a34a-e351220bde26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-vif-unplugged-f767e871-4f9e-414e-a61d-c70cffe80128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.586 248514 DEBUG oslo_concurrency.lockutils [req-c9ce9b6f-9654-4aec-a69d-25a5c47c7a65 req-0eebe3bb-6b1d-4a6a-a34a-e351220bde26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.586 248514 DEBUG oslo_concurrency.lockutils [req-c9ce9b6f-9654-4aec-a69d-25a5c47c7a65 req-0eebe3bb-6b1d-4a6a-a34a-e351220bde26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.586 248514 DEBUG oslo_concurrency.lockutils [req-c9ce9b6f-9654-4aec-a69d-25a5c47c7a65 req-0eebe3bb-6b1d-4a6a-a34a-e351220bde26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.587 248514 DEBUG nova.compute.manager [req-c9ce9b6f-9654-4aec-a69d-25a5c47c7a65 req-0eebe3bb-6b1d-4a6a-a34a-e351220bde26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] No waiting events found dispatching network-vif-unplugged-f767e871-4f9e-414e-a61d-c70cffe80128 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.587 248514 DEBUG nova.compute.manager [req-c9ce9b6f-9654-4aec-a69d-25a5c47c7a65 req-0eebe3bb-6b1d-4a6a-a34a-e351220bde26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-vif-unplugged-f767e871-4f9e-414e-a61d-c70cffe80128 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.600 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.601 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.601 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:c7:03:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.601 248514 INFO nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Using config drive
Dec 13 08:41:16 compute-0 nova_compute[248510]: 2025-12-13 08:41:16.963 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:17 compute-0 ceph-mon[76537]: pgmap v2354: 321 pgs: 321 active+clean; 326 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 737 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Dec 13 08:41:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1695515673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 326 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 215 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Dec 13 08:41:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:41:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Dec 13 08:41:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Dec 13 08:41:17 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Dec 13 08:41:18 compute-0 nova_compute[248510]: 2025-12-13 08:41:18.809 248514 INFO nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Creating config drive at /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/disk.config
Dec 13 08:41:18 compute-0 nova_compute[248510]: 2025-12-13 08:41:18.815 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqr_292fb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:18 compute-0 ceph-mon[76537]: pgmap v2355: 321 pgs: 321 active+clean; 326 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 215 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Dec 13 08:41:18 compute-0 ceph-mon[76537]: osdmap e262: 3 total, 3 up, 3 in
Dec 13 08:41:18 compute-0 nova_compute[248510]: 2025-12-13 08:41:18.962 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqr_292fb" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:18 compute-0 nova_compute[248510]: 2025-12-13 08:41:18.990 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:18 compute-0 nova_compute[248510]: 2025-12-13 08:41:18.994 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/disk.config f5485560-d9b8-44ef-9425-57a45ac866af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.036 248514 DEBUG nova.compute.manager [req-4394d195-2b11-4a75-9da7-2dc0506e8653 req-f7476d20-7072-424b-b37c-949942630c69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.037 248514 DEBUG oslo_concurrency.lockutils [req-4394d195-2b11-4a75-9da7-2dc0506e8653 req-f7476d20-7072-424b-b37c-949942630c69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.037 248514 DEBUG oslo_concurrency.lockutils [req-4394d195-2b11-4a75-9da7-2dc0506e8653 req-f7476d20-7072-424b-b37c-949942630c69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.037 248514 DEBUG oslo_concurrency.lockutils [req-4394d195-2b11-4a75-9da7-2dc0506e8653 req-f7476d20-7072-424b-b37c-949942630c69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.037 248514 DEBUG nova.compute.manager [req-4394d195-2b11-4a75-9da7-2dc0506e8653 req-f7476d20-7072-424b-b37c-949942630c69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] No waiting events found dispatching network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.038 248514 WARNING nova.compute.manager [req-4394d195-2b11-4a75-9da7-2dc0506e8653 req-f7476d20-7072-424b-b37c-949942630c69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received unexpected event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 for instance with vm_state active and task_state deleting.
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.152 248514 INFO nova.virt.libvirt.driver [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Deleting instance files /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0_del
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.153 248514 INFO nova.virt.libvirt.driver [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Deletion of /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0_del complete
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.198 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/disk.config f5485560-d9b8-44ef-9425-57a45ac866af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.199 248514 INFO nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Deleting local config drive /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/disk.config because it was imported into RBD.
Dec 13 08:41:19 compute-0 kernel: tap685c5b77-ce: entered promiscuous mode
Dec 13 08:41:19 compute-0 NetworkManager[50376]: <info>  [1765615279.2547] manager: (tap685c5b77-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Dec 13 08:41:19 compute-0 ovn_controller[148476]: 2025-12-13T08:41:19Z|00933|binding|INFO|Claiming lport 685c5b77-ceb9-42c7-86cb-933b381677ba for this chassis.
Dec 13 08:41:19 compute-0 ovn_controller[148476]: 2025-12-13T08:41:19Z|00934|binding|INFO|685c5b77-ceb9-42c7-86cb-933b381677ba: Claiming fa:16:3e:c7:03:3c 10.100.0.11
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.255 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.263 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:03:3c 10.100.0.11'], port_security=['fa:16:3e:c7:03:3c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f5485560-d9b8-44ef-9425-57a45ac866af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=685c5b77-ceb9-42c7-86cb-933b381677ba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.266 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 685c5b77-ceb9-42c7-86cb-933b381677ba in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.268 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:41:19 compute-0 ovn_controller[148476]: 2025-12-13T08:41:19Z|00935|binding|INFO|Setting lport 685c5b77-ceb9-42c7-86cb-933b381677ba ovn-installed in OVS
Dec 13 08:41:19 compute-0 ovn_controller[148476]: 2025-12-13T08:41:19Z|00936|binding|INFO|Setting lport 685c5b77-ceb9-42c7-86cb-933b381677ba up in Southbound
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.275 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.285 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e3584a43-acc3-4260-9e5b-2864319d3c2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:19 compute-0 systemd-udevd[341228]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:41:19 compute-0 systemd-machined[210538]: New machine qemu-118-instance-0000005f.
Dec 13 08:41:19 compute-0 NetworkManager[50376]: <info>  [1765615279.2998] device (tap685c5b77-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:41:19 compute-0 NetworkManager[50376]: <info>  [1765615279.3012] device (tap685c5b77-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:41:19 compute-0 systemd[1]: Started Virtual Machine qemu-118-instance-0000005f.
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.304 248514 INFO nova.compute.manager [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Took 5.14 seconds to destroy the instance on the hypervisor.
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.305 248514 DEBUG oslo.service.loopingcall [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.305 248514 DEBUG nova.compute.manager [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.305 248514 DEBUG nova.network.neutron [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.320 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7f7abd-0a99-4f31-baec-2427c0f4487f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.324 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4c70e3-32c7-43b2-ac20-049fc90c2002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.357 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6471eeff-96bb-4e7e-9b6e-3c1ddc10f7b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.376 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b509f8fd-78cd-4c08-bff0-91e8d092cf59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341240, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.398 248514 DEBUG nova.network.neutron [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Updated VIF entry in instance network info cache for port 685c5b77-ceb9-42c7-86cb-933b381677ba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.399 248514 DEBUG nova.network.neutron [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Updating instance_info_cache with network_info: [{"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.401 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5f006b-6e87-4ea2-b26a-96bc697955e7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341242, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341242, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.403 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.405 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.406 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.406 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.406 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.407 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.425 248514 DEBUG oslo_concurrency.lockutils [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f5485560-d9b8-44ef-9425-57a45ac866af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.582 248514 DEBUG nova.compute.manager [req-fe8cb056-70e9-46c2-9028-3041e79eb035 req-503300f1-58ec-495c-8b54-e437523f2a27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.583 248514 DEBUG oslo_concurrency.lockutils [req-fe8cb056-70e9-46c2-9028-3041e79eb035 req-503300f1-58ec-495c-8b54-e437523f2a27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.583 248514 DEBUG oslo_concurrency.lockutils [req-fe8cb056-70e9-46c2-9028-3041e79eb035 req-503300f1-58ec-495c-8b54-e437523f2a27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.584 248514 DEBUG oslo_concurrency.lockutils [req-fe8cb056-70e9-46c2-9028-3041e79eb035 req-503300f1-58ec-495c-8b54-e437523f2a27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.584 248514 DEBUG nova.compute.manager [req-fe8cb056-70e9-46c2-9028-3041e79eb035 req-503300f1-58ec-495c-8b54-e437523f2a27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Processing event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:41:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 268 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 843 KiB/s rd, 40 KiB/s wr, 136 op/s
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.867 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615279.8673797, f5485560-d9b8-44ef-9425-57a45ac866af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.868 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] VM Started (Lifecycle Event)
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.870 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.872 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.875 248514 INFO nova.virt.libvirt.driver [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Instance spawned successfully.
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.876 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.897 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.901 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.901 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.901 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.902 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.902 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.902 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.906 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.979 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.979 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615279.8676324, f5485560-d9b8-44ef-9425-57a45ac866af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:19 compute-0 nova_compute[248510]: 2025-12-13 08:41:19.979 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] VM Paused (Lifecycle Event)
Dec 13 08:41:20 compute-0 nova_compute[248510]: 2025-12-13 08:41:20.019 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:20 compute-0 nova_compute[248510]: 2025-12-13 08:41:20.023 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615279.8720002, f5485560-d9b8-44ef-9425-57a45ac866af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:20 compute-0 nova_compute[248510]: 2025-12-13 08:41:20.024 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] VM Resumed (Lifecycle Event)
Dec 13 08:41:20 compute-0 nova_compute[248510]: 2025-12-13 08:41:20.028 248514 INFO nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Took 14.14 seconds to spawn the instance on the hypervisor.
Dec 13 08:41:20 compute-0 nova_compute[248510]: 2025-12-13 08:41:20.029 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:20 compute-0 nova_compute[248510]: 2025-12-13 08:41:20.073 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:20 compute-0 nova_compute[248510]: 2025-12-13 08:41:20.077 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:41:20 compute-0 nova_compute[248510]: 2025-12-13 08:41:20.116 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:41:20 compute-0 nova_compute[248510]: 2025-12-13 08:41:20.854 248514 INFO nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Took 17.22 seconds to build instance.
Dec 13 08:41:20 compute-0 nova_compute[248510]: 2025-12-13 08:41:20.902 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020360573277216046 of space, bias 1.0, pg target 0.6108171983164814 quantized to 32 (current 32)
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677608684985178 of space, bias 1.0, pg target 0.20032826054955533 quantized to 32 (current 32)
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.938341245448177e-07 of space, bias 4.0, pg target 0.0007126009494537812 quantized to 16 (current 32)
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.075 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:21 compute-0 ceph-mon[76537]: pgmap v2357: 321 pgs: 321 active+clean; 268 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 843 KiB/s rd, 40 KiB/s wr, 136 op/s
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.418 248514 DEBUG nova.network.neutron [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.445 248514 INFO nova.compute.manager [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Took 2.14 seconds to deallocate network for instance.
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.512 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.513 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.691 248514 DEBUG oslo_concurrency.processutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.741 248514 DEBUG nova.compute.manager [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.741 248514 DEBUG oslo_concurrency.lockutils [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.742 248514 DEBUG oslo_concurrency.lockutils [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.742 248514 DEBUG oslo_concurrency.lockutils [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.742 248514 DEBUG nova.compute.manager [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] No waiting events found dispatching network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.742 248514 WARNING nova.compute.manager [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received unexpected event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba for instance with vm_state active and task_state None.
Dec 13 08:41:21 compute-0 nova_compute[248510]: 2025-12-13 08:41:21.742 248514 DEBUG nova.compute.manager [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-vif-deleted-f767e871-4f9e-414e-a61d-c70cffe80128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 246 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 754 KiB/s rd, 52 KiB/s wr, 125 op/s
Dec 13 08:41:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:41:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739840725' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.297 248514 DEBUG oslo_concurrency.processutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.303 248514 DEBUG nova.compute.provider_tree [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.325 248514 DEBUG nova.scheduler.client.report [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.352 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:22 compute-0 ceph-mon[76537]: pgmap v2358: 321 pgs: 321 active+clean; 246 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 754 KiB/s rd, 52 KiB/s wr, 125 op/s
Dec 13 08:41:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3739840725' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.380 248514 INFO nova.scheduler.client.report [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Deleted allocations for instance 658e5f04-399b-4a8a-8680-5ae9717949c0
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.499 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.984 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.984 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.985 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.985 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.985 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.987 248514 INFO nova.compute.manager [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Terminating instance
Dec 13 08:41:22 compute-0 nova_compute[248510]: 2025-12-13 08:41:22.988 248514 DEBUG nova.compute.manager [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:41:23 compute-0 kernel: tapea5aafe7-a7 (unregistering): left promiscuous mode
Dec 13 08:41:23 compute-0 NetworkManager[50376]: <info>  [1765615283.2592] device (tapea5aafe7-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:23 compute-0 ovn_controller[148476]: 2025-12-13T08:41:23Z|00937|binding|INFO|Releasing lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 from this chassis (sb_readonly=0)
Dec 13 08:41:23 compute-0 ovn_controller[148476]: 2025-12-13T08:41:23Z|00938|binding|INFO|Setting lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 down in Southbound
Dec 13 08:41:23 compute-0 ovn_controller[148476]: 2025-12-13T08:41:23Z|00939|binding|INFO|Removing iface tapea5aafe7-a7 ovn-installed in OVS
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.316 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:23.323 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:82:3a 10.100.0.3'], port_security=['fa:16:3e:e0:82:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b486227-b98c-4393-9a3c-aae3e3c419a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '9', 'neutron:security_group_ids': '35769c0b-1e0e-43bc-832c-d54c65a53a36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.181', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:41:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:23.324 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 in datapath 369f7528-6571-47b6-a030-5281647e1eac unbound from our chassis
Dec 13 08:41:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:23.326 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 369f7528-6571-47b6-a030-5281647e1eac, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:41:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:23.327 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3f39c1-6c83-42ed-b6b6-9544858c73c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:23.328 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-369f7528-6571-47b6-a030-5281647e1eac namespace which is not needed anymore
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.336 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:23 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000055.scope: Deactivated successfully.
Dec 13 08:41:23 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000055.scope: Consumed 13.542s CPU time.
Dec 13 08:41:23 compute-0 systemd-machined[210538]: Machine qemu-117-instance-00000055 terminated.
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.420 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance destroyed successfully.
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.421 248514 DEBUG nova.objects.instance [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'resources' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.441 248514 DEBUG nova.virt.libvirt.vif [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:37:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-647049604',display_name='tempest-ServerActionsTestOtherB-server-647049604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-647049604',id=85,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMCBc3yf7DLFWm969JJ3AJvRq1SqBawRmsOjScixeqlFSyjq4/Kpbcw0olzxybOT1DbERtB0mKMV4pquo3M97LIG1LWOqbG4HPkmobMKh41xqoYhtSOyaVVjlfwlnNokPA==',key_name='tempest-keypair-2123324397',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:41:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-ys1cli8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:41:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7507939da64e4320a1c6f389d0fc9045',uuid=9b486227-b98c-4393-9a3c-aae3e3c419a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.442 248514 DEBUG nova.network.os_vif_util [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.443 248514 DEBUG nova.network.os_vif_util [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.443 248514 DEBUG os_vif [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.444 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.445 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea5aafe7-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:23 compute-0 nova_compute[248510]: 2025-12-13 08:41:23.449 248514 INFO os_vif [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7')
Dec 13 08:41:23 compute-0 neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac[332189]: [NOTICE]   (332193) : haproxy version is 2.8.14-c23fe91
Dec 13 08:41:23 compute-0 neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac[332189]: [NOTICE]   (332193) : path to executable is /usr/sbin/haproxy
Dec 13 08:41:23 compute-0 neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac[332189]: [WARNING]  (332193) : Exiting Master process...
Dec 13 08:41:23 compute-0 neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac[332189]: [ALERT]    (332193) : Current worker (332195) exited with code 143 (Terminated)
Dec 13 08:41:23 compute-0 neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac[332189]: [WARNING]  (332193) : All workers exited. Exiting... (0)
Dec 13 08:41:23 compute-0 systemd[1]: libpod-2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6.scope: Deactivated successfully.
Dec 13 08:41:23 compute-0 podman[341337]: 2025-12-13 08:41:23.673207535 +0000 UTC m=+0.246770892 container died 2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c47fe0120125029b14bb1c725f8596f8bf16f2a2dd657933ca7281e9bf15dff-merged.mount: Deactivated successfully.
Dec 13 08:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6-userdata-shm.mount: Deactivated successfully.
Dec 13 08:41:23 compute-0 podman[341337]: 2025-12-13 08:41:23.789524986 +0000 UTC m=+0.363088323 container cleanup 2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:41:23 compute-0 systemd[1]: libpod-conmon-2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6.scope: Deactivated successfully.
Dec 13 08:41:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 246 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 46 KiB/s wr, 133 op/s
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.154 248514 DEBUG nova.compute.manager [req-cdb0ebe3-ebea-46f2-9198-55f7894ea14e req-7476fa08-fff6-4e8c-a66a-7ccb8ce64767 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-unplugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.155 248514 DEBUG oslo_concurrency.lockutils [req-cdb0ebe3-ebea-46f2-9198-55f7894ea14e req-7476fa08-fff6-4e8c-a66a-7ccb8ce64767 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.155 248514 DEBUG oslo_concurrency.lockutils [req-cdb0ebe3-ebea-46f2-9198-55f7894ea14e req-7476fa08-fff6-4e8c-a66a-7ccb8ce64767 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.155 248514 DEBUG oslo_concurrency.lockutils [req-cdb0ebe3-ebea-46f2-9198-55f7894ea14e req-7476fa08-fff6-4e8c-a66a-7ccb8ce64767 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.155 248514 DEBUG nova.compute.manager [req-cdb0ebe3-ebea-46f2-9198-55f7894ea14e req-7476fa08-fff6-4e8c-a66a-7ccb8ce64767 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] No waiting events found dispatching network-vif-unplugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.156 248514 DEBUG nova.compute.manager [req-cdb0ebe3-ebea-46f2-9198-55f7894ea14e req-7476fa08-fff6-4e8c-a66a-7ccb8ce64767 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-unplugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.957 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.958 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.958 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.958 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.958 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.959 248514 INFO nova.compute.manager [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Terminating instance
Dec 13 08:41:24 compute-0 nova_compute[248510]: 2025-12-13 08:41:24.960 248514 DEBUG nova.compute.manager [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:41:24 compute-0 ceph-mon[76537]: pgmap v2359: 321 pgs: 321 active+clean; 246 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 46 KiB/s wr, 133 op/s
Dec 13 08:41:25 compute-0 kernel: tap685c5b77-ce (unregistering): left promiscuous mode
Dec 13 08:41:25 compute-0 NetworkManager[50376]: <info>  [1765615285.4384] device (tap685c5b77-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:41:25 compute-0 ovn_controller[148476]: 2025-12-13T08:41:25Z|00940|binding|INFO|Releasing lport 685c5b77-ceb9-42c7-86cb-933b381677ba from this chassis (sb_readonly=0)
Dec 13 08:41:25 compute-0 ovn_controller[148476]: 2025-12-13T08:41:25Z|00941|binding|INFO|Setting lport 685c5b77-ceb9-42c7-86cb-933b381677ba down in Southbound
Dec 13 08:41:25 compute-0 ovn_controller[148476]: 2025-12-13T08:41:25Z|00942|binding|INFO|Removing iface tap685c5b77-ce ovn-installed in OVS
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.457 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:03:3c 10.100.0.11'], port_security=['fa:16:3e:c7:03:3c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f5485560-d9b8-44ef-9425-57a45ac866af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=685c5b77-ceb9-42c7-86cb-933b381677ba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.461 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:25 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Dec 13 08:41:25 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005f.scope: Consumed 5.635s CPU time.
Dec 13 08:41:25 compute-0 systemd-machined[210538]: Machine qemu-118-instance-0000005f terminated.
Dec 13 08:41:25 compute-0 podman[341388]: 2025-12-13 08:41:25.570791409 +0000 UTC m=+1.758041225 container remove 2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.578 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94619704-9147-43a1-9516-b3c16fa6322d]: (4, ('Sat Dec 13 08:41:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac (2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6)\n2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6\nSat Dec 13 08:41:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac (2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6)\n2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.579 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0dcba0-3a1f-494e-8644-309b75064070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.581 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.584 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.600 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:25 compute-0 kernel: tap369f7528-60: left promiscuous mode
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.607 248514 INFO nova.virt.libvirt.driver [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Instance destroyed successfully.
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.608 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.608 248514 DEBUG nova.objects.instance [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid f5485560-d9b8-44ef-9425-57a45ac866af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.610 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b3f7e8-e6da-4fb5-8815-4f2b667646a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.622 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9afcd2-9e89-4f95-9b27-9383032768c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.624 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9162d51a-ce1b-40a0-8616-0a40f67c248c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd2f670-f2f0-45ee-a52a-723589349837]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763096, 'reachable_time': 31742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341421, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.643 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-369f7528-6571-47b6-a030-5281647e1eac deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.643 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dbcdc5c8-3909-4c69-a249-e660d08354c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.644 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 685c5b77-ceb9-42c7-86cb-933b381677ba in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis
Dec 13 08:41:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d369f7528\x2d6571\x2d47b6\x2da030\x2d5281647e1eac.mount: Deactivated successfully.
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.645 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.656 248514 DEBUG nova.virt.libvirt.vif [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:41:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1093776704',display_name='tempest-ServersTestJSON-server-1093776704',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1093776704',id=95,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:41:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-8z05o414',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:41:20Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=f5485560-d9b8-44ef-9425-57a45ac866af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.657 248514 DEBUG nova.network.os_vif_util [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.657 248514 DEBUG nova.network.os_vif_util [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.658 248514 DEBUG os_vif [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.659 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.659 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685c5b77-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.661 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.662 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[41558524-2254-4708-b4e3-771376eafc29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.664 248514 INFO os_vif [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce')
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.688 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0a937081-8f3e-48e5-afa6-5e65028a4cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.691 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[349fa624-4ddd-44d5-a31b-1b4494a53825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.721 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5811524a-4c6b-4f61-8e2a-135959980820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.738 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0fcf9cb9-f2c7-488c-869d-331d8af832e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341446, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.762 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1864ff4-40c3-418d-a096-6c858dd27182]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341447, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341447, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.764 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:25 compute-0 nova_compute[248510]: 2025-12-13 08:41:25.766 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.769 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.769 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.769 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.770 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 205 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 44 KiB/s wr, 180 op/s
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.077 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.465 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.466 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.466 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.466 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.466 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] No waiting events found dispatching network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.466 248514 WARNING nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received unexpected event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for instance with vm_state active and task_state deleting.
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.467 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-vif-unplugged-685c5b77-ceb9-42c7-86cb-933b381677ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.467 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.467 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.467 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.468 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] No waiting events found dispatching network-vif-unplugged-685c5b77-ceb9-42c7-86cb-933b381677ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.468 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-vif-unplugged-685c5b77-ceb9-42c7-86cb-933b381677ba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.468 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.468 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.468 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.469 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.469 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] No waiting events found dispatching network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:41:26 compute-0 nova_compute[248510]: 2025-12-13 08:41:26.469 248514 WARNING nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received unexpected event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba for instance with vm_state active and task_state deleting.
Dec 13 08:41:26 compute-0 ceph-mon[76537]: pgmap v2360: 321 pgs: 321 active+clean; 205 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 44 KiB/s wr, 180 op/s
Dec 13 08:41:27 compute-0 nova_compute[248510]: 2025-12-13 08:41:27.124 248514 INFO nova.virt.libvirt.driver [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deleting instance files /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8_del
Dec 13 08:41:27 compute-0 nova_compute[248510]: 2025-12-13 08:41:27.124 248514 INFO nova.virt.libvirt.driver [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deletion of /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8_del complete
Dec 13 08:41:27 compute-0 nova_compute[248510]: 2025-12-13 08:41:27.226 248514 INFO nova.compute.manager [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Took 4.24 seconds to destroy the instance on the hypervisor.
Dec 13 08:41:27 compute-0 nova_compute[248510]: 2025-12-13 08:41:27.227 248514 DEBUG oslo.service.loopingcall [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:41:27 compute-0 nova_compute[248510]: 2025-12-13 08:41:27.227 248514 DEBUG nova.compute.manager [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:41:27 compute-0 nova_compute[248510]: 2025-12-13 08:41:27.227 248514 DEBUG nova.network.neutron [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:41:27 compute-0 nova_compute[248510]: 2025-12-13 08:41:27.275 248514 INFO nova.virt.libvirt.driver [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Deleting instance files /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af_del
Dec 13 08:41:27 compute-0 nova_compute[248510]: 2025-12-13 08:41:27.276 248514 INFO nova.virt.libvirt.driver [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Deletion of /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af_del complete
Dec 13 08:41:27 compute-0 nova_compute[248510]: 2025-12-13 08:41:27.429 248514 INFO nova.compute.manager [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Took 2.47 seconds to destroy the instance on the hypervisor.
Dec 13 08:41:27 compute-0 nova_compute[248510]: 2025-12-13 08:41:27.429 248514 DEBUG oslo.service.loopingcall [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:41:27 compute-0 nova_compute[248510]: 2025-12-13 08:41:27.430 248514 DEBUG nova.compute.manager [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:41:27 compute-0 nova_compute[248510]: 2025-12-13 08:41:27.430 248514 DEBUG nova.network.neutron [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:41:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 205 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 44 KiB/s wr, 180 op/s
Dec 13 08:41:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:41:28 compute-0 nova_compute[248510]: 2025-12-13 08:41:28.927 248514 DEBUG nova.network.neutron [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:41:28 compute-0 nova_compute[248510]: 2025-12-13 08:41:28.955 248514 INFO nova.compute.manager [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Took 1.52 seconds to deallocate network for instance.
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.049 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.049 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:29 compute-0 ceph-mon[76537]: pgmap v2361: 321 pgs: 321 active+clean; 205 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 44 KiB/s wr, 180 op/s
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.167 248514 DEBUG oslo_concurrency.processutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.239 248514 DEBUG nova.compute.manager [req-f851926b-3c58-4bc4-89fd-0c732140294b req-6ae919cf-39c3-421c-a3bb-0bfa95df46cc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-vif-deleted-685c5b77-ceb9-42c7-86cb-933b381677ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.240 248514 DEBUG nova.network.neutron [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.272 248514 INFO nova.compute.manager [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Took 2.04 seconds to deallocate network for instance.
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.394 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.616 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615274.6148403, 658e5f04-399b-4a8a-8680-5ae9717949c0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.616 248514 INFO nova.compute.manager [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] VM Stopped (Lifecycle Event)
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.639 248514 DEBUG nova.compute.manager [None req-0215539b-c72d-461a-bfff-97fd15ede42b - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:41:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4163737752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.745 248514 DEBUG oslo_concurrency.processutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.751 248514 DEBUG nova.compute.provider_tree [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.770 248514 DEBUG nova.scheduler.client.report [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.805 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.807 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 142 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 37 KiB/s wr, 154 op/s
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.837 248514 INFO nova.scheduler.client.report [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance f5485560-d9b8-44ef-9425-57a45ac866af
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.923 248514 DEBUG oslo_concurrency.processutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:29 compute-0 nova_compute[248510]: 2025-12-13 08:41:29.987 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:30 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4163737752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:30 compute-0 nova_compute[248510]: 2025-12-13 08:41:30.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:41:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4051534305' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:30 compute-0 nova_compute[248510]: 2025-12-13 08:41:30.839 248514 DEBUG oslo_concurrency.processutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.915s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:30 compute-0 nova_compute[248510]: 2025-12-13 08:41:30.845 248514 DEBUG nova.compute.provider_tree [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:41:30 compute-0 nova_compute[248510]: 2025-12-13 08:41:30.872 248514 DEBUG nova.scheduler.client.report [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:41:30 compute-0 nova_compute[248510]: 2025-12-13 08:41:30.905 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:30 compute-0 nova_compute[248510]: 2025-12-13 08:41:30.954 248514 INFO nova.scheduler.client.report [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Deleted allocations for instance 9b486227-b98c-4393-9a3c-aae3e3c419a8
Dec 13 08:41:31 compute-0 nova_compute[248510]: 2025-12-13 08:41:31.046 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:31 compute-0 nova_compute[248510]: 2025-12-13 08:41:31.118 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:31 compute-0 nova_compute[248510]: 2025-12-13 08:41:31.338 248514 DEBUG nova.compute.manager [req-9e4097a7-463d-42e7-8761-723b3bb406dc req-fef85a29-f142-432f-b0f0-d8740d7cd5ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-deleted-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:31 compute-0 ceph-mon[76537]: pgmap v2362: 321 pgs: 321 active+clean; 142 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 37 KiB/s wr, 154 op/s
Dec 13 08:41:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4051534305' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 130 op/s
Dec 13 08:41:32 compute-0 ceph-mon[76537]: pgmap v2363: 321 pgs: 321 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 130 op/s
Dec 13 08:41:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:41:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 127 op/s
Dec 13 08:41:34 compute-0 nova_compute[248510]: 2025-12-13 08:41:34.568 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:34 compute-0 nova_compute[248510]: 2025-12-13 08:41:34.568 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:34 compute-0 nova_compute[248510]: 2025-12-13 08:41:34.588 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:41:34 compute-0 nova_compute[248510]: 2025-12-13 08:41:34.670 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:34 compute-0 nova_compute[248510]: 2025-12-13 08:41:34.671 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:34 compute-0 nova_compute[248510]: 2025-12-13 08:41:34.679 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:41:34 compute-0 nova_compute[248510]: 2025-12-13 08:41:34.679 248514 INFO nova.compute.claims [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:41:34 compute-0 nova_compute[248510]: 2025-12-13 08:41:34.920 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:35 compute-0 ceph-mon[76537]: pgmap v2364: 321 pgs: 321 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 127 op/s
Dec 13 08:41:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:41:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805570996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.521 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.528 248514 DEBUG nova.compute.provider_tree [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.550 248514 DEBUG nova.scheduler.client.report [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.573 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.573 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.640 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.641 248514 DEBUG nova.network.neutron [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.705 248514 INFO nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.727 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:41:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 14 KiB/s wr, 91 op/s
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.839 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.840 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.841 248514 INFO nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Creating image(s)
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.867 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.895 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.929 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:35 compute-0 nova_compute[248510]: 2025-12-13 08:41:35.935 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.014 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.016 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.016 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.017 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.038 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.041 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 58550847-ec24-44b6-a066-642db86841ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.122 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3805570996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.399 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 58550847-ec24-44b6-a066-642db86841ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.432 248514 DEBUG nova.policy [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.466 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image 58550847-ec24-44b6-a066-642db86841ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.555 248514 DEBUG nova.objects.instance [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid 58550847-ec24-44b6-a066-642db86841ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.765 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.766 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Ensure instance console log exists: /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.766 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.766 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:36 compute-0 nova_compute[248510]: 2025-12-13 08:41:36.767 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:36 compute-0 podman[341685]: 2025-12-13 08:41:36.980008722 +0000 UTC m=+0.059906182 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 13 08:41:36 compute-0 podman[341684]: 2025-12-13 08:41:36.994995905 +0000 UTC m=+0.074898345 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 08:41:37 compute-0 podman[341683]: 2025-12-13 08:41:37.016886629 +0000 UTC m=+0.096286976 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec 13 08:41:37 compute-0 ceph-mon[76537]: pgmap v2365: 321 pgs: 321 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 14 KiB/s wr, 91 op/s
Dec 13 08:41:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.0 KiB/s wr, 41 op/s
Dec 13 08:41:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:41:38 compute-0 ceph-mon[76537]: pgmap v2366: 321 pgs: 321 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.0 KiB/s wr, 41 op/s
Dec 13 08:41:38 compute-0 nova_compute[248510]: 2025-12-13 08:41:38.421 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615283.4194164, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:38 compute-0 nova_compute[248510]: 2025-12-13 08:41:38.421 248514 INFO nova.compute.manager [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Stopped (Lifecycle Event)
Dec 13 08:41:38 compute-0 nova_compute[248510]: 2025-12-13 08:41:38.470 248514 DEBUG nova.network.neutron [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Successfully created port: 0f023371-37d6-4847-a15a-fa86a3f0a3f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:41:38 compute-0 nova_compute[248510]: 2025-12-13 08:41:38.493 248514 DEBUG nova.compute.manager [None req-a49f4f48-0949-40ff-9eab-6f80881d1690 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 150 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.1 MiB/s wr, 55 op/s
Dec 13 08:41:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:41:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:41:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:41:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:41:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:41:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:41:40 compute-0 nova_compute[248510]: 2025-12-13 08:41:40.605 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615285.604048, f5485560-d9b8-44ef-9425-57a45ac866af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:40 compute-0 nova_compute[248510]: 2025-12-13 08:41:40.606 248514 INFO nova.compute.manager [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] VM Stopped (Lifecycle Event)
Dec 13 08:41:40 compute-0 nova_compute[248510]: 2025-12-13 08:41:40.629 248514 DEBUG nova.compute.manager [None req-d23f33a4-6df9-40f4-bcc2-07bc5f2211f4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:40 compute-0 nova_compute[248510]: 2025-12-13 08:41:40.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:41 compute-0 ceph-mon[76537]: pgmap v2367: 321 pgs: 321 active+clean; 150 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.1 MiB/s wr, 55 op/s
Dec 13 08:41:41 compute-0 nova_compute[248510]: 2025-12-13 08:41:41.123 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:41 compute-0 nova_compute[248510]: 2025-12-13 08:41:41.462 248514 DEBUG nova.network.neutron [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Successfully updated port: 0f023371-37d6-4847-a15a-fa86a3f0a3f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:41:41 compute-0 nova_compute[248510]: 2025-12-13 08:41:41.486 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-58550847-ec24-44b6-a066-642db86841ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:41:41 compute-0 nova_compute[248510]: 2025-12-13 08:41:41.487 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-58550847-ec24-44b6-a066-642db86841ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:41:41 compute-0 nova_compute[248510]: 2025-12-13 08:41:41.487 248514 DEBUG nova.network.neutron [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:41:41 compute-0 nova_compute[248510]: 2025-12-13 08:41:41.589 248514 DEBUG nova.compute.manager [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-changed-0f023371-37d6-4847-a15a-fa86a3f0a3f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:41 compute-0 nova_compute[248510]: 2025-12-13 08:41:41.590 248514 DEBUG nova.compute.manager [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Refreshing instance network info cache due to event network-changed-0f023371-37d6-4847-a15a-fa86a3f0a3f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:41:41 compute-0 nova_compute[248510]: 2025-12-13 08:41:41.590 248514 DEBUG oslo_concurrency.lockutils [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-58550847-ec24-44b6-a066-642db86841ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:41:41 compute-0 nova_compute[248510]: 2025-12-13 08:41:41.706 248514 DEBUG nova.network.neutron [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:41:41 compute-0 ovn_controller[148476]: 2025-12-13T08:41:41Z|00943|binding|INFO|Releasing lport 0f8d26a1-147d-466a-8164-8d2036166124 from this chassis (sb_readonly=0)
Dec 13 08:41:41 compute-0 nova_compute[248510]: 2025-12-13 08:41:41.806 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 13 08:41:41 compute-0 ovn_controller[148476]: 2025-12-13T08:41:41Z|00944|binding|INFO|Releasing lport 0f8d26a1-147d-466a-8164-8d2036166124 from this chassis (sb_readonly=0)
Dec 13 08:41:41 compute-0 nova_compute[248510]: 2025-12-13 08:41:41.900 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:41:43 compute-0 ceph-mon[76537]: pgmap v2368: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 13 08:41:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:41:44 compute-0 ceph-mon[76537]: pgmap v2369: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.649 248514 DEBUG nova.network.neutron [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Updating instance_info_cache with network_info: [{"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.893 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-58550847-ec24-44b6-a066-642db86841ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.893 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Instance network_info: |[{"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.893 248514 DEBUG oslo_concurrency.lockutils [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-58550847-ec24-44b6-a066-642db86841ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.894 248514 DEBUG nova.network.neutron [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Refreshing network info cache for port 0f023371-37d6-4847-a15a-fa86a3f0a3f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.897 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Start _get_guest_xml network_info=[{"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.903 248514 WARNING nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.921 248514 DEBUG nova.virt.libvirt.host [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.922 248514 DEBUG nova.virt.libvirt.host [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.928 248514 DEBUG nova.virt.libvirt.host [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.928 248514 DEBUG nova.virt.libvirt.host [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.929 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.929 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.930 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.930 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.930 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.930 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.931 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.931 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.931 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.932 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.932 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.932 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:41:44 compute-0 nova_compute[248510]: 2025-12-13 08:41:44.937 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:41:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179130776' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:45 compute-0 nova_compute[248510]: 2025-12-13 08:41:45.562 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:45 compute-0 nova_compute[248510]: 2025-12-13 08:41:45.586 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:45 compute-0 nova_compute[248510]: 2025-12-13 08:41:45.590 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:45 compute-0 nova_compute[248510]: 2025-12-13 08:41:45.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/179130776' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.125 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:41:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738661585' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.160 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.161 248514 DEBUG nova.virt.libvirt.vif [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:41:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1050406831',display_name='tempest-ServersTestJSON-server-1050406831',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1050406831',id=96,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYqQi2FIq7Gsymh9E4l7vrJYWPFYm4cOxQBvQKtSgT87c9NTs3K+3wQ1sFvpE/1xNj+RqzGI4ZZeSDRTg9ddxk3LWkpDXgbT9zZytcv9YGDwfjOIh6BYT6dInWo/hgQVA==',key_name='tempest-key-912387218',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-nljn6049',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:41:35Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=58550847-ec24-44b6-a066-642db86841ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.162 248514 DEBUG nova.network.os_vif_util [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.163 248514 DEBUG nova.network.os_vif_util [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.164 248514 DEBUG nova.objects.instance [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid 58550847-ec24-44b6-a066-642db86841ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.201 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:41:46 compute-0 nova_compute[248510]:   <uuid>58550847-ec24-44b6-a066-642db86841ce</uuid>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   <name>instance-00000060</name>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersTestJSON-server-1050406831</nova:name>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:41:44</nova:creationTime>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:41:46 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:41:46 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:41:46 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:41:46 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:41:46 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:41:46 compute-0 nova_compute[248510]:         <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 08:41:46 compute-0 nova_compute[248510]:         <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:41:46 compute-0 nova_compute[248510]:         <nova:port uuid="0f023371-37d6-4847-a15a-fa86a3f0a3f5">
Dec 13 08:41:46 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <system>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <entry name="serial">58550847-ec24-44b6-a066-642db86841ce</entry>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <entry name="uuid">58550847-ec24-44b6-a066-642db86841ce</entry>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     </system>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   <os>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   </os>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   <features>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   </features>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/58550847-ec24-44b6-a066-642db86841ce_disk">
Dec 13 08:41:46 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       </source>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:41:46 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/58550847-ec24-44b6-a066-642db86841ce_disk.config">
Dec 13 08:41:46 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       </source>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:41:46 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:75:bf:1d"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <target dev="tap0f023371-37"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/console.log" append="off"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <video>
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     </video>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:41:46 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:41:46 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:41:46 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:41:46 compute-0 nova_compute[248510]: </domain>
Dec 13 08:41:46 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.202 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Preparing to wait for external event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.203 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.203 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.204 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.204 248514 DEBUG nova.virt.libvirt.vif [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:41:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1050406831',display_name='tempest-ServersTestJSON-server-1050406831',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1050406831',id=96,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYqQi2FIq7Gsymh9E4l7vrJYWPFYm4cOxQBvQKtSgT87c9NTs3K+3wQ1sFvpE/1xNj+RqzGI4ZZeSDRTg9ddxk3LWkpDXgbT9zZytcv9YGDwfjOIh6BYT6dInWo/hgQVA==',key_name='tempest-key-912387218',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-nljn6049',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:41:35Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=58550847-ec24-44b6-a066-642db86841ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.205 248514 DEBUG nova.network.os_vif_util [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.206 248514 DEBUG nova.network.os_vif_util [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.206 248514 DEBUG os_vif [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.207 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.207 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.208 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.211 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.211 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f023371-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.212 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f023371-37, col_values=(('external_ids', {'iface-id': '0f023371-37d6-4847-a15a-fa86a3f0a3f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:bf:1d', 'vm-uuid': '58550847-ec24-44b6-a066-642db86841ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.213 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:46 compute-0 NetworkManager[50376]: <info>  [1765615306.2146] manager: (tap0f023371-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.218 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.219 248514 INFO os_vif [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37')
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.366 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.367 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.367 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:75:bf:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.368 248514 INFO nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Using config drive
Dec 13 08:41:46 compute-0 nova_compute[248510]: 2025-12-13 08:41:46.385 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:46 compute-0 ceph-mon[76537]: pgmap v2370: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:41:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/738661585' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:47 compute-0 nova_compute[248510]: 2025-12-13 08:41:47.362 248514 INFO nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Creating config drive at /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/disk.config
Dec 13 08:41:47 compute-0 nova_compute[248510]: 2025-12-13 08:41:47.367 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp500hl267 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:47 compute-0 nova_compute[248510]: 2025-12-13 08:41:47.512 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp500hl267" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:47 compute-0 nova_compute[248510]: 2025-12-13 08:41:47.537 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:47 compute-0 nova_compute[248510]: 2025-12-13 08:41:47.541 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/disk.config 58550847-ec24-44b6-a066-642db86841ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:41:47 compute-0 nova_compute[248510]: 2025-12-13 08:41:47.993 248514 DEBUG nova.network.neutron [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Updated VIF entry in instance network info cache for port 0f023371-37d6-4847-a15a-fa86a3f0a3f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:41:47 compute-0 nova_compute[248510]: 2025-12-13 08:41:47.994 248514 DEBUG nova.network.neutron [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Updating instance_info_cache with network_info: [{"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:41:48 compute-0 nova_compute[248510]: 2025-12-13 08:41:48.021 248514 DEBUG oslo_concurrency.lockutils [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-58550847-ec24-44b6-a066-642db86841ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:41:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:41:48 compute-0 ceph-mon[76537]: pgmap v2371: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:41:48 compute-0 nova_compute[248510]: 2025-12-13 08:41:48.524 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/disk.config 58550847-ec24-44b6-a066-642db86841ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.983s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:48 compute-0 nova_compute[248510]: 2025-12-13 08:41:48.525 248514 INFO nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Deleting local config drive /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/disk.config because it was imported into RBD.
Dec 13 08:41:48 compute-0 kernel: tap0f023371-37: entered promiscuous mode
Dec 13 08:41:48 compute-0 NetworkManager[50376]: <info>  [1765615308.5776] manager: (tap0f023371-37): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Dec 13 08:41:48 compute-0 ovn_controller[148476]: 2025-12-13T08:41:48Z|00945|binding|INFO|Claiming lport 0f023371-37d6-4847-a15a-fa86a3f0a3f5 for this chassis.
Dec 13 08:41:48 compute-0 ovn_controller[148476]: 2025-12-13T08:41:48Z|00946|binding|INFO|0f023371-37d6-4847-a15a-fa86a3f0a3f5: Claiming fa:16:3e:75:bf:1d 10.100.0.7
Dec 13 08:41:48 compute-0 nova_compute[248510]: 2025-12-13 08:41:48.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.750 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:bf:1d 10.100.0.7'], port_security=['fa:16:3e:75:bf:1d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '58550847-ec24-44b6-a066-642db86841ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0f023371-37d6-4847-a15a-fa86a3f0a3f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.751 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0f023371-37d6-4847-a15a-fa86a3f0a3f5 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis
Dec 13 08:41:48 compute-0 ovn_controller[148476]: 2025-12-13T08:41:48Z|00947|binding|INFO|Setting lport 0f023371-37d6-4847-a15a-fa86a3f0a3f5 ovn-installed in OVS
Dec 13 08:41:48 compute-0 ovn_controller[148476]: 2025-12-13T08:41:48Z|00948|binding|INFO|Setting lport 0f023371-37d6-4847-a15a-fa86a3f0a3f5 up in Southbound
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.752 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:41:48 compute-0 nova_compute[248510]: 2025-12-13 08:41:48.752 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:48 compute-0 systemd-udevd[341880]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.768 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[43ecd0d1-5574-46d9-8cd3-71a78cafb1df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:48 compute-0 systemd-machined[210538]: New machine qemu-119-instance-00000060.
Dec 13 08:41:48 compute-0 NetworkManager[50376]: <info>  [1765615308.7804] device (tap0f023371-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:41:48 compute-0 NetworkManager[50376]: <info>  [1765615308.7811] device (tap0f023371-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:41:48 compute-0 systemd[1]: Started Virtual Machine qemu-119-instance-00000060.
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.798 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8a21afe0-05fe-41ce-a549-f8d15b277e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.801 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[51e4aa6a-038a-4e62-84cf-4d7f783ea339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.831 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f892f062-a450-46d9-8214-1f9983d39605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.850 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[efcaa825-d101-4edb-9c1a-be1acb9af06a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341893, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.867 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a57fcdd7-b4e2-4d45-bbfb-6aae5c355182]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341894, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341894, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.869 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:48 compute-0 nova_compute[248510]: 2025-12-13 08:41:48.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:48 compute-0 nova_compute[248510]: 2025-12-13 08:41:48.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.150 248514 DEBUG nova.compute.manager [req-f98176a9-1311-4e19-a24a-8e770c1a33cf req-523b8da9-cc0d-44f4-a781-0f536e84b071 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.151 248514 DEBUG oslo_concurrency.lockutils [req-f98176a9-1311-4e19-a24a-8e770c1a33cf req-523b8da9-cc0d-44f4-a781-0f536e84b071 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.151 248514 DEBUG oslo_concurrency.lockutils [req-f98176a9-1311-4e19-a24a-8e770c1a33cf req-523b8da9-cc0d-44f4-a781-0f536e84b071 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.151 248514 DEBUG oslo_concurrency.lockutils [req-f98176a9-1311-4e19-a24a-8e770c1a33cf req-523b8da9-cc0d-44f4-a781-0f536e84b071 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.151 248514 DEBUG nova.compute.manager [req-f98176a9-1311-4e19-a24a-8e770c1a33cf req-523b8da9-cc0d-44f4-a781-0f536e84b071 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Processing event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:41:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.895 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615309.8946912, 58550847-ec24-44b6-a066-642db86841ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.896 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] VM Started (Lifecycle Event)
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.898 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.903 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.907 248514 INFO nova.virt.libvirt.driver [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] Instance spawned successfully.
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.908 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.932 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.937 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.948 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.949 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.949 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.950 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.951 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.951 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.987 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.987 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615309.8949203, 58550847-ec24-44b6-a066-642db86841ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:49 compute-0 nova_compute[248510]: 2025-12-13 08:41:49.988 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] VM Paused (Lifecycle Event)
Dec 13 08:41:50 compute-0 nova_compute[248510]: 2025-12-13 08:41:50.034 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:50 compute-0 nova_compute[248510]: 2025-12-13 08:41:50.040 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615309.9022415, 58550847-ec24-44b6-a066-642db86841ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:50 compute-0 nova_compute[248510]: 2025-12-13 08:41:50.040 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] VM Resumed (Lifecycle Event)
Dec 13 08:41:50 compute-0 nova_compute[248510]: 2025-12-13 08:41:50.052 248514 INFO nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Took 14.21 seconds to spawn the instance on the hypervisor.
Dec 13 08:41:50 compute-0 nova_compute[248510]: 2025-12-13 08:41:50.053 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:50 compute-0 nova_compute[248510]: 2025-12-13 08:41:50.064 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:50 compute-0 nova_compute[248510]: 2025-12-13 08:41:50.067 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:41:50 compute-0 nova_compute[248510]: 2025-12-13 08:41:50.102 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:41:50 compute-0 nova_compute[248510]: 2025-12-13 08:41:50.150 248514 INFO nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Took 15.50 seconds to build instance.
Dec 13 08:41:50 compute-0 nova_compute[248510]: 2025-12-13 08:41:50.173 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:50 compute-0 ceph-mon[76537]: pgmap v2372: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.340 248514 DEBUG nova.compute.manager [req-44151237-8a05-434b-a4f3-04a56463d530 req-351d9ed9-65a5-483b-8d3a-b81eade81691 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.341 248514 DEBUG oslo_concurrency.lockutils [req-44151237-8a05-434b-a4f3-04a56463d530 req-351d9ed9-65a5-483b-8d3a-b81eade81691 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.341 248514 DEBUG oslo_concurrency.lockutils [req-44151237-8a05-434b-a4f3-04a56463d530 req-351d9ed9-65a5-483b-8d3a-b81eade81691 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.342 248514 DEBUG oslo_concurrency.lockutils [req-44151237-8a05-434b-a4f3-04a56463d530 req-351d9ed9-65a5-483b-8d3a-b81eade81691 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.342 248514 DEBUG nova.compute.manager [req-44151237-8a05-434b-a4f3-04a56463d530 req-351d9ed9-65a5-483b-8d3a-b81eade81691 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] No waiting events found dispatching network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.342 248514 WARNING nova.compute.manager [req-44151237-8a05-434b-a4f3-04a56463d530 req-351d9ed9-65a5-483b-8d3a-b81eade81691 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received unexpected event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 for instance with vm_state active and task_state None.
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.742 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "e7ecf467-c27c-44d4-b072-2537dba749e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.743 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "e7ecf467-c27c-44d4-b072-2537dba749e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.765 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:41:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 663 KiB/s wr, 21 op/s
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.848 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.849 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.857 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:41:51 compute-0 nova_compute[248510]: 2025-12-13 08:41:51.857 248514 INFO nova.compute.claims [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.028 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:41:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351467125' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.649 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.651 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.651 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.651 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.652 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.654 248514 INFO nova.compute.manager [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Terminating instance
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.655 248514 DEBUG nova.compute.manager [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.657 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.665 248514 DEBUG nova.compute.provider_tree [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.684 248514 DEBUG nova.scheduler.client.report [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.712 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.713 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:41:52 compute-0 kernel: tap0f023371-37 (unregistering): left promiscuous mode
Dec 13 08:41:52 compute-0 NetworkManager[50376]: <info>  [1765615312.7538] device (tap0f023371-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:52 compute-0 ovn_controller[148476]: 2025-12-13T08:41:52Z|00949|binding|INFO|Releasing lport 0f023371-37d6-4847-a15a-fa86a3f0a3f5 from this chassis (sb_readonly=0)
Dec 13 08:41:52 compute-0 ovn_controller[148476]: 2025-12-13T08:41:52Z|00950|binding|INFO|Setting lport 0f023371-37d6-4847-a15a-fa86a3f0a3f5 down in Southbound
Dec 13 08:41:52 compute-0 ovn_controller[148476]: 2025-12-13T08:41:52Z|00951|binding|INFO|Removing iface tap0f023371-37 ovn-installed in OVS
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.770 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:bf:1d 10.100.0.7'], port_security=['fa:16:3e:75:bf:1d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '58550847-ec24-44b6-a066-642db86841ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0f023371-37d6-4847-a15a-fa86a3f0a3f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.772 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0f023371-37d6-4847-a15a-fa86a3f0a3f5 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.773 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.774 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.782 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.790 248514 INFO nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.790 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[255c37b1-edc1-4c1f-a6ee-1282ebb232a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:52 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000060.scope: Deactivated successfully.
Dec 13 08:41:52 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000060.scope: Consumed 3.717s CPU time.
Dec 13 08:41:52 compute-0 systemd-machined[210538]: Machine qemu-119-instance-00000060 terminated.
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.810 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.827 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[94ddc125-51c0-448c-bdb6-a08a1b2cd594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.831 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[578bc9fd-e294-4fce-9b89-72bfca46eb2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.868 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1b27d78a-4de9-488a-be7e-5cf0f92f0616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:52 compute-0 kernel: tap0f023371-37: entered promiscuous mode
Dec 13 08:41:52 compute-0 kernel: tap0f023371-37 (unregistering): left promiscuous mode
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.894 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55ba9b0a-bb53-4f6a-b51e-f142599a4963]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341973, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.904 248514 INFO nova.virt.libvirt.driver [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] Instance destroyed successfully.
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.905 248514 DEBUG nova.objects.instance [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid 58550847-ec24-44b6-a066-642db86841ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.915 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[72a8eacc-cfb2-490b-b36d-a55e379a5437]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341982, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341982, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.917 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.923 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.923 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.924 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.924 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.924 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.933 248514 DEBUG nova.virt.libvirt.vif [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:41:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1050406831',display_name='tempest-ServersTestJSON-server-1050406831',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1050406831',id=96,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYqQi2FIq7Gsymh9E4l7vrJYWPFYm4cOxQBvQKtSgT87c9NTs3K+3wQ1sFvpE/1xNj+RqzGI4ZZeSDRTg9ddxk3LWkpDXgbT9zZytcv9YGDwfjOIh6BYT6dInWo/hgQVA==',key_name='tempest-key-912387218',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:41:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-nljn6049',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:41:50Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=58550847-ec24-44b6-a066-642db86841ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.934 248514 DEBUG nova.network.os_vif_util [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.935 248514 DEBUG nova.network.os_vif_util [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.935 248514 DEBUG os_vif [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.937 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.937 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f023371-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.942 248514 INFO os_vif [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37')
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.960 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.962 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.962 248514 INFO nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Creating image(s)
Dec 13 08:41:52 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.980 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:52.999 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.019 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.023 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.098 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.099 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.099 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.100 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.118 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.121 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e7ecf467-c27c-44d4-b072-2537dba749e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:41:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:53.226 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:41:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:53.227 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:53 compute-0 ceph-mon[76537]: pgmap v2373: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 663 KiB/s wr, 21 op/s
Dec 13 08:41:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2351467125' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.577 248514 DEBUG nova.compute.manager [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-vif-unplugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.577 248514 DEBUG oslo_concurrency.lockutils [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.578 248514 DEBUG oslo_concurrency.lockutils [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.578 248514 DEBUG oslo_concurrency.lockutils [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.578 248514 DEBUG nova.compute.manager [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] No waiting events found dispatching network-vif-unplugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.578 248514 DEBUG nova.compute.manager [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-vif-unplugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.578 248514 DEBUG nova.compute.manager [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.579 248514 DEBUG oslo_concurrency.lockutils [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.579 248514 DEBUG oslo_concurrency.lockutils [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.579 248514 DEBUG oslo_concurrency.lockutils [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.579 248514 DEBUG nova.compute.manager [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] No waiting events found dispatching network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.579 248514 WARNING nova.compute.manager [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received unexpected event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 for instance with vm_state active and task_state deleting.
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.717 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e7ecf467-c27c-44d4-b072-2537dba749e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.777 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] resizing rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:41:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 466 KiB/s rd, 12 KiB/s wr, 23 op/s
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.868 248514 DEBUG nova.objects.instance [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lazy-loading 'migration_context' on Instance uuid e7ecf467-c27c-44d4-b072-2537dba749e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.887 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.888 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Ensure instance console log exists: /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.889 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.889 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.889 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.890 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.896 248514 WARNING nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.902 248514 DEBUG nova.virt.libvirt.host [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.903 248514 DEBUG nova.virt.libvirt.host [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.908 248514 DEBUG nova.virt.libvirt.host [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.908 248514 DEBUG nova.virt.libvirt.host [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.909 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.909 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.910 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.910 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.910 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.910 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.911 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.911 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.911 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.911 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.911 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.912 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.914 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.963 248514 INFO nova.virt.libvirt.driver [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Deleting instance files /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce_del
Dec 13 08:41:53 compute-0 nova_compute[248510]: 2025-12-13 08:41:53.964 248514 INFO nova.virt.libvirt.driver [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Deletion of /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce_del complete
Dec 13 08:41:54 compute-0 nova_compute[248510]: 2025-12-13 08:41:54.058 248514 INFO nova.compute.manager [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Took 1.40 seconds to destroy the instance on the hypervisor.
Dec 13 08:41:54 compute-0 nova_compute[248510]: 2025-12-13 08:41:54.059 248514 DEBUG oslo.service.loopingcall [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:41:54 compute-0 nova_compute[248510]: 2025-12-13 08:41:54.059 248514 DEBUG nova.compute.manager [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:41:54 compute-0 nova_compute[248510]: 2025-12-13 08:41:54.059 248514 DEBUG nova.network.neutron [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:41:54 compute-0 ceph-mon[76537]: pgmap v2374: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 466 KiB/s rd, 12 KiB/s wr, 23 op/s
Dec 13 08:41:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:41:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2428530057' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:54 compute-0 nova_compute[248510]: 2025-12-13 08:41:54.498 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:54 compute-0 nova_compute[248510]: 2025-12-13 08:41:54.520 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:54 compute-0 nova_compute[248510]: 2025-12-13 08:41:54.524 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:41:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805700237' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.133 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.135 248514 DEBUG nova.objects.instance [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lazy-loading 'pci_devices' on Instance uuid e7ecf467-c27c-44d4-b072-2537dba749e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.143 248514 DEBUG nova.network.neutron [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.164 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:41:55 compute-0 nova_compute[248510]:   <uuid>e7ecf467-c27c-44d4-b072-2537dba749e2</uuid>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   <name>instance-00000061</name>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersAaction247Test-server-2007909855</nova:name>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:41:53</nova:creationTime>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:41:55 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:41:55 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:41:55 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:41:55 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:41:55 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:41:55 compute-0 nova_compute[248510]:         <nova:user uuid="7001a6774c58458f8817dd3d0a51c534">tempest-ServersAaction247Test-1329667427-project-member</nova:user>
Dec 13 08:41:55 compute-0 nova_compute[248510]:         <nova:project uuid="3789df2219b04c028efeab8c1eae2147">tempest-ServersAaction247Test-1329667427</nova:project>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <system>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <entry name="serial">e7ecf467-c27c-44d4-b072-2537dba749e2</entry>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <entry name="uuid">e7ecf467-c27c-44d4-b072-2537dba749e2</entry>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     </system>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   <os>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   </os>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   <features>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   </features>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/e7ecf467-c27c-44d4-b072-2537dba749e2_disk">
Dec 13 08:41:55 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       </source>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:41:55 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/e7ecf467-c27c-44d4-b072-2537dba749e2_disk.config">
Dec 13 08:41:55 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       </source>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:41:55 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/console.log" append="off"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <video>
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     </video>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:41:55 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:41:55 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:41:55 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:41:55 compute-0 nova_compute[248510]: </domain>
Dec 13 08:41:55 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.168 248514 INFO nova.compute.manager [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] Took 1.11 seconds to deallocate network for instance.
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.235 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.236 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.255 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.256 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.256 248514 INFO nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Using config drive
Dec 13 08:41:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2428530057' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3805700237' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.278 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.355 248514 DEBUG oslo_concurrency.processutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:55.421 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:55.421 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:55.421 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.692 248514 INFO nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Creating config drive at /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/disk.config
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.697 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphp91auaf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.740 248514 DEBUG nova.compute.manager [req-b11e83ae-f922-4e23-a5d5-7d6c92133147 req-f37abdf5-20e7-405f-8791-f046ac504590 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-vif-deleted-0f023371-37d6-4847-a15a-fa86a3f0a3f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:41:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 171 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.846 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphp91auaf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.873 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.877 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/disk.config e7ecf467-c27c-44d4-b072-2537dba749e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:41:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:41:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1901742580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.938 248514 DEBUG oslo_concurrency.processutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.945 248514 DEBUG nova.compute.provider_tree [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.965 248514 DEBUG nova.scheduler.client.report [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:41:55 compute-0 nova_compute[248510]: 2025-12-13 08:41:55.996 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.023 248514 INFO nova.scheduler.client.report [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance 58550847-ec24-44b6-a066-642db86841ce
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.040 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/disk.config e7ecf467-c27c-44d4-b072-2537dba749e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.041 248514 INFO nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Deleting local config drive /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/disk.config because it was imported into RBD.
Dec 13 08:41:56 compute-0 systemd-machined[210538]: New machine qemu-120-instance-00000061.
Dec 13 08:41:56 compute-0 systemd[1]: Started Virtual Machine qemu-120-instance-00000061.
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.137 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.142 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:56 compute-0 ceph-mon[76537]: pgmap v2375: 321 pgs: 321 active+clean; 171 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Dec 13 08:41:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1901742580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.629 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615316.6288662, e7ecf467-c27c-44d4-b072-2537dba749e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.630 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] VM Resumed (Lifecycle Event)
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.632 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.633 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.637 248514 INFO nova.virt.libvirt.driver [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Instance spawned successfully.
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.637 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.652 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.658 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.662 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.662 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.663 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.663 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.664 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.664 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.698 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.698 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615316.630049, e7ecf467-c27c-44d4-b072-2537dba749e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.699 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] VM Started (Lifecycle Event)
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.732 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.735 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.742 248514 INFO nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Took 3.78 seconds to spawn the instance on the hypervisor.
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.743 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.769 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.814 248514 INFO nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Took 5.00 seconds to build instance.
Dec 13 08:41:56 compute-0 nova_compute[248510]: 2025-12-13 08:41:56.842 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "e7ecf467-c27c-44d4-b072-2537dba749e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:41:57.230 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:41:57 compute-0 nova_compute[248510]: 2025-12-13 08:41:57.251 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:41:57 compute-0 nova_compute[248510]: 2025-12-13 08:41:57.251 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:41:57 compute-0 nova_compute[248510]: 2025-12-13 08:41:57.252 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:41:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 171 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Dec 13 08:41:57 compute-0 nova_compute[248510]: 2025-12-13 08:41:57.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:41:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.230 248514 DEBUG nova.compute.manager [None req-2970bca0-40e9-4504-8e56-baa8b9ac7971 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.286 248514 INFO nova.compute.manager [None req-2970bca0-40e9-4504-8e56-baa8b9ac7971 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] instance snapshotting
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.287 248514 DEBUG nova.objects.instance [None req-2970bca0-40e9-4504-8e56-baa8b9ac7971 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lazy-loading 'flavor' on Instance uuid e7ecf467-c27c-44d4-b072-2537dba749e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.561 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "e7ecf467-c27c-44d4-b072-2537dba749e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.562 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "e7ecf467-c27c-44d4-b072-2537dba749e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.562 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "e7ecf467-c27c-44d4-b072-2537dba749e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.563 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "e7ecf467-c27c-44d4-b072-2537dba749e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.563 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "e7ecf467-c27c-44d4-b072-2537dba749e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.564 248514 INFO nova.compute.manager [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Terminating instance
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.565 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "refresh_cache-e7ecf467-c27c-44d4-b072-2537dba749e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.565 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquired lock "refresh_cache-e7ecf467-c27c-44d4-b072-2537dba749e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.565 248514 DEBUG nova.network.neutron [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.638 248514 INFO nova.virt.libvirt.driver [None req-2970bca0-40e9-4504-8e56-baa8b9ac7971 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Beginning live snapshot process
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.684 248514 DEBUG nova.compute.manager [None req-2970bca0-40e9-4504-8e56-baa8b9ac7971 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Dec 13 08:41:58 compute-0 nova_compute[248510]: 2025-12-13 08:41:58.747 248514 DEBUG nova.network.neutron [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:41:58 compute-0 ceph-mon[76537]: pgmap v2376: 321 pgs: 321 active+clean; 171 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Dec 13 08:41:59 compute-0 nova_compute[248510]: 2025-12-13 08:41:59.215 248514 DEBUG nova.network.neutron [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:41:59 compute-0 nova_compute[248510]: 2025-12-13 08:41:59.231 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Releasing lock "refresh_cache-e7ecf467-c27c-44d4-b072-2537dba749e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:41:59 compute-0 nova_compute[248510]: 2025-12-13 08:41:59.231 248514 DEBUG nova.compute.manager [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:41:59 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000061.scope: Deactivated successfully.
Dec 13 08:41:59 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000061.scope: Consumed 3.218s CPU time.
Dec 13 08:41:59 compute-0 systemd-machined[210538]: Machine qemu-120-instance-00000061 terminated.
Dec 13 08:41:59 compute-0 nova_compute[248510]: 2025-12-13 08:41:59.329 248514 DEBUG nova.compute.manager [None req-2970bca0-40e9-4504-8e56-baa8b9ac7971 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Dec 13 08:41:59 compute-0 nova_compute[248510]: 2025-12-13 08:41:59.453 248514 INFO nova.virt.libvirt.driver [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Instance destroyed successfully.
Dec 13 08:41:59 compute-0 nova_compute[248510]: 2025-12-13 08:41:59.453 248514 DEBUG nova.objects.instance [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lazy-loading 'resources' on Instance uuid e7ecf467-c27c-44d4-b072-2537dba749e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:41:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 167 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Dec 13 08:41:59 compute-0 nova_compute[248510]: 2025-12-13 08:41:59.929 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updating instance_info_cache with network_info: [{"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.166 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.166 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.166 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.204 248514 INFO nova.virt.libvirt.driver [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Deleting instance files /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2_del
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.205 248514 INFO nova.virt.libvirt.driver [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Deletion of /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2_del complete
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.299 248514 INFO nova.compute.manager [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Took 1.07 seconds to destroy the instance on the hypervisor.
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.300 248514 DEBUG oslo.service.loopingcall [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.300 248514 DEBUG nova.compute.manager [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.300 248514 DEBUG nova.network.neutron [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.513 248514 DEBUG nova.network.neutron [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.530 248514 DEBUG nova.network.neutron [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.548 248514 INFO nova.compute.manager [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Took 0.25 seconds to deallocate network for instance.
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.601 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.602 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.637 248514 DEBUG nova.scheduler.client.report [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.676 248514 DEBUG nova.scheduler.client.report [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.677 248514 DEBUG nova.compute.provider_tree [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.702 248514 DEBUG nova.scheduler.client.report [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.740 248514 DEBUG nova.scheduler.client.report [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.837 248514 DEBUG oslo_concurrency.processutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.959 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "2e9e6711-892f-4278-b911-bfacbff9b48e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.960 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:00 compute-0 nova_compute[248510]: 2025-12-13 08:42:00.992 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:42:01 compute-0 nova_compute[248510]: 2025-12-13 08:42:01.112 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:01 compute-0 nova_compute[248510]: 2025-12-13 08:42:01.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:01 compute-0 ceph-mon[76537]: pgmap v2377: 321 pgs: 321 active+clean; 167 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Dec 13 08:42:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:42:01 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1997877790' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:01 compute-0 nova_compute[248510]: 2025-12-13 08:42:01.408 248514 DEBUG oslo_concurrency.processutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:01 compute-0 nova_compute[248510]: 2025-12-13 08:42:01.414 248514 DEBUG nova.compute.provider_tree [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:42:01 compute-0 nova_compute[248510]: 2025-12-13 08:42:01.440 248514 DEBUG nova.scheduler.client.report [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:42:01 compute-0 nova_compute[248510]: 2025-12-13 08:42:01.478 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:01 compute-0 nova_compute[248510]: 2025-12-13 08:42:01.480 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:01 compute-0 nova_compute[248510]: 2025-12-13 08:42:01.490 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:42:01 compute-0 nova_compute[248510]: 2025-12-13 08:42:01.491 248514 INFO nova.compute.claims [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:42:01 compute-0 nova_compute[248510]: 2025-12-13 08:42:01.541 248514 INFO nova.scheduler.client.report [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Deleted allocations for instance e7ecf467-c27c-44d4-b072-2537dba749e2
Dec 13 08:42:01 compute-0 nova_compute[248510]: 2025-12-13 08:42:01.654 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "e7ecf467-c27c-44d4-b072-2537dba749e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:01 compute-0 nova_compute[248510]: 2025-12-13 08:42:01.675 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 160 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Dec 13 08:42:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:42:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075129003' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.234 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.239 248514 DEBUG nova.compute.provider_tree [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.260 248514 DEBUG nova.scheduler.client.report [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.291 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.292 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.361 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.362 248514 DEBUG nova.network.neutron [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.401 248514 INFO nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.431 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.610 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.612 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.612 248514 INFO nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Creating image(s)
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.640 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.753 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.773 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.776 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.811 248514 DEBUG nova.policy [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.815 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.816 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.850 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.850 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.851 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.851 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.851 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1997877790' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:02 compute-0 ceph-mon[76537]: pgmap v2378: 321 pgs: 321 active+clean; 160 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Dec 13 08:42:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2075129003' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.891 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.893 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.895 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.895 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.925 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.928 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2e9e6711-892f-4278-b911-bfacbff9b48e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:02 compute-0 nova_compute[248510]: 2025-12-13 08:42:02.965 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:42:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:42:03 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3311972622' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:03 compute-0 nova_compute[248510]: 2025-12-13 08:42:03.426 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:03 compute-0 nova_compute[248510]: 2025-12-13 08:42:03.549 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:42:03 compute-0 nova_compute[248510]: 2025-12-13 08:42:03.550 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:42:03 compute-0 sudo[342551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:42:03 compute-0 sudo[342551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:42:03 compute-0 sudo[342551]: pam_unix(sudo:session): session closed for user root
Dec 13 08:42:03 compute-0 sudo[342577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:42:03 compute-0 sudo[342577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:42:03 compute-0 nova_compute[248510]: 2025-12-13 08:42:03.740 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:42:03 compute-0 nova_compute[248510]: 2025-12-13 08:42:03.742 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3510MB free_disk=59.92250001523644GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:42:03 compute-0 nova_compute[248510]: 2025-12-13 08:42:03.742 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:03 compute-0 nova_compute[248510]: 2025-12-13 08:42:03.743 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:42:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 30K writes, 122K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 30K writes, 10K syncs, 2.90 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7746 writes, 28K keys, 7746 commit groups, 1.0 writes per commit group, ingest: 28.13 MB, 0.05 MB/s
                                           Interval WAL: 7745 writes, 3091 syncs, 2.51 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 08:42:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 148 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Dec 13 08:42:03 compute-0 nova_compute[248510]: 2025-12-13 08:42:03.853 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:42:03 compute-0 nova_compute[248510]: 2025-12-13 08:42:03.853 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2e9e6711-892f-4278-b911-bfacbff9b48e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:42:03 compute-0 nova_compute[248510]: 2025-12-13 08:42:03.854 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:42:03 compute-0 nova_compute[248510]: 2025-12-13 08:42:03.854 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:42:03 compute-0 nova_compute[248510]: 2025-12-13 08:42:03.920 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:04 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3311972622' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.060 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2e9e6711-892f-4278-b911-bfacbff9b48e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.135 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:42:04 compute-0 sudo[342577]: pam_unix(sudo:session): session closed for user root
Dec 13 08:42:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:42:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:42:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:42:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:42:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:42:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:42:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:42:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:42:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:42:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:42:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:42:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:42:04 compute-0 sudo[342707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:42:04 compute-0 sudo[342707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:42:04 compute-0 sudo[342707]: pam_unix(sudo:session): session closed for user root
Dec 13 08:42:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:42:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260077907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.513 248514 DEBUG nova.objects.instance [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid 2e9e6711-892f-4278-b911-bfacbff9b48e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:04 compute-0 sudo[342732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:42:04 compute-0 sudo[342732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.535 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.541 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.635 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.635 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Ensure instance console log exists: /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.636 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.636 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.636 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.637 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.656 248514 DEBUG nova.network.neutron [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Successfully created port: a6d7b148-2fef-4c47-a3fa-c8948759b4a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.661 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:42:04 compute-0 nova_compute[248510]: 2025-12-13 08:42:04.661 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:04 compute-0 podman[342788]: 2025-12-13 08:42:04.81081969 +0000 UTC m=+0.053679939 container create 743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 08:42:04 compute-0 systemd[1]: Started libpod-conmon-743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911.scope.
Dec 13 08:42:04 compute-0 podman[342788]: 2025-12-13 08:42:04.777828764 +0000 UTC m=+0.020689053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:42:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:42:04 compute-0 podman[342788]: 2025-12-13 08:42:04.981046194 +0000 UTC m=+0.223906483 container init 743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hamilton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 08:42:04 compute-0 podman[342788]: 2025-12-13 08:42:04.988471669 +0000 UTC m=+0.231331928 container start 743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hamilton, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:42:04 compute-0 fervent_hamilton[342804]: 167 167
Dec 13 08:42:04 compute-0 systemd[1]: libpod-743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911.scope: Deactivated successfully.
Dec 13 08:42:04 compute-0 conmon[342804]: conmon 743d8260ff3896b596ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911.scope/container/memory.events
Dec 13 08:42:05 compute-0 podman[342788]: 2025-12-13 08:42:05.001049749 +0000 UTC m=+0.243910008 container attach 743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:42:05 compute-0 podman[342788]: 2025-12-13 08:42:05.001544822 +0000 UTC m=+0.244405081 container died 743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hamilton, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:42:05 compute-0 ceph-mon[76537]: pgmap v2379: 321 pgs: 321 active+clean; 148 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Dec 13 08:42:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:42:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:42:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:42:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:42:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:42:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:42:05 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2260077907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3344eebdd1ed99d30df31abf8f0f640e0828bbbf6c98b4e2477cf7a75a24f54-merged.mount: Deactivated successfully.
Dec 13 08:42:05 compute-0 podman[342788]: 2025-12-13 08:42:05.152052949 +0000 UTC m=+0.394913208 container remove 743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:42:05 compute-0 systemd[1]: libpod-conmon-743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911.scope: Deactivated successfully.
Dec 13 08:42:05 compute-0 podman[342827]: 2025-12-13 08:42:05.302456063 +0000 UTC m=+0.022277715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:42:05 compute-0 podman[342827]: 2025-12-13 08:42:05.704053875 +0000 UTC m=+0.423875507 container create 6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lamarr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 08:42:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 151 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 218 op/s
Dec 13 08:42:05 compute-0 systemd[1]: Started libpod-conmon-6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f.scope.
Dec 13 08:42:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886cbff8e9720111208439e2b4f782052fc2d768f7181fccb338f69a35670b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886cbff8e9720111208439e2b4f782052fc2d768f7181fccb338f69a35670b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886cbff8e9720111208439e2b4f782052fc2d768f7181fccb338f69a35670b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:05 compute-0 nova_compute[248510]: 2025-12-13 08:42:05.944 248514 DEBUG nova.network.neutron [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Successfully updated port: a6d7b148-2fef-4c47-a3fa-c8948759b4a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886cbff8e9720111208439e2b4f782052fc2d768f7181fccb338f69a35670b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886cbff8e9720111208439e2b4f782052fc2d768f7181fccb338f69a35670b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:05 compute-0 nova_compute[248510]: 2025-12-13 08:42:05.970 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-2e9e6711-892f-4278-b911-bfacbff9b48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:42:05 compute-0 nova_compute[248510]: 2025-12-13 08:42:05.970 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-2e9e6711-892f-4278-b911-bfacbff9b48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:42:05 compute-0 nova_compute[248510]: 2025-12-13 08:42:05.970 248514 DEBUG nova.network.neutron [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:42:06 compute-0 nova_compute[248510]: 2025-12-13 08:42:06.146 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:06 compute-0 nova_compute[248510]: 2025-12-13 08:42:06.161 248514 DEBUG nova.compute.manager [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Received event network-changed-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:42:06 compute-0 nova_compute[248510]: 2025-12-13 08:42:06.162 248514 DEBUG nova.compute.manager [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Refreshing instance network info cache due to event network-changed-a6d7b148-2fef-4c47-a3fa-c8948759b4a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:42:06 compute-0 nova_compute[248510]: 2025-12-13 08:42:06.162 248514 DEBUG oslo_concurrency.lockutils [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e9e6711-892f-4278-b911-bfacbff9b48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:42:06 compute-0 podman[342827]: 2025-12-13 08:42:06.237712631 +0000 UTC m=+0.957534283 container init 6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lamarr, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:42:06 compute-0 nova_compute[248510]: 2025-12-13 08:42:06.244 248514 DEBUG nova.network.neutron [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:42:06 compute-0 podman[342827]: 2025-12-13 08:42:06.245522636 +0000 UTC m=+0.965344268 container start 6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lamarr, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 08:42:06 compute-0 podman[342827]: 2025-12-13 08:42:06.288826622 +0000 UTC m=+1.008648284 container attach 6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lamarr, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec 13 08:42:06 compute-0 ceph-mon[76537]: pgmap v2380: 321 pgs: 321 active+clean; 151 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 218 op/s
Dec 13 08:42:06 compute-0 happy_lamarr[342843]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:42:06 compute-0 happy_lamarr[342843]: --> All data devices are unavailable
Dec 13 08:42:06 compute-0 systemd[1]: libpod-6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f.scope: Deactivated successfully.
Dec 13 08:42:06 compute-0 podman[342827]: 2025-12-13 08:42:06.793384724 +0000 UTC m=+1.513206376 container died 6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lamarr, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.362 248514 DEBUG nova.network.neutron [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Updating instance_info_cache with network_info: [{"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.386 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-2e9e6711-892f-4278-b911-bfacbff9b48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.386 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Instance network_info: |[{"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.386 248514 DEBUG oslo_concurrency.lockutils [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e9e6711-892f-4278-b911-bfacbff9b48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.387 248514 DEBUG nova.network.neutron [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Refreshing network info cache for port a6d7b148-2fef-4c47-a3fa-c8948759b4a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.389 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Start _get_guest_xml network_info=[{"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.394 248514 WARNING nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.400 248514 DEBUG nova.virt.libvirt.host [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.402 248514 DEBUG nova.virt.libvirt.host [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.410 248514 DEBUG nova.virt.libvirt.host [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.410 248514 DEBUG nova.virt.libvirt.host [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.411 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.411 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.412 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.412 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.412 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.412 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.413 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.413 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.413 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.413 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.413 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.414 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.418 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-6886cbff8e9720111208439e2b4f782052fc2d768f7181fccb338f69a35670b8-merged.mount: Deactivated successfully.
Dec 13 08:42:07 compute-0 podman[342827]: 2025-12-13 08:42:07.647132844 +0000 UTC m=+2.366954476 container remove 6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lamarr, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:42:07 compute-0 systemd[1]: libpod-conmon-6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f.scope: Deactivated successfully.
Dec 13 08:42:07 compute-0 sudo[342732]: pam_unix(sudo:session): session closed for user root
Dec 13 08:42:07 compute-0 podman[342877]: 2025-12-13 08:42:07.729180026 +0000 UTC m=+0.295535532 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 08:42:07 compute-0 podman[342876]: 2025-12-13 08:42:07.73391469 +0000 UTC m=+0.296563708 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:42:07 compute-0 podman[342875]: 2025-12-13 08:42:07.754979433 +0000 UTC m=+0.325933809 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller)
Dec 13 08:42:07 compute-0 sudo[342952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:42:07 compute-0 sudo[342952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:42:07 compute-0 sudo[342952]: pam_unix(sudo:session): session closed for user root
Dec 13 08:42:07 compute-0 sudo[342985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:42:07 compute-0 sudo[342985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:42:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 151 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 128 op/s
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.903 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615312.901425, 58550847-ec24-44b6-a066-642db86841ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.904 248514 INFO nova.compute.manager [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] VM Stopped (Lifecycle Event)
Dec 13 08:42:07 compute-0 nova_compute[248510]: 2025-12-13 08:42:07.967 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.003 248514 DEBUG nova.compute.manager [None req-84a96038-e98b-48f8-9a50-beb4b40deb75 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:42:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238631318' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.031 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.053 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.058 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:08 compute-0 podman[343041]: 2025-12-13 08:42:08.111225866 +0000 UTC m=+0.019960625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:42:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:42:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/238631318' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:08 compute-0 podman[343041]: 2025-12-13 08:42:08.29446929 +0000 UTC m=+0.203204029 container create c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:42:08 compute-0 systemd[1]: Started libpod-conmon-c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756.scope.
Dec 13 08:42:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:42:08 compute-0 podman[343041]: 2025-12-13 08:42:08.451907679 +0000 UTC m=+0.360642448 container init c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_knuth, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 08:42:08 compute-0 podman[343041]: 2025-12-13 08:42:08.459785526 +0000 UTC m=+0.368520265 container start c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_knuth, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:42:08 compute-0 exciting_knuth[343076]: 167 167
Dec 13 08:42:08 compute-0 systemd[1]: libpod-c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756.scope: Deactivated successfully.
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.618 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.618 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:42:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:42:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4043436435' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.697 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.699 248514 DEBUG nova.virt.libvirt.vif [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:41:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1551844735',display_name='tempest-ServersTestJSON-server-1551844735',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1551844735',id=98,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-6ci0kbz7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:42:02Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=2e9e6711-892f-4278-b911-bfacbff9b48e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.699 248514 DEBUG nova.network.os_vif_util [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.700 248514 DEBUG nova.network.os_vif_util [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.702 248514 DEBUG nova.objects.instance [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e9e6711-892f-4278-b911-bfacbff9b48e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.726 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:42:08 compute-0 nova_compute[248510]:   <uuid>2e9e6711-892f-4278-b911-bfacbff9b48e</uuid>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   <name>instance-00000062</name>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersTestJSON-server-1551844735</nova:name>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:42:07</nova:creationTime>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:42:08 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:42:08 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:42:08 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:42:08 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:42:08 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:42:08 compute-0 nova_compute[248510]:         <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 08:42:08 compute-0 nova_compute[248510]:         <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:42:08 compute-0 nova_compute[248510]:         <nova:port uuid="a6d7b148-2fef-4c47-a3fa-c8948759b4a9">
Dec 13 08:42:08 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <system>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <entry name="serial">2e9e6711-892f-4278-b911-bfacbff9b48e</entry>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <entry name="uuid">2e9e6711-892f-4278-b911-bfacbff9b48e</entry>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     </system>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   <os>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   </os>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   <features>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   </features>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2e9e6711-892f-4278-b911-bfacbff9b48e_disk">
Dec 13 08:42:08 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       </source>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:42:08 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2e9e6711-892f-4278-b911-bfacbff9b48e_disk.config">
Dec 13 08:42:08 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       </source>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:42:08 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:e5:df:21"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <target dev="tapa6d7b148-2f"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/console.log" append="off"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <video>
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     </video>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:42:08 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:42:08 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:42:08 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:42:08 compute-0 nova_compute[248510]: </domain>
Dec 13 08:42:08 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.727 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Preparing to wait for external event network-vif-plugged-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.727 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.728 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.728 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.729 248514 DEBUG nova.virt.libvirt.vif [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:41:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1551844735',display_name='tempest-ServersTestJSON-server-1551844735',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1551844735',id=98,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-6ci0kbz7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:42:02Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=2e9e6711-892f-4278-b911-bfacbff9b48e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.730 248514 DEBUG nova.network.os_vif_util [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.730 248514 DEBUG nova.network.os_vif_util [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.731 248514 DEBUG os_vif [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.732 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.733 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.737 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6d7b148-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.737 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6d7b148-2f, col_values=(('external_ids', {'iface-id': 'a6d7b148-2fef-4c47-a3fa-c8948759b4a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:df:21', 'vm-uuid': '2e9e6711-892f-4278-b911-bfacbff9b48e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:08 compute-0 podman[343041]: 2025-12-13 08:42:08.782737725 +0000 UTC m=+0.691472514 container attach c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 08:42:08 compute-0 podman[343041]: 2025-12-13 08:42:08.783815454 +0000 UTC m=+0.692550193 container died c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_knuth, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:08 compute-0 NetworkManager[50376]: <info>  [1765615328.7843] manager: (tapa6d7b148-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.787 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:08 compute-0 nova_compute[248510]: 2025-12-13 08:42:08.793 248514 INFO os_vif [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f')
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.008 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.008 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.008 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:e5:df:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.009 248514 INFO nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Using config drive
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.025 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.185 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "4edf7378-73e8-4119-a019-6ab7098ee4ad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.185 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "4edf7378-73e8-4119-a019-6ab7098ee4ad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.210 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:42:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:42:09
Dec 13 08:42:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:42:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:42:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.rgw.root', 'images', 'default.rgw.log', '.mgr', 'backups', 'cephfs.cephfs.data', 'volumes']
Dec 13 08:42:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:42:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-afc30dafa26c339b2ac11af23d91469e3e7a227dd26f2a7da50aafe3536f8890-merged.mount: Deactivated successfully.
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.310 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.310 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.318 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.319 248514 INFO nova.compute.claims [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:42:09 compute-0 ceph-mon[76537]: pgmap v2381: 321 pgs: 321 active+clean; 151 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 128 op/s
Dec 13 08:42:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4043436435' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:09 compute-0 nova_compute[248510]: 2025-12-13 08:42:09.497 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 141 op/s
Dec 13 08:42:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:42:10 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2353635197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.074 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.080 248514 DEBUG nova.compute.provider_tree [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:42:10 compute-0 podman[343041]: 2025-12-13 08:42:10.097463475 +0000 UTC m=+2.006198214 container remove c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.102 248514 DEBUG nova.scheduler.client.report [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:42:10 compute-0 systemd[1]: libpod-conmon-c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756.scope: Deactivated successfully.
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.145 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.146 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.200 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.220 248514 INFO nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.241 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.279 248514 INFO nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Creating config drive at /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/disk.config
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.284 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmyrcnbfr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:10 compute-0 podman[343144]: 2025-12-13 08:42:10.249184894 +0000 UTC m=+0.022347647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.367 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.369 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.369 248514 INFO nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Creating image(s)
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.545 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:10 compute-0 podman[343144]: 2025-12-13 08:42:10.550395834 +0000 UTC m=+0.323558567 container create 923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.585 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.607 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.611 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:10 compute-0 ceph-mon[76537]: pgmap v2382: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 141 op/s
Dec 13 08:42:10 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2353635197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.656 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmyrcnbfr" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:10 compute-0 systemd[1]: Started libpod-conmon-923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028.scope.
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.685 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.690 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/disk.config 2e9e6711-892f-4278-b911-bfacbff9b48e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:42:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:42:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad90636d2cb1eae1ff154e3bc7ea4ed551514efe72712f1b1823918adb1ae87a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad90636d2cb1eae1ff154e3bc7ea4ed551514efe72712f1b1823918adb1ae87a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad90636d2cb1eae1ff154e3bc7ea4ed551514efe72712f1b1823918adb1ae87a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad90636d2cb1eae1ff154e3bc7ea4ed551514efe72712f1b1823918adb1ae87a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.733 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.738 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.739 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.739 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:10 compute-0 podman[343144]: 2025-12-13 08:42:10.746294371 +0000 UTC m=+0.519457104 container init 923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 08:42:10 compute-0 podman[343144]: 2025-12-13 08:42:10.755641947 +0000 UTC m=+0.528804680 container start 923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 08:42:10 compute-0 podman[343144]: 2025-12-13 08:42:10.761326056 +0000 UTC m=+0.534488809 container attach 923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.761 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.768 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.806 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.848 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/disk.config 2e9e6711-892f-4278-b911-bfacbff9b48e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.849 248514 INFO nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Deleting local config drive /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/disk.config because it was imported into RBD.
Dec 13 08:42:10 compute-0 kernel: tapa6d7b148-2f: entered promiscuous mode
Dec 13 08:42:10 compute-0 ovn_controller[148476]: 2025-12-13T08:42:10Z|00952|binding|INFO|Claiming lport a6d7b148-2fef-4c47-a3fa-c8948759b4a9 for this chassis.
Dec 13 08:42:10 compute-0 ovn_controller[148476]: 2025-12-13T08:42:10Z|00953|binding|INFO|a6d7b148-2fef-4c47-a3fa-c8948759b4a9: Claiming fa:16:3e:e5:df:21 10.100.0.9
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.908 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:10 compute-0 NetworkManager[50376]: <info>  [1765615330.9114] manager: (tapa6d7b148-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/399)
Dec 13 08:42:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:10.916 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:df:21 10.100.0.9'], port_security=['fa:16:3e:e5:df:21 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2e9e6711-892f-4278-b911-bfacbff9b48e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a6d7b148-2fef-4c47-a3fa-c8948759b4a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:42:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:10.917 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a6d7b148-2fef-4c47-a3fa-c8948759b4a9 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis
Dec 13 08:42:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:10.918 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:42:10 compute-0 ovn_controller[148476]: 2025-12-13T08:42:10Z|00954|binding|INFO|Setting lport a6d7b148-2fef-4c47-a3fa-c8948759b4a9 ovn-installed in OVS
Dec 13 08:42:10 compute-0 ovn_controller[148476]: 2025-12-13T08:42:10Z|00955|binding|INFO|Setting lport a6d7b148-2fef-4c47-a3fa-c8948759b4a9 up in Southbound
Dec 13 08:42:10 compute-0 nova_compute[248510]: 2025-12-13 08:42:10.934 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:10.943 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4beaa995-dc9d-4d1a-bdf8-fefadc049250]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:10 compute-0 systemd-udevd[343314]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:42:10 compute-0 systemd-machined[210538]: New machine qemu-121-instance-00000062.
Dec 13 08:42:10 compute-0 systemd[1]: Started Virtual Machine qemu-121-instance-00000062.
Dec 13 08:42:10 compute-0 NetworkManager[50376]: <info>  [1765615330.9720] device (tapa6d7b148-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:42:10 compute-0 NetworkManager[50376]: <info>  [1765615330.9733] device (tapa6d7b148-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:42:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:10.981 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c52f151a-4314-45d8-bab0-033194563f47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:10.986 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb2f233-b9f5-4cff-8524-df4b765ba60e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.021 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[53627624-ab8a-4668-9f3d-5653ca617291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.043 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3b02cf8a-8e06-45a0-bb53-e764b787c4e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343324, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.055 248514 DEBUG nova.network.neutron [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Updated VIF entry in instance network info cache for port a6d7b148-2fef-4c47-a3fa-c8948759b4a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.055 248514 DEBUG nova.network.neutron [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Updating instance_info_cache with network_info: [{"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:42:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.062 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8e848b33-c023-45e2-b075-31e04ecb1c72]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343330, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343330, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.064 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.066 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.067 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.067 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.068 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:42:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.068 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.069 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:42:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:42:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4201.8 total, 600.0 interval
                                           Cumulative writes: 31K writes, 120K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s
                                           Cumulative WAL: 31K writes, 11K syncs, 2.83 writes per sync, written: 0.11 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6963 writes, 25K keys, 6963 commit groups, 1.0 writes per commit group, ingest: 25.25 MB, 0.04 MB/s
                                           Interval WAL: 6963 writes, 2848 syncs, 2.44 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.087 248514 DEBUG oslo_concurrency.lockutils [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e9e6711-892f-4278-b911-bfacbff9b48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]: {
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:     "0": [
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:         {
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "devices": [
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "/dev/loop3"
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             ],
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_name": "ceph_lv0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_size": "21470642176",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "name": "ceph_lv0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "tags": {
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.cluster_name": "ceph",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.crush_device_class": "",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.encrypted": "0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.objectstore": "bluestore",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.osd_id": "0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.type": "block",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.vdo": "0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.with_tpm": "0"
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             },
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "type": "block",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "vg_name": "ceph_vg0"
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:         }
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:     ],
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:     "1": [
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:         {
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "devices": [
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "/dev/loop4"
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             ],
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_name": "ceph_lv1",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_size": "21470642176",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "name": "ceph_lv1",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "tags": {
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.cluster_name": "ceph",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.crush_device_class": "",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.encrypted": "0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.objectstore": "bluestore",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.osd_id": "1",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.type": "block",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.vdo": "0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.with_tpm": "0"
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             },
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "type": "block",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "vg_name": "ceph_vg1"
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:         }
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:     ],
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:     "2": [
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:         {
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "devices": [
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "/dev/loop5"
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             ],
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_name": "ceph_lv2",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_size": "21470642176",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "name": "ceph_lv2",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "tags": {
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.cluster_name": "ceph",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.crush_device_class": "",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.encrypted": "0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.objectstore": "bluestore",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.osd_id": "2",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.type": "block",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.vdo": "0",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:                 "ceph.with_tpm": "0"
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             },
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "type": "block",
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:             "vg_name": "ceph_vg2"
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:         }
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]:     ]
Dec 13 08:42:11 compute-0 recursing_blackwell[343233]: }
Dec 13 08:42:11 compute-0 systemd[1]: libpod-923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028.scope: Deactivated successfully.
Dec 13 08:42:11 compute-0 conmon[343233]: conmon 923b29edb526804908c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028.scope/container/memory.events
Dec 13 08:42:11 compute-0 podman[343144]: 2025-12-13 08:42:11.134842891 +0000 UTC m=+0.908005634 container died 923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.210 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad90636d2cb1eae1ff154e3bc7ea4ed551514efe72712f1b1823918adb1ae87a-merged.mount: Deactivated successfully.
Dec 13 08:42:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.881 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615331.880728, 2e9e6711-892f-4278-b911-bfacbff9b48e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.882 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] VM Started (Lifecycle Event)
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.905 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.910 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615331.8814082, 2e9e6711-892f-4278-b911-bfacbff9b48e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.910 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] VM Paused (Lifecycle Event)
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.931 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.935 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:42:11 compute-0 nova_compute[248510]: 2025-12-13 08:42:11.961 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:42:12 compute-0 nova_compute[248510]: 2025-12-13 08:42:12.351 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:12 compute-0 nova_compute[248510]: 2025-12-13 08:42:12.408 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] resizing rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:42:12 compute-0 ceph-mon[76537]: pgmap v2383: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 13 08:42:12 compute-0 podman[343144]: 2025-12-13 08:42:12.749786444 +0000 UTC m=+2.522949167 container remove 923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 08:42:12 compute-0 sudo[342985]: pam_unix(sudo:session): session closed for user root
Dec 13 08:42:12 compute-0 systemd[1]: libpod-conmon-923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028.scope: Deactivated successfully.
Dec 13 08:42:12 compute-0 sudo[343440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:42:12 compute-0 sudo[343440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:42:12 compute-0 sudo[343440]: pam_unix(sudo:session): session closed for user root
Dec 13 08:42:12 compute-0 sudo[343465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:42:12 compute-0 sudo[343465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.177 248514 DEBUG nova.objects.instance [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'migration_context' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.199 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.200 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Ensure instance console log exists: /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.200 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.200 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.201 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.202 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.206 248514 WARNING nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.213 248514 DEBUG nova.virt.libvirt.host [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.214 248514 DEBUG nova.virt.libvirt.host [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.216 248514 DEBUG nova.virt.libvirt.host [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.217 248514 DEBUG nova.virt.libvirt.host [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.217 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.217 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.218 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.218 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.218 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.218 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.219 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.219 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.219 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.219 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.220 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.220 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.223 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:42:13 compute-0 podman[343509]: 2025-12-13 08:42:13.242271329 +0000 UTC m=+0.070064768 container create 81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_beaver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:42:13 compute-0 systemd[1]: Started libpod-conmon-81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f.scope.
Dec 13 08:42:13 compute-0 podman[343509]: 2025-12-13 08:42:13.197760382 +0000 UTC m=+0.025553851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:42:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:42:13 compute-0 podman[343509]: 2025-12-13 08:42:13.319805323 +0000 UTC m=+0.147598792 container init 81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_beaver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 08:42:13 compute-0 podman[343509]: 2025-12-13 08:42:13.327842434 +0000 UTC m=+0.155635883 container start 81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:42:13 compute-0 awesome_beaver[343537]: 167 167
Dec 13 08:42:13 compute-0 systemd[1]: libpod-81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f.scope: Deactivated successfully.
Dec 13 08:42:13 compute-0 podman[343509]: 2025-12-13 08:42:13.436714429 +0000 UTC m=+0.264507908 container attach 81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_beaver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:42:13 compute-0 podman[343509]: 2025-12-13 08:42:13.43751131 +0000 UTC m=+0.265304759 container died 81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:42:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-99b0812f99cc9e7246bf12bb313e8f17059e39200e558e3f23d0a814978f6d91-merged.mount: Deactivated successfully.
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:42:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982063026' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.813 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 195 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.7 MiB/s wr, 56 op/s
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.840 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:13 compute-0 nova_compute[248510]: 2025-12-13 08:42:13.844 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:13 compute-0 podman[343509]: 2025-12-13 08:42:13.884323118 +0000 UTC m=+0.712116567 container remove 81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_beaver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 08:42:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2982063026' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:13 compute-0 systemd[1]: libpod-conmon-81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f.scope: Deactivated successfully.
Dec 13 08:42:14 compute-0 podman[343620]: 2025-12-13 08:42:14.041671424 +0000 UTC m=+0.023349243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:42:14 compute-0 podman[343620]: 2025-12-13 08:42:14.152398438 +0000 UTC m=+0.134076237 container create 7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haibt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 08:42:14 compute-0 systemd[1]: Started libpod-conmon-7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8.scope.
Dec 13 08:42:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7492aa08b4b88b847681456390b31332da1d67c25f871fc110df0cdd4290cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7492aa08b4b88b847681456390b31332da1d67c25f871fc110df0cdd4290cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7492aa08b4b88b847681456390b31332da1d67c25f871fc110df0cdd4290cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7492aa08b4b88b847681456390b31332da1d67c25f871fc110df0cdd4290cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:42:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:42:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609557199' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:14 compute-0 nova_compute[248510]: 2025-12-13 08:42:14.434 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:14 compute-0 nova_compute[248510]: 2025-12-13 08:42:14.437 248514 DEBUG nova.objects.instance [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:14 compute-0 nova_compute[248510]: 2025-12-13 08:42:14.452 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615319.45055, e7ecf467-c27c-44d4-b072-2537dba749e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:14 compute-0 nova_compute[248510]: 2025-12-13 08:42:14.452 248514 INFO nova.compute.manager [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] VM Stopped (Lifecycle Event)
Dec 13 08:42:14 compute-0 nova_compute[248510]: 2025-12-13 08:42:14.470 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:42:14 compute-0 nova_compute[248510]:   <uuid>4edf7378-73e8-4119-a019-6ab7098ee4ad</uuid>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   <name>instance-00000063</name>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerShowV257Test-server-1971514055</nova:name>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:42:13</nova:creationTime>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:42:14 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:42:14 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:42:14 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:42:14 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:42:14 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:42:14 compute-0 nova_compute[248510]:         <nova:user uuid="e382ad6b2fdc41398bfa58dbb651d4be">tempest-ServerShowV257Test-371152472-project-member</nova:user>
Dec 13 08:42:14 compute-0 nova_compute[248510]:         <nova:project uuid="6d41fdecfe334aaeaaba54b9c6eaeb00">tempest-ServerShowV257Test-371152472</nova:project>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <system>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <entry name="serial">4edf7378-73e8-4119-a019-6ab7098ee4ad</entry>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <entry name="uuid">4edf7378-73e8-4119-a019-6ab7098ee4ad</entry>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     </system>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   <os>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   </os>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   <features>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   </features>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/4edf7378-73e8-4119-a019-6ab7098ee4ad_disk">
Dec 13 08:42:14 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       </source>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:42:14 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config">
Dec 13 08:42:14 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       </source>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:42:14 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/console.log" append="off"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <video>
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     </video>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:42:14 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:42:14 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:42:14 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:42:14 compute-0 nova_compute[248510]: </domain>
Dec 13 08:42:14 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:42:14 compute-0 nova_compute[248510]: 2025-12-13 08:42:14.477 248514 DEBUG nova.compute.manager [None req-cf932fe5-e3dd-4a11-8748-9ba4909b7cbe - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:14 compute-0 podman[343620]: 2025-12-13 08:42:14.525841602 +0000 UTC m=+0.507519431 container init 7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 08:42:14 compute-0 podman[343620]: 2025-12-13 08:42:14.535406043 +0000 UTC m=+0.517083852 container start 7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haibt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 08:42:14 compute-0 podman[343620]: 2025-12-13 08:42:14.576333786 +0000 UTC m=+0.558011605 container attach 7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haibt, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 08:42:14 compute-0 nova_compute[248510]: 2025-12-13 08:42:14.602 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:42:14 compute-0 nova_compute[248510]: 2025-12-13 08:42:14.603 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:42:14 compute-0 nova_compute[248510]: 2025-12-13 08:42:14.603 248514 INFO nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Using config drive
Dec 13 08:42:14 compute-0 nova_compute[248510]: 2025-12-13 08:42:14.627 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:14 compute-0 nova_compute[248510]: 2025-12-13 08:42:14.867 248514 INFO nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Creating config drive at /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config
Dec 13 08:42:14 compute-0 nova_compute[248510]: 2025-12-13 08:42:14.875 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphqmz0mm7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:15 compute-0 nova_compute[248510]: 2025-12-13 08:42:15.021 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphqmz0mm7" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:42:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3453947686' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:42:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:42:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3453947686' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:42:15 compute-0 nova_compute[248510]: 2025-12-13 08:42:15.056 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:15 compute-0 nova_compute[248510]: 2025-12-13 08:42:15.060 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:15 compute-0 ceph-mon[76537]: pgmap v2384: 321 pgs: 321 active+clean; 195 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.7 MiB/s wr, 56 op/s
Dec 13 08:42:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2609557199' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:15 compute-0 lvm[343771]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:42:15 compute-0 lvm[343771]: VG ceph_vg0 finished
Dec 13 08:42:15 compute-0 lvm[343774]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:42:15 compute-0 lvm[343774]: VG ceph_vg1 finished
Dec 13 08:42:15 compute-0 lvm[343776]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:42:15 compute-0 lvm[343776]: VG ceph_vg2 finished
Dec 13 08:42:15 compute-0 cool_haibt[343636]: {}
Dec 13 08:42:15 compute-0 systemd[1]: libpod-7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8.scope: Deactivated successfully.
Dec 13 08:42:15 compute-0 systemd[1]: libpod-7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8.scope: Consumed 1.316s CPU time.
Dec 13 08:42:15 compute-0 podman[343620]: 2025-12-13 08:42:15.392375737 +0000 UTC m=+1.374053536 container died 7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 08:42:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f7492aa08b4b88b847681456390b31332da1d67c25f871fc110df0cdd4290cb-merged.mount: Deactivated successfully.
Dec 13 08:42:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 65 op/s
Dec 13 08:42:16 compute-0 nova_compute[248510]: 2025-12-13 08:42:16.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:16 compute-0 sshd-session[343793]: Invalid user solana from 193.32.162.146 port 45256
Dec 13 08:42:16 compute-0 podman[343620]: 2025-12-13 08:42:16.223456423 +0000 UTC m=+2.205134242 container remove 7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 08:42:16 compute-0 nova_compute[248510]: 2025-12-13 08:42:16.233 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:16 compute-0 nova_compute[248510]: 2025-12-13 08:42:16.234 248514 INFO nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deleting local config drive /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config because it was imported into RBD.
Dec 13 08:42:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3453947686' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:42:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3453947686' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:42:16 compute-0 sudo[343465]: pam_unix(sudo:session): session closed for user root
Dec 13 08:42:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:42:16 compute-0 systemd[1]: libpod-conmon-7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8.scope: Deactivated successfully.
Dec 13 08:42:16 compute-0 systemd-machined[210538]: New machine qemu-122-instance-00000063.
Dec 13 08:42:16 compute-0 sshd-session[343793]: Connection closed by invalid user solana 193.32.162.146 port 45256 [preauth]
Dec 13 08:42:16 compute-0 systemd[1]: Started Virtual Machine qemu-122-instance-00000063.
Dec 13 08:42:16 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:42:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:42:16 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:42:16 compute-0 sudo[343810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:42:16 compute-0 sudo[343810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:42:16 compute-0 sudo[343810]: pam_unix(sudo:session): session closed for user root
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.177 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615337.1766627, 4edf7378-73e8-4119-a019-6ab7098ee4ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.178 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] VM Resumed (Lifecycle Event)
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.181 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.181 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.184 248514 INFO nova.virt.libvirt.driver [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance spawned successfully.
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.185 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.207 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.212 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.212 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.213 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.213 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.213 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.214 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.218 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.253 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.253 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615337.1777263, 4edf7378-73e8-4119-a019-6ab7098ee4ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.253 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] VM Started (Lifecycle Event)
Dec 13 08:42:17 compute-0 ceph-mon[76537]: pgmap v2385: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 65 op/s
Dec 13 08:42:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:42:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.292 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.299 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.307 248514 INFO nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Took 6.94 seconds to spawn the instance on the hypervisor.
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.307 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.335 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.383 248514 INFO nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Took 8.10 seconds to build instance.
Dec 13 08:42:17 compute-0 nova_compute[248510]: 2025-12-13 08:42:17.407 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "4edf7378-73e8-4119-a019-6ab7098ee4ad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Dec 13 08:42:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:42:18 compute-0 ceph-mon[76537]: pgmap v2386: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Dec 13 08:42:18 compute-0 nova_compute[248510]: 2025-12-13 08:42:18.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:42:18 compute-0 nova_compute[248510]: 2025-12-13 08:42:18.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:42:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.2 total, 600.0 interval
                                           Cumulative writes: 25K writes, 101K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 25K writes, 8616 syncs, 2.95 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5918 writes, 22K keys, 5918 commit groups, 1.0 writes per commit group, ingest: 23.38 MB, 0.04 MB/s
                                           Interval WAL: 5918 writes, 2362 syncs, 2.51 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.709 248514 INFO nova.compute.manager [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Rebuilding instance
Dec 13 08:42:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.900 248514 DEBUG nova.compute.manager [req-4d7d75be-146b-45a6-8da6-a324e9e99a5b req-3299a24f-2b48-4117-a927-59c7e1416f4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Received event network-vif-plugged-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.901 248514 DEBUG oslo_concurrency.lockutils [req-4d7d75be-146b-45a6-8da6-a324e9e99a5b req-3299a24f-2b48-4117-a927-59c7e1416f4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.902 248514 DEBUG oslo_concurrency.lockutils [req-4d7d75be-146b-45a6-8da6-a324e9e99a5b req-3299a24f-2b48-4117-a927-59c7e1416f4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.902 248514 DEBUG oslo_concurrency.lockutils [req-4d7d75be-146b-45a6-8da6-a324e9e99a5b req-3299a24f-2b48-4117-a927-59c7e1416f4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.902 248514 DEBUG nova.compute.manager [req-4d7d75be-146b-45a6-8da6-a324e9e99a5b req-3299a24f-2b48-4117-a927-59c7e1416f4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Processing event network-vif-plugged-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.903 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.913 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615339.912378, 2e9e6711-892f-4278-b911-bfacbff9b48e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.914 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] VM Resumed (Lifecycle Event)
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.916 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.926 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Instance spawned successfully.
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.926 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.936 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.942 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.962 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.963 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.963 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.964 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.965 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.965 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:19 compute-0 nova_compute[248510]: 2025-12-13 08:42:19.971 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:42:20 compute-0 nova_compute[248510]: 2025-12-13 08:42:20.045 248514 INFO nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Took 17.44 seconds to spawn the instance on the hypervisor.
Dec 13 08:42:20 compute-0 nova_compute[248510]: 2025-12-13 08:42:20.046 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:20 compute-0 nova_compute[248510]: 2025-12-13 08:42:20.071 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:20 compute-0 nova_compute[248510]: 2025-12-13 08:42:20.098 248514 DEBUG nova.compute.manager [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:20 compute-0 nova_compute[248510]: 2025-12-13 08:42:20.116 248514 INFO nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Took 19.03 seconds to build instance.
Dec 13 08:42:20 compute-0 nova_compute[248510]: 2025-12-13 08:42:20.148 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:20 compute-0 nova_compute[248510]: 2025-12-13 08:42:20.152 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:20 compute-0 nova_compute[248510]: 2025-12-13 08:42:20.166 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:20 compute-0 nova_compute[248510]: 2025-12-13 08:42:20.183 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'resources' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:20 compute-0 nova_compute[248510]: 2025-12-13 08:42:20.198 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'migration_context' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:20 compute-0 nova_compute[248510]: 2025-12-13 08:42:20.276 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:42:20 compute-0 nova_compute[248510]: 2025-12-13 08:42:20.279 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:42:21 compute-0 ceph-mon[76537]: pgmap v2387: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014647506305992414 of space, bias 1.0, pg target 0.4394251891797724 quantized to 32 (current 32)
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677122905383687 of space, bias 1.0, pg target 0.20031368716151063 quantized to 32 (current 32)
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.927163191339098e-07 of space, bias 4.0, pg target 0.0007112595829606918 quantized to 16 (current 32)
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:42:21 compute-0 nova_compute[248510]: 2025-12-13 08:42:21.215 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 214 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Dec 13 08:42:22 compute-0 nova_compute[248510]: 2025-12-13 08:42:22.030 248514 DEBUG nova.compute.manager [req-455efc10-3847-43ff-9ff4-ed9506bb0651 req-da7448f0-f556-45b9-9736-a15d3726833d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Received event network-vif-plugged-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:42:22 compute-0 nova_compute[248510]: 2025-12-13 08:42:22.031 248514 DEBUG oslo_concurrency.lockutils [req-455efc10-3847-43ff-9ff4-ed9506bb0651 req-da7448f0-f556-45b9-9736-a15d3726833d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:22 compute-0 nova_compute[248510]: 2025-12-13 08:42:22.031 248514 DEBUG oslo_concurrency.lockutils [req-455efc10-3847-43ff-9ff4-ed9506bb0651 req-da7448f0-f556-45b9-9736-a15d3726833d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:22 compute-0 nova_compute[248510]: 2025-12-13 08:42:22.032 248514 DEBUG oslo_concurrency.lockutils [req-455efc10-3847-43ff-9ff4-ed9506bb0651 req-da7448f0-f556-45b9-9736-a15d3726833d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:22 compute-0 nova_compute[248510]: 2025-12-13 08:42:22.032 248514 DEBUG nova.compute.manager [req-455efc10-3847-43ff-9ff4-ed9506bb0651 req-da7448f0-f556-45b9-9736-a15d3726833d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] No waiting events found dispatching network-vif-plugged-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:42:22 compute-0 nova_compute[248510]: 2025-12-13 08:42:22.032 248514 WARNING nova.compute.manager [req-455efc10-3847-43ff-9ff4-ed9506bb0651 req-da7448f0-f556-45b9-9736-a15d3726833d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Received unexpected event network-vif-plugged-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 for instance with vm_state active and task_state None.
Dec 13 08:42:23 compute-0 ceph-mon[76537]: pgmap v2388: 321 pgs: 321 active+clean; 214 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Dec 13 08:42:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:42:23 compute-0 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 08:42:23 compute-0 nova_compute[248510]: 2025-12-13 08:42:23.789 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 214 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 13 08:42:24 compute-0 ceph-mon[76537]: pgmap v2389: 321 pgs: 321 active+clean; 214 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 13 08:42:24 compute-0 nova_compute[248510]: 2025-12-13 08:42:24.844 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:24 compute-0 nova_compute[248510]: 2025-12-13 08:42:24.845 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:24 compute-0 nova_compute[248510]: 2025-12-13 08:42:24.864 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:42:24 compute-0 nova_compute[248510]: 2025-12-13 08:42:24.941 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:24 compute-0 nova_compute[248510]: 2025-12-13 08:42:24.942 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:24 compute-0 nova_compute[248510]: 2025-12-13 08:42:24.952 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:42:24 compute-0 nova_compute[248510]: 2025-12-13 08:42:24.953 248514 INFO nova.compute.claims [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:42:25 compute-0 nova_compute[248510]: 2025-12-13 08:42:25.176 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:42:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868920489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:25 compute-0 nova_compute[248510]: 2025-12-13 08:42:25.770 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:25 compute-0 nova_compute[248510]: 2025-12-13 08:42:25.775 248514 DEBUG nova.compute.provider_tree [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:42:25 compute-0 nova_compute[248510]: 2025-12-13 08:42:25.799 248514 DEBUG nova.scheduler.client.report [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:42:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2868920489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 214 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 875 KiB/s wr, 159 op/s
Dec 13 08:42:25 compute-0 nova_compute[248510]: 2025-12-13 08:42:25.844 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:25 compute-0 nova_compute[248510]: 2025-12-13 08:42:25.845 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:42:25 compute-0 nova_compute[248510]: 2025-12-13 08:42:25.926 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:42:25 compute-0 nova_compute[248510]: 2025-12-13 08:42:25.926 248514 DEBUG nova.network.neutron [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:42:25 compute-0 nova_compute[248510]: 2025-12-13 08:42:25.956 248514 INFO nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:42:25 compute-0 nova_compute[248510]: 2025-12-13 08:42:25.990 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.109 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.110 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.110 248514 INFO nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Creating image(s)
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.129 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.150 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.173 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.177 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.265 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.265 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.266 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.266 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.286 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.290 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:26 compute-0 nova_compute[248510]: 2025-12-13 08:42:26.328 248514 DEBUG nova.policy [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:42:26 compute-0 ceph-mon[76537]: pgmap v2390: 321 pgs: 321 active+clean; 214 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 875 KiB/s wr, 159 op/s
Dec 13 08:42:27 compute-0 nova_compute[248510]: 2025-12-13 08:42:27.302 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:27 compute-0 nova_compute[248510]: 2025-12-13 08:42:27.369 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:42:27 compute-0 nova_compute[248510]: 2025-12-13 08:42:27.483 248514 DEBUG nova.objects.instance [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:27 compute-0 nova_compute[248510]: 2025-12-13 08:42:27.594 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:42:27 compute-0 nova_compute[248510]: 2025-12-13 08:42:27.596 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Ensure instance console log exists: /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:42:27 compute-0 nova_compute[248510]: 2025-12-13 08:42:27.596 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:27 compute-0 nova_compute[248510]: 2025-12-13 08:42:27.597 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:27 compute-0 nova_compute[248510]: 2025-12-13 08:42:27.597 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 214 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 137 op/s
Dec 13 08:42:28 compute-0 nova_compute[248510]: 2025-12-13 08:42:28.001 248514 DEBUG nova.network.neutron [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Successfully created port: 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:42:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:42:28 compute-0 nova_compute[248510]: 2025-12-13 08:42:28.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:29 compute-0 nova_compute[248510]: 2025-12-13 08:42:29.106 248514 DEBUG nova.network.neutron [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Successfully updated port: 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:42:29 compute-0 nova_compute[248510]: 2025-12-13 08:42:29.137 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:42:29 compute-0 nova_compute[248510]: 2025-12-13 08:42:29.137 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:42:29 compute-0 nova_compute[248510]: 2025-12-13 08:42:29.137 248514 DEBUG nova.network.neutron [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:42:29 compute-0 nova_compute[248510]: 2025-12-13 08:42:29.285 248514 DEBUG nova.compute.manager [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-changed-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:42:29 compute-0 nova_compute[248510]: 2025-12-13 08:42:29.286 248514 DEBUG nova.compute.manager [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Refreshing instance network info cache due to event network-changed-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:42:29 compute-0 nova_compute[248510]: 2025-12-13 08:42:29.286 248514 DEBUG oslo_concurrency.lockutils [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:42:29 compute-0 nova_compute[248510]: 2025-12-13 08:42:29.394 248514 DEBUG nova.network.neutron [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:42:29 compute-0 ceph-mon[76537]: pgmap v2391: 321 pgs: 321 active+clean; 214 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 137 op/s
Dec 13 08:42:29 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Dec 13 08:42:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 233 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 164 op/s
Dec 13 08:42:30 compute-0 nova_compute[248510]: 2025-12-13 08:42:30.432 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:42:30 compute-0 ceph-mon[76537]: pgmap v2392: 321 pgs: 321 active+clean; 233 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 164 op/s
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.205 248514 DEBUG nova.network.neutron [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Updating instance_info_cache with network_info: [{"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.218 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.234 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.235 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Instance network_info: |[{"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.235 248514 DEBUG oslo_concurrency.lockutils [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.235 248514 DEBUG nova.network.neutron [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Refreshing network info cache for port 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.238 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Start _get_guest_xml network_info=[{"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.241 248514 WARNING nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.247 248514 DEBUG nova.virt.libvirt.host [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.248 248514 DEBUG nova.virt.libvirt.host [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.251 248514 DEBUG nova.virt.libvirt.host [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.252 248514 DEBUG nova.virt.libvirt.host [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.252 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.253 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.254 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.254 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.255 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.255 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.255 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.256 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.256 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.256 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.257 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.257 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.262 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:42:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2289946204' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.831 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 264 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.861 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:31 compute-0 nova_compute[248510]: 2025-12-13 08:42:31.865 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2289946204' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:42:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3299341507' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.456 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.458 248514 DEBUG nova.virt.libvirt.vif [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1551844735',display_name='tempest-ServersTestJSON-server-1551844735',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1551844735',id=100,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-2qzmyk70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},t
ags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:42:26Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.458 248514 DEBUG nova.network.os_vif_util [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.459 248514 DEBUG nova.network.os_vif_util [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.460 248514 DEBUG nova.objects.instance [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.480 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:42:32 compute-0 nova_compute[248510]:   <uuid>6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c</uuid>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   <name>instance-00000064</name>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersTestJSON-server-1551844735</nova:name>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:42:31</nova:creationTime>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:42:32 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:42:32 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:42:32 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:42:32 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:42:32 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:42:32 compute-0 nova_compute[248510]:         <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 08:42:32 compute-0 nova_compute[248510]:         <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:42:32 compute-0 nova_compute[248510]:         <nova:port uuid="6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965">
Dec 13 08:42:32 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <system>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <entry name="serial">6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c</entry>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <entry name="uuid">6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c</entry>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     </system>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   <os>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   </os>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   <features>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   </features>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk">
Dec 13 08:42:32 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       </source>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:42:32 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk.config">
Dec 13 08:42:32 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       </source>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:42:32 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:89:6b:01"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <target dev="tap6a8f6d0b-00"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/console.log" append="off"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <video>
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     </video>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:42:32 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:42:32 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:42:32 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:42:32 compute-0 nova_compute[248510]: </domain>
Dec 13 08:42:32 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.481 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Preparing to wait for external event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.481 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.481 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.481 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.482 248514 DEBUG nova.virt.libvirt.vif [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1551844735',display_name='tempest-ServersTestJSON-server-1551844735',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1551844735',id=100,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-2qzmyk70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:42:26Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.482 248514 DEBUG nova.network.os_vif_util [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.483 248514 DEBUG nova.network.os_vif_util [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.483 248514 DEBUG os_vif [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.484 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.484 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.485 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.487 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a8f6d0b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.488 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a8f6d0b-00, col_values=(('external_ids', {'iface-id': '6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:6b:01', 'vm-uuid': '6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:32 compute-0 NetworkManager[50376]: <info>  [1765615352.4901] manager: (tap6a8f6d0b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.497 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.497 248514 INFO os_vif [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00')
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.558 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.559 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.559 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:89:6b:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.560 248514 INFO nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Using config drive
Dec 13 08:42:32 compute-0 nova_compute[248510]: 2025-12-13 08:42:32.578 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:33 compute-0 ceph-mon[76537]: pgmap v2393: 321 pgs: 321 active+clean; 264 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Dec 13 08:42:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3299341507' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.253600) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615353253636, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2100, "num_deletes": 255, "total_data_size": 3434009, "memory_usage": 3492000, "flush_reason": "Manual Compaction"}
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615353270530, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 2052033, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45072, "largest_seqno": 47171, "table_properties": {"data_size": 2044946, "index_size": 3777, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18775, "raw_average_key_size": 21, "raw_value_size": 2029063, "raw_average_value_size": 2295, "num_data_blocks": 170, "num_entries": 884, "num_filter_entries": 884, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615152, "oldest_key_time": 1765615152, "file_creation_time": 1765615353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 16987 microseconds, and 5072 cpu microseconds.
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.270580) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 2052033 bytes OK
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.270602) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.272600) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.272618) EVENT_LOG_v1 {"time_micros": 1765615353272613, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.272637) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3425090, prev total WAL file size 3425090, number of live WAL files 2.
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.273751) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373539' seq:72057594037927935, type:22 .. '6D6772737461740032303131' seq:0, type:0; will stop at (end)
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(2003KB)], [104(9369KB)]
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615353273808, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11646830, "oldest_snapshot_seqno": -1}
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7081 keys, 9465697 bytes, temperature: kUnknown
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615353341668, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 9465697, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9420086, "index_size": 26811, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 182502, "raw_average_key_size": 25, "raw_value_size": 9294990, "raw_average_value_size": 1312, "num_data_blocks": 1059, "num_entries": 7081, "num_filter_entries": 7081, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.341914) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 9465697 bytes
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.343945) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.4 rd, 139.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.2 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(10.3) write-amplify(4.6) OK, records in: 7511, records dropped: 430 output_compression: NoCompression
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.343966) EVENT_LOG_v1 {"time_micros": 1765615353343957, "job": 62, "event": "compaction_finished", "compaction_time_micros": 67944, "compaction_time_cpu_micros": 26537, "output_level": 6, "num_output_files": 1, "total_output_size": 9465697, "num_input_records": 7511, "num_output_records": 7081, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615353344558, "job": 62, "event": "table_file_deletion", "file_number": 106}
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615353346542, "job": 62, "event": "table_file_deletion", "file_number": 104}
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.273673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.346671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.346677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.346679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.346680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:42:33 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.346682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:42:33 compute-0 nova_compute[248510]: 2025-12-13 08:42:33.590 248514 INFO nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Creating config drive at /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/disk.config
Dec 13 08:42:33 compute-0 nova_compute[248510]: 2025-12-13 08:42:33.595 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqyoe2iv9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:33 compute-0 nova_compute[248510]: 2025-12-13 08:42:33.709 248514 DEBUG nova.network.neutron [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Updated VIF entry in instance network info cache for port 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:42:33 compute-0 nova_compute[248510]: 2025-12-13 08:42:33.710 248514 DEBUG nova.network.neutron [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Updating instance_info_cache with network_info: [{"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:42:33 compute-0 nova_compute[248510]: 2025-12-13 08:42:33.733 248514 DEBUG oslo_concurrency.lockutils [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:42:33 compute-0 nova_compute[248510]: 2025-12-13 08:42:33.743 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqyoe2iv9" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:33 compute-0 nova_compute[248510]: 2025-12-13 08:42:33.765 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:33 compute-0 nova_compute[248510]: 2025-12-13 08:42:33.769 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/disk.config 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 272 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 154 op/s
Dec 13 08:42:33 compute-0 ovn_controller[148476]: 2025-12-13T08:42:33Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:df:21 10.100.0.9
Dec 13 08:42:33 compute-0 ovn_controller[148476]: 2025-12-13T08:42:33Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:df:21 10.100.0.9
Dec 13 08:42:34 compute-0 nova_compute[248510]: 2025-12-13 08:42:34.401 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/disk.config 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:34 compute-0 nova_compute[248510]: 2025-12-13 08:42:34.402 248514 INFO nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Deleting local config drive /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/disk.config because it was imported into RBD.
Dec 13 08:42:34 compute-0 NetworkManager[50376]: <info>  [1765615354.4537] manager: (tap6a8f6d0b-00): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Dec 13 08:42:34 compute-0 kernel: tap6a8f6d0b-00: entered promiscuous mode
Dec 13 08:42:34 compute-0 nova_compute[248510]: 2025-12-13 08:42:34.456 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:34 compute-0 ovn_controller[148476]: 2025-12-13T08:42:34Z|00956|binding|INFO|Claiming lport 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 for this chassis.
Dec 13 08:42:34 compute-0 ovn_controller[148476]: 2025-12-13T08:42:34Z|00957|binding|INFO|6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965: Claiming fa:16:3e:89:6b:01 10.100.0.7
Dec 13 08:42:34 compute-0 nova_compute[248510]: 2025-12-13 08:42:34.459 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.468 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:6b:01 10.100.0.7'], port_security=['fa:16:3e:89:6b:01 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.469 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.471 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:42:34 compute-0 ovn_controller[148476]: 2025-12-13T08:42:34Z|00958|binding|INFO|Setting lport 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 ovn-installed in OVS
Dec 13 08:42:34 compute-0 ovn_controller[148476]: 2025-12-13T08:42:34Z|00959|binding|INFO|Setting lport 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 up in Southbound
Dec 13 08:42:34 compute-0 nova_compute[248510]: 2025-12-13 08:42:34.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:34 compute-0 systemd-udevd[344201]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.495 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc74c8b2-5760-4ccc-8d17-f6cb46af33e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:34 compute-0 systemd-machined[210538]: New machine qemu-123-instance-00000064.
Dec 13 08:42:34 compute-0 systemd[1]: Started Virtual Machine qemu-123-instance-00000064.
Dec 13 08:42:34 compute-0 NetworkManager[50376]: <info>  [1765615354.5130] device (tap6a8f6d0b-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:42:34 compute-0 ceph-mon[76537]: pgmap v2394: 321 pgs: 321 active+clean; 272 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 154 op/s
Dec 13 08:42:34 compute-0 NetworkManager[50376]: <info>  [1765615354.5152] device (tap6a8f6d0b-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.543 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[885cbbea-d3cd-4a22-b704-ae3bd089c30d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.547 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0e4f82-95c5-4ef3-afd5-8cef12e6356c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.582 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5d456f-2db3-41bf-b80c-2fc37388c71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.600 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e89350fc-9398-4997-a140-7a3fc92fc30a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344215, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.616 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09fdb1f7-c6bc-455c-8ec9-192866b19977]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344216, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344216, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.618 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:34 compute-0 nova_compute[248510]: 2025-12-13 08:42:34.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:34 compute-0 nova_compute[248510]: 2025-12-13 08:42:34.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.621 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.621 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.622 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.622 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:42:34 compute-0 nova_compute[248510]: 2025-12-13 08:42:34.976 248514 DEBUG nova.compute.manager [req-d2f30b2d-1405-48fc-a7de-24ae5dbece68 req-cf778766-fe3a-4d84-8f1f-fe9c33c68fdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:42:34 compute-0 nova_compute[248510]: 2025-12-13 08:42:34.977 248514 DEBUG oslo_concurrency.lockutils [req-d2f30b2d-1405-48fc-a7de-24ae5dbece68 req-cf778766-fe3a-4d84-8f1f-fe9c33c68fdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:34 compute-0 nova_compute[248510]: 2025-12-13 08:42:34.977 248514 DEBUG oslo_concurrency.lockutils [req-d2f30b2d-1405-48fc-a7de-24ae5dbece68 req-cf778766-fe3a-4d84-8f1f-fe9c33c68fdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:34 compute-0 nova_compute[248510]: 2025-12-13 08:42:34.977 248514 DEBUG oslo_concurrency.lockutils [req-d2f30b2d-1405-48fc-a7de-24ae5dbece68 req-cf778766-fe3a-4d84-8f1f-fe9c33c68fdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:34 compute-0 nova_compute[248510]: 2025-12-13 08:42:34.978 248514 DEBUG nova.compute.manager [req-d2f30b2d-1405-48fc-a7de-24ae5dbece68 req-cf778766-fe3a-4d84-8f1f-fe9c33c68fdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Processing event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.114 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615355.1143548, 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.115 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] VM Started (Lifecycle Event)
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.117 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.119 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.122 248514 INFO nova.virt.libvirt.driver [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Instance spawned successfully.
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.122 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.144 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.150 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.155 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.155 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.156 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.156 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.157 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.159 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.195 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.195 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615355.1144834, 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.195 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] VM Paused (Lifecycle Event)
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.230 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.234 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615355.1190386, 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.234 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] VM Resumed (Lifecycle Event)
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.249 248514 INFO nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Took 9.14 seconds to spawn the instance on the hypervisor.
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.249 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.261 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.263 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.295 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.355 248514 INFO nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Took 10.44 seconds to build instance.
Dec 13 08:42:35 compute-0 nova_compute[248510]: 2025-12-13 08:42:35.380 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 318 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.0 MiB/s wr, 195 op/s
Dec 13 08:42:36 compute-0 nova_compute[248510]: 2025-12-13 08:42:36.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:36 compute-0 ceph-mon[76537]: pgmap v2395: 321 pgs: 321 active+clean; 318 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.0 MiB/s wr, 195 op/s
Dec 13 08:42:37 compute-0 nova_compute[248510]: 2025-12-13 08:42:37.120 248514 DEBUG nova.compute.manager [req-4615e44a-bcc2-4c3e-ae81-8313cb1ad513 req-9f299343-fad7-4903-9c1a-fa8d5401799e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:42:37 compute-0 nova_compute[248510]: 2025-12-13 08:42:37.120 248514 DEBUG oslo_concurrency.lockutils [req-4615e44a-bcc2-4c3e-ae81-8313cb1ad513 req-9f299343-fad7-4903-9c1a-fa8d5401799e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:37 compute-0 nova_compute[248510]: 2025-12-13 08:42:37.120 248514 DEBUG oslo_concurrency.lockutils [req-4615e44a-bcc2-4c3e-ae81-8313cb1ad513 req-9f299343-fad7-4903-9c1a-fa8d5401799e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:37 compute-0 nova_compute[248510]: 2025-12-13 08:42:37.121 248514 DEBUG oslo_concurrency.lockutils [req-4615e44a-bcc2-4c3e-ae81-8313cb1ad513 req-9f299343-fad7-4903-9c1a-fa8d5401799e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:37 compute-0 nova_compute[248510]: 2025-12-13 08:42:37.121 248514 DEBUG nova.compute.manager [req-4615e44a-bcc2-4c3e-ae81-8313cb1ad513 req-9f299343-fad7-4903-9c1a-fa8d5401799e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] No waiting events found dispatching network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:42:37 compute-0 nova_compute[248510]: 2025-12-13 08:42:37.121 248514 WARNING nova.compute.manager [req-4615e44a-bcc2-4c3e-ae81-8313cb1ad513 req-9f299343-fad7-4903-9c1a-fa8d5401799e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received unexpected event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 for instance with vm_state active and task_state None.
Dec 13 08:42:37 compute-0 nova_compute[248510]: 2025-12-13 08:42:37.491 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 318 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 6.0 MiB/s wr, 146 op/s
Dec 13 08:42:37 compute-0 podman[344260]: 2025-12-13 08:42:37.982833564 +0000 UTC m=+0.066813133 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 08:42:37 compute-0 podman[344261]: 2025-12-13 08:42:37.999820109 +0000 UTC m=+0.083070069 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 13 08:42:38 compute-0 podman[344259]: 2025-12-13 08:42:38.028980044 +0000 UTC m=+0.113144678 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec 13 08:42:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.496894) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615358496922, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 291, "num_deletes": 251, "total_data_size": 101214, "memory_usage": 108200, "flush_reason": "Manual Compaction"}
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615358528246, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 100159, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47172, "largest_seqno": 47462, "table_properties": {"data_size": 98208, "index_size": 179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4961, "raw_average_key_size": 18, "raw_value_size": 94431, "raw_average_value_size": 348, "num_data_blocks": 8, "num_entries": 271, "num_filter_entries": 271, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615354, "oldest_key_time": 1765615354, "file_creation_time": 1765615358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 31402 microseconds, and 1018 cpu microseconds.
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.528290) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 100159 bytes OK
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.528310) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.615547) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.615593) EVENT_LOG_v1 {"time_micros": 1765615358615583, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.615619) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 99070, prev total WAL file size 99070, number of live WAL files 2.
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.618118) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(97KB)], [107(9243KB)]
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615358618177, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 9565856, "oldest_snapshot_seqno": -1}
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6843 keys, 7777672 bytes, temperature: kUnknown
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615358871222, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 7777672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7735044, "index_size": 24404, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 178244, "raw_average_key_size": 26, "raw_value_size": 7615469, "raw_average_value_size": 1112, "num_data_blocks": 949, "num_entries": 6843, "num_filter_entries": 6843, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.871638) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 7777672 bytes
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.878159) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 37.8 rd, 30.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(173.2) write-amplify(77.7) OK, records in: 7352, records dropped: 509 output_compression: NoCompression
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.878190) EVENT_LOG_v1 {"time_micros": 1765615358878177, "job": 64, "event": "compaction_finished", "compaction_time_micros": 253170, "compaction_time_cpu_micros": 19160, "output_level": 6, "num_output_files": 1, "total_output_size": 7777672, "num_input_records": 7352, "num_output_records": 6843, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615358878377, "job": 64, "event": "table_file_deletion", "file_number": 109}
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615358880412, "job": 64, "event": "table_file_deletion", "file_number": 107}
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.618026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.880513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.880518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.880520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.880521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:42:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.880523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:42:38 compute-0 nova_compute[248510]: 2025-12-13 08:42:38.994 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:38 compute-0 nova_compute[248510]: 2025-12-13 08:42:38.995 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:38 compute-0 nova_compute[248510]: 2025-12-13 08:42:38.995 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:38 compute-0 nova_compute[248510]: 2025-12-13 08:42:38.995 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:38 compute-0 nova_compute[248510]: 2025-12-13 08:42:38.996 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:38 compute-0 nova_compute[248510]: 2025-12-13 08:42:38.997 248514 INFO nova.compute.manager [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Terminating instance
Dec 13 08:42:38 compute-0 nova_compute[248510]: 2025-12-13 08:42:38.998 248514 DEBUG nova.compute.manager [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:42:39 compute-0 kernel: tap6a8f6d0b-00 (unregistering): left promiscuous mode
Dec 13 08:42:39 compute-0 NetworkManager[50376]: <info>  [1765615359.0633] device (tap6a8f6d0b-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:42:39 compute-0 ovn_controller[148476]: 2025-12-13T08:42:39Z|00960|binding|INFO|Releasing lport 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 from this chassis (sb_readonly=0)
Dec 13 08:42:39 compute-0 ovn_controller[148476]: 2025-12-13T08:42:39Z|00961|binding|INFO|Setting lport 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 down in Southbound
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.070 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:39 compute-0 ovn_controller[148476]: 2025-12-13T08:42:39Z|00962|binding|INFO|Removing iface tap6a8f6d0b-00 ovn-installed in OVS
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.085 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:6b:01 10.100.0.7'], port_security=['fa:16:3e:89:6b:01 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.086 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.088 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.103 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b10647db-a289-4f73-ad84-321245fced13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:39 compute-0 ceph-mon[76537]: pgmap v2396: 321 pgs: 321 active+clean; 318 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 6.0 MiB/s wr, 146 op/s
Dec 13 08:42:39 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Deactivated successfully.
Dec 13 08:42:39 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Consumed 4.498s CPU time.
Dec 13 08:42:39 compute-0 systemd-machined[210538]: Machine qemu-123-instance-00000064 terminated.
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.132 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c745a6a5-9b3e-46e5-9e4c-b1250d32afff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.137 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f9c96f-247e-467a-966c-f73b74ce4c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.167 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e680adc7-7785-4d79-9c4d-5ce45b3a884b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.186 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[269992d1-af2f-453d-a0b4-4461960e37ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344332, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.203 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[40a3248c-0719-49b7-9e35-9b311eb52241]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344333, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344333, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.205 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.206 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.211 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.212 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.213 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.213 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.213 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.229 248514 INFO nova.virt.libvirt.driver [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Instance destroyed successfully.
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.230 248514 DEBUG nova.objects.instance [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.253 248514 DEBUG nova.virt.libvirt.vif [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1551844735',display_name='tempest-ServersTestJSON-server-1551844735',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1551844735',id=100,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:42:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-2qzmyk70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:42:35Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.253 248514 DEBUG nova.network.os_vif_util [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.254 248514 DEBUG nova.network.os_vif_util [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.255 248514 DEBUG os_vif [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.256 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.257 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a8f6d0b-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.258 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.260 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.262 248514 INFO os_vif [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00')
Dec 13 08:42:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 326 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 6.0 MiB/s wr, 196 op/s
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.867 248514 DEBUG nova.compute.manager [req-e1efcb37-bfff-4083-aad3-0a65b882dc35 req-ea2c795a-8989-4663-9325-d37640379ca4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-vif-unplugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.867 248514 DEBUG oslo_concurrency.lockutils [req-e1efcb37-bfff-4083-aad3-0a65b882dc35 req-ea2c795a-8989-4663-9325-d37640379ca4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.868 248514 DEBUG oslo_concurrency.lockutils [req-e1efcb37-bfff-4083-aad3-0a65b882dc35 req-ea2c795a-8989-4663-9325-d37640379ca4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.868 248514 DEBUG oslo_concurrency.lockutils [req-e1efcb37-bfff-4083-aad3-0a65b882dc35 req-ea2c795a-8989-4663-9325-d37640379ca4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.868 248514 DEBUG nova.compute.manager [req-e1efcb37-bfff-4083-aad3-0a65b882dc35 req-ea2c795a-8989-4663-9325-d37640379ca4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] No waiting events found dispatching network-vif-unplugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:42:39 compute-0 nova_compute[248510]: 2025-12-13 08:42:39.869 248514 DEBUG nova.compute.manager [req-e1efcb37-bfff-4083-aad3-0a65b882dc35 req-ea2c795a-8989-4663-9325-d37640379ca4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-vif-unplugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:42:40 compute-0 nova_compute[248510]: 2025-12-13 08:42:40.054 248514 INFO nova.virt.libvirt.driver [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Deleting instance files /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_del
Dec 13 08:42:40 compute-0 nova_compute[248510]: 2025-12-13 08:42:40.054 248514 INFO nova.virt.libvirt.driver [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Deletion of /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_del complete
Dec 13 08:42:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:42:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:42:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:42:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:42:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:42:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:42:40 compute-0 nova_compute[248510]: 2025-12-13 08:42:40.130 248514 INFO nova.compute.manager [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Took 1.13 seconds to destroy the instance on the hypervisor.
Dec 13 08:42:40 compute-0 nova_compute[248510]: 2025-12-13 08:42:40.130 248514 DEBUG oslo.service.loopingcall [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:42:40 compute-0 nova_compute[248510]: 2025-12-13 08:42:40.130 248514 DEBUG nova.compute.manager [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:42:40 compute-0 nova_compute[248510]: 2025-12-13 08:42:40.131 248514 DEBUG nova.network.neutron [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:42:41 compute-0 ceph-mon[76537]: pgmap v2397: 321 pgs: 321 active+clean; 326 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 6.0 MiB/s wr, 196 op/s
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.480 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.605 248514 DEBUG nova.network.neutron [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.628 248514 INFO nova.compute.manager [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Took 1.50 seconds to deallocate network for instance.
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.691 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.692 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.744 248514 DEBUG nova.compute.manager [req-44fd1b34-730f-44ed-a57e-7dca291c8f83 req-e53774e9-d2e6-4e3d-aef3-53c57c96b172 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-vif-deleted-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.833 248514 DEBUG oslo_concurrency.processutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 298 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 207 op/s
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.994 248514 DEBUG nova.compute.manager [req-2e9c1a12-bf53-4de8-b111-c0df572bbae1 req-5eb187af-9a72-41e3-90fa-fd6302d7c9bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.994 248514 DEBUG oslo_concurrency.lockutils [req-2e9c1a12-bf53-4de8-b111-c0df572bbae1 req-5eb187af-9a72-41e3-90fa-fd6302d7c9bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.995 248514 DEBUG oslo_concurrency.lockutils [req-2e9c1a12-bf53-4de8-b111-c0df572bbae1 req-5eb187af-9a72-41e3-90fa-fd6302d7c9bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.995 248514 DEBUG oslo_concurrency.lockutils [req-2e9c1a12-bf53-4de8-b111-c0df572bbae1 req-5eb187af-9a72-41e3-90fa-fd6302d7c9bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.995 248514 DEBUG nova.compute.manager [req-2e9c1a12-bf53-4de8-b111-c0df572bbae1 req-5eb187af-9a72-41e3-90fa-fd6302d7c9bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] No waiting events found dispatching network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:42:41 compute-0 nova_compute[248510]: 2025-12-13 08:42:41.995 248514 WARNING nova.compute.manager [req-2e9c1a12-bf53-4de8-b111-c0df572bbae1 req-5eb187af-9a72-41e3-90fa-fd6302d7c9bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received unexpected event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 for instance with vm_state deleted and task_state None.
Dec 13 08:42:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:42:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273022990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:42 compute-0 nova_compute[248510]: 2025-12-13 08:42:42.405 248514 DEBUG oslo_concurrency.processutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:42 compute-0 nova_compute[248510]: 2025-12-13 08:42:42.411 248514 DEBUG nova.compute.provider_tree [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:42:42 compute-0 nova_compute[248510]: 2025-12-13 08:42:42.434 248514 DEBUG nova.scheduler.client.report [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:42:42 compute-0 nova_compute[248510]: 2025-12-13 08:42:42.467 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:42 compute-0 nova_compute[248510]: 2025-12-13 08:42:42.520 248514 INFO nova.scheduler.client.report [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c
Dec 13 08:42:42 compute-0 nova_compute[248510]: 2025-12-13 08:42:42.627 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:43 compute-0 ceph-mon[76537]: pgmap v2398: 321 pgs: 321 active+clean; 298 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 207 op/s
Dec 13 08:42:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4273022990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.248 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "2e9e6711-892f-4278-b911-bfacbff9b48e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.248 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.248 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.249 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.249 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.250 248514 INFO nova.compute.manager [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Terminating instance
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.251 248514 DEBUG nova.compute.manager [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:42:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:42:43 compute-0 kernel: tapa6d7b148-2f (unregistering): left promiscuous mode
Dec 13 08:42:43 compute-0 NetworkManager[50376]: <info>  [1765615363.3568] device (tapa6d7b148-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.361 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:43 compute-0 ovn_controller[148476]: 2025-12-13T08:42:43Z|00963|binding|INFO|Releasing lport a6d7b148-2fef-4c47-a3fa-c8948759b4a9 from this chassis (sb_readonly=0)
Dec 13 08:42:43 compute-0 ovn_controller[148476]: 2025-12-13T08:42:43Z|00964|binding|INFO|Setting lport a6d7b148-2fef-4c47-a3fa-c8948759b4a9 down in Southbound
Dec 13 08:42:43 compute-0 ovn_controller[148476]: 2025-12-13T08:42:43Z|00965|binding|INFO|Removing iface tapa6d7b148-2f ovn-installed in OVS
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.363 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.376 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:43 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000062.scope: Deactivated successfully.
Dec 13 08:42:43 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000062.scope: Consumed 13.268s CPU time.
Dec 13 08:42:43 compute-0 systemd-machined[210538]: Machine qemu-121-instance-00000062 terminated.
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.487 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Instance destroyed successfully.
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.488 248514 DEBUG nova.objects.instance [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid 2e9e6711-892f-4278-b911-bfacbff9b48e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.545 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:df:21 10.100.0.9'], port_security=['fa:16:3e:e5:df:21 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2e9e6711-892f-4278-b911-bfacbff9b48e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a6d7b148-2fef-4c47-a3fa-c8948759b4a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.546 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a6d7b148-2fef-4c47-a3fa-c8948759b4a9 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.547 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.563 248514 DEBUG nova.virt.libvirt.vif [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:41:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1551844735',display_name='tempest-ServersTestJSON-server-1551844735',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1551844735',id=98,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:42:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-6ci0kbz7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:42:20Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=2e9e6711-892f-4278-b911-bfacbff9b48e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.563 248514 DEBUG nova.network.os_vif_util [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.564 248514 DEBUG nova.network.os_vif_util [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.565 248514 DEBUG os_vif [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.566 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.566 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6d7b148-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.568 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.570 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78a0f578-2fbe-4168-8f82-2f01c381407b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.572 248514 INFO os_vif [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f')
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.606 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1afe41ae-7fde-4c89-b696-3c658cf84d53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.610 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd4aa5f-f2f0-4587-ab2a-65dc7ee2ce5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.646 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4663de1b-a58d-4aec-b66a-b8133659c7a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea65c441-a543-479b-a73d-c41157e28b01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344426, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.699 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b9693afd-a969-4ea7-b409-2dfe4262a7dd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344427, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344427, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.702 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.703 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.708 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.708 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.709 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:42:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.709 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:42:43 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000063.scope: Deactivated successfully.
Dec 13 08:42:43 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000063.scope: Consumed 13.446s CPU time.
Dec 13 08:42:43 compute-0 systemd-machined[210538]: Machine qemu-122-instance-00000063 terminated.
Dec 13 08:42:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 285 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 224 op/s
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.866 248514 INFO nova.virt.libvirt.driver [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Deleting instance files /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e_del
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.867 248514 INFO nova.virt.libvirt.driver [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Deletion of /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e_del complete
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.939 248514 INFO nova.compute.manager [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Took 0.69 seconds to destroy the instance on the hypervisor.
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.940 248514 DEBUG oslo.service.loopingcall [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.940 248514 DEBUG nova.compute.manager [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:42:43 compute-0 nova_compute[248510]: 2025-12-13 08:42:43.940 248514 DEBUG nova.network.neutron [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:42:44 compute-0 nova_compute[248510]: 2025-12-13 08:42:44.495 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance shutdown successfully after 24 seconds.
Dec 13 08:42:44 compute-0 nova_compute[248510]: 2025-12-13 08:42:44.501 248514 INFO nova.virt.libvirt.driver [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance destroyed successfully.
Dec 13 08:42:44 compute-0 nova_compute[248510]: 2025-12-13 08:42:44.506 248514 INFO nova.virt.libvirt.driver [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance destroyed successfully.
Dec 13 08:42:44 compute-0 nova_compute[248510]: 2025-12-13 08:42:44.802 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deleting instance files /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad_del
Dec 13 08:42:44 compute-0 nova_compute[248510]: 2025-12-13 08:42:44.803 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deletion of /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad_del complete
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.013 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.014 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Creating image(s)
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.037 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.064 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.091 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.099 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.186 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.187 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.187 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.187 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.210 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.214 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:45 compute-0 ceph-mon[76537]: pgmap v2399: 321 pgs: 321 active+clean; 285 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 224 op/s
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.527 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.585 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] resizing rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.681 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.683 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Ensure instance console log exists: /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.684 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.684 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.685 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.687 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.693 248514 WARNING nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.699 248514 DEBUG nova.virt.libvirt.host [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.699 248514 DEBUG nova.virt.libvirt.host [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.703 248514 DEBUG nova.virt.libvirt.host [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.704 248514 DEBUG nova.virt.libvirt.host [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.704 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.704 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.705 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.705 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.705 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.705 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.706 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.706 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.706 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.706 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.707 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.707 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.707 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.736 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.814 248514 DEBUG nova.network.neutron [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:42:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 204 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 215 op/s
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.892 248514 INFO nova.compute.manager [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Took 1.95 seconds to deallocate network for instance.
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.907 248514 DEBUG nova.compute.manager [req-c2edb3a6-263d-4cc5-82af-0fcd4cdf4ec7 req-66ccdf7b-db3c-4330-8174-f5cb3c4b783b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Received event network-vif-deleted-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.908 248514 INFO nova.compute.manager [req-c2edb3a6-263d-4cc5-82af-0fcd4cdf4ec7 req-66ccdf7b-db3c-4330-8174-f5cb3c4b783b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Neutron deleted interface a6d7b148-2fef-4c47-a3fa-c8948759b4a9; detaching it from the instance and deleting it from the info cache
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.908 248514 DEBUG nova.network.neutron [req-c2edb3a6-263d-4cc5-82af-0fcd4cdf4ec7 req-66ccdf7b-db3c-4330-8174-f5cb3c4b783b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.949 248514 DEBUG nova.compute.manager [req-c2edb3a6-263d-4cc5-82af-0fcd4cdf4ec7 req-66ccdf7b-db3c-4330-8174-f5cb3c4b783b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Detach interface failed, port_id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9, reason: Instance 2e9e6711-892f-4278-b911-bfacbff9b48e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.980 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:45 compute-0 nova_compute[248510]: 2025-12-13 08:42:45.981 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:46 compute-0 nova_compute[248510]: 2025-12-13 08:42:46.094 248514 DEBUG oslo_concurrency.processutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:46 compute-0 nova_compute[248510]: 2025-12-13 08:42:46.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:42:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058594592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:46 compute-0 nova_compute[248510]: 2025-12-13 08:42:46.396 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:46 compute-0 nova_compute[248510]: 2025-12-13 08:42:46.420 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:46 compute-0 nova_compute[248510]: 2025-12-13 08:42:46.424 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:42:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2517483989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:46 compute-0 nova_compute[248510]: 2025-12-13 08:42:46.693 248514 DEBUG oslo_concurrency.processutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:46 compute-0 nova_compute[248510]: 2025-12-13 08:42:46.699 248514 DEBUG nova.compute.provider_tree [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:42:46 compute-0 nova_compute[248510]: 2025-12-13 08:42:46.737 248514 DEBUG nova.scheduler.client.report [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:42:46 compute-0 nova_compute[248510]: 2025-12-13 08:42:46.770 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:46 compute-0 nova_compute[248510]: 2025-12-13 08:42:46.917 248514 INFO nova.scheduler.client.report [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance 2e9e6711-892f-4278-b911-bfacbff9b48e
Dec 13 08:42:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:42:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459931686' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.066 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.068 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.072 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:42:47 compute-0 nova_compute[248510]:   <uuid>4edf7378-73e8-4119-a019-6ab7098ee4ad</uuid>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   <name>instance-00000063</name>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <nova:name>tempest-ServerShowV257Test-server-1971514055</nova:name>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:42:45</nova:creationTime>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:42:47 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:42:47 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:42:47 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:42:47 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:42:47 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:42:47 compute-0 nova_compute[248510]:         <nova:user uuid="e382ad6b2fdc41398bfa58dbb651d4be">tempest-ServerShowV257Test-371152472-project-member</nova:user>
Dec 13 08:42:47 compute-0 nova_compute[248510]:         <nova:project uuid="6d41fdecfe334aaeaaba54b9c6eaeb00">tempest-ServerShowV257Test-371152472</nova:project>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <system>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <entry name="serial">4edf7378-73e8-4119-a019-6ab7098ee4ad</entry>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <entry name="uuid">4edf7378-73e8-4119-a019-6ab7098ee4ad</entry>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     </system>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   <os>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   </os>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   <features>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   </features>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/4edf7378-73e8-4119-a019-6ab7098ee4ad_disk">
Dec 13 08:42:47 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       </source>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:42:47 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config">
Dec 13 08:42:47 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       </source>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:42:47 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/console.log" append="off"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <video>
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     </video>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:42:47 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:42:47 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:42:47 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:42:47 compute-0 nova_compute[248510]: </domain>
Dec 13 08:42:47 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.152 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.153 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.154 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Using config drive
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.183 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.220 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:47 compute-0 ceph-mon[76537]: pgmap v2400: 321 pgs: 321 active+clean; 204 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 215 op/s
Dec 13 08:42:47 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1058594592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:47 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2517483989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:47 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2459931686' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.277 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'keypairs' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.480 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Creating config drive at /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.486 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4kcn1t6n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.630 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4kcn1t6n" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.652 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.656 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.803 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:47 compute-0 nova_compute[248510]: 2025-12-13 08:42:47.806 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deleting local config drive /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config because it was imported into RBD.
Dec 13 08:42:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 204 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 126 KiB/s wr, 135 op/s
Dec 13 08:42:47 compute-0 systemd-machined[210538]: New machine qemu-124-instance-00000063.
Dec 13 08:42:47 compute-0 systemd[1]: Started Virtual Machine qemu-124-instance-00000063.
Dec 13 08:42:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.568 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.593 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 4edf7378-73e8-4119-a019-6ab7098ee4ad due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.594 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615368.5933955, 4edf7378-73e8-4119-a019-6ab7098ee4ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.594 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] VM Resumed (Lifecycle Event)
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.597 248514 DEBUG nova.compute.manager [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.597 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.601 248514 INFO nova.virt.libvirt.driver [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance spawned successfully.
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.602 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.632 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.636 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.637 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.637 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.637 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.638 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.638 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.642 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.679 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.679 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615368.5968769, 4edf7378-73e8-4119-a019-6ab7098ee4ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.680 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] VM Started (Lifecycle Event)
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.714 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.718 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.740 248514 DEBUG nova.compute.manager [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.753 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.840 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.841 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.842 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 08:42:48 compute-0 nova_compute[248510]: 2025-12-13 08:42:48.986 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:49 compute-0 ceph-mon[76537]: pgmap v2401: 321 pgs: 321 active+clean; 204 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 126 KiB/s wr, 135 op/s
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.465 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "4edf7378-73e8-4119-a019-6ab7098ee4ad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.466 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "4edf7378-73e8-4119-a019-6ab7098ee4ad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.466 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "4edf7378-73e8-4119-a019-6ab7098ee4ad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.466 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "4edf7378-73e8-4119-a019-6ab7098ee4ad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.467 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "4edf7378-73e8-4119-a019-6ab7098ee4ad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.468 248514 INFO nova.compute.manager [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Terminating instance
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.469 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "refresh_cache-4edf7378-73e8-4119-a019-6ab7098ee4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.470 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquired lock "refresh_cache-4edf7378-73e8-4119-a019-6ab7098ee4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.470 248514 DEBUG nova.network.neutron [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.647 248514 DEBUG nova.network.neutron [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:42:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2402: 321 pgs: 321 active+clean; 146 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 199 op/s
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.928 248514 DEBUG nova.network.neutron [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.950 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Releasing lock "refresh_cache-4edf7378-73e8-4119-a019-6ab7098ee4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:42:49 compute-0 nova_compute[248510]: 2025-12-13 08:42:49.951 248514 DEBUG nova.compute.manager [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:42:50 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000063.scope: Deactivated successfully.
Dec 13 08:42:50 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000063.scope: Consumed 2.148s CPU time.
Dec 13 08:42:50 compute-0 systemd-machined[210538]: Machine qemu-124-instance-00000063 terminated.
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.172 248514 INFO nova.virt.libvirt.driver [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance destroyed successfully.
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.172 248514 DEBUG nova.objects.instance [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'resources' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.458 248514 INFO nova.virt.libvirt.driver [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deleting instance files /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad_del
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.459 248514 INFO nova.virt.libvirt.driver [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deletion of /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad_del complete
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.527 248514 INFO nova.compute.manager [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Took 0.57 seconds to destroy the instance on the hypervisor.
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.528 248514 DEBUG oslo.service.loopingcall [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.529 248514 DEBUG nova.compute.manager [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.529 248514 DEBUG nova.network.neutron [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.722 248514 DEBUG nova.network.neutron [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.739 248514 DEBUG nova.network.neutron [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.764 248514 INFO nova.compute.manager [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Took 0.23 seconds to deallocate network for instance.
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.816 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.817 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:50 compute-0 nova_compute[248510]: 2025-12-13 08:42:50.931 248514 DEBUG oslo_concurrency.processutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:51 compute-0 nova_compute[248510]: 2025-12-13 08:42:51.228 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:51 compute-0 ceph-mon[76537]: pgmap v2402: 321 pgs: 321 active+clean; 146 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 199 op/s
Dec 13 08:42:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:42:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962436280' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:51 compute-0 nova_compute[248510]: 2025-12-13 08:42:51.499 248514 DEBUG oslo_concurrency.processutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:51 compute-0 nova_compute[248510]: 2025-12-13 08:42:51.506 248514 DEBUG nova.compute.provider_tree [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:42:51 compute-0 nova_compute[248510]: 2025-12-13 08:42:51.538 248514 DEBUG nova.scheduler.client.report [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:42:51 compute-0 nova_compute[248510]: 2025-12-13 08:42:51.622 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:51 compute-0 nova_compute[248510]: 2025-12-13 08:42:51.659 248514 INFO nova.scheduler.client.report [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Deleted allocations for instance 4edf7378-73e8-4119-a019-6ab7098ee4ad
Dec 13 08:42:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 167 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Dec 13 08:42:51 compute-0 nova_compute[248510]: 2025-12-13 08:42:51.931 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "4edf7378-73e8-4119-a019-6ab7098ee4ad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/962436280' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:52 compute-0 nova_compute[248510]: 2025-12-13 08:42:52.569 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:52 compute-0 nova_compute[248510]: 2025-12-13 08:42:52.569 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:52 compute-0 nova_compute[248510]: 2025-12-13 08:42:52.618 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:42:52 compute-0 nova_compute[248510]: 2025-12-13 08:42:52.703 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:52 compute-0 nova_compute[248510]: 2025-12-13 08:42:52.704 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:52 compute-0 nova_compute[248510]: 2025-12-13 08:42:52.711 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:42:52 compute-0 nova_compute[248510]: 2025-12-13 08:42:52.712 248514 INFO nova.compute.claims [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:42:52 compute-0 nova_compute[248510]: 2025-12-13 08:42:52.935 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:42:53 compute-0 ceph-mon[76537]: pgmap v2403: 321 pgs: 321 active+clean; 167 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Dec 13 08:42:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:53.431 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:42:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:53.432 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:42:53 compute-0 nova_compute[248510]: 2025-12-13 08:42:53.432 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:42:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1438340591' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:53 compute-0 nova_compute[248510]: 2025-12-13 08:42:53.520 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:53 compute-0 nova_compute[248510]: 2025-12-13 08:42:53.526 248514 DEBUG nova.compute.provider_tree [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:42:53 compute-0 nova_compute[248510]: 2025-12-13 08:42:53.561 248514 DEBUG nova.scheduler.client.report [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:42:53 compute-0 nova_compute[248510]: 2025-12-13 08:42:53.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:53 compute-0 nova_compute[248510]: 2025-12-13 08:42:53.644 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:53 compute-0 nova_compute[248510]: 2025-12-13 08:42:53.645 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:42:53 compute-0 nova_compute[248510]: 2025-12-13 08:42:53.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:42:53 compute-0 nova_compute[248510]: 2025-12-13 08:42:53.779 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:42:53 compute-0 nova_compute[248510]: 2025-12-13 08:42:53.780 248514 DEBUG nova.network.neutron [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:42:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 153 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.159 248514 INFO nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.228 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615359.2278087, 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.229 248514 INFO nova.compute.manager [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] VM Stopped (Lifecycle Event)
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.241 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.272 248514 DEBUG nova.compute.manager [None req-f95517ad-de54-41f3-acc0-cbc72a6366e4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1438340591' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:42:54 compute-0 ceph-mon[76537]: pgmap v2404: 321 pgs: 321 active+clean; 153 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Dec 13 08:42:54 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.389 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.391 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.391 248514 INFO nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Creating image(s)
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.421 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.446 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.467 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.471 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.507 248514 DEBUG nova.policy [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.541 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.542 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.543 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.543 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.565 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.568 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 47b7be19-6608-45b4-9f0e-74393969e3f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.871 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 47b7be19-6608-45b4-9f0e-74393969e3f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:42:54 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.932 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:42:55 compute-0 nova_compute[248510]: 2025-12-13 08:42:54.999 248514 DEBUG nova.objects.instance [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid 47b7be19-6608-45b4-9f0e-74393969e3f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:42:55 compute-0 nova_compute[248510]: 2025-12-13 08:42:55.101 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:42:55 compute-0 nova_compute[248510]: 2025-12-13 08:42:55.101 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Ensure instance console log exists: /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:42:55 compute-0 nova_compute[248510]: 2025-12-13 08:42:55.102 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:55 compute-0 nova_compute[248510]: 2025-12-13 08:42:55.102 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:55 compute-0 nova_compute[248510]: 2025-12-13 08:42:55.103 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:55.422 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:42:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:55.422 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:42:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:42:55.423 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:42:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 124 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 195 op/s
Dec 13 08:42:56 compute-0 nova_compute[248510]: 2025-12-13 08:42:56.233 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:56 compute-0 nova_compute[248510]: 2025-12-13 08:42:56.618 248514 DEBUG nova.network.neutron [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Successfully created port: ab92e3fe-6177-4ba7-962b-08377c7056ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:42:57 compute-0 ceph-mon[76537]: pgmap v2405: 321 pgs: 321 active+clean; 124 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 195 op/s
Dec 13 08:42:57 compute-0 nova_compute[248510]: 2025-12-13 08:42:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:42:57 compute-0 nova_compute[248510]: 2025-12-13 08:42:57.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:42:57 compute-0 nova_compute[248510]: 2025-12-13 08:42:57.809 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:42:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 124 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Dec 13 08:42:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:42:58 compute-0 ceph-mon[76537]: pgmap v2406: 321 pgs: 321 active+clean; 124 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Dec 13 08:42:58 compute-0 nova_compute[248510]: 2025-12-13 08:42:58.486 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615363.4846647, 2e9e6711-892f-4278-b911-bfacbff9b48e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:42:58 compute-0 nova_compute[248510]: 2025-12-13 08:42:58.487 248514 INFO nova.compute.manager [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] VM Stopped (Lifecycle Event)
Dec 13 08:42:58 compute-0 nova_compute[248510]: 2025-12-13 08:42:58.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:42:58 compute-0 nova_compute[248510]: 2025-12-13 08:42:58.670 248514 DEBUG nova.compute.manager [None req-89e29a84-a949-4f03-ae05-1f772516bbb1 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:42:58 compute-0 nova_compute[248510]: 2025-12-13 08:42:58.697 248514 DEBUG nova.network.neutron [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Successfully updated port: ab92e3fe-6177-4ba7-962b-08377c7056ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:42:58 compute-0 nova_compute[248510]: 2025-12-13 08:42:58.743 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-47b7be19-6608-45b4-9f0e-74393969e3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:42:58 compute-0 nova_compute[248510]: 2025-12-13 08:42:58.743 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-47b7be19-6608-45b4-9f0e-74393969e3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:42:58 compute-0 nova_compute[248510]: 2025-12-13 08:42:58.743 248514 DEBUG nova.network.neutron [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:42:58 compute-0 nova_compute[248510]: 2025-12-13 08:42:58.914 248514 DEBUG nova.compute.manager [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-changed-ab92e3fe-6177-4ba7-962b-08377c7056ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:42:58 compute-0 nova_compute[248510]: 2025-12-13 08:42:58.915 248514 DEBUG nova.compute.manager [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Refreshing instance network info cache due to event network-changed-ab92e3fe-6177-4ba7-962b-08377c7056ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:42:58 compute-0 nova_compute[248510]: 2025-12-13 08:42:58.915 248514 DEBUG oslo_concurrency.lockutils [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-47b7be19-6608-45b4-9f0e-74393969e3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:42:59 compute-0 nova_compute[248510]: 2025-12-13 08:42:59.235 248514 DEBUG nova.network.neutron [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:42:59 compute-0 nova_compute[248510]: 2025-12-13 08:42:59.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:42:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 167 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.097 248514 DEBUG nova.network.neutron [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Updating instance_info_cache with network_info: [{"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:43:01 compute-0 ceph-mon[76537]: pgmap v2407: 321 pgs: 321 active+clean; 167 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.170 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-47b7be19-6608-45b4-9f0e-74393969e3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.171 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Instance network_info: |[{"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.173 248514 DEBUG oslo_concurrency.lockutils [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-47b7be19-6608-45b4-9f0e-74393969e3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.173 248514 DEBUG nova.network.neutron [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Refreshing network info cache for port ab92e3fe-6177-4ba7-962b-08377c7056ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.177 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Start _get_guest_xml network_info=[{"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.182 248514 WARNING nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.188 248514 DEBUG nova.virt.libvirt.host [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.189 248514 DEBUG nova.virt.libvirt.host [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.195 248514 DEBUG nova.virt.libvirt.host [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.195 248514 DEBUG nova.virt.libvirt.host [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.195 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.196 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.196 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.196 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.197 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.197 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.197 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.198 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.198 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.198 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.198 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.199 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.201 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:43:01 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3708720613' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.756 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.786 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.792 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:01 compute-0 nova_compute[248510]: 2025-12-13 08:43:01.839 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:43:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 167 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 123 op/s
Dec 13 08:43:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3708720613' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:43:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:43:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683537550' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.365 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.367 248514 DEBUG nova.virt.libvirt.vif [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:42:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-173101990',display_name='tempest-ServersTestJSON-server-173101990',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-173101990',id=101,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-l0nko7x7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:42:54Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=47b7be19-6608-45b4-9f0e-74393969e3f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.368 248514 DEBUG nova.network.os_vif_util [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.369 248514 DEBUG nova.network.os_vif_util [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.371 248514 DEBUG nova.objects.instance [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid 47b7be19-6608-45b4-9f0e-74393969e3f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.402 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:43:02 compute-0 nova_compute[248510]:   <uuid>47b7be19-6608-45b4-9f0e-74393969e3f4</uuid>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   <name>instance-00000065</name>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersTestJSON-server-173101990</nova:name>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:43:01</nova:creationTime>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:43:02 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:43:02 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:43:02 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:43:02 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:43:02 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:43:02 compute-0 nova_compute[248510]:         <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 08:43:02 compute-0 nova_compute[248510]:         <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:43:02 compute-0 nova_compute[248510]:         <nova:port uuid="ab92e3fe-6177-4ba7-962b-08377c7056ad">
Dec 13 08:43:02 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <system>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <entry name="serial">47b7be19-6608-45b4-9f0e-74393969e3f4</entry>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <entry name="uuid">47b7be19-6608-45b4-9f0e-74393969e3f4</entry>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     </system>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   <os>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   </os>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   <features>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   </features>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/47b7be19-6608-45b4-9f0e-74393969e3f4_disk">
Dec 13 08:43:02 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       </source>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:43:02 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/47b7be19-6608-45b4-9f0e-74393969e3f4_disk.config">
Dec 13 08:43:02 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       </source>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:43:02 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:aa:4e:c6"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <target dev="tapab92e3fe-61"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/console.log" append="off"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <video>
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     </video>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:43:02 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:43:02 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:43:02 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:43:02 compute-0 nova_compute[248510]: </domain>
Dec 13 08:43:02 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.404 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Preparing to wait for external event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.404 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.405 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.405 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.406 248514 DEBUG nova.virt.libvirt.vif [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:42:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-173101990',display_name='tempest-ServersTestJSON-server-173101990',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-173101990',id=101,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-l0nko7x7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:42:54Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=47b7be19-6608-45b4-9f0e-74393969e3f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.406 248514 DEBUG nova.network.os_vif_util [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.407 248514 DEBUG nova.network.os_vif_util [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.407 248514 DEBUG os_vif [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.408 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.408 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.411 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab92e3fe-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.411 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab92e3fe-61, col_values=(('external_ids', {'iface-id': 'ab92e3fe-6177-4ba7-962b-08377c7056ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:4e:c6', 'vm-uuid': '47b7be19-6608-45b4-9f0e-74393969e3f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.466 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:02 compute-0 NetworkManager[50376]: <info>  [1765615382.4667] manager: (tapab92e3fe-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.472 248514 INFO os_vif [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61')
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.755 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.756 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.756 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:aa:4e:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.757 248514 INFO nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Using config drive
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.779 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.785 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.914 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.915 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.916 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.916 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:43:02 compute-0 nova_compute[248510]: 2025-12-13 08:43:02.916 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:03 compute-0 ceph-mon[76537]: pgmap v2408: 321 pgs: 321 active+clean; 167 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 123 op/s
Dec 13 08:43:03 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1683537550' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:43:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:43:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:03.434 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:43:03 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814005455' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:43:03 compute-0 nova_compute[248510]: 2025-12-13 08:43:03.501 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:03 compute-0 nova_compute[248510]: 2025-12-13 08:43:03.586 248514 INFO nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Creating config drive at /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/disk.config
Dec 13 08:43:03 compute-0 nova_compute[248510]: 2025-12-13 08:43:03.591 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf73n2v4t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:03 compute-0 nova_compute[248510]: 2025-12-13 08:43:03.732 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf73n2v4t" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:03 compute-0 nova_compute[248510]: 2025-12-13 08:43:03.765 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:43:03 compute-0 nova_compute[248510]: 2025-12-13 08:43:03.769 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/disk.config 47b7be19-6608-45b4-9f0e-74393969e3f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 167 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 13 08:43:03 compute-0 nova_compute[248510]: 2025-12-13 08:43:03.900 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/disk.config 47b7be19-6608-45b4-9f0e-74393969e3f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:03 compute-0 nova_compute[248510]: 2025-12-13 08:43:03.901 248514 INFO nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Deleting local config drive /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/disk.config because it was imported into RBD.
Dec 13 08:43:03 compute-0 kernel: tapab92e3fe-61: entered promiscuous mode
Dec 13 08:43:03 compute-0 nova_compute[248510]: 2025-12-13 08:43:03.948 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:03 compute-0 ovn_controller[148476]: 2025-12-13T08:43:03Z|00966|binding|INFO|Claiming lport ab92e3fe-6177-4ba7-962b-08377c7056ad for this chassis.
Dec 13 08:43:03 compute-0 ovn_controller[148476]: 2025-12-13T08:43:03Z|00967|binding|INFO|ab92e3fe-6177-4ba7-962b-08377c7056ad: Claiming fa:16:3e:aa:4e:c6 10.100.0.6
Dec 13 08:43:03 compute-0 NetworkManager[50376]: <info>  [1765615383.9492] manager: (tapab92e3fe-61): new Tun device (/org/freedesktop/NetworkManager/Devices/403)
Dec 13 08:43:03 compute-0 ovn_controller[148476]: 2025-12-13T08:43:03Z|00968|binding|INFO|Setting lport ab92e3fe-6177-4ba7-962b-08377c7056ad ovn-installed in OVS
Dec 13 08:43:03 compute-0 nova_compute[248510]: 2025-12-13 08:43:03.964 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:03 compute-0 nova_compute[248510]: 2025-12-13 08:43:03.965 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:03 compute-0 systemd-udevd[345205]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:43:03 compute-0 systemd-machined[210538]: New machine qemu-125-instance-00000065.
Dec 13 08:43:03 compute-0 NetworkManager[50376]: <info>  [1765615383.9835] device (tapab92e3fe-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:43:03 compute-0 NetworkManager[50376]: <info>  [1765615383.9844] device (tapab92e3fe-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:43:03 compute-0 ovn_controller[148476]: 2025-12-13T08:43:03Z|00969|binding|INFO|Setting lport ab92e3fe-6177-4ba7-962b-08377c7056ad up in Southbound
Dec 13 08:43:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:03.986 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:4e:c6 10.100.0.6'], port_security=['fa:16:3e:aa:4e:c6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '47b7be19-6608-45b4-9f0e-74393969e3f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ab92e3fe-6177-4ba7-962b-08377c7056ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:43:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:03.987 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ab92e3fe-6177-4ba7-962b-08377c7056ad in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis
Dec 13 08:43:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:03.989 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:43:03 compute-0 systemd[1]: Started Virtual Machine qemu-125-instance-00000065.
Dec 13 08:43:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.004 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b59ef43-c26b-4c23-b746-996c084f2499]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.010 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.010 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:43:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.031 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[633a455f-39f2-4949-9c83-b62e1575e0e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.034 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0f88e6-b08f-4850-b53c-d397ad3d0d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.052 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.053 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:43:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.062 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f18fb8-5097-4c8d-a2a4-13d8dbb5124e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.081 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f96e8e69-2f63-4061-9ffc-f4b3630c5156]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345219, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d121803b-17d0-4d2a-8ed1-084185e56b16]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345221, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345221, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.099 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.100 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.101 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.102 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:43:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.102 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.103 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:43:04 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3814005455' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.183 248514 DEBUG nova.network.neutron [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Updated VIF entry in instance network info cache for port ab92e3fe-6177-4ba7-962b-08377c7056ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.183 248514 DEBUG nova.network.neutron [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Updating instance_info_cache with network_info: [{"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.217 248514 DEBUG oslo_concurrency.lockutils [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-47b7be19-6608-45b4-9f0e-74393969e3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.225 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.226 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3499MB free_disk=59.92131443321705GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.226 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.227 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.584 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615384.5835812, 47b7be19-6608-45b4-9f0e-74393969e3f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.584 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] VM Started (Lifecycle Event)
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.719 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.723 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615384.5846167, 47b7be19-6608-45b4-9f0e-74393969e3f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.723 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] VM Paused (Lifecycle Event)
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.763 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.764 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 47b7be19-6608-45b4-9f0e-74393969e3f4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.764 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.764 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.829 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.832 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.856 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:04 compute-0 nova_compute[248510]: 2025-12-13 08:43:04.891 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.106 248514 DEBUG nova.compute.manager [req-c6e52120-a7c0-418a-9734-5bd6faca1644 req-b5eb0e8f-a0e1-42de-bdfd-7e4c7def2611 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.106 248514 DEBUG oslo_concurrency.lockutils [req-c6e52120-a7c0-418a-9734-5bd6faca1644 req-b5eb0e8f-a0e1-42de-bdfd-7e4c7def2611 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.106 248514 DEBUG oslo_concurrency.lockutils [req-c6e52120-a7c0-418a-9734-5bd6faca1644 req-b5eb0e8f-a0e1-42de-bdfd-7e4c7def2611 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.107 248514 DEBUG oslo_concurrency.lockutils [req-c6e52120-a7c0-418a-9734-5bd6faca1644 req-b5eb0e8f-a0e1-42de-bdfd-7e4c7def2611 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.107 248514 DEBUG nova.compute.manager [req-c6e52120-a7c0-418a-9734-5bd6faca1644 req-b5eb0e8f-a0e1-42de-bdfd-7e4c7def2611 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Processing event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.108 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.127 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615385.1117122, 47b7be19-6608-45b4-9f0e-74393969e3f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.128 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] VM Resumed (Lifecycle Event)
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.130 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.152 248514 INFO nova.virt.libvirt.driver [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Instance spawned successfully.
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.153 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.170 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615370.1698139, 4edf7378-73e8-4119-a019-6ab7098ee4ad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.171 248514 INFO nova.compute.manager [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] VM Stopped (Lifecycle Event)
Dec 13 08:43:05 compute-0 ceph-mon[76537]: pgmap v2409: 321 pgs: 321 active+clean; 167 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 13 08:43:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:43:05 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756913723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.404 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.404 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.405 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.405 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.406 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.406 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.409 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.411 248514 DEBUG nova.compute.manager [None req-7c3baadc-27e3-44f3-8111-d1a796d054a6 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.411 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.418 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.420 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.456 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.464 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.495 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.496 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.506 248514 INFO nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Took 11.12 seconds to spawn the instance on the hypervisor.
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.506 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.590 248514 INFO nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Took 12.91 seconds to build instance.
Dec 13 08:43:05 compute-0 nova_compute[248510]: 2025-12-13 08:43:05.644 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 167 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 694 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 13 08:43:06 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1756913723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:43:06 compute-0 nova_compute[248510]: 2025-12-13 08:43:06.302 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:06 compute-0 nova_compute[248510]: 2025-12-13 08:43:06.483 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:43:06 compute-0 nova_compute[248510]: 2025-12-13 08:43:06.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:43:06 compute-0 nova_compute[248510]: 2025-12-13 08:43:06.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:43:07 compute-0 ceph-mon[76537]: pgmap v2410: 321 pgs: 321 active+clean; 167 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 694 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 13 08:43:07 compute-0 nova_compute[248510]: 2025-12-13 08:43:07.248 248514 DEBUG nova.compute.manager [req-9d29abce-9243-43e8-8fe0-62747ec47383 req-7dcd9ac5-54ed-4196-90e5-700e5702c18f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:43:07 compute-0 nova_compute[248510]: 2025-12-13 08:43:07.249 248514 DEBUG oslo_concurrency.lockutils [req-9d29abce-9243-43e8-8fe0-62747ec47383 req-7dcd9ac5-54ed-4196-90e5-700e5702c18f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:07 compute-0 nova_compute[248510]: 2025-12-13 08:43:07.249 248514 DEBUG oslo_concurrency.lockutils [req-9d29abce-9243-43e8-8fe0-62747ec47383 req-7dcd9ac5-54ed-4196-90e5-700e5702c18f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:07 compute-0 nova_compute[248510]: 2025-12-13 08:43:07.249 248514 DEBUG oslo_concurrency.lockutils [req-9d29abce-9243-43e8-8fe0-62747ec47383 req-7dcd9ac5-54ed-4196-90e5-700e5702c18f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:07 compute-0 nova_compute[248510]: 2025-12-13 08:43:07.250 248514 DEBUG nova.compute.manager [req-9d29abce-9243-43e8-8fe0-62747ec47383 req-7dcd9ac5-54ed-4196-90e5-700e5702c18f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] No waiting events found dispatching network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:43:07 compute-0 nova_compute[248510]: 2025-12-13 08:43:07.250 248514 WARNING nova.compute.manager [req-9d29abce-9243-43e8-8fe0-62747ec47383 req-7dcd9ac5-54ed-4196-90e5-700e5702c18f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received unexpected event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad for instance with vm_state active and task_state None.
Dec 13 08:43:07 compute-0 nova_compute[248510]: 2025-12-13 08:43:07.519 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 167 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 22 op/s
Dec 13 08:43:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:43:08 compute-0 ceph-mon[76537]: pgmap v2411: 321 pgs: 321 active+clean; 167 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 22 op/s
Dec 13 08:43:08 compute-0 podman[345287]: 2025-12-13 08:43:08.980612098 +0000 UTC m=+0.068868657 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:43:09 compute-0 podman[345286]: 2025-12-13 08:43:09.014059785 +0000 UTC m=+0.102213292 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:43:09 compute-0 podman[345288]: 2025-12-13 08:43:09.019905138 +0000 UTC m=+0.096116482 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 13 08:43:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:43:09
Dec 13 08:43:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:43:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:43:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'backups', 'volumes', '.mgr']
Dec 13 08:43:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:43:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 167 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:43:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:43:10 compute-0 ceph-mon[76537]: pgmap v2412: 321 pgs: 321 active+clean; 167 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Dec 13 08:43:11 compute-0 nova_compute[248510]: 2025-12-13 08:43:11.318 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 167 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.480 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.480 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.481 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.481 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.481 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.482 248514 INFO nova.compute.manager [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Terminating instance
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.483 248514 DEBUG nova.compute.manager [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:43:12 compute-0 kernel: tapab92e3fe-61 (unregistering): left promiscuous mode
Dec 13 08:43:12 compute-0 NetworkManager[50376]: <info>  [1765615392.5104] device (tapab92e3fe-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.518 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:12 compute-0 ovn_controller[148476]: 2025-12-13T08:43:12Z|00970|binding|INFO|Releasing lport ab92e3fe-6177-4ba7-962b-08377c7056ad from this chassis (sb_readonly=0)
Dec 13 08:43:12 compute-0 ovn_controller[148476]: 2025-12-13T08:43:12Z|00971|binding|INFO|Setting lport ab92e3fe-6177-4ba7-962b-08377c7056ad down in Southbound
Dec 13 08:43:12 compute-0 ovn_controller[148476]: 2025-12-13T08:43:12Z|00972|binding|INFO|Removing iface tapab92e3fe-61 ovn-installed in OVS
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.520 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.535 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:4e:c6 10.100.0.6'], port_security=['fa:16:3e:aa:4e:c6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '47b7be19-6608-45b4-9f0e-74393969e3f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ab92e3fe-6177-4ba7-962b-08377c7056ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.536 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ab92e3fe-6177-4ba7-962b-08377c7056ad in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.537 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.538 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.555 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f7684574-0b4d-4b89-80a6-045309fcbc50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:12 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000065.scope: Deactivated successfully.
Dec 13 08:43:12 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000065.scope: Consumed 8.041s CPU time.
Dec 13 08:43:12 compute-0 systemd-machined[210538]: Machine qemu-125-instance-00000065 terminated.
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.583 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[17f4a8be-c7b4-4f83-af55-358809a36ea9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.586 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[258fbaec-e008-4d07-9bb8-dd71a0be1fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.622 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[24aa6173-9295-405b-a6e0-0ba8c047424a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.639 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dad05a1d-0b9e-4050-812c-4a9e28cc29fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345361, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.657 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d044de69-0f40-4aaa-8a1c-1dd458ed13df]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345362, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345362, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.659 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.664 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.665 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.665 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.666 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.666 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.717 248514 INFO nova.virt.libvirt.driver [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Instance destroyed successfully.
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.718 248514 DEBUG nova.objects.instance [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid 47b7be19-6608-45b4-9f0e-74393969e3f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.737 248514 DEBUG nova.virt.libvirt.vif [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:42:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-173101990',display_name='tempest-ServersTestJSON-server-173101990',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-173101990',id=101,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:43:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-l0nko7x7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:43:09Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=47b7be19-6608-45b4-9f0e-74393969e3f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.738 248514 DEBUG nova.network.os_vif_util [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.739 248514 DEBUG nova.network.os_vif_util [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.739 248514 DEBUG os_vif [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.740 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.741 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab92e3fe-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.745 248514 INFO os_vif [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61')
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:43:12 compute-0 ceph-mon[76537]: pgmap v2413: 321 pgs: 321 active+clean; 167 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.959 248514 DEBUG nova.compute.manager [req-58da14c1-71ad-47d8-9631-17b3b7ddf2d8 req-43c80123-da2e-45af-b4ff-87ece7088b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-vif-unplugged-ab92e3fe-6177-4ba7-962b-08377c7056ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.959 248514 DEBUG oslo_concurrency.lockutils [req-58da14c1-71ad-47d8-9631-17b3b7ddf2d8 req-43c80123-da2e-45af-b4ff-87ece7088b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.960 248514 DEBUG oslo_concurrency.lockutils [req-58da14c1-71ad-47d8-9631-17b3b7ddf2d8 req-43c80123-da2e-45af-b4ff-87ece7088b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.960 248514 DEBUG oslo_concurrency.lockutils [req-58da14c1-71ad-47d8-9631-17b3b7ddf2d8 req-43c80123-da2e-45af-b4ff-87ece7088b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.960 248514 DEBUG nova.compute.manager [req-58da14c1-71ad-47d8-9631-17b3b7ddf2d8 req-43c80123-da2e-45af-b4ff-87ece7088b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] No waiting events found dispatching network-vif-unplugged-ab92e3fe-6177-4ba7-962b-08377c7056ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:43:12 compute-0 nova_compute[248510]: 2025-12-13 08:43:12.960 248514 DEBUG nova.compute.manager [req-58da14c1-71ad-47d8-9631-17b3b7ddf2d8 req-43c80123-da2e-45af-b4ff-87ece7088b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-vif-unplugged-ab92e3fe-6177-4ba7-962b-08377c7056ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:43:13 compute-0 nova_compute[248510]: 2025-12-13 08:43:13.043 248514 INFO nova.virt.libvirt.driver [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Deleting instance files /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4_del
Dec 13 08:43:13 compute-0 nova_compute[248510]: 2025-12-13 08:43:13.044 248514 INFO nova.virt.libvirt.driver [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Deletion of /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4_del complete
Dec 13 08:43:13 compute-0 nova_compute[248510]: 2025-12-13 08:43:13.117 248514 INFO nova.compute.manager [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Took 0.63 seconds to destroy the instance on the hypervisor.
Dec 13 08:43:13 compute-0 nova_compute[248510]: 2025-12-13 08:43:13.118 248514 DEBUG oslo.service.loopingcall [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:43:13 compute-0 nova_compute[248510]: 2025-12-13 08:43:13.118 248514 DEBUG nova.compute.manager [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:43:13 compute-0 nova_compute[248510]: 2025-12-13 08:43:13.119 248514 DEBUG nova.network.neutron [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:43:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:43:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 148 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 08:43:14 compute-0 nova_compute[248510]: 2025-12-13 08:43:14.137 248514 DEBUG nova.network.neutron [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:43:14 compute-0 nova_compute[248510]: 2025-12-13 08:43:14.524 248514 INFO nova.compute.manager [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Took 1.41 seconds to deallocate network for instance.
Dec 13 08:43:14 compute-0 nova_compute[248510]: 2025-12-13 08:43:14.612 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:14 compute-0 nova_compute[248510]: 2025-12-13 08:43:14.613 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:14 compute-0 nova_compute[248510]: 2025-12-13 08:43:14.738 248514 DEBUG oslo_concurrency.processutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:15 compute-0 ceph-mon[76537]: pgmap v2414: 321 pgs: 321 active+clean; 148 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 08:43:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:43:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1741052178' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:43:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:43:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1741052178' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.205 248514 DEBUG nova.compute.manager [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.205 248514 DEBUG oslo_concurrency.lockutils [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.206 248514 DEBUG oslo_concurrency.lockutils [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.206 248514 DEBUG oslo_concurrency.lockutils [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.206 248514 DEBUG nova.compute.manager [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] No waiting events found dispatching network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.206 248514 WARNING nova.compute.manager [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received unexpected event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad for instance with vm_state deleted and task_state None.
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.207 248514 DEBUG nova.compute.manager [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-vif-deleted-ab92e3fe-6177-4ba7-962b-08377c7056ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:43:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:43:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301256832' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.337 248514 DEBUG oslo_concurrency.processutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.346 248514 DEBUG nova.compute.provider_tree [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.544 248514 DEBUG nova.scheduler.client.report [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.584 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.639 248514 INFO nova.scheduler.client.report [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance 47b7be19-6608-45b4-9f0e-74393969e3f4
Dec 13 08:43:15 compute-0 nova_compute[248510]: 2025-12-13 08:43:15.714 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 121 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Dec 13 08:43:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1741052178' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:43:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1741052178' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:43:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1301256832' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:43:16 compute-0 nova_compute[248510]: 2025-12-13 08:43:16.320 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:16 compute-0 sudo[345416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:43:16 compute-0 sudo[345416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:43:16 compute-0 sudo[345416]: pam_unix(sudo:session): session closed for user root
Dec 13 08:43:16 compute-0 sudo[345441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:43:16 compute-0 sudo[345441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:43:17 compute-0 ceph-mon[76537]: pgmap v2415: 321 pgs: 321 active+clean; 121 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Dec 13 08:43:17 compute-0 sudo[345441]: pam_unix(sudo:session): session closed for user root
Dec 13 08:43:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:43:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:43:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:43:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:43:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:43:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:43:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:43:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:43:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:43:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:43:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:43:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:43:17 compute-0 sudo[345497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:43:17 compute-0 sudo[345497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:43:17 compute-0 sudo[345497]: pam_unix(sudo:session): session closed for user root
Dec 13 08:43:17 compute-0 sudo[345522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:43:17 compute-0 sudo[345522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:43:17 compute-0 nova_compute[248510]: 2025-12-13 08:43:17.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:17 compute-0 podman[345559]: 2025-12-13 08:43:17.785441929 +0000 UTC m=+0.040665748 container create d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 08:43:17 compute-0 systemd[1]: Started libpod-conmon-d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6.scope.
Dec 13 08:43:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:43:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 121 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 938 B/s wr, 91 op/s
Dec 13 08:43:17 compute-0 podman[345559]: 2025-12-13 08:43:17.766445141 +0000 UTC m=+0.021669010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:43:17 compute-0 podman[345559]: 2025-12-13 08:43:17.866543586 +0000 UTC m=+0.121767405 container init d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:43:17 compute-0 podman[345559]: 2025-12-13 08:43:17.874748871 +0000 UTC m=+0.129972690 container start d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:43:17 compute-0 podman[345559]: 2025-12-13 08:43:17.879057354 +0000 UTC m=+0.134281193 container attach d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 08:43:17 compute-0 angry_goldberg[345575]: 167 167
Dec 13 08:43:17 compute-0 systemd[1]: libpod-d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6.scope: Deactivated successfully.
Dec 13 08:43:17 compute-0 podman[345559]: 2025-12-13 08:43:17.881873128 +0000 UTC m=+0.137096947 container died d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b1b91f45ff5d262d143db26dcd11c3d4447feff050b268ef605cbe282a57332-merged.mount: Deactivated successfully.
Dec 13 08:43:17 compute-0 podman[345559]: 2025-12-13 08:43:17.924990529 +0000 UTC m=+0.180214358 container remove d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:43:17 compute-0 systemd[1]: libpod-conmon-d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6.scope: Deactivated successfully.
Dec 13 08:43:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:43:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:43:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:43:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:43:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:43:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:43:18 compute-0 podman[345599]: 2025-12-13 08:43:18.110617747 +0000 UTC m=+0.046803799 container create 35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_jepsen, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 08:43:18 compute-0 systemd[1]: Started libpod-conmon-35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa.scope.
Dec 13 08:43:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6a6ef9744ece2eb195dc2122cdef789e4daf6dcaaba1749d3c6578af9ad907/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6a6ef9744ece2eb195dc2122cdef789e4daf6dcaaba1749d3c6578af9ad907/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6a6ef9744ece2eb195dc2122cdef789e4daf6dcaaba1749d3c6578af9ad907/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6a6ef9744ece2eb195dc2122cdef789e4daf6dcaaba1749d3c6578af9ad907/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6a6ef9744ece2eb195dc2122cdef789e4daf6dcaaba1749d3c6578af9ad907/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:18 compute-0 podman[345599]: 2025-12-13 08:43:18.180616873 +0000 UTC m=+0.116802925 container init 35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_jepsen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:43:18 compute-0 podman[345599]: 2025-12-13 08:43:18.094396721 +0000 UTC m=+0.030582793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:43:18 compute-0 podman[345599]: 2025-12-13 08:43:18.192975307 +0000 UTC m=+0.129161359 container start 35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_jepsen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:43:18 compute-0 podman[345599]: 2025-12-13 08:43:18.19691915 +0000 UTC m=+0.133105262 container attach 35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_jepsen, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:43:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:43:18 compute-0 beautiful_jepsen[345616]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:43:18 compute-0 beautiful_jepsen[345616]: --> All data devices are unavailable
Dec 13 08:43:18 compute-0 systemd[1]: libpod-35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa.scope: Deactivated successfully.
Dec 13 08:43:18 compute-0 podman[345599]: 2025-12-13 08:43:18.674493705 +0000 UTC m=+0.610679757 container died 35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:43:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f6a6ef9744ece2eb195dc2122cdef789e4daf6dcaaba1749d3c6578af9ad907-merged.mount: Deactivated successfully.
Dec 13 08:43:18 compute-0 podman[345599]: 2025-12-13 08:43:18.744530692 +0000 UTC m=+0.680716744 container remove 35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:43:18 compute-0 systemd[1]: libpod-conmon-35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa.scope: Deactivated successfully.
Dec 13 08:43:18 compute-0 sudo[345522]: pam_unix(sudo:session): session closed for user root
Dec 13 08:43:18 compute-0 sudo[345647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:43:18 compute-0 sudo[345647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:43:18 compute-0 sudo[345647]: pam_unix(sudo:session): session closed for user root
Dec 13 08:43:18 compute-0 sudo[345672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:43:18 compute-0 sudo[345672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:43:19 compute-0 ceph-mon[76537]: pgmap v2416: 321 pgs: 321 active+clean; 121 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 938 B/s wr, 91 op/s
Dec 13 08:43:19 compute-0 podman[345709]: 2025-12-13 08:43:19.185445265 +0000 UTC m=+0.037680269 container create 19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 08:43:19 compute-0 systemd[1]: Started libpod-conmon-19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba.scope.
Dec 13 08:43:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:43:19 compute-0 podman[345709]: 2025-12-13 08:43:19.170689898 +0000 UTC m=+0.022924922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:43:19 compute-0 podman[345709]: 2025-12-13 08:43:19.271847971 +0000 UTC m=+0.124082995 container init 19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:43:19 compute-0 podman[345709]: 2025-12-13 08:43:19.278338731 +0000 UTC m=+0.130573735 container start 19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 08:43:19 compute-0 podman[345709]: 2025-12-13 08:43:19.281250368 +0000 UTC m=+0.133485392 container attach 19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 08:43:19 compute-0 adoring_khayyam[345726]: 167 167
Dec 13 08:43:19 compute-0 systemd[1]: libpod-19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba.scope: Deactivated successfully.
Dec 13 08:43:19 compute-0 podman[345709]: 2025-12-13 08:43:19.28360938 +0000 UTC m=+0.135844384 container died 19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb1c593cbcc464ef6a69426517d46638a03b126b2ddfb2563805e3352e1ec9c8-merged.mount: Deactivated successfully.
Dec 13 08:43:19 compute-0 podman[345709]: 2025-12-13 08:43:19.341500978 +0000 UTC m=+0.193735982 container remove 19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:43:19 compute-0 systemd[1]: libpod-conmon-19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba.scope: Deactivated successfully.
Dec 13 08:43:19 compute-0 podman[345751]: 2025-12-13 08:43:19.524058976 +0000 UTC m=+0.049069858 container create 2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:43:19 compute-0 systemd[1]: Started libpod-conmon-2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522.scope.
Dec 13 08:43:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:43:19 compute-0 podman[345751]: 2025-12-13 08:43:19.503854296 +0000 UTC m=+0.028865188 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ffb0c44dda29d499c64dc05134aa80b3e606f45d1e4a017cc7147cff8fea8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ffb0c44dda29d499c64dc05134aa80b3e606f45d1e4a017cc7147cff8fea8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ffb0c44dda29d499c64dc05134aa80b3e606f45d1e4a017cc7147cff8fea8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ffb0c44dda29d499c64dc05134aa80b3e606f45d1e4a017cc7147cff8fea8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:19 compute-0 podman[345751]: 2025-12-13 08:43:19.61420584 +0000 UTC m=+0.139216752 container init 2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 08:43:19 compute-0 podman[345751]: 2025-12-13 08:43:19.620514825 +0000 UTC m=+0.145525707 container start 2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 08:43:19 compute-0 podman[345751]: 2025-12-13 08:43:19.632860849 +0000 UTC m=+0.157871731 container attach 2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:43:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 121 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]: {
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:     "0": [
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:         {
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "devices": [
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "/dev/loop3"
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             ],
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_name": "ceph_lv0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_size": "21470642176",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "name": "ceph_lv0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "tags": {
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.cluster_name": "ceph",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.crush_device_class": "",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.encrypted": "0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.objectstore": "bluestore",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.osd_id": "0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.type": "block",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.vdo": "0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.with_tpm": "0"
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             },
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "type": "block",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "vg_name": "ceph_vg0"
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:         }
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:     ],
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:     "1": [
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:         {
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "devices": [
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "/dev/loop4"
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             ],
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_name": "ceph_lv1",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_size": "21470642176",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "name": "ceph_lv1",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "tags": {
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.cluster_name": "ceph",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.crush_device_class": "",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.encrypted": "0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.objectstore": "bluestore",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.osd_id": "1",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.type": "block",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.vdo": "0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.with_tpm": "0"
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             },
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "type": "block",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "vg_name": "ceph_vg1"
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:         }
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:     ],
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:     "2": [
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:         {
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "devices": [
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "/dev/loop5"
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             ],
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_name": "ceph_lv2",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_size": "21470642176",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "name": "ceph_lv2",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "tags": {
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.cluster_name": "ceph",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.crush_device_class": "",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.encrypted": "0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.objectstore": "bluestore",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.osd_id": "2",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.type": "block",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.vdo": "0",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:                 "ceph.with_tpm": "0"
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             },
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "type": "block",
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:             "vg_name": "ceph_vg2"
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:         }
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]:     ]
Dec 13 08:43:19 compute-0 vigorous_cartwright[345768]: }
Dec 13 08:43:19 compute-0 systemd[1]: libpod-2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522.scope: Deactivated successfully.
Dec 13 08:43:19 compute-0 podman[345751]: 2025-12-13 08:43:19.952509951 +0000 UTC m=+0.477520833 container died 2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6ffb0c44dda29d499c64dc05134aa80b3e606f45d1e4a017cc7147cff8fea8b-merged.mount: Deactivated successfully.
Dec 13 08:43:20 compute-0 podman[345751]: 2025-12-13 08:43:20.00126984 +0000 UTC m=+0.526280722 container remove 2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:43:20 compute-0 systemd[1]: libpod-conmon-2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522.scope: Deactivated successfully.
Dec 13 08:43:20 compute-0 sudo[345672]: pam_unix(sudo:session): session closed for user root
Dec 13 08:43:20 compute-0 sudo[345788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:43:20 compute-0 sudo[345788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:43:20 compute-0 sudo[345788]: pam_unix(sudo:session): session closed for user root
Dec 13 08:43:20 compute-0 sudo[345813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:43:20 compute-0 sudo[345813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:43:20 compute-0 podman[345850]: 2025-12-13 08:43:20.470867315 +0000 UTC m=+0.044048406 container create 22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ganguly, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 08:43:20 compute-0 systemd[1]: Started libpod-conmon-22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82.scope.
Dec 13 08:43:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:43:20 compute-0 podman[345850]: 2025-12-13 08:43:20.449155936 +0000 UTC m=+0.022337047 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:43:20 compute-0 podman[345850]: 2025-12-13 08:43:20.552145387 +0000 UTC m=+0.125326488 container init 22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ganguly, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:43:20 compute-0 podman[345850]: 2025-12-13 08:43:20.561475412 +0000 UTC m=+0.134656503 container start 22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 08:43:20 compute-0 podman[345850]: 2025-12-13 08:43:20.565769014 +0000 UTC m=+0.138950135 container attach 22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ganguly, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 08:43:20 compute-0 keen_ganguly[345866]: 167 167
Dec 13 08:43:20 compute-0 systemd[1]: libpod-22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82.scope: Deactivated successfully.
Dec 13 08:43:20 compute-0 podman[345850]: 2025-12-13 08:43:20.568693911 +0000 UTC m=+0.141875032 container died 22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ganguly, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 08:43:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9c75ac06866a93e10ee20456aab578ac31522c8c17efd1b6a859fd07373c3ec-merged.mount: Deactivated successfully.
Dec 13 08:43:20 compute-0 podman[345850]: 2025-12-13 08:43:20.606370599 +0000 UTC m=+0.179551690 container remove 22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ganguly, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 08:43:20 compute-0 systemd[1]: libpod-conmon-22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82.scope: Deactivated successfully.
Dec 13 08:43:20 compute-0 podman[345890]: 2025-12-13 08:43:20.77762167 +0000 UTC m=+0.038221523 container create d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 08:43:20 compute-0 systemd[1]: Started libpod-conmon-d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b.scope.
Dec 13 08:43:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e3783fc2f21d7788bd2408319e3f4db0dc5abfd155717eb9b871a171dd2bbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e3783fc2f21d7788bd2408319e3f4db0dc5abfd155717eb9b871a171dd2bbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e3783fc2f21d7788bd2408319e3f4db0dc5abfd155717eb9b871a171dd2bbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e3783fc2f21d7788bd2408319e3f4db0dc5abfd155717eb9b871a171dd2bbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:43:20 compute-0 podman[345890]: 2025-12-13 08:43:20.761362564 +0000 UTC m=+0.021962437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:43:20 compute-0 podman[345890]: 2025-12-13 08:43:20.878449215 +0000 UTC m=+0.139049098 container init d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:43:20 compute-0 podman[345890]: 2025-12-13 08:43:20.886926667 +0000 UTC m=+0.147526550 container start d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 08:43:20 compute-0 podman[345890]: 2025-12-13 08:43:20.890459209 +0000 UTC m=+0.151059062 container attach d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:43:21 compute-0 ceph-mon[76537]: pgmap v2417: 321 pgs: 321 active+clean; 121 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007703559523006975 of space, bias 1.0, pg target 0.23110678569020926 quantized to 32 (current 32)
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006676265921235325 of space, bias 1.0, pg target 0.20028797763705977 quantized to 32 (current 32)
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.905117584623971e-07 of space, bias 4.0, pg target 0.0007086141101548765 quantized to 16 (current 32)
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:43:21 compute-0 nova_compute[248510]: 2025-12-13 08:43:21.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:21 compute-0 nova_compute[248510]: 2025-12-13 08:43:21.411 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:21 compute-0 nova_compute[248510]: 2025-12-13 08:43:21.411 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:21 compute-0 nova_compute[248510]: 2025-12-13 08:43:21.445 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:43:21 compute-0 nova_compute[248510]: 2025-12-13 08:43:21.536 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:21 compute-0 nova_compute[248510]: 2025-12-13 08:43:21.536 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:21 compute-0 nova_compute[248510]: 2025-12-13 08:43:21.543 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:43:21 compute-0 nova_compute[248510]: 2025-12-13 08:43:21.544 248514 INFO nova.compute.claims [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:43:21 compute-0 lvm[345984]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:43:21 compute-0 lvm[345985]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:43:21 compute-0 lvm[345985]: VG ceph_vg1 finished
Dec 13 08:43:21 compute-0 lvm[345984]: VG ceph_vg0 finished
Dec 13 08:43:21 compute-0 lvm[345987]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:43:21 compute-0 lvm[345987]: VG ceph_vg2 finished
Dec 13 08:43:21 compute-0 nova_compute[248510]: 2025-12-13 08:43:21.752 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:21 compute-0 heuristic_williamson[345906]: {}
Dec 13 08:43:21 compute-0 systemd[1]: libpod-d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b.scope: Deactivated successfully.
Dec 13 08:43:21 compute-0 systemd[1]: libpod-d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b.scope: Consumed 1.461s CPU time.
Dec 13 08:43:21 compute-0 podman[345890]: 2025-12-13 08:43:21.82427664 +0000 UTC m=+1.084876533 container died d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 08:43:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-66e3783fc2f21d7788bd2408319e3f4db0dc5abfd155717eb9b871a171dd2bbf-merged.mount: Deactivated successfully.
Dec 13 08:43:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 121 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 13 08:43:21 compute-0 podman[345890]: 2025-12-13 08:43:21.873811049 +0000 UTC m=+1.134410902 container remove d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 08:43:21 compute-0 systemd[1]: libpod-conmon-d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b.scope: Deactivated successfully.
Dec 13 08:43:21 compute-0 sudo[345813]: pam_unix(sudo:session): session closed for user root
Dec 13 08:43:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:43:21 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:43:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:43:21 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:43:22 compute-0 sudo[346022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:43:22 compute-0 sudo[346022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:43:22 compute-0 sudo[346022]: pam_unix(sudo:session): session closed for user root
Dec 13 08:43:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:43:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040416571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.325 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.333 248514 DEBUG nova.compute.provider_tree [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.357 248514 DEBUG nova.scheduler.client.report [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.405 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.406 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.473 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.473 248514 DEBUG nova.network.neutron [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.655 248514 INFO nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.697 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.820 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.821 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.821 248514 INFO nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Creating image(s)
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.840 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.864 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.885 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.889 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:22 compute-0 ceph-mon[76537]: pgmap v2418: 321 pgs: 321 active+clean; 121 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 13 08:43:22 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:43:22 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:43:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1040416571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.974 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.975 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.976 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:22 compute-0 nova_compute[248510]: 2025-12-13 08:43:22.976 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:23 compute-0 nova_compute[248510]: 2025-12-13 08:43:23.001 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:43:23 compute-0 nova_compute[248510]: 2025-12-13 08:43:23.006 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:23 compute-0 nova_compute[248510]: 2025-12-13 08:43:23.061 248514 DEBUG nova.policy [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:43:23 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Dec 13 08:43:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:43:23 compute-0 nova_compute[248510]: 2025-12-13 08:43:23.612 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:23 compute-0 nova_compute[248510]: 2025-12-13 08:43:23.673 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:43:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 121 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 13 08:43:23 compute-0 nova_compute[248510]: 2025-12-13 08:43:23.897 248514 DEBUG nova.objects.instance [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid d7f248dd-d4a5-4de5-b69a-bf263bfa30ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:43:23 compute-0 nova_compute[248510]: 2025-12-13 08:43:23.917 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:43:23 compute-0 nova_compute[248510]: 2025-12-13 08:43:23.917 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Ensure instance console log exists: /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:43:23 compute-0 nova_compute[248510]: 2025-12-13 08:43:23.918 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:23 compute-0 nova_compute[248510]: 2025-12-13 08:43:23.918 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:23 compute-0 nova_compute[248510]: 2025-12-13 08:43:23.918 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:24 compute-0 nova_compute[248510]: 2025-12-13 08:43:24.232 248514 DEBUG nova.network.neutron [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Successfully created port: 6168919d-f5d8-46ba-a89a-e352f37e674d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:43:24 compute-0 ceph-mon[76537]: pgmap v2419: 321 pgs: 321 active+clean; 121 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 13 08:43:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 156 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Dec 13 08:43:26 compute-0 nova_compute[248510]: 2025-12-13 08:43:26.011 248514 DEBUG nova.network.neutron [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Successfully updated port: 6168919d-f5d8-46ba-a89a-e352f37e674d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:43:26 compute-0 nova_compute[248510]: 2025-12-13 08:43:26.041 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:43:26 compute-0 nova_compute[248510]: 2025-12-13 08:43:26.042 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:43:26 compute-0 nova_compute[248510]: 2025-12-13 08:43:26.042 248514 DEBUG nova.network.neutron [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:43:26 compute-0 nova_compute[248510]: 2025-12-13 08:43:26.255 248514 DEBUG nova.compute.manager [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received event network-changed-6168919d-f5d8-46ba-a89a-e352f37e674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:43:26 compute-0 nova_compute[248510]: 2025-12-13 08:43:26.256 248514 DEBUG nova.compute.manager [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Refreshing instance network info cache due to event network-changed-6168919d-f5d8-46ba-a89a-e352f37e674d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:43:26 compute-0 nova_compute[248510]: 2025-12-13 08:43:26.256 248514 DEBUG oslo_concurrency.lockutils [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:43:26 compute-0 nova_compute[248510]: 2025-12-13 08:43:26.366 248514 DEBUG nova.network.neutron [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:43:26 compute-0 nova_compute[248510]: 2025-12-13 08:43:26.374 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:27 compute-0 ceph-mon[76537]: pgmap v2420: 321 pgs: 321 active+clean; 156 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Dec 13 08:43:27 compute-0 nova_compute[248510]: 2025-12-13 08:43:27.716 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615392.7151818, 47b7be19-6608-45b4-9f0e-74393969e3f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:43:27 compute-0 nova_compute[248510]: 2025-12-13 08:43:27.717 248514 INFO nova.compute.manager [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] VM Stopped (Lifecycle Event)
Dec 13 08:43:27 compute-0 nova_compute[248510]: 2025-12-13 08:43:27.740 248514 DEBUG nova.compute.manager [None req-7faea41c-af8f-422e-a34b-98f70d7fc7c1 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:43:27 compute-0 nova_compute[248510]: 2025-12-13 08:43:27.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 156 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.174 248514 DEBUG nova.network.neutron [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Updating instance_info_cache with network_info: [{"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.213 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.214 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance network_info: |[{"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.215 248514 DEBUG oslo_concurrency.lockutils [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.215 248514 DEBUG nova.network.neutron [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Refreshing network info cache for port 6168919d-f5d8-46ba-a89a-e352f37e674d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.220 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Start _get_guest_xml network_info=[{"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.226 248514 WARNING nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.234 248514 DEBUG nova.virt.libvirt.host [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.234 248514 DEBUG nova.virt.libvirt.host [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.238 248514 DEBUG nova.virt.libvirt.host [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.238 248514 DEBUG nova.virt.libvirt.host [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.239 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.239 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.239 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.240 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.240 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.240 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.240 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.240 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.241 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.241 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.241 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.241 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.244 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:43:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:43:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3767693065' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.805 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.834 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:43:28 compute-0 nova_compute[248510]: 2025-12-13 08:43:28.840 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:29 compute-0 ceph-mon[76537]: pgmap v2421: 321 pgs: 321 active+clean; 156 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Dec 13 08:43:29 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3767693065' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:43:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:43:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761894302' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.493 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.495 248514 DEBUG nova.virt.libvirt.vif [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:43:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-596298568',display_name='tempest-ServersTestJSON-server-596298568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-596298568',id=102,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-gzr370um',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:43:22Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=d7f248dd-d4a5-4de5-b69a-bf263bfa30ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.496 248514 DEBUG nova.network.os_vif_util [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.496 248514 DEBUG nova.network.os_vif_util [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.498 248514 DEBUG nova.objects.instance [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid d7f248dd-d4a5-4de5-b69a-bf263bfa30ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.515 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:43:29 compute-0 nova_compute[248510]:   <uuid>d7f248dd-d4a5-4de5-b69a-bf263bfa30ad</uuid>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   <name>instance-00000066</name>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersTestJSON-server-596298568</nova:name>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:43:28</nova:creationTime>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:43:29 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:43:29 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:43:29 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:43:29 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:43:29 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:43:29 compute-0 nova_compute[248510]:         <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 08:43:29 compute-0 nova_compute[248510]:         <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:43:29 compute-0 nova_compute[248510]:         <nova:port uuid="6168919d-f5d8-46ba-a89a-e352f37e674d">
Dec 13 08:43:29 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <system>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <entry name="serial">d7f248dd-d4a5-4de5-b69a-bf263bfa30ad</entry>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <entry name="uuid">d7f248dd-d4a5-4de5-b69a-bf263bfa30ad</entry>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     </system>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   <os>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   </os>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   <features>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   </features>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk">
Dec 13 08:43:29 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       </source>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:43:29 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk.config">
Dec 13 08:43:29 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       </source>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:43:29 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:24:fb:ce"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <target dev="tap6168919d-f5"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/console.log" append="off"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <video>
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     </video>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:43:29 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:43:29 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:43:29 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:43:29 compute-0 nova_compute[248510]: </domain>
Dec 13 08:43:29 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.517 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Preparing to wait for external event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.518 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.518 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.518 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.519 248514 DEBUG nova.virt.libvirt.vif [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:43:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-596298568',display_name='tempest-ServersTestJSON-server-596298568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-596298568',id=102,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-gzr370um',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:43:22Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=d7f248dd-d4a5-4de5-b69a-bf263bfa30ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.519 248514 DEBUG nova.network.os_vif_util [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.520 248514 DEBUG nova.network.os_vif_util [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.520 248514 DEBUG os_vif [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.521 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.522 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.525 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6168919d-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.526 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6168919d-f5, col_values=(('external_ids', {'iface-id': '6168919d-f5d8-46ba-a89a-e352f37e674d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:fb:ce', 'vm-uuid': 'd7f248dd-d4a5-4de5-b69a-bf263bfa30ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:29 compute-0 NetworkManager[50376]: <info>  [1765615409.5298] manager: (tap6168919d-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.530 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.537 248514 INFO os_vif [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5')
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.604 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.605 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.605 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:24:fb:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.606 248514 INFO nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Using config drive
Dec 13 08:43:29 compute-0 nova_compute[248510]: 2025-12-13 08:43:29.634 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:43:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 08:43:30 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/761894302' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:43:30 compute-0 nova_compute[248510]: 2025-12-13 08:43:30.695 248514 INFO nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Creating config drive at /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/disk.config
Dec 13 08:43:30 compute-0 nova_compute[248510]: 2025-12-13 08:43:30.701 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpahiph2_e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:30 compute-0 nova_compute[248510]: 2025-12-13 08:43:30.856 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpahiph2_e" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:30 compute-0 nova_compute[248510]: 2025-12-13 08:43:30.883 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:43:30 compute-0 nova_compute[248510]: 2025-12-13 08:43:30.888 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/disk.config d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.058 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/disk.config d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.059 248514 INFO nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Deleting local config drive /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/disk.config because it was imported into RBD.
Dec 13 08:43:31 compute-0 kernel: tap6168919d-f5: entered promiscuous mode
Dec 13 08:43:31 compute-0 NetworkManager[50376]: <info>  [1765615411.1109] manager: (tap6168919d-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/405)
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.111 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:31 compute-0 ovn_controller[148476]: 2025-12-13T08:43:31Z|00973|binding|INFO|Claiming lport 6168919d-f5d8-46ba-a89a-e352f37e674d for this chassis.
Dec 13 08:43:31 compute-0 ovn_controller[148476]: 2025-12-13T08:43:31Z|00974|binding|INFO|6168919d-f5d8-46ba-a89a-e352f37e674d: Claiming fa:16:3e:24:fb:ce 10.100.0.13
Dec 13 08:43:31 compute-0 ovn_controller[148476]: 2025-12-13T08:43:31Z|00975|binding|INFO|Setting lport 6168919d-f5d8-46ba-a89a-e352f37e674d ovn-installed in OVS
Dec 13 08:43:31 compute-0 ovn_controller[148476]: 2025-12-13T08:43:31Z|00976|binding|INFO|Setting lport 6168919d-f5d8-46ba-a89a-e352f37e674d up in Southbound
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.127 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.128 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:fb:ce 10.100.0.13'], port_security=['fa:16:3e:24:fb:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7f248dd-d4a5-4de5-b69a-bf263bfa30ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6168919d-f5d8-46ba-a89a-e352f37e674d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.129 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.130 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6168919d-f5d8-46ba-a89a-e352f37e674d in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.131 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:43:31 compute-0 ceph-mon[76537]: pgmap v2422: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 08:43:31 compute-0 systemd-udevd[346350]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:43:31 compute-0 systemd-machined[210538]: New machine qemu-126-instance-00000066.
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.147 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fd89fd-7ddb-4578-9c9e-ddcd0e785eb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:31 compute-0 NetworkManager[50376]: <info>  [1765615411.1512] device (tap6168919d-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:43:31 compute-0 NetworkManager[50376]: <info>  [1765615411.1521] device (tap6168919d-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:43:31 compute-0 systemd[1]: Started Virtual Machine qemu-126-instance-00000066.
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.174 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[db995b64-b3bc-4154-8438-de28adb16484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.177 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7e63e6f5-787e-4855-8843-44edb22c7103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.205 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0c6adc-1e9d-4666-87bf-a4c936339b0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.222 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2d721847-dad0-4afc-802b-c04bdbc658f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346365, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.237 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ff274458-cf18-4d7f-9d70-7b5b24541b97]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346366, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346366, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.239 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.281 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.282 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.283 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.284 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.285 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.286 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.309 248514 DEBUG nova.network.neutron [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Updated VIF entry in instance network info cache for port 6168919d-f5d8-46ba-a89a-e352f37e674d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.310 248514 DEBUG nova.network.neutron [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Updating instance_info_cache with network_info: [{"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.341 248514 DEBUG oslo_concurrency.lockutils [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.376 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.642 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615411.6415884, d7f248dd-d4a5-4de5-b69a-bf263bfa30ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.643 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] VM Started (Lifecycle Event)
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.695 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.702 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615411.6417358, d7f248dd-d4a5-4de5-b69a-bf263bfa30ad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.702 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] VM Paused (Lifecycle Event)
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.762 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.767 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:43:31 compute-0 nova_compute[248510]: 2025-12-13 08:43:31.825 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:43:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:43:32 compute-0 ceph-mon[76537]: pgmap v2423: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:43:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:43:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:43:34 compute-0 nova_compute[248510]: 2025-12-13 08:43:34.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:34 compute-0 ceph-mon[76537]: pgmap v2424: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.357 248514 DEBUG nova.compute.manager [req-a65256fa-463f-4878-a575-487beb434c7a req-f683755a-1110-40d1-879f-2a5843891eec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.358 248514 DEBUG oslo_concurrency.lockutils [req-a65256fa-463f-4878-a575-487beb434c7a req-f683755a-1110-40d1-879f-2a5843891eec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.359 248514 DEBUG oslo_concurrency.lockutils [req-a65256fa-463f-4878-a575-487beb434c7a req-f683755a-1110-40d1-879f-2a5843891eec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.359 248514 DEBUG oslo_concurrency.lockutils [req-a65256fa-463f-4878-a575-487beb434c7a req-f683755a-1110-40d1-879f-2a5843891eec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.359 248514 DEBUG nova.compute.manager [req-a65256fa-463f-4878-a575-487beb434c7a req-f683755a-1110-40d1-879f-2a5843891eec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Processing event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.360 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.363 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615415.363652, d7f248dd-d4a5-4de5-b69a-bf263bfa30ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.364 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] VM Resumed (Lifecycle Event)
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.366 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.369 248514 INFO nova.virt.libvirt.driver [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance spawned successfully.
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.369 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.407 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.407 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.408 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.408 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.408 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.409 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.413 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.416 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.492 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.506 248514 INFO nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Took 12.69 seconds to spawn the instance on the hypervisor.
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.507 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.600 248514 INFO nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Took 14.09 seconds to build instance.
Dec 13 08:43:35 compute-0 nova_compute[248510]: 2025-12-13 08:43:35.643 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 13 08:43:36 compute-0 nova_compute[248510]: 2025-12-13 08:43:36.378 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:37 compute-0 ceph-mon[76537]: pgmap v2425: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 13 08:43:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 454 KiB/s wr, 11 op/s
Dec 13 08:43:38 compute-0 nova_compute[248510]: 2025-12-13 08:43:38.016 248514 DEBUG nova.compute.manager [req-3f22058e-039e-4ae6-a0ed-3310d6151fe1 req-2959f37b-d44e-46f7-a32d-e450942b2f97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:43:38 compute-0 nova_compute[248510]: 2025-12-13 08:43:38.016 248514 DEBUG oslo_concurrency.lockutils [req-3f22058e-039e-4ae6-a0ed-3310d6151fe1 req-2959f37b-d44e-46f7-a32d-e450942b2f97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:38 compute-0 nova_compute[248510]: 2025-12-13 08:43:38.016 248514 DEBUG oslo_concurrency.lockutils [req-3f22058e-039e-4ae6-a0ed-3310d6151fe1 req-2959f37b-d44e-46f7-a32d-e450942b2f97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:38 compute-0 nova_compute[248510]: 2025-12-13 08:43:38.017 248514 DEBUG oslo_concurrency.lockutils [req-3f22058e-039e-4ae6-a0ed-3310d6151fe1 req-2959f37b-d44e-46f7-a32d-e450942b2f97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:38 compute-0 nova_compute[248510]: 2025-12-13 08:43:38.017 248514 DEBUG nova.compute.manager [req-3f22058e-039e-4ae6-a0ed-3310d6151fe1 req-2959f37b-d44e-46f7-a32d-e450942b2f97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] No waiting events found dispatching network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:43:38 compute-0 nova_compute[248510]: 2025-12-13 08:43:38.017 248514 WARNING nova.compute.manager [req-3f22058e-039e-4ae6-a0ed-3310d6151fe1 req-2959f37b-d44e-46f7-a32d-e450942b2f97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received unexpected event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d for instance with vm_state active and task_state None.
Dec 13 08:43:38 compute-0 ceph-mon[76537]: pgmap v2426: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 454 KiB/s wr, 11 op/s
Dec 13 08:43:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:43:39 compute-0 nova_compute[248510]: 2025-12-13 08:43:39.530 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 454 KiB/s wr, 37 op/s
Dec 13 08:43:39 compute-0 podman[346411]: 2025-12-13 08:43:39.97757188 +0000 UTC m=+0.064154174 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 08:43:39 compute-0 podman[346410]: 2025-12-13 08:43:39.986198036 +0000 UTC m=+0.073879639 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:43:40 compute-0 podman[346409]: 2025-12-13 08:43:40.040259614 +0000 UTC m=+0.127934336 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Dec 13 08:43:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:43:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:43:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:43:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:43:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:43:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:43:41 compute-0 ceph-mon[76537]: pgmap v2427: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 454 KiB/s wr, 37 op/s
Dec 13 08:43:41 compute-0 nova_compute[248510]: 2025-12-13 08:43:41.380 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:43:42 compute-0 nova_compute[248510]: 2025-12-13 08:43:42.841 248514 DEBUG oslo_concurrency.lockutils [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:42 compute-0 nova_compute[248510]: 2025-12-13 08:43:42.842 248514 DEBUG oslo_concurrency.lockutils [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:42 compute-0 nova_compute[248510]: 2025-12-13 08:43:42.842 248514 DEBUG nova.compute.manager [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:43:42 compute-0 nova_compute[248510]: 2025-12-13 08:43:42.846 248514 DEBUG nova.compute.manager [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 13 08:43:42 compute-0 nova_compute[248510]: 2025-12-13 08:43:42.847 248514 DEBUG nova.objects.instance [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'flavor' on Instance uuid d7f248dd-d4a5-4de5-b69a-bf263bfa30ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:43:42 compute-0 nova_compute[248510]: 2025-12-13 08:43:42.890 248514 DEBUG nova.virt.libvirt.driver [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:43:43 compute-0 ceph-mon[76537]: pgmap v2428: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:43:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:43:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2429: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:43:44 compute-0 nova_compute[248510]: 2025-12-13 08:43:44.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:45 compute-0 ceph-mon[76537]: pgmap v2429: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:43:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Dec 13 08:43:46 compute-0 ceph-mon[76537]: pgmap v2430: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Dec 13 08:43:46 compute-0 nova_compute[248510]: 2025-12-13 08:43:46.382 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 13 08:43:47 compute-0 ovn_controller[148476]: 2025-12-13T08:43:47Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:fb:ce 10.100.0.13
Dec 13 08:43:47 compute-0 ovn_controller[148476]: 2025-12-13T08:43:47Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:fb:ce 10.100.0.13
Dec 13 08:43:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:43:49 compute-0 ceph-mon[76537]: pgmap v2431: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 13 08:43:49 compute-0 nova_compute[248510]: 2025-12-13 08:43:49.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 181 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 995 KiB/s wr, 108 op/s
Dec 13 08:43:50 compute-0 ceph-mon[76537]: pgmap v2432: 321 pgs: 321 active+clean; 181 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 995 KiB/s wr, 108 op/s
Dec 13 08:43:51 compute-0 nova_compute[248510]: 2025-12-13 08:43:51.384 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:51 compute-0 nova_compute[248510]: 2025-12-13 08:43:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:43:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Dec 13 08:43:52 compute-0 nova_compute[248510]: 2025-12-13 08:43:52.934 248514 DEBUG nova.virt.libvirt.driver [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 08:43:53 compute-0 ceph-mon[76537]: pgmap v2433: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Dec 13 08:43:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 08:43:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:43:54 compute-0 nova_compute[248510]: 2025-12-13 08:43:54.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:54.048 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:43:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:54.050 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:43:54 compute-0 nova_compute[248510]: 2025-12-13 08:43:54.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:54 compute-0 nova_compute[248510]: 2025-12-13 08:43:54.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:43:54 compute-0 ceph-mon[76537]: pgmap v2434: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 08:43:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:55.422 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:55.422 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:55.423 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 08:43:56 compute-0 nova_compute[248510]: 2025-12-13 08:43:56.388 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:56 compute-0 ceph-mon[76537]: pgmap v2435: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 08:43:56 compute-0 kernel: tap6168919d-f5 (unregistering): left promiscuous mode
Dec 13 08:43:56 compute-0 NetworkManager[50376]: <info>  [1765615436.8019] device (tap6168919d-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:43:56 compute-0 ovn_controller[148476]: 2025-12-13T08:43:56Z|00977|binding|INFO|Releasing lport 6168919d-f5d8-46ba-a89a-e352f37e674d from this chassis (sb_readonly=0)
Dec 13 08:43:56 compute-0 nova_compute[248510]: 2025-12-13 08:43:56.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:56 compute-0 ovn_controller[148476]: 2025-12-13T08:43:56Z|00978|binding|INFO|Setting lport 6168919d-f5d8-46ba-a89a-e352f37e674d down in Southbound
Dec 13 08:43:56 compute-0 ovn_controller[148476]: 2025-12-13T08:43:56Z|00979|binding|INFO|Removing iface tap6168919d-f5 ovn-installed in OVS
Dec 13 08:43:56 compute-0 nova_compute[248510]: 2025-12-13 08:43:56.817 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.823 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:fb:ce 10.100.0.13'], port_security=['fa:16:3e:24:fb:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7f248dd-d4a5-4de5-b69a-bf263bfa30ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6168919d-f5d8-46ba-a89a-e352f37e674d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.826 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6168919d-f5d8-46ba-a89a-e352f37e674d in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.829 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 08:43:56 compute-0 nova_compute[248510]: 2025-12-13 08:43:56.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.848 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68aa16b4-6e0a-4799-b995-57fe1859341a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:56 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000066.scope: Deactivated successfully.
Dec 13 08:43:56 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000066.scope: Consumed 12.761s CPU time.
Dec 13 08:43:56 compute-0 systemd-machined[210538]: Machine qemu-126-instance-00000066 terminated.
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.892 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a2050349-d12c-4837-b3ac-bbe573bab088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.896 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[39468f78-86e9-4818-9aa4-5b322f3cc469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.935 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d79e20-f8d7-4ef3-b428-51d1f2162ebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.955 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8092b18b-8b7f-422a-b027-c6c12ca778d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 27, 'rx_bytes': 742, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 27, 'rx_bytes': 742, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346482, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.972 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[63b99c19-9a1b-496d-9a2a-8cbee49e8e4e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346483, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346483, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.974 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:56 compute-0 nova_compute[248510]: 2025-12-13 08:43:56.976 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:56 compute-0 nova_compute[248510]: 2025-12-13 08:43:56.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.981 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.982 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.982 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.983 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.049 248514 INFO nova.virt.libvirt.driver [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance shutdown successfully after 14 seconds.
Dec 13 08:43:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:43:57.052 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.055 248514 INFO nova.virt.libvirt.driver [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance destroyed successfully.
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.057 248514 DEBUG nova.objects.instance [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'numa_topology' on Instance uuid d7f248dd-d4a5-4de5-b69a-bf263bfa30ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.078 248514 DEBUG nova.compute.manager [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.152 248514 DEBUG oslo_concurrency.lockutils [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.288 248514 DEBUG nova.compute.manager [req-9f9f6209-77bd-42e2-8a49-c3a136ac6ed1 req-6894a51f-7aae-4b6e-9922-76ca9ff107f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received event network-vif-unplugged-6168919d-f5d8-46ba-a89a-e352f37e674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.289 248514 DEBUG oslo_concurrency.lockutils [req-9f9f6209-77bd-42e2-8a49-c3a136ac6ed1 req-6894a51f-7aae-4b6e-9922-76ca9ff107f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.289 248514 DEBUG oslo_concurrency.lockutils [req-9f9f6209-77bd-42e2-8a49-c3a136ac6ed1 req-6894a51f-7aae-4b6e-9922-76ca9ff107f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.289 248514 DEBUG oslo_concurrency.lockutils [req-9f9f6209-77bd-42e2-8a49-c3a136ac6ed1 req-6894a51f-7aae-4b6e-9922-76ca9ff107f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.289 248514 DEBUG nova.compute.manager [req-9f9f6209-77bd-42e2-8a49-c3a136ac6ed1 req-6894a51f-7aae-4b6e-9922-76ca9ff107f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] No waiting events found dispatching network-vif-unplugged-6168919d-f5d8-46ba-a89a-e352f37e674d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.290 248514 WARNING nova.compute.manager [req-9f9f6209-77bd-42e2-8a49-c3a136ac6ed1 req-6894a51f-7aae-4b6e-9922-76ca9ff107f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received unexpected event network-vif-unplugged-6168919d-f5d8-46ba-a89a-e352f37e674d for instance with vm_state stopped and task_state None.
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:43:57 compute-0 nova_compute[248510]: 2025-12-13 08:43:57.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:43:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 08:43:58 compute-0 nova_compute[248510]: 2025-12-13 08:43:58.139 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:43:58 compute-0 nova_compute[248510]: 2025-12-13 08:43:58.140 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:43:58 compute-0 nova_compute[248510]: 2025-12-13 08:43:58.140 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:43:58 compute-0 nova_compute[248510]: 2025-12-13 08:43:58.140 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:43:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:43:58 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Dec 13 08:43:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:58.985134) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:43:58 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Dec 13 08:43:58 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615438985167, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 909, "num_deletes": 255, "total_data_size": 1279599, "memory_usage": 1306488, "flush_reason": "Manual Compaction"}
Dec 13 08:43:58 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615439006118, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1257136, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47463, "largest_seqno": 48371, "table_properties": {"data_size": 1252591, "index_size": 2133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9859, "raw_average_key_size": 19, "raw_value_size": 1243515, "raw_average_value_size": 2424, "num_data_blocks": 95, "num_entries": 513, "num_filter_entries": 513, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615359, "oldest_key_time": 1765615359, "file_creation_time": 1765615438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 21051 microseconds, and 4430 cpu microseconds.
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.006176) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1257136 bytes OK
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.006199) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.014576) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.014591) EVENT_LOG_v1 {"time_micros": 1765615439014587, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.014608) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1275165, prev total WAL file size 1275165, number of live WAL files 2.
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.015054) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373630' seq:72057594037927935, type:22 .. '6C6F676D0032303131' seq:0, type:0; will stop at (end)
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1227KB)], [110(7595KB)]
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615439015158, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 9034808, "oldest_snapshot_seqno": -1}
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6834 keys, 8900177 bytes, temperature: kUnknown
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615439084308, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8900177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8856070, "index_size": 25947, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 178993, "raw_average_key_size": 26, "raw_value_size": 8735101, "raw_average_value_size": 1278, "num_data_blocks": 1013, "num_entries": 6834, "num_filter_entries": 6834, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615439, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.084552) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8900177 bytes
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.087880) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.5 rd, 128.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.4 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(14.3) write-amplify(7.1) OK, records in: 7356, records dropped: 522 output_compression: NoCompression
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.087897) EVENT_LOG_v1 {"time_micros": 1765615439087889, "job": 66, "event": "compaction_finished", "compaction_time_micros": 69230, "compaction_time_cpu_micros": 21009, "output_level": 6, "num_output_files": 1, "total_output_size": 8900177, "num_input_records": 7356, "num_output_records": 6834, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615439088314, "job": 66, "event": "table_file_deletion", "file_number": 112}
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615439089742, "job": 66, "event": "table_file_deletion", "file_number": 110}
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.014979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.089772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.089776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.089778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.089779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:43:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.089781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:43:59 compute-0 ceph-mon[76537]: pgmap v2436: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 08:43:59 compute-0 nova_compute[248510]: 2025-12-13 08:43:59.542 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:43:59 compute-0 nova_compute[248510]: 2025-12-13 08:43:59.834 248514 DEBUG nova.compute.manager [req-9b3d6826-6395-4d1e-8894-690fcdb28a50 req-16b60c43-8f40-4d5a-ae4f-d1155dfaa4ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:43:59 compute-0 nova_compute[248510]: 2025-12-13 08:43:59.834 248514 DEBUG oslo_concurrency.lockutils [req-9b3d6826-6395-4d1e-8894-690fcdb28a50 req-16b60c43-8f40-4d5a-ae4f-d1155dfaa4ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:43:59 compute-0 nova_compute[248510]: 2025-12-13 08:43:59.834 248514 DEBUG oslo_concurrency.lockutils [req-9b3d6826-6395-4d1e-8894-690fcdb28a50 req-16b60c43-8f40-4d5a-ae4f-d1155dfaa4ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:43:59 compute-0 nova_compute[248510]: 2025-12-13 08:43:59.834 248514 DEBUG oslo_concurrency.lockutils [req-9b3d6826-6395-4d1e-8894-690fcdb28a50 req-16b60c43-8f40-4d5a-ae4f-d1155dfaa4ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:43:59 compute-0 nova_compute[248510]: 2025-12-13 08:43:59.835 248514 DEBUG nova.compute.manager [req-9b3d6826-6395-4d1e-8894-690fcdb28a50 req-16b60c43-8f40-4d5a-ae4f-d1155dfaa4ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] No waiting events found dispatching network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:43:59 compute-0 nova_compute[248510]: 2025-12-13 08:43:59.835 248514 WARNING nova.compute.manager [req-9b3d6826-6395-4d1e-8894-690fcdb28a50 req-16b60c43-8f40-4d5a-ae4f-d1155dfaa4ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received unexpected event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d for instance with vm_state stopped and task_state None.
Dec 13 08:43:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 08:44:00 compute-0 ceph-mon[76537]: pgmap v2437: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.656 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updating instance_info_cache with network_info: [{"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.686 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.686 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.686 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.854 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.855 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.855 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.855 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.855 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.857 248514 INFO nova.compute.manager [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Terminating instance
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.858 248514 DEBUG nova.compute.manager [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.867 248514 INFO nova.virt.libvirt.driver [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance destroyed successfully.
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.868 248514 DEBUG nova.objects.instance [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid d7f248dd-d4a5-4de5-b69a-bf263bfa30ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:44:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.888 248514 DEBUG nova.virt.libvirt.vif [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:43:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-596298568',display_name='tempest-Íñstáñcé-1719169917',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-596298568',id=102,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:43:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-gzr370um',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner
_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:43:58Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=d7f248dd-d4a5-4de5-b69a-bf263bfa30ad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.889 248514 DEBUG nova.network.os_vif_util [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.890 248514 DEBUG nova.network.os_vif_util [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.890 248514 DEBUG os_vif [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.892 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6168919d-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.893 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:01 compute-0 nova_compute[248510]: 2025-12-13 08:44:01.897 248514 INFO os_vif [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5')
Dec 13 08:44:02 compute-0 nova_compute[248510]: 2025-12-13 08:44:02.177 248514 INFO nova.virt.libvirt.driver [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Deleting instance files /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_del
Dec 13 08:44:02 compute-0 nova_compute[248510]: 2025-12-13 08:44:02.178 248514 INFO nova.virt.libvirt.driver [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Deletion of /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_del complete
Dec 13 08:44:02 compute-0 nova_compute[248510]: 2025-12-13 08:44:02.269 248514 INFO nova.compute.manager [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Took 0.41 seconds to destroy the instance on the hypervisor.
Dec 13 08:44:02 compute-0 nova_compute[248510]: 2025-12-13 08:44:02.270 248514 DEBUG oslo.service.loopingcall [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:44:02 compute-0 nova_compute[248510]: 2025-12-13 08:44:02.270 248514 DEBUG nova.compute.manager [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:44:02 compute-0 nova_compute[248510]: 2025-12-13 08:44:02.270 248514 DEBUG nova.network.neutron [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:44:03 compute-0 ceph-mon[76537]: pgmap v2438: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Dec 13 08:44:03 compute-0 nova_compute[248510]: 2025-12-13 08:44:03.583 248514 DEBUG nova.compute.manager [req-435ba39d-b917-4902-944e-8df29b78d197 req-841e4eb7-4524-4d94-a374-dcdf9e716b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received event network-vif-deleted-6168919d-f5d8-46ba-a89a-e352f37e674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:44:03 compute-0 nova_compute[248510]: 2025-12-13 08:44:03.584 248514 INFO nova.compute.manager [req-435ba39d-b917-4902-944e-8df29b78d197 req-841e4eb7-4524-4d94-a374-dcdf9e716b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Neutron deleted interface 6168919d-f5d8-46ba-a89a-e352f37e674d; detaching it from the instance and deleting it from the info cache
Dec 13 08:44:03 compute-0 nova_compute[248510]: 2025-12-13 08:44:03.585 248514 DEBUG nova.network.neutron [req-435ba39d-b917-4902-944e-8df29b78d197 req-841e4eb7-4524-4d94-a374-dcdf9e716b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:44:03 compute-0 nova_compute[248510]: 2025-12-13 08:44:03.677 248514 DEBUG nova.network.neutron [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:44:03 compute-0 nova_compute[248510]: 2025-12-13 08:44:03.698 248514 INFO nova.compute.manager [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Took 1.43 seconds to deallocate network for instance.
Dec 13 08:44:03 compute-0 nova_compute[248510]: 2025-12-13 08:44:03.701 248514 DEBUG nova.compute.manager [req-435ba39d-b917-4902-944e-8df29b78d197 req-841e4eb7-4524-4d94-a374-dcdf9e716b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Detach interface failed, port_id=6168919d-f5d8-46ba-a89a-e352f37e674d, reason: Instance d7f248dd-d4a5-4de5-b69a-bf263bfa30ad could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 08:44:03 compute-0 nova_compute[248510]: 2025-12-13 08:44:03.759 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:03 compute-0 nova_compute[248510]: 2025-12-13 08:44:03.759 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:03 compute-0 nova_compute[248510]: 2025-12-13 08:44:03.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:44:03 compute-0 nova_compute[248510]: 2025-12-13 08:44:03.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:44:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 178 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 26 KiB/s wr, 29 op/s
Dec 13 08:44:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:44:04 compute-0 nova_compute[248510]: 2025-12-13 08:44:04.060 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:04 compute-0 nova_compute[248510]: 2025-12-13 08:44:04.176 248514 DEBUG oslo_concurrency.processutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:44:04 compute-0 ceph-mon[76537]: pgmap v2439: 321 pgs: 321 active+clean; 178 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 26 KiB/s wr, 29 op/s
Dec 13 08:44:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:44:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592116885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:44:04 compute-0 nova_compute[248510]: 2025-12-13 08:44:04.754 248514 DEBUG oslo_concurrency.processutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:44:04 compute-0 nova_compute[248510]: 2025-12-13 08:44:04.762 248514 DEBUG nova.compute.provider_tree [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:44:04 compute-0 nova_compute[248510]: 2025-12-13 08:44:04.833 248514 DEBUG nova.scheduler.client.report [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:44:04 compute-0 nova_compute[248510]: 2025-12-13 08:44:04.872 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:04 compute-0 nova_compute[248510]: 2025-12-13 08:44:04.875 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:04 compute-0 nova_compute[248510]: 2025-12-13 08:44:04.876 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:04 compute-0 nova_compute[248510]: 2025-12-13 08:44:04.876 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:44:04 compute-0 nova_compute[248510]: 2025-12-13 08:44:04.876 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:44:04 compute-0 nova_compute[248510]: 2025-12-13 08:44:04.969 248514 INFO nova.scheduler.client.report [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance d7f248dd-d4a5-4de5-b69a-bf263bfa30ad
Dec 13 08:44:05 compute-0 nova_compute[248510]: 2025-12-13 08:44:05.094 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:44:05 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1456379895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:44:05 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3592116885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:44:05 compute-0 nova_compute[248510]: 2025-12-13 08:44:05.531 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:44:05 compute-0 nova_compute[248510]: 2025-12-13 08:44:05.682 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:44:05 compute-0 nova_compute[248510]: 2025-12-13 08:44:05.683 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:44:05 compute-0 nova_compute[248510]: 2025-12-13 08:44:05.851 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:44:05 compute-0 nova_compute[248510]: 2025-12-13 08:44:05.853 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3598MB free_disk=59.9078850755468GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:44:05 compute-0 nova_compute[248510]: 2025-12-13 08:44:05.853 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:05 compute-0 nova_compute[248510]: 2025-12-13 08:44:05.853 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 121 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 27 KiB/s wr, 90 op/s
Dec 13 08:44:06 compute-0 nova_compute[248510]: 2025-12-13 08:44:06.042 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:44:06 compute-0 nova_compute[248510]: 2025-12-13 08:44:06.043 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:44:06 compute-0 nova_compute[248510]: 2025-12-13 08:44:06.043 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:44:06 compute-0 nova_compute[248510]: 2025-12-13 08:44:06.151 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:44:06 compute-0 nova_compute[248510]: 2025-12-13 08:44:06.429 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:06 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1456379895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:44:06 compute-0 ceph-mon[76537]: pgmap v2440: 321 pgs: 321 active+clean; 121 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 27 KiB/s wr, 90 op/s
Dec 13 08:44:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:44:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/537074341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:44:06 compute-0 nova_compute[248510]: 2025-12-13 08:44:06.774 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:44:06 compute-0 nova_compute[248510]: 2025-12-13 08:44:06.779 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:44:06 compute-0 nova_compute[248510]: 2025-12-13 08:44:06.881 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:44:06 compute-0 nova_compute[248510]: 2025-12-13 08:44:06.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:07 compute-0 nova_compute[248510]: 2025-12-13 08:44:07.000 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:44:07 compute-0 nova_compute[248510]: 2025-12-13 08:44:07.001 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/537074341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:44:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 121 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 4.5 KiB/s wr, 87 op/s
Dec 13 08:44:08 compute-0 ceph-mon[76537]: pgmap v2441: 321 pgs: 321 active+clean; 121 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 4.5 KiB/s wr, 87 op/s
Dec 13 08:44:08 compute-0 nova_compute[248510]: 2025-12-13 08:44:08.907 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:08 compute-0 nova_compute[248510]: 2025-12-13 08:44:08.907 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:08 compute-0 nova_compute[248510]: 2025-12-13 08:44:08.907 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:08 compute-0 nova_compute[248510]: 2025-12-13 08:44:08.908 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:08 compute-0 nova_compute[248510]: 2025-12-13 08:44:08.908 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:08 compute-0 nova_compute[248510]: 2025-12-13 08:44:08.910 248514 INFO nova.compute.manager [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Terminating instance
Dec 13 08:44:08 compute-0 nova_compute[248510]: 2025-12-13 08:44:08.911 248514 DEBUG nova.compute.manager [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:44:08 compute-0 kernel: tapeacb4de9-7d (unregistering): left promiscuous mode
Dec 13 08:44:08 compute-0 NetworkManager[50376]: <info>  [1765615448.9589] device (tapeacb4de9-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:44:08 compute-0 ovn_controller[148476]: 2025-12-13T08:44:08Z|00980|binding|INFO|Releasing lport eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 from this chassis (sb_readonly=0)
Dec 13 08:44:08 compute-0 ovn_controller[148476]: 2025-12-13T08:44:08Z|00981|binding|INFO|Setting lport eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 down in Southbound
Dec 13 08:44:08 compute-0 nova_compute[248510]: 2025-12-13 08:44:08.964 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:08 compute-0 ovn_controller[148476]: 2025-12-13T08:44:08Z|00982|binding|INFO|Removing iface tapeacb4de9-7d ovn-installed in OVS
Dec 13 08:44:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:08.974 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:83:5f 10.100.0.5'], port_security=['fa:16:3e:b3:83:5f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:44:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:08.975 158419 INFO neutron.agent.ovn.metadata.agent [-] Port eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis
Dec 13 08:44:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:08.976 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f018c93-df47-4a6c-acdb-f508a51f75b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:44:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:08.977 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0c80236e-4ce4-4bbe-831f-1f0ba9827ce6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:08.978 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3 namespace which is not needed anymore
Dec 13 08:44:08 compute-0 nova_compute[248510]: 2025-12-13 08:44:08.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:08.999 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.000 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.000 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:44:09 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Dec 13 08:44:09 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005e.scope: Consumed 20.716s CPU time.
Dec 13 08:44:09 compute-0 systemd-machined[210538]: Machine qemu-116-instance-0000005e terminated.
Dec 13 08:44:09 compute-0 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [NOTICE]   (339943) : haproxy version is 2.8.14-c23fe91
Dec 13 08:44:09 compute-0 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [NOTICE]   (339943) : path to executable is /usr/sbin/haproxy
Dec 13 08:44:09 compute-0 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [WARNING]  (339943) : Exiting Master process...
Dec 13 08:44:09 compute-0 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [ALERT]    (339943) : Current worker (339953) exited with code 143 (Terminated)
Dec 13 08:44:09 compute-0 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [WARNING]  (339943) : All workers exited. Exiting... (0)
Dec 13 08:44:09 compute-0 systemd[1]: libpod-009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6.scope: Deactivated successfully.
Dec 13 08:44:09 compute-0 conmon[339862]: conmon 009ac3a8f481e99b8c11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6.scope/container/memory.events
Dec 13 08:44:09 compute-0 podman[346607]: 2025-12-13 08:44:09.124504942 +0000 UTC m=+0.044675982 container died 009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.145 248514 INFO nova.virt.libvirt.driver [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Instance destroyed successfully.
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.146 248514 DEBUG nova.objects.instance [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6-userdata-shm.mount: Deactivated successfully.
Dec 13 08:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dfb9bdf13c872c9b470ea097dce2795d900c75827e3e75f827321fc99371a11-merged.mount: Deactivated successfully.
Dec 13 08:44:09 compute-0 podman[346607]: 2025-12-13 08:44:09.179345138 +0000 UTC m=+0.099516178 container cleanup 009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:44:09 compute-0 systemd[1]: libpod-conmon-009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6.scope: Deactivated successfully.
Dec 13 08:44:09 compute-0 podman[346645]: 2025-12-13 08:44:09.238493693 +0000 UTC m=+0.038902928 container remove 009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:44:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.244 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eae5aa31-6c44-4bb5-945a-0081381e7e54]: (4, ('Sat Dec 13 08:44:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3 (009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6)\n009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6\nSat Dec 13 08:44:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3 (009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6)\n009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.246 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d57aa912-c973-4a96-b674-d24228eb628d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.246 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:09 compute-0 kernel: tap8f018c93-d0: left promiscuous mode
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.264 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.267 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0af3db03-49fc-4358-b459-22ebc5fec122]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:44:09
Dec 13 08:44:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:44:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:44:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups']
Dec 13 08:44:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:44:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.284 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b46ddfe0-699c-4beb-979f-a46608911ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.285 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c578a285-460a-43a8-b1ce-fff6c03e0387]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.304 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d927264-ba02-4c9f-a2c6-449b92ef85c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782807, 'reachable_time': 35416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346664, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d8f018c93\x2ddf47\x2d4a6c\x2dacdb\x2df508a51f75b3.mount: Deactivated successfully.
Dec 13 08:44:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.309 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:44:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.309 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[bbf2c675-1d2c-4621-b889-efa26da4b401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.431 248514 DEBUG nova.virt.libvirt.vif [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:40:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-694899991',display_name='tempest-₡-694899991',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--694899991',id=94,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:40:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-vc3q6lf7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207',o
wner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:40:51Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.432 248514 DEBUG nova.network.os_vif_util [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.434 248514 DEBUG nova.network.os_vif_util [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.434 248514 DEBUG os_vif [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.436 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.436 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeacb4de9-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.441 248514 INFO os_vif [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d')
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.687 248514 DEBUG nova.compute.manager [req-e59ea5ea-8f4a-4cea-bb46-ba270340763b req-99f5814c-9fb2-4ce2-8991-ed0143dcd13a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-vif-unplugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.688 248514 DEBUG oslo_concurrency.lockutils [req-e59ea5ea-8f4a-4cea-bb46-ba270340763b req-99f5814c-9fb2-4ce2-8991-ed0143dcd13a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.689 248514 DEBUG oslo_concurrency.lockutils [req-e59ea5ea-8f4a-4cea-bb46-ba270340763b req-99f5814c-9fb2-4ce2-8991-ed0143dcd13a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.689 248514 DEBUG oslo_concurrency.lockutils [req-e59ea5ea-8f4a-4cea-bb46-ba270340763b req-99f5814c-9fb2-4ce2-8991-ed0143dcd13a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.689 248514 DEBUG nova.compute.manager [req-e59ea5ea-8f4a-4cea-bb46-ba270340763b req-99f5814c-9fb2-4ce2-8991-ed0143dcd13a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] No waiting events found dispatching network-vif-unplugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.689 248514 DEBUG nova.compute.manager [req-e59ea5ea-8f4a-4cea-bb46-ba270340763b req-99f5814c-9fb2-4ce2-8991-ed0143dcd13a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-vif-unplugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.708 248514 INFO nova.virt.libvirt.driver [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Deleting instance files /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_del
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.709 248514 INFO nova.virt.libvirt.driver [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Deletion of /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_del complete
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.803 248514 INFO nova.compute.manager [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Took 0.89 seconds to destroy the instance on the hypervisor.
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.803 248514 DEBUG oslo.service.loopingcall [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.804 248514 DEBUG nova.compute.manager [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:44:09 compute-0 nova_compute[248510]: 2025-12-13 08:44:09.804 248514 DEBUG nova.network.neutron [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:44:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2442: 321 pgs: 321 active+clean; 121 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 4.5 KiB/s wr, 87 op/s
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:44:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:44:10 compute-0 ceph-mon[76537]: pgmap v2442: 321 pgs: 321 active+clean; 121 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 4.5 KiB/s wr, 87 op/s
Dec 13 08:44:10 compute-0 podman[346686]: 2025-12-13 08:44:10.963216083 +0000 UTC m=+0.051151994 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:44:10 compute-0 podman[346685]: 2025-12-13 08:44:10.969623634 +0000 UTC m=+0.062394067 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:44:10 compute-0 podman[346684]: 2025-12-13 08:44:10.989121893 +0000 UTC m=+0.082116221 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.245 248514 DEBUG nova.network.neutron [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.286 248514 INFO nova.compute.manager [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Took 1.48 seconds to deallocate network for instance.
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.353 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.353 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.405 248514 DEBUG oslo_concurrency.processutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 91 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.5 KiB/s wr, 106 op/s
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.895 248514 DEBUG nova.compute.manager [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.896 248514 DEBUG oslo_concurrency.lockutils [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.896 248514 DEBUG oslo_concurrency.lockutils [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.896 248514 DEBUG oslo_concurrency.lockutils [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.897 248514 DEBUG nova.compute.manager [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] No waiting events found dispatching network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.897 248514 WARNING nova.compute.manager [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received unexpected event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 for instance with vm_state deleted and task_state None.
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.897 248514 DEBUG nova.compute.manager [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-vif-deleted-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:44:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:44:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19150681' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:44:11 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/19150681' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.964 248514 DEBUG oslo_concurrency.processutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.971 248514 DEBUG nova.compute.provider_tree [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:44:11 compute-0 nova_compute[248510]: 2025-12-13 08:44:11.990 248514 DEBUG nova.scheduler.client.report [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:44:12 compute-0 nova_compute[248510]: 2025-12-13 08:44:12.021 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:12 compute-0 nova_compute[248510]: 2025-12-13 08:44:12.048 248514 INFO nova.scheduler.client.report [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c
Dec 13 08:44:12 compute-0 nova_compute[248510]: 2025-12-13 08:44:12.049 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615437.0484374, d7f248dd-d4a5-4de5-b69a-bf263bfa30ad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:44:12 compute-0 nova_compute[248510]: 2025-12-13 08:44:12.049 248514 INFO nova.compute.manager [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] VM Stopped (Lifecycle Event)
Dec 13 08:44:12 compute-0 nova_compute[248510]: 2025-12-13 08:44:12.079 248514 DEBUG nova.compute.manager [None req-62f8c865-dfcb-47e6-b9ca-72a786c88328 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:44:12 compute-0 nova_compute[248510]: 2025-12-13 08:44:12.172 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:12 compute-0 ceph-mon[76537]: pgmap v2443: 321 pgs: 321 active+clean; 91 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.5 KiB/s wr, 106 op/s
Dec 13 08:44:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2444: 321 pgs: 321 active+clean; 65 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 2.3 KiB/s wr, 108 op/s
Dec 13 08:44:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:44:14 compute-0 nova_compute[248510]: 2025-12-13 08:44:14.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:14 compute-0 nova_compute[248510]: 2025-12-13 08:44:14.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:44:14 compute-0 ceph-mon[76537]: pgmap v2444: 321 pgs: 321 active+clean; 65 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 2.3 KiB/s wr, 108 op/s
Dec 13 08:44:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:44:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/267277573' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:44:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:44:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/267277573' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:44:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 KiB/s wr, 88 op/s
Dec 13 08:44:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/267277573' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:44:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/267277573' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:44:16 compute-0 nova_compute[248510]: 2025-12-13 08:44:16.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:16 compute-0 ceph-mon[76537]: pgmap v2445: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 KiB/s wr, 88 op/s
Dec 13 08:44:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2446: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:44:18 compute-0 nova_compute[248510]: 2025-12-13 08:44:18.391 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:44:19 compute-0 ceph-mon[76537]: pgmap v2446: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:44:19 compute-0 nova_compute[248510]: 2025-12-13 08:44:19.440 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:44:21 compute-0 ceph-mon[76537]: pgmap v2447: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.2116032574507006e-05 of space, bias 1.0, pg target 0.0036348097723521017 quantized to 32 (current 32)
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006676247291145143 of space, bias 1.0, pg target 0.2002874187343543 quantized to 32 (current 32)
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.905117584623971e-07 of space, bias 4.0, pg target 0.0007086141101548765 quantized to 16 (current 32)
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:44:21 compute-0 nova_compute[248510]: 2025-12-13 08:44:21.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:21 compute-0 nova_compute[248510]: 2025-12-13 08:44:21.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:44:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:44:22 compute-0 sudo[346766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:44:22 compute-0 sudo[346766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:44:22 compute-0 sudo[346766]: pam_unix(sudo:session): session closed for user root
Dec 13 08:44:22 compute-0 sudo[346791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:44:22 compute-0 sudo[346791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:44:22 compute-0 sudo[346791]: pam_unix(sudo:session): session closed for user root
Dec 13 08:44:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:44:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:44:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:44:22 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:44:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:44:22 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:44:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:44:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:44:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:44:22 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:44:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:44:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:44:22 compute-0 sudo[346847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:44:22 compute-0 sudo[346847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:44:22 compute-0 sudo[346847]: pam_unix(sudo:session): session closed for user root
Dec 13 08:44:22 compute-0 sudo[346872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:44:22 compute-0 sudo[346872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:44:23 compute-0 ceph-mon[76537]: pgmap v2448: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:44:23 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:44:23 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:44:23 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:44:23 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:44:23 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:44:23 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:44:23 compute-0 podman[346910]: 2025-12-13 08:44:23.141960207 +0000 UTC m=+0.049717439 container create 6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 08:44:23 compute-0 systemd[1]: Started libpod-conmon-6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e.scope.
Dec 13 08:44:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:44:23 compute-0 podman[346910]: 2025-12-13 08:44:23.115350099 +0000 UTC m=+0.023107351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:44:23 compute-0 podman[346910]: 2025-12-13 08:44:23.279290173 +0000 UTC m=+0.187047405 container init 6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:44:23 compute-0 podman[346910]: 2025-12-13 08:44:23.292749571 +0000 UTC m=+0.200506803 container start 6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:44:23 compute-0 podman[346910]: 2025-12-13 08:44:23.296647649 +0000 UTC m=+0.204404901 container attach 6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Dec 13 08:44:23 compute-0 priceless_austin[346926]: 167 167
Dec 13 08:44:23 compute-0 systemd[1]: libpod-6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e.scope: Deactivated successfully.
Dec 13 08:44:23 compute-0 podman[346910]: 2025-12-13 08:44:23.299839929 +0000 UTC m=+0.207597181 container died 6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:44:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Dec 13 08:44:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:44:24 compute-0 nova_compute[248510]: 2025-12-13 08:44:24.142 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615449.139949, 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:44:24 compute-0 nova_compute[248510]: 2025-12-13 08:44:24.144 248514 INFO nova.compute.manager [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] VM Stopped (Lifecycle Event)
Dec 13 08:44:24 compute-0 nova_compute[248510]: 2025-12-13 08:44:24.167 248514 DEBUG nova.compute.manager [None req-dba7eb62-7c58-4601-9903-7c677bcc5f1b - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:44:24 compute-0 nova_compute[248510]: 2025-12-13 08:44:24.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c6cbc5f7fb3943d6989285ba9a964270fe01325c40adafe607816445aa9c3b0-merged.mount: Deactivated successfully.
Dec 13 08:44:24 compute-0 podman[346910]: 2025-12-13 08:44:24.508352485 +0000 UTC m=+1.416109737 container remove 6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:44:24 compute-0 systemd[1]: libpod-conmon-6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e.scope: Deactivated successfully.
Dec 13 08:44:24 compute-0 ceph-mon[76537]: pgmap v2449: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Dec 13 08:44:24 compute-0 podman[346951]: 2025-12-13 08:44:24.673231393 +0000 UTC m=+0.045897003 container create 7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_moser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:44:24 compute-0 systemd[1]: Started libpod-conmon-7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64.scope.
Dec 13 08:44:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1bbc2ad28a3f82bb48ed2f6cf30572a4a2b44fcb57b754a410400ac161e2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1bbc2ad28a3f82bb48ed2f6cf30572a4a2b44fcb57b754a410400ac161e2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1bbc2ad28a3f82bb48ed2f6cf30572a4a2b44fcb57b754a410400ac161e2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1bbc2ad28a3f82bb48ed2f6cf30572a4a2b44fcb57b754a410400ac161e2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1bbc2ad28a3f82bb48ed2f6cf30572a4a2b44fcb57b754a410400ac161e2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:24 compute-0 podman[346951]: 2025-12-13 08:44:24.656313239 +0000 UTC m=+0.028978879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:44:24 compute-0 podman[346951]: 2025-12-13 08:44:24.754610525 +0000 UTC m=+0.127276155 container init 7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 08:44:24 compute-0 podman[346951]: 2025-12-13 08:44:24.762904664 +0000 UTC m=+0.135570274 container start 7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_moser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 08:44:24 compute-0 podman[346951]: 2025-12-13 08:44:24.766952955 +0000 UTC m=+0.139618595 container attach 7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_moser, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:44:25 compute-0 intelligent_moser[346969]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:44:25 compute-0 intelligent_moser[346969]: --> All data devices are unavailable
Dec 13 08:44:25 compute-0 systemd[1]: libpod-7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64.scope: Deactivated successfully.
Dec 13 08:44:25 compute-0 podman[346951]: 2025-12-13 08:44:25.229141474 +0000 UTC m=+0.601807104 container died 7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_moser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:44:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eb1bbc2ad28a3f82bb48ed2f6cf30572a4a2b44fcb57b754a410400ac161e2e-merged.mount: Deactivated successfully.
Dec 13 08:44:25 compute-0 podman[346951]: 2025-12-13 08:44:25.276098842 +0000 UTC m=+0.648764442 container remove 7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:44:25 compute-0 systemd[1]: libpod-conmon-7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64.scope: Deactivated successfully.
Dec 13 08:44:25 compute-0 sudo[346872]: pam_unix(sudo:session): session closed for user root
Dec 13 08:44:25 compute-0 sudo[346999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:44:25 compute-0 sudo[346999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:44:25 compute-0 sudo[346999]: pam_unix(sudo:session): session closed for user root
Dec 13 08:44:25 compute-0 sudo[347024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:44:25 compute-0 sudo[347024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:44:25 compute-0 podman[347060]: 2025-12-13 08:44:25.769404952 +0000 UTC m=+0.075132337 container create 79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shtern, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 08:44:25 compute-0 systemd[1]: Started libpod-conmon-79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00.scope.
Dec 13 08:44:25 compute-0 podman[347060]: 2025-12-13 08:44:25.720601707 +0000 UTC m=+0.026329112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:44:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:44:25 compute-0 podman[347060]: 2025-12-13 08:44:25.856230161 +0000 UTC m=+0.161957576 container init 79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shtern, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:44:25 compute-0 podman[347060]: 2025-12-13 08:44:25.863932324 +0000 UTC m=+0.169659709 container start 79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:44:25 compute-0 quizzical_shtern[347076]: 167 167
Dec 13 08:44:25 compute-0 systemd[1]: libpod-79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00.scope: Deactivated successfully.
Dec 13 08:44:25 compute-0 podman[347060]: 2025-12-13 08:44:25.868673463 +0000 UTC m=+0.174400868 container attach 79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shtern, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 08:44:25 compute-0 podman[347060]: 2025-12-13 08:44:25.86934648 +0000 UTC m=+0.175073865 container died 79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 08:44:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2450: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Dec 13 08:44:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa22edd0862fb2d535e77cf18625f0f7108232ce79b4b679ac392f5251cc2a7d-merged.mount: Deactivated successfully.
Dec 13 08:44:25 compute-0 podman[347060]: 2025-12-13 08:44:25.911386005 +0000 UTC m=+0.217113380 container remove 79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 08:44:25 compute-0 systemd[1]: libpod-conmon-79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00.scope: Deactivated successfully.
Dec 13 08:44:26 compute-0 podman[347098]: 2025-12-13 08:44:26.103397044 +0000 UTC m=+0.046378725 container create d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:44:26 compute-0 systemd[1]: Started libpod-conmon-d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e.scope.
Dec 13 08:44:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc80aff31efd023cf5709add2f58158e0d46b8a2dcca79d6f6e22ddb3b95713/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc80aff31efd023cf5709add2f58158e0d46b8a2dcca79d6f6e22ddb3b95713/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc80aff31efd023cf5709add2f58158e0d46b8a2dcca79d6f6e22ddb3b95713/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc80aff31efd023cf5709add2f58158e0d46b8a2dcca79d6f6e22ddb3b95713/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:26 compute-0 podman[347098]: 2025-12-13 08:44:26.175569355 +0000 UTC m=+0.118551056 container init d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_satoshi, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:44:26 compute-0 podman[347098]: 2025-12-13 08:44:26.084002027 +0000 UTC m=+0.026983728 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:44:26 compute-0 podman[347098]: 2025-12-13 08:44:26.181538685 +0000 UTC m=+0.124520366 container start d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:44:26 compute-0 podman[347098]: 2025-12-13 08:44:26.184980001 +0000 UTC m=+0.127961702 container attach d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]: {
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:     "0": [
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:         {
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "devices": [
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "/dev/loop3"
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             ],
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_name": "ceph_lv0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_size": "21470642176",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "name": "ceph_lv0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "tags": {
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.cluster_name": "ceph",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.crush_device_class": "",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.encrypted": "0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.objectstore": "bluestore",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.osd_id": "0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.type": "block",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.vdo": "0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.with_tpm": "0"
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             },
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "type": "block",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "vg_name": "ceph_vg0"
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:         }
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:     ],
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:     "1": [
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:         {
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "devices": [
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "/dev/loop4"
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             ],
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_name": "ceph_lv1",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_size": "21470642176",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "name": "ceph_lv1",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "tags": {
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.cluster_name": "ceph",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.crush_device_class": "",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.encrypted": "0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.objectstore": "bluestore",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.osd_id": "1",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.type": "block",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.vdo": "0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.with_tpm": "0"
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             },
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "type": "block",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "vg_name": "ceph_vg1"
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:         }
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:     ],
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:     "2": [
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:         {
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "devices": [
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "/dev/loop5"
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             ],
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_name": "ceph_lv2",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_size": "21470642176",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "name": "ceph_lv2",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "tags": {
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.cluster_name": "ceph",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.crush_device_class": "",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.encrypted": "0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.objectstore": "bluestore",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.osd_id": "2",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.type": "block",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.vdo": "0",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:                 "ceph.with_tpm": "0"
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             },
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "type": "block",
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:             "vg_name": "ceph_vg2"
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:         }
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]:     ]
Dec 13 08:44:26 compute-0 eloquent_satoshi[347115]: }
Dec 13 08:44:26 compute-0 systemd[1]: libpod-d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e.scope: Deactivated successfully.
Dec 13 08:44:26 compute-0 podman[347098]: 2025-12-13 08:44:26.527767053 +0000 UTC m=+0.470748744 container died d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:44:26 compute-0 nova_compute[248510]: 2025-12-13 08:44:26.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cc80aff31efd023cf5709add2f58158e0d46b8a2dcca79d6f6e22ddb3b95713-merged.mount: Deactivated successfully.
Dec 13 08:44:26 compute-0 podman[347098]: 2025-12-13 08:44:26.5722859 +0000 UTC m=+0.515267581 container remove d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:44:26 compute-0 systemd[1]: libpod-conmon-d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e.scope: Deactivated successfully.
Dec 13 08:44:26 compute-0 sudo[347024]: pam_unix(sudo:session): session closed for user root
Dec 13 08:44:26 compute-0 sudo[347134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:44:26 compute-0 sudo[347134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:44:26 compute-0 sudo[347134]: pam_unix(sudo:session): session closed for user root
Dec 13 08:44:26 compute-0 sudo[347159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:44:26 compute-0 sudo[347159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:44:26 compute-0 ceph-mon[76537]: pgmap v2450: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Dec 13 08:44:27 compute-0 podman[347195]: 2025-12-13 08:44:27.04765154 +0000 UTC m=+0.023217044 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:44:27 compute-0 podman[347195]: 2025-12-13 08:44:27.366639655 +0000 UTC m=+0.342205139 container create ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 08:44:27 compute-0 systemd[1]: Started libpod-conmon-ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626.scope.
Dec 13 08:44:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:44:27 compute-0 podman[347195]: 2025-12-13 08:44:27.46369343 +0000 UTC m=+0.439258944 container init ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 08:44:27 compute-0 podman[347195]: 2025-12-13 08:44:27.471223079 +0000 UTC m=+0.446788563 container start ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:44:27 compute-0 nice_heyrovsky[347211]: 167 167
Dec 13 08:44:27 compute-0 systemd[1]: libpod-ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626.scope: Deactivated successfully.
Dec 13 08:44:27 compute-0 podman[347195]: 2025-12-13 08:44:27.477831275 +0000 UTC m=+0.453396789 container attach ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:44:27 compute-0 podman[347195]: 2025-12-13 08:44:27.478169883 +0000 UTC m=+0.453735367 container died ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:44:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-984628a64d9e75d499299216c2d9e2ea0007e09439a00f6e303c8a2dd5c6e82b-merged.mount: Deactivated successfully.
Dec 13 08:44:27 compute-0 podman[347195]: 2025-12-13 08:44:27.511469359 +0000 UTC m=+0.487034843 container remove ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Dec 13 08:44:27 compute-0 systemd[1]: libpod-conmon-ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626.scope: Deactivated successfully.
Dec 13 08:44:27 compute-0 podman[347235]: 2025-12-13 08:44:27.675733491 +0000 UTC m=+0.052203531 container create 837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_curie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 08:44:27 compute-0 systemd[1]: Started libpod-conmon-837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8.scope.
Dec 13 08:44:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:44:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0da932d9a8c24972398944ac47a3386d1db142199bfad1f1c560331463db3a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0da932d9a8c24972398944ac47a3386d1db142199bfad1f1c560331463db3a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0da932d9a8c24972398944ac47a3386d1db142199bfad1f1c560331463db3a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:27 compute-0 podman[347235]: 2025-12-13 08:44:27.654591881 +0000 UTC m=+0.031061931 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:44:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0da932d9a8c24972398944ac47a3386d1db142199bfad1f1c560331463db3a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:44:27 compute-0 podman[347235]: 2025-12-13 08:44:27.753314448 +0000 UTC m=+0.129784478 container init 837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 08:44:27 compute-0 podman[347235]: 2025-12-13 08:44:27.760769085 +0000 UTC m=+0.137239125 container start 837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:44:27 compute-0 podman[347235]: 2025-12-13 08:44:27.764281863 +0000 UTC m=+0.140751933 container attach 837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:44:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:28 compute-0 lvm[347329]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:44:28 compute-0 lvm[347331]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:44:28 compute-0 lvm[347329]: VG ceph_vg0 finished
Dec 13 08:44:28 compute-0 lvm[347331]: VG ceph_vg1 finished
Dec 13 08:44:28 compute-0 lvm[347333]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:44:28 compute-0 lvm[347333]: VG ceph_vg2 finished
Dec 13 08:44:28 compute-0 upbeat_curie[347252]: {}
Dec 13 08:44:28 compute-0 systemd[1]: libpod-837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8.scope: Deactivated successfully.
Dec 13 08:44:28 compute-0 podman[347235]: 2025-12-13 08:44:28.572865004 +0000 UTC m=+0.949335074 container died 837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 08:44:28 compute-0 systemd[1]: libpod-837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8.scope: Consumed 1.240s CPU time.
Dec 13 08:44:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0da932d9a8c24972398944ac47a3386d1db142199bfad1f1c560331463db3a1-merged.mount: Deactivated successfully.
Dec 13 08:44:28 compute-0 podman[347235]: 2025-12-13 08:44:28.641780643 +0000 UTC m=+1.018250683 container remove 837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_curie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:44:28 compute-0 systemd[1]: libpod-conmon-837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8.scope: Deactivated successfully.
Dec 13 08:44:28 compute-0 sudo[347159]: pam_unix(sudo:session): session closed for user root
Dec 13 08:44:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:44:28 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:44:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:44:28 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:44:28 compute-0 sudo[347348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:44:28 compute-0 sudo[347348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:44:28 compute-0 sudo[347348]: pam_unix(sudo:session): session closed for user root
Dec 13 08:44:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:44:29 compute-0 ceph-mon[76537]: pgmap v2451: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:29 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:44:29 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:44:29 compute-0 nova_compute[248510]: 2025-12-13 08:44:29.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:30 compute-0 ceph-mon[76537]: pgmap v2452: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:31 compute-0 nova_compute[248510]: 2025-12-13 08:44:31.530 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:32 compute-0 ceph-mon[76537]: pgmap v2453: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:44:34 compute-0 nova_compute[248510]: 2025-12-13 08:44:34.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:34 compute-0 ceph-mon[76537]: pgmap v2454: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:36 compute-0 nova_compute[248510]: 2025-12-13 08:44:36.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:37 compute-0 ceph-mon[76537]: pgmap v2455: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2456: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:44:39 compute-0 ceph-mon[76537]: pgmap v2456: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:39 compute-0 nova_compute[248510]: 2025-12-13 08:44:39.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:44:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:44:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:44:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:44:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:44:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:44:41 compute-0 ceph-mon[76537]: pgmap v2457: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:41 compute-0 nova_compute[248510]: 2025-12-13 08:44:41.579 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:41 compute-0 podman[347375]: 2025-12-13 08:44:41.973427431 +0000 UTC m=+0.060823277 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:44:41 compute-0 podman[347374]: 2025-12-13 08:44:41.977670958 +0000 UTC m=+0.066817578 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 08:44:42 compute-0 podman[347373]: 2025-12-13 08:44:42.003841474 +0000 UTC m=+0.093121627 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:44:43 compute-0 ceph-mon[76537]: pgmap v2458: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:44:44 compute-0 nova_compute[248510]: 2025-12-13 08:44:44.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:45 compute-0 ceph-mon[76537]: pgmap v2459: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:46 compute-0 nova_compute[248510]: 2025-12-13 08:44:46.308 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:46 compute-0 nova_compute[248510]: 2025-12-13 08:44:46.309 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:46 compute-0 nova_compute[248510]: 2025-12-13 08:44:46.340 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:44:46 compute-0 nova_compute[248510]: 2025-12-13 08:44:46.469 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:46 compute-0 nova_compute[248510]: 2025-12-13 08:44:46.470 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:46 compute-0 nova_compute[248510]: 2025-12-13 08:44:46.482 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:44:46 compute-0 nova_compute[248510]: 2025-12-13 08:44:46.482 248514 INFO nova.compute.claims [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:44:46 compute-0 nova_compute[248510]: 2025-12-13 08:44:46.581 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:46 compute-0 nova_compute[248510]: 2025-12-13 08:44:46.711 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:44:47 compute-0 ceph-mon[76537]: pgmap v2460: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:44:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2387109520' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.295 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.301 248514 DEBUG nova.compute.provider_tree [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.322 248514 DEBUG nova.scheduler.client.report [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.449 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.450 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.551 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.551 248514 DEBUG nova.network.neutron [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.596 248514 INFO nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.628 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.731 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.733 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.733 248514 INFO nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Creating image(s)
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.757 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.783 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.805 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.809 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.901 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.902 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.903 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.903 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.927 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.930 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c3fb322f-a9db-4396-b659-2307698e5524_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:44:47 compute-0 nova_compute[248510]: 2025-12-13 08:44:47.968 248514 DEBUG nova.policy [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8948a1b0c26f43129cb50ef6f3872ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:44:48 compute-0 nova_compute[248510]: 2025-12-13 08:44:48.224 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c3fb322f-a9db-4396-b659-2307698e5524_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:44:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2387109520' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:44:48 compute-0 nova_compute[248510]: 2025-12-13 08:44:48.287 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] resizing rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:44:48 compute-0 nova_compute[248510]: 2025-12-13 08:44:48.350 248514 DEBUG nova.objects.instance [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'migration_context' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:44:48 compute-0 nova_compute[248510]: 2025-12-13 08:44:48.410 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:44:48 compute-0 nova_compute[248510]: 2025-12-13 08:44:48.410 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Ensure instance console log exists: /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:44:48 compute-0 nova_compute[248510]: 2025-12-13 08:44:48.411 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:48 compute-0 nova_compute[248510]: 2025-12-13 08:44:48.411 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:48 compute-0 nova_compute[248510]: 2025-12-13 08:44:48.412 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:44:49 compute-0 ceph-mon[76537]: pgmap v2461: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:44:49 compute-0 nova_compute[248510]: 2025-12-13 08:44:49.452 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:49 compute-0 ovn_controller[148476]: 2025-12-13T08:44:49Z|00983|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 13 08:44:49 compute-0 nova_compute[248510]: 2025-12-13 08:44:49.747 248514 DEBUG nova.network.neutron [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Successfully created port: 2d164f50-a56a-4eaf-ad60-84274a0eb413 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:44:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 54 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 643 KiB/s wr, 1 op/s
Dec 13 08:44:50 compute-0 ceph-mon[76537]: pgmap v2462: 321 pgs: 321 active+clean; 54 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 643 KiB/s wr, 1 op/s
Dec 13 08:44:51 compute-0 nova_compute[248510]: 2025-12-13 08:44:51.583 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:51 compute-0 nova_compute[248510]: 2025-12-13 08:44:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:44:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 321 active+clean; 68 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 24 op/s
Dec 13 08:44:52 compute-0 nova_compute[248510]: 2025-12-13 08:44:52.107 248514 DEBUG nova.network.neutron [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Successfully updated port: 2d164f50-a56a-4eaf-ad60-84274a0eb413 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:44:52 compute-0 nova_compute[248510]: 2025-12-13 08:44:52.126 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:44:52 compute-0 nova_compute[248510]: 2025-12-13 08:44:52.127 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:44:52 compute-0 nova_compute[248510]: 2025-12-13 08:44:52.127 248514 DEBUG nova.network.neutron [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:44:52 compute-0 nova_compute[248510]: 2025-12-13 08:44:52.297 248514 DEBUG nova.compute.manager [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-changed-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:44:52 compute-0 nova_compute[248510]: 2025-12-13 08:44:52.298 248514 DEBUG nova.compute.manager [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Refreshing instance network info cache due to event network-changed-2d164f50-a56a-4eaf-ad60-84274a0eb413. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:44:52 compute-0 nova_compute[248510]: 2025-12-13 08:44:52.299 248514 DEBUG oslo_concurrency.lockutils [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:44:52 compute-0 nova_compute[248510]: 2025-12-13 08:44:52.570 248514 DEBUG nova.network.neutron [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:44:53 compute-0 ceph-mon[76537]: pgmap v2463: 321 pgs: 321 active+clean; 68 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 24 op/s
Dec 13 08:44:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:44:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:44:54 compute-0 ceph-mon[76537]: pgmap v2464: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:44:54 compute-0 nova_compute[248510]: 2025-12-13 08:44:54.454 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:54 compute-0 nova_compute[248510]: 2025-12-13 08:44:54.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:44:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:55.423 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:55.423 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:55.423 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.424 248514 DEBUG nova.network.neutron [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.544 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.545 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance network_info: |[{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.545 248514 DEBUG oslo_concurrency.lockutils [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.546 248514 DEBUG nova.network.neutron [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Refreshing network info cache for port 2d164f50-a56a-4eaf-ad60-84274a0eb413 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.549 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Start _get_guest_xml network_info=[{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.553 248514 WARNING nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.560 248514 DEBUG nova.virt.libvirt.host [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.561 248514 DEBUG nova.virt.libvirt.host [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.570 248514 DEBUG nova.virt.libvirt.host [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.571 248514 DEBUG nova.virt.libvirt.host [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.571 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.572 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.572 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.572 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.572 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.572 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.573 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.573 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.573 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.573 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.573 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.574 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:44:55 compute-0 nova_compute[248510]: 2025-12-13 08:44:55.577 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:44:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:44:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:44:56 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3290291857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.147 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.167 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.171 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.584 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:44:56 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683683758' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.710 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.712 248514 DEBUG nova.virt.libvirt.vif [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeT
estJSON-1471623163-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:44:47Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.712 248514 DEBUG nova.network.os_vif_util [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.713 248514 DEBUG nova.network.os_vif_util [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.714 248514 DEBUG nova.objects.instance [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'pci_devices' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.737 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:44:56 compute-0 nova_compute[248510]:   <uuid>c3fb322f-a9db-4396-b659-2307698e5524</uuid>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   <name>instance-00000067</name>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersNegativeTestJSON-server-235457723</nova:name>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:44:55</nova:creationTime>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:44:56 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:44:56 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:44:56 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:44:56 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:44:56 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:44:56 compute-0 nova_compute[248510]:         <nova:user uuid="8948a1b0c26f43129cb50ef6f3872ecd">tempest-ServersNegativeTestJSON-1471623163-project-member</nova:user>
Dec 13 08:44:56 compute-0 nova_compute[248510]:         <nova:project uuid="d2d4d23379cc4b03bbdd72a9134fdd9b">tempest-ServersNegativeTestJSON-1471623163</nova:project>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:44:56 compute-0 nova_compute[248510]:         <nova:port uuid="2d164f50-a56a-4eaf-ad60-84274a0eb413">
Dec 13 08:44:56 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <system>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <entry name="serial">c3fb322f-a9db-4396-b659-2307698e5524</entry>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <entry name="uuid">c3fb322f-a9db-4396-b659-2307698e5524</entry>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     </system>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   <os>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   </os>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   <features>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   </features>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c3fb322f-a9db-4396-b659-2307698e5524_disk">
Dec 13 08:44:56 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       </source>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:44:56 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c3fb322f-a9db-4396-b659-2307698e5524_disk.config">
Dec 13 08:44:56 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       </source>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:44:56 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:e2:39:62"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <target dev="tap2d164f50-a5"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/console.log" append="off"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <video>
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     </video>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:44:56 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:44:56 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:44:56 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:44:56 compute-0 nova_compute[248510]: </domain>
Dec 13 08:44:56 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.738 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Preparing to wait for external event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.739 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.739 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.739 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.740 248514 DEBUG nova.virt.libvirt.vif [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-Server
sNegativeTestJSON-1471623163-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:44:47Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.740 248514 DEBUG nova.network.os_vif_util [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.741 248514 DEBUG nova.network.os_vif_util [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.741 248514 DEBUG os_vif [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.743 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.743 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.746 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.746 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d164f50-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.747 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d164f50-a5, col_values=(('external_ids', {'iface-id': '2d164f50-a56a-4eaf-ad60-84274a0eb413', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:39:62', 'vm-uuid': 'c3fb322f-a9db-4396-b659-2307698e5524'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:44:56 compute-0 NetworkManager[50376]: <info>  [1765615496.7500] manager: (tap2d164f50-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.751 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.755 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.756 248514 INFO os_vif [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5')
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.929 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.930 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.930 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No VIF found with MAC fa:16:3e:e2:39:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.931 248514 INFO nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Using config drive
Dec 13 08:44:56 compute-0 nova_compute[248510]: 2025-12-13 08:44:56.953 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:44:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:57.077 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:44:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:57.078 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:44:57 compute-0 nova_compute[248510]: 2025-12-13 08:44:57.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:57 compute-0 ceph-mon[76537]: pgmap v2465: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:44:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3290291857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:44:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1683683758' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:44:57 compute-0 nova_compute[248510]: 2025-12-13 08:44:57.757 248514 INFO nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Creating config drive at /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config
Dec 13 08:44:57 compute-0 nova_compute[248510]: 2025-12-13 08:44:57.763 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8q0huyow execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:44:57 compute-0 nova_compute[248510]: 2025-12-13 08:44:57.906 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8q0huyow" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:44:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:44:57 compute-0 nova_compute[248510]: 2025-12-13 08:44:57.938 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:44:57 compute-0 nova_compute[248510]: 2025-12-13 08:44:57.943 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config c3fb322f-a9db-4396-b659-2307698e5524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:44:58 compute-0 nova_compute[248510]: 2025-12-13 08:44:58.249 248514 DEBUG nova.network.neutron [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updated VIF entry in instance network info cache for port 2d164f50-a56a-4eaf-ad60-84274a0eb413. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:44:58 compute-0 nova_compute[248510]: 2025-12-13 08:44:58.250 248514 DEBUG nova.network.neutron [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:44:58 compute-0 nova_compute[248510]: 2025-12-13 08:44:58.275 248514 DEBUG oslo_concurrency.lockutils [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:44:58 compute-0 nova_compute[248510]: 2025-12-13 08:44:58.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:44:58 compute-0 nova_compute[248510]: 2025-12-13 08:44:58.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:44:58 compute-0 nova_compute[248510]: 2025-12-13 08:44:58.815 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:44:58 compute-0 ceph-mon[76537]: pgmap v2466: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:44:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:44:59 compute-0 nova_compute[248510]: 2025-12-13 08:44:59.468 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config c3fb322f-a9db-4396-b659-2307698e5524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:44:59 compute-0 nova_compute[248510]: 2025-12-13 08:44:59.469 248514 INFO nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deleting local config drive /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config because it was imported into RBD.
Dec 13 08:44:59 compute-0 kernel: tap2d164f50-a5: entered promiscuous mode
Dec 13 08:44:59 compute-0 ovn_controller[148476]: 2025-12-13T08:44:59Z|00984|binding|INFO|Claiming lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 for this chassis.
Dec 13 08:44:59 compute-0 ovn_controller[148476]: 2025-12-13T08:44:59Z|00985|binding|INFO|2d164f50-a56a-4eaf-ad60-84274a0eb413: Claiming fa:16:3e:e2:39:62 10.100.0.6
Dec 13 08:44:59 compute-0 nova_compute[248510]: 2025-12-13 08:44:59.519 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:59 compute-0 NetworkManager[50376]: <info>  [1765615499.5226] manager: (tap2d164f50-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Dec 13 08:44:59 compute-0 nova_compute[248510]: 2025-12-13 08:44:59.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:59 compute-0 nova_compute[248510]: 2025-12-13 08:44:59.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:59 compute-0 systemd-udevd[347759]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:44:59 compute-0 NetworkManager[50376]: <info>  [1765615499.5592] device (tap2d164f50-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:44:59 compute-0 NetworkManager[50376]: <info>  [1765615499.5599] device (tap2d164f50-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:44:59 compute-0 nova_compute[248510]: 2025-12-13 08:44:59.581 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:59 compute-0 ovn_controller[148476]: 2025-12-13T08:44:59Z|00986|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 ovn-installed in OVS
Dec 13 08:44:59 compute-0 nova_compute[248510]: 2025-12-13 08:44:59.585 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:59 compute-0 systemd-machined[210538]: New machine qemu-127-instance-00000067.
Dec 13 08:44:59 compute-0 ovn_controller[148476]: 2025-12-13T08:44:59Z|00987|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 up in Southbound
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.610 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:39:62 10.100.0.6'], port_security=['fa:16:3e:e2:39:62 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c3fb322f-a9db-4396-b659-2307698e5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2d164f50-a56a-4eaf-ad60-84274a0eb413) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.610 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2d164f50-a56a-4eaf-ad60-84274a0eb413 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b bound to our chassis
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.612 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:44:59 compute-0 systemd[1]: Started Virtual Machine qemu-127-instance-00000067.
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.625 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7144342d-2192-40a2-b11f-7f82b2fe9674]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.626 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ad7f755-f1 in ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.627 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ad7f755-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.627 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[680a7cbb-ca1c-45da-9ea8-dbc03fa1c0f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[22d80d02-685f-4187-bc20-0ffce223257e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.641 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7f809824-bb72-41a4-9d2d-ec42e3607b5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.668 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[93365a6a-5d07-44c1-9e8e-2581d19b8c5b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.697 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d6bcb6-9fa6-4fa1-a624-0507e56203c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 systemd-udevd[347761]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:44:59 compute-0 NetworkManager[50376]: <info>  [1765615499.7049] manager: (tap6ad7f755-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/408)
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.704 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef61ceea-e621-4df0-a091-ffe86c116758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.733 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b35abfb4-6886-46db-b0cb-c9881e5ea76d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.736 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[68ac012c-1129-4ecd-b38f-c410f44bc5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 NetworkManager[50376]: <info>  [1765615499.7612] device (tap6ad7f755-f0): carrier: link connected
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.767 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[db80ee6f-0079-47ad-a2df-c69e3a351168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.785 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab30ab2a-3094-4397-bd0e-328ca637ee1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807697, 'reachable_time': 38471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347795, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.802 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba8c0a2-d034-43f9-9b20-e85b5c406b14]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:35ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807697, 'tstamp': 807697}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347796, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.827 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3db5e29f-5f79-4a0d-a953-395faad21e7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807697, 'reachable_time': 38471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347798, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.860 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53e811c2-5d29-4b0d-869d-04c8d1682c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.917 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4d7b5640-05d1-4694-8686-92efab9b67d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.918 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.919 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.919 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:44:59 compute-0 nova_compute[248510]: 2025-12-13 08:44:59.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:59 compute-0 kernel: tap6ad7f755-f0: entered promiscuous mode
Dec 13 08:44:59 compute-0 NetworkManager[50376]: <info>  [1765615499.9222] manager: (tap6ad7f755-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Dec 13 08:44:59 compute-0 nova_compute[248510]: 2025-12-13 08:44:59.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.924 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:44:59 compute-0 nova_compute[248510]: 2025-12-13 08:44:59.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:59 compute-0 ovn_controller[148476]: 2025-12-13T08:44:59Z|00988|binding|INFO|Releasing lport 683d7da0-6f1e-41a6-9158-6204fb05ee50 from this chassis (sb_readonly=0)
Dec 13 08:44:59 compute-0 nova_compute[248510]: 2025-12-13 08:44:59.938 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.939 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.943 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[63ef8c00-3dc0-489e-a8bf-1ee80161e7b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.943 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:44:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.944 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'env', 'PROCESS_TAG=haproxy-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:45:00 compute-0 nova_compute[248510]: 2025-12-13 08:45:00.090 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615500.0895076, c3fb322f-a9db-4396-b659-2307698e5524 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:45:00 compute-0 nova_compute[248510]: 2025-12-13 08:45:00.091 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Started (Lifecycle Event)
Dec 13 08:45:00 compute-0 nova_compute[248510]: 2025-12-13 08:45:00.124 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:45:00 compute-0 nova_compute[248510]: 2025-12-13 08:45:00.128 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615500.0896235, c3fb322f-a9db-4396-b659-2307698e5524 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:45:00 compute-0 nova_compute[248510]: 2025-12-13 08:45:00.129 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Paused (Lifecycle Event)
Dec 13 08:45:00 compute-0 nova_compute[248510]: 2025-12-13 08:45:00.156 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:45:00 compute-0 nova_compute[248510]: 2025-12-13 08:45:00.159 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:45:00 compute-0 nova_compute[248510]: 2025-12-13 08:45:00.201 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:45:00 compute-0 podman[347871]: 2025-12-13 08:45:00.29151851 +0000 UTC m=+0.027015289 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:45:00 compute-0 podman[347871]: 2025-12-13 08:45:00.631374879 +0000 UTC m=+0.366871588 container create 44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 08:45:00 compute-0 systemd[1]: Started libpod-conmon-44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7.scope.
Dec 13 08:45:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b7ca1c7f01e35473290f2070a95e46c25e0e0fafb8bf13da3280c15aeb90b0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:00 compute-0 podman[347871]: 2025-12-13 08:45:00.743674758 +0000 UTC m=+0.479171467 container init 44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:45:00 compute-0 podman[347871]: 2025-12-13 08:45:00.749388091 +0000 UTC m=+0.484884790 container start 44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:45:00 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [NOTICE]   (347890) : New worker (347892) forked
Dec 13 08:45:00 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [NOTICE]   (347890) : Loading success.
Dec 13 08:45:01 compute-0 ceph-mon[76537]: pgmap v2467: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.459 248514 DEBUG nova.compute.manager [req-72c160c8-009c-4c09-af87-b10eb307afaf req-bfce8667-d032-4511-b164-4a6d4e87a840 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.460 248514 DEBUG oslo_concurrency.lockutils [req-72c160c8-009c-4c09-af87-b10eb307afaf req-bfce8667-d032-4511-b164-4a6d4e87a840 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.460 248514 DEBUG oslo_concurrency.lockutils [req-72c160c8-009c-4c09-af87-b10eb307afaf req-bfce8667-d032-4511-b164-4a6d4e87a840 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.460 248514 DEBUG oslo_concurrency.lockutils [req-72c160c8-009c-4c09-af87-b10eb307afaf req-bfce8667-d032-4511-b164-4a6d4e87a840 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.460 248514 DEBUG nova.compute.manager [req-72c160c8-009c-4c09-af87-b10eb307afaf req-bfce8667-d032-4511-b164-4a6d4e87a840 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Processing event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.461 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.464 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615501.4645479, c3fb322f-a9db-4396-b659-2307698e5524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.464 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Resumed (Lifecycle Event)
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.466 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.471 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance spawned successfully.
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.471 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.509 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.515 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.518 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.519 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.519 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.520 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.520 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.520 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.564 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.626 248514 INFO nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Took 13.89 seconds to spawn the instance on the hypervisor.
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.626 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.797 248514 INFO nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Took 15.38 seconds to build instance.
Dec 13 08:45:01 compute-0 nova_compute[248510]: 2025-12-13 08:45:01.862 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.2 MiB/s wr, 33 op/s
Dec 13 08:45:03 compute-0 ceph-mon[76537]: pgmap v2468: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.2 MiB/s wr, 33 op/s
Dec 13 08:45:03 compute-0 nova_compute[248510]: 2025-12-13 08:45:03.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:45:03 compute-0 nova_compute[248510]: 2025-12-13 08:45:03.863 248514 DEBUG nova.compute.manager [req-057147a7-8944-4c28-869a-433a53c1293c req-a29b63fd-f85d-4b6e-bac8-76a542653078 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:45:03 compute-0 nova_compute[248510]: 2025-12-13 08:45:03.863 248514 DEBUG oslo_concurrency.lockutils [req-057147a7-8944-4c28-869a-433a53c1293c req-a29b63fd-f85d-4b6e-bac8-76a542653078 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:03 compute-0 nova_compute[248510]: 2025-12-13 08:45:03.864 248514 DEBUG oslo_concurrency.lockutils [req-057147a7-8944-4c28-869a-433a53c1293c req-a29b63fd-f85d-4b6e-bac8-76a542653078 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:03 compute-0 nova_compute[248510]: 2025-12-13 08:45:03.864 248514 DEBUG oslo_concurrency.lockutils [req-057147a7-8944-4c28-869a-433a53c1293c req-a29b63fd-f85d-4b6e-bac8-76a542653078 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:03 compute-0 nova_compute[248510]: 2025-12-13 08:45:03.864 248514 DEBUG nova.compute.manager [req-057147a7-8944-4c28-869a-433a53c1293c req-a29b63fd-f85d-4b6e-bac8-76a542653078 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:45:03 compute-0 nova_compute[248510]: 2025-12-13 08:45:03.864 248514 WARNING nova.compute.manager [req-057147a7-8944-4c28-869a-433a53c1293c req-a29b63fd-f85d-4b6e-bac8-76a542653078 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state active and task_state None.
Dec 13 08:45:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 548 KiB/s wr, 31 op/s
Dec 13 08:45:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:45:05 compute-0 ceph-mon[76537]: pgmap v2469: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 548 KiB/s wr, 31 op/s
Dec 13 08:45:05 compute-0 nova_compute[248510]: 2025-12-13 08:45:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:45:05 compute-0 nova_compute[248510]: 2025-12-13 08:45:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:45:05 compute-0 nova_compute[248510]: 2025-12-13 08:45:05.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:05 compute-0 nova_compute[248510]: 2025-12-13 08:45:05.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:05 compute-0 nova_compute[248510]: 2025-12-13 08:45:05.808 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:05 compute-0 nova_compute[248510]: 2025-12-13 08:45:05.808 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:45:05 compute-0 nova_compute[248510]: 2025-12-13 08:45:05.808 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:45:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:45:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:06.081 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:45:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2304269005' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.368 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.450 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.450 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.615 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.616 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3646MB free_disk=59.96665725391358GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.616 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.617 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.747 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c3fb322f-a9db-4396-b659-2307698e5524 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.747 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.747 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.752 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:06 compute-0 nova_compute[248510]: 2025-12-13 08:45:06.823 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:45:07 compute-0 ceph-mon[76537]: pgmap v2470: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:45:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2304269005' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:45:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:45:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/648866817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:45:07 compute-0 nova_compute[248510]: 2025-12-13 08:45:07.435 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:45:07 compute-0 nova_compute[248510]: 2025-12-13 08:45:07.441 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:45:07 compute-0 nova_compute[248510]: 2025-12-13 08:45:07.464 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:45:07 compute-0 nova_compute[248510]: 2025-12-13 08:45:07.496 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:45:07 compute-0 nova_compute[248510]: 2025-12-13 08:45:07.497 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:07 compute-0 nova_compute[248510]: 2025-12-13 08:45:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:45:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:45:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/648866817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:45:09 compute-0 nova_compute[248510]: 2025-12-13 08:45:09.024 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:45:09 compute-0 nova_compute[248510]: 2025-12-13 08:45:09.024 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:45:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:45:09 compute-0 ceph-mon[76537]: pgmap v2471: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:45:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:45:09
Dec 13 08:45:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:45:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:45:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms']
Dec 13 08:45:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:45:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:45:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:45:10 compute-0 nova_compute[248510]: 2025-12-13 08:45:10.930 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:10 compute-0 nova_compute[248510]: 2025-12-13 08:45:10.931 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:10 compute-0 nova_compute[248510]: 2025-12-13 08:45:10.954 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.053 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.054 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.061 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.062 248514 INFO nova.compute.claims [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:45:11 compute-0 ceph-mon[76537]: pgmap v2472: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.226 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.590 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.752 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:45:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/378654992' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.825 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.831 248514 DEBUG nova.compute.provider_tree [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.878 248514 DEBUG nova.scheduler.client.report [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.900 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.901 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:45:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.956 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.957 248514 DEBUG nova.network.neutron [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:45:11 compute-0 nova_compute[248510]: 2025-12-13 08:45:11.982 248514 INFO nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.009 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.132 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.133 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.133 248514 INFO nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Creating image(s)
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.153 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.173 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.191 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.194 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:45:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/378654992' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.263 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.264 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.265 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.265 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.283 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.286 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.578 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.649 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] resizing rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.769 248514 DEBUG nova.objects.instance [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'migration_context' on Instance uuid 55a85a4e-6537-4498-8c7d-5c062cd421e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.802 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.803 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Ensure instance console log exists: /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.803 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.804 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.805 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:12 compute-0 nova_compute[248510]: 2025-12-13 08:45:12.859 248514 DEBUG nova.policy [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8948a1b0c26f43129cb50ef6f3872ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:45:12 compute-0 podman[348135]: 2025-12-13 08:45:12.996146252 +0000 UTC m=+0.071166527 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 13 08:45:13 compute-0 podman[348134]: 2025-12-13 08:45:13.0088039 +0000 UTC m=+0.088513123 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:45:13 compute-0 podman[348136]: 2025-12-13 08:45:13.041312695 +0000 UTC m=+0.102315828 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 08:45:13 compute-0 ceph-mon[76537]: pgmap v2473: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Dec 13 08:45:13 compute-0 ovn_controller[148476]: 2025-12-13T08:45:13Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:39:62 10.100.0.6
Dec 13 08:45:13 compute-0 ovn_controller[148476]: 2025-12-13T08:45:13Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:39:62 10.100.0.6
Dec 13 08:45:13 compute-0 nova_compute[248510]: 2025-12-13 08:45:13.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:45:13 compute-0 nova_compute[248510]: 2025-12-13 08:45:13.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 08:45:13 compute-0 nova_compute[248510]: 2025-12-13 08:45:13.877 248514 DEBUG nova.network.neutron [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Successfully created port: c5e3056c-03cb-4408-8f11-96bd3e735ff6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:45:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 115 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 98 op/s
Dec 13 08:45:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:45:14 compute-0 nova_compute[248510]: 2025-12-13 08:45:14.795 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:45:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:45:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1515492163' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:45:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:45:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1515492163' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:45:15 compute-0 ceph-mon[76537]: pgmap v2474: 321 pgs: 321 active+clean; 115 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 98 op/s
Dec 13 08:45:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1515492163' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:45:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1515492163' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:45:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 165 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.8 MiB/s wr, 122 op/s
Dec 13 08:45:16 compute-0 nova_compute[248510]: 2025-12-13 08:45:16.099 248514 DEBUG nova.network.neutron [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Successfully updated port: c5e3056c-03cb-4408-8f11-96bd3e735ff6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:45:16 compute-0 nova_compute[248510]: 2025-12-13 08:45:16.135 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "refresh_cache-55a85a4e-6537-4498-8c7d-5c062cd421e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:45:16 compute-0 nova_compute[248510]: 2025-12-13 08:45:16.136 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquired lock "refresh_cache-55a85a4e-6537-4498-8c7d-5c062cd421e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:45:16 compute-0 nova_compute[248510]: 2025-12-13 08:45:16.136 248514 DEBUG nova.network.neutron [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:45:16 compute-0 nova_compute[248510]: 2025-12-13 08:45:16.372 248514 DEBUG nova.compute.manager [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-changed-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:45:16 compute-0 nova_compute[248510]: 2025-12-13 08:45:16.373 248514 DEBUG nova.compute.manager [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Refreshing instance network info cache due to event network-changed-c5e3056c-03cb-4408-8f11-96bd3e735ff6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:45:16 compute-0 nova_compute[248510]: 2025-12-13 08:45:16.373 248514 DEBUG oslo_concurrency.lockutils [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-55a85a4e-6537-4498-8c7d-5c062cd421e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:45:16 compute-0 nova_compute[248510]: 2025-12-13 08:45:16.593 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:16 compute-0 nova_compute[248510]: 2025-12-13 08:45:16.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:16 compute-0 nova_compute[248510]: 2025-12-13 08:45:16.952 248514 DEBUG nova.network.neutron [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:45:17 compute-0 ceph-mon[76537]: pgmap v2475: 321 pgs: 321 active+clean; 165 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.8 MiB/s wr, 122 op/s
Dec 13 08:45:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.799 248514 DEBUG nova.network.neutron [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Updating instance_info_cache with network_info: [{"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.846 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Releasing lock "refresh_cache-55a85a4e-6537-4498-8c7d-5c062cd421e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.847 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Instance network_info: |[{"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.847 248514 DEBUG oslo_concurrency.lockutils [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-55a85a4e-6537-4498-8c7d-5c062cd421e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.847 248514 DEBUG nova.network.neutron [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Refreshing network info cache for port c5e3056c-03cb-4408-8f11-96bd3e735ff6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.850 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Start _get_guest_xml network_info=[{"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.855 248514 WARNING nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.860 248514 DEBUG nova.virt.libvirt.host [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.861 248514 DEBUG nova.virt.libvirt.host [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.864 248514 DEBUG nova.virt.libvirt.host [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.864 248514 DEBUG nova.virt.libvirt.host [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.864 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.865 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.865 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.865 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.865 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.866 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.866 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.866 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.866 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.867 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.867 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.867 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:45:18 compute-0 nova_compute[248510]: 2025-12-13 08:45:18.870 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:45:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:45:19 compute-0 ceph-mon[76537]: pgmap v2476: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 13 08:45:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:45:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4288817914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:45:19 compute-0 nova_compute[248510]: 2025-12-13 08:45:19.512 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:45:19 compute-0 nova_compute[248510]: 2025-12-13 08:45:19.532 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:45:19 compute-0 nova_compute[248510]: 2025-12-13 08:45:19.535 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:45:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 13 08:45:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:45:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434092488' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.077 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.079 248514 DEBUG nova.virt.libvirt.vif [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:45:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-981558366',display_name='tempest-ServersNegativeTestJSON-server-981558366',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-981558366',id=104,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-j9mepcx2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:45:12Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=55a85a4e-6537-4498-8c7d-5c062cd421e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.080 248514 DEBUG nova.network.os_vif_util [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.081 248514 DEBUG nova.network.os_vif_util [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.082 248514 DEBUG nova.objects.instance [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'pci_devices' on Instance uuid 55a85a4e-6537-4498-8c7d-5c062cd421e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.117 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:45:20 compute-0 nova_compute[248510]:   <uuid>55a85a4e-6537-4498-8c7d-5c062cd421e2</uuid>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   <name>instance-00000068</name>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersNegativeTestJSON-server-981558366</nova:name>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:45:18</nova:creationTime>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:45:20 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:45:20 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:45:20 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:45:20 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:45:20 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:45:20 compute-0 nova_compute[248510]:         <nova:user uuid="8948a1b0c26f43129cb50ef6f3872ecd">tempest-ServersNegativeTestJSON-1471623163-project-member</nova:user>
Dec 13 08:45:20 compute-0 nova_compute[248510]:         <nova:project uuid="d2d4d23379cc4b03bbdd72a9134fdd9b">tempest-ServersNegativeTestJSON-1471623163</nova:project>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:45:20 compute-0 nova_compute[248510]:         <nova:port uuid="c5e3056c-03cb-4408-8f11-96bd3e735ff6">
Dec 13 08:45:20 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <system>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <entry name="serial">55a85a4e-6537-4498-8c7d-5c062cd421e2</entry>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <entry name="uuid">55a85a4e-6537-4498-8c7d-5c062cd421e2</entry>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     </system>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   <os>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   </os>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   <features>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   </features>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/55a85a4e-6537-4498-8c7d-5c062cd421e2_disk">
Dec 13 08:45:20 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       </source>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:45:20 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/55a85a4e-6537-4498-8c7d-5c062cd421e2_disk.config">
Dec 13 08:45:20 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       </source>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:45:20 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:2b:0e:dd"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <target dev="tapc5e3056c-03"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/console.log" append="off"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <video>
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     </video>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:45:20 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:45:20 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:45:20 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:45:20 compute-0 nova_compute[248510]: </domain>
Dec 13 08:45:20 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.118 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Preparing to wait for external event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.119 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.120 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.120 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.121 248514 DEBUG nova.virt.libvirt.vif [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:45:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-981558366',display_name='tempest-ServersNegativeTestJSON-server-981558366',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-981558366',id=104,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-j9mepcx2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:45:12Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=55a85a4e-6537-4498-8c7d-5c062cd421e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.121 248514 DEBUG nova.network.os_vif_util [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.122 248514 DEBUG nova.network.os_vif_util [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.122 248514 DEBUG os_vif [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.123 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.124 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.124 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.129 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.129 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5e3056c-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.130 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc5e3056c-03, col_values=(('external_ids', {'iface-id': 'c5e3056c-03cb-4408-8f11-96bd3e735ff6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:0e:dd', 'vm-uuid': '55a85a4e-6537-4498-8c7d-5c062cd421e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.131 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:20 compute-0 NetworkManager[50376]: <info>  [1765615520.1327] manager: (tapc5e3056c-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.133 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.138 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.141 248514 INFO os_vif [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03')
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.238 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.239 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.239 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No VIF found with MAC fa:16:3e:2b:0e:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.241 248514 INFO nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Using config drive
Dec 13 08:45:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4288817914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:45:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2434092488' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.270 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.850 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.954 248514 INFO nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Creating config drive at /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/disk.config
Dec 13 08:45:20 compute-0 nova_compute[248510]: 2025-12-13 08:45:20.959 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqmdb9fd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001113967037902661 of space, bias 1.0, pg target 0.3341901113707983 quantized to 32 (current 32)
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006676236113091034 of space, bias 1.0, pg target 0.20028708339273102 quantized to 32 (current 32)
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.90077056358155e-07 of space, bias 4.0, pg target 0.000708092467629786 quantized to 16 (current 32)
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.116 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqmdb9fd" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.142 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.145 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/disk.config 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:45:21 compute-0 ceph-mon[76537]: pgmap v2477: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.276 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/disk.config 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.277 248514 INFO nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Deleting local config drive /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/disk.config because it was imported into RBD.
Dec 13 08:45:21 compute-0 kernel: tapc5e3056c-03: entered promiscuous mode
Dec 13 08:45:21 compute-0 NetworkManager[50376]: <info>  [1765615521.3229] manager: (tapc5e3056c-03): new Tun device (/org/freedesktop/NetworkManager/Devices/411)
Dec 13 08:45:21 compute-0 ovn_controller[148476]: 2025-12-13T08:45:21Z|00989|binding|INFO|Claiming lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 for this chassis.
Dec 13 08:45:21 compute-0 ovn_controller[148476]: 2025-12-13T08:45:21Z|00990|binding|INFO|c5e3056c-03cb-4408-8f11-96bd3e735ff6: Claiming fa:16:3e:2b:0e:dd 10.100.0.3
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.325 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.342 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:0e:dd 10.100.0.3'], port_security=['fa:16:3e:2b:0e:dd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '55a85a4e-6537-4498-8c7d-5c062cd421e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c5e3056c-03cb-4408-8f11-96bd3e735ff6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:45:21 compute-0 ovn_controller[148476]: 2025-12-13T08:45:21Z|00991|binding|INFO|Setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 ovn-installed in OVS
Dec 13 08:45:21 compute-0 ovn_controller[148476]: 2025-12-13T08:45:21Z|00992|binding|INFO|Setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 up in Southbound
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.344 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c5e3056c-03cb-4408-8f11-96bd3e735ff6 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b bound to our chassis
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.347 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.348 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:21 compute-0 systemd-udevd[348331]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:45:21 compute-0 systemd-machined[210538]: New machine qemu-128-instance-00000068.
Dec 13 08:45:21 compute-0 NetworkManager[50376]: <info>  [1765615521.3681] device (tapc5e3056c-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:45:21 compute-0 NetworkManager[50376]: <info>  [1765615521.3687] device (tapc5e3056c-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.370 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5520604c-3fcd-407b-adba-4900ef51eca3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:21 compute-0 systemd[1]: Started Virtual Machine qemu-128-instance-00000068.
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.401 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac0784d-f63e-4921-ae1a-66751240020f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.404 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c75e2202-1273-4439-b97d-15483f991e6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.438 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5fbf31-26eb-47ed-8014-02f4e421e894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.455 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8222d065-9c4a-450c-9a44-526ec4a8da9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807697, 'reachable_time': 38471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348345, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.468 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc258d9b-51d0-4b4e-9cba-51c40064f93c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807710, 'tstamp': 807710}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348346, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807712, 'tstamp': 807712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348346, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.469 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.472 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.472 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.473 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.473 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.735 248514 DEBUG nova.network.neutron [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Updated VIF entry in instance network info cache for port c5e3056c-03cb-4408-8f11-96bd3e735ff6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.736 248514 DEBUG nova.network.neutron [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Updating instance_info_cache with network_info: [{"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.754 248514 DEBUG nova.compute.manager [req-474a3b96-6035-4788-91d3-839a6a5bc83f req-dbbbcf85-43f9-4ce4-a230-33d0a2d1f316 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.755 248514 DEBUG oslo_concurrency.lockutils [req-474a3b96-6035-4788-91d3-839a6a5bc83f req-dbbbcf85-43f9-4ce4-a230-33d0a2d1f316 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.755 248514 DEBUG oslo_concurrency.lockutils [req-474a3b96-6035-4788-91d3-839a6a5bc83f req-dbbbcf85-43f9-4ce4-a230-33d0a2d1f316 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.756 248514 DEBUG oslo_concurrency.lockutils [req-474a3b96-6035-4788-91d3-839a6a5bc83f req-dbbbcf85-43f9-4ce4-a230-33d0a2d1f316 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.756 248514 DEBUG nova.compute.manager [req-474a3b96-6035-4788-91d3-839a6a5bc83f req-dbbbcf85-43f9-4ce4-a230-33d0a2d1f316 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Processing event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.769 248514 DEBUG oslo_concurrency.lockutils [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-55a85a4e-6537-4498-8c7d-5c062cd421e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:45:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.992 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.993 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615521.991312, 55a85a4e-6537-4498-8c7d-5c062cd421e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.993 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] VM Started (Lifecycle Event)
Dec 13 08:45:21 compute-0 nova_compute[248510]: 2025-12-13 08:45:21.997 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.002 248514 INFO nova.virt.libvirt.driver [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Instance spawned successfully.
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.002 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.070 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.078 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.078 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.079 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.080 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.081 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.081 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.086 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.122 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.123 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615521.9917, 55a85a4e-6537-4498-8c7d-5c062cd421e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.123 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] VM Paused (Lifecycle Event)
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.158 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.162 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615521.996112, 55a85a4e-6537-4498-8c7d-5c062cd421e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.162 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] VM Resumed (Lifecycle Event)
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.214 248514 INFO nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Took 10.08 seconds to spawn the instance on the hypervisor.
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.215 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.270 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.273 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.330 248514 INFO nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Took 11.30 seconds to build instance.
Dec 13 08:45:22 compute-0 nova_compute[248510]: 2025-12-13 08:45:22.387 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:23 compute-0 ceph-mon[76537]: pgmap v2478: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 13 08:45:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 832 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Dec 13 08:45:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.152 248514 DEBUG nova.compute.manager [req-cc4ddbc5-f2dc-460c-a2b0-506f38331f01 req-d54e08de-640d-44cb-8d69-f46abba54e88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.153 248514 DEBUG oslo_concurrency.lockutils [req-cc4ddbc5-f2dc-460c-a2b0-506f38331f01 req-d54e08de-640d-44cb-8d69-f46abba54e88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.153 248514 DEBUG oslo_concurrency.lockutils [req-cc4ddbc5-f2dc-460c-a2b0-506f38331f01 req-d54e08de-640d-44cb-8d69-f46abba54e88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.153 248514 DEBUG oslo_concurrency.lockutils [req-cc4ddbc5-f2dc-460c-a2b0-506f38331f01 req-d54e08de-640d-44cb-8d69-f46abba54e88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.154 248514 DEBUG nova.compute.manager [req-cc4ddbc5-f2dc-460c-a2b0-506f38331f01 req-d54e08de-640d-44cb-8d69-f46abba54e88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] No waiting events found dispatching network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.154 248514 WARNING nova.compute.manager [req-cc4ddbc5-f2dc-460c-a2b0-506f38331f01 req-d54e08de-640d-44cb-8d69-f46abba54e88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received unexpected event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 for instance with vm_state active and task_state None.
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.796 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.797 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.797 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.797 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.798 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.799 248514 INFO nova.compute.manager [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Terminating instance
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.800 248514 DEBUG nova.compute.manager [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:45:24 compute-0 kernel: tapc5e3056c-03 (unregistering): left promiscuous mode
Dec 13 08:45:24 compute-0 NetworkManager[50376]: <info>  [1765615524.8551] device (tapc5e3056c-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:45:24 compute-0 ovn_controller[148476]: 2025-12-13T08:45:24Z|00993|binding|INFO|Releasing lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 from this chassis (sb_readonly=0)
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.908 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:24 compute-0 ovn_controller[148476]: 2025-12-13T08:45:24Z|00994|binding|INFO|Setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 down in Southbound
Dec 13 08:45:24 compute-0 ovn_controller[148476]: 2025-12-13T08:45:24Z|00995|binding|INFO|Removing iface tapc5e3056c-03 ovn-installed in OVS
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:24.922 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:0e:dd 10.100.0.3'], port_security=['fa:16:3e:2b:0e:dd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '55a85a4e-6537-4498-8c7d-5c062cd421e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c5e3056c-03cb-4408-8f11-96bd3e735ff6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:45:24 compute-0 nova_compute[248510]: 2025-12-13 08:45:24.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:24.924 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c5e3056c-03cb-4408-8f11-96bd3e735ff6 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b unbound from our chassis
Dec 13 08:45:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:24.925 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:45:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:24.941 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68ca1ebd-b9c7-42bd-845e-25f746dae6a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:24.969 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[84f99f27-d488-464d-842e-8273fc24b62c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:24 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000068.scope: Deactivated successfully.
Dec 13 08:45:24 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000068.scope: Consumed 3.214s CPU time.
Dec 13 08:45:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:24.972 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e31eb2-27fd-402c-89e1-03ea17ebdcf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:24 compute-0 systemd-machined[210538]: Machine qemu-128-instance-00000068 terminated.
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.001 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[344c0436-98b6-49d9-bd1a-8d64d34d71b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 kernel: tapc5e3056c-03: entered promiscuous mode
Dec 13 08:45:25 compute-0 kernel: tapc5e3056c-03 (unregistering): left promiscuous mode
Dec 13 08:45:25 compute-0 NetworkManager[50376]: <info>  [1765615525.0177] manager: (tapc5e3056c-03): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Dec 13 08:45:25 compute-0 ovn_controller[148476]: 2025-12-13T08:45:25Z|00996|binding|INFO|Claiming lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 for this chassis.
Dec 13 08:45:25 compute-0 ovn_controller[148476]: 2025-12-13T08:45:25Z|00997|binding|INFO|c5e3056c-03cb-4408-8f11-96bd3e735ff6: Claiming fa:16:3e:2b:0e:dd 10.100.0.3
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.019 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.021 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e98982b1-5e43-40d0-85d9-bd7aec1f1655]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807697, 'reachable_time': 38471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348402, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.027 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:0e:dd 10.100.0.3'], port_security=['fa:16:3e:2b:0e:dd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '55a85a4e-6537-4498-8c7d-5c062cd421e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c5e3056c-03cb-4408-8f11-96bd3e735ff6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.034 248514 INFO nova.virt.libvirt.driver [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Instance destroyed successfully.
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.035 248514 DEBUG nova.objects.instance [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'resources' on Instance uuid 55a85a4e-6537-4498-8c7d-5c062cd421e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:45:25 compute-0 ovn_controller[148476]: 2025-12-13T08:45:25Z|00998|binding|INFO|Setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 ovn-installed in OVS
Dec 13 08:45:25 compute-0 ovn_controller[148476]: 2025-12-13T08:45:25Z|00999|binding|INFO|Setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 up in Southbound
Dec 13 08:45:25 compute-0 ovn_controller[148476]: 2025-12-13T08:45:25Z|01000|binding|INFO|Releasing lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 from this chassis (sb_readonly=1)
Dec 13 08:45:25 compute-0 ovn_controller[148476]: 2025-12-13T08:45:25Z|01001|if_status|INFO|Dropped 2 log messages in last 386 seconds (most recently, 386 seconds ago) due to excessive rate
Dec 13 08:45:25 compute-0 ovn_controller[148476]: 2025-12-13T08:45:25Z|01002|if_status|INFO|Not setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 down as sb is readonly
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.041 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:25 compute-0 ovn_controller[148476]: 2025-12-13T08:45:25Z|01003|binding|INFO|Removing iface tapc5e3056c-03 ovn-installed in OVS
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.040 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[37993ee2-0d9c-45ad-ad77-9a2f6130b37e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807710, 'tstamp': 807710}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348406, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807712, 'tstamp': 807712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348406, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.043 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.044 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:25 compute-0 ovn_controller[148476]: 2025-12-13T08:45:25Z|01004|binding|INFO|Releasing lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 from this chassis (sb_readonly=0)
Dec 13 08:45:25 compute-0 ovn_controller[148476]: 2025-12-13T08:45:25Z|01005|binding|INFO|Setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 down in Southbound
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.053 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.059 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.059 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.059 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.060 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.059 248514 DEBUG nova.virt.libvirt.vif [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:45:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-981558366',display_name='tempest-ServersNegativeTestJSON-server-981558366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-981558366',id=104,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:45:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-j9mepcx2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:45:22Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=55a85a4e-6537-4498-8c7d-5c062cd421e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.060 248514 DEBUG nova.network.os_vif_util [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.061 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c5e3056c-03cb-4408-8f11-96bd3e735ff6 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b unbound from our chassis
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.060 248514 DEBUG nova.network.os_vif_util [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.061 248514 DEBUG os_vif [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.062 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.062 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.063 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:0e:dd 10.100.0.3'], port_security=['fa:16:3e:2b:0e:dd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '55a85a4e-6537-4498-8c7d-5c062cd421e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c5e3056c-03cb-4408-8f11-96bd3e735ff6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.063 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5e3056c-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.063 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.066 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.068 248514 INFO os_vif [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03')
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.075 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8de863-7fc9-483c-86c2-f96aa0f065f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.115 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[66c0c79a-8383-4627-b358-42e08ebcf469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.118 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[191c1228-6c30-4789-a972-bb90ecade421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.150 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2a20ff71-0bcd-43c4-b6a6-ea7631b1c3c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.172 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf29b573-a992-4c09-89a0-d9aa50ed54be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807697, 'reachable_time': 38471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348432, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.190 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7e9941e4-198d-4999-9933-353181d95a66]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807710, 'tstamp': 807710}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348433, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807712, 'tstamp': 807712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348433, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.192 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.194 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.196 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.197 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.197 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.198 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.198 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c5e3056c-03cb-4408-8f11-96bd3e735ff6 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b unbound from our chassis
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.199 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.216 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2564bb3e-0ed6-4a7b-8d89-16ab1359fdd0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.256 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[75931ddd-c762-4bdd-8bd3-cf4825beacb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.260 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e6291014-6401-4883-82b5-428eb9d17357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.290 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[26f79c52-67d8-4753-a151-841175a3f246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ceph-mon[76537]: pgmap v2479: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 832 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.300 248514 INFO nova.virt.libvirt.driver [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Deleting instance files /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2_del
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.301 248514 INFO nova.virt.libvirt.driver [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Deletion of /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2_del complete
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.306 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd826a2c-43c9-4be1-adcb-d84802fb281b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807697, 'reachable_time': 38471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348442, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.322 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c064fef-691b-4d5f-874c-749b426c6e3d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807710, 'tstamp': 807710}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348443, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807712, 'tstamp': 807712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348443, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.324 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.327 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.328 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.328 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.329 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:45:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.330 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.550 248514 INFO nova.compute.manager [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Took 0.75 seconds to destroy the instance on the hypervisor.
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.551 248514 DEBUG oslo.service.loopingcall [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.552 248514 DEBUG nova.compute.manager [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:45:25 compute-0 nova_compute[248510]: 2025-12-13 08:45:25.552 248514 DEBUG nova.network.neutron [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:45:25 compute-0 sshd-session[348434]: Invalid user sol from 193.32.162.146 port 56296
Dec 13 08:45:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 141 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 124 op/s
Dec 13 08:45:25 compute-0 sshd-session[348434]: Connection closed by invalid user sol 193.32.162.146 port 56296 [preauth]
Dec 13 08:45:26 compute-0 ceph-mon[76537]: pgmap v2480: 321 pgs: 321 active+clean; 141 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 124 op/s
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.426 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-unplugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.426 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.427 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.427 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.427 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] No waiting events found dispatching network-vif-unplugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.427 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-unplugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.427 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.428 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.428 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.428 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.428 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] No waiting events found dispatching network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.429 248514 WARNING nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received unexpected event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 for instance with vm_state active and task_state deleting.
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.429 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.429 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.429 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.429 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.430 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] No waiting events found dispatching network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.430 248514 WARNING nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received unexpected event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 for instance with vm_state active and task_state deleting.
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.595 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.824 248514 DEBUG nova.network.neutron [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.850 248514 INFO nova.compute.manager [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Took 1.30 seconds to deallocate network for instance.
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.908 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:26 compute-0 nova_compute[248510]: 2025-12-13 08:45:26.909 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:27 compute-0 nova_compute[248510]: 2025-12-13 08:45:27.011 248514 DEBUG oslo_concurrency.processutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:45:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:45:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1026977894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:45:27 compute-0 nova_compute[248510]: 2025-12-13 08:45:27.551 248514 DEBUG oslo_concurrency.processutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:45:27 compute-0 nova_compute[248510]: 2025-12-13 08:45:27.559 248514 DEBUG nova.compute.provider_tree [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:45:27 compute-0 nova_compute[248510]: 2025-12-13 08:45:27.579 248514 DEBUG nova.scheduler.client.report [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:45:27 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1026977894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:45:27 compute-0 nova_compute[248510]: 2025-12-13 08:45:27.667 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:27 compute-0 nova_compute[248510]: 2025-12-13 08:45:27.716 248514 INFO nova.scheduler.client.report [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Deleted allocations for instance 55a85a4e-6537-4498-8c7d-5c062cd421e2
Dec 13 08:45:27 compute-0 nova_compute[248510]: 2025-12-13 08:45:27.807 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 121 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 81 KiB/s wr, 104 op/s
Dec 13 08:45:28 compute-0 ceph-mon[76537]: pgmap v2481: 321 pgs: 321 active+clean; 121 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 81 KiB/s wr, 104 op/s
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.603 248514 DEBUG nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.603 248514 DEBUG oslo_concurrency.lockutils [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.604 248514 DEBUG oslo_concurrency.lockutils [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.604 248514 DEBUG oslo_concurrency.lockutils [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.604 248514 DEBUG nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] No waiting events found dispatching network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.604 248514 WARNING nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received unexpected event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 for instance with vm_state deleted and task_state None.
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.604 248514 DEBUG nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.605 248514 DEBUG oslo_concurrency.lockutils [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.605 248514 DEBUG oslo_concurrency.lockutils [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.605 248514 DEBUG oslo_concurrency.lockutils [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.605 248514 DEBUG nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] No waiting events found dispatching network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.605 248514 WARNING nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received unexpected event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 for instance with vm_state deleted and task_state None.
Dec 13 08:45:28 compute-0 nova_compute[248510]: 2025-12-13 08:45:28.606 248514 DEBUG nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-deleted-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:45:28 compute-0 sudo[348466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:45:28 compute-0 sudo[348466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:45:28 compute-0 sudo[348466]: pam_unix(sudo:session): session closed for user root
Dec 13 08:45:28 compute-0 sudo[348491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 08:45:28 compute-0 sudo[348491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:45:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:45:29 compute-0 sudo[348491]: pam_unix(sudo:session): session closed for user root
Dec 13 08:45:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:45:29 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:45:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:45:29 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:45:29 compute-0 sudo[348536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:45:29 compute-0 sudo[348536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:45:29 compute-0 sudo[348536]: pam_unix(sudo:session): session closed for user root
Dec 13 08:45:29 compute-0 sudo[348561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:45:29 compute-0 sudo[348561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:45:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 121 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 92 op/s
Dec 13 08:45:29 compute-0 sudo[348561]: pam_unix(sudo:session): session closed for user root
Dec 13 08:45:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:45:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:45:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:45:29 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:45:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:45:30 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:45:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:45:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:45:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:45:30 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:45:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:45:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:45:30 compute-0 nova_compute[248510]: 2025-12-13 08:45:30.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:30 compute-0 sudo[348618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:45:30 compute-0 sudo[348618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:45:30 compute-0 sudo[348618]: pam_unix(sudo:session): session closed for user root
Dec 13 08:45:30 compute-0 sudo[348643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:45:30 compute-0 sudo[348643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:45:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:45:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:45:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:45:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:45:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:45:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:45:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:45:30 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:45:30 compute-0 podman[348680]: 2025-12-13 08:45:30.419656852 +0000 UTC m=+0.049499793 container create 70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_agnesi, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 08:45:30 compute-0 systemd[1]: Started libpod-conmon-70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d.scope.
Dec 13 08:45:30 compute-0 podman[348680]: 2025-12-13 08:45:30.392463109 +0000 UTC m=+0.022306120 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:45:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:45:30 compute-0 podman[348680]: 2025-12-13 08:45:30.55666815 +0000 UTC m=+0.186511111 container init 70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_agnesi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 08:45:30 compute-0 podman[348680]: 2025-12-13 08:45:30.564526747 +0000 UTC m=+0.194369678 container start 70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_agnesi, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:45:30 compute-0 xenodochial_agnesi[348696]: 167 167
Dec 13 08:45:30 compute-0 systemd[1]: libpod-70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d.scope: Deactivated successfully.
Dec 13 08:45:30 compute-0 podman[348680]: 2025-12-13 08:45:30.631367015 +0000 UTC m=+0.261209946 container attach 70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 08:45:30 compute-0 podman[348680]: 2025-12-13 08:45:30.631951239 +0000 UTC m=+0.261794170 container died 70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_agnesi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:45:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bde2c46014e63e527fbd3d3217b811af3d46c27c6eb7d7fbb16dd12675af55d-merged.mount: Deactivated successfully.
Dec 13 08:45:30 compute-0 podman[348680]: 2025-12-13 08:45:30.763145452 +0000 UTC m=+0.392988383 container remove 70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:45:30 compute-0 systemd[1]: libpod-conmon-70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d.scope: Deactivated successfully.
Dec 13 08:45:30 compute-0 podman[348721]: 2025-12-13 08:45:30.967306985 +0000 UTC m=+0.081314422 container create 0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 08:45:31 compute-0 podman[348721]: 2025-12-13 08:45:30.913549876 +0000 UTC m=+0.027557333 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:45:31 compute-0 systemd[1]: Started libpod-conmon-0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b.scope.
Dec 13 08:45:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0278d58159ef24f2cf7788a374bc09556feb4871b903b43fdfb85aa2e24298ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0278d58159ef24f2cf7788a374bc09556feb4871b903b43fdfb85aa2e24298ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0278d58159ef24f2cf7788a374bc09556feb4871b903b43fdfb85aa2e24298ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0278d58159ef24f2cf7788a374bc09556feb4871b903b43fdfb85aa2e24298ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0278d58159ef24f2cf7788a374bc09556feb4871b903b43fdfb85aa2e24298ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:31 compute-0 podman[348721]: 2025-12-13 08:45:31.059667943 +0000 UTC m=+0.173675390 container init 0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:45:31 compute-0 podman[348721]: 2025-12-13 08:45:31.068006252 +0000 UTC m=+0.182013689 container start 0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 08:45:31 compute-0 podman[348721]: 2025-12-13 08:45:31.072086585 +0000 UTC m=+0.186094022 container attach 0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:45:31 compute-0 ceph-mon[76537]: pgmap v2482: 321 pgs: 321 active+clean; 121 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 92 op/s
Dec 13 08:45:31 compute-0 dazzling_taussig[348737]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:45:31 compute-0 dazzling_taussig[348737]: --> All data devices are unavailable
Dec 13 08:45:31 compute-0 systemd[1]: libpod-0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b.scope: Deactivated successfully.
Dec 13 08:45:31 compute-0 podman[348721]: 2025-12-13 08:45:31.600211198 +0000 UTC m=+0.714218665 container died 0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:45:31 compute-0 nova_compute[248510]: 2025-12-13 08:45:31.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0278d58159ef24f2cf7788a374bc09556feb4871b903b43fdfb85aa2e24298ba-merged.mount: Deactivated successfully.
Dec 13 08:45:31 compute-0 podman[348721]: 2025-12-13 08:45:31.798892304 +0000 UTC m=+0.912899771 container remove 0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 08:45:31 compute-0 systemd[1]: libpod-conmon-0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b.scope: Deactivated successfully.
Dec 13 08:45:31 compute-0 sudo[348643]: pam_unix(sudo:session): session closed for user root
Dec 13 08:45:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 121 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 100 op/s
Dec 13 08:45:31 compute-0 sudo[348771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:45:31 compute-0 sudo[348771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:45:31 compute-0 sudo[348771]: pam_unix(sudo:session): session closed for user root
Dec 13 08:45:32 compute-0 sudo[348796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:45:32 compute-0 sudo[348796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:45:32 compute-0 podman[348833]: 2025-12-13 08:45:32.366358923 +0000 UTC m=+0.110401711 container create 7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:45:32 compute-0 podman[348833]: 2025-12-13 08:45:32.276702923 +0000 UTC m=+0.020745732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:45:32 compute-0 ceph-mon[76537]: pgmap v2483: 321 pgs: 321 active+clean; 121 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 100 op/s
Dec 13 08:45:32 compute-0 systemd[1]: Started libpod-conmon-7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2.scope.
Dec 13 08:45:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:45:32 compute-0 podman[348833]: 2025-12-13 08:45:32.527873276 +0000 UTC m=+0.271916184 container init 7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_golick, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:45:32 compute-0 podman[348833]: 2025-12-13 08:45:32.537538359 +0000 UTC m=+0.281581147 container start 7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:45:32 compute-0 zealous_golick[348850]: 167 167
Dec 13 08:45:32 compute-0 systemd[1]: libpod-7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2.scope: Deactivated successfully.
Dec 13 08:45:32 compute-0 podman[348833]: 2025-12-13 08:45:32.588696453 +0000 UTC m=+0.332739271 container attach 7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_golick, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 08:45:32 compute-0 podman[348833]: 2025-12-13 08:45:32.589947714 +0000 UTC m=+0.333990562 container died 7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_golick, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 08:45:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a81075742ae245bdba68f6c10336aa612ac21d930ce1a6291b67929bccf40e29-merged.mount: Deactivated successfully.
Dec 13 08:45:32 compute-0 podman[348833]: 2025-12-13 08:45:32.861845657 +0000 UTC m=+0.605888445 container remove 7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_golick, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 08:45:32 compute-0 systemd[1]: libpod-conmon-7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2.scope: Deactivated successfully.
Dec 13 08:45:33 compute-0 podman[348873]: 2025-12-13 08:45:33.034220203 +0000 UTC m=+0.046575920 container create 7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:45:33 compute-0 systemd[1]: Started libpod-conmon-7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f.scope.
Dec 13 08:45:33 compute-0 podman[348873]: 2025-12-13 08:45:33.014442967 +0000 UTC m=+0.026798704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:45:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f1328e78004b1f87a759f4394ac0b14a0ab357067ebec3833ebf05319b6cf6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f1328e78004b1f87a759f4394ac0b14a0ab357067ebec3833ebf05319b6cf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f1328e78004b1f87a759f4394ac0b14a0ab357067ebec3833ebf05319b6cf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f1328e78004b1f87a759f4394ac0b14a0ab357067ebec3833ebf05319b6cf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:33 compute-0 podman[348873]: 2025-12-13 08:45:33.136216142 +0000 UTC m=+0.148571879 container init 7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 08:45:33 compute-0 podman[348873]: 2025-12-13 08:45:33.144745497 +0000 UTC m=+0.157101214 container start 7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:45:33 compute-0 podman[348873]: 2025-12-13 08:45:33.149415534 +0000 UTC m=+0.161771251 container attach 7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 08:45:33 compute-0 kind_merkle[348889]: {
Dec 13 08:45:33 compute-0 kind_merkle[348889]:     "0": [
Dec 13 08:45:33 compute-0 kind_merkle[348889]:         {
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "devices": [
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "/dev/loop3"
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             ],
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_name": "ceph_lv0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_size": "21470642176",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "name": "ceph_lv0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "tags": {
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.cluster_name": "ceph",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.crush_device_class": "",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.encrypted": "0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.objectstore": "bluestore",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.osd_id": "0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.type": "block",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.vdo": "0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.with_tpm": "0"
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             },
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "type": "block",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "vg_name": "ceph_vg0"
Dec 13 08:45:33 compute-0 kind_merkle[348889]:         }
Dec 13 08:45:33 compute-0 kind_merkle[348889]:     ],
Dec 13 08:45:33 compute-0 kind_merkle[348889]:     "1": [
Dec 13 08:45:33 compute-0 kind_merkle[348889]:         {
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "devices": [
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "/dev/loop4"
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             ],
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_name": "ceph_lv1",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_size": "21470642176",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "name": "ceph_lv1",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "tags": {
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.cluster_name": "ceph",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.crush_device_class": "",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.encrypted": "0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.objectstore": "bluestore",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.osd_id": "1",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.type": "block",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.vdo": "0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.with_tpm": "0"
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             },
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "type": "block",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "vg_name": "ceph_vg1"
Dec 13 08:45:33 compute-0 kind_merkle[348889]:         }
Dec 13 08:45:33 compute-0 kind_merkle[348889]:     ],
Dec 13 08:45:33 compute-0 kind_merkle[348889]:     "2": [
Dec 13 08:45:33 compute-0 kind_merkle[348889]:         {
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "devices": [
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "/dev/loop5"
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             ],
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_name": "ceph_lv2",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_size": "21470642176",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "name": "ceph_lv2",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "tags": {
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.cluster_name": "ceph",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.crush_device_class": "",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.encrypted": "0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.objectstore": "bluestore",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.osd_id": "2",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.type": "block",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.vdo": "0",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:                 "ceph.with_tpm": "0"
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             },
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "type": "block",
Dec 13 08:45:33 compute-0 kind_merkle[348889]:             "vg_name": "ceph_vg2"
Dec 13 08:45:33 compute-0 kind_merkle[348889]:         }
Dec 13 08:45:33 compute-0 kind_merkle[348889]:     ]
Dec 13 08:45:33 compute-0 kind_merkle[348889]: }
Dec 13 08:45:33 compute-0 systemd[1]: libpod-7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f.scope: Deactivated successfully.
Dec 13 08:45:33 compute-0 podman[348873]: 2025-12-13 08:45:33.469385423 +0000 UTC m=+0.481741140 container died 7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-21f1328e78004b1f87a759f4394ac0b14a0ab357067ebec3833ebf05319b6cf6-merged.mount: Deactivated successfully.
Dec 13 08:45:33 compute-0 podman[348873]: 2025-12-13 08:45:33.522918827 +0000 UTC m=+0.535274554 container remove 7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:45:33 compute-0 systemd[1]: libpod-conmon-7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f.scope: Deactivated successfully.
Dec 13 08:45:33 compute-0 sudo[348796]: pam_unix(sudo:session): session closed for user root
Dec 13 08:45:33 compute-0 sudo[348912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:45:33 compute-0 sudo[348912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:45:33 compute-0 sudo[348912]: pam_unix(sudo:session): session closed for user root
Dec 13 08:45:33 compute-0 sudo[348937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:45:33 compute-0 sudo[348937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:45:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 100 op/s
Dec 13 08:45:33 compute-0 podman[348974]: 2025-12-13 08:45:33.96376112 +0000 UTC m=+0.038007005 container create 4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 08:45:34 compute-0 systemd[1]: Started libpod-conmon-4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36.scope.
Dec 13 08:45:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:45:34 compute-0 podman[348974]: 2025-12-13 08:45:33.947980824 +0000 UTC m=+0.022226729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:45:34 compute-0 podman[348974]: 2025-12-13 08:45:34.051803829 +0000 UTC m=+0.126049744 container init 4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mcclintock, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:45:34 compute-0 podman[348974]: 2025-12-13 08:45:34.060412855 +0000 UTC m=+0.134658740 container start 4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 08:45:34 compute-0 podman[348974]: 2025-12-13 08:45:34.063538504 +0000 UTC m=+0.137784389 container attach 4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mcclintock, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:45:34 compute-0 upbeat_mcclintock[348990]: 167 167
Dec 13 08:45:34 compute-0 systemd[1]: libpod-4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36.scope: Deactivated successfully.
Dec 13 08:45:34 compute-0 podman[348974]: 2025-12-13 08:45:34.067104503 +0000 UTC m=+0.141350388 container died 4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:45:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-805ca287a976607b427314ae0ae7845c6c34f0cf3d71a24b6be7a0f4e86f2926-merged.mount: Deactivated successfully.
Dec 13 08:45:34 compute-0 podman[348974]: 2025-12-13 08:45:34.102062311 +0000 UTC m=+0.176308196 container remove 4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 08:45:34 compute-0 systemd[1]: libpod-conmon-4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36.scope: Deactivated successfully.
Dec 13 08:45:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:45:34 compute-0 podman[349014]: 2025-12-13 08:45:34.281429742 +0000 UTC m=+0.054491469 container create 30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:45:34 compute-0 systemd[1]: Started libpod-conmon-30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94.scope.
Dec 13 08:45:34 compute-0 podman[349014]: 2025-12-13 08:45:34.254557417 +0000 UTC m=+0.027619174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:45:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce271b3aa9330609f83801f426734b040e1052659edd775332687fad7a0fd03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce271b3aa9330609f83801f426734b040e1052659edd775332687fad7a0fd03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce271b3aa9330609f83801f426734b040e1052659edd775332687fad7a0fd03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce271b3aa9330609f83801f426734b040e1052659edd775332687fad7a0fd03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:45:34 compute-0 podman[349014]: 2025-12-13 08:45:34.368827005 +0000 UTC m=+0.141888732 container init 30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:45:34 compute-0 podman[349014]: 2025-12-13 08:45:34.375677617 +0000 UTC m=+0.148739324 container start 30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 08:45:34 compute-0 podman[349014]: 2025-12-13 08:45:34.378707813 +0000 UTC m=+0.151769530 container attach 30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:45:35 compute-0 lvm[349110]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:45:35 compute-0 lvm[349109]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:45:35 compute-0 lvm[349110]: VG ceph_vg1 finished
Dec 13 08:45:35 compute-0 lvm[349109]: VG ceph_vg0 finished
Dec 13 08:45:35 compute-0 lvm[349112]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:45:35 compute-0 lvm[349112]: VG ceph_vg2 finished
Dec 13 08:45:35 compute-0 nova_compute[248510]: 2025-12-13 08:45:35.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:35 compute-0 ecstatic_keldysh[349031]: {}
Dec 13 08:45:35 compute-0 systemd[1]: libpod-30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94.scope: Deactivated successfully.
Dec 13 08:45:35 compute-0 systemd[1]: libpod-30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94.scope: Consumed 1.360s CPU time.
Dec 13 08:45:35 compute-0 podman[349014]: 2025-12-13 08:45:35.258014889 +0000 UTC m=+1.031076616 container died 30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 08:45:35 compute-0 ceph-mon[76537]: pgmap v2484: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 100 op/s
Dec 13 08:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ce271b3aa9330609f83801f426734b040e1052659edd775332687fad7a0fd03-merged.mount: Deactivated successfully.
Dec 13 08:45:35 compute-0 podman[349014]: 2025-12-13 08:45:35.560137302 +0000 UTC m=+1.333199009 container remove 30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 08:45:35 compute-0 sudo[348937]: pam_unix(sudo:session): session closed for user root
Dec 13 08:45:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:45:35 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:45:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:45:35 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:45:35 compute-0 systemd[1]: libpod-conmon-30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94.scope: Deactivated successfully.
Dec 13 08:45:35 compute-0 sudo[349126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:45:35 compute-0 sudo[349126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:45:35 compute-0 sudo[349126]: pam_unix(sudo:session): session closed for user root
Dec 13 08:45:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 79 op/s
Dec 13 08:45:36 compute-0 nova_compute[248510]: 2025-12-13 08:45:36.599 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:36 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:45:36 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:45:36 compute-0 ceph-mon[76537]: pgmap v2485: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 79 op/s
Dec 13 08:45:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 2.2 KiB/s wr, 34 op/s
Dec 13 08:45:38 compute-0 ceph-mon[76537]: pgmap v2486: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 2.2 KiB/s wr, 34 op/s
Dec 13 08:45:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:45:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 1023 B/s wr, 9 op/s
Dec 13 08:45:40 compute-0 nova_compute[248510]: 2025-12-13 08:45:40.034 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615525.032338, 55a85a4e-6537-4498-8c7d-5c062cd421e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:45:40 compute-0 nova_compute[248510]: 2025-12-13 08:45:40.035 248514 INFO nova.compute.manager [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] VM Stopped (Lifecycle Event)
Dec 13 08:45:40 compute-0 nova_compute[248510]: 2025-12-13 08:45:40.077 248514 DEBUG nova.compute.manager [None req-3289fc79-ecc5-497b-86d7-33e13ff6b862 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:45:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:45:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:45:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:45:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:45:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:45:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:45:40 compute-0 nova_compute[248510]: 2025-12-13 08:45:40.188 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Dec 13 08:45:41 compute-0 ceph-mon[76537]: pgmap v2487: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 1023 B/s wr, 9 op/s
Dec 13 08:45:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Dec 13 08:45:41 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Dec 13 08:45:41 compute-0 nova_compute[248510]: 2025-12-13 08:45:41.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Dec 13 08:45:42 compute-0 ceph-mon[76537]: osdmap e263: 3 total, 3 up, 3 in
Dec 13 08:45:43 compute-0 ceph-mon[76537]: pgmap v2489: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Dec 13 08:45:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 102 B/s wr, 0 op/s
Dec 13 08:45:43 compute-0 podman[349152]: 2025-12-13 08:45:43.994478652 +0000 UTC m=+0.072033279 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:45:43 compute-0 podman[349153]: 2025-12-13 08:45:43.995672732 +0000 UTC m=+0.073269380 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:45:44 compute-0 podman[349151]: 2025-12-13 08:45:44.075138476 +0000 UTC m=+0.151817611 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 13 08:45:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:45:45 compute-0 ceph-mon[76537]: pgmap v2490: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 102 B/s wr, 0 op/s
Dec 13 08:45:45 compute-0 nova_compute[248510]: 2025-12-13 08:45:45.192 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 137 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 23 op/s
Dec 13 08:45:46 compute-0 nova_compute[248510]: 2025-12-13 08:45:46.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Dec 13 08:45:47 compute-0 ceph-mon[76537]: pgmap v2491: 321 pgs: 321 active+clean; 137 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 23 op/s
Dec 13 08:45:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Dec 13 08:45:47 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Dec 13 08:45:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 193 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 9.0 MiB/s wr, 51 op/s
Dec 13 08:45:48 compute-0 ceph-mon[76537]: osdmap e264: 3 total, 3 up, 3 in
Dec 13 08:45:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:45:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Dec 13 08:45:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Dec 13 08:45:49 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Dec 13 08:45:49 compute-0 ceph-mon[76537]: pgmap v2493: 321 pgs: 321 active+clean; 193 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 9.0 MiB/s wr, 51 op/s
Dec 13 08:45:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 233 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 14 MiB/s wr, 51 op/s
Dec 13 08:45:50 compute-0 nova_compute[248510]: 2025-12-13 08:45:50.195 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Dec 13 08:45:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Dec 13 08:45:50 compute-0 ceph-mon[76537]: osdmap e265: 3 total, 3 up, 3 in
Dec 13 08:45:50 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Dec 13 08:45:51 compute-0 ceph-mon[76537]: pgmap v2495: 321 pgs: 321 active+clean; 233 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 14 MiB/s wr, 51 op/s
Dec 13 08:45:51 compute-0 ceph-mon[76537]: osdmap e266: 3 total, 3 up, 3 in
Dec 13 08:45:51 compute-0 nova_compute[248510]: 2025-12-13 08:45:51.605 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:51 compute-0 nova_compute[248510]: 2025-12-13 08:45:51.851 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:45:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 217 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 16 MiB/s wr, 85 op/s
Dec 13 08:45:52 compute-0 ceph-mon[76537]: pgmap v2497: 321 pgs: 321 active+clean; 217 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 16 MiB/s wr, 85 op/s
Dec 13 08:45:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 161 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 14 MiB/s wr, 86 op/s
Dec 13 08:45:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:45:54 compute-0 ceph-mon[76537]: pgmap v2498: 321 pgs: 321 active+clean; 161 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 14 MiB/s wr, 86 op/s
Dec 13 08:45:55 compute-0 nova_compute[248510]: 2025-12-13 08:45:55.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:55.424 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:45:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:55.424 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:45:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:45:55.425 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:45:55 compute-0 nova_compute[248510]: 2025-12-13 08:45:55.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:45:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 5.0 MiB/s wr, 74 op/s
Dec 13 08:45:56 compute-0 nova_compute[248510]: 2025-12-13 08:45:56.608 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:45:57 compute-0 ceph-mon[76537]: pgmap v2499: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 5.0 MiB/s wr, 74 op/s
Dec 13 08:45:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.9 KiB/s wr, 67 op/s
Dec 13 08:45:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:45:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Dec 13 08:45:59 compute-0 ceph-mon[76537]: pgmap v2500: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.9 KiB/s wr, 67 op/s
Dec 13 08:45:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Dec 13 08:45:59 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Dec 13 08:45:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 08:46:00 compute-0 nova_compute[248510]: 2025-12-13 08:46:00.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:00 compute-0 ceph-mon[76537]: osdmap e267: 3 total, 3 up, 3 in
Dec 13 08:46:00 compute-0 ceph-mon[76537]: pgmap v2502: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 08:46:00 compute-0 nova_compute[248510]: 2025-12-13 08:46:00.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:46:00 compute-0 nova_compute[248510]: 2025-12-13 08:46:00.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:46:00 compute-0 nova_compute[248510]: 2025-12-13 08:46:00.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:46:01 compute-0 nova_compute[248510]: 2025-12-13 08:46:01.228 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:46:01 compute-0 nova_compute[248510]: 2025-12-13 08:46:01.229 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:46:01 compute-0 nova_compute[248510]: 2025-12-13 08:46:01.229 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:46:01 compute-0 nova_compute[248510]: 2025-12-13 08:46:01.229 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:46:01 compute-0 nova_compute[248510]: 2025-12-13 08:46:01.610 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:01.763 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:46:01 compute-0 nova_compute[248510]: 2025-12-13 08:46:01.763 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:01.764 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:46:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 08:46:03 compute-0 ceph-mon[76537]: pgmap v2503: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 08:46:03 compute-0 nova_compute[248510]: 2025-12-13 08:46:03.756 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:46:03 compute-0 nova_compute[248510]: 2025-12-13 08:46:03.787 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:46:03 compute-0 nova_compute[248510]: 2025-12-13 08:46:03.788 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:46:03 compute-0 nova_compute[248510]: 2025-12-13 08:46:03.788 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:46:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 921 B/s wr, 18 op/s
Dec 13 08:46:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:46:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:04.766 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:46:05 compute-0 ceph-mon[76537]: pgmap v2504: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 921 B/s wr, 18 op/s
Dec 13 08:46:05 compute-0 nova_compute[248510]: 2025-12-13 08:46:05.250 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:05 compute-0 nova_compute[248510]: 2025-12-13 08:46:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:46:05 compute-0 nova_compute[248510]: 2025-12-13 08:46:05.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:46:05 compute-0 nova_compute[248510]: 2025-12-13 08:46:05.832 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:46:05 compute-0 nova_compute[248510]: 2025-12-13 08:46:05.834 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:46:05 compute-0 nova_compute[248510]: 2025-12-13 08:46:05.835 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:46:05 compute-0 nova_compute[248510]: 2025-12-13 08:46:05.835 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:46:05 compute-0 nova_compute[248510]: 2025-12-13 08:46:05.835 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:46:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:46:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169512374' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:46:06 compute-0 nova_compute[248510]: 2025-12-13 08:46:06.409 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:46:06 compute-0 nova_compute[248510]: 2025-12-13 08:46:06.508 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:46:06 compute-0 nova_compute[248510]: 2025-12-13 08:46:06.508 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:46:06 compute-0 ceph-mon[76537]: pgmap v2505: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:06 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3169512374' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:46:06 compute-0 nova_compute[248510]: 2025-12-13 08:46:06.612 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:06 compute-0 nova_compute[248510]: 2025-12-13 08:46:06.690 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:46:06 compute-0 nova_compute[248510]: 2025-12-13 08:46:06.692 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3669MB free_disk=59.942045137286186GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:46:06 compute-0 nova_compute[248510]: 2025-12-13 08:46:06.692 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:46:06 compute-0 nova_compute[248510]: 2025-12-13 08:46:06.692 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:46:06 compute-0 nova_compute[248510]: 2025-12-13 08:46:06.989 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c3fb322f-a9db-4396-b659-2307698e5524 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:46:06 compute-0 nova_compute[248510]: 2025-12-13 08:46:06.989 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:46:06 compute-0 nova_compute[248510]: 2025-12-13 08:46:06.989 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:46:07 compute-0 nova_compute[248510]: 2025-12-13 08:46:07.179 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:46:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:46:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1125902812' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:46:07 compute-0 nova_compute[248510]: 2025-12-13 08:46:07.722 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:46:07 compute-0 nova_compute[248510]: 2025-12-13 08:46:07.728 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:46:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1125902812' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:46:07 compute-0 nova_compute[248510]: 2025-12-13 08:46:07.762 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:46:07 compute-0 nova_compute[248510]: 2025-12-13 08:46:07.882 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:46:07 compute-0 nova_compute[248510]: 2025-12-13 08:46:07.883 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:46:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:08 compute-0 ceph-mon[76537]: pgmap v2506: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:46:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:46:09
Dec 13 08:46:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:46:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:46:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', 'backups']
Dec 13 08:46:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:46:09 compute-0 nova_compute[248510]: 2025-12-13 08:46:09.884 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:46:09 compute-0 nova_compute[248510]: 2025-12-13 08:46:09.884 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:46:09 compute-0 nova_compute[248510]: 2025-12-13 08:46:09.884 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:46:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:46:10 compute-0 nova_compute[248510]: 2025-12-13 08:46:10.253 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:46:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:46:11 compute-0 ceph-mon[76537]: pgmap v2507: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.104446) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615571104493, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1420, "num_deletes": 252, "total_data_size": 2239492, "memory_usage": 2284832, "flush_reason": "Manual Compaction"}
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615571147494, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 2176533, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48372, "largest_seqno": 49791, "table_properties": {"data_size": 2169885, "index_size": 3783, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14152, "raw_average_key_size": 20, "raw_value_size": 2156446, "raw_average_value_size": 3058, "num_data_blocks": 169, "num_entries": 705, "num_filter_entries": 705, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615439, "oldest_key_time": 1765615439, "file_creation_time": 1765615571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 43110 microseconds, and 6176 cpu microseconds.
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.147550) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 2176533 bytes OK
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.147575) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.150007) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.150030) EVENT_LOG_v1 {"time_micros": 1765615571150025, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.150067) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2233200, prev total WAL file size 2233200, number of live WAL files 2.
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.150851) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(2125KB)], [113(8691KB)]
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615571150906, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 11076710, "oldest_snapshot_seqno": -1}
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7019 keys, 9307756 bytes, temperature: kUnknown
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615571221027, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 9307756, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9262092, "index_size": 26992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 183516, "raw_average_key_size": 26, "raw_value_size": 9137620, "raw_average_value_size": 1301, "num_data_blocks": 1052, "num_entries": 7019, "num_filter_entries": 7019, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.221309) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 9307756 bytes
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.233730) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.7 rd, 132.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 8.5 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(9.4) write-amplify(4.3) OK, records in: 7539, records dropped: 520 output_compression: NoCompression
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.233768) EVENT_LOG_v1 {"time_micros": 1765615571233754, "job": 68, "event": "compaction_finished", "compaction_time_micros": 70242, "compaction_time_cpu_micros": 24518, "output_level": 6, "num_output_files": 1, "total_output_size": 9307756, "num_input_records": 7539, "num_output_records": 7019, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615571234361, "job": 68, "event": "table_file_deletion", "file_number": 115}
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615571235724, "job": 68, "event": "table_file_deletion", "file_number": 113}
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.150802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.235799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.235805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.235807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.235808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:46:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.235809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:46:11 compute-0 nova_compute[248510]: 2025-12-13 08:46:11.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:13 compute-0 ceph-mon[76537]: pgmap v2508: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:46:14 compute-0 podman[349259]: 2025-12-13 08:46:14.973276455 +0000 UTC m=+0.054744685 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 13 08:46:14 compute-0 podman[349258]: 2025-12-13 08:46:14.975341006 +0000 UTC m=+0.056301113 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:46:14 compute-0 podman[349257]: 2025-12-13 08:46:14.993387889 +0000 UTC m=+0.085843515 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Dec 13 08:46:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:46:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1959195663' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:46:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:46:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1959195663' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:46:15 compute-0 ceph-mon[76537]: pgmap v2509: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1959195663' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:46:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1959195663' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:46:15 compute-0 nova_compute[248510]: 2025-12-13 08:46:15.255 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:15 compute-0 nova_compute[248510]: 2025-12-13 08:46:15.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:46:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:16 compute-0 nova_compute[248510]: 2025-12-13 08:46:16.616 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:17 compute-0 ceph-mon[76537]: pgmap v2510: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:18 compute-0 nova_compute[248510]: 2025-12-13 08:46:18.202 248514 INFO nova.compute.manager [None req-528e3b7d-601c-43ab-a174-be31e484ad0c 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Pausing
Dec 13 08:46:18 compute-0 nova_compute[248510]: 2025-12-13 08:46:18.204 248514 DEBUG nova.objects.instance [None req-528e3b7d-601c-43ab-a174-be31e484ad0c 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'flavor' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:46:18 compute-0 nova_compute[248510]: 2025-12-13 08:46:18.307 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615578.3069959, c3fb322f-a9db-4396-b659-2307698e5524 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:46:18 compute-0 nova_compute[248510]: 2025-12-13 08:46:18.307 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Paused (Lifecycle Event)
Dec 13 08:46:18 compute-0 nova_compute[248510]: 2025-12-13 08:46:18.309 248514 DEBUG nova.compute.manager [None req-528e3b7d-601c-43ab-a174-be31e484ad0c 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:46:18 compute-0 nova_compute[248510]: 2025-12-13 08:46:18.444 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:46:18 compute-0 nova_compute[248510]: 2025-12-13 08:46:18.448 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:46:18 compute-0 nova_compute[248510]: 2025-12-13 08:46:18.506 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (pausing). Skip.
Dec 13 08:46:18 compute-0 ceph-mon[76537]: pgmap v2511: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:46:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Dec 13 08:46:20 compute-0 nova_compute[248510]: 2025-12-13 08:46:20.308 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:21 compute-0 ceph-mon[76537]: pgmap v2512: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000770751562265708 of space, bias 1.0, pg target 0.23122546867971241 quantized to 32 (current 32)
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006681201032124484 of space, bias 1.0, pg target 0.20043603096373452 quantized to 32 (current 32)
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.839601767484647e-07 of space, bias 4.0, pg target 0.0007007522120981576 quantized to 16 (current 32)
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:46:21 compute-0 nova_compute[248510]: 2025-12-13 08:46:21.566 248514 INFO nova.compute.manager [None req-01197d19-f7a0-48d8-b0e9-2e06b7add471 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Unpausing
Dec 13 08:46:21 compute-0 nova_compute[248510]: 2025-12-13 08:46:21.567 248514 DEBUG nova.objects.instance [None req-01197d19-f7a0-48d8-b0e9-2e06b7add471 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'flavor' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:46:21 compute-0 nova_compute[248510]: 2025-12-13 08:46:21.599 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615581.599589, c3fb322f-a9db-4396-b659-2307698e5524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:46:21 compute-0 nova_compute[248510]: 2025-12-13 08:46:21.600 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Resumed (Lifecycle Event)
Dec 13 08:46:21 compute-0 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 08:46:21 compute-0 nova_compute[248510]: 2025-12-13 08:46:21.604 248514 DEBUG nova.virt.libvirt.guest [None req-01197d19-f7a0-48d8-b0e9-2e06b7add471 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 13 08:46:21 compute-0 nova_compute[248510]: 2025-12-13 08:46:21.605 248514 DEBUG nova.compute.manager [None req-01197d19-f7a0-48d8-b0e9-2e06b7add471 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:46:21 compute-0 nova_compute[248510]: 2025-12-13 08:46:21.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:21 compute-0 nova_compute[248510]: 2025-12-13 08:46:21.654 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:46:21 compute-0 nova_compute[248510]: 2025-12-13 08:46:21.657 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:46:21 compute-0 nova_compute[248510]: 2025-12-13 08:46:21.698 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (unpausing). Skip.
Dec 13 08:46:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 08:46:23 compute-0 ceph-mon[76537]: pgmap v2513: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 08:46:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 08:46:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:46:24 compute-0 nova_compute[248510]: 2025-12-13 08:46:24.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:46:25 compute-0 ceph-mon[76537]: pgmap v2514: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 08:46:25 compute-0 nova_compute[248510]: 2025-12-13 08:46:25.311 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 08:46:26 compute-0 ceph-mon[76537]: pgmap v2515: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 08:46:26 compute-0 nova_compute[248510]: 2025-12-13 08:46:26.620 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 08:46:29 compute-0 ceph-mon[76537]: pgmap v2516: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 08:46:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:46:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 08:46:30 compute-0 nova_compute[248510]: 2025-12-13 08:46:30.314 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:31 compute-0 ceph-mon[76537]: pgmap v2517: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 08:46:31 compute-0 nova_compute[248510]: 2025-12-13 08:46:31.623 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec 13 08:46:33 compute-0 ceph-mon[76537]: pgmap v2518: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec 13 08:46:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:46:35 compute-0 ceph-mon[76537]: pgmap v2519: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:35 compute-0 nova_compute[248510]: 2025-12-13 08:46:35.316 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:35 compute-0 sudo[349322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:46:35 compute-0 sudo[349322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:46:35 compute-0 sudo[349322]: pam_unix(sudo:session): session closed for user root
Dec 13 08:46:35 compute-0 sudo[349347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:46:35 compute-0 sudo[349347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:46:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:36 compute-0 sudo[349347]: pam_unix(sudo:session): session closed for user root
Dec 13 08:46:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 08:46:36 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 08:46:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:46:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:46:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:46:36 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:46:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:46:36 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:46:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:46:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:46:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:46:36 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:46:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:46:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:46:36 compute-0 sudo[349403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:46:36 compute-0 sudo[349403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:46:36 compute-0 sudo[349403]: pam_unix(sudo:session): session closed for user root
Dec 13 08:46:36 compute-0 sudo[349428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:46:36 compute-0 sudo[349428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:46:36 compute-0 nova_compute[248510]: 2025-12-13 08:46:36.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:36 compute-0 podman[349466]: 2025-12-13 08:46:36.820355184 +0000 UTC m=+0.049212536 container create 1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 08:46:36 compute-0 systemd[1]: Started libpod-conmon-1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3.scope.
Dec 13 08:46:36 compute-0 podman[349466]: 2025-12-13 08:46:36.796224379 +0000 UTC m=+0.025081721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:46:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:46:36 compute-0 podman[349466]: 2025-12-13 08:46:36.916875306 +0000 UTC m=+0.145732668 container init 1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_elbakyan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:46:36 compute-0 podman[349466]: 2025-12-13 08:46:36.925473142 +0000 UTC m=+0.154330454 container start 1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Dec 13 08:46:36 compute-0 peaceful_elbakyan[349483]: 167 167
Dec 13 08:46:36 compute-0 systemd[1]: libpod-1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3.scope: Deactivated successfully.
Dec 13 08:46:36 compute-0 podman[349466]: 2025-12-13 08:46:36.933216897 +0000 UTC m=+0.162074219 container attach 1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 08:46:36 compute-0 podman[349466]: 2025-12-13 08:46:36.934464268 +0000 UTC m=+0.163321590 container died 1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 08:46:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b75d4a3f798b042ae552d92066605b9f818fd7c601d570181bdfc85730251278-merged.mount: Deactivated successfully.
Dec 13 08:46:36 compute-0 podman[349466]: 2025-12-13 08:46:36.977259252 +0000 UTC m=+0.206116564 container remove 1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:46:36 compute-0 systemd[1]: libpod-conmon-1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3.scope: Deactivated successfully.
Dec 13 08:46:37 compute-0 podman[349507]: 2025-12-13 08:46:37.155843673 +0000 UTC m=+0.041368939 container create 8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 08:46:37 compute-0 systemd[1]: Started libpod-conmon-8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47.scope.
Dec 13 08:46:37 compute-0 ceph-mon[76537]: pgmap v2520: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 08:46:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:46:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:46:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:46:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:46:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:46:37 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:46:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:46:37 compute-0 podman[349507]: 2025-12-13 08:46:37.135904433 +0000 UTC m=+0.021429699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd97c874009ea6ce0e691bc157e9ee27f1a85ced685a2783ff2d8fc1f96747a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd97c874009ea6ce0e691bc157e9ee27f1a85ced685a2783ff2d8fc1f96747a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd97c874009ea6ce0e691bc157e9ee27f1a85ced685a2783ff2d8fc1f96747a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd97c874009ea6ce0e691bc157e9ee27f1a85ced685a2783ff2d8fc1f96747a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd97c874009ea6ce0e691bc157e9ee27f1a85ced685a2783ff2d8fc1f96747a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:37 compute-0 podman[349507]: 2025-12-13 08:46:37.247588226 +0000 UTC m=+0.133113492 container init 8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:46:37 compute-0 podman[349507]: 2025-12-13 08:46:37.259455414 +0000 UTC m=+0.144980660 container start 8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 08:46:37 compute-0 podman[349507]: 2025-12-13 08:46:37.263335491 +0000 UTC m=+0.148860757 container attach 8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:46:37 compute-0 wonderful_payne[349523]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:46:37 compute-0 wonderful_payne[349523]: --> All data devices are unavailable
Dec 13 08:46:37 compute-0 systemd[1]: libpod-8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47.scope: Deactivated successfully.
Dec 13 08:46:37 compute-0 podman[349507]: 2025-12-13 08:46:37.792537531 +0000 UTC m=+0.678062797 container died 8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bd97c874009ea6ce0e691bc157e9ee27f1a85ced685a2783ff2d8fc1f96747a-merged.mount: Deactivated successfully.
Dec 13 08:46:37 compute-0 podman[349507]: 2025-12-13 08:46:37.839825978 +0000 UTC m=+0.725351224 container remove 8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:46:37 compute-0 systemd[1]: libpod-conmon-8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47.scope: Deactivated successfully.
Dec 13 08:46:37 compute-0 sudo[349428]: pam_unix(sudo:session): session closed for user root
Dec 13 08:46:37 compute-0 sudo[349554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:46:37 compute-0 sudo[349554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:46:37 compute-0 sudo[349554]: pam_unix(sudo:session): session closed for user root
Dec 13 08:46:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:38 compute-0 sudo[349579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:46:38 compute-0 sudo[349579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:46:38 compute-0 podman[349616]: 2025-12-13 08:46:38.306973161 +0000 UTC m=+0.038467616 container create 18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:46:38 compute-0 systemd[1]: Started libpod-conmon-18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4.scope.
Dec 13 08:46:38 compute-0 podman[349616]: 2025-12-13 08:46:38.290881858 +0000 UTC m=+0.022376323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:46:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:46:38 compute-0 podman[349616]: 2025-12-13 08:46:38.403193576 +0000 UTC m=+0.134688031 container init 18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 08:46:38 compute-0 podman[349616]: 2025-12-13 08:46:38.412413567 +0000 UTC m=+0.143908002 container start 18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 08:46:38 compute-0 podman[349616]: 2025-12-13 08:46:38.415812543 +0000 UTC m=+0.147307008 container attach 18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_raman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 08:46:38 compute-0 great_raman[349632]: 167 167
Dec 13 08:46:38 compute-0 systemd[1]: libpod-18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4.scope: Deactivated successfully.
Dec 13 08:46:38 compute-0 podman[349616]: 2025-12-13 08:46:38.418768207 +0000 UTC m=+0.150262642 container died 18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_raman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 08:46:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d42d3b9980440864b7c9f9ec2e5444a633c4fbd4ec5678596c45541cd821da42-merged.mount: Deactivated successfully.
Dec 13 08:46:38 compute-0 podman[349616]: 2025-12-13 08:46:38.459196001 +0000 UTC m=+0.190690436 container remove 18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 08:46:38 compute-0 systemd[1]: libpod-conmon-18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4.scope: Deactivated successfully.
Dec 13 08:46:38 compute-0 podman[349655]: 2025-12-13 08:46:38.623696889 +0000 UTC m=+0.039508512 container create 49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:46:38 compute-0 systemd[1]: Started libpod-conmon-49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55.scope.
Dec 13 08:46:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac8b695bb486b6e2989e448b40bd438332c60f91929ba36e5c4167e76f09a31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac8b695bb486b6e2989e448b40bd438332c60f91929ba36e5c4167e76f09a31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac8b695bb486b6e2989e448b40bd438332c60f91929ba36e5c4167e76f09a31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac8b695bb486b6e2989e448b40bd438332c60f91929ba36e5c4167e76f09a31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:38 compute-0 podman[349655]: 2025-12-13 08:46:38.69264173 +0000 UTC m=+0.108453383 container init 49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:46:38 compute-0 podman[349655]: 2025-12-13 08:46:38.698450315 +0000 UTC m=+0.114261928 container start 49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:46:38 compute-0 podman[349655]: 2025-12-13 08:46:38.701335598 +0000 UTC m=+0.117147251 container attach 49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:46:38 compute-0 podman[349655]: 2025-12-13 08:46:38.606707903 +0000 UTC m=+0.022519566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:46:38 compute-0 kind_lovelace[349671]: {
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:     "0": [
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:         {
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "devices": [
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "/dev/loop3"
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             ],
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_name": "ceph_lv0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_size": "21470642176",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "name": "ceph_lv0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "tags": {
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.cluster_name": "ceph",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.crush_device_class": "",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.encrypted": "0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.objectstore": "bluestore",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.osd_id": "0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.type": "block",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.vdo": "0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.with_tpm": "0"
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             },
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "type": "block",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "vg_name": "ceph_vg0"
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:         }
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:     ],
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:     "1": [
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:         {
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "devices": [
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "/dev/loop4"
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             ],
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_name": "ceph_lv1",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_size": "21470642176",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "name": "ceph_lv1",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "tags": {
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.cluster_name": "ceph",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.crush_device_class": "",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.encrypted": "0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.objectstore": "bluestore",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.osd_id": "1",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.type": "block",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.vdo": "0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.with_tpm": "0"
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             },
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "type": "block",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "vg_name": "ceph_vg1"
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:         }
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:     ],
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:     "2": [
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:         {
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "devices": [
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "/dev/loop5"
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             ],
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_name": "ceph_lv2",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_size": "21470642176",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "name": "ceph_lv2",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "tags": {
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.cluster_name": "ceph",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.crush_device_class": "",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.encrypted": "0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.objectstore": "bluestore",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.osd_id": "2",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.type": "block",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.vdo": "0",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:                 "ceph.with_tpm": "0"
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             },
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "type": "block",
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:             "vg_name": "ceph_vg2"
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:         }
Dec 13 08:46:38 compute-0 kind_lovelace[349671]:     ]
Dec 13 08:46:38 compute-0 kind_lovelace[349671]: }
Dec 13 08:46:39 compute-0 systemd[1]: libpod-49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55.scope: Deactivated successfully.
Dec 13 08:46:39 compute-0 podman[349655]: 2025-12-13 08:46:39.019291037 +0000 UTC m=+0.435102690 container died 49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 08:46:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-eac8b695bb486b6e2989e448b40bd438332c60f91929ba36e5c4167e76f09a31-merged.mount: Deactivated successfully.
Dec 13 08:46:39 compute-0 ceph-mon[76537]: pgmap v2521: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:39 compute-0 podman[349655]: 2025-12-13 08:46:39.399763496 +0000 UTC m=+0.815575119 container remove 49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 08:46:39 compute-0 systemd[1]: libpod-conmon-49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55.scope: Deactivated successfully.
Dec 13 08:46:39 compute-0 sudo[349579]: pam_unix(sudo:session): session closed for user root
Dec 13 08:46:39 compute-0 sudo[349691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:46:39 compute-0 sudo[349691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:46:39 compute-0 sudo[349691]: pam_unix(sudo:session): session closed for user root
Dec 13 08:46:39 compute-0 sudo[349716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:46:39 compute-0 sudo[349716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:46:39 compute-0 podman[349753]: 2025-12-13 08:46:39.84503478 +0000 UTC m=+0.027691756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:46:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:40 compute-0 podman[349753]: 2025-12-13 08:46:40.027198841 +0000 UTC m=+0.209855797 container create 8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:46:40 compute-0 systemd[1]: Started libpod-conmon-8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234.scope.
Dec 13 08:46:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:46:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:46:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:46:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:46:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:46:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:46:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:46:40 compute-0 podman[349753]: 2025-12-13 08:46:40.245507199 +0000 UTC m=+0.428164175 container init 8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:46:40 compute-0 podman[349753]: 2025-12-13 08:46:40.259368546 +0000 UTC m=+0.442025502 container start 8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:46:40 compute-0 eager_euler[349769]: 167 167
Dec 13 08:46:40 compute-0 systemd[1]: libpod-8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234.scope: Deactivated successfully.
Dec 13 08:46:40 compute-0 podman[349753]: 2025-12-13 08:46:40.270275989 +0000 UTC m=+0.452932975 container attach 8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 08:46:40 compute-0 podman[349753]: 2025-12-13 08:46:40.27190659 +0000 UTC m=+0.454563586 container died 8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec 13 08:46:40 compute-0 nova_compute[248510]: 2025-12-13 08:46:40.318 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-432fad8ed84535cd4f63ae4841d3e4f2c0c9f27b1eb59a8ee048412a03bd482d-merged.mount: Deactivated successfully.
Dec 13 08:46:40 compute-0 podman[349753]: 2025-12-13 08:46:40.44880005 +0000 UTC m=+0.631456996 container remove 8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:46:40 compute-0 systemd[1]: libpod-conmon-8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234.scope: Deactivated successfully.
Dec 13 08:46:40 compute-0 podman[349792]: 2025-12-13 08:46:40.613901293 +0000 UTC m=+0.024730402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:46:40 compute-0 podman[349792]: 2025-12-13 08:46:40.740418558 +0000 UTC m=+0.151247637 container create f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_ellis, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 08:46:40 compute-0 systemd[1]: Started libpod-conmon-f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1.scope.
Dec 13 08:46:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7aaa2af434992e1418d1490a72497e11fc345ea0519bcd9c1bbb9574e3a883/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7aaa2af434992e1418d1490a72497e11fc345ea0519bcd9c1bbb9574e3a883/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7aaa2af434992e1418d1490a72497e11fc345ea0519bcd9c1bbb9574e3a883/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7aaa2af434992e1418d1490a72497e11fc345ea0519bcd9c1bbb9574e3a883/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:46:40 compute-0 podman[349792]: 2025-12-13 08:46:40.982983315 +0000 UTC m=+0.393812474 container init f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:46:40 compute-0 podman[349792]: 2025-12-13 08:46:40.991870168 +0000 UTC m=+0.402699247 container start f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_ellis, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:46:41 compute-0 podman[349792]: 2025-12-13 08:46:41.00030325 +0000 UTC m=+0.411132349 container attach f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 08:46:41 compute-0 ceph-mon[76537]: pgmap v2522: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:41 compute-0 nova_compute[248510]: 2025-12-13 08:46:41.626 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:41 compute-0 lvm[349886]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:46:41 compute-0 lvm[349887]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:46:41 compute-0 lvm[349887]: VG ceph_vg1 finished
Dec 13 08:46:41 compute-0 lvm[349886]: VG ceph_vg0 finished
Dec 13 08:46:41 compute-0 lvm[349889]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:46:41 compute-0 lvm[349889]: VG ceph_vg2 finished
Dec 13 08:46:41 compute-0 hardcore_ellis[349808]: {}
Dec 13 08:46:41 compute-0 systemd[1]: libpod-f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1.scope: Deactivated successfully.
Dec 13 08:46:41 compute-0 systemd[1]: libpod-f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1.scope: Consumed 1.355s CPU time.
Dec 13 08:46:41 compute-0 podman[349792]: 2025-12-13 08:46:41.803737002 +0000 UTC m=+1.214566101 container died f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:46:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a7aaa2af434992e1418d1490a72497e11fc345ea0519bcd9c1bbb9574e3a883-merged.mount: Deactivated successfully.
Dec 13 08:46:41 compute-0 podman[349792]: 2025-12-13 08:46:41.858669311 +0000 UTC m=+1.269498390 container remove f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 08:46:41 compute-0 systemd[1]: libpod-conmon-f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1.scope: Deactivated successfully.
Dec 13 08:46:41 compute-0 sudo[349716]: pam_unix(sudo:session): session closed for user root
Dec 13 08:46:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:46:41 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:46:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:46:41 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:46:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:42 compute-0 sudo[349906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:46:42 compute-0 sudo[349906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:46:42 compute-0 sudo[349906]: pam_unix(sudo:session): session closed for user root
Dec 13 08:46:42 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:46:42 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:46:42 compute-0 ceph-mon[76537]: pgmap v2523: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:46:44 compute-0 nova_compute[248510]: 2025-12-13 08:46:44.264 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:46:44 compute-0 nova_compute[248510]: 2025-12-13 08:46:44.264 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:46:44 compute-0 nova_compute[248510]: 2025-12-13 08:46:44.264 248514 INFO nova.compute.manager [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Shelving
Dec 13 08:46:44 compute-0 nova_compute[248510]: 2025-12-13 08:46:44.293 248514 DEBUG nova.virt.libvirt.driver [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:46:45 compute-0 ceph-mon[76537]: pgmap v2524: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:46:45 compute-0 nova_compute[248510]: 2025-12-13 08:46:45.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec 13 08:46:45 compute-0 podman[349932]: 2025-12-13 08:46:45.981063471 +0000 UTC m=+0.066243914 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 13 08:46:45 compute-0 podman[349933]: 2025-12-13 08:46:45.994399236 +0000 UTC m=+0.077216019 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 08:46:46 compute-0 podman[349931]: 2025-12-13 08:46:46.009986647 +0000 UTC m=+0.098059732 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 08:46:46 compute-0 kernel: tap2d164f50-a5 (unregistering): left promiscuous mode
Dec 13 08:46:46 compute-0 NetworkManager[50376]: <info>  [1765615606.5601] device (tap2d164f50-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:46:46 compute-0 ovn_controller[148476]: 2025-12-13T08:46:46Z|01006|binding|INFO|Releasing lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 from this chassis (sb_readonly=0)
Dec 13 08:46:46 compute-0 ovn_controller[148476]: 2025-12-13T08:46:46Z|01007|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 down in Southbound
Dec 13 08:46:46 compute-0 ovn_controller[148476]: 2025-12-13T08:46:46Z|01008|binding|INFO|Removing iface tap2d164f50-a5 ovn-installed in OVS
Dec 13 08:46:46 compute-0 nova_compute[248510]: 2025-12-13 08:46:46.569 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:46 compute-0 nova_compute[248510]: 2025-12-13 08:46:46.572 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.577 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:39:62 10.100.0.6'], port_security=['fa:16:3e:e2:39:62 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c3fb322f-a9db-4396-b659-2307698e5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2d164f50-a56a-4eaf-ad60-84274a0eb413) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.579 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2d164f50-a56a-4eaf-ad60-84274a0eb413 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b unbound from our chassis
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.580 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.583 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c965579-235e-46d1-a366-bb066f00776e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.584 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b namespace which is not needed anymore
Dec 13 08:46:46 compute-0 nova_compute[248510]: 2025-12-13 08:46:46.593 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:46 compute-0 nova_compute[248510]: 2025-12-13 08:46:46.628 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:46 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Deactivated successfully.
Dec 13 08:46:46 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Consumed 16.572s CPU time.
Dec 13 08:46:46 compute-0 systemd-machined[210538]: Machine qemu-127-instance-00000067 terminated.
Dec 13 08:46:46 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [NOTICE]   (347890) : haproxy version is 2.8.14-c23fe91
Dec 13 08:46:46 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [NOTICE]   (347890) : path to executable is /usr/sbin/haproxy
Dec 13 08:46:46 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [WARNING]  (347890) : Exiting Master process...
Dec 13 08:46:46 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [WARNING]  (347890) : Exiting Master process...
Dec 13 08:46:46 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [ALERT]    (347890) : Current worker (347892) exited with code 143 (Terminated)
Dec 13 08:46:46 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [WARNING]  (347890) : All workers exited. Exiting... (0)
Dec 13 08:46:46 compute-0 systemd[1]: libpod-44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7.scope: Deactivated successfully.
Dec 13 08:46:46 compute-0 podman[350016]: 2025-12-13 08:46:46.736113409 +0000 UTC m=+0.047576905 container died 44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 08:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7-userdata-shm.mount: Deactivated successfully.
Dec 13 08:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b7ca1c7f01e35473290f2070a95e46c25e0e0fafb8bf13da3280c15aeb90b0b-merged.mount: Deactivated successfully.
Dec 13 08:46:46 compute-0 podman[350016]: 2025-12-13 08:46:46.778517643 +0000 UTC m=+0.089981139 container cleanup 44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:46:46 compute-0 systemd[1]: libpod-conmon-44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7.scope: Deactivated successfully.
Dec 13 08:46:46 compute-0 podman[350048]: 2025-12-13 08:46:46.844521939 +0000 UTC m=+0.046605440 container remove 44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.851 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[306cd8ba-2bd1-4002-85fc-10c5eb295908]: (4, ('Sat Dec 13 08:46:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b (44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7)\n44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7\nSat Dec 13 08:46:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b (44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7)\n44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.853 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e8d963-92b1-4885-99f2-689bd42d946d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.854 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:46:46 compute-0 nova_compute[248510]: 2025-12-13 08:46:46.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:46 compute-0 kernel: tap6ad7f755-f0: left promiscuous mode
Dec 13 08:46:46 compute-0 nova_compute[248510]: 2025-12-13 08:46:46.872 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.876 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b232dac2-2576-4235-9d34-d503bd5169eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.891 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d30c1e-586b-42d2-a713-1f3f889aa834]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.893 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f485bb04-362e-44b4-8f92-4864f5628bf2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.910 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68bdda6c-3ec7-4db2-baf0-e2cfb887827e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807690, 'reachable_time': 38466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350076, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.913 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:46:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.913 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7e030b0b-95ec-487d-ac83-aee49dd3a5a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:46:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d6ad7f755\x2dfa29\x2d40dd\x2d89c4\x2d988d0a51cf9b.mount: Deactivated successfully.
Dec 13 08:46:47 compute-0 ceph-mon[76537]: pgmap v2525: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec 13 08:46:47 compute-0 nova_compute[248510]: 2025-12-13 08:46:47.061 248514 DEBUG nova.compute.manager [req-b6afe536-9ec3-447c-be47-5dbd2850a957 req-bf57ad3b-966c-4b72-a5fd-ca0657d0e8be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-unplugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:46:47 compute-0 nova_compute[248510]: 2025-12-13 08:46:47.062 248514 DEBUG oslo_concurrency.lockutils [req-b6afe536-9ec3-447c-be47-5dbd2850a957 req-bf57ad3b-966c-4b72-a5fd-ca0657d0e8be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:46:47 compute-0 nova_compute[248510]: 2025-12-13 08:46:47.062 248514 DEBUG oslo_concurrency.lockutils [req-b6afe536-9ec3-447c-be47-5dbd2850a957 req-bf57ad3b-966c-4b72-a5fd-ca0657d0e8be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:46:47 compute-0 nova_compute[248510]: 2025-12-13 08:46:47.062 248514 DEBUG oslo_concurrency.lockutils [req-b6afe536-9ec3-447c-be47-5dbd2850a957 req-bf57ad3b-966c-4b72-a5fd-ca0657d0e8be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:46:47 compute-0 nova_compute[248510]: 2025-12-13 08:46:47.062 248514 DEBUG nova.compute.manager [req-b6afe536-9ec3-447c-be47-5dbd2850a957 req-bf57ad3b-966c-4b72-a5fd-ca0657d0e8be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-unplugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:46:47 compute-0 nova_compute[248510]: 2025-12-13 08:46:47.063 248514 WARNING nova.compute.manager [req-b6afe536-9ec3-447c-be47-5dbd2850a957 req-bf57ad3b-966c-4b72-a5fd-ca0657d0e8be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-unplugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state active and task_state shelving.
Dec 13 08:46:47 compute-0 nova_compute[248510]: 2025-12-13 08:46:47.310 248514 INFO nova.virt.libvirt.driver [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance shutdown successfully after 3 seconds.
Dec 13 08:46:47 compute-0 nova_compute[248510]: 2025-12-13 08:46:47.317 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance destroyed successfully.
Dec 13 08:46:47 compute-0 nova_compute[248510]: 2025-12-13 08:46:47.317 248514 DEBUG nova.objects.instance [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'numa_topology' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:46:47 compute-0 nova_compute[248510]: 2025-12-13 08:46:47.637 248514 INFO nova.virt.libvirt.driver [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Beginning cold snapshot process
Dec 13 08:46:47 compute-0 nova_compute[248510]: 2025-12-13 08:46:47.823 248514 DEBUG nova.virt.libvirt.imagebackend [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:46:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 5.4 KiB/s wr, 1 op/s
Dec 13 08:46:48 compute-0 nova_compute[248510]: 2025-12-13 08:46:48.132 248514 DEBUG nova.storage.rbd_utils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] creating snapshot(eb3953769aa74c6bbf42ade93aaf4c37) on rbd image(c3fb322f-a9db-4396-b659-2307698e5524_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:46:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Dec 13 08:46:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Dec 13 08:46:49 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Dec 13 08:46:49 compute-0 ceph-mon[76537]: pgmap v2526: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 5.4 KiB/s wr, 1 op/s
Dec 13 08:46:49 compute-0 nova_compute[248510]: 2025-12-13 08:46:49.089 248514 DEBUG nova.storage.rbd_utils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] cloning vms/c3fb322f-a9db-4396-b659-2307698e5524_disk@eb3953769aa74c6bbf42ade93aaf4c37 to images/04012c2a-611f-4e76-a6ad-a0a4e85f7f7e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:46:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:46:49 compute-0 nova_compute[248510]: 2025-12-13 08:46:49.161 248514 DEBUG nova.storage.rbd_utils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] flattening images/04012c2a-611f-4e76-a6ad-a0a4e85f7f7e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:46:49 compute-0 nova_compute[248510]: 2025-12-13 08:46:49.225 248514 DEBUG nova.compute.manager [req-ca317561-40cb-4be8-bd25-1e61957f6940 req-6d77a9d0-0ef0-4153-90c9-6ad1dda35663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:46:49 compute-0 nova_compute[248510]: 2025-12-13 08:46:49.225 248514 DEBUG oslo_concurrency.lockutils [req-ca317561-40cb-4be8-bd25-1e61957f6940 req-6d77a9d0-0ef0-4153-90c9-6ad1dda35663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:46:49 compute-0 nova_compute[248510]: 2025-12-13 08:46:49.225 248514 DEBUG oslo_concurrency.lockutils [req-ca317561-40cb-4be8-bd25-1e61957f6940 req-6d77a9d0-0ef0-4153-90c9-6ad1dda35663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:46:49 compute-0 nova_compute[248510]: 2025-12-13 08:46:49.226 248514 DEBUG oslo_concurrency.lockutils [req-ca317561-40cb-4be8-bd25-1e61957f6940 req-6d77a9d0-0ef0-4153-90c9-6ad1dda35663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:46:49 compute-0 nova_compute[248510]: 2025-12-13 08:46:49.227 248514 DEBUG nova.compute.manager [req-ca317561-40cb-4be8-bd25-1e61957f6940 req-6d77a9d0-0ef0-4153-90c9-6ad1dda35663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:46:49 compute-0 nova_compute[248510]: 2025-12-13 08:46:49.228 248514 WARNING nova.compute.manager [req-ca317561-40cb-4be8-bd25-1e61957f6940 req-6d77a9d0-0ef0-4153-90c9-6ad1dda35663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state active and task_state shelving_image_uploading.
Dec 13 08:46:49 compute-0 nova_compute[248510]: 2025-12-13 08:46:49.626 248514 DEBUG nova.storage.rbd_utils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] removing snapshot(eb3953769aa74c6bbf42ade93aaf4c37) on rbd image(c3fb322f-a9db-4396-b659-2307698e5524_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:46:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 815 KiB/s rd, 10 KiB/s wr, 4 op/s
Dec 13 08:46:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Dec 13 08:46:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Dec 13 08:46:50 compute-0 ceph-mon[76537]: osdmap e268: 3 total, 3 up, 3 in
Dec 13 08:46:50 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Dec 13 08:46:50 compute-0 nova_compute[248510]: 2025-12-13 08:46:50.098 248514 DEBUG nova.storage.rbd_utils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] creating snapshot(snap) on rbd image(04012c2a-611f-4e76-a6ad-a0a4e85f7f7e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:46:50 compute-0 nova_compute[248510]: 2025-12-13 08:46:50.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Dec 13 08:46:51 compute-0 ceph-mon[76537]: pgmap v2528: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 815 KiB/s rd, 10 KiB/s wr, 4 op/s
Dec 13 08:46:51 compute-0 ceph-mon[76537]: osdmap e269: 3 total, 3 up, 3 in
Dec 13 08:46:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Dec 13 08:46:51 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Dec 13 08:46:51 compute-0 nova_compute[248510]: 2025-12-13 08:46:51.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 136 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.7 MiB/s wr, 61 op/s
Dec 13 08:46:52 compute-0 ceph-mon[76537]: osdmap e270: 3 total, 3 up, 3 in
Dec 13 08:46:52 compute-0 nova_compute[248510]: 2025-12-13 08:46:52.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:46:53 compute-0 ceph-mon[76537]: pgmap v2531: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 136 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.7 MiB/s wr, 61 op/s
Dec 13 08:46:53 compute-0 nova_compute[248510]: 2025-12-13 08:46:53.210 248514 INFO nova.virt.libvirt.driver [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Snapshot image upload complete
Dec 13 08:46:53 compute-0 nova_compute[248510]: 2025-12-13 08:46:53.210 248514 DEBUG nova.compute.manager [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:46:53 compute-0 nova_compute[248510]: 2025-12-13 08:46:53.345 248514 INFO nova.compute.manager [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Shelve offloading
Dec 13 08:46:53 compute-0 nova_compute[248510]: 2025-12-13 08:46:53.353 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance destroyed successfully.
Dec 13 08:46:53 compute-0 nova_compute[248510]: 2025-12-13 08:46:53.353 248514 DEBUG nova.compute.manager [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:46:53 compute-0 nova_compute[248510]: 2025-12-13 08:46:53.356 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:46:53 compute-0 nova_compute[248510]: 2025-12-13 08:46:53.356 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:46:53 compute-0 nova_compute[248510]: 2025-12-13 08:46:53.356 248514 DEBUG nova.network.neutron [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:46:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 164 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 4.6 MiB/s wr, 156 op/s
Dec 13 08:46:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:46:55 compute-0 ceph-mon[76537]: pgmap v2532: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 164 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 4.6 MiB/s wr, 156 op/s
Dec 13 08:46:55 compute-0 nova_compute[248510]: 2025-12-13 08:46:55.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:55.425 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:46:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:55.426 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:46:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:46:55.426 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:46:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.7 MiB/s wr, 150 op/s
Dec 13 08:46:56 compute-0 nova_compute[248510]: 2025-12-13 08:46:56.182 248514 DEBUG nova.network.neutron [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:46:56 compute-0 nova_compute[248510]: 2025-12-13 08:46:56.207 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:46:56 compute-0 nova_compute[248510]: 2025-12-13 08:46:56.672 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:56 compute-0 nova_compute[248510]: 2025-12-13 08:46:56.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:46:57 compute-0 ceph-mon[76537]: pgmap v2533: 321 pgs: 321 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.7 MiB/s wr, 150 op/s
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.599 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance destroyed successfully.
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.599 248514 DEBUG nova.objects.instance [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'resources' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.631 248514 DEBUG nova.virt.libvirt.vif [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:45:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member',shelved_at='2025-12-13T08:46:53.210929',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='04012c2a-611f-4e76-a6ad-a0a4e85f7f7e'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:46:47Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.632 248514 DEBUG nova.network.os_vif_util [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.634 248514 DEBUG nova.network.os_vif_util [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.635 248514 DEBUG os_vif [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.637 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d164f50-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.646 248514 INFO os_vif [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5')
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.750 248514 DEBUG nova.compute.manager [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-changed-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.750 248514 DEBUG nova.compute.manager [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Refreshing instance network info cache due to event network-changed-2d164f50-a56a-4eaf-ad60-84274a0eb413. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.751 248514 DEBUG oslo_concurrency.lockutils [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.751 248514 DEBUG oslo_concurrency.lockutils [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.751 248514 DEBUG nova.network.neutron [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Refreshing network info cache for port 2d164f50-a56a-4eaf-ad60-84274a0eb413 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.946 248514 INFO nova.virt.libvirt.driver [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deleting instance files /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524_del
Dec 13 08:46:57 compute-0 nova_compute[248510]: 2025-12-13 08:46:57.947 248514 INFO nova.virt.libvirt.driver [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deletion of /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524_del complete
Dec 13 08:46:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 170 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.8 MiB/s wr, 130 op/s
Dec 13 08:46:58 compute-0 nova_compute[248510]: 2025-12-13 08:46:58.061 248514 INFO nova.scheduler.client.report [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Deleted allocations for instance c3fb322f-a9db-4396-b659-2307698e5524
Dec 13 08:46:58 compute-0 nova_compute[248510]: 2025-12-13 08:46:58.139 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:46:58 compute-0 nova_compute[248510]: 2025-12-13 08:46:58.139 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:46:58 compute-0 nova_compute[248510]: 2025-12-13 08:46:58.174 248514 DEBUG oslo_concurrency.processutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:46:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:46:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3384703289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:46:58 compute-0 nova_compute[248510]: 2025-12-13 08:46:58.774 248514 DEBUG oslo_concurrency.processutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:46:58 compute-0 nova_compute[248510]: 2025-12-13 08:46:58.783 248514 DEBUG nova.compute.provider_tree [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:46:58 compute-0 nova_compute[248510]: 2025-12-13 08:46:58.811 248514 DEBUG nova.scheduler.client.report [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:46:58 compute-0 nova_compute[248510]: 2025-12-13 08:46:58.845 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:46:58 compute-0 nova_compute[248510]: 2025-12-13 08:46:58.904 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:46:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:46:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Dec 13 08:46:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Dec 13 08:46:59 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Dec 13 08:46:59 compute-0 ceph-mon[76537]: pgmap v2534: 321 pgs: 321 active+clean; 170 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.8 MiB/s wr, 130 op/s
Dec 13 08:46:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3384703289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:46:59 compute-0 ceph-mon[76537]: osdmap e271: 3 total, 3 up, 3 in
Dec 13 08:46:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 144 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 92 op/s
Dec 13 08:47:00 compute-0 ceph-mon[76537]: pgmap v2536: 321 pgs: 321 active+clean; 144 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 92 op/s
Dec 13 08:47:00 compute-0 nova_compute[248510]: 2025-12-13 08:47:00.943 248514 DEBUG nova.network.neutron [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updated VIF entry in instance network info cache for port 2d164f50-a56a-4eaf-ad60-84274a0eb413. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:47:00 compute-0 nova_compute[248510]: 2025-12-13 08:47:00.944 248514 DEBUG nova.network.neutron [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": null, "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap2d164f50-a5", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:47:00 compute-0 nova_compute[248510]: 2025-12-13 08:47:00.973 248514 DEBUG oslo_concurrency.lockutils [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:47:01 compute-0 nova_compute[248510]: 2025-12-13 08:47:01.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:01 compute-0 nova_compute[248510]: 2025-12-13 08:47:01.811 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615606.810217, c3fb322f-a9db-4396-b659-2307698e5524 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:47:01 compute-0 nova_compute[248510]: 2025-12-13 08:47:01.812 248514 INFO nova.compute.manager [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Stopped (Lifecycle Event)
Dec 13 08:47:01 compute-0 nova_compute[248510]: 2025-12-13 08:47:01.843 248514 DEBUG nova.compute.manager [None req-46bdaf19-b3a0-40b3-be4b-8f2a4ac9312e - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 120 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 101 op/s
Dec 13 08:47:02 compute-0 nova_compute[248510]: 2025-12-13 08:47:02.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:02 compute-0 nova_compute[248510]: 2025-12-13 08:47:02.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:47:02 compute-0 nova_compute[248510]: 2025-12-13 08:47:02.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:47:02 compute-0 nova_compute[248510]: 2025-12-13 08:47:02.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:47:02 compute-0 nova_compute[248510]: 2025-12-13 08:47:02.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:47:03 compute-0 ceph-mon[76537]: pgmap v2537: 321 pgs: 321 active+clean; 120 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 101 op/s
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.641 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.642 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.642 248514 INFO nova.compute.manager [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Unshelving
Dec 13 08:47:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:03.749 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.750 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:03.750 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.762 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.763 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.768 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'pci_requests' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.801 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'numa_topology' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.827 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.828 248514 INFO nova.compute.claims [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.963 248514 DEBUG nova.scheduler.client.report [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 08:47:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 1.9 MiB/s wr, 43 op/s
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.995 248514 DEBUG nova.scheduler.client.report [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 08:47:03 compute-0 nova_compute[248510]: 2025-12-13 08:47:03.996 248514 DEBUG nova.compute.provider_tree [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 08:47:04 compute-0 nova_compute[248510]: 2025-12-13 08:47:04.030 248514 DEBUG nova.scheduler.client.report [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 08:47:04 compute-0 nova_compute[248510]: 2025-12-13 08:47:04.057 248514 DEBUG nova.scheduler.client.report [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 08:47:04 compute-0 nova_compute[248510]: 2025-12-13 08:47:04.100 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:47:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:47:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1798797823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:47:04 compute-0 nova_compute[248510]: 2025-12-13 08:47:04.717 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:04 compute-0 nova_compute[248510]: 2025-12-13 08:47:04.737 248514 DEBUG nova.compute.provider_tree [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:47:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:04.753 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:04 compute-0 nova_compute[248510]: 2025-12-13 08:47:04.768 248514 DEBUG nova.scheduler.client.report [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:47:04 compute-0 nova_compute[248510]: 2025-12-13 08:47:04.796 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:05 compute-0 ceph-mon[76537]: pgmap v2538: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 1.9 MiB/s wr, 43 op/s
Dec 13 08:47:05 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1798797823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:47:05 compute-0 nova_compute[248510]: 2025-12-13 08:47:05.150 248514 INFO nova.network.neutron [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating port 2d164f50-a56a-4eaf-ad60-84274a0eb413 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 13 08:47:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 13 08:47:06 compute-0 nova_compute[248510]: 2025-12-13 08:47:06.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:06 compute-0 nova_compute[248510]: 2025-12-13 08:47:06.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:47:06 compute-0 nova_compute[248510]: 2025-12-13 08:47:06.858 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:47:06 compute-0 nova_compute[248510]: 2025-12-13 08:47:06.858 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:47:06 compute-0 nova_compute[248510]: 2025-12-13 08:47:06.859 248514 DEBUG nova.network.neutron [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:47:07 compute-0 nova_compute[248510]: 2025-12-13 08:47:07.009 248514 DEBUG nova.compute.manager [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-changed-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:47:07 compute-0 nova_compute[248510]: 2025-12-13 08:47:07.010 248514 DEBUG nova.compute.manager [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Refreshing instance network info cache due to event network-changed-2d164f50-a56a-4eaf-ad60-84274a0eb413. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:47:07 compute-0 nova_compute[248510]: 2025-12-13 08:47:07.011 248514 DEBUG oslo_concurrency.lockutils [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:47:07 compute-0 ceph-mon[76537]: pgmap v2539: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 13 08:47:07 compute-0 nova_compute[248510]: 2025-12-13 08:47:07.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:07 compute-0 nova_compute[248510]: 2025-12-13 08:47:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:47:07 compute-0 nova_compute[248510]: 2025-12-13 08:47:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:47:07 compute-0 nova_compute[248510]: 2025-12-13 08:47:07.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:07 compute-0 nova_compute[248510]: 2025-12-13 08:47:07.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:07 compute-0 nova_compute[248510]: 2025-12-13 08:47:07.805 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:07 compute-0 nova_compute[248510]: 2025-12-13 08:47:07.805 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:47:07 compute-0 nova_compute[248510]: 2025-12-13 08:47:07.805 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 818 B/s wr, 29 op/s
Dec 13 08:47:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:47:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3961085225' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:47:08 compute-0 nova_compute[248510]: 2025-12-13 08:47:08.397 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:08 compute-0 ceph-mon[76537]: pgmap v2540: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 818 B/s wr, 29 op/s
Dec 13 08:47:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3961085225' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:47:08 compute-0 nova_compute[248510]: 2025-12-13 08:47:08.565 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:47:08 compute-0 nova_compute[248510]: 2025-12-13 08:47:08.566 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3758MB free_disk=59.98753574863076GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:47:08 compute-0 nova_compute[248510]: 2025-12-13 08:47:08.567 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:08 compute-0 nova_compute[248510]: 2025-12-13 08:47:08.567 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:08 compute-0 nova_compute[248510]: 2025-12-13 08:47:08.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c3fb322f-a9db-4396-b659-2307698e5524 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:47:08 compute-0 nova_compute[248510]: 2025-12-13 08:47:08.688 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:47:08 compute-0 nova_compute[248510]: 2025-12-13 08:47:08.688 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:47:08 compute-0 nova_compute[248510]: 2025-12-13 08:47:08.768 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:47:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:47:09
Dec 13 08:47:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:47:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:47:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'backups', '.mgr', 'vms', 'default.rgw.log']
Dec 13 08:47:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:47:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:47:09 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791921854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.361 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.369 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.394 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.435 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.436 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2791921854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.648 248514 DEBUG nova.network.neutron [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.672 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.674 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.674 248514 INFO nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Creating image(s)
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.698 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.701 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'trusted_certs' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.702 248514 DEBUG oslo_concurrency.lockutils [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.703 248514 DEBUG nova.network.neutron [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Refreshing network info cache for port 2d164f50-a56a-4eaf-ad60-84274a0eb413 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.744 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.762 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.765 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "fbdf5d23e6e4d187216e212e7434ae52f5a80494" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:09 compute-0 nova_compute[248510]: 2025-12-13 08:47:09.766 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "fbdf5d23e6e4d187216e212e7434ae52f5a80494" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 757 B/s wr, 27 op/s
Dec 13 08:47:10 compute-0 nova_compute[248510]: 2025-12-13 08:47:10.109 248514 DEBUG nova.virt.libvirt.imagebackend [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/04012c2a-611f-4e76-a6ad-a0a4e85f7f7e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/04012c2a-611f-4e76-a6ad-a0a4e85f7f7e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:47:10 compute-0 nova_compute[248510]: 2025-12-13 08:47:10.160 248514 DEBUG nova.virt.libvirt.imagebackend [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Selected location: {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/04012c2a-611f-4e76-a6ad-a0a4e85f7f7e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 13 08:47:10 compute-0 nova_compute[248510]: 2025-12-13 08:47:10.161 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] cloning images/04012c2a-611f-4e76-a6ad-a0a4e85f7f7e@snap to None/c3fb322f-a9db-4396-b659-2307698e5524_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:47:10 compute-0 ceph-mon[76537]: pgmap v2541: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 757 B/s wr, 27 op/s
Dec 13 08:47:10 compute-0 nova_compute[248510]: 2025-12-13 08:47:10.641 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "fbdf5d23e6e4d187216e212e7434ae52f5a80494" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:47:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:47:10 compute-0 nova_compute[248510]: 2025-12-13 08:47:10.756 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'migration_context' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:10 compute-0 nova_compute[248510]: 2025-12-13 08:47:10.940 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] flattening vms/c3fb322f-a9db-4396-b659-2307698e5524_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:47:11 compute-0 nova_compute[248510]: 2025-12-13 08:47:11.436 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:47:11 compute-0 nova_compute[248510]: 2025-12-13 08:47:11.436 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:47:11 compute-0 nova_compute[248510]: 2025-12-13 08:47:11.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 682 B/s wr, 15 op/s
Dec 13 08:47:12 compute-0 nova_compute[248510]: 2025-12-13 08:47:12.319 248514 DEBUG nova.network.neutron [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updated VIF entry in instance network info cache for port 2d164f50-a56a-4eaf-ad60-84274a0eb413. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:47:12 compute-0 nova_compute[248510]: 2025-12-13 08:47:12.320 248514 DEBUG nova.network.neutron [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:47:12 compute-0 nova_compute[248510]: 2025-12-13 08:47:12.339 248514 DEBUG oslo_concurrency.lockutils [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:47:12 compute-0 ceph-mon[76537]: pgmap v2542: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 682 B/s wr, 15 op/s
Dec 13 08:47:12 compute-0 nova_compute[248510]: 2025-12-13 08:47:12.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.429 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Image rbd:vms/c3fb322f-a9db-4396-b659-2307698e5524_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.429 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.430 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Ensure instance console log exists: /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.430 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.430 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.431 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.434 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Start _get_guest_xml network_info=[{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:46:44Z,direct_url=<?>,disk_format='raw',id=04012c2a-611f-4e76-a6ad-a0a4e85f7f7e,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-235457723-shelved',owner='d2d4d23379cc4b03bbdd72a9134fdd9b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:46:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.439 248514 WARNING nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.445 248514 DEBUG nova.virt.libvirt.host [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.445 248514 DEBUG nova.virt.libvirt.host [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.449 248514 DEBUG nova.virt.libvirt.host [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.450 248514 DEBUG nova.virt.libvirt.host [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.450 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.450 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:46:44Z,direct_url=<?>,disk_format='raw',id=04012c2a-611f-4e76-a6ad-a0a4e85f7f7e,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-235457723-shelved',owner='d2d4d23379cc4b03bbdd72a9134fdd9b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:46:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.451 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.451 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.451 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.451 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.452 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.452 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.452 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.452 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.452 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.453 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.453 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'vcpu_model' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:13 compute-0 nova_compute[248510]: 2025-12-13 08:47:13.487 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 143 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 28 op/s
Dec 13 08:47:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:47:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434996582' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.058 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.088 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.092 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:47:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2434996582' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:47:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:47:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535499270' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.628 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.630 248514 DEBUG nova.virt.libvirt.vif [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='04012c2a-611f-4e76-a6ad-a0a4e85f7f7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:45:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio
',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member',shelved_at='2025-12-13T08:46:53.210929',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='04012c2a-611f-4e76-a6ad-a0a4e85f7f7e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:47:03Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.631 248514 DEBUG nova.network.os_vif_util [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.631 248514 DEBUG nova.network.os_vif_util [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.633 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'pci_devices' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.668 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:47:14 compute-0 nova_compute[248510]:   <uuid>c3fb322f-a9db-4396-b659-2307698e5524</uuid>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   <name>instance-00000067</name>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <nova:name>tempest-ServersNegativeTestJSON-server-235457723</nova:name>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:47:13</nova:creationTime>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:47:14 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:47:14 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:47:14 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:47:14 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:47:14 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:47:14 compute-0 nova_compute[248510]:         <nova:user uuid="8948a1b0c26f43129cb50ef6f3872ecd">tempest-ServersNegativeTestJSON-1471623163-project-member</nova:user>
Dec 13 08:47:14 compute-0 nova_compute[248510]:         <nova:project uuid="d2d4d23379cc4b03bbdd72a9134fdd9b">tempest-ServersNegativeTestJSON-1471623163</nova:project>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="04012c2a-611f-4e76-a6ad-a0a4e85f7f7e"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:47:14 compute-0 nova_compute[248510]:         <nova:port uuid="2d164f50-a56a-4eaf-ad60-84274a0eb413">
Dec 13 08:47:14 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <system>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <entry name="serial">c3fb322f-a9db-4396-b659-2307698e5524</entry>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <entry name="uuid">c3fb322f-a9db-4396-b659-2307698e5524</entry>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     </system>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   <os>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   </os>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   <features>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   </features>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c3fb322f-a9db-4396-b659-2307698e5524_disk">
Dec 13 08:47:14 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       </source>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:47:14 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c3fb322f-a9db-4396-b659-2307698e5524_disk.config">
Dec 13 08:47:14 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       </source>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:47:14 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:e2:39:62"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <target dev="tap2d164f50-a5"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/console.log" append="off"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <video>
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     </video>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <input type="keyboard" bus="usb"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:47:14 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:47:14 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:47:14 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:47:14 compute-0 nova_compute[248510]: </domain>
Dec 13 08:47:14 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.669 248514 DEBUG nova.compute.manager [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Preparing to wait for external event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.670 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.670 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.671 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.672 248514 DEBUG nova.virt.libvirt.vif [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='04012c2a-611f-4e76-a6ad-a0a4e85f7f7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:45:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member',shelved_at='2025-12-13T08:46:53.210929',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='04012c2a-611f-4e76-a6ad-a0a4e85f7f7e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:47:03Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.672 248514 DEBUG nova.network.os_vif_util [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.673 248514 DEBUG nova.network.os_vif_util [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.673 248514 DEBUG os_vif [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.675 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.675 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.678 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.678 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d164f50-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.679 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d164f50-a5, col_values=(('external_ids', {'iface-id': '2d164f50-a56a-4eaf-ad60-84274a0eb413', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:39:62', 'vm-uuid': 'c3fb322f-a9db-4396-b659-2307698e5524'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:14 compute-0 NetworkManager[50376]: <info>  [1765615634.6817] manager: (tap2d164f50-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.684 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.687 248514 INFO os_vif [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5')
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.887 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.888 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.889 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No VIF found with MAC fa:16:3e:e2:39:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.889 248514 INFO nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Using config drive
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.913 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.945 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'ec2_ids' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:14 compute-0 nova_compute[248510]: 2025-12-13 08:47:14.989 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'keypairs' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:47:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/48211187' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:47:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:47:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/48211187' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:47:15 compute-0 nova_compute[248510]: 2025-12-13 08:47:15.622 248514 INFO nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Creating config drive at /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config
Dec 13 08:47:15 compute-0 nova_compute[248510]: 2025-12-13 08:47:15.627 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1757rw03 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:15 compute-0 ceph-mon[76537]: pgmap v2543: 321 pgs: 321 active+clean; 143 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 28 op/s
Dec 13 08:47:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/535499270' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:47:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/48211187' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:47:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/48211187' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:47:15 compute-0 nova_compute[248510]: 2025-12-13 08:47:15.777 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1757rw03" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:15 compute-0 nova_compute[248510]: 2025-12-13 08:47:15.811 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:15 compute-0 nova_compute[248510]: 2025-12-13 08:47:15.815 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config c3fb322f-a9db-4396-b659-2307698e5524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 199 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 75 op/s
Dec 13 08:47:16 compute-0 nova_compute[248510]: 2025-12-13 08:47:16.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:16 compute-0 nova_compute[248510]: 2025-12-13 08:47:16.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:47:16 compute-0 ceph-mon[76537]: pgmap v2544: 321 pgs: 321 active+clean; 199 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 75 op/s
Dec 13 08:47:16 compute-0 podman[350668]: 2025-12-13 08:47:16.992941346 +0000 UTC m=+0.065355431 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 08:47:17 compute-0 podman[350667]: 2025-12-13 08:47:17.013046221 +0000 UTC m=+0.092683857 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 08:47:17 compute-0 podman[350666]: 2025-12-13 08:47:17.017471712 +0000 UTC m=+0.096733359 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:47:17 compute-0 nova_compute[248510]: 2025-12-13 08:47:17.344 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config c3fb322f-a9db-4396-b659-2307698e5524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:17 compute-0 nova_compute[248510]: 2025-12-13 08:47:17.345 248514 INFO nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deleting local config drive /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config because it was imported into RBD.
Dec 13 08:47:17 compute-0 kernel: tap2d164f50-a5: entered promiscuous mode
Dec 13 08:47:17 compute-0 NetworkManager[50376]: <info>  [1765615637.4090] manager: (tap2d164f50-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/414)
Dec 13 08:47:17 compute-0 ovn_controller[148476]: 2025-12-13T08:47:17Z|01009|binding|INFO|Claiming lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 for this chassis.
Dec 13 08:47:17 compute-0 ovn_controller[148476]: 2025-12-13T08:47:17Z|01010|binding|INFO|2d164f50-a56a-4eaf-ad60-84274a0eb413: Claiming fa:16:3e:e2:39:62 10.100.0.6
Dec 13 08:47:17 compute-0 nova_compute[248510]: 2025-12-13 08:47:17.412 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:17 compute-0 ovn_controller[148476]: 2025-12-13T08:47:17Z|01011|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 ovn-installed in OVS
Dec 13 08:47:17 compute-0 nova_compute[248510]: 2025-12-13 08:47:17.432 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.434 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:39:62 10.100.0.6'], port_security=['fa:16:3e:e2:39:62 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c3fb322f-a9db-4396-b659-2307698e5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2d164f50-a56a-4eaf-ad60-84274a0eb413) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:47:17 compute-0 nova_compute[248510]: 2025-12-13 08:47:17.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:17 compute-0 ovn_controller[148476]: 2025-12-13T08:47:17Z|01012|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 up in Southbound
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.436 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2d164f50-a56a-4eaf-ad60-84274a0eb413 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b bound to our chassis
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.437 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:47:17 compute-0 systemd-machined[210538]: New machine qemu-129-instance-00000067.
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.453 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39c2bf4b-870a-4456-aeb7-944224effc91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.454 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ad7f755-f1 in ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.457 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ad7f755-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.457 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5cf569-688b-4643-9b18-7884d81eeffc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.458 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c25a6c77-fc9c-476b-97f7-69ccf3e7fcae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 systemd[1]: Started Virtual Machine qemu-129-instance-00000067.
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.471 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e111bacb-1439-40c1-bbce-f8ba37c6ae6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 systemd-udevd[350745]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.489 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:5d:27 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3cd63fa2-b81b-489a-a8cf-c4a874eedf7b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cd63fa2-b81b-489a-a8cf-c4a874eedf7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062cf52f23fa4a85853b3a51dc81e171', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f01d2e7-04a2-4e90-bd8e-58cdccdf83ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=14861846-11b2-42ff-8374-23cd7b7371bb) old=Port_Binding(mac=['fa:16:3e:9a:5d:27 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3cd63fa2-b81b-489a-a8cf-c4a874eedf7b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cd63fa2-b81b-489a-a8cf-c4a874eedf7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062cf52f23fa4a85853b3a51dc81e171', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:47:17 compute-0 NetworkManager[50376]: <info>  [1765615637.4915] device (tap2d164f50-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[718e6fb4-0c9b-446c-ae9b-b6576559d778]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 NetworkManager[50376]: <info>  [1765615637.4937] device (tap2d164f50-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.532 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e99f56dc-69cc-48ff-9854-dad8fe009a5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 NetworkManager[50376]: <info>  [1765615637.5386] manager: (tap6ad7f755-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/415)
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.537 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d37b9515-8748-4af6-84a6-abea7b2df971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.569 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa594db-9ad2-43f6-9774-c51f81179de2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.573 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[959307ab-b1a2-4739-bb88-c5e36e3e7c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 NetworkManager[50376]: <info>  [1765615637.5974] device (tap6ad7f755-f0): carrier: link connected
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.602 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[26e57f56-5792-449b-94a0-84df4e2475d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.623 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ee771fa4-9cf4-4dc0-a971-6b602d5c3248]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 821481, 'reachable_time': 32904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350776, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6d8e68-b480-452f-834b-e1cd59eb1636]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:35ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 821481, 'tstamp': 821481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350777, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.665 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[357b12e8-fb99-4c77-82c8-9c041f0a5e10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 821481, 'reachable_time': 32904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350778, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.706 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0a814d6b-01df-4061-b11b-5170bed1e20a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.782 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ea9529-117e-4c95-b20d-489329329dea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.784 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.784 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.784 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:17 compute-0 kernel: tap6ad7f755-f0: entered promiscuous mode
Dec 13 08:47:17 compute-0 NetworkManager[50376]: <info>  [1765615637.7869] manager: (tap6ad7f755-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Dec 13 08:47:17 compute-0 nova_compute[248510]: 2025-12-13 08:47:17.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.788 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:17 compute-0 ovn_controller[148476]: 2025-12-13T08:47:17Z|01013|binding|INFO|Releasing lport 683d7da0-6f1e-41a6-9158-6204fb05ee50 from this chassis (sb_readonly=0)
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.790 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.798 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[04077414-8523-4fa7-908d-6ba21207b31c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.799 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:47:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.800 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'env', 'PROCESS_TAG=haproxy-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:47:17 compute-0 nova_compute[248510]: 2025-12-13 08:47:17.805 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 78 op/s
Dec 13 08:47:18 compute-0 podman[350847]: 2025-12-13 08:47:18.206320836 +0000 UTC m=+0.065086454 container create 14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:47:18 compute-0 nova_compute[248510]: 2025-12-13 08:47:18.238 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615638.2374232, c3fb322f-a9db-4396-b659-2307698e5524 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:47:18 compute-0 nova_compute[248510]: 2025-12-13 08:47:18.240 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Started (Lifecycle Event)
Dec 13 08:47:18 compute-0 systemd[1]: Started libpod-conmon-14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81.scope.
Dec 13 08:47:18 compute-0 podman[350847]: 2025-12-13 08:47:18.174488977 +0000 UTC m=+0.033254625 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:47:18 compute-0 nova_compute[248510]: 2025-12-13 08:47:18.270 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:18 compute-0 nova_compute[248510]: 2025-12-13 08:47:18.277 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615638.2375324, c3fb322f-a9db-4396-b659-2307698e5524 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:47:18 compute-0 nova_compute[248510]: 2025-12-13 08:47:18.278 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Paused (Lifecycle Event)
Dec 13 08:47:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1c5e1c88792b2cd7c895939660c80ec1d136af36d046687f8916aa03ba3455/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:18 compute-0 nova_compute[248510]: 2025-12-13 08:47:18.302 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:18 compute-0 podman[350847]: 2025-12-13 08:47:18.305528676 +0000 UTC m=+0.164294324 container init 14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:47:18 compute-0 nova_compute[248510]: 2025-12-13 08:47:18.308 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:47:18 compute-0 podman[350847]: 2025-12-13 08:47:18.311933586 +0000 UTC m=+0.170699204 container start 14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 08:47:18 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [NOTICE]   (350872) : New worker (350874) forked
Dec 13 08:47:18 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [NOTICE]   (350872) : Loading success.
Dec 13 08:47:18 compute-0 nova_compute[248510]: 2025-12-13 08:47:18.340 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:47:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:18.372 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 14861846-11b2-42ff-8374-23cd7b7371bb in datapath 3cd63fa2-b81b-489a-a8cf-c4a874eedf7b updated
Dec 13 08:47:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:18.374 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3cd63fa2-b81b-489a-a8cf-c4a874eedf7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:47:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:18.375 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[32da3c98-e39c-4e4c-8538-a956838365ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:18 compute-0 nova_compute[248510]: 2025-12-13 08:47:18.919 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:18 compute-0 nova_compute[248510]: 2025-12-13 08:47:18.919 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:18 compute-0 nova_compute[248510]: 2025-12-13 08:47:18.942 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.028 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.028 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.034 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.035 248514 INFO nova.compute.claims [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:47:19 compute-0 ceph-mon[76537]: pgmap v2545: 321 pgs: 321 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 78 op/s
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.159 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:47:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261967043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.815 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.821 248514 DEBUG nova.compute.provider_tree [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.842 248514 DEBUG nova.scheduler.client.report [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.870 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.871 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.931 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.931 248514 DEBUG nova.network.neutron [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.958 248514 INFO nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:47:19 compute-0 nova_compute[248510]: 2025-12-13 08:47:19.979 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:47:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 86 op/s
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.088 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.090 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.090 248514 INFO nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Creating image(s)
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.110 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.131 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/261967043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.159 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.162 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.204 248514 DEBUG nova.policy [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.237 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.239 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.241 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.242 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.270 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.275 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.636 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.705 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.796 248514 DEBUG nova.objects.instance [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.816 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.816 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Ensure instance console log exists: /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.817 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.817 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:20 compute-0 nova_compute[248510]: 2025-12-13 08:47:20.817 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007717642784429154 of space, bias 1.0, pg target 0.2315292835328746 quantized to 32 (current 32)
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014251084379551505 of space, bias 1.0, pg target 0.4275325313865451 quantized to 32 (current 32)
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.839601767484647e-07 of space, bias 4.0, pg target 0.0007007522120981576 quantized to 16 (current 32)
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:47:21 compute-0 ceph-mon[76537]: pgmap v2546: 321 pgs: 321 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 86 op/s
Dec 13 08:47:21 compute-0 nova_compute[248510]: 2025-12-13 08:47:21.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:21 compute-0 nova_compute[248510]: 2025-12-13 08:47:21.846 248514 DEBUG nova.network.neutron [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Successfully created port: 1a00c927-1c7f-4af5-9337-d6e58800dc3c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:47:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 86 op/s
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.299 248514 DEBUG nova.compute.manager [req-dd637808-06bd-43ea-91d8-34d820229025 req-6ca500dc-dfed-4ffd-8e22-c0d0a355d196 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.300 248514 DEBUG oslo_concurrency.lockutils [req-dd637808-06bd-43ea-91d8-34d820229025 req-6ca500dc-dfed-4ffd-8e22-c0d0a355d196 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.300 248514 DEBUG oslo_concurrency.lockutils [req-dd637808-06bd-43ea-91d8-34d820229025 req-6ca500dc-dfed-4ffd-8e22-c0d0a355d196 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.300 248514 DEBUG oslo_concurrency.lockutils [req-dd637808-06bd-43ea-91d8-34d820229025 req-6ca500dc-dfed-4ffd-8e22-c0d0a355d196 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.300 248514 DEBUG nova.compute.manager [req-dd637808-06bd-43ea-91d8-34d820229025 req-6ca500dc-dfed-4ffd-8e22-c0d0a355d196 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Processing event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.301 248514 DEBUG nova.compute.manager [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.304 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615642.3041523, c3fb322f-a9db-4396-b659-2307698e5524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.304 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Resumed (Lifecycle Event)
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.306 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.309 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance spawned successfully.
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.334 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.338 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:47:22 compute-0 nova_compute[248510]: 2025-12-13 08:47:22.369 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:47:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Dec 13 08:47:23 compute-0 ceph-mon[76537]: pgmap v2547: 321 pgs: 321 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 86 op/s
Dec 13 08:47:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Dec 13 08:47:23 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Dec 13 08:47:23 compute-0 nova_compute[248510]: 2025-12-13 08:47:23.566 248514 DEBUG nova.network.neutron [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Successfully updated port: 1a00c927-1c7f-4af5-9337-d6e58800dc3c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:47:23 compute-0 nova_compute[248510]: 2025-12-13 08:47:23.586 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:47:23 compute-0 nova_compute[248510]: 2025-12-13 08:47:23.586 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:47:23 compute-0 nova_compute[248510]: 2025-12-13 08:47:23.586 248514 DEBUG nova.network.neutron [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:47:23 compute-0 nova_compute[248510]: 2025-12-13 08:47:23.642 248514 DEBUG nova.compute.manager [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:23 compute-0 nova_compute[248510]: 2025-12-13 08:47:23.752 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 20.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:23 compute-0 nova_compute[248510]: 2025-12-13 08:47:23.760 248514 DEBUG nova.compute.manager [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received event network-changed-1a00c927-1c7f-4af5-9337-d6e58800dc3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:47:23 compute-0 nova_compute[248510]: 2025-12-13 08:47:23.760 248514 DEBUG nova.compute.manager [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Refreshing instance network info cache due to event network-changed-1a00c927-1c7f-4af5-9337-d6e58800dc3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:47:23 compute-0 nova_compute[248510]: 2025-12-13 08:47:23.761 248514 DEBUG oslo_concurrency.lockutils [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:47:23 compute-0 nova_compute[248510]: 2025-12-13 08:47:23.912 248514 DEBUG nova.network.neutron [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:47:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 238 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.1 MiB/s wr, 135 op/s
Dec 13 08:47:24 compute-0 ceph-mon[76537]: osdmap e272: 3 total, 3 up, 3 in
Dec 13 08:47:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:47:24 compute-0 nova_compute[248510]: 2025-12-13 08:47:24.685 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:25 compute-0 nova_compute[248510]: 2025-12-13 08:47:25.228 248514 DEBUG nova.compute.manager [req-f0b6e4c7-1863-4a86-b20f-43027d47f648 req-6ea4690c-7916-4625-81d0-754a1c2c3018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:47:25 compute-0 nova_compute[248510]: 2025-12-13 08:47:25.229 248514 DEBUG oslo_concurrency.lockutils [req-f0b6e4c7-1863-4a86-b20f-43027d47f648 req-6ea4690c-7916-4625-81d0-754a1c2c3018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:25 compute-0 nova_compute[248510]: 2025-12-13 08:47:25.229 248514 DEBUG oslo_concurrency.lockutils [req-f0b6e4c7-1863-4a86-b20f-43027d47f648 req-6ea4690c-7916-4625-81d0-754a1c2c3018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:25 compute-0 nova_compute[248510]: 2025-12-13 08:47:25.229 248514 DEBUG oslo_concurrency.lockutils [req-f0b6e4c7-1863-4a86-b20f-43027d47f648 req-6ea4690c-7916-4625-81d0-754a1c2c3018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:25 compute-0 nova_compute[248510]: 2025-12-13 08:47:25.230 248514 DEBUG nova.compute.manager [req-f0b6e4c7-1863-4a86-b20f-43027d47f648 req-6ea4690c-7916-4625-81d0-754a1c2c3018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:47:25 compute-0 nova_compute[248510]: 2025-12-13 08:47:25.230 248514 WARNING nova.compute.manager [req-f0b6e4c7-1863-4a86-b20f-43027d47f648 req-6ea4690c-7916-4625-81d0-754a1c2c3018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state active and task_state None.
Dec 13 08:47:25 compute-0 ceph-mon[76537]: pgmap v2549: 321 pgs: 321 active+clean; 238 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.1 MiB/s wr, 135 op/s
Dec 13 08:47:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 210 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 13 08:47:26 compute-0 nova_compute[248510]: 2025-12-13 08:47:26.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:27 compute-0 ceph-mon[76537]: pgmap v2550: 321 pgs: 321 active+clean; 210 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.531 248514 DEBUG nova.network.neutron [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.561 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.562 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Instance network_info: |[{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.562 248514 DEBUG oslo_concurrency.lockutils [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.562 248514 DEBUG nova.network.neutron [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Refreshing network info cache for port 1a00c927-1c7f-4af5-9337-d6e58800dc3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.566 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Start _get_guest_xml network_info=[{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.573 248514 WARNING nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.581 248514 DEBUG nova.virt.libvirt.host [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.582 248514 DEBUG nova.virt.libvirt.host [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.596 248514 DEBUG nova.virt.libvirt.host [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.597 248514 DEBUG nova.virt.libvirt.host [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.597 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.598 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.598 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.598 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.599 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.599 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.599 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.599 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.600 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.600 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.600 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.600 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:47:27 compute-0 nova_compute[248510]: 2025-12-13 08:47:27.604 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 182 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Dec 13 08:47:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:47:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190927087' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.251 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.275 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.279 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:47:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807636306' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.855 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.857 248514 DEBUG nova.virt.libvirt.vif [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:47:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=105,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOgNSj2RX2tEOr5Rxtdc3T7qrIqjyVapwoURlTzSwBUNw2HAjV8i9+69CD+ahp0R2Tk6YrJ3W0cDR2tzHXyNVMUTiAkgjDao6U5yvxeoFoLQPs8Nmve95azrQ/Z/Vbs68Q==',key_name='tempest-TestSecurityGroupsBasicOps-80214463',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-c1fl6h76',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:47:20Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=1d73d88c-ca9a-4136-80de-fa2cf028ffb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.857 248514 DEBUG nova.network.os_vif_util [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.858 248514 DEBUG nova.network.os_vif_util [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.859 248514 DEBUG nova.objects.instance [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.882 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:47:28 compute-0 nova_compute[248510]:   <uuid>1d73d88c-ca9a-4136-80de-fa2cf028ffb7</uuid>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   <name>instance-00000069</name>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678</nova:name>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:47:27</nova:creationTime>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:47:28 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:47:28 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:47:28 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:47:28 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:47:28 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:47:28 compute-0 nova_compute[248510]:         <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 08:47:28 compute-0 nova_compute[248510]:         <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:47:28 compute-0 nova_compute[248510]:         <nova:port uuid="1a00c927-1c7f-4af5-9337-d6e58800dc3c">
Dec 13 08:47:28 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <system>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <entry name="serial">1d73d88c-ca9a-4136-80de-fa2cf028ffb7</entry>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <entry name="uuid">1d73d88c-ca9a-4136-80de-fa2cf028ffb7</entry>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     </system>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   <os>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   </os>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   <features>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   </features>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk">
Dec 13 08:47:28 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       </source>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:47:28 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk.config">
Dec 13 08:47:28 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       </source>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:47:28 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:9c:2b:42"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <target dev="tap1a00c927-1c"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/console.log" append="off"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <video>
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     </video>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:47:28 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:47:28 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:47:28 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:47:28 compute-0 nova_compute[248510]: </domain>
Dec 13 08:47:28 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.884 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Preparing to wait for external event network-vif-plugged-1a00c927-1c7f-4af5-9337-d6e58800dc3c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.884 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.884 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.885 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.885 248514 DEBUG nova.virt.libvirt.vif [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:47:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=105,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOgNSj2RX2tEOr5Rxtdc3T7qrIqjyVapwoURlTzSwBUNw2HAjV8i9+69CD+ahp0R2Tk6YrJ3W0cDR2tzHXyNVMUTiAkgjDao6U5yvxeoFoLQPs8Nmve95azrQ/Z/Vbs68Q==',key_name='tempest-TestSecurityGroupsBasicOps-80214463',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-c1fl6h76',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:47:20Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=1d73d88c-ca9a-4136-80de-fa2cf028ffb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.886 248514 DEBUG nova.network.os_vif_util [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.886 248514 DEBUG nova.network.os_vif_util [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.887 248514 DEBUG os_vif [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.888 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.889 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.893 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.894 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a00c927-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.894 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a00c927-1c, col_values=(('external_ids', {'iface-id': '1a00c927-1c7f-4af5-9337-d6e58800dc3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:2b:42', 'vm-uuid': '1d73d88c-ca9a-4136-80de-fa2cf028ffb7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:28 compute-0 NetworkManager[50376]: <info>  [1765615648.8976] manager: (tap1a00c927-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.904 248514 INFO os_vif [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c')
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.969 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.970 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.970 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:9c:2b:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.971 248514 INFO nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Using config drive
Dec 13 08:47:28 compute-0 nova_compute[248510]: 2025-12-13 08:47:28.991 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:29 compute-0 ceph-mon[76537]: pgmap v2551: 321 pgs: 321 active+clean; 182 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Dec 13 08:47:29 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/190927087' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:47:29 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2807636306' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:47:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:47:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Dec 13 08:47:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Dec 13 08:47:29 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Dec 13 08:47:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 167 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Dec 13 08:47:30 compute-0 ceph-mon[76537]: osdmap e273: 3 total, 3 up, 3 in
Dec 13 08:47:30 compute-0 ceph-mon[76537]: pgmap v2553: 321 pgs: 321 active+clean; 167 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Dec 13 08:47:30 compute-0 nova_compute[248510]: 2025-12-13 08:47:30.456 248514 DEBUG nova.network.neutron [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updated VIF entry in instance network info cache for port 1a00c927-1c7f-4af5-9337-d6e58800dc3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:47:30 compute-0 nova_compute[248510]: 2025-12-13 08:47:30.457 248514 DEBUG nova.network.neutron [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:47:30 compute-0 nova_compute[248510]: 2025-12-13 08:47:30.507 248514 DEBUG oslo_concurrency.lockutils [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:47:30 compute-0 nova_compute[248510]: 2025-12-13 08:47:30.753 248514 INFO nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Creating config drive at /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/disk.config
Dec 13 08:47:30 compute-0 nova_compute[248510]: 2025-12-13 08:47:30.758 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj1k28ezz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:30 compute-0 nova_compute[248510]: 2025-12-13 08:47:30.902 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj1k28ezz" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:30 compute-0 nova_compute[248510]: 2025-12-13 08:47:30.927 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:47:30 compute-0 nova_compute[248510]: 2025-12-13 08:47:30.931 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/disk.config 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.082 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/disk.config 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.083 248514 INFO nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Deleting local config drive /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/disk.config because it was imported into RBD.
Dec 13 08:47:31 compute-0 NetworkManager[50376]: <info>  [1765615651.1417] manager: (tap1a00c927-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/418)
Dec 13 08:47:31 compute-0 kernel: tap1a00c927-1c: entered promiscuous mode
Dec 13 08:47:31 compute-0 ovn_controller[148476]: 2025-12-13T08:47:31Z|01014|binding|INFO|Claiming lport 1a00c927-1c7f-4af5-9337-d6e58800dc3c for this chassis.
Dec 13 08:47:31 compute-0 ovn_controller[148476]: 2025-12-13T08:47:31Z|01015|binding|INFO|1a00c927-1c7f-4af5-9337-d6e58800dc3c: Claiming fa:16:3e:9c:2b:42 10.100.0.9
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.150 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.165 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:2b:42 10.100.0.9'], port_security=['fa:16:3e:9c:2b:42 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1d73d88c-ca9a-4136-80de-fa2cf028ffb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '531d7c80-e840-46e0-9afc-03ae0558f787 73f9632c-0914-472c-9969-a269b215d831', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ffad85fc-28b3-4529-8d06-0367d9c3d476, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1a00c927-1c7f-4af5-9337-d6e58800dc3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.167 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1a00c927-1c7f-4af5-9337-d6e58800dc3c in datapath b7bba2fe-699d-4423-a6e2-09604625a8f5 bound to our chassis
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.168 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b7bba2fe-699d-4423-a6e2-09604625a8f5
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.180 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e644bdeb-1b0a-4fd3-8084-8450aed965a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.181 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb7bba2fe-61 in ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.183 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb7bba2fe-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.183 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fd542ad7-6990-4744-83ac-9e516e184e8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.183 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9604580e-c702-4686-9765-e023a421cea7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 systemd-machined[210538]: New machine qemu-130-instance-00000069.
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.197 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e3cdbe-70fd-4c9e-853f-4d65c6d7a72c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 systemd[1]: Started Virtual Machine qemu-130-instance-00000069.
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.220 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.222 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3af3f93a-25e9-46b5-a1a9-31bf75ed559b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 ovn_controller[148476]: 2025-12-13T08:47:31Z|01016|binding|INFO|Setting lport 1a00c927-1c7f-4af5-9337-d6e58800dc3c ovn-installed in OVS
Dec 13 08:47:31 compute-0 ovn_controller[148476]: 2025-12-13T08:47:31Z|01017|binding|INFO|Setting lport 1a00c927-1c7f-4af5-9337-d6e58800dc3c up in Southbound
Dec 13 08:47:31 compute-0 systemd-udevd[351210]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:31 compute-0 NetworkManager[50376]: <info>  [1765615651.2425] device (tap1a00c927-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:47:31 compute-0 NetworkManager[50376]: <info>  [1765615651.2436] device (tap1a00c927-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.254 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[26b7320e-3c76-4ccb-9baf-3bbadb921076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 NetworkManager[50376]: <info>  [1765615651.2606] manager: (tapb7bba2fe-60): new Veth device (/org/freedesktop/NetworkManager/Devices/419)
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.259 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b3fffa-0dfb-438f-882e-6c7ac4365c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.296 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bc11b992-5b17-47ca-9b99-2b0cf3817630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.299 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[83dcb12c-a990-4be5-b2a3-ff4410ff9bc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 NetworkManager[50376]: <info>  [1765615651.3304] device (tapb7bba2fe-60): carrier: link connected
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.337 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[86723ff8-4d94-4683-98e7-403dd1e8282d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.356 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f4890f25-5c0a-4c22-a642-1bca1c6703b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7bba2fe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:f5:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822854, 'reachable_time': 33108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351240, 'error': None, 'target': 'ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.374 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[70a6dc1e-d951-470c-94cb-544fe43822a6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:f5f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 822854, 'tstamp': 822854}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351241, 'error': None, 'target': 'ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.394 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4430bf70-8f9c-4b67-9bf4-4e2350b34e07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7bba2fe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:f5:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822854, 'reachable_time': 33108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351242, 'error': None, 'target': 'ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.432 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[60ee3e0f-8c81-458b-b3e6-50db2d23cf92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.514 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b8122516-7c5c-47a6-bb8b-f19f391297f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.516 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7bba2fe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.517 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.518 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7bba2fe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:31 compute-0 NetworkManager[50376]: <info>  [1765615651.5214] manager: (tapb7bba2fe-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.520 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:31 compute-0 kernel: tapb7bba2fe-60: entered promiscuous mode
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.528 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb7bba2fe-60, col_values=(('external_ids', {'iface-id': 'd344c035-f22d-4b1b-95ff-ed098d3a946c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.530 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:31 compute-0 ovn_controller[148476]: 2025-12-13T08:47:31Z|01018|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.534 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b7bba2fe-699d-4423-a6e2-09604625a8f5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b7bba2fe-699d-4423-a6e2-09604625a8f5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.535 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c677640f-833d-49f7-b3c4-41648ce03c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.537 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-b7bba2fe-699d-4423-a6e2-09604625a8f5
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/b7bba2fe-699d-4423-a6e2-09604625a8f5.pid.haproxy
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID b7bba2fe-699d-4423-a6e2-09604625a8f5
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:47:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.537 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'env', 'PROCESS_TAG=haproxy-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b7bba2fe-699d-4423-a6e2-09604625a8f5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.627 248514 DEBUG nova.compute.manager [req-0b9e67e0-036c-4611-933e-9205444bf9b9 req-8014a17b-891a-4eae-a722-0c06c895cf09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received event network-vif-plugged-1a00c927-1c7f-4af5-9337-d6e58800dc3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.628 248514 DEBUG oslo_concurrency.lockutils [req-0b9e67e0-036c-4611-933e-9205444bf9b9 req-8014a17b-891a-4eae-a722-0c06c895cf09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.629 248514 DEBUG oslo_concurrency.lockutils [req-0b9e67e0-036c-4611-933e-9205444bf9b9 req-8014a17b-891a-4eae-a722-0c06c895cf09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.629 248514 DEBUG oslo_concurrency.lockutils [req-0b9e67e0-036c-4611-933e-9205444bf9b9 req-8014a17b-891a-4eae-a722-0c06c895cf09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.629 248514 DEBUG nova.compute.manager [req-0b9e67e0-036c-4611-933e-9205444bf9b9 req-8014a17b-891a-4eae-a722-0c06c895cf09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Processing event network-vif-plugged-1a00c927-1c7f-4af5-9337-d6e58800dc3c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:47:31 compute-0 nova_compute[248510]: 2025-12-13 08:47:31.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 167 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.5 MiB/s wr, 144 op/s
Dec 13 08:47:31 compute-0 podman[351287]: 2025-12-13 08:47:31.89612744 +0000 UTC m=+0.023681675 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.102 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615652.1016548, 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.102 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] VM Started (Lifecycle Event)
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.104 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.109 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.112 248514 INFO nova.virt.libvirt.driver [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Instance spawned successfully.
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.112 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.137 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.137 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.138 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.138 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.139 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.140 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.149 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.160 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:47:32 compute-0 podman[351287]: 2025-12-13 08:47:32.17386896 +0000 UTC m=+0.301423165 container create 63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.188 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.189 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615652.1018097, 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.189 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] VM Paused (Lifecycle Event)
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.226 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.231 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615652.1075459, 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.232 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] VM Resumed (Lifecycle Event)
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.239 248514 INFO nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Took 12.15 seconds to spawn the instance on the hypervisor.
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.239 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.261 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.269 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:47:32 compute-0 systemd[1]: Started libpod-conmon-63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5.scope.
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.317 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:47:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27792fd296ec41c0b07a50cc3d8774b88ded6a613398922a1314c6ce563b646f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.353 248514 INFO nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Took 13.35 seconds to build instance.
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.380 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:32 compute-0 podman[351287]: 2025-12-13 08:47:32.444514082 +0000 UTC m=+0.572068287 container init 63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 08:47:32 compute-0 podman[351287]: 2025-12-13 08:47:32.450915983 +0000 UTC m=+0.578470188 container start 63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 08:47:32 compute-0 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [NOTICE]   (351333) : New worker (351335) forked
Dec 13 08:47:32 compute-0 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [NOTICE]   (351333) : Loading success.
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.866 248514 DEBUG nova.objects.instance [None req-8cb325ee-f759-4832-ac11-e8a30850aa6f 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'pci_devices' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.902 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615652.9024022, c3fb322f-a9db-4396-b659-2307698e5524 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.903 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Paused (Lifecycle Event)
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.928 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.934 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:47:32 compute-0 nova_compute[248510]: 2025-12-13 08:47:32.959 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (suspending). Skip.
Dec 13 08:47:33 compute-0 nova_compute[248510]: 2025-12-13 08:47:33.862 248514 DEBUG nova.compute.manager [req-7bc6caba-2bfc-45b3-816e-c462fb16ddfc req-12210518-6963-4e73-978c-fcf2efc6efcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received event network-vif-plugged-1a00c927-1c7f-4af5-9337-d6e58800dc3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:47:33 compute-0 nova_compute[248510]: 2025-12-13 08:47:33.862 248514 DEBUG oslo_concurrency.lockutils [req-7bc6caba-2bfc-45b3-816e-c462fb16ddfc req-12210518-6963-4e73-978c-fcf2efc6efcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:33 compute-0 nova_compute[248510]: 2025-12-13 08:47:33.862 248514 DEBUG oslo_concurrency.lockutils [req-7bc6caba-2bfc-45b3-816e-c462fb16ddfc req-12210518-6963-4e73-978c-fcf2efc6efcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:33 compute-0 nova_compute[248510]: 2025-12-13 08:47:33.863 248514 DEBUG oslo_concurrency.lockutils [req-7bc6caba-2bfc-45b3-816e-c462fb16ddfc req-12210518-6963-4e73-978c-fcf2efc6efcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:33 compute-0 nova_compute[248510]: 2025-12-13 08:47:33.863 248514 DEBUG nova.compute.manager [req-7bc6caba-2bfc-45b3-816e-c462fb16ddfc req-12210518-6963-4e73-978c-fcf2efc6efcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] No waiting events found dispatching network-vif-plugged-1a00c927-1c7f-4af5-9337-d6e58800dc3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:47:33 compute-0 nova_compute[248510]: 2025-12-13 08:47:33.863 248514 WARNING nova.compute.manager [req-7bc6caba-2bfc-45b3-816e-c462fb16ddfc req-12210518-6963-4e73-978c-fcf2efc6efcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received unexpected event network-vif-plugged-1a00c927-1c7f-4af5-9337-d6e58800dc3c for instance with vm_state active and task_state None.
Dec 13 08:47:33 compute-0 nova_compute[248510]: 2025-12-13 08:47:33.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 298 KiB/s wr, 135 op/s
Dec 13 08:47:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:47:34 compute-0 ceph-mon[76537]: pgmap v2554: 321 pgs: 321 active+clean; 167 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.5 MiB/s wr, 144 op/s
Dec 13 08:47:35 compute-0 kernel: tap2d164f50-a5 (unregistering): left promiscuous mode
Dec 13 08:47:35 compute-0 NetworkManager[50376]: <info>  [1765615655.3270] device (tap2d164f50-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:47:35 compute-0 nova_compute[248510]: 2025-12-13 08:47:35.345 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:35 compute-0 ovn_controller[148476]: 2025-12-13T08:47:35Z|01019|binding|INFO|Releasing lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 from this chassis (sb_readonly=0)
Dec 13 08:47:35 compute-0 ovn_controller[148476]: 2025-12-13T08:47:35Z|01020|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 down in Southbound
Dec 13 08:47:35 compute-0 ovn_controller[148476]: 2025-12-13T08:47:35Z|01021|binding|INFO|Removing iface tap2d164f50-a5 ovn-installed in OVS
Dec 13 08:47:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:35.354 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:39:62 10.100.0.6'], port_security=['fa:16:3e:e2:39:62 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c3fb322f-a9db-4396-b659-2307698e5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2d164f50-a56a-4eaf-ad60-84274a0eb413) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:47:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:35.355 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2d164f50-a56a-4eaf-ad60-84274a0eb413 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b unbound from our chassis
Dec 13 08:47:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:35.357 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:47:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:35.358 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[23d463a6-ecd0-42b9-bdf8-4c3ecba1e749]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:35.359 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b namespace which is not needed anymore
Dec 13 08:47:35 compute-0 nova_compute[248510]: 2025-12-13 08:47:35.362 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:35 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Deactivated successfully.
Dec 13 08:47:35 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Consumed 11.550s CPU time.
Dec 13 08:47:35 compute-0 systemd-machined[210538]: Machine qemu-129-instance-00000067 terminated.
Dec 13 08:47:35 compute-0 nova_compute[248510]: 2025-12-13 08:47:35.548 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:35 compute-0 nova_compute[248510]: 2025-12-13 08:47:35.555 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:35 compute-0 nova_compute[248510]: 2025-12-13 08:47:35.561 248514 DEBUG nova.compute.manager [None req-8cb325ee-f759-4832-ac11-e8a30850aa6f 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:35 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [NOTICE]   (350872) : haproxy version is 2.8.14-c23fe91
Dec 13 08:47:35 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [NOTICE]   (350872) : path to executable is /usr/sbin/haproxy
Dec 13 08:47:35 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [WARNING]  (350872) : Exiting Master process...
Dec 13 08:47:35 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [WARNING]  (350872) : Exiting Master process...
Dec 13 08:47:35 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [ALERT]    (350872) : Current worker (350874) exited with code 143 (Terminated)
Dec 13 08:47:35 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [WARNING]  (350872) : All workers exited. Exiting... (0)
Dec 13 08:47:35 compute-0 systemd[1]: libpod-14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81.scope: Deactivated successfully.
Dec 13 08:47:35 compute-0 podman[351371]: 2025-12-13 08:47:35.640999517 +0000 UTC m=+0.186314786 container died 14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81-userdata-shm.mount: Deactivated successfully.
Dec 13 08:47:35 compute-0 ceph-mon[76537]: pgmap v2555: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 298 KiB/s wr, 135 op/s
Dec 13 08:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd1c5e1c88792b2cd7c895939660c80ec1d136af36d046687f8916aa03ba3455-merged.mount: Deactivated successfully.
Dec 13 08:47:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 103 op/s
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.019 248514 DEBUG nova.compute.manager [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-unplugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.019 248514 DEBUG oslo_concurrency.lockutils [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.020 248514 DEBUG oslo_concurrency.lockutils [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.020 248514 DEBUG oslo_concurrency.lockutils [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.020 248514 DEBUG nova.compute.manager [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-unplugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.020 248514 WARNING nova.compute.manager [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-unplugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state suspended and task_state None.
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.020 248514 DEBUG nova.compute.manager [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.021 248514 DEBUG oslo_concurrency.lockutils [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.021 248514 DEBUG oslo_concurrency.lockutils [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.021 248514 DEBUG oslo_concurrency.lockutils [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.021 248514 DEBUG nova.compute.manager [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.022 248514 WARNING nova.compute.manager [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state suspended and task_state None.
Dec 13 08:47:36 compute-0 podman[351371]: 2025-12-13 08:47:36.63773469 +0000 UTC m=+1.183049959 container cleanup 14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:47:36 compute-0 systemd[1]: libpod-conmon-14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81.scope: Deactivated successfully.
Dec 13 08:47:36 compute-0 nova_compute[248510]: 2025-12-13 08:47:36.740 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:37 compute-0 podman[351410]: 2025-12-13 08:47:37.482604123 +0000 UTC m=+0.821106627 container remove 14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 08:47:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.490 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb7ce44-0a5c-46d1-9925-d9015331932c]: (4, ('Sat Dec 13 08:47:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b (14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81)\n14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81\nSat Dec 13 08:47:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b (14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81)\n14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bb68b473-70ec-4c94-a71c-04a633cdbba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.493 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:37 compute-0 nova_compute[248510]: 2025-12-13 08:47:37.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:37 compute-0 kernel: tap6ad7f755-f0: left promiscuous mode
Dec 13 08:47:37 compute-0 nova_compute[248510]: 2025-12-13 08:47:37.512 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.516 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ca530a-f3b3-41e7-83ac-a19c6a53ac26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.539 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb078e37-affb-4893-b7d2-02453f38c093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.541 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75edd375-4a23-4115-8b12-8de5cfa826ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.557 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4e07c10c-45c1-4de9-b780-13b9831d9ffe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 821473, 'reachable_time': 36630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351428, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d6ad7f755\x2dfa29\x2d40dd\x2d89c4\x2d988d0a51cf9b.mount: Deactivated successfully.
Dec 13 08:47:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.560 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:47:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.560 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ca0a82-788d-49de-ac89-836111ac0e85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:37 compute-0 ceph-mon[76537]: pgmap v2556: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 103 op/s
Dec 13 08:47:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 90 op/s
Dec 13 08:47:38 compute-0 nova_compute[248510]: 2025-12-13 08:47:38.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:38 compute-0 NetworkManager[50376]: <info>  [1765615658.5738] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Dec 13 08:47:38 compute-0 ovn_controller[148476]: 2025-12-13T08:47:38Z|01022|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 08:47:38 compute-0 NetworkManager[50376]: <info>  [1765615658.5805] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Dec 13 08:47:38 compute-0 ceph-mon[76537]: pgmap v2557: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 90 op/s
Dec 13 08:47:38 compute-0 nova_compute[248510]: 2025-12-13 08:47:38.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:38 compute-0 ovn_controller[148476]: 2025-12-13T08:47:38Z|01023|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 08:47:38 compute-0 nova_compute[248510]: 2025-12-13 08:47:38.626 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:38 compute-0 nova_compute[248510]: 2025-12-13 08:47:38.901 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:47:39 compute-0 nova_compute[248510]: 2025-12-13 08:47:39.378 248514 INFO nova.compute.manager [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Resuming
Dec 13 08:47:39 compute-0 nova_compute[248510]: 2025-12-13 08:47:39.379 248514 DEBUG nova.objects.instance [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'flavor' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:39 compute-0 nova_compute[248510]: 2025-12-13 08:47:39.426 248514 DEBUG oslo_concurrency.lockutils [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:47:39 compute-0 nova_compute[248510]: 2025-12-13 08:47:39.427 248514 DEBUG oslo_concurrency.lockutils [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:47:39 compute-0 nova_compute[248510]: 2025-12-13 08:47:39.427 248514 DEBUG nova.network.neutron [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:47:39 compute-0 nova_compute[248510]: 2025-12-13 08:47:39.929 248514 DEBUG nova.compute.manager [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received event network-changed-1a00c927-1c7f-4af5-9337-d6e58800dc3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:47:39 compute-0 nova_compute[248510]: 2025-12-13 08:47:39.931 248514 DEBUG nova.compute.manager [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Refreshing instance network info cache due to event network-changed-1a00c927-1c7f-4af5-9337-d6e58800dc3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:47:39 compute-0 nova_compute[248510]: 2025-12-13 08:47:39.931 248514 DEBUG oslo_concurrency.lockutils [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:47:39 compute-0 nova_compute[248510]: 2025-12-13 08:47:39.932 248514 DEBUG oslo_concurrency.lockutils [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:47:39 compute-0 nova_compute[248510]: 2025-12-13 08:47:39.932 248514 DEBUG nova.network.neutron [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Refreshing network info cache for port 1a00c927-1c7f-4af5-9337-d6e58800dc3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:47:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 84 op/s
Dec 13 08:47:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:47:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:47:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:47:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:47:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:47:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:47:40 compute-0 ceph-mon[76537]: pgmap v2558: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 84 op/s
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.542 248514 DEBUG nova.network.neutron [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.568 248514 DEBUG oslo_concurrency.lockutils [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.576 248514 DEBUG nova.virt.libvirt.vif [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:47:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mode
l='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:47:35Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.578 248514 DEBUG nova.network.os_vif_util [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.580 248514 DEBUG nova.network.os_vif_util [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.581 248514 DEBUG os_vif [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.583 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.584 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.584 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.589 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d164f50-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.589 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d164f50-a5, col_values=(('external_ids', {'iface-id': '2d164f50-a56a-4eaf-ad60-84274a0eb413', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:39:62', 'vm-uuid': 'c3fb322f-a9db-4396-b659-2307698e5524'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.590 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.591 248514 INFO os_vif [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5')
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.789 248514 DEBUG nova.objects.instance [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'numa_topology' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:41 compute-0 kernel: tap2d164f50-a5: entered promiscuous mode
Dec 13 08:47:41 compute-0 NetworkManager[50376]: <info>  [1765615661.8807] manager: (tap2d164f50-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/423)
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.885 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:41 compute-0 ovn_controller[148476]: 2025-12-13T08:47:41Z|01024|binding|INFO|Claiming lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 for this chassis.
Dec 13 08:47:41 compute-0 ovn_controller[148476]: 2025-12-13T08:47:41Z|01025|binding|INFO|2d164f50-a56a-4eaf-ad60-84274a0eb413: Claiming fa:16:3e:e2:39:62 10.100.0.6
Dec 13 08:47:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.895 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:39:62 10.100.0.6'], port_security=['fa:16:3e:e2:39:62 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c3fb322f-a9db-4396-b659-2307698e5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2d164f50-a56a-4eaf-ad60-84274a0eb413) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:47:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.896 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2d164f50-a56a-4eaf-ad60-84274a0eb413 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b bound to our chassis
Dec 13 08:47:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.898 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:47:41 compute-0 ovn_controller[148476]: 2025-12-13T08:47:41Z|01026|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 ovn-installed in OVS
Dec 13 08:47:41 compute-0 ovn_controller[148476]: 2025-12-13T08:47:41Z|01027|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 up in Southbound
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:41 compute-0 nova_compute[248510]: 2025-12-13 08:47:41.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.910 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e94671de-60d5-44ca-bf55-ec170bdeb49f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.912 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ad7f755-f1 in ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:47:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.915 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ad7f755-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:47:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b73b22fe-1f2d-4d84-9259-b1be862d85ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:41 compute-0 systemd-udevd[351445]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:47:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.921 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e0ad63-d6fb-4ef4-b2fd-544aef79f591]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:41 compute-0 systemd-machined[210538]: New machine qemu-131-instance-00000067.
Dec 13 08:47:41 compute-0 NetworkManager[50376]: <info>  [1765615661.9336] device (tap2d164f50-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:47:41 compute-0 NetworkManager[50376]: <info>  [1765615661.9350] device (tap2d164f50-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:47:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.937 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bdded4-801c-491d-8dbe-a3cd1b87b21f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:41 compute-0 systemd[1]: Started Virtual Machine qemu-131-instance-00000067.
Dec 13 08:47:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.957 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a27ee49b-0cdf-4f36-9e63-13a1d3caf878]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.993 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e2870624-1bf8-4c4a-a22a-dc62203c80f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.000 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[683b76d3-439c-42f2-aa1c-bfb12e29814f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:42 compute-0 NetworkManager[50376]: <info>  [1765615662.0017] manager: (tap6ad7f755-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/424)
Dec 13 08:47:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.046 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[da029a23-e7a7-4f32-861b-8ab254a020ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.051 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce27be8-65f7-4ba3-9763-c19b0daf1e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:42 compute-0 NetworkManager[50376]: <info>  [1765615662.1058] device (tap6ad7f755-f0): carrier: link connected
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.113 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[224bd1f2-2ee8-466d-bb57-ed0101ca0049]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:42 compute-0 sudo[351475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:47:42 compute-0 sudo[351475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:47:42 compute-0 sudo[351475]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.141 248514 DEBUG nova.compute.manager [req-f3f65767-c2b3-46cd-993f-d7174fc19e9b req-8790064e-6e0e-402c-b115-1f185862a2c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.141 248514 DEBUG oslo_concurrency.lockutils [req-f3f65767-c2b3-46cd-993f-d7174fc19e9b req-8790064e-6e0e-402c-b115-1f185862a2c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.141 248514 DEBUG oslo_concurrency.lockutils [req-f3f65767-c2b3-46cd-993f-d7174fc19e9b req-8790064e-6e0e-402c-b115-1f185862a2c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.141 248514 DEBUG oslo_concurrency.lockutils [req-f3f65767-c2b3-46cd-993f-d7174fc19e9b req-8790064e-6e0e-402c-b115-1f185862a2c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.141 248514 DEBUG nova.compute.manager [req-f3f65767-c2b3-46cd-993f-d7174fc19e9b req-8790064e-6e0e-402c-b115-1f185862a2c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.142 248514 WARNING nova.compute.manager [req-f3f65767-c2b3-46cd-993f-d7174fc19e9b req-8790064e-6e0e-402c-b115-1f185862a2c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state suspended and task_state resuming.
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.145 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f7bd69c2-4da2-4d81-b2be-22d153999619]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 304], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823932, 'reachable_time': 32706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351502, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.168 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0855daec-c0f4-4154-ac6a-c3aeb1fe84c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:35ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 823932, 'tstamp': 823932}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351509, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.194 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e02a94-79f6-4f21-ad38-85e7b1146992]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 304], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823932, 'reachable_time': 32706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351523, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:42 compute-0 sudo[351504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 08:47:42 compute-0 sudo[351504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.244 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9e61c0-2031-4462-80a4-9775ca3151af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.317 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7f119b79-cdeb-46e6-9f32-9f3c7bf1bba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.320 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.321 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.321 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.324 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:42 compute-0 NetworkManager[50376]: <info>  [1765615662.3249] manager: (tap6ad7f755-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Dec 13 08:47:42 compute-0 kernel: tap6ad7f755-f0: entered promiscuous mode
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.337 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.338 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:42 compute-0 ovn_controller[148476]: 2025-12-13T08:47:42Z|01028|binding|INFO|Releasing lport 683d7da0-6f1e-41a6-9158-6204fb05ee50 from this chassis (sb_readonly=0)
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.339 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.340 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.353 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[49a99b30-6087-43b6-9824-764e9a0c2e1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.354 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:47:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.355 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'env', 'PROCESS_TAG=haproxy-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.691 248514 DEBUG nova.network.neutron [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updated VIF entry in instance network info cache for port 1a00c927-1c7f-4af5-9337-d6e58800dc3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.693 248514 DEBUG nova.network.neutron [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.737 248514 DEBUG oslo_concurrency.lockutils [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:47:42 compute-0 podman[351624]: 2025-12-13 08:47:42.773783254 +0000 UTC m=+0.101337744 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 08:47:42 compute-0 podman[351655]: 2025-12-13 08:47:42.818398543 +0000 UTC m=+0.071585307 container create 0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 08:47:42 compute-0 podman[351655]: 2025-12-13 08:47:42.771037315 +0000 UTC m=+0.024224109 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:47:42 compute-0 systemd[1]: Started libpod-conmon-0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771.scope.
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.889 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for c3fb322f-a9db-4396-b659-2307698e5524 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.892 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615662.888831, c3fb322f-a9db-4396-b659-2307698e5524 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.892 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Started (Lifecycle Event)
Dec 13 08:47:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2d7bff872b6cdff54ab27f3dd8aa237d58f134c1191165160b971ebebd88a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:42 compute-0 podman[351655]: 2025-12-13 08:47:42.921648084 +0000 UTC m=+0.174834858 container init 0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:47:42 compute-0 podman[351624]: 2025-12-13 08:47:42.924137207 +0000 UTC m=+0.251691697 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.932 248514 DEBUG nova.compute.manager [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.934 248514 DEBUG nova.objects.instance [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'pci_devices' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:47:42 compute-0 podman[351655]: 2025-12-13 08:47:42.936015465 +0000 UTC m=+0.189202229 container start 0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.951 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.957 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:47:42 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [NOTICE]   (351689) : New worker (351691) forked
Dec 13 08:47:42 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [NOTICE]   (351689) : Loading success.
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.976 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance running successfully.
Dec 13 08:47:42 compute-0 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.984 248514 DEBUG nova.virt.libvirt.guest [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.984 248514 DEBUG nova.compute.manager [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.995 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (resuming). Skip.
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.997 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615662.8959033, c3fb322f-a9db-4396-b659-2307698e5524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:47:42 compute-0 nova_compute[248510]: 2025-12-13 08:47:42.997 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Resumed (Lifecycle Event)
Dec 13 08:47:43 compute-0 nova_compute[248510]: 2025-12-13 08:47:43.060 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:47:43 compute-0 nova_compute[248510]: 2025-12-13 08:47:43.068 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:47:43 compute-0 ceph-mon[76537]: pgmap v2559: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 13 08:47:43 compute-0 nova_compute[248510]: 2025-12-13 08:47:43.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:43 compute-0 sudo[351504]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:47:43 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:47:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:47:43 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:47:43 compute-0 sudo[351855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:47:44 compute-0 sudo[351855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:47:44 compute-0 sudo[351855]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 174 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 529 KiB/s wr, 109 op/s
Dec 13 08:47:44 compute-0 sudo[351880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:47:44 compute-0 sudo[351880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:47:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:47:44 compute-0 ovn_controller[148476]: 2025-12-13T08:47:44Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:39:62 10.100.0.6
Dec 13 08:47:44 compute-0 nova_compute[248510]: 2025-12-13 08:47:44.524 248514 DEBUG nova.compute.manager [req-c425d6ab-6cda-437b-90f6-ea487b3a0642 req-33b0df5f-6897-4941-80ae-bc8e953f0104 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:47:44 compute-0 nova_compute[248510]: 2025-12-13 08:47:44.525 248514 DEBUG oslo_concurrency.lockutils [req-c425d6ab-6cda-437b-90f6-ea487b3a0642 req-33b0df5f-6897-4941-80ae-bc8e953f0104 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:44 compute-0 nova_compute[248510]: 2025-12-13 08:47:44.525 248514 DEBUG oslo_concurrency.lockutils [req-c425d6ab-6cda-437b-90f6-ea487b3a0642 req-33b0df5f-6897-4941-80ae-bc8e953f0104 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:44 compute-0 nova_compute[248510]: 2025-12-13 08:47:44.525 248514 DEBUG oslo_concurrency.lockutils [req-c425d6ab-6cda-437b-90f6-ea487b3a0642 req-33b0df5f-6897-4941-80ae-bc8e953f0104 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:44 compute-0 nova_compute[248510]: 2025-12-13 08:47:44.526 248514 DEBUG nova.compute.manager [req-c425d6ab-6cda-437b-90f6-ea487b3a0642 req-33b0df5f-6897-4941-80ae-bc8e953f0104 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:47:44 compute-0 nova_compute[248510]: 2025-12-13 08:47:44.526 248514 WARNING nova.compute.manager [req-c425d6ab-6cda-437b-90f6-ea487b3a0642 req-33b0df5f-6897-4941-80ae-bc8e953f0104 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state active and task_state None.
Dec 13 08:47:44 compute-0 sudo[351880]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:47:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:47:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:47:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:47:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:47:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:47:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:47:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:47:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:47:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:47:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:47:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:47:44 compute-0 sudo[351936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:47:44 compute-0 sudo[351936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:47:44 compute-0 sudo[351936]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:44 compute-0 sudo[351961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:47:44 compute-0 sudo[351961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:47:45 compute-0 podman[351998]: 2025-12-13 08:47:45.13868221 +0000 UTC m=+0.022709861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:47:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:47:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:47:45 compute-0 ceph-mon[76537]: pgmap v2560: 321 pgs: 321 active+clean; 174 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 529 KiB/s wr, 109 op/s
Dec 13 08:47:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:47:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:47:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:47:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:47:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:47:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:47:45 compute-0 podman[351998]: 2025-12-13 08:47:45.361921283 +0000 UTC m=+0.245948914 container create db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brattain, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:47:45 compute-0 systemd[1]: Started libpod-conmon-db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a.scope.
Dec 13 08:47:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:47:45 compute-0 ovn_controller[148476]: 2025-12-13T08:47:45Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:2b:42 10.100.0.9
Dec 13 08:47:45 compute-0 ovn_controller[148476]: 2025-12-13T08:47:45Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:2b:42 10.100.0.9
Dec 13 08:47:45 compute-0 podman[351998]: 2025-12-13 08:47:45.648578816 +0000 UTC m=+0.532606467 container init db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 13 08:47:45 compute-0 podman[351998]: 2025-12-13 08:47:45.655168522 +0000 UTC m=+0.539196153 container start db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brattain, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:47:45 compute-0 bold_brattain[352014]: 167 167
Dec 13 08:47:45 compute-0 systemd[1]: libpod-db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a.scope: Deactivated successfully.
Dec 13 08:47:45 compute-0 podman[351998]: 2025-12-13 08:47:45.690651912 +0000 UTC m=+0.574679573 container attach db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 08:47:45 compute-0 podman[351998]: 2025-12-13 08:47:45.693159125 +0000 UTC m=+0.577186776 container died db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brattain, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:47:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c30d9fbbce8b58164be1ea382ea944e9a3ea9f3be9e8b59ed7f55edc72f75bdd-merged.mount: Deactivated successfully.
Dec 13 08:47:45 compute-0 podman[351998]: 2025-12-13 08:47:45.9344196 +0000 UTC m=+0.818447231 container remove db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brattain, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 08:47:45 compute-0 systemd[1]: libpod-conmon-db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a.scope: Deactivated successfully.
Dec 13 08:47:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 174 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 900 KiB/s rd, 517 KiB/s wr, 55 op/s
Dec 13 08:47:46 compute-0 podman[352040]: 2025-12-13 08:47:46.138322556 +0000 UTC m=+0.047049291 container create cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dirac, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 08:47:46 compute-0 systemd[1]: Started libpod-conmon-cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d.scope.
Dec 13 08:47:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:47:46 compute-0 podman[352040]: 2025-12-13 08:47:46.118509529 +0000 UTC m=+0.027236294 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c68ba8d1d955ba286c624f56d78a42fbc2761e6b7c951aab1181a6bc93980e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c68ba8d1d955ba286c624f56d78a42fbc2761e6b7c951aab1181a6bc93980e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c68ba8d1d955ba286c624f56d78a42fbc2761e6b7c951aab1181a6bc93980e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c68ba8d1d955ba286c624f56d78a42fbc2761e6b7c951aab1181a6bc93980e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c68ba8d1d955ba286c624f56d78a42fbc2761e6b7c951aab1181a6bc93980e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:46 compute-0 podman[352040]: 2025-12-13 08:47:46.24247397 +0000 UTC m=+0.151200715 container init cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 08:47:46 compute-0 podman[352040]: 2025-12-13 08:47:46.25003386 +0000 UTC m=+0.158760595 container start cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:47:46 compute-0 podman[352040]: 2025-12-13 08:47:46.254202014 +0000 UTC m=+0.162928769 container attach cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 08:47:46 compute-0 ceph-mon[76537]: pgmap v2561: 321 pgs: 321 active+clean; 174 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 900 KiB/s rd, 517 KiB/s wr, 55 op/s
Dec 13 08:47:46 compute-0 cranky_dirac[352056]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:47:46 compute-0 cranky_dirac[352056]: --> All data devices are unavailable
Dec 13 08:47:46 compute-0 nova_compute[248510]: 2025-12-13 08:47:46.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:46 compute-0 systemd[1]: libpod-cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d.scope: Deactivated successfully.
Dec 13 08:47:46 compute-0 podman[352040]: 2025-12-13 08:47:46.799790706 +0000 UTC m=+0.708517441 container died cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dirac, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:47:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c68ba8d1d955ba286c624f56d78a42fbc2761e6b7c951aab1181a6bc93980e9-merged.mount: Deactivated successfully.
Dec 13 08:47:46 compute-0 podman[352040]: 2025-12-13 08:47:46.854054938 +0000 UTC m=+0.762781673 container remove cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 08:47:46 compute-0 systemd[1]: libpod-conmon-cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d.scope: Deactivated successfully.
Dec 13 08:47:46 compute-0 sudo[351961]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:46 compute-0 sudo[352088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:47:46 compute-0 sudo[352088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:47:46 compute-0 sudo[352088]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:47 compute-0 sudo[352113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:47:47 compute-0 sudo[352113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:47:47 compute-0 podman[352139]: 2025-12-13 08:47:47.142412434 +0000 UTC m=+0.070301755 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 08:47:47 compute-0 podman[352137]: 2025-12-13 08:47:47.163199496 +0000 UTC m=+0.093899687 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 08:47:47 compute-0 podman[352138]: 2025-12-13 08:47:47.163477643 +0000 UTC m=+0.092772959 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 08:47:47 compute-0 podman[352212]: 2025-12-13 08:47:47.412166394 +0000 UTC m=+0.085559879 container create 5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 08:47:47 compute-0 podman[352212]: 2025-12-13 08:47:47.347731406 +0000 UTC m=+0.021124981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:47:47 compute-0 systemd[1]: Started libpod-conmon-5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a.scope.
Dec 13 08:47:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:47:47 compute-0 podman[352212]: 2025-12-13 08:47:47.544058523 +0000 UTC m=+0.217452038 container init 5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:47:47 compute-0 podman[352212]: 2025-12-13 08:47:47.552573187 +0000 UTC m=+0.225966672 container start 5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:47:47 compute-0 podman[352212]: 2025-12-13 08:47:47.557225584 +0000 UTC m=+0.230619089 container attach 5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 08:47:47 compute-0 compassionate_ritchie[352228]: 167 167
Dec 13 08:47:47 compute-0 systemd[1]: libpod-5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a.scope: Deactivated successfully.
Dec 13 08:47:47 compute-0 podman[352212]: 2025-12-13 08:47:47.559198443 +0000 UTC m=+0.232591928 container died 5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 08:47:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6752dfce27e47be13ed8d34d7e31ac603ad8064349764cef160a8df3ddb872a0-merged.mount: Deactivated successfully.
Dec 13 08:47:47 compute-0 podman[352212]: 2025-12-13 08:47:47.605211978 +0000 UTC m=+0.278605463 container remove 5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 08:47:47 compute-0 systemd[1]: libpod-conmon-5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a.scope: Deactivated successfully.
Dec 13 08:47:47 compute-0 podman[352254]: 2025-12-13 08:47:47.774966798 +0000 UTC m=+0.025252445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:47:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 187 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 84 op/s
Dec 13 08:47:48 compute-0 podman[352254]: 2025-12-13 08:47:48.247860405 +0000 UTC m=+0.498146032 container create 0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:47:48 compute-0 systemd[1]: Started libpod-conmon-0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae.scope.
Dec 13 08:47:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fda450b0af2f08bd92367c72ccfe302d32c7494900c45429213321833b9659d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fda450b0af2f08bd92367c72ccfe302d32c7494900c45429213321833b9659d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fda450b0af2f08bd92367c72ccfe302d32c7494900c45429213321833b9659d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fda450b0af2f08bd92367c72ccfe302d32c7494900c45429213321833b9659d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:48 compute-0 podman[352254]: 2025-12-13 08:47:48.519977093 +0000 UTC m=+0.770262730 container init 0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:47:48 compute-0 podman[352254]: 2025-12-13 08:47:48.526354283 +0000 UTC m=+0.776639910 container start 0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:47:48 compute-0 podman[352254]: 2025-12-13 08:47:48.676561942 +0000 UTC m=+0.926847569 container attach 0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]: {
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:     "0": [
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:         {
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "devices": [
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "/dev/loop3"
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             ],
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_name": "ceph_lv0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_size": "21470642176",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "name": "ceph_lv0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "tags": {
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.cluster_name": "ceph",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.crush_device_class": "",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.encrypted": "0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.objectstore": "bluestore",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.osd_id": "0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.type": "block",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.vdo": "0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.with_tpm": "0"
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             },
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "type": "block",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "vg_name": "ceph_vg0"
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:         }
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:     ],
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:     "1": [
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:         {
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "devices": [
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "/dev/loop4"
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             ],
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_name": "ceph_lv1",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_size": "21470642176",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "name": "ceph_lv1",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "tags": {
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.cluster_name": "ceph",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.crush_device_class": "",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.encrypted": "0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.objectstore": "bluestore",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.osd_id": "1",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.type": "block",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.vdo": "0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.with_tpm": "0"
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             },
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "type": "block",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "vg_name": "ceph_vg1"
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:         }
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:     ],
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:     "2": [
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:         {
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "devices": [
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "/dev/loop5"
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             ],
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_name": "ceph_lv2",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_size": "21470642176",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "name": "ceph_lv2",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "tags": {
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.cluster_name": "ceph",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.crush_device_class": "",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.encrypted": "0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.objectstore": "bluestore",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.osd_id": "2",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.type": "block",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.vdo": "0",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:                 "ceph.with_tpm": "0"
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             },
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "type": "block",
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:             "vg_name": "ceph_vg2"
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:         }
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]:     ]
Dec 13 08:47:48 compute-0 vigorous_robinson[352270]: }
Dec 13 08:47:48 compute-0 systemd[1]: libpod-0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae.scope: Deactivated successfully.
Dec 13 08:47:48 compute-0 podman[352254]: 2025-12-13 08:47:48.90122284 +0000 UTC m=+1.151508527 container died 0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:47:48 compute-0 nova_compute[248510]: 2025-12-13 08:47:48.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:47:49 compute-0 ceph-mon[76537]: pgmap v2562: 321 pgs: 321 active+clean; 187 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 84 op/s
Dec 13 08:47:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fda450b0af2f08bd92367c72ccfe302d32c7494900c45429213321833b9659d-merged.mount: Deactivated successfully.
Dec 13 08:47:49 compute-0 podman[352254]: 2025-12-13 08:47:49.97552508 +0000 UTC m=+2.225810707 container remove 0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 08:47:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Dec 13 08:47:50 compute-0 sudo[352113]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:50 compute-0 systemd[1]: libpod-conmon-0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae.scope: Deactivated successfully.
Dec 13 08:47:50 compute-0 sudo[352291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:47:50 compute-0 sudo[352291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:47:50 compute-0 sudo[352291]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:50 compute-0 sudo[352318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:47:50 compute-0 sudo[352318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:47:50 compute-0 podman[352354]: 2025-12-13 08:47:50.516992468 +0000 UTC m=+0.026633469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:47:50 compute-0 podman[352354]: 2025-12-13 08:47:50.695817145 +0000 UTC m=+0.205458116 container create 3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:47:50 compute-0 systemd[1]: Started libpod-conmon-3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234.scope.
Dec 13 08:47:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:47:50 compute-0 podman[352354]: 2025-12-13 08:47:50.809051357 +0000 UTC m=+0.318692348 container init 3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_carver, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 08:47:50 compute-0 ceph-mon[76537]: pgmap v2563: 321 pgs: 321 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Dec 13 08:47:50 compute-0 podman[352354]: 2025-12-13 08:47:50.818859873 +0000 UTC m=+0.328501084 container start 3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 08:47:50 compute-0 angry_carver[352371]: 167 167
Dec 13 08:47:50 compute-0 systemd[1]: libpod-3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234.scope: Deactivated successfully.
Dec 13 08:47:50 compute-0 podman[352354]: 2025-12-13 08:47:50.824569847 +0000 UTC m=+0.334210828 container attach 3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 08:47:50 compute-0 podman[352354]: 2025-12-13 08:47:50.827167992 +0000 UTC m=+0.336808963 container died 3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_carver, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 08:47:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-12bbb450115cb2191eeb73e6884af7a739747be49b61eba162552827a0d764bb-merged.mount: Deactivated successfully.
Dec 13 08:47:50 compute-0 podman[352354]: 2025-12-13 08:47:50.919919569 +0000 UTC m=+0.429560540 container remove 3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_carver, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:47:50 compute-0 systemd[1]: libpod-conmon-3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234.scope: Deactivated successfully.
Dec 13 08:47:51 compute-0 podman[352394]: 2025-12-13 08:47:51.1120161 +0000 UTC m=+0.046426406 container create a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cartwright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:47:51 compute-0 systemd[1]: Started libpod-conmon-a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4.scope.
Dec 13 08:47:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:47:51 compute-0 podman[352394]: 2025-12-13 08:47:51.092417448 +0000 UTC m=+0.026827784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7984461fe67cd477978a519b3cef3ab29e19d1c3ee51ef7ec6eb5f0727c0a274/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7984461fe67cd477978a519b3cef3ab29e19d1c3ee51ef7ec6eb5f0727c0a274/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7984461fe67cd477978a519b3cef3ab29e19d1c3ee51ef7ec6eb5f0727c0a274/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7984461fe67cd477978a519b3cef3ab29e19d1c3ee51ef7ec6eb5f0727c0a274/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:47:51 compute-0 podman[352394]: 2025-12-13 08:47:51.233716424 +0000 UTC m=+0.168126760 container init a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:47:51 compute-0 podman[352394]: 2025-12-13 08:47:51.24589033 +0000 UTC m=+0.180300636 container start a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:47:51 compute-0 podman[352394]: 2025-12-13 08:47:51.254164968 +0000 UTC m=+0.188575264 container attach a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 08:47:51 compute-0 nova_compute[248510]: 2025-12-13 08:47:51.748 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 894 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Dec 13 08:47:52 compute-0 lvm[352489]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:47:52 compute-0 lvm[352489]: VG ceph_vg1 finished
Dec 13 08:47:52 compute-0 lvm[352488]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:47:52 compute-0 lvm[352488]: VG ceph_vg0 finished
Dec 13 08:47:52 compute-0 lvm[352491]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:47:52 compute-0 lvm[352491]: VG ceph_vg2 finished
Dec 13 08:47:52 compute-0 inspiring_cartwright[352410]: {}
Dec 13 08:47:52 compute-0 systemd[1]: libpod-a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4.scope: Deactivated successfully.
Dec 13 08:47:52 compute-0 podman[352394]: 2025-12-13 08:47:52.267980118 +0000 UTC m=+1.202390444 container died a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cartwright, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:47:52 compute-0 systemd[1]: libpod-a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4.scope: Consumed 1.647s CPU time.
Dec 13 08:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7984461fe67cd477978a519b3cef3ab29e19d1c3ee51ef7ec6eb5f0727c0a274-merged.mount: Deactivated successfully.
Dec 13 08:47:52 compute-0 podman[352394]: 2025-12-13 08:47:52.325188524 +0000 UTC m=+1.259598830 container remove a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:47:52 compute-0 systemd[1]: libpod-conmon-a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4.scope: Deactivated successfully.
Dec 13 08:47:52 compute-0 sudo[352318]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:47:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:47:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:47:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:47:52 compute-0 sudo[352509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:47:52 compute-0 sudo[352509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:47:52 compute-0 sudo[352509]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:53 compute-0 ceph-mon[76537]: pgmap v2564: 321 pgs: 321 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 894 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Dec 13 08:47:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:47:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:47:53 compute-0 nova_compute[248510]: 2025-12-13 08:47:53.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:47:53 compute-0 nova_compute[248510]: 2025-12-13 08:47:53.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 894 KiB/s rd, 2.2 MiB/s wr, 112 op/s
Dec 13 08:47:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:47:55 compute-0 ceph-mon[76537]: pgmap v2565: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 894 KiB/s rd, 2.2 MiB/s wr, 112 op/s
Dec 13 08:47:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:55.426 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:47:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:55.427 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:47:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:47:55.428 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:47:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 1.7 MiB/s wr, 78 op/s
Dec 13 08:47:56 compute-0 nova_compute[248510]: 2025-12-13 08:47:56.750 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:57 compute-0 ceph-mon[76537]: pgmap v2566: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 1.7 MiB/s wr, 78 op/s
Dec 13 08:47:57 compute-0 nova_compute[248510]: 2025-12-13 08:47:57.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:47:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 1.7 MiB/s wr, 78 op/s
Dec 13 08:47:58 compute-0 nova_compute[248510]: 2025-12-13 08:47:58.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:47:59 compute-0 ceph-mon[76537]: pgmap v2567: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 1.7 MiB/s wr, 78 op/s
Dec 13 08:47:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:48:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 590 KiB/s wr, 49 op/s
Dec 13 08:48:01 compute-0 ceph-mon[76537]: pgmap v2568: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 590 KiB/s wr, 49 op/s
Dec 13 08:48:01 compute-0 nova_compute[248510]: 2025-12-13 08:48:01.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:01 compute-0 nova_compute[248510]: 2025-12-13 08:48:01.800 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:01 compute-0 nova_compute[248510]: 2025-12-13 08:48:01.910 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:01 compute-0 nova_compute[248510]: 2025-12-13 08:48:01.911 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:01 compute-0 nova_compute[248510]: 2025-12-13 08:48:01.911 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:01 compute-0 nova_compute[248510]: 2025-12-13 08:48:01.911 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:01 compute-0 nova_compute[248510]: 2025-12-13 08:48:01.912 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:01 compute-0 nova_compute[248510]: 2025-12-13 08:48:01.913 248514 INFO nova.compute.manager [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Terminating instance
Dec 13 08:48:01 compute-0 nova_compute[248510]: 2025-12-13 08:48:01.913 248514 DEBUG nova.compute.manager [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:48:01 compute-0 kernel: tap2d164f50-a5 (unregistering): left promiscuous mode
Dec 13 08:48:01 compute-0 NetworkManager[50376]: <info>  [1765615681.9847] device (tap2d164f50-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:48:01 compute-0 ovn_controller[148476]: 2025-12-13T08:48:01Z|01029|binding|INFO|Releasing lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 from this chassis (sb_readonly=0)
Dec 13 08:48:01 compute-0 ovn_controller[148476]: 2025-12-13T08:48:01Z|01030|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 down in Southbound
Dec 13 08:48:01 compute-0 nova_compute[248510]: 2025-12-13 08:48:01.996 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:01 compute-0 ovn_controller[148476]: 2025-12-13T08:48:01Z|01031|binding|INFO|Removing iface tap2d164f50-a5 ovn-installed in OVS
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:01.999 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.007 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:39:62 10.100.0.6'], port_security=['fa:16:3e:e2:39:62 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c3fb322f-a9db-4396-b659-2307698e5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2d164f50-a56a-4eaf-ad60-84274a0eb413) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.009 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2d164f50-a56a-4eaf-ad60-84274a0eb413 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b unbound from our chassis
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.010 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.012 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.014 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[971872bf-331b-4140-ab22-6a029cb61627]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.014 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b namespace which is not needed anymore
Dec 13 08:48:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 22 KiB/s wr, 1 op/s
Dec 13 08:48:02 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000067.scope: Deactivated successfully.
Dec 13 08:48:02 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000067.scope: Consumed 2.857s CPU time.
Dec 13 08:48:02 compute-0 systemd-machined[210538]: Machine qemu-131-instance-00000067 terminated.
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.143 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.149 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.164 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance destroyed successfully.
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.164 248514 DEBUG nova.objects.instance [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'resources' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:48:02 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [NOTICE]   (351689) : haproxy version is 2.8.14-c23fe91
Dec 13 08:48:02 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [NOTICE]   (351689) : path to executable is /usr/sbin/haproxy
Dec 13 08:48:02 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [WARNING]  (351689) : Exiting Master process...
Dec 13 08:48:02 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [WARNING]  (351689) : Exiting Master process...
Dec 13 08:48:02 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [ALERT]    (351689) : Current worker (351691) exited with code 143 (Terminated)
Dec 13 08:48:02 compute-0 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [WARNING]  (351689) : All workers exited. Exiting... (0)
Dec 13 08:48:02 compute-0 systemd[1]: libpod-0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771.scope: Deactivated successfully.
Dec 13 08:48:02 compute-0 podman[352557]: 2025-12-13 08:48:02.178505036 +0000 UTC m=+0.052567941 container died 0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.195 248514 DEBUG nova.virt.libvirt.vif [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:47:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:47:43Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.196 248514 DEBUG nova.network.os_vif_util [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.197 248514 DEBUG nova.network.os_vif_util [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.198 248514 DEBUG os_vif [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.201 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.202 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d164f50-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.204 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.206 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.210 248514 INFO os_vif [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5')
Dec 13 08:48:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c2d7bff872b6cdff54ab27f3dd8aa237d58f134c1191165160b971ebebd88a7-merged.mount: Deactivated successfully.
Dec 13 08:48:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771-userdata-shm.mount: Deactivated successfully.
Dec 13 08:48:02 compute-0 podman[352557]: 2025-12-13 08:48:02.224157911 +0000 UTC m=+0.098220806 container cleanup 0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:48:02 compute-0 systemd[1]: libpod-conmon-0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771.scope: Deactivated successfully.
Dec 13 08:48:02 compute-0 podman[352606]: 2025-12-13 08:48:02.307040771 +0000 UTC m=+0.056436457 container remove 0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.313 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46085592-3495-4d3e-93b7-4d44676adb8a]: (4, ('Sat Dec 13 08:48:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b (0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771)\n0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771\nSat Dec 13 08:48:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b (0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771)\n0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.315 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1e7547-1546-4584-b819-7beef157c1c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.316 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.319 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:02 compute-0 kernel: tap6ad7f755-f0: left promiscuous mode
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.325 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d42a7dda-c150-4705-8e53-347b2ed02fc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.342 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1284b20c-1889-4faf-88a6-b6e121f73b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.344 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7819ee7-9866-4e85-8ae5-816c2e44e1b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.361 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8a11f0-6604-4774-b526-8c4fb8e401a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823920, 'reachable_time': 39110, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352624, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.365 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:48:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d6ad7f755\x2dfa29\x2d40dd\x2d89c4\x2d988d0a51cf9b.mount: Deactivated successfully.
Dec 13 08:48:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.366 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[695d02c8-ce4e-44ae-a5c6-19bffdabb91c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.489 248514 INFO nova.virt.libvirt.driver [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deleting instance files /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524_del
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.490 248514 INFO nova.virt.libvirt.driver [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deletion of /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524_del complete
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.556 248514 INFO nova.compute.manager [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Took 0.64 seconds to destroy the instance on the hypervisor.
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.557 248514 DEBUG oslo.service.loopingcall [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.557 248514 DEBUG nova.compute.manager [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.558 248514 DEBUG nova.network.neutron [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:48:02 compute-0 nova_compute[248510]: 2025-12-13 08:48:02.795 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 13 08:48:03 compute-0 ceph-mon[76537]: pgmap v2569: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 22 KiB/s wr, 1 op/s
Dec 13 08:48:03 compute-0 nova_compute[248510]: 2025-12-13 08:48:03.529 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:48:03 compute-0 nova_compute[248510]: 2025-12-13 08:48:03.530 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:48:03 compute-0 nova_compute[248510]: 2025-12-13 08:48:03.530 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:48:03 compute-0 nova_compute[248510]: 2025-12-13 08:48:03.530 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:48:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:03.722 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:1f:a9 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062cf52f23fa4a85853b3a51dc81e171', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=496055dc-0896-49fa-bf70-209d5a08ecd5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=50178087-37ed-4900-b708-4cc10cbf6678) old=Port_Binding(mac=['fa:16:3e:92:1f:a9 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062cf52f23fa4a85853b3a51dc81e171', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:48:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:03.723 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 50178087-37ed-4900-b708-4cc10cbf6678 in datapath 15538243-d813-425c-a420-0747a4cf75d2 updated
Dec 13 08:48:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:03.724 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15538243-d813-425c-a420-0747a4cf75d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:48:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:03.725 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcf0719-cad5-4562-97af-819f072cd123]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 25 KiB/s wr, 29 op/s
Dec 13 08:48:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:48:04 compute-0 nova_compute[248510]: 2025-12-13 08:48:04.798 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:05 compute-0 nova_compute[248510]: 2025-12-13 08:48:05.135 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:05 compute-0 ceph-mon[76537]: pgmap v2570: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 25 KiB/s wr, 29 op/s
Dec 13 08:48:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:05.455 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:48:05 compute-0 nova_compute[248510]: 2025-12-13 08:48:05.456 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:05.457 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:48:05 compute-0 nova_compute[248510]: 2025-12-13 08:48:05.729 248514 DEBUG nova.network.neutron [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:48:05 compute-0 nova_compute[248510]: 2025-12-13 08:48:05.785 248514 INFO nova.compute.manager [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Took 3.23 seconds to deallocate network for instance.
Dec 13 08:48:05 compute-0 nova_compute[248510]: 2025-12-13 08:48:05.858 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:05 compute-0 nova_compute[248510]: 2025-12-13 08:48:05.858 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:05 compute-0 nova_compute[248510]: 2025-12-13 08:48:05.890 248514 DEBUG nova.compute.manager [req-acc0e784-9171-4a55-ac8a-a533f48bd852 req-730c725d-e9fe-4724-bb34-f6cecaa19430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-deleted-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:48:05 compute-0 nova_compute[248510]: 2025-12-13 08:48:05.983 248514 DEBUG oslo_concurrency.processutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:48:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 13 08:48:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:48:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914377555' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:48:06 compute-0 nova_compute[248510]: 2025-12-13 08:48:06.601 248514 DEBUG oslo_concurrency.processutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:48:06 compute-0 nova_compute[248510]: 2025-12-13 08:48:06.609 248514 DEBUG nova.compute.provider_tree [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:48:06 compute-0 nova_compute[248510]: 2025-12-13 08:48:06.644 248514 DEBUG nova.scheduler.client.report [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:48:06 compute-0 nova_compute[248510]: 2025-12-13 08:48:06.682 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:06 compute-0 nova_compute[248510]: 2025-12-13 08:48:06.712 248514 INFO nova.scheduler.client.report [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Deleted allocations for instance c3fb322f-a9db-4396-b659-2307698e5524
Dec 13 08:48:06 compute-0 nova_compute[248510]: 2025-12-13 08:48:06.802 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:06 compute-0 nova_compute[248510]: 2025-12-13 08:48:06.836 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:07 compute-0 nova_compute[248510]: 2025-12-13 08:48:07.099 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:48:07 compute-0 nova_compute[248510]: 2025-12-13 08:48:07.143 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:48:07 compute-0 nova_compute[248510]: 2025-12-13 08:48:07.144 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:48:07 compute-0 nova_compute[248510]: 2025-12-13 08:48:07.145 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:48:07 compute-0 nova_compute[248510]: 2025-12-13 08:48:07.204 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:07 compute-0 ceph-mon[76537]: pgmap v2571: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 13 08:48:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2914377555' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:48:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:07.459 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:48:07 compute-0 ovn_controller[148476]: 2025-12-13T08:48:07Z|01032|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 08:48:07 compute-0 nova_compute[248510]: 2025-12-13 08:48:07.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:07 compute-0 nova_compute[248510]: 2025-12-13 08:48:07.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:48:07 compute-0 nova_compute[248510]: 2025-12-13 08:48:07.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:07 compute-0 nova_compute[248510]: 2025-12-13 08:48:07.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:07 compute-0 nova_compute[248510]: 2025-12-13 08:48:07.812 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:07 compute-0 nova_compute[248510]: 2025-12-13 08:48:07.812 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:48:07 compute-0 nova_compute[248510]: 2025-12-13 08:48:07.812 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:48:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.041 248514 DEBUG nova.compute.manager [req-86008551-14e8-4cf9-b0cd-6c5f12299361 req-a32492ec-9ab7-44f4-a713-68d62fe42340 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.042 248514 DEBUG oslo_concurrency.lockutils [req-86008551-14e8-4cf9-b0cd-6c5f12299361 req-a32492ec-9ab7-44f4-a713-68d62fe42340 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.042 248514 DEBUG oslo_concurrency.lockutils [req-86008551-14e8-4cf9-b0cd-6c5f12299361 req-a32492ec-9ab7-44f4-a713-68d62fe42340 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.043 248514 DEBUG oslo_concurrency.lockutils [req-86008551-14e8-4cf9-b0cd-6c5f12299361 req-a32492ec-9ab7-44f4-a713-68d62fe42340 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.043 248514 DEBUG nova.compute.manager [req-86008551-14e8-4cf9-b0cd-6c5f12299361 req-a32492ec-9ab7-44f4-a713-68d62fe42340 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.043 248514 WARNING nova.compute.manager [req-86008551-14e8-4cf9-b0cd-6c5f12299361 req-a32492ec-9ab7-44f4-a713-68d62fe42340 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state deleted and task_state None.
Dec 13 08:48:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:48:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1566965958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.424 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:48:08 compute-0 ceph-mon[76537]: pgmap v2572: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 13 08:48:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1566965958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.808 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.809 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.971 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.972 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3528MB free_disk=59.941981153562665GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.973 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:08 compute-0 nova_compute[248510]: 2025-12-13 08:48:08.973 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:09 compute-0 nova_compute[248510]: 2025-12-13 08:48:09.051 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:48:09 compute-0 nova_compute[248510]: 2025-12-13 08:48:09.052 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:48:09 compute-0 nova_compute[248510]: 2025-12-13 08:48:09.052 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:48:09 compute-0 nova_compute[248510]: 2025-12-13 08:48:09.096 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:48:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:48:09
Dec 13 08:48:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:48:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:48:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'images']
Dec 13 08:48:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:48:09 compute-0 ovn_controller[148476]: 2025-12-13T08:48:09Z|01033|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 08:48:09 compute-0 nova_compute[248510]: 2025-12-13 08:48:09.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:48:09 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/852006070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:48:09 compute-0 nova_compute[248510]: 2025-12-13 08:48:09.693 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:48:09 compute-0 nova_compute[248510]: 2025-12-13 08:48:09.699 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:48:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/852006070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:48:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:48:10 compute-0 nova_compute[248510]: 2025-12-13 08:48:10.010 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:48:10 compute-0 nova_compute[248510]: 2025-12-13 08:48:10.243 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:48:10 compute-0 nova_compute[248510]: 2025-12-13 08:48:10.244 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:10 compute-0 ceph-mon[76537]: pgmap v2573: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:48:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:48:11 compute-0 nova_compute[248510]: 2025-12-13 08:48:11.245 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:48:11 compute-0 nova_compute[248510]: 2025-12-13 08:48:11.250 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:48:11 compute-0 nova_compute[248510]: 2025-12-13 08:48:11.250 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:48:11 compute-0 nova_compute[248510]: 2025-12-13 08:48:11.250 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:48:11 compute-0 nova_compute[248510]: 2025-12-13 08:48:11.804 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 13 08:48:12 compute-0 nova_compute[248510]: 2025-12-13 08:48:12.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:12 compute-0 nova_compute[248510]: 2025-12-13 08:48:12.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:13 compute-0 ceph-mon[76537]: pgmap v2574: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 13 08:48:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 13 08:48:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:48:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:48:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2807869102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:48:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:48:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2807869102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:48:15 compute-0 ceph-mon[76537]: pgmap v2575: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 13 08:48:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2807869102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:48:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2807869102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:48:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:48:16 compute-0 ovn_controller[148476]: 2025-12-13T08:48:16Z|01034|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 08:48:16 compute-0 nova_compute[248510]: 2025-12-13 08:48:16.831 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:17 compute-0 ceph-mon[76537]: pgmap v2576: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:48:17 compute-0 nova_compute[248510]: 2025-12-13 08:48:17.163 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615682.1614242, c3fb322f-a9db-4396-b659-2307698e5524 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:48:17 compute-0 nova_compute[248510]: 2025-12-13 08:48:17.163 248514 INFO nova.compute.manager [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Stopped (Lifecycle Event)
Dec 13 08:48:17 compute-0 nova_compute[248510]: 2025-12-13 08:48:17.194 248514 DEBUG nova.compute.manager [None req-bc299604-a0ba-4be8-9bda-478d8b4323df - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:48:17 compute-0 nova_compute[248510]: 2025-12-13 08:48:17.207 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:17 compute-0 nova_compute[248510]: 2025-12-13 08:48:17.776 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:48:17 compute-0 podman[352695]: 2025-12-13 08:48:17.969820446 +0000 UTC m=+0.053059813 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 08:48:17 compute-0 podman[352694]: 2025-12-13 08:48:17.976886313 +0000 UTC m=+0.061534635 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 08:48:18 compute-0 podman[352693]: 2025-12-13 08:48:18.006017354 +0000 UTC m=+0.092385450 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec 13 08:48:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Dec 13 08:48:18 compute-0 nova_compute[248510]: 2025-12-13 08:48:18.884 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:18 compute-0 nova_compute[248510]: 2025-12-13 08:48:18.885 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:18 compute-0 nova_compute[248510]: 2025-12-13 08:48:18.941 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.034 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.035 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.041 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.042 248514 INFO nova.compute.claims [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:48:19 compute-0 ceph-mon[76537]: pgmap v2577: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.180 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:48:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:48:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636471413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.753 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.759 248514 DEBUG nova.compute.provider_tree [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:48:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.785 248514 DEBUG nova.scheduler.client.report [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.822 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.823 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.894 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.895 248514 DEBUG nova.network.neutron [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.924 248514 INFO nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:48:19 compute-0 nova_compute[248510]: 2025-12-13 08:48:19.947 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:48:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.066 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.068 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.068 248514 INFO nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Creating image(s)
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.087 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.104 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:48:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3636471413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.134 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.139 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.225 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.226 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.227 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.227 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.249 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.252 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 99100320-043d-4f13-ac93-5fd3309abbf7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.447 248514 DEBUG nova.policy [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '649a4118d92a4ee68ff645ddec797a5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b75a9df2d3584458bf4c9c127010a4d1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.569 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 99100320-043d-4f13-ac93-5fd3309abbf7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.641 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] resizing rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.709 248514 DEBUG nova.objects.instance [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 99100320-043d-4f13-ac93-5fd3309abbf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.728 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.728 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Ensure instance console log exists: /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.728 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.729 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:20 compute-0 nova_compute[248510]: 2025-12-13 08:48:20.729 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:20.954 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:1f:a9 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062cf52f23fa4a85853b3a51dc81e171', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=496055dc-0896-49fa-bf70-209d5a08ecd5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=50178087-37ed-4900-b708-4cc10cbf6678) old=Port_Binding(mac=['fa:16:3e:92:1f:a9 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062cf52f23fa4a85853b3a51dc81e171', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:48:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:20.956 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 50178087-37ed-4900-b708-4cc10cbf6678 in datapath 15538243-d813-425c-a420-0747a4cf75d2 updated
Dec 13 08:48:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:20.957 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15538243-d813-425c-a420-0747a4cf75d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:48:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:20.958 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f77e214-20e6-4560-bfa1-11c51348e617]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007718826105657202 of space, bias 1.0, pg target 0.23156478316971604 quantized to 32 (current 32)
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006682139988669646 of space, bias 1.0, pg target 0.20046419966008938 quantized to 32 (current 32)
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.841775278005856e-07 of space, bias 4.0, pg target 0.0007010130333607028 quantized to 16 (current 32)
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:48:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:48:21 compute-0 ceph-mon[76537]: pgmap v2578: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Dec 13 08:48:21 compute-0 nova_compute[248510]: 2025-12-13 08:48:21.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Dec 13 08:48:22 compute-0 nova_compute[248510]: 2025-12-13 08:48:22.184 248514 DEBUG nova.network.neutron [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Successfully created port: 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:48:22 compute-0 nova_compute[248510]: 2025-12-13 08:48:22.208 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:23 compute-0 ceph-mon[76537]: pgmap v2579: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Dec 13 08:48:23 compute-0 nova_compute[248510]: 2025-12-13 08:48:23.858 248514 DEBUG nova.network.neutron [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Successfully updated port: 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:48:23 compute-0 nova_compute[248510]: 2025-12-13 08:48:23.893 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:48:23 compute-0 nova_compute[248510]: 2025-12-13 08:48:23.894 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquired lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:48:23 compute-0 nova_compute[248510]: 2025-12-13 08:48:23.894 248514 DEBUG nova.network.neutron [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:48:24 compute-0 nova_compute[248510]: 2025-12-13 08:48:24.027 248514 DEBUG nova.compute.manager [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-changed-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:48:24 compute-0 nova_compute[248510]: 2025-12-13 08:48:24.028 248514 DEBUG nova.compute.manager [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Refreshing instance network info cache due to event network-changed-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:48:24 compute-0 nova_compute[248510]: 2025-12-13 08:48:24.028 248514 DEBUG oslo_concurrency.lockutils [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:48:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:48:24 compute-0 nova_compute[248510]: 2025-12-13 08:48:24.572 248514 DEBUG nova.network.neutron [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:48:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:48:25 compute-0 ceph-mon[76537]: pgmap v2580: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:48:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.515 248514 DEBUG nova.network.neutron [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updating instance_info_cache with network_info: [{"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.575 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Releasing lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.575 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Instance network_info: |[{"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.576 248514 DEBUG oslo_concurrency.lockutils [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.577 248514 DEBUG nova.network.neutron [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Refreshing network info cache for port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.582 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Start _get_guest_xml network_info=[{"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.587 248514 WARNING nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.593 248514 DEBUG nova.virt.libvirt.host [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.594 248514 DEBUG nova.virt.libvirt.host [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.599 248514 DEBUG nova.virt.libvirt.host [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.599 248514 DEBUG nova.virt.libvirt.host [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.600 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.601 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.602 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.603 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.603 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.604 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.604 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.605 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.606 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.606 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.607 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.608 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.615 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:48:26 compute-0 nova_compute[248510]: 2025-12-13 08:48:26.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:48:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4241298965' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.187 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.212 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.216 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:48:27 compute-0 ceph-mon[76537]: pgmap v2581: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:48:27 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4241298965' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.252 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:48:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4131002482' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.821 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.822 248514 DEBUG nova.virt.libvirt.vif [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:48:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-647979759-acc',id=106,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAq9JxVWdGwadoCAlDLVLF/FgbwuunvGPYPWGxv9o2qZSwXgRBKF+h53qswJupwtL+dZgpz/rFzjqXvS7XDDi2cr6DR4JG28HfzJLgzD6wSuJyxP2VMxhs/n7K+Z53vcIw==',key_name='tempest-TestSecurityGroupsBasicOps-1306110225',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b75a9df2d3584458bf4c9c127010a4d1',ramdisk_id='',reservation_id='r-vazklzs4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-647979759',owner_user_name='tempest-TestSecurityGroupsBasicOps-647979759-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:48:19Z,user_data=None,user_id='649a4118d92a4ee68ff645ddec797a5a',uuid=99100320-043d-4f13-ac93-5fd3309abbf7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.823 248514 DEBUG nova.network.os_vif_util [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Converting VIF {"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.824 248514 DEBUG nova.network.os_vif_util [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.825 248514 DEBUG nova.objects.instance [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 99100320-043d-4f13-ac93-5fd3309abbf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.854 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:48:27 compute-0 nova_compute[248510]:   <uuid>99100320-043d-4f13-ac93-5fd3309abbf7</uuid>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   <name>instance-0000006a</name>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991</nova:name>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:48:26</nova:creationTime>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:48:27 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:48:27 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:48:27 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:48:27 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:48:27 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:48:27 compute-0 nova_compute[248510]:         <nova:user uuid="649a4118d92a4ee68ff645ddec797a5a">tempest-TestSecurityGroupsBasicOps-647979759-project-member</nova:user>
Dec 13 08:48:27 compute-0 nova_compute[248510]:         <nova:project uuid="b75a9df2d3584458bf4c9c127010a4d1">tempest-TestSecurityGroupsBasicOps-647979759</nova:project>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:48:27 compute-0 nova_compute[248510]:         <nova:port uuid="5027cfa3-f4ed-4668-87ec-ebfe75f4fb14">
Dec 13 08:48:27 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <system>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <entry name="serial">99100320-043d-4f13-ac93-5fd3309abbf7</entry>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <entry name="uuid">99100320-043d-4f13-ac93-5fd3309abbf7</entry>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     </system>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   <os>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   </os>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   <features>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   </features>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/99100320-043d-4f13-ac93-5fd3309abbf7_disk">
Dec 13 08:48:27 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       </source>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:48:27 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/99100320-043d-4f13-ac93-5fd3309abbf7_disk.config">
Dec 13 08:48:27 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       </source>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:48:27 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:df:76:90"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <target dev="tap5027cfa3-f4"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/console.log" append="off"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <video>
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     </video>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:48:27 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:48:27 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:48:27 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:48:27 compute-0 nova_compute[248510]: </domain>
Dec 13 08:48:27 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.855 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Preparing to wait for external event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.855 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.855 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.855 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.856 248514 DEBUG nova.virt.libvirt.vif [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:48:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-647979759-acc',id=106,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAq9JxVWdGwadoCAlDLVLF/FgbwuunvGPYPWGxv9o2qZSwXgRBKF+h53qswJupwtL+dZgpz/rFzjqXvS7XDDi2cr6DR4JG28HfzJLgzD6wSuJyxP2VMxhs/n7K+Z53vcIw==',key_name='tempest-TestSecurityGroupsBasicOps-1306110225',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b75a9df2d3584458bf4c9c127010a4d1',ramdisk_id='',reservation_id='r-vazklzs4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-647979759',owner_user_name='tempest-TestSecurityGroupsBasicOps-647979759-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:48:19Z,user_data=None,user_id='649a4118d92a4ee68ff645ddec797a5a',uuid=99100320-043d-4f13-ac93-5fd3309abbf7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.856 248514 DEBUG nova.network.os_vif_util [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Converting VIF {"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.857 248514 DEBUG nova.network.os_vif_util [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.857 248514 DEBUG os_vif [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.858 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.858 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.861 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5027cfa3-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.861 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5027cfa3-f4, col_values=(('external_ids', {'iface-id': '5027cfa3-f4ed-4668-87ec-ebfe75f4fb14', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:76:90', 'vm-uuid': '99100320-043d-4f13-ac93-5fd3309abbf7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:27 compute-0 NetworkManager[50376]: <info>  [1765615707.9182] manager: (tap5027cfa3-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.920 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.926 248514 INFO os_vif [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4')
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.984 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.984 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.984 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] No VIF found with MAC fa:16:3e:df:76:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:48:27 compute-0 nova_compute[248510]: 2025-12-13 08:48:27.985 248514 INFO nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Using config drive
Dec 13 08:48:28 compute-0 nova_compute[248510]: 2025-12-13 08:48:28.003 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:48:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:48:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4131002482' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:48:28 compute-0 nova_compute[248510]: 2025-12-13 08:48:28.938 248514 INFO nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Creating config drive at /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/disk.config
Dec 13 08:48:28 compute-0 nova_compute[248510]: 2025-12-13 08:48:28.943 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgbhu9dq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.087 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgbhu9dq" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.120 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.125 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/disk.config 99100320-043d-4f13-ac93-5fd3309abbf7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:48:29 compute-0 ceph-mon[76537]: pgmap v2582: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.286 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/disk.config 99100320-043d-4f13-ac93-5fd3309abbf7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.287 248514 INFO nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Deleting local config drive /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/disk.config because it was imported into RBD.
Dec 13 08:48:29 compute-0 kernel: tap5027cfa3-f4: entered promiscuous mode
Dec 13 08:48:29 compute-0 NetworkManager[50376]: <info>  [1765615709.3361] manager: (tap5027cfa3-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/427)
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.391 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:29 compute-0 ovn_controller[148476]: 2025-12-13T08:48:29Z|01035|binding|INFO|Claiming lport 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 for this chassis.
Dec 13 08:48:29 compute-0 ovn_controller[148476]: 2025-12-13T08:48:29Z|01036|binding|INFO|5027cfa3-f4ed-4668-87ec-ebfe75f4fb14: Claiming fa:16:3e:df:76:90 10.100.0.10
Dec 13 08:48:29 compute-0 systemd-udevd[353075]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.401 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:76:90 10.100.0.10'], port_security=['fa:16:3e:df:76:90 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '99100320-043d-4f13-ac93-5fd3309abbf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b75a9df2d3584458bf4c9c127010a4d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2a4d558-076e-40f7-a16c-096cc56882f7 fa0ae845-58e9-4e37-8129-696ba9497adb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e9afcf2-3319-47a2-a247-8f6d1e6c6308, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.403 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 in datapath 8a649c29-105b-45ad-91b4-9f7a7c58b419 bound to our chassis
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.405 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a649c29-105b-45ad-91b4-9f7a7c58b419
Dec 13 08:48:29 compute-0 NetworkManager[50376]: <info>  [1765615709.4108] device (tap5027cfa3-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:48:29 compute-0 NetworkManager[50376]: <info>  [1765615709.4124] device (tap5027cfa3-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:48:29 compute-0 ovn_controller[148476]: 2025-12-13T08:48:29Z|01037|binding|INFO|Setting lport 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 ovn-installed in OVS
Dec 13 08:48:29 compute-0 ovn_controller[148476]: 2025-12-13T08:48:29Z|01038|binding|INFO|Setting lport 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 up in Southbound
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.421 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5dc2d2-42e8-4156-ae38-7633750c70f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.422 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8a649c29-11 in ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:48:29 compute-0 systemd-machined[210538]: New machine qemu-132-instance-0000006a.
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.425 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8a649c29-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.426 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[88f6a94d-68f1-46ab-8dbf-a84d56d2c68d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.427 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e5f78f-3130-42da-bc43-ca7fd18d3cd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 systemd[1]: Started Virtual Machine qemu-132-instance-0000006a.
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.439 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[46cf6f70-5934-46a5-a5a5-195337b74296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.461 248514 DEBUG nova.network.neutron [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updated VIF entry in instance network info cache for port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.461 248514 DEBUG nova.network.neutron [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updating instance_info_cache with network_info: [{"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.465 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd65d10-40a9-4742-b542-0f05cc210951]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.480 248514 DEBUG oslo_concurrency.lockutils [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.504 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5e0ef653-65bc-4556-a808-87ea77011920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.511 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dabd3144-6a58-4b44-ac27-7632d2e471e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 NetworkManager[50376]: <info>  [1765615709.5126] manager: (tap8a649c29-10): new Veth device (/org/freedesktop/NetworkManager/Devices/428)
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.549 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e64eac0e-d73d-48ec-9aad-ee602cd23309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.553 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[33a53044-cbb4-4684-b28d-955e2c9c836c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 NetworkManager[50376]: <info>  [1765615709.5751] device (tap8a649c29-10): carrier: link connected
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.580 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4da2660f-fac9-4053-8234-21312867e3e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.597 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[afd7071b-8bd1-42c3-9097-b56999388957]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a649c29-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:39:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 828679, 'reachable_time': 44018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353113, 'error': None, 'target': 'ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.612 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af2f2617-10bb-493f-96d2-0eb0b64a1342]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:3988'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 828679, 'tstamp': 828679}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353114, 'error': None, 'target': 'ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.627 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[614ef3ff-a4d6-4691-a804-c7585918e8d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a649c29-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:39:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 828679, 'reachable_time': 44018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353115, 'error': None, 'target': 'ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.661 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f45bf186-602a-4d22-96d1-43a0616ccef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 kernel: tap8a649c29-10: entered promiscuous mode
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.722 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[73772902-0529-4dc0-b899-b8e93dd6a121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.723 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a649c29-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.723 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.724 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a649c29-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.729 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a649c29-10, col_values=(('external_ids', {'iface-id': '7a1c6f9d-5534-4f2e-8492-9a55e2589a5b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.730 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:29 compute-0 ovn_controller[148476]: 2025-12-13T08:48:29Z|01039|binding|INFO|Releasing lport 7a1c6f9d-5534-4f2e-8492-9a55e2589a5b from this chassis (sb_readonly=0)
Dec 13 08:48:29 compute-0 NetworkManager[50376]: <info>  [1765615709.7357] manager: (tap8a649c29-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Dec 13 08:48:29 compute-0 nova_compute[248510]: 2025-12-13 08:48:29.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.747 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8a649c29-105b-45ad-91b4-9f7a7c58b419.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8a649c29-105b-45ad-91b4-9f7a7c58b419.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7a2832-f67f-4c2b-959a-9d2d51a02268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.748 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-8a649c29-105b-45ad-91b4-9f7a7c58b419
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/8a649c29-105b-45ad-91b4-9f7a7c58b419.pid.haproxy
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 8a649c29-105b-45ad-91b4-9f7a7c58b419
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:48:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.749 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'env', 'PROCESS_TAG=haproxy-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8a649c29-105b-45ad-91b4-9f7a7c58b419.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:48:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:48:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 08:48:30 compute-0 podman[353145]: 2025-12-13 08:48:30.114693411 +0000 UTC m=+0.052016726 container create 1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 08:48:30 compute-0 systemd[1]: Started libpod-conmon-1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429.scope.
Dec 13 08:48:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:48:30 compute-0 podman[353145]: 2025-12-13 08:48:30.086386341 +0000 UTC m=+0.023709676 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7930ac85e7195342e64df4fba87493093ddc740f0c0e309c4994565a0f79486e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:30 compute-0 podman[353145]: 2025-12-13 08:48:30.205133091 +0000 UTC m=+0.142456416 container init 1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:48:30 compute-0 podman[353145]: 2025-12-13 08:48:30.211850489 +0000 UTC m=+0.149173804 container start 1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:48:30 compute-0 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [NOTICE]   (353164) : New worker (353166) forked
Dec 13 08:48:30 compute-0 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [NOTICE]   (353164) : Loading success.
Dec 13 08:48:30 compute-0 nova_compute[248510]: 2025-12-13 08:48:30.384 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615710.3838782, 99100320-043d-4f13-ac93-5fd3309abbf7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:48:30 compute-0 nova_compute[248510]: 2025-12-13 08:48:30.385 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] VM Started (Lifecycle Event)
Dec 13 08:48:30 compute-0 nova_compute[248510]: 2025-12-13 08:48:30.411 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:48:30 compute-0 nova_compute[248510]: 2025-12-13 08:48:30.415 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615710.3844368, 99100320-043d-4f13-ac93-5fd3309abbf7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:48:30 compute-0 nova_compute[248510]: 2025-12-13 08:48:30.415 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] VM Paused (Lifecycle Event)
Dec 13 08:48:30 compute-0 nova_compute[248510]: 2025-12-13 08:48:30.437 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:48:30 compute-0 nova_compute[248510]: 2025-12-13 08:48:30.441 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:48:30 compute-0 nova_compute[248510]: 2025-12-13 08:48:30.467 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:48:31 compute-0 ceph-mon[76537]: pgmap v2583: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 08:48:31 compute-0 nova_compute[248510]: 2025-12-13 08:48:31.836 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 08:48:32 compute-0 nova_compute[248510]: 2025-12-13 08:48:32.951 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:33 compute-0 ceph-mon[76537]: pgmap v2584: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 08:48:33 compute-0 sshd-session[353217]: Invalid user sol from 193.32.162.146 port 39110
Dec 13 08:48:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 13 08:48:34 compute-0 sshd-session[353217]: Connection closed by invalid user sol 193.32.162.146 port 39110 [preauth]
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.674 248514 DEBUG nova.compute.manager [req-ade74023-ed04-466c-8ed7-f35df06a885a req-27fc161a-d3b7-4e0c-9a92-e8d25467bd3d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.674 248514 DEBUG oslo_concurrency.lockutils [req-ade74023-ed04-466c-8ed7-f35df06a885a req-27fc161a-d3b7-4e0c-9a92-e8d25467bd3d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.675 248514 DEBUG oslo_concurrency.lockutils [req-ade74023-ed04-466c-8ed7-f35df06a885a req-27fc161a-d3b7-4e0c-9a92-e8d25467bd3d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.675 248514 DEBUG oslo_concurrency.lockutils [req-ade74023-ed04-466c-8ed7-f35df06a885a req-27fc161a-d3b7-4e0c-9a92-e8d25467bd3d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.675 248514 DEBUG nova.compute.manager [req-ade74023-ed04-466c-8ed7-f35df06a885a req-27fc161a-d3b7-4e0c-9a92-e8d25467bd3d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Processing event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.676 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.678 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615714.6786327, 99100320-043d-4f13-ac93-5fd3309abbf7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.679 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] VM Resumed (Lifecycle Event)
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.681 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.683 248514 INFO nova.virt.libvirt.driver [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Instance spawned successfully.
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.683 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.749 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.754 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.758 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.758 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.759 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.759 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.759 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.760 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:48:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.794 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.920 248514 INFO nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Took 14.85 seconds to spawn the instance on the hypervisor.
Dec 13 08:48:34 compute-0 nova_compute[248510]: 2025-12-13 08:48:34.921 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:48:35 compute-0 nova_compute[248510]: 2025-12-13 08:48:35.021 248514 INFO nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Took 16.02 seconds to build instance.
Dec 13 08:48:35 compute-0 nova_compute[248510]: 2025-12-13 08:48:35.064 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:35 compute-0 ceph-mon[76537]: pgmap v2585: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 13 08:48:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 21 KiB/s wr, 11 op/s
Dec 13 08:48:36 compute-0 ceph-mon[76537]: pgmap v2586: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 21 KiB/s wr, 11 op/s
Dec 13 08:48:36 compute-0 nova_compute[248510]: 2025-12-13 08:48:36.838 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:36 compute-0 nova_compute[248510]: 2025-12-13 08:48:36.881 248514 DEBUG nova.compute.manager [req-ce73764a-947c-49d9-bfe3-4ad5c729bbd7 req-63721c7a-b316-4a14-86b9-2f63f879905f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:48:36 compute-0 nova_compute[248510]: 2025-12-13 08:48:36.881 248514 DEBUG oslo_concurrency.lockutils [req-ce73764a-947c-49d9-bfe3-4ad5c729bbd7 req-63721c7a-b316-4a14-86b9-2f63f879905f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:36 compute-0 nova_compute[248510]: 2025-12-13 08:48:36.882 248514 DEBUG oslo_concurrency.lockutils [req-ce73764a-947c-49d9-bfe3-4ad5c729bbd7 req-63721c7a-b316-4a14-86b9-2f63f879905f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:36 compute-0 nova_compute[248510]: 2025-12-13 08:48:36.882 248514 DEBUG oslo_concurrency.lockutils [req-ce73764a-947c-49d9-bfe3-4ad5c729bbd7 req-63721c7a-b316-4a14-86b9-2f63f879905f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:36 compute-0 nova_compute[248510]: 2025-12-13 08:48:36.883 248514 DEBUG nova.compute.manager [req-ce73764a-947c-49d9-bfe3-4ad5c729bbd7 req-63721c7a-b316-4a14-86b9-2f63f879905f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] No waiting events found dispatching network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:48:36 compute-0 nova_compute[248510]: 2025-12-13 08:48:36.883 248514 WARNING nova.compute.manager [req-ce73764a-947c-49d9-bfe3-4ad5c729bbd7 req-63721c7a-b316-4a14-86b9-2f63f879905f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received unexpected event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 for instance with vm_state active and task_state None.
Dec 13 08:48:38 compute-0 nova_compute[248510]: 2025-12-13 08:48:38.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 954 KiB/s rd, 21 KiB/s wr, 41 op/s
Dec 13 08:48:39 compute-0 ceph-mon[76537]: pgmap v2587: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 954 KiB/s rd, 21 KiB/s wr, 41 op/s
Dec 13 08:48:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:48:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Dec 13 08:48:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:48:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:48:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:48:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:48:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:48:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:48:41 compute-0 ceph-mon[76537]: pgmap v2588: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Dec 13 08:48:41 compute-0 nova_compute[248510]: 2025-12-13 08:48:41.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:48:42 compute-0 nova_compute[248510]: 2025-12-13 08:48:42.147 248514 DEBUG nova.compute.manager [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-changed-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:48:42 compute-0 nova_compute[248510]: 2025-12-13 08:48:42.148 248514 DEBUG nova.compute.manager [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Refreshing instance network info cache due to event network-changed-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:48:42 compute-0 nova_compute[248510]: 2025-12-13 08:48:42.148 248514 DEBUG oslo_concurrency.lockutils [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:48:42 compute-0 nova_compute[248510]: 2025-12-13 08:48:42.148 248514 DEBUG oslo_concurrency.lockutils [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:48:42 compute-0 nova_compute[248510]: 2025-12-13 08:48:42.149 248514 DEBUG nova.network.neutron [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Refreshing network info cache for port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:48:42 compute-0 ceph-mon[76537]: pgmap v2589: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:48:43 compute-0 nova_compute[248510]: 2025-12-13 08:48:43.008 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:48:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:48:44 compute-0 nova_compute[248510]: 2025-12-13 08:48:44.863 248514 DEBUG nova.network.neutron [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updated VIF entry in instance network info cache for port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:48:44 compute-0 nova_compute[248510]: 2025-12-13 08:48:44.864 248514 DEBUG nova.network.neutron [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updating instance_info_cache with network_info: [{"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:48:44 compute-0 nova_compute[248510]: 2025-12-13 08:48:44.887 248514 DEBUG oslo_concurrency.lockutils [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:48:45 compute-0 ceph-mon[76537]: pgmap v2590: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:48:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 13 08:48:46 compute-0 ovn_controller[148476]: 2025-12-13T08:48:46Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:df:76:90 10.100.0.10
Dec 13 08:48:46 compute-0 ovn_controller[148476]: 2025-12-13T08:48:46Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:76:90 10.100.0.10
Dec 13 08:48:46 compute-0 nova_compute[248510]: 2025-12-13 08:48:46.843 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:47 compute-0 ceph-mon[76537]: pgmap v2591: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 13 08:48:48 compute-0 nova_compute[248510]: 2025-12-13 08:48:48.010 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 174 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 706 KiB/s wr, 82 op/s
Dec 13 08:48:48 compute-0 podman[353222]: 2025-12-13 08:48:48.986845025 +0000 UTC m=+0.066708316 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:48:48 compute-0 podman[353221]: 2025-12-13 08:48:48.991479661 +0000 UTC m=+0.071534116 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 13 08:48:49 compute-0 podman[353220]: 2025-12-13 08:48:49.011007281 +0000 UTC m=+0.104342400 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:48:49 compute-0 ceph-mon[76537]: pgmap v2592: 321 pgs: 321 active+clean; 174 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 706 KiB/s wr, 82 op/s
Dec 13 08:48:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:48:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Dec 13 08:48:51 compute-0 ceph-mon[76537]: pgmap v2593: 321 pgs: 321 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Dec 13 08:48:51 compute-0 nova_compute[248510]: 2025-12-13 08:48:51.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 08:48:52 compute-0 sudo[353281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:48:52 compute-0 sudo[353281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:48:52 compute-0 sudo[353281]: pam_unix(sudo:session): session closed for user root
Dec 13 08:48:52 compute-0 sudo[353306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:48:52 compute-0 sudo[353306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:48:53 compute-0 nova_compute[248510]: 2025-12-13 08:48:53.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:53 compute-0 sudo[353306]: pam_unix(sudo:session): session closed for user root
Dec 13 08:48:53 compute-0 ceph-mon[76537]: pgmap v2594: 321 pgs: 321 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 08:48:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:48:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:48:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:48:53 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:48:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:48:53 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:48:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:48:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:48:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:48:53 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:48:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:48:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:48:53 compute-0 sudo[353363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:48:53 compute-0 sudo[353363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:48:53 compute-0 sudo[353363]: pam_unix(sudo:session): session closed for user root
Dec 13 08:48:53 compute-0 sudo[353388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:48:53 compute-0 sudo[353388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:48:53 compute-0 podman[353425]: 2025-12-13 08:48:53.664244446 +0000 UTC m=+0.041411469 container create 318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 08:48:53 compute-0 systemd[1]: Started libpod-conmon-318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80.scope.
Dec 13 08:48:53 compute-0 podman[353425]: 2025-12-13 08:48:53.64565211 +0000 UTC m=+0.022819163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:48:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:48:53 compute-0 podman[353425]: 2025-12-13 08:48:53.833283054 +0000 UTC m=+0.210450177 container init 318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 08:48:53 compute-0 podman[353425]: 2025-12-13 08:48:53.844153517 +0000 UTC m=+0.221320580 container start 318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:48:53 compute-0 podman[353425]: 2025-12-13 08:48:53.851383098 +0000 UTC m=+0.228550151 container attach 318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 08:48:53 compute-0 quirky_pare[353441]: 167 167
Dec 13 08:48:53 compute-0 systemd[1]: libpod-318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80.scope: Deactivated successfully.
Dec 13 08:48:53 compute-0 conmon[353441]: conmon 318a9775b505a30abb2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80.scope/container/memory.events
Dec 13 08:48:53 compute-0 podman[353425]: 2025-12-13 08:48:53.854319332 +0000 UTC m=+0.231486355 container died 318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 08:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5a70b09ab64113ba21e2d8ed4ce3d378893a6174d4deb25e7cfc39527f7b535-merged.mount: Deactivated successfully.
Dec 13 08:48:53 compute-0 podman[353425]: 2025-12-13 08:48:53.902234043 +0000 UTC m=+0.279401066 container remove 318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 08:48:53 compute-0 systemd[1]: libpod-conmon-318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80.scope: Deactivated successfully.
Dec 13 08:48:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 08:48:54 compute-0 podman[353465]: 2025-12-13 08:48:54.087927479 +0000 UTC m=+0.022871895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:48:54 compute-0 podman[353465]: 2025-12-13 08:48:54.188303556 +0000 UTC m=+0.123247922 container create 3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:48:54 compute-0 systemd[1]: Started libpod-conmon-3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e.scope.
Dec 13 08:48:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:48:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:48:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:48:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:48:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:48:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:48:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019ce3a5b8bc331aae88f36e7e57f277fd4dfb2bc0b669e0b31f236b4d348186/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019ce3a5b8bc331aae88f36e7e57f277fd4dfb2bc0b669e0b31f236b4d348186/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019ce3a5b8bc331aae88f36e7e57f277fd4dfb2bc0b669e0b31f236b4d348186/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019ce3a5b8bc331aae88f36e7e57f277fd4dfb2bc0b669e0b31f236b4d348186/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019ce3a5b8bc331aae88f36e7e57f277fd4dfb2bc0b669e0b31f236b4d348186/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:54 compute-0 podman[353465]: 2025-12-13 08:48:54.505992561 +0000 UTC m=+0.440936957 container init 3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:48:54 compute-0 podman[353465]: 2025-12-13 08:48:54.514420093 +0000 UTC m=+0.449364459 container start 3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wiles, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:48:54 compute-0 podman[353465]: 2025-12-13 08:48:54.523634364 +0000 UTC m=+0.458578740 container attach 3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:48:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:48:55 compute-0 confident_wiles[353481]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:48:55 compute-0 confident_wiles[353481]: --> All data devices are unavailable
Dec 13 08:48:55 compute-0 systemd[1]: libpod-3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e.scope: Deactivated successfully.
Dec 13 08:48:55 compute-0 podman[353465]: 2025-12-13 08:48:55.078960367 +0000 UTC m=+1.013904733 container died 3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-019ce3a5b8bc331aae88f36e7e57f277fd4dfb2bc0b669e0b31f236b4d348186-merged.mount: Deactivated successfully.
Dec 13 08:48:55 compute-0 podman[353465]: 2025-12-13 08:48:55.142654354 +0000 UTC m=+1.077598720 container remove 3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 08:48:55 compute-0 systemd[1]: libpod-conmon-3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e.scope: Deactivated successfully.
Dec 13 08:48:55 compute-0 sudo[353388]: pam_unix(sudo:session): session closed for user root
Dec 13 08:48:55 compute-0 ceph-mon[76537]: pgmap v2595: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 08:48:55 compute-0 sudo[353514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:48:55 compute-0 sudo[353514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:48:55 compute-0 sudo[353514]: pam_unix(sudo:session): session closed for user root
Dec 13 08:48:55 compute-0 sudo[353539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:48:55 compute-0 sudo[353539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:48:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:55.427 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:48:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:55.429 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:48:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:48:55.429 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:48:55 compute-0 podman[353577]: 2025-12-13 08:48:55.678692234 +0000 UTC m=+0.042448085 container create 4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:48:55 compute-0 systemd[1]: Started libpod-conmon-4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9.scope.
Dec 13 08:48:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:48:55 compute-0 podman[353577]: 2025-12-13 08:48:55.748593417 +0000 UTC m=+0.112349288 container init 4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:48:55 compute-0 podman[353577]: 2025-12-13 08:48:55.658388335 +0000 UTC m=+0.022144206 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:48:55 compute-0 podman[353577]: 2025-12-13 08:48:55.756090775 +0000 UTC m=+0.119846646 container start 4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hertz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 08:48:55 compute-0 podman[353577]: 2025-12-13 08:48:55.759213883 +0000 UTC m=+0.122969734 container attach 4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 08:48:55 compute-0 clever_hertz[353594]: 167 167
Dec 13 08:48:55 compute-0 systemd[1]: libpod-4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9.scope: Deactivated successfully.
Dec 13 08:48:55 compute-0 podman[353577]: 2025-12-13 08:48:55.760842414 +0000 UTC m=+0.124598255 container died 4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 08:48:55 compute-0 nova_compute[248510]: 2025-12-13 08:48:55.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d716b08ff5d2e9cfcf9b58877788931e2f2f205dee78bb27f4b82cda8bf932d0-merged.mount: Deactivated successfully.
Dec 13 08:48:55 compute-0 podman[353577]: 2025-12-13 08:48:55.797063252 +0000 UTC m=+0.160819093 container remove 4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hertz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:48:55 compute-0 systemd[1]: libpod-conmon-4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9.scope: Deactivated successfully.
Dec 13 08:48:55 compute-0 podman[353617]: 2025-12-13 08:48:55.979583638 +0000 UTC m=+0.042279791 container create ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chebyshev, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:48:56 compute-0 systemd[1]: Started libpod-conmon-ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8.scope.
Dec 13 08:48:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:48:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 08:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17b4524f8fd7c654732e03a2e50a31fe5b57cfd685eaae8f7e872c31a9d1f26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17b4524f8fd7c654732e03a2e50a31fe5b57cfd685eaae8f7e872c31a9d1f26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17b4524f8fd7c654732e03a2e50a31fe5b57cfd685eaae8f7e872c31a9d1f26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17b4524f8fd7c654732e03a2e50a31fe5b57cfd685eaae8f7e872c31a9d1f26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:56 compute-0 podman[353617]: 2025-12-13 08:48:55.963331671 +0000 UTC m=+0.026027844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:48:56 compute-0 podman[353617]: 2025-12-13 08:48:56.069703728 +0000 UTC m=+0.132399931 container init ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 08:48:56 compute-0 podman[353617]: 2025-12-13 08:48:56.077337669 +0000 UTC m=+0.140033822 container start ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chebyshev, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 08:48:56 compute-0 podman[353617]: 2025-12-13 08:48:56.08096691 +0000 UTC m=+0.143663073 container attach ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chebyshev, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]: {
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:     "0": [
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:         {
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "devices": [
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "/dev/loop3"
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             ],
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_name": "ceph_lv0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_size": "21470642176",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "name": "ceph_lv0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "tags": {
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.cluster_name": "ceph",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.crush_device_class": "",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.encrypted": "0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.objectstore": "bluestore",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.osd_id": "0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.type": "block",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.vdo": "0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.with_tpm": "0"
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             },
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "type": "block",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "vg_name": "ceph_vg0"
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:         }
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:     ],
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:     "1": [
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:         {
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "devices": [
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "/dev/loop4"
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             ],
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_name": "ceph_lv1",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_size": "21470642176",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "name": "ceph_lv1",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "tags": {
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.cluster_name": "ceph",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.crush_device_class": "",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.encrypted": "0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.objectstore": "bluestore",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.osd_id": "1",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.type": "block",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.vdo": "0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.with_tpm": "0"
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             },
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "type": "block",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "vg_name": "ceph_vg1"
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:         }
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:     ],
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:     "2": [
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:         {
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "devices": [
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "/dev/loop5"
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             ],
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_name": "ceph_lv2",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_size": "21470642176",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "name": "ceph_lv2",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "tags": {
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.cluster_name": "ceph",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.crush_device_class": "",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.encrypted": "0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.objectstore": "bluestore",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.osd_id": "2",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.type": "block",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.vdo": "0",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:                 "ceph.with_tpm": "0"
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             },
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "type": "block",
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:             "vg_name": "ceph_vg2"
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:         }
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]:     ]
Dec 13 08:48:56 compute-0 cool_chebyshev[353633]: }
Dec 13 08:48:56 compute-0 systemd[1]: libpod-ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8.scope: Deactivated successfully.
Dec 13 08:48:56 compute-0 podman[353642]: 2025-12-13 08:48:56.434541265 +0000 UTC m=+0.024949467 container died ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c17b4524f8fd7c654732e03a2e50a31fe5b57cfd685eaae8f7e872c31a9d1f26-merged.mount: Deactivated successfully.
Dec 13 08:48:56 compute-0 podman[353642]: 2025-12-13 08:48:56.475160333 +0000 UTC m=+0.065568525 container remove ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chebyshev, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 08:48:56 compute-0 systemd[1]: libpod-conmon-ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8.scope: Deactivated successfully.
Dec 13 08:48:56 compute-0 sudo[353539]: pam_unix(sudo:session): session closed for user root
Dec 13 08:48:56 compute-0 sudo[353657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:48:56 compute-0 sudo[353657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:48:56 compute-0 sudo[353657]: pam_unix(sudo:session): session closed for user root
Dec 13 08:48:56 compute-0 sudo[353682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:48:56 compute-0 sudo[353682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:48:56 compute-0 nova_compute[248510]: 2025-12-13 08:48:56.848 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:56 compute-0 podman[353717]: 2025-12-13 08:48:56.921093634 +0000 UTC m=+0.040500847 container create 2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lewin, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 08:48:56 compute-0 systemd[1]: Started libpod-conmon-2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36.scope.
Dec 13 08:48:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:48:56 compute-0 podman[353717]: 2025-12-13 08:48:56.902645831 +0000 UTC m=+0.022053064 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:48:57 compute-0 podman[353717]: 2025-12-13 08:48:57.000285929 +0000 UTC m=+0.119693172 container init 2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 08:48:57 compute-0 podman[353717]: 2025-12-13 08:48:57.00908445 +0000 UTC m=+0.128491663 container start 2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:48:57 compute-0 podman[353717]: 2025-12-13 08:48:57.013050909 +0000 UTC m=+0.132458182 container attach 2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lewin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:48:57 compute-0 zen_lewin[353734]: 167 167
Dec 13 08:48:57 compute-0 systemd[1]: libpod-2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36.scope: Deactivated successfully.
Dec 13 08:48:57 compute-0 podman[353739]: 2025-12-13 08:48:57.059543755 +0000 UTC m=+0.029102791 container died 2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:48:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d961f2016584af4e21d2ee14f666e1ee307e64512d42a05366b6f8bc0636d81e-merged.mount: Deactivated successfully.
Dec 13 08:48:57 compute-0 podman[353739]: 2025-12-13 08:48:57.093659891 +0000 UTC m=+0.063218887 container remove 2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lewin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 08:48:57 compute-0 systemd[1]: libpod-conmon-2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36.scope: Deactivated successfully.
Dec 13 08:48:57 compute-0 ceph-mon[76537]: pgmap v2596: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 08:48:57 compute-0 podman[353762]: 2025-12-13 08:48:57.279946371 +0000 UTC m=+0.040392893 container create f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 08:48:57 compute-0 systemd[1]: Started libpod-conmon-f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed.scope.
Dec 13 08:48:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55037d95bbfff40f9d28ffaf1d8603a419e00da9a81eaac3288efdc667492088/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:57 compute-0 podman[353762]: 2025-12-13 08:48:57.262775621 +0000 UTC m=+0.023222163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55037d95bbfff40f9d28ffaf1d8603a419e00da9a81eaac3288efdc667492088/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55037d95bbfff40f9d28ffaf1d8603a419e00da9a81eaac3288efdc667492088/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55037d95bbfff40f9d28ffaf1d8603a419e00da9a81eaac3288efdc667492088/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:48:57 compute-0 podman[353762]: 2025-12-13 08:48:57.377830496 +0000 UTC m=+0.138277048 container init f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:48:57 compute-0 podman[353762]: 2025-12-13 08:48:57.38439221 +0000 UTC m=+0.144838732 container start f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_meninsky, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 08:48:57 compute-0 podman[353762]: 2025-12-13 08:48:57.389126139 +0000 UTC m=+0.149572691 container attach f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:48:58 compute-0 nova_compute[248510]: 2025-12-13 08:48:58.016 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:48:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 08:48:58 compute-0 lvm[353858]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:48:58 compute-0 lvm[353857]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:48:58 compute-0 lvm[353858]: VG ceph_vg1 finished
Dec 13 08:48:58 compute-0 lvm[353857]: VG ceph_vg0 finished
Dec 13 08:48:58 compute-0 lvm[353860]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:48:58 compute-0 lvm[353860]: VG ceph_vg2 finished
Dec 13 08:48:58 compute-0 nervous_meninsky[353779]: {}
Dec 13 08:48:58 compute-0 systemd[1]: libpod-f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed.scope: Deactivated successfully.
Dec 13 08:48:58 compute-0 systemd[1]: libpod-f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed.scope: Consumed 1.385s CPU time.
Dec 13 08:48:58 compute-0 podman[353762]: 2025-12-13 08:48:58.24509456 +0000 UTC m=+1.005541112 container died f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_meninsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:48:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-55037d95bbfff40f9d28ffaf1d8603a419e00da9a81eaac3288efdc667492088-merged.mount: Deactivated successfully.
Dec 13 08:48:58 compute-0 podman[353762]: 2025-12-13 08:48:58.289794061 +0000 UTC m=+1.050240583 container remove f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_meninsky, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:48:58 compute-0 systemd[1]: libpod-conmon-f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed.scope: Deactivated successfully.
Dec 13 08:48:58 compute-0 sudo[353682]: pam_unix(sudo:session): session closed for user root
Dec 13 08:48:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:48:58 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:48:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:48:58 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:48:58 compute-0 sudo[353875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:48:58 compute-0 sudo[353875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:48:58 compute-0 sudo[353875]: pam_unix(sudo:session): session closed for user root
Dec 13 08:48:58 compute-0 nova_compute[248510]: 2025-12-13 08:48:58.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:48:59 compute-0 ceph-mon[76537]: pgmap v2597: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 08:48:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:48:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:48:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Dec 13 08:49:00 compute-0 ceph-mon[76537]: pgmap v2598: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Dec 13 08:49:01 compute-0 nova_compute[248510]: 2025-12-13 08:49:01.850 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Dec 13 08:49:03 compute-0 nova_compute[248510]: 2025-12-13 08:49:03.020 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:03 compute-0 ceph-mon[76537]: pgmap v2599: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Dec 13 08:49:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Dec 13 08:49:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:04 compute-0 nova_compute[248510]: 2025-12-13 08:49:04.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:49:04 compute-0 nova_compute[248510]: 2025-12-13 08:49:04.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:49:04 compute-0 nova_compute[248510]: 2025-12-13 08:49:04.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:49:05 compute-0 nova_compute[248510]: 2025-12-13 08:49:05.087 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:49:05 compute-0 nova_compute[248510]: 2025-12-13 08:49:05.087 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:49:05 compute-0 nova_compute[248510]: 2025-12-13 08:49:05.088 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:49:05 compute-0 nova_compute[248510]: 2025-12-13 08:49:05.088 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:49:05 compute-0 ceph-mon[76537]: pgmap v2600: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Dec 13 08:49:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 426 B/s wr, 1 op/s
Dec 13 08:49:06 compute-0 nova_compute[248510]: 2025-12-13 08:49:06.864 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:07 compute-0 ceph-mon[76537]: pgmap v2601: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 426 B/s wr, 1 op/s
Dec 13 08:49:07 compute-0 nova_compute[248510]: 2025-12-13 08:49:07.630 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:49:07 compute-0 nova_compute[248510]: 2025-12-13 08:49:07.658 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:49:07 compute-0 nova_compute[248510]: 2025-12-13 08:49:07.659 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:49:07 compute-0 nova_compute[248510]: 2025-12-13 08:49:07.660 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:49:07 compute-0 nova_compute[248510]: 2025-12-13 08:49:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:49:07 compute-0 nova_compute[248510]: 2025-12-13 08:49:07.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:07 compute-0 nova_compute[248510]: 2025-12-13 08:49:07.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:07 compute-0 nova_compute[248510]: 2025-12-13 08:49:07.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:07 compute-0 nova_compute[248510]: 2025-12-13 08:49:07.798 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:49:07 compute-0 nova_compute[248510]: 2025-12-13 08:49:07.799 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.024 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 1 op/s
Dec 13 08:49:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:08.326 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.327 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:08.327 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:49:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:49:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256843846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.424 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.535 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.536 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.539 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.540 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.709 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.710 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3299MB free_disk=59.89645113516599GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.710 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.711 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:08 compute-0 ceph-mon[76537]: pgmap v2602: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 1 op/s
Dec 13 08:49:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3256843846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.872 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.872 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 99100320-043d-4f13-ac93-5fd3309abbf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.872 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.873 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:49:08 compute-0 nova_compute[248510]: 2025-12-13 08:49:08.988 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.081 248514 DEBUG nova.compute.manager [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-changed-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.082 248514 DEBUG nova.compute.manager [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Refreshing instance network info cache due to event network-changed-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.082 248514 DEBUG oslo_concurrency.lockutils [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.082 248514 DEBUG oslo_concurrency.lockutils [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.083 248514 DEBUG nova.network.neutron [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Refreshing network info cache for port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.154 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.155 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.156 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.156 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.157 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.159 248514 INFO nova.compute.manager [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Terminating instance
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.161 248514 DEBUG nova.compute.manager [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:49:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:49:09
Dec 13 08:49:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:49:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:49:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.rgw.root', 'volumes', 'images', 'vms', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec 13 08:49:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:49:09 compute-0 kernel: tap5027cfa3-f4 (unregistering): left promiscuous mode
Dec 13 08:49:09 compute-0 NetworkManager[50376]: <info>  [1765615749.4769] device (tap5027cfa3-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:49:09 compute-0 ovn_controller[148476]: 2025-12-13T08:49:09Z|01040|binding|INFO|Releasing lport 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 from this chassis (sb_readonly=0)
Dec 13 08:49:09 compute-0 ovn_controller[148476]: 2025-12-13T08:49:09Z|01041|binding|INFO|Setting lport 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 down in Southbound
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.483 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:09 compute-0 ovn_controller[148476]: 2025-12-13T08:49:09Z|01042|binding|INFO|Removing iface tap5027cfa3-f4 ovn-installed in OVS
Dec 13 08:49:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:09.491 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:76:90 10.100.0.10'], port_security=['fa:16:3e:df:76:90 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '99100320-043d-4f13-ac93-5fd3309abbf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b75a9df2d3584458bf4c9c127010a4d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2a4d558-076e-40f7-a16c-096cc56882f7 fa0ae845-58e9-4e37-8129-696ba9497adb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e9afcf2-3319-47a2-a247-8f6d1e6c6308, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:49:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:09.492 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 in datapath 8a649c29-105b-45ad-91b4-9f7a7c58b419 unbound from our chassis
Dec 13 08:49:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:09.494 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8a649c29-105b-45ad-91b4-9f7a7c58b419, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:49:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:09.499 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[976c2973-c817-4966-a4b9-372c92116832]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.507 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:09.507 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419 namespace which is not needed anymore
Dec 13 08:49:09 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Dec 13 08:49:09 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006a.scope: Consumed 14.213s CPU time.
Dec 13 08:49:09 compute-0 systemd-machined[210538]: Machine qemu-132-instance-0000006a terminated.
Dec 13 08:49:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:49:09 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4062591854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.598 248514 INFO nova.virt.libvirt.driver [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Instance destroyed successfully.
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.599 248514 DEBUG nova.objects.instance [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lazy-loading 'resources' on Instance uuid 99100320-043d-4f13-ac93-5fd3309abbf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.617 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.620 248514 DEBUG nova.virt.libvirt.vif [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:48:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-647979759-acc',id=106,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAq9JxVWdGwadoCAlDLVLF/FgbwuunvGPYPWGxv9o2qZSwXgRBKF+h53qswJupwtL+dZgpz/rFzjqXvS7XDDi2cr6DR4JG28HfzJLgzD6wSuJyxP2VMxhs/n7K+Z53vcIw==',key_name='tempest-TestSecurityGroupsBasicOps-1306110225',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:48:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b75a9df2d3584458bf4c9c127010a4d1',ramdisk_id='',reservation_id='r-vazklzs4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-647979759',owner_user_name='tempest-TestSecurityGroupsBasicOps-647979759-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:48:34Z,user_data=None,user_id='649a4118d92a4ee68ff645ddec797a5a',uuid=99100320-043d-4f13-ac93-5fd3309abbf7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.621 248514 DEBUG nova.network.os_vif_util [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Converting VIF {"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.622 248514 DEBUG nova.network.os_vif_util [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.623 248514 DEBUG os_vif [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.626 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.626 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5027cfa3-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.628 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.633 248514 INFO os_vif [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4')
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.653 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.678 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.703 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.704 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.953 248514 DEBUG nova.compute.manager [req-8c1ea787-0a21-4a83-a737-6f7e9e9ffd3d req-8e9df576-00bb-4c30-b214-772d0626d64f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-vif-unplugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.954 248514 DEBUG oslo_concurrency.lockutils [req-8c1ea787-0a21-4a83-a737-6f7e9e9ffd3d req-8e9df576-00bb-4c30-b214-772d0626d64f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.954 248514 DEBUG oslo_concurrency.lockutils [req-8c1ea787-0a21-4a83-a737-6f7e9e9ffd3d req-8e9df576-00bb-4c30-b214-772d0626d64f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.954 248514 DEBUG oslo_concurrency.lockutils [req-8c1ea787-0a21-4a83-a737-6f7e9e9ffd3d req-8e9df576-00bb-4c30-b214-772d0626d64f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.955 248514 DEBUG nova.compute.manager [req-8c1ea787-0a21-4a83-a737-6f7e9e9ffd3d req-8e9df576-00bb-4c30-b214-772d0626d64f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] No waiting events found dispatching network-vif-unplugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:49:09 compute-0 nova_compute[248510]: 2025-12-13 08:49:09.955 248514 DEBUG nova.compute.manager [req-8c1ea787-0a21-4a83-a737-6f7e9e9ffd3d req-8e9df576-00bb-4c30-b214-772d0626d64f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-vif-unplugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:49:10 compute-0 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [NOTICE]   (353164) : haproxy version is 2.8.14-c23fe91
Dec 13 08:49:10 compute-0 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [NOTICE]   (353164) : path to executable is /usr/sbin/haproxy
Dec 13 08:49:10 compute-0 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [WARNING]  (353164) : Exiting Master process...
Dec 13 08:49:10 compute-0 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [ALERT]    (353164) : Current worker (353166) exited with code 143 (Terminated)
Dec 13 08:49:10 compute-0 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [WARNING]  (353164) : All workers exited. Exiting... (0)
Dec 13 08:49:10 compute-0 systemd[1]: libpod-1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429.scope: Deactivated successfully.
Dec 13 08:49:10 compute-0 podman[353977]: 2025-12-13 08:49:10.013116934 +0000 UTC m=+0.401948169 container died 1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:49:10 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4062591854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:49:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:49:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429-userdata-shm.mount: Deactivated successfully.
Dec 13 08:49:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7930ac85e7195342e64df4fba87493093ddc740f0c0e309c4994565a0f79486e-merged.mount: Deactivated successfully.
Dec 13 08:49:11 compute-0 podman[353977]: 2025-12-13 08:49:11.434290007 +0000 UTC m=+1.823121172 container cleanup 1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:49:11 compute-0 systemd[1]: libpod-conmon-1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429.scope: Deactivated successfully.
Dec 13 08:49:11 compute-0 nova_compute[248510]: 2025-12-13 08:49:11.627 248514 DEBUG nova.network.neutron [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updated VIF entry in instance network info cache for port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:49:11 compute-0 nova_compute[248510]: 2025-12-13 08:49:11.628 248514 DEBUG nova.network.neutron [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updating instance_info_cache with network_info: [{"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:49:11 compute-0 nova_compute[248510]: 2025-12-13 08:49:11.657 248514 DEBUG oslo_concurrency.lockutils [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:49:11 compute-0 nova_compute[248510]: 2025-12-13 08:49:11.704 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:49:11 compute-0 nova_compute[248510]: 2025-12-13 08:49:11.705 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:49:11 compute-0 nova_compute[248510]: 2025-12-13 08:49:11.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:49:11 compute-0 nova_compute[248510]: 2025-12-13 08:49:11.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:49:11 compute-0 ceph-mon[76537]: pgmap v2603: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Dec 13 08:49:11 compute-0 nova_compute[248510]: 2025-12-13 08:49:11.866 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:12 compute-0 podman[354024]: 2025-12-13 08:49:12.042491556 +0000 UTC m=+0.581671095 container remove 1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 08:49:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.052 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b606d539-ccfb-4c79-a6bf-4048e90c8fef]: (4, ('Sat Dec 13 08:49:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419 (1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429)\n1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429\nSat Dec 13 08:49:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419 (1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429)\n1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.054 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[399e8820-b24e-4f1f-a9f2-c99e9d05ff54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.056 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a649c29-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:49:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Dec 13 08:49:12 compute-0 nova_compute[248510]: 2025-12-13 08:49:12.058 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:12 compute-0 kernel: tap8a649c29-10: left promiscuous mode
Dec 13 08:49:12 compute-0 nova_compute[248510]: 2025-12-13 08:49:12.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.091 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5ff60f-5bca-4d9e-a759-9f7048ea578f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.106 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[18c849b1-4240-413c-b36c-9eacd5a4deac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.108 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f474818c-0431-45c9-8884-acc62d957b8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.128 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[40d61dab-a145-4058-84d0-e4e8fb7b46ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 828671, 'reachable_time': 22002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354040, 'error': None, 'target': 'ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.131 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:49:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.131 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dd747def-d8d4-4ac6-91cd-f694b2e215b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d8a649c29\x2d105b\x2d45ad\x2d91b4\x2d9f7a7c58b419.mount: Deactivated successfully.
Dec 13 08:49:12 compute-0 nova_compute[248510]: 2025-12-13 08:49:12.196 248514 DEBUG nova.compute.manager [req-9f2b85e1-bff9-452b-81b7-e3a5fe8a6398 req-8c6e794a-55ca-49dd-9faf-714fd2dd86b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:49:12 compute-0 nova_compute[248510]: 2025-12-13 08:49:12.197 248514 DEBUG oslo_concurrency.lockutils [req-9f2b85e1-bff9-452b-81b7-e3a5fe8a6398 req-8c6e794a-55ca-49dd-9faf-714fd2dd86b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:12 compute-0 nova_compute[248510]: 2025-12-13 08:49:12.197 248514 DEBUG oslo_concurrency.lockutils [req-9f2b85e1-bff9-452b-81b7-e3a5fe8a6398 req-8c6e794a-55ca-49dd-9faf-714fd2dd86b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:12 compute-0 nova_compute[248510]: 2025-12-13 08:49:12.198 248514 DEBUG oslo_concurrency.lockutils [req-9f2b85e1-bff9-452b-81b7-e3a5fe8a6398 req-8c6e794a-55ca-49dd-9faf-714fd2dd86b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:12 compute-0 nova_compute[248510]: 2025-12-13 08:49:12.198 248514 DEBUG nova.compute.manager [req-9f2b85e1-bff9-452b-81b7-e3a5fe8a6398 req-8c6e794a-55ca-49dd-9faf-714fd2dd86b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] No waiting events found dispatching network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:49:12 compute-0 nova_compute[248510]: 2025-12-13 08:49:12.198 248514 WARNING nova.compute.manager [req-9f2b85e1-bff9-452b-81b7-e3a5fe8a6398 req-8c6e794a-55ca-49dd-9faf-714fd2dd86b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received unexpected event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 for instance with vm_state active and task_state deleting.
Dec 13 08:49:12 compute-0 ceph-mon[76537]: pgmap v2604: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Dec 13 08:49:13 compute-0 nova_compute[248510]: 2025-12-13 08:49:13.382 248514 INFO nova.virt.libvirt.driver [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Deleting instance files /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7_del
Dec 13 08:49:13 compute-0 nova_compute[248510]: 2025-12-13 08:49:13.383 248514 INFO nova.virt.libvirt.driver [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Deletion of /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7_del complete
Dec 13 08:49:13 compute-0 nova_compute[248510]: 2025-12-13 08:49:13.456 248514 INFO nova.compute.manager [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Took 4.29 seconds to destroy the instance on the hypervisor.
Dec 13 08:49:13 compute-0 nova_compute[248510]: 2025-12-13 08:49:13.457 248514 DEBUG oslo.service.loopingcall [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:49:13 compute-0 nova_compute[248510]: 2025-12-13 08:49:13.457 248514 DEBUG nova.compute.manager [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:49:13 compute-0 nova_compute[248510]: 2025-12-13 08:49:13.458 248514 DEBUG nova.network.neutron [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:49:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 121 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 5.7 KiB/s wr, 28 op/s
Dec 13 08:49:14 compute-0 nova_compute[248510]: 2025-12-13 08:49:14.243 248514 DEBUG nova.network.neutron [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:49:14 compute-0 nova_compute[248510]: 2025-12-13 08:49:14.267 248514 INFO nova.compute.manager [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Took 0.81 seconds to deallocate network for instance.
Dec 13 08:49:14 compute-0 nova_compute[248510]: 2025-12-13 08:49:14.361 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:14 compute-0 nova_compute[248510]: 2025-12-13 08:49:14.362 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:14 compute-0 nova_compute[248510]: 2025-12-13 08:49:14.383 248514 DEBUG nova.compute.manager [req-a62180f4-9147-45bb-bdd4-4b04b03647d2 req-469578e0-c8e7-432d-8499-62e79a75ce3c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-vif-deleted-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:49:14 compute-0 nova_compute[248510]: 2025-12-13 08:49:14.444 248514 DEBUG oslo_concurrency.processutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:49:14 compute-0 nova_compute[248510]: 2025-12-13 08:49:14.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:49:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237925263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:49:15 compute-0 nova_compute[248510]: 2025-12-13 08:49:15.038 248514 DEBUG oslo_concurrency.processutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:49:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:49:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1050045149' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:49:15 compute-0 nova_compute[248510]: 2025-12-13 08:49:15.047 248514 DEBUG nova.compute.provider_tree [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:49:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:49:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1050045149' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:49:15 compute-0 nova_compute[248510]: 2025-12-13 08:49:15.070 248514 DEBUG nova.scheduler.client.report [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:49:15 compute-0 nova_compute[248510]: 2025-12-13 08:49:15.102 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:15 compute-0 nova_compute[248510]: 2025-12-13 08:49:15.132 248514 INFO nova.scheduler.client.report [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Deleted allocations for instance 99100320-043d-4f13-ac93-5fd3309abbf7
Dec 13 08:49:15 compute-0 ceph-mon[76537]: pgmap v2605: 321 pgs: 321 active+clean; 121 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 5.7 KiB/s wr, 28 op/s
Dec 13 08:49:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3237925263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:49:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1050045149' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:49:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1050045149' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:49:15 compute-0 nova_compute[248510]: 2025-12-13 08:49:15.249 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 121 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 5.6 KiB/s wr, 27 op/s
Dec 13 08:49:16 compute-0 nova_compute[248510]: 2025-12-13 08:49:16.868 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:17 compute-0 ceph-mon[76537]: pgmap v2606: 321 pgs: 321 active+clean; 121 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 5.6 KiB/s wr, 27 op/s
Dec 13 08:49:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:17.329 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:49:17 compute-0 nova_compute[248510]: 2025-12-13 08:49:17.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:49:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Dec 13 08:49:19 compute-0 ceph-mon[76537]: pgmap v2607: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Dec 13 08:49:19 compute-0 nova_compute[248510]: 2025-12-13 08:49:19.635 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:19 compute-0 podman[354066]: 2025-12-13 08:49:19.978606906 +0000 UTC m=+0.061209406 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:49:20 compute-0 podman[354067]: 2025-12-13 08:49:20.004878095 +0000 UTC m=+0.082881509 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 08:49:20 compute-0 podman[354065]: 2025-12-13 08:49:20.005624294 +0000 UTC m=+0.089257729 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 08:49:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 28 op/s
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007723968476299632 of space, bias 1.0, pg target 0.23171905428898898 quantized to 32 (current 32)
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006682139988669646 of space, bias 1.0, pg target 0.20046419966008938 quantized to 32 (current 32)
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.841775278005856e-07 of space, bias 4.0, pg target 0.0007010130333607028 quantized to 16 (current 32)
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:49:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:49:21 compute-0 ceph-mon[76537]: pgmap v2608: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 28 op/s
Dec 13 08:49:21 compute-0 ovn_controller[148476]: 2025-12-13T08:49:21Z|01043|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 08:49:21 compute-0 nova_compute[248510]: 2025-12-13 08:49:21.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:21 compute-0 nova_compute[248510]: 2025-12-13 08:49:21.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:23 compute-0 ceph-mon[76537]: pgmap v2609: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:24 compute-0 ceph-mon[76537]: pgmap v2610: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:24 compute-0 nova_compute[248510]: 2025-12-13 08:49:24.597 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615749.5951822, 99100320-043d-4f13-ac93-5fd3309abbf7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:49:24 compute-0 nova_compute[248510]: 2025-12-13 08:49:24.597 248514 INFO nova.compute.manager [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] VM Stopped (Lifecycle Event)
Dec 13 08:49:24 compute-0 nova_compute[248510]: 2025-12-13 08:49:24.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:24 compute-0 nova_compute[248510]: 2025-12-13 08:49:24.672 248514 DEBUG nova.compute.manager [None req-1b1c37f6-c2a7-4bfd-b5ab-0b03c84f2aff - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:49:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.142 248514 DEBUG nova.compute.manager [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received event network-changed-1a00c927-1c7f-4af5-9337-d6e58800dc3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.142 248514 DEBUG nova.compute.manager [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Refreshing instance network info cache due to event network-changed-1a00c927-1c7f-4af5-9337-d6e58800dc3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.143 248514 DEBUG oslo_concurrency.lockutils [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.143 248514 DEBUG oslo_concurrency.lockutils [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.143 248514 DEBUG nova.network.neutron [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Refreshing network info cache for port 1a00c927-1c7f-4af5-9337-d6e58800dc3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.316 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.317 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.317 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.317 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.317 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.318 248514 INFO nova.compute.manager [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Terminating instance
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.319 248514 DEBUG nova.compute.manager [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:49:25 compute-0 kernel: tap1a00c927-1c (unregistering): left promiscuous mode
Dec 13 08:49:25 compute-0 NetworkManager[50376]: <info>  [1765615765.6118] device (tap1a00c927-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:25 compute-0 ovn_controller[148476]: 2025-12-13T08:49:25Z|01044|binding|INFO|Releasing lport 1a00c927-1c7f-4af5-9337-d6e58800dc3c from this chassis (sb_readonly=0)
Dec 13 08:49:25 compute-0 ovn_controller[148476]: 2025-12-13T08:49:25Z|01045|binding|INFO|Setting lport 1a00c927-1c7f-4af5-9337-d6e58800dc3c down in Southbound
Dec 13 08:49:25 compute-0 ovn_controller[148476]: 2025-12-13T08:49:25Z|01046|binding|INFO|Removing iface tap1a00c927-1c ovn-installed in OVS
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.628 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:2b:42 10.100.0.9'], port_security=['fa:16:3e:9c:2b:42 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1d73d88c-ca9a-4136-80de-fa2cf028ffb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '531d7c80-e840-46e0-9afc-03ae0558f787 73f9632c-0914-472c-9969-a269b215d831', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ffad85fc-28b3-4529-8d06-0367d9c3d476, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1a00c927-1c7f-4af5-9337-d6e58800dc3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:49:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.629 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1a00c927-1c7f-4af5-9337-d6e58800dc3c in datapath b7bba2fe-699d-4423-a6e2-09604625a8f5 unbound from our chassis
Dec 13 08:49:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.630 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b7bba2fe-699d-4423-a6e2-09604625a8f5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:49:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.631 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a2ff63-1a9d-49d6-8d7a-cc876948b702]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.631 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5 namespace which is not needed anymore
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.634 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:25 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000069.scope: Deactivated successfully.
Dec 13 08:49:25 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000069.scope: Consumed 17.468s CPU time.
Dec 13 08:49:25 compute-0 systemd-machined[210538]: Machine qemu-130-instance-00000069 terminated.
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.757 248514 INFO nova.virt.libvirt.driver [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Instance destroyed successfully.
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.758 248514 DEBUG nova.objects.instance [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:49:25 compute-0 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [NOTICE]   (351333) : haproxy version is 2.8.14-c23fe91
Dec 13 08:49:25 compute-0 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [NOTICE]   (351333) : path to executable is /usr/sbin/haproxy
Dec 13 08:49:25 compute-0 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [WARNING]  (351333) : Exiting Master process...
Dec 13 08:49:25 compute-0 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [ALERT]    (351333) : Current worker (351335) exited with code 143 (Terminated)
Dec 13 08:49:25 compute-0 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [WARNING]  (351333) : All workers exited. Exiting... (0)
Dec 13 08:49:25 compute-0 systemd[1]: libpod-63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5.scope: Deactivated successfully.
Dec 13 08:49:25 compute-0 podman[354154]: 2025-12-13 08:49:25.773134699 +0000 UTC m=+0.053620445 container died 63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.779 248514 DEBUG nova.virt.libvirt.vif [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:47:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=105,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOgNSj2RX2tEOr5Rxtdc3T7qrIqjyVapwoURlTzSwBUNw2HAjV8i9+69CD+ahp0R2Tk6YrJ3W0cDR2tzHXyNVMUTiAkgjDao6U5yvxeoFoLQPs8Nmve95azrQ/Z/Vbs68Q==',key_name='tempest-TestSecurityGroupsBasicOps-80214463',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:47:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-c1fl6h76',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:47:32Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=1d73d88c-ca9a-4136-80de-fa2cf028ffb7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.780 248514 DEBUG nova.network.os_vif_util [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.781 248514 DEBUG nova.network.os_vif_util [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.781 248514 DEBUG os_vif [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.784 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a00c927-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.823 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.826 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.829 248514 INFO os_vif [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c')
Dec 13 08:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5-userdata-shm.mount: Deactivated successfully.
Dec 13 08:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-27792fd296ec41c0b07a50cc3d8774b88ded6a613398922a1314c6ce563b646f-merged.mount: Deactivated successfully.
Dec 13 08:49:25 compute-0 podman[354154]: 2025-12-13 08:49:25.852726175 +0000 UTC m=+0.133211921 container cleanup 63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:49:25 compute-0 systemd[1]: libpod-conmon-63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5.scope: Deactivated successfully.
Dec 13 08:49:25 compute-0 podman[354211]: 2025-12-13 08:49:25.93829525 +0000 UTC m=+0.060888527 container remove 63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:49:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.944 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d25c9046-1a5b-410b-bef4-71324205ff0d]: (4, ('Sat Dec 13 08:49:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5 (63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5)\n63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5\nSat Dec 13 08:49:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5 (63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5)\n63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.946 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0bfcb30-efe2-495e-af4a-482ef90deee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.947 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7bba2fe-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.949 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:25 compute-0 kernel: tapb7bba2fe-60: left promiscuous mode
Dec 13 08:49:25 compute-0 nova_compute[248510]: 2025-12-13 08:49:25.961 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.965 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[844dd31f-f042-4b08-8d9e-68b7fc45a674]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.982 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eab7bc23-627f-44a7-8417-13d62bed30b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.984 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5068f2-d45a-4e69-8da2-f2a7123f0bce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:26.002 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f344d33-5a25-408f-ad2e-9a2774a2cad7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822846, 'reachable_time': 18390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354226, 'error': None, 'target': 'ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:26 compute-0 systemd[1]: run-netns-ovnmeta\x2db7bba2fe\x2d699d\x2d4423\x2da6e2\x2d09604625a8f5.mount: Deactivated successfully.
Dec 13 08:49:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:26.005 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:49:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:26.005 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[92d2317e-9966-4087-a145-ffa1b76dd5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:49:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec 13 08:49:26 compute-0 nova_compute[248510]: 2025-12-13 08:49:26.138 248514 INFO nova.virt.libvirt.driver [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Deleting instance files /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7_del
Dec 13 08:49:26 compute-0 nova_compute[248510]: 2025-12-13 08:49:26.139 248514 INFO nova.virt.libvirt.driver [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Deletion of /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7_del complete
Dec 13 08:49:26 compute-0 nova_compute[248510]: 2025-12-13 08:49:26.215 248514 INFO nova.compute.manager [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Took 0.90 seconds to destroy the instance on the hypervisor.
Dec 13 08:49:26 compute-0 nova_compute[248510]: 2025-12-13 08:49:26.216 248514 DEBUG oslo.service.loopingcall [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:49:26 compute-0 nova_compute[248510]: 2025-12-13 08:49:26.217 248514 DEBUG nova.compute.manager [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:49:26 compute-0 nova_compute[248510]: 2025-12-13 08:49:26.217 248514 DEBUG nova.network.neutron [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:49:26 compute-0 nova_compute[248510]: 2025-12-13 08:49:26.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:27 compute-0 ceph-mon[76537]: pgmap v2611: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec 13 08:49:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 85 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 3 op/s
Dec 13 08:49:28 compute-0 nova_compute[248510]: 2025-12-13 08:49:28.818 248514 DEBUG nova.network.neutron [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updated VIF entry in instance network info cache for port 1a00c927-1c7f-4af5-9337-d6e58800dc3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:49:28 compute-0 nova_compute[248510]: 2025-12-13 08:49:28.819 248514 DEBUG nova.network.neutron [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:49:28 compute-0 nova_compute[248510]: 2025-12-13 08:49:28.854 248514 DEBUG oslo_concurrency.lockutils [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:49:28 compute-0 nova_compute[248510]: 2025-12-13 08:49:28.985 248514 DEBUG nova.network.neutron [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:49:29 compute-0 nova_compute[248510]: 2025-12-13 08:49:29.003 248514 INFO nova.compute.manager [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Took 2.79 seconds to deallocate network for instance.
Dec 13 08:49:29 compute-0 nova_compute[248510]: 2025-12-13 08:49:29.062 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:29 compute-0 nova_compute[248510]: 2025-12-13 08:49:29.063 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:29 compute-0 nova_compute[248510]: 2025-12-13 08:49:29.074 248514 DEBUG nova.compute.manager [req-b384e0ae-a2f0-4fa7-be25-211747d1b2c0 req-a19622b2-ec4c-4bdf-ba52-bccbe4a37ae8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received event network-vif-deleted-1a00c927-1c7f-4af5-9337-d6e58800dc3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:49:29 compute-0 ceph-mon[76537]: pgmap v2612: 321 pgs: 321 active+clean; 85 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 3 op/s
Dec 13 08:49:29 compute-0 nova_compute[248510]: 2025-12-13 08:49:29.141 248514 DEBUG oslo_concurrency.processutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:49:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:49:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572251386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:49:29 compute-0 nova_compute[248510]: 2025-12-13 08:49:29.748 248514 DEBUG oslo_concurrency.processutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:49:29 compute-0 nova_compute[248510]: 2025-12-13 08:49:29.755 248514 DEBUG nova.compute.provider_tree [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:49:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:29 compute-0 nova_compute[248510]: 2025-12-13 08:49:29.782 248514 DEBUG nova.scheduler.client.report [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:49:29 compute-0 nova_compute[248510]: 2025-12-13 08:49:29.829 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:29 compute-0 nova_compute[248510]: 2025-12-13 08:49:29.864 248514 INFO nova.scheduler.client.report [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 1d73d88c-ca9a-4136-80de-fa2cf028ffb7
Dec 13 08:49:29 compute-0 nova_compute[248510]: 2025-12-13 08:49:29.954 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:30 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2572251386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:49:30 compute-0 nova_compute[248510]: 2025-12-13 08:49:30.825 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:31 compute-0 ceph-mon[76537]: pgmap v2613: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:31 compute-0 nova_compute[248510]: 2025-12-13 08:49:31.957 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:33 compute-0 ceph-mon[76537]: pgmap v2614: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:35 compute-0 ceph-mon[76537]: pgmap v2615: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:35 compute-0 nova_compute[248510]: 2025-12-13 08:49:35.828 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:36 compute-0 nova_compute[248510]: 2025-12-13 08:49:36.959 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:37 compute-0 nova_compute[248510]: 2025-12-13 08:49:37.077 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:37 compute-0 nova_compute[248510]: 2025-12-13 08:49:37.151 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:37 compute-0 ceph-mon[76537]: pgmap v2616: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:39 compute-0 ceph-mon[76537]: pgmap v2617: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:49:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Dec 13 08:49:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:49:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:49:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:49:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:49:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:49:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:49:40 compute-0 nova_compute[248510]: 2025-12-13 08:49:40.757 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615765.7549932, 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:49:40 compute-0 nova_compute[248510]: 2025-12-13 08:49:40.757 248514 INFO nova.compute.manager [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] VM Stopped (Lifecycle Event)
Dec 13 08:49:40 compute-0 nova_compute[248510]: 2025-12-13 08:49:40.785 248514 DEBUG nova.compute.manager [None req-82d416e1-f49b-4fbc-9f2b-93c38ffb3798 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:49:40 compute-0 nova_compute[248510]: 2025-12-13 08:49:40.831 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:41 compute-0 ceph-mon[76537]: pgmap v2618: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Dec 13 08:49:41 compute-0 nova_compute[248510]: 2025-12-13 08:49:41.961 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:49:43 compute-0 ceph-mon[76537]: pgmap v2619: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:49:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:49:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:45 compute-0 ceph-mon[76537]: pgmap v2620: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:49:45 compute-0 nova_compute[248510]: 2025-12-13 08:49:45.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:49:47 compute-0 nova_compute[248510]: 2025-12-13 08:49:47.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:47 compute-0 ceph-mon[76537]: pgmap v2621: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:49:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 08:49:49 compute-0 ceph-mon[76537]: pgmap v2622: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 08:49:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 08:49:50 compute-0 nova_compute[248510]: 2025-12-13 08:49:50.836 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:50 compute-0 podman[354253]: 2025-12-13 08:49:50.956683994 +0000 UTC m=+0.046447385 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:49:50 compute-0 podman[354252]: 2025-12-13 08:49:50.966976183 +0000 UTC m=+0.060633842 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 13 08:49:50 compute-0 podman[354251]: 2025-12-13 08:49:50.989061826 +0000 UTC m=+0.083324120 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 13 08:49:51 compute-0 ceph-mon[76537]: pgmap v2623: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 08:49:52 compute-0 nova_compute[248510]: 2025-12-13 08:49:52.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 08:49:53 compute-0 ceph-mon[76537]: pgmap v2624: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.383534) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615793383624, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2087, "num_deletes": 253, "total_data_size": 3662861, "memory_usage": 3723280, "flush_reason": "Manual Compaction"}
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615793410404, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3563511, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49792, "largest_seqno": 51878, "table_properties": {"data_size": 3553860, "index_size": 6145, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19427, "raw_average_key_size": 20, "raw_value_size": 3534757, "raw_average_value_size": 3701, "num_data_blocks": 271, "num_entries": 955, "num_filter_entries": 955, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615572, "oldest_key_time": 1765615572, "file_creation_time": 1765615793, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 26931 microseconds, and 9825 cpu microseconds.
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.410462) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3563511 bytes OK
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.410489) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.413901) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.413920) EVENT_LOG_v1 {"time_micros": 1765615793413915, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.413938) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3654095, prev total WAL file size 3654095, number of live WAL files 2.
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.415052) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3479KB)], [116(9089KB)]
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615793415131, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 12871267, "oldest_snapshot_seqno": -1}
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7452 keys, 11081509 bytes, temperature: kUnknown
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615793497937, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 11081509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11031253, "index_size": 30496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18693, "raw_key_size": 193220, "raw_average_key_size": 25, "raw_value_size": 10897514, "raw_average_value_size": 1462, "num_data_blocks": 1198, "num_entries": 7452, "num_filter_entries": 7452, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615793, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.498510) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11081509 bytes
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.500880) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.8 rd, 133.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7974, records dropped: 522 output_compression: NoCompression
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.500909) EVENT_LOG_v1 {"time_micros": 1765615793500895, "job": 70, "event": "compaction_finished", "compaction_time_micros": 83133, "compaction_time_cpu_micros": 27448, "output_level": 6, "num_output_files": 1, "total_output_size": 11081509, "num_input_records": 7974, "num_output_records": 7452, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615793502131, "job": 70, "event": "table_file_deletion", "file_number": 118}
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615793504256, "job": 70, "event": "table_file_deletion", "file_number": 116}
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.414937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.504404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.504410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.504412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.504414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:49:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.504415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:49:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 08:49:54 compute-0 ceph-mon[76537]: pgmap v2625: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 08:49:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:55.427 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:55.428 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:49:55.428 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:55 compute-0 nova_compute[248510]: 2025-12-13 08:49:55.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:49:55 compute-0 nova_compute[248510]: 2025-12-13 08:49:55.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 08:49:56 compute-0 nova_compute[248510]: 2025-12-13 08:49:56.973 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:56 compute-0 nova_compute[248510]: 2025-12-13 08:49:56.973 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:56 compute-0 nova_compute[248510]: 2025-12-13 08:49:56.994 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:49:57 compute-0 nova_compute[248510]: 2025-12-13 08:49:57.034 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:49:57 compute-0 nova_compute[248510]: 2025-12-13 08:49:57.128 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:57 compute-0 nova_compute[248510]: 2025-12-13 08:49:57.128 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:57 compute-0 nova_compute[248510]: 2025-12-13 08:49:57.138 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:49:57 compute-0 nova_compute[248510]: 2025-12-13 08:49:57.139 248514 INFO nova.compute.claims [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:49:57 compute-0 ceph-mon[76537]: pgmap v2626: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 08:49:57 compute-0 nova_compute[248510]: 2025-12-13 08:49:57.269 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:49:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:49:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769668349' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:49:57 compute-0 nova_compute[248510]: 2025-12-13 08:49:57.907 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:49:57 compute-0 nova_compute[248510]: 2025-12-13 08:49:57.916 248514 DEBUG nova.compute.provider_tree [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:49:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.133 248514 DEBUG nova.scheduler.client.report [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:49:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1769668349' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.174 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.175 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.250 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.250 248514 DEBUG nova.network.neutron [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.291 248514 INFO nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.330 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.435 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.437 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.438 248514 INFO nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Creating image(s)
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.465 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.503 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:49:58 compute-0 sudo[354340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:49:58 compute-0 sudo[354340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:49:58 compute-0 sudo[354340]: pam_unix(sudo:session): session closed for user root
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.532 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.536 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:49:58 compute-0 sudo[354408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:49:58 compute-0 sudo[354408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.629 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.630 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.630 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.631 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.655 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.659 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 8d919892-73fd-4a11-be79-2a1a9280a987_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:49:58 compute-0 nova_compute[248510]: 2025-12-13 08:49:58.717 248514 DEBUG nova.policy [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:49:59 compute-0 nova_compute[248510]: 2025-12-13 08:49:59.035 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 8d919892-73fd-4a11-be79-2a1a9280a987_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:49:59 compute-0 sudo[354408]: pam_unix(sudo:session): session closed for user root
Dec 13 08:49:59 compute-0 nova_compute[248510]: 2025-12-13 08:49:59.107 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:49:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:49:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:49:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:49:59 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:49:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:49:59 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:49:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:49:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:49:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:49:59 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:49:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:49:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:49:59 compute-0 ceph-mon[76537]: pgmap v2627: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 08:49:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:49:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:49:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:49:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:49:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:49:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:49:59 compute-0 sudo[354569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:49:59 compute-0 nova_compute[248510]: 2025-12-13 08:49:59.206 248514 DEBUG nova.objects.instance [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 8d919892-73fd-4a11-be79-2a1a9280a987 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:49:59 compute-0 sudo[354569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:49:59 compute-0 sudo[354569]: pam_unix(sudo:session): session closed for user root
Dec 13 08:49:59 compute-0 nova_compute[248510]: 2025-12-13 08:49:59.227 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:49:59 compute-0 nova_compute[248510]: 2025-12-13 08:49:59.228 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Ensure instance console log exists: /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:49:59 compute-0 nova_compute[248510]: 2025-12-13 08:49:59.229 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:49:59 compute-0 nova_compute[248510]: 2025-12-13 08:49:59.229 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:49:59 compute-0 nova_compute[248510]: 2025-12-13 08:49:59.230 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:49:59 compute-0 sudo[354612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:49:59 compute-0 sudo[354612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:49:59 compute-0 podman[354648]: 2025-12-13 08:49:59.518510824 +0000 UTC m=+0.039079031 container create b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mccarthy, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 08:49:59 compute-0 systemd[1]: Started libpod-conmon-b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705.scope.
Dec 13 08:49:59 compute-0 podman[354648]: 2025-12-13 08:49:59.501266551 +0000 UTC m=+0.021834798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:49:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:49:59 compute-0 podman[354648]: 2025-12-13 08:49:59.615843434 +0000 UTC m=+0.136411671 container init b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mccarthy, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:49:59 compute-0 podman[354648]: 2025-12-13 08:49:59.628174253 +0000 UTC m=+0.148742480 container start b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 08:49:59 compute-0 podman[354648]: 2025-12-13 08:49:59.632401119 +0000 UTC m=+0.152969356 container attach b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 08:49:59 compute-0 cool_mccarthy[354664]: 167 167
Dec 13 08:49:59 compute-0 systemd[1]: libpod-b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705.scope: Deactivated successfully.
Dec 13 08:49:59 compute-0 podman[354648]: 2025-12-13 08:49:59.637475856 +0000 UTC m=+0.158044093 container died b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 13 08:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-765e921f3a175de5a49a3b38ce96ffdd9cdba80c63da2a5ed882bb02b16e0642-merged.mount: Deactivated successfully.
Dec 13 08:49:59 compute-0 podman[354648]: 2025-12-13 08:49:59.683153882 +0000 UTC m=+0.203722099 container remove b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 08:49:59 compute-0 systemd[1]: libpod-conmon-b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705.scope: Deactivated successfully.
Dec 13 08:49:59 compute-0 nova_compute[248510]: 2025-12-13 08:49:59.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:49:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:49:59 compute-0 podman[354689]: 2025-12-13 08:49:59.851135813 +0000 UTC m=+0.044462775 container create 3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:49:59 compute-0 systemd[1]: Started libpod-conmon-3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19.scope.
Dec 13 08:49:59 compute-0 podman[354689]: 2025-12-13 08:49:59.828873955 +0000 UTC m=+0.022200917 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:49:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3bbae4fd2dcc422abe54115f19eefab53f68fa2c25fc913c29b5563116f63b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3bbae4fd2dcc422abe54115f19eefab53f68fa2c25fc913c29b5563116f63b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3bbae4fd2dcc422abe54115f19eefab53f68fa2c25fc913c29b5563116f63b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3bbae4fd2dcc422abe54115f19eefab53f68fa2c25fc913c29b5563116f63b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3bbae4fd2dcc422abe54115f19eefab53f68fa2c25fc913c29b5563116f63b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:49:59 compute-0 podman[354689]: 2025-12-13 08:49:59.945683264 +0000 UTC m=+0.139010226 container init 3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:49:59 compute-0 podman[354689]: 2025-12-13 08:49:59.952389342 +0000 UTC m=+0.145716274 container start 3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yalow, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:49:59 compute-0 podman[354689]: 2025-12-13 08:49:59.956779212 +0000 UTC m=+0.150106144 container attach 3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yalow, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:50:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 85 B/s wr, 10 op/s
Dec 13 08:50:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Dec 13 08:50:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Dec 13 08:50:00 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Dec 13 08:50:00 compute-0 nova_compute[248510]: 2025-12-13 08:50:00.413 248514 DEBUG nova.network.neutron [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Successfully created port: 8d84f494-97c1-4708-b8df-444e42f55484 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:50:00 compute-0 goofy_yalow[354705]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:50:00 compute-0 goofy_yalow[354705]: --> All data devices are unavailable
Dec 13 08:50:00 compute-0 systemd[1]: libpod-3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19.scope: Deactivated successfully.
Dec 13 08:50:00 compute-0 podman[354689]: 2025-12-13 08:50:00.454261226 +0000 UTC m=+0.647588158 container died 3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 08:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd3bbae4fd2dcc422abe54115f19eefab53f68fa2c25fc913c29b5563116f63b-merged.mount: Deactivated successfully.
Dec 13 08:50:00 compute-0 podman[354689]: 2025-12-13 08:50:00.500335881 +0000 UTC m=+0.693662813 container remove 3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:50:00 compute-0 systemd[1]: libpod-conmon-3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19.scope: Deactivated successfully.
Dec 13 08:50:00 compute-0 sudo[354612]: pam_unix(sudo:session): session closed for user root
Dec 13 08:50:00 compute-0 sudo[354736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:50:00 compute-0 sudo[354736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:50:00 compute-0 sudo[354736]: pam_unix(sudo:session): session closed for user root
Dec 13 08:50:00 compute-0 sudo[354761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:50:00 compute-0 sudo[354761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:50:00 compute-0 nova_compute[248510]: 2025-12-13 08:50:00.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:01 compute-0 podman[354799]: 2025-12-13 08:50:01.009431094 +0000 UTC m=+0.066690803 container create 34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:50:01 compute-0 podman[354799]: 2025-12-13 08:50:00.970386535 +0000 UTC m=+0.027646324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:50:01 compute-0 systemd[1]: Started libpod-conmon-34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99.scope.
Dec 13 08:50:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:50:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Dec 13 08:50:01 compute-0 ceph-mon[76537]: pgmap v2628: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 85 B/s wr, 10 op/s
Dec 13 08:50:01 compute-0 ceph-mon[76537]: osdmap e274: 3 total, 3 up, 3 in
Dec 13 08:50:01 compute-0 podman[354799]: 2025-12-13 08:50:01.287208959 +0000 UTC m=+0.344468698 container init 34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:50:01 compute-0 podman[354799]: 2025-12-13 08:50:01.295128407 +0000 UTC m=+0.352388156 container start 34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 08:50:01 compute-0 bold_dhawan[354816]: 167 167
Dec 13 08:50:01 compute-0 systemd[1]: libpod-34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99.scope: Deactivated successfully.
Dec 13 08:50:01 compute-0 podman[354799]: 2025-12-13 08:50:01.357623934 +0000 UTC m=+0.414883733 container attach 34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:50:01 compute-0 podman[354799]: 2025-12-13 08:50:01.358354972 +0000 UTC m=+0.415614741 container died 34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:50:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Dec 13 08:50:01 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Dec 13 08:50:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf37ab939bf1855dd1052c77111df13da9d844b28c4046cc91c5d88f39da154f-merged.mount: Deactivated successfully.
Dec 13 08:50:01 compute-0 podman[354799]: 2025-12-13 08:50:01.415048534 +0000 UTC m=+0.472308243 container remove 34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 08:50:01 compute-0 systemd[1]: libpod-conmon-34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99.scope: Deactivated successfully.
Dec 13 08:50:01 compute-0 podman[354842]: 2025-12-13 08:50:01.580288327 +0000 UTC m=+0.042360083 container create da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hofstadter, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 08:50:01 compute-0 systemd[1]: Started libpod-conmon-da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78.scope.
Dec 13 08:50:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:50:01 compute-0 podman[354842]: 2025-12-13 08:50:01.561950037 +0000 UTC m=+0.024021823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ee0441f175fbe1e0138ccf8d9c3f51c5b0d8b974c9c9128b38ff1fa1ce181c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ee0441f175fbe1e0138ccf8d9c3f51c5b0d8b974c9c9128b38ff1fa1ce181c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ee0441f175fbe1e0138ccf8d9c3f51c5b0d8b974c9c9128b38ff1fa1ce181c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ee0441f175fbe1e0138ccf8d9c3f51c5b0d8b974c9c9128b38ff1fa1ce181c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:50:01 compute-0 podman[354842]: 2025-12-13 08:50:01.675814402 +0000 UTC m=+0.137886178 container init da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hofstadter, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 08:50:01 compute-0 podman[354842]: 2025-12-13 08:50:01.683461184 +0000 UTC m=+0.145532940 container start da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:50:01 compute-0 podman[354842]: 2025-12-13 08:50:01.687391902 +0000 UTC m=+0.149463678 container attach da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]: {
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:     "0": [
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:         {
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "devices": [
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "/dev/loop3"
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             ],
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_name": "ceph_lv0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_size": "21470642176",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "name": "ceph_lv0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "tags": {
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.cluster_name": "ceph",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.crush_device_class": "",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.encrypted": "0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.objectstore": "bluestore",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.osd_id": "0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.type": "block",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.vdo": "0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.with_tpm": "0"
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             },
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "type": "block",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "vg_name": "ceph_vg0"
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:         }
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:     ],
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:     "1": [
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:         {
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "devices": [
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "/dev/loop4"
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             ],
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_name": "ceph_lv1",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_size": "21470642176",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "name": "ceph_lv1",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "tags": {
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.cluster_name": "ceph",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.crush_device_class": "",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.encrypted": "0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.objectstore": "bluestore",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.osd_id": "1",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.type": "block",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.vdo": "0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.with_tpm": "0"
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             },
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "type": "block",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "vg_name": "ceph_vg1"
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:         }
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:     ],
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:     "2": [
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:         {
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "devices": [
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "/dev/loop5"
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             ],
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_name": "ceph_lv2",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_size": "21470642176",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "name": "ceph_lv2",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "tags": {
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.cluster_name": "ceph",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.crush_device_class": "",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.encrypted": "0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.objectstore": "bluestore",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.osd_id": "2",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.type": "block",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.vdo": "0",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:                 "ceph.with_tpm": "0"
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             },
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "type": "block",
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:             "vg_name": "ceph_vg2"
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:         }
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]:     ]
Dec 13 08:50:02 compute-0 sharp_hofstadter[354859]: }
Dec 13 08:50:02 compute-0 nova_compute[248510]: 2025-12-13 08:50:02.037 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:02 compute-0 systemd[1]: libpod-da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78.scope: Deactivated successfully.
Dec 13 08:50:02 compute-0 podman[354842]: 2025-12-13 08:50:02.044990308 +0000 UTC m=+0.507062064 container died da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 08:50:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-13ee0441f175fbe1e0138ccf8d9c3f51c5b0d8b974c9c9128b38ff1fa1ce181c-merged.mount: Deactivated successfully.
Dec 13 08:50:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 127 B/s wr, 0 op/s
Dec 13 08:50:02 compute-0 nova_compute[248510]: 2025-12-13 08:50:02.093 248514 DEBUG nova.network.neutron [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Successfully updated port: 8d84f494-97c1-4708-b8df-444e42f55484 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:50:02 compute-0 podman[354842]: 2025-12-13 08:50:02.096362116 +0000 UTC m=+0.558433872 container remove da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:50:02 compute-0 systemd[1]: libpod-conmon-da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78.scope: Deactivated successfully.
Dec 13 08:50:02 compute-0 nova_compute[248510]: 2025-12-13 08:50:02.120 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:50:02 compute-0 nova_compute[248510]: 2025-12-13 08:50:02.120 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:50:02 compute-0 nova_compute[248510]: 2025-12-13 08:50:02.121 248514 DEBUG nova.network.neutron [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:50:02 compute-0 sudo[354761]: pam_unix(sudo:session): session closed for user root
Dec 13 08:50:02 compute-0 sudo[354879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:50:02 compute-0 sudo[354879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:50:02 compute-0 sudo[354879]: pam_unix(sudo:session): session closed for user root
Dec 13 08:50:02 compute-0 nova_compute[248510]: 2025-12-13 08:50:02.255 248514 DEBUG nova.compute.manager [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-changed-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:50:02 compute-0 nova_compute[248510]: 2025-12-13 08:50:02.256 248514 DEBUG nova.compute.manager [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Refreshing instance network info cache due to event network-changed-8d84f494-97c1-4708-b8df-444e42f55484. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:50:02 compute-0 nova_compute[248510]: 2025-12-13 08:50:02.256 248514 DEBUG oslo_concurrency.lockutils [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:50:02 compute-0 sudo[354904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:50:02 compute-0 sudo[354904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:50:02 compute-0 nova_compute[248510]: 2025-12-13 08:50:02.360 248514 DEBUG nova.network.neutron [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:50:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Dec 13 08:50:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Dec 13 08:50:02 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Dec 13 08:50:02 compute-0 ceph-mon[76537]: osdmap e275: 3 total, 3 up, 3 in
Dec 13 08:50:02 compute-0 podman[354941]: 2025-12-13 08:50:02.608946708 +0000 UTC m=+0.042202589 container create e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 08:50:02 compute-0 systemd[1]: Started libpod-conmon-e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a.scope.
Dec 13 08:50:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:50:02 compute-0 podman[354941]: 2025-12-13 08:50:02.59068582 +0000 UTC m=+0.023941721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:50:02 compute-0 podman[354941]: 2025-12-13 08:50:02.690576705 +0000 UTC m=+0.123832606 container init e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 08:50:02 compute-0 podman[354941]: 2025-12-13 08:50:02.698086663 +0000 UTC m=+0.131342544 container start e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goodall, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 08:50:02 compute-0 priceless_goodall[354957]: 167 167
Dec 13 08:50:02 compute-0 systemd[1]: libpod-e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a.scope: Deactivated successfully.
Dec 13 08:50:02 compute-0 podman[354941]: 2025-12-13 08:50:02.707875639 +0000 UTC m=+0.141131510 container attach e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 08:50:02 compute-0 podman[354941]: 2025-12-13 08:50:02.711241433 +0000 UTC m=+0.144497324 container died e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 08:50:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdce28a27de5d06019b981e271a3a5c5f9c16727fa70b449f1bbc2c361ea39cc-merged.mount: Deactivated successfully.
Dec 13 08:50:02 compute-0 podman[354941]: 2025-12-13 08:50:02.762280383 +0000 UTC m=+0.195536294 container remove e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goodall, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 08:50:02 compute-0 systemd[1]: libpod-conmon-e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a.scope: Deactivated successfully.
Dec 13 08:50:02 compute-0 podman[354981]: 2025-12-13 08:50:02.951037186 +0000 UTC m=+0.044763344 container create d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:50:02 compute-0 systemd[1]: Started libpod-conmon-d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1.scope.
Dec 13 08:50:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e93c80d4a66e0260f051f22b1366c81d8f06882a63e94f06939154234ba7fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e93c80d4a66e0260f051f22b1366c81d8f06882a63e94f06939154234ba7fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e93c80d4a66e0260f051f22b1366c81d8f06882a63e94f06939154234ba7fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e93c80d4a66e0260f051f22b1366c81d8f06882a63e94f06939154234ba7fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:50:03 compute-0 podman[354981]: 2025-12-13 08:50:03.030544869 +0000 UTC m=+0.124271057 container init d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_babbage, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 08:50:03 compute-0 podman[354981]: 2025-12-13 08:50:02.933384213 +0000 UTC m=+0.027110391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:50:03 compute-0 podman[354981]: 2025-12-13 08:50:03.038413866 +0000 UTC m=+0.132140024 container start d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_babbage, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 08:50:03 compute-0 podman[354981]: 2025-12-13 08:50:03.042353075 +0000 UTC m=+0.136079243 container attach d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 08:50:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Dec 13 08:50:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Dec 13 08:50:03 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Dec 13 08:50:03 compute-0 ceph-mon[76537]: pgmap v2631: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 127 B/s wr, 0 op/s
Dec 13 08:50:03 compute-0 ceph-mon[76537]: osdmap e276: 3 total, 3 up, 3 in
Dec 13 08:50:03 compute-0 lvm[355077]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:50:03 compute-0 lvm[355076]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:50:03 compute-0 lvm[355077]: VG ceph_vg1 finished
Dec 13 08:50:03 compute-0 lvm[355076]: VG ceph_vg0 finished
Dec 13 08:50:03 compute-0 lvm[355079]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:50:03 compute-0 lvm[355079]: VG ceph_vg2 finished
Dec 13 08:50:03 compute-0 intelligent_babbage[354998]: {}
Dec 13 08:50:03 compute-0 systemd[1]: libpod-d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1.scope: Deactivated successfully.
Dec 13 08:50:03 compute-0 systemd[1]: libpod-d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1.scope: Consumed 1.309s CPU time.
Dec 13 08:50:03 compute-0 podman[354981]: 2025-12-13 08:50:03.87891604 +0000 UTC m=+0.972642198 container died d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 08:50:03 compute-0 nova_compute[248510]: 2025-12-13 08:50:03.887 248514 DEBUG nova.network.neutron [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updating instance_info_cache with network_info: [{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:50:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-25e93c80d4a66e0260f051f22b1366c81d8f06882a63e94f06939154234ba7fa-merged.mount: Deactivated successfully.
Dec 13 08:50:03 compute-0 podman[354981]: 2025-12-13 08:50:03.929293613 +0000 UTC m=+1.023019761 container remove d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:50:03 compute-0 systemd[1]: libpod-conmon-d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1.scope: Deactivated successfully.
Dec 13 08:50:03 compute-0 sudo[354904]: pam_unix(sudo:session): session closed for user root
Dec 13 08:50:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:50:03 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:50:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.001 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.002 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Instance network_info: |[{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.002 248514 DEBUG oslo_concurrency.lockutils [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.002 248514 DEBUG nova.network.neutron [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Refreshing network info cache for port 8d84f494-97c1-4708-b8df-444e42f55484 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:50:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.005 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Start _get_guest_xml network_info=[{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.011 248514 WARNING nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.019 248514 DEBUG nova.virt.libvirt.host [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.020 248514 DEBUG nova.virt.libvirt.host [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.024 248514 DEBUG nova.virt.libvirt.host [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.025 248514 DEBUG nova.virt.libvirt.host [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.025 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.025 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.026 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.026 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.026 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.026 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.026 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.026 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.027 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.027 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.027 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.027 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.031 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:04 compute-0 sudo[355095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:50:04 compute-0 sudo[355095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:50:04 compute-0 sudo[355095]: pam_unix(sudo:session): session closed for user root
Dec 13 08:50:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 151 KiB/s rd, 5.3 MiB/s wr, 223 op/s
Dec 13 08:50:04 compute-0 ceph-mon[76537]: osdmap e277: 3 total, 3 up, 3 in
Dec 13 08:50:04 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:50:04 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:50:04 compute-0 ceph-mon[76537]: pgmap v2634: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 151 KiB/s rd, 5.3 MiB/s wr, 223 op/s
Dec 13 08:50:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:50:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327126050' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.616 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.638 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:04 compute-0 nova_compute[248510]: 2025-12-13 08:50:04.642 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:50:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Dec 13 08:50:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Dec 13 08:50:05 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Dec 13 08:50:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:50:05 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506370385' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.225 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.227 248514 DEBUG nova.virt.libvirt.vif [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:49:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=107,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr0tqCFuWCJmy9b1oflh+eI2eFf4+EocjRlIktK5eBb1qKCRDnUDy1aaX29KorZ+3GlNFyu3OLFeV0DAzkIQCZVEoFaSpXloiDb56F5zzuYqySsB4TKa4TVSZgtsxz/9w==',key_name='tempest-TestSecurityGroupsBasicOps-305772843',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-rsplf7ip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:49:58Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=8d919892-73fd-4a11-be79-2a1a9280a987,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.228 248514 DEBUG nova.network.os_vif_util [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.229 248514 DEBUG nova.network.os_vif_util [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.230 248514 DEBUG nova.objects.instance [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d919892-73fd-4a11-be79-2a1a9280a987 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.252 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:50:05 compute-0 nova_compute[248510]:   <uuid>8d919892-73fd-4a11-be79-2a1a9280a987</uuid>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   <name>instance-0000006b</name>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340</nova:name>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:50:04</nova:creationTime>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:50:05 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:50:05 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:50:05 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:50:05 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:50:05 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:50:05 compute-0 nova_compute[248510]:         <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 08:50:05 compute-0 nova_compute[248510]:         <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:50:05 compute-0 nova_compute[248510]:         <nova:port uuid="8d84f494-97c1-4708-b8df-444e42f55484">
Dec 13 08:50:05 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <system>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <entry name="serial">8d919892-73fd-4a11-be79-2a1a9280a987</entry>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <entry name="uuid">8d919892-73fd-4a11-be79-2a1a9280a987</entry>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     </system>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   <os>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   </os>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   <features>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   </features>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/8d919892-73fd-4a11-be79-2a1a9280a987_disk">
Dec 13 08:50:05 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       </source>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:50:05 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/8d919892-73fd-4a11-be79-2a1a9280a987_disk.config">
Dec 13 08:50:05 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       </source>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:50:05 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:87:03:ce"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <target dev="tap8d84f494-97"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/console.log" append="off"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <video>
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     </video>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:50:05 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:50:05 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:50:05 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:50:05 compute-0 nova_compute[248510]: </domain>
Dec 13 08:50:05 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.252 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Preparing to wait for external event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.253 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.253 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.253 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.254 248514 DEBUG nova.virt.libvirt.vif [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:49:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=107,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr0tqCFuWCJmy9b1oflh+eI2eFf4+EocjRlIktK5eBb1qKCRDnUDy1aaX29KorZ+3GlNFyu3OLFeV0DAzkIQCZVEoFaSpXloiDb56F5zzuYqySsB4TKa4TVSZgtsxz/9w==',key_name='tempest-TestSecurityGroupsBasicOps-305772843',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-rsplf7ip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:49:58Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=8d919892-73fd-4a11-be79-2a1a9280a987,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.255 248514 DEBUG nova.network.os_vif_util [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.255 248514 DEBUG nova.network.os_vif_util [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.256 248514 DEBUG os_vif [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.256 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.257 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.257 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.261 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.261 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d84f494-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.262 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d84f494-97, col_values=(('external_ids', {'iface-id': '8d84f494-97c1-4708-b8df-444e42f55484', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:03:ce', 'vm-uuid': '8d919892-73fd-4a11-be79-2a1a9280a987'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.263 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:05 compute-0 NetworkManager[50376]: <info>  [1765615805.2654] manager: (tap8d84f494-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.267 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.272 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.273 248514 INFO os_vif [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97')
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.346 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.347 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.347 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:87:03:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.348 248514 INFO nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Using config drive
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.373 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:05 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1327126050' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:50:05 compute-0 ceph-mon[76537]: osdmap e278: 3 total, 3 up, 3 in
Dec 13 08:50:05 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/506370385' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.803 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.803 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.864 248514 INFO nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Creating config drive at /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/disk.config
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.871 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwev1jesv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.957 248514 DEBUG nova.network.neutron [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updated VIF entry in instance network info cache for port 8d84f494-97c1-4708-b8df-444e42f55484. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.958 248514 DEBUG nova.network.neutron [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updating instance_info_cache with network_info: [{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:50:05 compute-0 nova_compute[248510]: 2025-12-13 08:50:05.987 248514 DEBUG oslo_concurrency.lockutils [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.018 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwev1jesv" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.042 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.046 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/disk.config 8d919892-73fd-4a11-be79-2a1a9280a987_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 4.5 MiB/s wr, 189 op/s
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.334 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/disk.config 8d919892-73fd-4a11-be79-2a1a9280a987_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.335 248514 INFO nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Deleting local config drive /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/disk.config because it was imported into RBD.
Dec 13 08:50:06 compute-0 kernel: tap8d84f494-97: entered promiscuous mode
Dec 13 08:50:06 compute-0 NetworkManager[50376]: <info>  [1765615806.3915] manager: (tap8d84f494-97): new Tun device (/org/freedesktop/NetworkManager/Devices/431)
Dec 13 08:50:06 compute-0 systemd-udevd[355074]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:50:06 compute-0 ovn_controller[148476]: 2025-12-13T08:50:06Z|01047|binding|INFO|Claiming lport 8d84f494-97c1-4708-b8df-444e42f55484 for this chassis.
Dec 13 08:50:06 compute-0 ovn_controller[148476]: 2025-12-13T08:50:06Z|01048|binding|INFO|8d84f494-97c1-4708-b8df-444e42f55484: Claiming fa:16:3e:87:03:ce 10.100.0.11
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.392 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:06 compute-0 NetworkManager[50376]: <info>  [1765615806.4063] device (tap8d84f494-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:50:06 compute-0 NetworkManager[50376]: <info>  [1765615806.4073] device (tap8d84f494-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.409 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:03:ce 10.100.0.11'], port_security=['fa:16:3e:87:03:ce 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8d919892-73fd-4a11-be79-2a1a9280a987', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c8d25b9-2d2e-4d46-aaac-2d96c7d8db60 8b6ad7fe-fef3-4e9f-8568-5f8d9cabfe04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d2f617a-f802-4ec5-b71a-0c176c8c3ae6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=8d84f494-97c1-4708-b8df-444e42f55484) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.410 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 8d84f494-97c1-4708-b8df-444e42f55484 in datapath abf04c22-5ac7-46ee-bfad-53f95095fba3 bound to our chassis
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.412 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abf04c22-5ac7-46ee-bfad-53f95095fba3
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.423 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f80541-3b26-4317-8b56-fac69c52ec9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.424 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabf04c22-51 in ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.426 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabf04c22-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.426 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[66d63047-507c-48f3-9610-4d00c8599b4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.427 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86727e10-733e-4c87-bf07-923fa2391cb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 systemd-machined[210538]: New machine qemu-133-instance-0000006b.
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.437 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1c6e24-97f2-4c89-92ea-9443fb6fa859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.455 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:06 compute-0 systemd[1]: Started Virtual Machine qemu-133-instance-0000006b.
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.460 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d90d3c-1e52-4fdd-b0df-28629171d423]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 ovn_controller[148476]: 2025-12-13T08:50:06Z|01049|binding|INFO|Setting lport 8d84f494-97c1-4708-b8df-444e42f55484 ovn-installed in OVS
Dec 13 08:50:06 compute-0 ovn_controller[148476]: 2025-12-13T08:50:06Z|01050|binding|INFO|Setting lport 8d84f494-97c1-4708-b8df-444e42f55484 up in Southbound
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.491 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c8da15cc-ebdd-452d-b7a3-00abd5f49378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 NetworkManager[50376]: <info>  [1765615806.4982] manager: (tapabf04c22-50): new Veth device (/org/freedesktop/NetworkManager/Devices/432)
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.497 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a6869f6c-3cde-4b90-ac1c-2f18f44d086a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.528 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e29fab5c-cba4-4369-9217-77d68af8c35d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.530 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1754fd29-5c10-45fe-b78d-8b8790dcbb01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 NetworkManager[50376]: <info>  [1765615806.5553] device (tapabf04c22-50): carrier: link connected
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.561 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ee91d2b7-dc3d-4899-8df0-e9cad23fcb77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.577 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1e954ae1-ab88-4b17-8c86-cf7085a1bb5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabf04c22-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:36:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838377, 'reachable_time': 33674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355288, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.592 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[65057111-031c-4295-ada4-626f893c1156]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:36c6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 838377, 'tstamp': 838377}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355289, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.610 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7269fbd2-57e7-47c5-a459-a5ef839ad9da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabf04c22-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:36:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838377, 'reachable_time': 33674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355290, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f7166421-7766-475c-a1dd-0966ce042042]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.709 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[757dba42-bede-42fd-bb70-2b5dd8900ed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.711 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabf04c22-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.712 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.712 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabf04c22-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:06 compute-0 NetworkManager[50376]: <info>  [1765615806.7151] manager: (tapabf04c22-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:06 compute-0 kernel: tapabf04c22-50: entered promiscuous mode
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.717 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabf04c22-50, col_values=(('external_ids', {'iface-id': '6b94eeb9-e344-4933-88eb-29577cf3087f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:06 compute-0 ovn_controller[148476]: 2025-12-13T08:50:06Z|01051|binding|INFO|Releasing lport 6b94eeb9-e344-4933-88eb-29577cf3087f from this chassis (sb_readonly=0)
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.720 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abf04c22-5ac7-46ee-bfad-53f95095fba3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abf04c22-5ac7-46ee-bfad-53f95095fba3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.721 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[16428ed9-8e3f-4a01-aa5d-df314fc3e228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.722 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-abf04c22-5ac7-46ee-bfad-53f95095fba3
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/abf04c22-5ac7-46ee-bfad-53f95095fba3.pid.haproxy
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID abf04c22-5ac7-46ee-bfad-53f95095fba3
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:50:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.723 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'env', 'PROCESS_TAG=haproxy-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abf04c22-5ac7-46ee-bfad-53f95095fba3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:50:06 compute-0 nova_compute[248510]: 2025-12-13 08:50:06.753 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Dec 13 08:50:07 compute-0 ceph-mon[76537]: pgmap v2636: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 4.5 MiB/s wr, 189 op/s
Dec 13 08:50:07 compute-0 nova_compute[248510]: 2025-12-13 08:50:07.076 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:07 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Dec 13 08:50:07 compute-0 podman[355347]: 2025-12-13 08:50:07.073589329 +0000 UTC m=+0.022789752 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:50:07 compute-0 podman[355347]: 2025-12-13 08:50:07.735762552 +0000 UTC m=+0.684962965 container create 0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:50:07 compute-0 nova_compute[248510]: 2025-12-13 08:50:07.750 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615807.7500963, 8d919892-73fd-4a11-be79-2a1a9280a987 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:50:07 compute-0 nova_compute[248510]: 2025-12-13 08:50:07.750 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] VM Started (Lifecycle Event)
Dec 13 08:50:07 compute-0 nova_compute[248510]: 2025-12-13 08:50:07.779 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:50:07 compute-0 nova_compute[248510]: 2025-12-13 08:50:07.784 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615807.7502682, 8d919892-73fd-4a11-be79-2a1a9280a987 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:50:07 compute-0 nova_compute[248510]: 2025-12-13 08:50:07.785 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] VM Paused (Lifecycle Event)
Dec 13 08:50:07 compute-0 systemd[1]: Started libpod-conmon-0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112.scope.
Dec 13 08:50:07 compute-0 nova_compute[248510]: 2025-12-13 08:50:07.809 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:50:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:50:07 compute-0 nova_compute[248510]: 2025-12-13 08:50:07.814 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:50:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7015d7066325922029e49e3e8c4de1d3ef5bbeec39e3ee942291a993c00912b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:50:07 compute-0 nova_compute[248510]: 2025-12-13 08:50:07.841 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:50:08 compute-0 podman[355347]: 2025-12-13 08:50:08.046270646 +0000 UTC m=+0.995471059 container init 0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 08:50:08 compute-0 podman[355347]: 2025-12-13 08:50:08.052295027 +0000 UTC m=+1.001495430 container start 0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 08:50:08 compute-0 neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3[355380]: [NOTICE]   (355384) : New worker (355386) forked
Dec 13 08:50:08 compute-0 neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3[355380]: [NOTICE]   (355384) : Loading success.
Dec 13 08:50:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 3.7 MiB/s wr, 169 op/s
Dec 13 08:50:08 compute-0 ceph-mon[76537]: osdmap e279: 3 total, 3 up, 3 in
Dec 13 08:50:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:08.952 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:50:08 compute-0 nova_compute[248510]: 2025-12-13 08:50:08.952 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:08.954 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:50:08 compute-0 nova_compute[248510]: 2025-12-13 08:50:08.969 248514 DEBUG nova.compute.manager [req-01fab348-6bb4-4041-a45b-358247f28293 req-3fc17e0e-6820-4f1c-a634-12f892708401 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:50:08 compute-0 nova_compute[248510]: 2025-12-13 08:50:08.970 248514 DEBUG oslo_concurrency.lockutils [req-01fab348-6bb4-4041-a45b-358247f28293 req-3fc17e0e-6820-4f1c-a634-12f892708401 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:08 compute-0 nova_compute[248510]: 2025-12-13 08:50:08.970 248514 DEBUG oslo_concurrency.lockutils [req-01fab348-6bb4-4041-a45b-358247f28293 req-3fc17e0e-6820-4f1c-a634-12f892708401 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:08 compute-0 nova_compute[248510]: 2025-12-13 08:50:08.970 248514 DEBUG oslo_concurrency.lockutils [req-01fab348-6bb4-4041-a45b-358247f28293 req-3fc17e0e-6820-4f1c-a634-12f892708401 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:08 compute-0 nova_compute[248510]: 2025-12-13 08:50:08.970 248514 DEBUG nova.compute.manager [req-01fab348-6bb4-4041-a45b-358247f28293 req-3fc17e0e-6820-4f1c-a634-12f892708401 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Processing event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:50:08 compute-0 nova_compute[248510]: 2025-12-13 08:50:08.971 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:50:08 compute-0 nova_compute[248510]: 2025-12-13 08:50:08.975 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615808.9756334, 8d919892-73fd-4a11-be79-2a1a9280a987 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:50:08 compute-0 nova_compute[248510]: 2025-12-13 08:50:08.976 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] VM Resumed (Lifecycle Event)
Dec 13 08:50:08 compute-0 nova_compute[248510]: 2025-12-13 08:50:08.977 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:50:08 compute-0 nova_compute[248510]: 2025-12-13 08:50:08.981 248514 INFO nova.virt.libvirt.driver [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Instance spawned successfully.
Dec 13 08:50:08 compute-0 nova_compute[248510]: 2025-12-13 08:50:08.982 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.004 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.009 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.010 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.010 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.011 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.011 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.012 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.016 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.049 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.100 248514 INFO nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Took 10.66 seconds to spawn the instance on the hypervisor.
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.101 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.183 248514 INFO nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Took 12.13 seconds to build instance.
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.220 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:50:09
Dec 13 08:50:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:50:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:50:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'vms']
Dec 13 08:50:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:50:09 compute-0 ceph-mon[76537]: pgmap v2638: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 3.7 MiB/s wr, 169 op/s
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.800 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.801 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:50:09 compute-0 nova_compute[248510]: 2025-12-13 08:50:09.801 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 2.0 MiB/s wr, 149 op/s
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:50:10 compute-0 nova_compute[248510]: 2025-12-13 08:50:10.264 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:50:10 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688414309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:50:10 compute-0 nova_compute[248510]: 2025-12-13 08:50:10.388 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:10 compute-0 nova_compute[248510]: 2025-12-13 08:50:10.476 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:50:10 compute-0 nova_compute[248510]: 2025-12-13 08:50:10.476 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:50:10 compute-0 nova_compute[248510]: 2025-12-13 08:50:10.632 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:50:10 compute-0 nova_compute[248510]: 2025-12-13 08:50:10.634 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3647MB free_disk=59.96662239357829GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:50:10 compute-0 nova_compute[248510]: 2025-12-13 08:50:10.634 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:10 compute-0 nova_compute[248510]: 2025-12-13 08:50:10.634 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:10 compute-0 nova_compute[248510]: 2025-12-13 08:50:10.745 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 8d919892-73fd-4a11-be79-2a1a9280a987 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:50:10 compute-0 nova_compute[248510]: 2025-12-13 08:50:10.745 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:50:10 compute-0 nova_compute[248510]: 2025-12-13 08:50:10.746 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:50:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:50:10 compute-0 nova_compute[248510]: 2025-12-13 08:50:10.785 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:11 compute-0 nova_compute[248510]: 2025-12-13 08:50:11.093 248514 DEBUG nova.compute.manager [req-0eb4336a-5b00-4c51-827e-fbaec608f5fe req-42f22a6f-c3d4-45c3-b8f2-094c39fe3b93 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:50:11 compute-0 nova_compute[248510]: 2025-12-13 08:50:11.093 248514 DEBUG oslo_concurrency.lockutils [req-0eb4336a-5b00-4c51-827e-fbaec608f5fe req-42f22a6f-c3d4-45c3-b8f2-094c39fe3b93 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:11 compute-0 nova_compute[248510]: 2025-12-13 08:50:11.094 248514 DEBUG oslo_concurrency.lockutils [req-0eb4336a-5b00-4c51-827e-fbaec608f5fe req-42f22a6f-c3d4-45c3-b8f2-094c39fe3b93 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:11 compute-0 nova_compute[248510]: 2025-12-13 08:50:11.094 248514 DEBUG oslo_concurrency.lockutils [req-0eb4336a-5b00-4c51-827e-fbaec608f5fe req-42f22a6f-c3d4-45c3-b8f2-094c39fe3b93 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:11 compute-0 nova_compute[248510]: 2025-12-13 08:50:11.094 248514 DEBUG nova.compute.manager [req-0eb4336a-5b00-4c51-827e-fbaec608f5fe req-42f22a6f-c3d4-45c3-b8f2-094c39fe3b93 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] No waiting events found dispatching network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:50:11 compute-0 nova_compute[248510]: 2025-12-13 08:50:11.095 248514 WARNING nova.compute.manager [req-0eb4336a-5b00-4c51-827e-fbaec608f5fe req-42f22a6f-c3d4-45c3-b8f2-094c39fe3b93 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received unexpected event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 for instance with vm_state active and task_state None.
Dec 13 08:50:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Dec 13 08:50:11 compute-0 ceph-mon[76537]: pgmap v2639: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 2.0 MiB/s wr, 149 op/s
Dec 13 08:50:11 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2688414309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:50:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Dec 13 08:50:11 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Dec 13 08:50:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:50:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2638770936' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:50:11 compute-0 nova_compute[248510]: 2025-12-13 08:50:11.389 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:11 compute-0 nova_compute[248510]: 2025-12-13 08:50:11.396 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:50:11 compute-0 nova_compute[248510]: 2025-12-13 08:50:11.414 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:50:11 compute-0 nova_compute[248510]: 2025-12-13 08:50:11.437 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:50:11 compute-0 nova_compute[248510]: 2025-12-13 08:50:11.438 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:12 compute-0 nova_compute[248510]: 2025-12-13 08:50:12.079 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 27 KiB/s wr, 62 op/s
Dec 13 08:50:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Dec 13 08:50:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Dec 13 08:50:12 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Dec 13 08:50:12 compute-0 ceph-mon[76537]: osdmap e280: 3 total, 3 up, 3 in
Dec 13 08:50:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2638770936' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:50:12 compute-0 ovn_controller[148476]: 2025-12-13T08:50:12Z|01052|binding|INFO|Releasing lport 6b94eeb9-e344-4933-88eb-29577cf3087f from this chassis (sb_readonly=0)
Dec 13 08:50:12 compute-0 NetworkManager[50376]: <info>  [1765615812.3901] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Dec 13 08:50:12 compute-0 NetworkManager[50376]: <info>  [1765615812.3911] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Dec 13 08:50:12 compute-0 nova_compute[248510]: 2025-12-13 08:50:12.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:12 compute-0 nova_compute[248510]: 2025-12-13 08:50:12.438 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:12 compute-0 nova_compute[248510]: 2025-12-13 08:50:12.439 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:12 compute-0 ovn_controller[148476]: 2025-12-13T08:50:12Z|01053|binding|INFO|Releasing lport 6b94eeb9-e344-4933-88eb-29577cf3087f from this chassis (sb_readonly=0)
Dec 13 08:50:12 compute-0 nova_compute[248510]: 2025-12-13 08:50:12.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:13 compute-0 nova_compute[248510]: 2025-12-13 08:50:13.222 248514 DEBUG nova.compute.manager [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-changed-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:50:13 compute-0 nova_compute[248510]: 2025-12-13 08:50:13.222 248514 DEBUG nova.compute.manager [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Refreshing instance network info cache due to event network-changed-8d84f494-97c1-4708-b8df-444e42f55484. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:50:13 compute-0 nova_compute[248510]: 2025-12-13 08:50:13.223 248514 DEBUG oslo_concurrency.lockutils [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:50:13 compute-0 nova_compute[248510]: 2025-12-13 08:50:13.223 248514 DEBUG oslo_concurrency.lockutils [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:50:13 compute-0 nova_compute[248510]: 2025-12-13 08:50:13.224 248514 DEBUG nova.network.neutron [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Refreshing network info cache for port 8d84f494-97c1-4708-b8df-444e42f55484 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:50:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Dec 13 08:50:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Dec 13 08:50:13 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Dec 13 08:50:13 compute-0 ceph-mon[76537]: pgmap v2641: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 27 KiB/s wr, 62 op/s
Dec 13 08:50:13 compute-0 ceph-mon[76537]: osdmap e281: 3 total, 3 up, 3 in
Dec 13 08:50:13 compute-0 nova_compute[248510]: 2025-12-13 08:50:13.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:13 compute-0 nova_compute[248510]: 2025-12-13 08:50:13.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:50:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:13.958 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 36 KiB/s wr, 295 op/s
Dec 13 08:50:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Dec 13 08:50:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Dec 13 08:50:14 compute-0 ceph-mon[76537]: osdmap e282: 3 total, 3 up, 3 in
Dec 13 08:50:14 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Dec 13 08:50:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:50:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Dec 13 08:50:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Dec 13 08:50:14 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Dec 13 08:50:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:50:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1470245815' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:50:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:50:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1470245815' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:50:15 compute-0 nova_compute[248510]: 2025-12-13 08:50:15.267 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:15 compute-0 ceph-mon[76537]: pgmap v2644: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 36 KiB/s wr, 295 op/s
Dec 13 08:50:15 compute-0 ceph-mon[76537]: osdmap e283: 3 total, 3 up, 3 in
Dec 13 08:50:15 compute-0 ceph-mon[76537]: osdmap e284: 3 total, 3 up, 3 in
Dec 13 08:50:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1470245815' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:50:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1470245815' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:50:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Dec 13 08:50:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Dec 13 08:50:15 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Dec 13 08:50:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 9.3 KiB/s wr, 373 op/s
Dec 13 08:50:16 compute-0 nova_compute[248510]: 2025-12-13 08:50:16.453 248514 DEBUG nova.network.neutron [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updated VIF entry in instance network info cache for port 8d84f494-97c1-4708-b8df-444e42f55484. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:50:16 compute-0 nova_compute[248510]: 2025-12-13 08:50:16.454 248514 DEBUG nova.network.neutron [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updating instance_info_cache with network_info: [{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:50:16 compute-0 nova_compute[248510]: 2025-12-13 08:50:16.480 248514 DEBUG oslo_concurrency.lockutils [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:50:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Dec 13 08:50:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Dec 13 08:50:16 compute-0 ceph-mon[76537]: osdmap e285: 3 total, 3 up, 3 in
Dec 13 08:50:16 compute-0 ceph-mon[76537]: pgmap v2648: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 9.3 KiB/s wr, 373 op/s
Dec 13 08:50:16 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Dec 13 08:50:17 compute-0 nova_compute[248510]: 2025-12-13 08:50:17.123 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:17 compute-0 nova_compute[248510]: 2025-12-13 08:50:17.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:17 compute-0 ceph-mon[76537]: osdmap e286: 3 total, 3 up, 3 in
Dec 13 08:50:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 4.0 KiB/s wr, 74 op/s
Dec 13 08:50:18 compute-0 nova_compute[248510]: 2025-12-13 08:50:18.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:18 compute-0 ceph-mon[76537]: pgmap v2650: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 4.0 KiB/s wr, 74 op/s
Dec 13 08:50:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:50:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 8.6 KiB/s wr, 151 op/s
Dec 13 08:50:20 compute-0 nova_compute[248510]: 2025-12-13 08:50:20.269 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:20 compute-0 ovn_controller[148476]: 2025-12-13T08:50:20Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:03:ce 10.100.0.11
Dec 13 08:50:20 compute-0 ovn_controller[148476]: 2025-12-13T08:50:20Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:03:ce 10.100.0.11
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036111357405043337 of space, bias 1.0, pg target 0.10833407221513001 quantized to 32 (current 32)
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669159180967263 of space, bias 1.0, pg target 0.2007477542901789 quantized to 32 (current 32)
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.734186507205972e-07 of space, bias 4.0, pg target 0.0006881023808647166 quantized to 16 (current 32)
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:50:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:50:21 compute-0 ceph-mon[76537]: pgmap v2651: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 8.6 KiB/s wr, 151 op/s
Dec 13 08:50:21 compute-0 nova_compute[248510]: 2025-12-13 08:50:21.790 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:21 compute-0 nova_compute[248510]: 2025-12-13 08:50:21.790 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 08:50:21 compute-0 nova_compute[248510]: 2025-12-13 08:50:21.809 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 08:50:21 compute-0 podman[355444]: 2025-12-13 08:50:21.968270286 +0000 UTC m=+0.054268651 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 13 08:50:21 compute-0 podman[355443]: 2025-12-13 08:50:21.973031476 +0000 UTC m=+0.066060397 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:50:22 compute-0 podman[355442]: 2025-12-13 08:50:22.003535001 +0000 UTC m=+0.098890791 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:50:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 6.7 KiB/s wr, 118 op/s
Dec 13 08:50:22 compute-0 nova_compute[248510]: 2025-12-13 08:50:22.124 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:23 compute-0 ceph-mon[76537]: pgmap v2652: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 6.7 KiB/s wr, 118 op/s
Dec 13 08:50:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 3.1 MiB/s wr, 195 op/s
Dec 13 08:50:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:50:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Dec 13 08:50:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Dec 13 08:50:24 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Dec 13 08:50:25 compute-0 ceph-mon[76537]: pgmap v2653: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 3.1 MiB/s wr, 195 op/s
Dec 13 08:50:25 compute-0 ceph-mon[76537]: osdmap e287: 3 total, 3 up, 3 in
Dec 13 08:50:25 compute-0 nova_compute[248510]: 2025-12-13 08:50:25.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 488 KiB/s rd, 2.8 MiB/s wr, 174 op/s
Dec 13 08:50:27 compute-0 nova_compute[248510]: 2025-12-13 08:50:27.127 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:27 compute-0 ceph-mon[76537]: pgmap v2655: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 488 KiB/s rd, 2.8 MiB/s wr, 174 op/s
Dec 13 08:50:27 compute-0 nova_compute[248510]: 2025-12-13 08:50:27.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:27 compute-0 nova_compute[248510]: 2025-12-13 08:50:27.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 08:50:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 432 KiB/s rd, 2.6 MiB/s wr, 132 op/s
Dec 13 08:50:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Dec 13 08:50:29 compute-0 ceph-mon[76537]: pgmap v2656: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 432 KiB/s rd, 2.6 MiB/s wr, 132 op/s
Dec 13 08:50:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Dec 13 08:50:29 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Dec 13 08:50:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:50:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 489 KiB/s rd, 3.2 MiB/s wr, 98 op/s
Dec 13 08:50:30 compute-0 ceph-mon[76537]: osdmap e288: 3 total, 3 up, 3 in
Dec 13 08:50:30 compute-0 nova_compute[248510]: 2025-12-13 08:50:30.274 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:31 compute-0 ceph-mon[76537]: pgmap v2658: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 489 KiB/s rd, 3.2 MiB/s wr, 98 op/s
Dec 13 08:50:31 compute-0 nova_compute[248510]: 2025-12-13 08:50:31.789 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.7 KiB/s wr, 3 op/s
Dec 13 08:50:32 compute-0 nova_compute[248510]: 2025-12-13 08:50:32.131 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:32 compute-0 nova_compute[248510]: 2025-12-13 08:50:32.656 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "35cf794f-191d-48c6-9cee-746c7f1345d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:32 compute-0 nova_compute[248510]: 2025-12-13 08:50:32.656 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "35cf794f-191d-48c6-9cee-746c7f1345d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:32 compute-0 nova_compute[248510]: 2025-12-13 08:50:32.677 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:50:32 compute-0 nova_compute[248510]: 2025-12-13 08:50:32.770 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:32 compute-0 nova_compute[248510]: 2025-12-13 08:50:32.771 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:32 compute-0 nova_compute[248510]: 2025-12-13 08:50:32.780 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:50:32 compute-0 nova_compute[248510]: 2025-12-13 08:50:32.780 248514 INFO nova.compute.claims [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:50:32 compute-0 nova_compute[248510]: 2025-12-13 08:50:32.924 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:33 compute-0 ceph-mon[76537]: pgmap v2659: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.7 KiB/s wr, 3 op/s
Dec 13 08:50:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:50:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2968745251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.488 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.493 248514 DEBUG nova.compute.provider_tree [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.516 248514 DEBUG nova.scheduler.client.report [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.541 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.541 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.603 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.603 248514 DEBUG nova.network.neutron [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.628 248514 INFO nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.650 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.746 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.747 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.747 248514 INFO nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Creating image(s)
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.772 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.794 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.816 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.820 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "6788d23df91f0893ccfbbff5ab81ae6cb178abb0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:33 compute-0 nova_compute[248510]: 2025-12-13 08:50:33.821 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "6788d23df91f0893ccfbbff5ab81ae6cb178abb0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 16 op/s
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.100 248514 DEBUG nova.virt.libvirt.imagebackend [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/2a6d824f-44d5-41be-b030-32dfee0e816a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/2a6d824f-44d5-41be-b030-32dfee0e816a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.160 248514 DEBUG nova.virt.libvirt.imagebackend [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Selected location: {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/2a6d824f-44d5-41be-b030-32dfee0e816a/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.161 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] cloning images/2a6d824f-44d5-41be-b030-32dfee0e816a@snap to None/35cf794f-191d-48c6-9cee-746c7f1345d6_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.192 248514 DEBUG nova.network.neutron [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.193 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:50:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2968745251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.259 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "6788d23df91f0893ccfbbff5ab81ae6cb178abb0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.381 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] resizing rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.448 248514 DEBUG nova.objects.instance [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lazy-loading 'migration_context' on Instance uuid 35cf794f-191d-48c6-9cee-746c7f1345d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.476 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.477 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Ensure instance console log exists: /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.478 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.478 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.478 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.480 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='e4b42f14e17f85e97648723f4cbbb1bd',container_format='bare',created_at=2025-12-13T08:50:28Z,direct_url=<?>,disk_format='raw',id=2a6d824f-44d5-41be-b030-32dfee0e816a,min_disk=0,min_ram=0,name='tempest-image-dependency-test-509129331',owner='adc96857d97b46b8834f6fb544e19670',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-12-13T08:50:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '2a6d824f-44d5-41be-b030-32dfee0e816a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.485 248514 WARNING nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.491 248514 DEBUG nova.virt.libvirt.host [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.492 248514 DEBUG nova.virt.libvirt.host [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.495 248514 DEBUG nova.virt.libvirt.host [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.495 248514 DEBUG nova.virt.libvirt.host [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.496 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.496 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='e4b42f14e17f85e97648723f4cbbb1bd',container_format='bare',created_at=2025-12-13T08:50:28Z,direct_url=<?>,disk_format='raw',id=2a6d824f-44d5-41be-b030-32dfee0e816a,min_disk=0,min_ram=0,name='tempest-image-dependency-test-509129331',owner='adc96857d97b46b8834f6fb544e19670',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-12-13T08:50:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.497 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.497 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.497 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.497 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.498 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.498 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.498 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.499 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.499 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.499 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:50:34 compute-0 nova_compute[248510]: 2025-12-13 08:50:34.502 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:50:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:50:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/649305437' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:50:35 compute-0 nova_compute[248510]: 2025-12-13 08:50:35.099 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:35 compute-0 nova_compute[248510]: 2025-12-13 08:50:35.121 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:35 compute-0 nova_compute[248510]: 2025-12-13 08:50:35.125 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:35 compute-0 ceph-mon[76537]: pgmap v2660: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 16 op/s
Dec 13 08:50:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/649305437' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:50:35 compute-0 nova_compute[248510]: 2025-12-13 08:50:35.276 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:50:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2426095890' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:50:35 compute-0 nova_compute[248510]: 2025-12-13 08:50:35.694 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:35 compute-0 nova_compute[248510]: 2025-12-13 08:50:35.696 248514 DEBUG nova.objects.instance [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lazy-loading 'pci_devices' on Instance uuid 35cf794f-191d-48c6-9cee-746c7f1345d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:50:35 compute-0 nova_compute[248510]: 2025-12-13 08:50:35.740 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:50:35 compute-0 nova_compute[248510]:   <uuid>35cf794f-191d-48c6-9cee-746c7f1345d6</uuid>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   <name>instance-0000006c</name>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <nova:name>instance-depend-image</nova:name>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:50:34</nova:creationTime>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:50:35 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:50:35 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:50:35 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:50:35 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:50:35 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:50:35 compute-0 nova_compute[248510]:         <nova:user uuid="e7ff6f539f764ed2a1c838a697e60b52">tempest-ImageDependencyTests-1370295620-project-member</nova:user>
Dec 13 08:50:35 compute-0 nova_compute[248510]:         <nova:project uuid="adc96857d97b46b8834f6fb544e19670">tempest-ImageDependencyTests-1370295620</nova:project>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="2a6d824f-44d5-41be-b030-32dfee0e816a"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <system>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <entry name="serial">35cf794f-191d-48c6-9cee-746c7f1345d6</entry>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <entry name="uuid">35cf794f-191d-48c6-9cee-746c7f1345d6</entry>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     </system>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   <os>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   </os>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   <features>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   </features>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/35cf794f-191d-48c6-9cee-746c7f1345d6_disk">
Dec 13 08:50:35 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       </source>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:50:35 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/35cf794f-191d-48c6-9cee-746c7f1345d6_disk.config">
Dec 13 08:50:35 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       </source>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:50:35 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/console.log" append="off"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <video>
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     </video>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:50:35 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:50:35 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:50:35 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:50:35 compute-0 nova_compute[248510]: </domain>
Dec 13 08:50:35 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:50:35 compute-0 nova_compute[248510]: 2025-12-13 08:50:35.814 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:50:35 compute-0 nova_compute[248510]: 2025-12-13 08:50:35.815 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:50:35 compute-0 nova_compute[248510]: 2025-12-13 08:50:35.816 248514 INFO nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Using config drive
Dec 13 08:50:35 compute-0 nova_compute[248510]: 2025-12-13 08:50:35.845 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.060 248514 INFO nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Creating config drive at /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/disk.config
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.066 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm9qjvcnn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 16 KiB/s wr, 15 op/s
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.219 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm9qjvcnn" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.220 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.220 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2426095890' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.253 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.257 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/disk.config 35cf794f-191d-48c6-9cee-746c7f1345d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.322 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.418 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/disk.config 35cf794f-191d-48c6-9cee-746c7f1345d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.419 248514 INFO nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Deleting local config drive /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/disk.config because it was imported into RBD.
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.422 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.423 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.430 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.431 248514 INFO nova.compute.claims [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:50:36 compute-0 systemd-machined[210538]: New machine qemu-134-instance-0000006c.
Dec 13 08:50:36 compute-0 systemd[1]: Started Virtual Machine qemu-134-instance-0000006c.
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.748 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.922 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615836.9207609, 35cf794f-191d-48c6-9cee-746c7f1345d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.923 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] VM Resumed (Lifecycle Event)
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.927 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.927 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.931 248514 INFO nova.virt.libvirt.driver [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Instance spawned successfully.
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.932 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.957 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.963 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.967 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.967 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.968 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.968 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.969 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:36 compute-0 nova_compute[248510]: 2025-12-13 08:50:36.969 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.008 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.011 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615836.922719, 35cf794f-191d-48c6-9cee-746c7f1345d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.012 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] VM Started (Lifecycle Event)
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.043 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.048 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.078 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.084 248514 INFO nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Took 3.34 seconds to spawn the instance on the hypervisor.
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.084 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.184 248514 INFO nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Took 4.45 seconds to build instance.
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.186 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.217 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "35cf794f-191d-48c6-9cee-746c7f1345d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:37 compute-0 ceph-mon[76537]: pgmap v2661: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 16 KiB/s wr, 15 op/s
Dec 13 08:50:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:50:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411183811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.326 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.333 248514 DEBUG nova.compute.provider_tree [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.350 248514 DEBUG nova.scheduler.client.report [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.378 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.379 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.441 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.442 248514 DEBUG nova.network.neutron [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.469 248514 INFO nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.496 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.601 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.603 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.604 248514 INFO nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Creating image(s)
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.625 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.656 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.684 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.687 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.763 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.763 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.764 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.764 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.784 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.788 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:37 compute-0 nova_compute[248510]: 2025-12-13 08:50:37.889 248514 DEBUG nova.policy [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:50:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 17 KiB/s wr, 25 op/s
Dec 13 08:50:38 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Dec 13 08:50:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1411183811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:50:38 compute-0 nova_compute[248510]: 2025-12-13 08:50:38.369 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:38 compute-0 nova_compute[248510]: 2025-12-13 08:50:38.447 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:50:38 compute-0 nova_compute[248510]: 2025-12-13 08:50:38.559 248514 DEBUG nova.objects.instance [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 67dc02f2-2430-4cc6-a575-bdfd238a5460 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:50:38 compute-0 nova_compute[248510]: 2025-12-13 08:50:38.577 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:50:38 compute-0 nova_compute[248510]: 2025-12-13 08:50:38.577 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Ensure instance console log exists: /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:50:38 compute-0 nova_compute[248510]: 2025-12-13 08:50:38.578 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:38 compute-0 nova_compute[248510]: 2025-12-13 08:50:38.578 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:38 compute-0 nova_compute[248510]: 2025-12-13 08:50:38.578 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:39 compute-0 nova_compute[248510]: 2025-12-13 08:50:39.304 248514 DEBUG nova.network.neutron [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Successfully created port: 5c3b12ae-3db8-4a4c-b558-19ab89a179ff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:50:39 compute-0 ceph-mon[76537]: pgmap v2662: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 17 KiB/s wr, 25 op/s
Dec 13 08:50:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:50:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 150 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.1 MiB/s wr, 84 op/s
Dec 13 08:50:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:50:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:50:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:50:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:50:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:50:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:50:40 compute-0 nova_compute[248510]: 2025-12-13 08:50:40.136 248514 DEBUG nova.compute.manager [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:50:40 compute-0 nova_compute[248510]: 2025-12-13 08:50:40.183 248514 INFO nova.compute.manager [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] instance snapshotting
Dec 13 08:50:40 compute-0 nova_compute[248510]: 2025-12-13 08:50:40.278 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:40 compute-0 nova_compute[248510]: 2025-12-13 08:50:40.419 248514 INFO nova.virt.libvirt.driver [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Beginning live snapshot process
Dec 13 08:50:40 compute-0 nova_compute[248510]: 2025-12-13 08:50:40.559 248514 DEBUG nova.storage.rbd_utils [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] creating snapshot(c19d2b933f9942ada3db82e0a518b226) on rbd image(35cf794f-191d-48c6-9cee-746c7f1345d6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:50:40 compute-0 nova_compute[248510]: 2025-12-13 08:50:40.806 248514 DEBUG nova.network.neutron [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Successfully updated port: 5c3b12ae-3db8-4a4c-b558-19ab89a179ff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:50:40 compute-0 nova_compute[248510]: 2025-12-13 08:50:40.835 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-67dc02f2-2430-4cc6-a575-bdfd238a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:50:40 compute-0 nova_compute[248510]: 2025-12-13 08:50:40.835 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-67dc02f2-2430-4cc6-a575-bdfd238a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:50:40 compute-0 nova_compute[248510]: 2025-12-13 08:50:40.835 248514 DEBUG nova.network.neutron [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:50:40 compute-0 nova_compute[248510]: 2025-12-13 08:50:40.921 248514 DEBUG nova.compute.manager [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-changed-5c3b12ae-3db8-4a4c-b558-19ab89a179ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:50:40 compute-0 nova_compute[248510]: 2025-12-13 08:50:40.922 248514 DEBUG nova.compute.manager [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Refreshing instance network info cache due to event network-changed-5c3b12ae-3db8-4a4c-b558-19ab89a179ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:50:40 compute-0 nova_compute[248510]: 2025-12-13 08:50:40.922 248514 DEBUG oslo_concurrency.lockutils [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-67dc02f2-2430-4cc6-a575-bdfd238a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:50:41 compute-0 nova_compute[248510]: 2025-12-13 08:50:41.008 248514 DEBUG nova.network.neutron [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:50:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Dec 13 08:50:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Dec 13 08:50:41 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Dec 13 08:50:41 compute-0 ceph-mon[76537]: pgmap v2663: 321 pgs: 321 active+clean; 150 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.1 MiB/s wr, 84 op/s
Dec 13 08:50:41 compute-0 nova_compute[248510]: 2025-12-13 08:50:41.457 248514 DEBUG nova.storage.rbd_utils [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] cloning vms/35cf794f-191d-48c6-9cee-746c7f1345d6_disk@c19d2b933f9942ada3db82e0a518b226 to images/25f6e4ed-352a-433b-befc-4cedd791f581 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:50:41 compute-0 nova_compute[248510]: 2025-12-13 08:50:41.544 248514 DEBUG nova.storage.rbd_utils [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] flattening images/25f6e4ed-352a-433b-befc-4cedd791f581 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:50:41 compute-0 nova_compute[248510]: 2025-12-13 08:50:41.689 248514 DEBUG nova.storage.rbd_utils [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] removing snapshot(c19d2b933f9942ada3db82e0a518b226) on rbd image(35cf794f-191d-48c6-9cee-746c7f1345d6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:50:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 150 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 1.2 MiB/s wr, 91 op/s
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.189 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Dec 13 08:50:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Dec 13 08:50:42 compute-0 ceph-mon[76537]: osdmap e289: 3 total, 3 up, 3 in
Dec 13 08:50:42 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.889 248514 DEBUG nova.network.neutron [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Updating instance_info_cache with network_info: [{"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.917 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-67dc02f2-2430-4cc6-a575-bdfd238a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.917 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Instance network_info: |[{"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.918 248514 DEBUG oslo_concurrency.lockutils [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-67dc02f2-2430-4cc6-a575-bdfd238a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.918 248514 DEBUG nova.network.neutron [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Refreshing network info cache for port 5c3b12ae-3db8-4a4c-b558-19ab89a179ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.921 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Start _get_guest_xml network_info=[{"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.925 248514 WARNING nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.930 248514 DEBUG nova.virt.libvirt.host [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.931 248514 DEBUG nova.virt.libvirt.host [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.933 248514 DEBUG nova.virt.libvirt.host [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.934 248514 DEBUG nova.virt.libvirt.host [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.934 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.935 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.935 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.935 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.936 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.936 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.936 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.937 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.937 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.937 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.938 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.938 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:50:42 compute-0 nova_compute[248510]: 2025-12-13 08:50:42.941 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:43 compute-0 nova_compute[248510]: 2025-12-13 08:50:43.057 248514 DEBUG nova.storage.rbd_utils [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] creating snapshot(snap) on rbd image(25f6e4ed-352a-433b-befc-4cedd791f581) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:50:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:50:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855967601' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:50:43 compute-0 nova_compute[248510]: 2025-12-13 08:50:43.536 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:43 compute-0 nova_compute[248510]: 2025-12-13 08:50:43.563 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:43 compute-0 nova_compute[248510]: 2025-12-13 08:50:43.567 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Dec 13 08:50:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Dec 13 08:50:43 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Dec 13 08:50:43 compute-0 ceph-mon[76537]: pgmap v2665: 321 pgs: 321 active+clean; 150 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 1.2 MiB/s wr, 91 op/s
Dec 13 08:50:43 compute-0 ceph-mon[76537]: osdmap e290: 3 total, 3 up, 3 in
Dec 13 08:50:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/855967601' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:50:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 3.6 MiB/s wr, 240 op/s
Dec 13 08:50:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:50:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697935915' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.185 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.187 248514 DEBUG nova.virt.libvirt.vif [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=109,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr0tqCFuWCJmy9b1oflh+eI2eFf4+EocjRlIktK5eBb1qKCRDnUDy1aaX29KorZ+3GlNFyu3OLFeV0DAzkIQCZVEoFaSpXloiDb56F5zzuYqySsB4TKa4TVSZgtsxz/9w==',key_name='tempest-TestSecurityGroupsBasicOps-305772843',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-9wzr47fu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:50:37Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=67dc02f2-2430-4cc6-a575-bdfd238a5460,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.187 248514 DEBUG nova.network.os_vif_util [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.188 248514 DEBUG nova.network.os_vif_util [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.189 248514 DEBUG nova.objects.instance [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 67dc02f2-2430-4cc6-a575-bdfd238a5460 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.223 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:50:44 compute-0 nova_compute[248510]:   <uuid>67dc02f2-2430-4cc6-a575-bdfd238a5460</uuid>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   <name>instance-0000006d</name>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238</nova:name>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:50:42</nova:creationTime>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:50:44 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:50:44 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:50:44 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:50:44 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:50:44 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:50:44 compute-0 nova_compute[248510]:         <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 08:50:44 compute-0 nova_compute[248510]:         <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:50:44 compute-0 nova_compute[248510]:         <nova:port uuid="5c3b12ae-3db8-4a4c-b558-19ab89a179ff">
Dec 13 08:50:44 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <system>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <entry name="serial">67dc02f2-2430-4cc6-a575-bdfd238a5460</entry>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <entry name="uuid">67dc02f2-2430-4cc6-a575-bdfd238a5460</entry>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     </system>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   <os>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   </os>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   <features>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   </features>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/67dc02f2-2430-4cc6-a575-bdfd238a5460_disk">
Dec 13 08:50:44 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       </source>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:50:44 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/67dc02f2-2430-4cc6-a575-bdfd238a5460_disk.config">
Dec 13 08:50:44 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       </source>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:50:44 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:8f:2f:3f"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <target dev="tap5c3b12ae-3d"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/console.log" append="off"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <video>
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     </video>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:50:44 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:50:44 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:50:44 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:50:44 compute-0 nova_compute[248510]: </domain>
Dec 13 08:50:44 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.225 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Preparing to wait for external event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.225 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.225 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.226 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.227 248514 DEBUG nova.virt.libvirt.vif [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=109,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr0tqCFuWCJmy9b1oflh+eI2eFf4+EocjRlIktK5eBb1qKCRDnUDy1aaX29KorZ+3GlNFyu3OLFeV0DAzkIQCZVEoFaSpXloiDb56F5zzuYqySsB4TKa4TVSZgtsxz/9w==',key_name='tempest-TestSecurityGroupsBasicOps-305772843',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-9wzr47fu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:50:37Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=67dc02f2-2430-4cc6-a575-bdfd238a5460,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.227 248514 DEBUG nova.network.os_vif_util [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.228 248514 DEBUG nova.network.os_vif_util [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.228 248514 DEBUG os_vif [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.230 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.230 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.233 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.234 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c3b12ae-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.234 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5c3b12ae-3d, col_values=(('external_ids', {'iface-id': '5c3b12ae-3db8-4a4c-b558-19ab89a179ff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:2f:3f', 'vm-uuid': '67dc02f2-2430-4cc6-a575-bdfd238a5460'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.236 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:44 compute-0 NetworkManager[50376]: <info>  [1765615844.2373] manager: (tap5c3b12ae-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.242 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:44 compute-0 nova_compute[248510]: 2025-12-13 08:50:44.243 248514 INFO os_vif [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d')
Dec 13 08:50:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:50:45 compute-0 ceph-mon[76537]: osdmap e291: 3 total, 3 up, 3 in
Dec 13 08:50:45 compute-0 ceph-mon[76537]: pgmap v2668: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 3.6 MiB/s wr, 240 op/s
Dec 13 08:50:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/697935915' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:50:45 compute-0 nova_compute[248510]: 2025-12-13 08:50:45.178 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:50:45 compute-0 nova_compute[248510]: 2025-12-13 08:50:45.179 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:50:45 compute-0 nova_compute[248510]: 2025-12-13 08:50:45.179 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:8f:2f:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:50:45 compute-0 nova_compute[248510]: 2025-12-13 08:50:45.180 248514 INFO nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Using config drive
Dec 13 08:50:45 compute-0 nova_compute[248510]: 2025-12-13 08:50:45.257 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.003 248514 INFO nova.virt.libvirt.driver [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Snapshot image upload complete
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.004 248514 INFO nova.compute.manager [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Took 5.82 seconds to snapshot the instance on the hypervisor.
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.019 248514 INFO nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Creating config drive at /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/disk.config
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.024 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9qejnvxl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 1.7 MiB/s wr, 125 op/s
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.177 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9qejnvxl" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.212 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.217 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/disk.config 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.374 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/disk.config 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.376 248514 INFO nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Deleting local config drive /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/disk.config because it was imported into RBD.
Dec 13 08:50:46 compute-0 kernel: tap5c3b12ae-3d: entered promiscuous mode
Dec 13 08:50:46 compute-0 NetworkManager[50376]: <info>  [1765615846.4223] manager: (tap5c3b12ae-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/437)
Dec 13 08:50:46 compute-0 ovn_controller[148476]: 2025-12-13T08:50:46Z|01054|binding|INFO|Claiming lport 5c3b12ae-3db8-4a4c-b558-19ab89a179ff for this chassis.
Dec 13 08:50:46 compute-0 ovn_controller[148476]: 2025-12-13T08:50:46Z|01055|binding|INFO|5c3b12ae-3db8-4a4c-b558-19ab89a179ff: Claiming fa:16:3e:8f:2f:3f 10.100.0.9
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.424 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.433 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:2f:3f 10.100.0.9'], port_security=['fa:16:3e:8f:2f:3f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '67dc02f2-2430-4cc6-a575-bdfd238a5460', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8b6ad7fe-fef3-4e9f-8568-5f8d9cabfe04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d2f617a-f802-4ec5-b71a-0c176c8c3ae6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5c3b12ae-3db8-4a4c-b558-19ab89a179ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.434 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5c3b12ae-3db8-4a4c-b558-19ab89a179ff in datapath abf04c22-5ac7-46ee-bfad-53f95095fba3 bound to our chassis
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.435 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abf04c22-5ac7-46ee-bfad-53f95095fba3
Dec 13 08:50:46 compute-0 ovn_controller[148476]: 2025-12-13T08:50:46Z|01056|binding|INFO|Setting lport 5c3b12ae-3db8-4a4c-b558-19ab89a179ff ovn-installed in OVS
Dec 13 08:50:46 compute-0 ovn_controller[148476]: 2025-12-13T08:50:46Z|01057|binding|INFO|Setting lport 5c3b12ae-3db8-4a4c-b558-19ab89a179ff up in Southbound
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.449 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1249969-1027-4f65-8bc4-08521a1b4235]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:46 compute-0 systemd-udevd[356366]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:50:46 compute-0 NetworkManager[50376]: <info>  [1765615846.4685] device (tap5c3b12ae-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:50:46 compute-0 NetworkManager[50376]: <info>  [1765615846.4693] device (tap5c3b12ae-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:50:46 compute-0 systemd-machined[210538]: New machine qemu-135-instance-0000006d.
Dec 13 08:50:46 compute-0 systemd[1]: Started Virtual Machine qemu-135-instance-0000006d.
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.478 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0d864852-ed1f-4879-ae6b-dc14d1936a2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.482 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[91653346-a7dd-4e39-9ad9-5d864518523f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.509 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1feb41-816d-4edf-b8ae-98a753f585c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.528 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6254f071-f187-43c6-bc08-36d5ed57c01a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabf04c22-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:36:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838377, 'reachable_time': 33674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356377, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.544 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b479791f-9a9c-494e-8d73-c0a6def91227]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabf04c22-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 838387, 'tstamp': 838387}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356381, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapabf04c22-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 838390, 'tstamp': 838390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356381, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.546 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabf04c22-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.549 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabf04c22-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.549 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.548 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.549 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabf04c22-50, col_values=(('external_ids', {'iface-id': '6b94eeb9-e344-4933-88eb-29577cf3087f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:50:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.550 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.589 248514 DEBUG nova.network.neutron [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Updated VIF entry in instance network info cache for port 5c3b12ae-3db8-4a4c-b558-19ab89a179ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.590 248514 DEBUG nova.network.neutron [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Updating instance_info_cache with network_info: [{"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.620 248514 DEBUG oslo_concurrency.lockutils [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-67dc02f2-2430-4cc6-a575-bdfd238a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.821 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615846.8207915, 67dc02f2-2430-4cc6-a575-bdfd238a5460 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.822 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] VM Started (Lifecycle Event)
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.849 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.853 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615846.8217306, 67dc02f2-2430-4cc6-a575-bdfd238a5460 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.853 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] VM Paused (Lifecycle Event)
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.880 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.883 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.894 248514 DEBUG nova.compute.manager [req-7436e66e-ef8f-4de7-a97a-4e6aa1d0a243 req-6685bae9-5fac-43c9-aff0-1010b2f10b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.894 248514 DEBUG oslo_concurrency.lockutils [req-7436e66e-ef8f-4de7-a97a-4e6aa1d0a243 req-6685bae9-5fac-43c9-aff0-1010b2f10b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.895 248514 DEBUG oslo_concurrency.lockutils [req-7436e66e-ef8f-4de7-a97a-4e6aa1d0a243 req-6685bae9-5fac-43c9-aff0-1010b2f10b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.895 248514 DEBUG oslo_concurrency.lockutils [req-7436e66e-ef8f-4de7-a97a-4e6aa1d0a243 req-6685bae9-5fac-43c9-aff0-1010b2f10b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.895 248514 DEBUG nova.compute.manager [req-7436e66e-ef8f-4de7-a97a-4e6aa1d0a243 req-6685bae9-5fac-43c9-aff0-1010b2f10b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Processing event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.896 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.900 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.902 248514 INFO nova.virt.libvirt.driver [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Instance spawned successfully.
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.902 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.908 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.909 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615846.8988144, 67dc02f2-2430-4cc6-a575-bdfd238a5460 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.909 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] VM Resumed (Lifecycle Event)
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.958 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.963 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.963 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.963 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.964 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.964 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.964 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:50:46 compute-0 nova_compute[248510]: 2025-12-13 08:50:46.969 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.031 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.056 248514 INFO nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Took 9.45 seconds to spawn the instance on the hypervisor.
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.057 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.144 248514 INFO nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Took 10.74 seconds to build instance.
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.163 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Dec 13 08:50:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Dec 13 08:50:47 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Dec 13 08:50:47 compute-0 ceph-mon[76537]: pgmap v2669: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 1.7 MiB/s wr, 125 op/s
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.232 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.989 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "35cf794f-191d-48c6-9cee-746c7f1345d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.989 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "35cf794f-191d-48c6-9cee-746c7f1345d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.989 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "35cf794f-191d-48c6-9cee-746c7f1345d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.990 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "35cf794f-191d-48c6-9cee-746c7f1345d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.990 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "35cf794f-191d-48c6-9cee-746c7f1345d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.991 248514 INFO nova.compute.manager [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Terminating instance
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.991 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "refresh_cache-35cf794f-191d-48c6-9cee-746c7f1345d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.991 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquired lock "refresh_cache-35cf794f-191d-48c6-9cee-746c7f1345d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:50:47 compute-0 nova_compute[248510]: 2025-12-13 08:50:47.992 248514 DEBUG nova.network.neutron [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:50:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 1.7 MiB/s wr, 171 op/s
Dec 13 08:50:48 compute-0 nova_compute[248510]: 2025-12-13 08:50:48.220 248514 DEBUG nova.network.neutron [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:50:48 compute-0 ceph-mon[76537]: osdmap e292: 3 total, 3 up, 3 in
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.051 248514 DEBUG nova.network.neutron [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.075 248514 DEBUG nova.compute.manager [req-41addae0-65af-42ba-9e2f-198e8c776d22 req-38df3c9e-e062-41c1-bd10-8041fda55de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.076 248514 DEBUG oslo_concurrency.lockutils [req-41addae0-65af-42ba-9e2f-198e8c776d22 req-38df3c9e-e062-41c1-bd10-8041fda55de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.076 248514 DEBUG oslo_concurrency.lockutils [req-41addae0-65af-42ba-9e2f-198e8c776d22 req-38df3c9e-e062-41c1-bd10-8041fda55de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.076 248514 DEBUG oslo_concurrency.lockutils [req-41addae0-65af-42ba-9e2f-198e8c776d22 req-38df3c9e-e062-41c1-bd10-8041fda55de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.077 248514 DEBUG nova.compute.manager [req-41addae0-65af-42ba-9e2f-198e8c776d22 req-38df3c9e-e062-41c1-bd10-8041fda55de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] No waiting events found dispatching network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.077 248514 WARNING nova.compute.manager [req-41addae0-65af-42ba-9e2f-198e8c776d22 req-38df3c9e-e062-41c1-bd10-8041fda55de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received unexpected event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff for instance with vm_state active and task_state None.
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.089 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Releasing lock "refresh_cache-35cf794f-191d-48c6-9cee-746c7f1345d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.089 248514 DEBUG nova.compute.manager [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:50:49 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Dec 13 08:50:49 compute-0 systemd-machined[210538]: Machine qemu-134-instance-0000006c terminated.
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.236 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:49 compute-0 ceph-mon[76537]: pgmap v2671: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 1.7 MiB/s wr, 171 op/s
Dec 13 08:50:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:50:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 52K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1483 writes, 6641 keys, 1483 commit groups, 1.0 writes per commit group, ingest: 9.77 MB, 0.02 MB/s
                                           Interval WAL: 1484 writes, 1484 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     18.6      3.50              0.22        35    0.100       0      0       0.0       0.0
                                             L6      1/0   10.57 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4     72.9     61.2      4.64              0.84        34    0.136    202K    18K       0.0       0.0
                                            Sum      1/0   10.57 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     41.6     42.9      8.13              1.06        69    0.118    202K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.1     75.7     77.7      0.68              0.15        10    0.068     37K   2503       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     72.9     61.2      4.64              0.84        34    0.136    202K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     18.6      3.49              0.22        34    0.103       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.063, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.34 GB write, 0.07 MB/s write, 0.33 GB read, 0.07 MB/s read, 8.1 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 40.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000385 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2543,39.22 MB,12.9002%) FilterBlock(70,565.23 KB,0.181575%) IndexBlock(70,956.06 KB,0.307123%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.309 248514 INFO nova.virt.libvirt.driver [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Instance destroyed successfully.
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.310 248514 DEBUG nova.objects.instance [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lazy-loading 'resources' on Instance uuid 35cf794f-191d-48c6-9cee-746c7f1345d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:50:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:50:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Dec 13 08:50:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Dec 13 08:50:49 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.967 248514 INFO nova.virt.libvirt.driver [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Deleting instance files /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6_del
Dec 13 08:50:49 compute-0 nova_compute[248510]: 2025-12-13 08:50:49.968 248514 INFO nova.virt.libvirt.driver [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Deletion of /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6_del complete
Dec 13 08:50:50 compute-0 nova_compute[248510]: 2025-12-13 08:50:50.067 248514 INFO nova.compute.manager [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Took 0.98 seconds to destroy the instance on the hypervisor.
Dec 13 08:50:50 compute-0 nova_compute[248510]: 2025-12-13 08:50:50.068 248514 DEBUG oslo.service.loopingcall [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:50:50 compute-0 nova_compute[248510]: 2025-12-13 08:50:50.068 248514 DEBUG nova.compute.manager [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:50:50 compute-0 nova_compute[248510]: 2025-12-13 08:50:50.068 248514 DEBUG nova.network.neutron [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:50:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 28 KiB/s wr, 220 op/s
Dec 13 08:50:50 compute-0 ceph-mon[76537]: osdmap e293: 3 total, 3 up, 3 in
Dec 13 08:50:50 compute-0 ceph-mon[76537]: pgmap v2673: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 28 KiB/s wr, 220 op/s
Dec 13 08:50:50 compute-0 nova_compute[248510]: 2025-12-13 08:50:50.959 248514 DEBUG nova.network.neutron [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:50:51 compute-0 nova_compute[248510]: 2025-12-13 08:50:51.038 248514 DEBUG nova.network.neutron [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:50:51 compute-0 nova_compute[248510]: 2025-12-13 08:50:51.055 248514 INFO nova.compute.manager [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Took 0.99 seconds to deallocate network for instance.
Dec 13 08:50:51 compute-0 nova_compute[248510]: 2025-12-13 08:50:51.167 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:51 compute-0 nova_compute[248510]: 2025-12-13 08:50:51.167 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:51 compute-0 nova_compute[248510]: 2025-12-13 08:50:51.404 248514 DEBUG oslo_concurrency.processutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:50:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:50:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880413754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:50:51 compute-0 nova_compute[248510]: 2025-12-13 08:50:51.977 248514 DEBUG oslo_concurrency.processutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:50:51 compute-0 nova_compute[248510]: 2025-12-13 08:50:51.986 248514 DEBUG nova.compute.provider_tree [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:50:52 compute-0 nova_compute[248510]: 2025-12-13 08:50:52.009 248514 DEBUG nova.scheduler.client.report [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:50:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/880413754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:50:52 compute-0 nova_compute[248510]: 2025-12-13 08:50:52.037 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:52 compute-0 nova_compute[248510]: 2025-12-13 08:50:52.067 248514 INFO nova.scheduler.client.report [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Deleted allocations for instance 35cf794f-191d-48c6-9cee-746c7f1345d6
Dec 13 08:50:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 21 KiB/s wr, 156 op/s
Dec 13 08:50:52 compute-0 nova_compute[248510]: 2025-12-13 08:50:52.139 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "35cf794f-191d-48c6-9cee-746c7f1345d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:52 compute-0 nova_compute[248510]: 2025-12-13 08:50:52.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:53 compute-0 podman[356470]: 2025-12-13 08:50:53.002906424 +0000 UTC m=+0.058576080 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:50:53 compute-0 podman[356469]: 2025-12-13 08:50:53.005879149 +0000 UTC m=+0.079267119 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 08:50:53 compute-0 ceph-mon[76537]: pgmap v2674: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 21 KiB/s wr, 156 op/s
Dec 13 08:50:53 compute-0 podman[356468]: 2025-12-13 08:50:53.072984171 +0000 UTC m=+0.147931400 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 13 08:50:53 compute-0 nova_compute[248510]: 2025-12-13 08:50:53.224 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:53 compute-0 nova_compute[248510]: 2025-12-13 08:50:53.251 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 8d919892-73fd-4a11-be79-2a1a9280a987 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 08:50:53 compute-0 nova_compute[248510]: 2025-12-13 08:50:53.252 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 67dc02f2-2430-4cc6-a575-bdfd238a5460 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 08:50:53 compute-0 nova_compute[248510]: 2025-12-13 08:50:53.252 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:53 compute-0 nova_compute[248510]: 2025-12-13 08:50:53.253 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "8d919892-73fd-4a11-be79-2a1a9280a987" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:53 compute-0 nova_compute[248510]: 2025-12-13 08:50:53.253 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:53 compute-0 nova_compute[248510]: 2025-12-13 08:50:53.254 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:53 compute-0 nova_compute[248510]: 2025-12-13 08:50:53.290 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "8d919892-73fd-4a11-be79-2a1a9280a987" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:53 compute-0 nova_compute[248510]: 2025-12-13 08:50:53.291 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 25 KiB/s wr, 232 op/s
Dec 13 08:50:54 compute-0 nova_compute[248510]: 2025-12-13 08:50:54.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:50:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Dec 13 08:50:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Dec 13 08:50:54 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.808157) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615854808180, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1029, "num_deletes": 254, "total_data_size": 1342178, "memory_usage": 1364584, "flush_reason": "Manual Compaction"}
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615854819167, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 973086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51879, "largest_seqno": 52907, "table_properties": {"data_size": 968409, "index_size": 2201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11437, "raw_average_key_size": 21, "raw_value_size": 958737, "raw_average_value_size": 1782, "num_data_blocks": 97, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615794, "oldest_key_time": 1765615794, "file_creation_time": 1765615854, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 11071 microseconds, and 3264 cpu microseconds.
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.819218) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 973086 bytes OK
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.819242) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.821246) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.821270) EVENT_LOG_v1 {"time_micros": 1765615854821263, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.821291) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 1337229, prev total WAL file size 1337229, number of live WAL files 2.
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.822005) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303130' seq:72057594037927935, type:22 .. '6D6772737461740032323632' seq:0, type:0; will stop at (end)
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(950KB)], [119(10MB)]
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615854822032, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 12054595, "oldest_snapshot_seqno": -1}
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7492 keys, 8923456 bytes, temperature: kUnknown
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615854895765, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 8923456, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8876283, "index_size": 27349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 194381, "raw_average_key_size": 25, "raw_value_size": 8745073, "raw_average_value_size": 1167, "num_data_blocks": 1067, "num_entries": 7492, "num_filter_entries": 7492, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615854, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.896142) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 8923456 bytes
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.900292) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.0 rd, 120.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.6 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(21.6) write-amplify(9.2) OK, records in: 7990, records dropped: 498 output_compression: NoCompression
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.900324) EVENT_LOG_v1 {"time_micros": 1765615854900313, "job": 72, "event": "compaction_finished", "compaction_time_micros": 73940, "compaction_time_cpu_micros": 23237, "output_level": 6, "num_output_files": 1, "total_output_size": 8923456, "num_input_records": 7990, "num_output_records": 7492, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615854901084, "job": 72, "event": "table_file_deletion", "file_number": 121}
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615854903369, "job": 72, "event": "table_file_deletion", "file_number": 119}
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.821927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.903597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.903603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.903604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.903605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:50:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.903608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:50:55 compute-0 ceph-mon[76537]: pgmap v2675: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 25 KiB/s wr, 232 op/s
Dec 13 08:50:55 compute-0 ceph-mon[76537]: osdmap e294: 3 total, 3 up, 3 in
Dec 13 08:50:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:55.429 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:50:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:55.429 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:50:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:50:55.430 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:50:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 KiB/s wr, 198 op/s
Dec 13 08:50:56 compute-0 nova_compute[248510]: 2025-12-13 08:50:56.802 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:57 compute-0 ceph-mon[76537]: pgmap v2677: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 KiB/s wr, 198 op/s
Dec 13 08:50:57 compute-0 nova_compute[248510]: 2025-12-13 08:50:57.237 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 181 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 108 op/s
Dec 13 08:50:58 compute-0 ovn_controller[148476]: 2025-12-13T08:50:58Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:2f:3f 10.100.0.9
Dec 13 08:50:58 compute-0 ovn_controller[148476]: 2025-12-13T08:50:58Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:2f:3f 10.100.0.9
Dec 13 08:50:59 compute-0 ceph-mon[76537]: pgmap v2678: 321 pgs: 321 active+clean; 181 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 108 op/s
Dec 13 08:50:59 compute-0 nova_compute[248510]: 2025-12-13 08:50:59.243 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:50:59 compute-0 nova_compute[248510]: 2025-12-13 08:50:59.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:50:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.807355) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615859807430, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 297, "num_deletes": 256, "total_data_size": 74663, "memory_usage": 80680, "flush_reason": "Manual Compaction"}
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615859809974, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 74024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52908, "largest_seqno": 53204, "table_properties": {"data_size": 72098, "index_size": 154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4874, "raw_average_key_size": 17, "raw_value_size": 68266, "raw_average_value_size": 245, "num_data_blocks": 7, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615855, "oldest_key_time": 1765615855, "file_creation_time": 1765615859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 2657 microseconds, and 1058 cpu microseconds.
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.810023) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 74024 bytes OK
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.810039) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.811313) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.811325) EVENT_LOG_v1 {"time_micros": 1765615859811321, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.811337) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 72472, prev total WAL file size 72472, number of live WAL files 2.
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.811652) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303130' seq:72057594037927935, type:22 .. '6C6F676D0032323632' seq:0, type:0; will stop at (end)
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(72KB)], [122(8714KB)]
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615859811700, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 8997480, "oldest_snapshot_seqno": -1}
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7251 keys, 8884822 bytes, temperature: kUnknown
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615859890815, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 8884822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8838622, "index_size": 26952, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18181, "raw_key_size": 190319, "raw_average_key_size": 26, "raw_value_size": 8710997, "raw_average_value_size": 1201, "num_data_blocks": 1047, "num_entries": 7251, "num_filter_entries": 7251, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.891118) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 8884822 bytes
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.893304) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.6 rd, 112.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.5 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(241.6) write-amplify(120.0) OK, records in: 7770, records dropped: 519 output_compression: NoCompression
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.893328) EVENT_LOG_v1 {"time_micros": 1765615859893317, "job": 74, "event": "compaction_finished", "compaction_time_micros": 79195, "compaction_time_cpu_micros": 32724, "output_level": 6, "num_output_files": 1, "total_output_size": 8884822, "num_input_records": 7770, "num_output_records": 7251, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615859893468, "job": 74, "event": "table_file_deletion", "file_number": 124}
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615859894847, "job": 74, "event": "table_file_deletion", "file_number": 122}
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.811574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.894903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.894908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.894911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.894913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:50:59 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.894915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:51:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 194 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 123 op/s
Dec 13 08:51:00 compute-0 ceph-mon[76537]: pgmap v2679: 321 pgs: 321 active+clean; 194 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 123 op/s
Dec 13 08:51:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 194 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 123 op/s
Dec 13 08:51:02 compute-0 nova_compute[248510]: 2025-12-13 08:51:02.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:03 compute-0 ceph-mon[76537]: pgmap v2680: 321 pgs: 321 active+clean; 194 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 123 op/s
Dec 13 08:51:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 200 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Dec 13 08:51:04 compute-0 sudo[356530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:51:04 compute-0 sudo[356530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:51:04 compute-0 sudo[356530]: pam_unix(sudo:session): session closed for user root
Dec 13 08:51:04 compute-0 sudo[356555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:51:04 compute-0 sudo[356555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.290 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.290 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.290 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.291 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.291 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.292 248514 INFO nova.compute.manager [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Terminating instance
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.293 248514 DEBUG nova.compute.manager [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.308 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615849.3071628, 35cf794f-191d-48c6-9cee-746c7f1345d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.309 248514 INFO nova.compute.manager [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] VM Stopped (Lifecycle Event)
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.334 248514 DEBUG nova.compute.manager [None req-fba204c5-69c7-4e77-b0b3-a3a36281b492 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:51:04 compute-0 kernel: tap5c3b12ae-3d (unregistering): left promiscuous mode
Dec 13 08:51:04 compute-0 NetworkManager[50376]: <info>  [1765615864.3559] device (tap5c3b12ae-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:51:04 compute-0 ovn_controller[148476]: 2025-12-13T08:51:04Z|01058|binding|INFO|Releasing lport 5c3b12ae-3db8-4a4c-b558-19ab89a179ff from this chassis (sb_readonly=0)
Dec 13 08:51:04 compute-0 ovn_controller[148476]: 2025-12-13T08:51:04Z|01059|binding|INFO|Setting lport 5c3b12ae-3db8-4a4c-b558-19ab89a179ff down in Southbound
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.363 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:04 compute-0 ovn_controller[148476]: 2025-12-13T08:51:04Z|01060|binding|INFO|Removing iface tap5c3b12ae-3d ovn-installed in OVS
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.372 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:2f:3f 10.100.0.9'], port_security=['fa:16:3e:8f:2f:3f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '67dc02f2-2430-4cc6-a575-bdfd238a5460', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8b6ad7fe-fef3-4e9f-8568-5f8d9cabfe04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d2f617a-f802-4ec5-b71a-0c176c8c3ae6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5c3b12ae-3db8-4a4c-b558-19ab89a179ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.373 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5c3b12ae-3db8-4a4c-b558-19ab89a179ff in datapath abf04c22-5ac7-46ee-bfad-53f95095fba3 unbound from our chassis
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.375 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abf04c22-5ac7-46ee-bfad-53f95095fba3
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.383 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.392 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52e548c8-d0a6-4ca0-92ff-eb7436898a36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:04 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Dec 13 08:51:04 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Consumed 11.890s CPU time.
Dec 13 08:51:04 compute-0 systemd-machined[210538]: Machine qemu-135-instance-0000006d terminated.
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.422 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[660b76c3-ae1b-44e2-a98a-4ac0ea874d95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.425 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[daea573f-9450-405a-9bd5-46ecec6f0eac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.452 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0246c6-baef-488f-ad41-d7754fa45ca5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.469 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa957d66-792e-497c-84c4-c8b2eb4445c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabf04c22-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:36:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838377, 'reachable_time': 33674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356605, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.485 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bcaf4be6-44b4-49f1-8966-c4032f496610]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabf04c22-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 838387, 'tstamp': 838387}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356606, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapabf04c22-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 838390, 'tstamp': 838390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356606, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.487 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabf04c22-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.494 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabf04c22-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.495 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.495 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabf04c22-50, col_values=(('external_ids', {'iface-id': '6b94eeb9-e344-4933-88eb-29577cf3087f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.495 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.528 248514 INFO nova.virt.libvirt.driver [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Instance destroyed successfully.
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.529 248514 DEBUG nova.objects.instance [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 67dc02f2-2430-4cc6-a575-bdfd238a5460 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.552 248514 DEBUG nova.virt.libvirt.vif [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=109,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr0tqCFuWCJmy9b1oflh+eI2eFf4+EocjRlIktK5eBb1qKCRDnUDy1aaX29KorZ+3GlNFyu3OLFeV0DAzkIQCZVEoFaSpXloiDb56F5zzuYqySsB4TKa4TVSZgtsxz/9w==',key_name='tempest-TestSecurityGroupsBasicOps-305772843',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:50:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-9wzr47fu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:50:47Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=67dc02f2-2430-4cc6-a575-bdfd238a5460,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.553 248514 DEBUG nova.network.os_vif_util [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.554 248514 DEBUG nova.network.os_vif_util [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.554 248514 DEBUG os_vif [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.556 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.556 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c3b12ae-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.559 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.561 248514 INFO os_vif [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d')
Dec 13 08:51:04 compute-0 sudo[356555]: pam_unix(sudo:session): session closed for user root
Dec 13 08:51:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:51:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:51:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:51:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:51:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:51:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:51:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:51:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:51:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:51:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:51:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:51:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:51:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:51:04 compute-0 sudo[356655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:51:04 compute-0 sudo[356655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:51:04 compute-0 sudo[356655]: pam_unix(sudo:session): session closed for user root
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.865 248514 INFO nova.virt.libvirt.driver [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Deleting instance files /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460_del
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.866 248514 INFO nova.virt.libvirt.driver [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Deletion of /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460_del complete
Dec 13 08:51:04 compute-0 sudo[356680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:51:04 compute-0 sudo[356680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.925 248514 INFO nova.compute.manager [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Took 0.63 seconds to destroy the instance on the hypervisor.
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.925 248514 DEBUG oslo.service.loopingcall [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.926 248514 DEBUG nova.compute.manager [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:51:04 compute-0 nova_compute[248510]: 2025-12-13 08:51:04.926 248514 DEBUG nova.network.neutron [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:51:05 compute-0 ceph-mon[76537]: pgmap v2681: 321 pgs: 321 active+clean; 200 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Dec 13 08:51:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:51:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:51:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:51:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:51:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:51:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:51:05 compute-0 podman[356718]: 2025-12-13 08:51:05.199209686 +0000 UTC m=+0.052360643 container create 5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 08:51:05 compute-0 systemd[1]: Started libpod-conmon-5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba.scope.
Dec 13 08:51:05 compute-0 podman[356718]: 2025-12-13 08:51:05.175151963 +0000 UTC m=+0.028302950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:51:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:51:05 compute-0 podman[356718]: 2025-12-13 08:51:05.291760657 +0000 UTC m=+0.144911644 container init 5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:51:05 compute-0 podman[356718]: 2025-12-13 08:51:05.298990568 +0000 UTC m=+0.152141525 container start 5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cartwright, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:51:05 compute-0 podman[356718]: 2025-12-13 08:51:05.302204809 +0000 UTC m=+0.155355776 container attach 5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:51:05 compute-0 systemd[1]: libpod-5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba.scope: Deactivated successfully.
Dec 13 08:51:05 compute-0 priceless_cartwright[356734]: 167 167
Dec 13 08:51:05 compute-0 podman[356718]: 2025-12-13 08:51:05.305595074 +0000 UTC m=+0.158746031 container died 5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 08:51:05 compute-0 conmon[356734]: conmon 5b9db9a18e95c1abd83d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba.scope/container/memory.events
Dec 13 08:51:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdfea8696b6bce665fa277c0d16a617ad5fb98b4d18fc79086896d9603878240-merged.mount: Deactivated successfully.
Dec 13 08:51:05 compute-0 podman[356718]: 2025-12-13 08:51:05.346334415 +0000 UTC m=+0.199485372 container remove 5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:51:05 compute-0 systemd[1]: libpod-conmon-5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba.scope: Deactivated successfully.
Dec 13 08:51:05 compute-0 nova_compute[248510]: 2025-12-13 08:51:05.430 248514 DEBUG nova.compute.manager [req-fd5f4f52-905c-43de-85f1-8ec81a7e3361 req-cc79f6eb-17c5-4945-aaf9-20947d45eb6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-vif-unplugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:51:05 compute-0 nova_compute[248510]: 2025-12-13 08:51:05.432 248514 DEBUG oslo_concurrency.lockutils [req-fd5f4f52-905c-43de-85f1-8ec81a7e3361 req-cc79f6eb-17c5-4945-aaf9-20947d45eb6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:05 compute-0 nova_compute[248510]: 2025-12-13 08:51:05.432 248514 DEBUG oslo_concurrency.lockutils [req-fd5f4f52-905c-43de-85f1-8ec81a7e3361 req-cc79f6eb-17c5-4945-aaf9-20947d45eb6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:05 compute-0 nova_compute[248510]: 2025-12-13 08:51:05.432 248514 DEBUG oslo_concurrency.lockutils [req-fd5f4f52-905c-43de-85f1-8ec81a7e3361 req-cc79f6eb-17c5-4945-aaf9-20947d45eb6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:05 compute-0 nova_compute[248510]: 2025-12-13 08:51:05.433 248514 DEBUG nova.compute.manager [req-fd5f4f52-905c-43de-85f1-8ec81a7e3361 req-cc79f6eb-17c5-4945-aaf9-20947d45eb6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] No waiting events found dispatching network-vif-unplugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:51:05 compute-0 nova_compute[248510]: 2025-12-13 08:51:05.433 248514 DEBUG nova.compute.manager [req-fd5f4f52-905c-43de-85f1-8ec81a7e3361 req-cc79f6eb-17c5-4945-aaf9-20947d45eb6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-vif-unplugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:51:05 compute-0 podman[356759]: 2025-12-13 08:51:05.52999829 +0000 UTC m=+0.044407824 container create 918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cerf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 08:51:05 compute-0 systemd[1]: Started libpod-conmon-918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911.scope.
Dec 13 08:51:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb7332060821fc5ae92fcbf20dcbf3cd42b34ebdf72f28e0febdbac2c14a6df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb7332060821fc5ae92fcbf20dcbf3cd42b34ebdf72f28e0febdbac2c14a6df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb7332060821fc5ae92fcbf20dcbf3cd42b34ebdf72f28e0febdbac2c14a6df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb7332060821fc5ae92fcbf20dcbf3cd42b34ebdf72f28e0febdbac2c14a6df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb7332060821fc5ae92fcbf20dcbf3cd42b34ebdf72f28e0febdbac2c14a6df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:05 compute-0 podman[356759]: 2025-12-13 08:51:05.511166138 +0000 UTC m=+0.025575692 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:51:05 compute-0 podman[356759]: 2025-12-13 08:51:05.625146766 +0000 UTC m=+0.139556320 container init 918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 08:51:05 compute-0 podman[356759]: 2025-12-13 08:51:05.634322856 +0000 UTC m=+0.148732390 container start 918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:51:05 compute-0 podman[356759]: 2025-12-13 08:51:05.638689625 +0000 UTC m=+0.153099179 container attach 918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cerf, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 08:51:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 200 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.3 MiB/s wr, 68 op/s
Dec 13 08:51:06 compute-0 determined_cerf[356775]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:51:06 compute-0 determined_cerf[356775]: --> All data devices are unavailable
Dec 13 08:51:06 compute-0 systemd[1]: libpod-918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911.scope: Deactivated successfully.
Dec 13 08:51:06 compute-0 podman[356759]: 2025-12-13 08:51:06.132708402 +0000 UTC m=+0.647117936 container died 918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 08:51:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bb7332060821fc5ae92fcbf20dcbf3cd42b34ebdf72f28e0febdbac2c14a6df-merged.mount: Deactivated successfully.
Dec 13 08:51:06 compute-0 nova_compute[248510]: 2025-12-13 08:51:06.167 248514 DEBUG nova.network.neutron [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:51:06 compute-0 podman[356759]: 2025-12-13 08:51:06.173155046 +0000 UTC m=+0.687564580 container remove 918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cerf, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:51:06 compute-0 systemd[1]: libpod-conmon-918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911.scope: Deactivated successfully.
Dec 13 08:51:06 compute-0 nova_compute[248510]: 2025-12-13 08:51:06.193 248514 INFO nova.compute.manager [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Took 1.27 seconds to deallocate network for instance.
Dec 13 08:51:06 compute-0 sudo[356680]: pam_unix(sudo:session): session closed for user root
Dec 13 08:51:06 compute-0 nova_compute[248510]: 2025-12-13 08:51:06.246 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:06 compute-0 nova_compute[248510]: 2025-12-13 08:51:06.247 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:06 compute-0 sudo[356807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:51:06 compute-0 sudo[356807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:51:06 compute-0 sudo[356807]: pam_unix(sudo:session): session closed for user root
Dec 13 08:51:06 compute-0 sudo[356832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:51:06 compute-0 sudo[356832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:51:06 compute-0 nova_compute[248510]: 2025-12-13 08:51:06.541 248514 DEBUG oslo_concurrency.processutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:51:06 compute-0 podman[356870]: 2025-12-13 08:51:06.616211715 +0000 UTC m=+0.043732248 container create 783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:51:06 compute-0 systemd[1]: Started libpod-conmon-783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f.scope.
Dec 13 08:51:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:51:06 compute-0 podman[356870]: 2025-12-13 08:51:06.69180655 +0000 UTC m=+0.119327103 container init 783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 08:51:06 compute-0 podman[356870]: 2025-12-13 08:51:06.596958622 +0000 UTC m=+0.024479175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:51:06 compute-0 podman[356870]: 2025-12-13 08:51:06.698265412 +0000 UTC m=+0.125785945 container start 783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:51:06 compute-0 podman[356870]: 2025-12-13 08:51:06.7017805 +0000 UTC m=+0.129301063 container attach 783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:51:06 compute-0 systemd[1]: libpod-783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f.scope: Deactivated successfully.
Dec 13 08:51:06 compute-0 kind_goldberg[356886]: 167 167
Dec 13 08:51:06 compute-0 conmon[356886]: conmon 783588bc265afa382791 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f.scope/container/memory.events
Dec 13 08:51:06 compute-0 podman[356870]: 2025-12-13 08:51:06.704342944 +0000 UTC m=+0.131863637 container died 783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:51:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0af0a68954049caddaf836efbc2b57535a9024b0e35dae25037e20fcf5e1f05-merged.mount: Deactivated successfully.
Dec 13 08:51:06 compute-0 podman[356870]: 2025-12-13 08:51:06.742901461 +0000 UTC m=+0.170421994 container remove 783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 08:51:06 compute-0 systemd[1]: libpod-conmon-783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f.scope: Deactivated successfully.
Dec 13 08:51:06 compute-0 podman[356928]: 2025-12-13 08:51:06.90437746 +0000 UTC m=+0.041967393 container create 26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:51:06 compute-0 systemd[1]: Started libpod-conmon-26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be.scope.
Dec 13 08:51:06 compute-0 podman[356928]: 2025-12-13 08:51:06.886980604 +0000 UTC m=+0.024570557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:51:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeaf5996d3eb0e89ea2dd0804d0390aaeeba807d71c9a287e4391316e174129/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeaf5996d3eb0e89ea2dd0804d0390aaeeba807d71c9a287e4391316e174129/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeaf5996d3eb0e89ea2dd0804d0390aaeeba807d71c9a287e4391316e174129/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeaf5996d3eb0e89ea2dd0804d0390aaeeba807d71c9a287e4391316e174129/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:07 compute-0 podman[356928]: 2025-12-13 08:51:07.023802145 +0000 UTC m=+0.161392088 container init 26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_rubin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:51:07 compute-0 podman[356928]: 2025-12-13 08:51:07.030812831 +0000 UTC m=+0.168402774 container start 26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_rubin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 08:51:07 compute-0 podman[356928]: 2025-12-13 08:51:07.035410616 +0000 UTC m=+0.173000549 container attach 26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 08:51:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:51:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713157814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.133 248514 DEBUG oslo_concurrency.processutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.142 248514 DEBUG nova.compute.provider_tree [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.165 248514 DEBUG nova.scheduler.client.report [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:51:07 compute-0 ceph-mon[76537]: pgmap v2682: 321 pgs: 321 active+clean; 200 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.3 MiB/s wr, 68 op/s
Dec 13 08:51:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3713157814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.194 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.236 248514 INFO nova.scheduler.client.report [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 67dc02f2-2430-4cc6-a575-bdfd238a5460
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.241 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:07 compute-0 bold_rubin[356944]: {
Dec 13 08:51:07 compute-0 bold_rubin[356944]:     "0": [
Dec 13 08:51:07 compute-0 bold_rubin[356944]:         {
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "devices": [
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "/dev/loop3"
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             ],
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_name": "ceph_lv0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_size": "21470642176",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "name": "ceph_lv0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "tags": {
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.cluster_name": "ceph",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.crush_device_class": "",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.encrypted": "0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.objectstore": "bluestore",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.osd_id": "0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.type": "block",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.vdo": "0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.with_tpm": "0"
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             },
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "type": "block",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "vg_name": "ceph_vg0"
Dec 13 08:51:07 compute-0 bold_rubin[356944]:         }
Dec 13 08:51:07 compute-0 bold_rubin[356944]:     ],
Dec 13 08:51:07 compute-0 bold_rubin[356944]:     "1": [
Dec 13 08:51:07 compute-0 bold_rubin[356944]:         {
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "devices": [
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "/dev/loop4"
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             ],
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_name": "ceph_lv1",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_size": "21470642176",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "name": "ceph_lv1",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "tags": {
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.cluster_name": "ceph",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.crush_device_class": "",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.encrypted": "0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.objectstore": "bluestore",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.osd_id": "1",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.type": "block",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.vdo": "0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.with_tpm": "0"
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             },
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "type": "block",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "vg_name": "ceph_vg1"
Dec 13 08:51:07 compute-0 bold_rubin[356944]:         }
Dec 13 08:51:07 compute-0 bold_rubin[356944]:     ],
Dec 13 08:51:07 compute-0 bold_rubin[356944]:     "2": [
Dec 13 08:51:07 compute-0 bold_rubin[356944]:         {
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "devices": [
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "/dev/loop5"
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             ],
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_name": "ceph_lv2",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_size": "21470642176",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "name": "ceph_lv2",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "tags": {
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.cluster_name": "ceph",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.crush_device_class": "",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.encrypted": "0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.objectstore": "bluestore",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.osd_id": "2",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.type": "block",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.vdo": "0",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:                 "ceph.with_tpm": "0"
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             },
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "type": "block",
Dec 13 08:51:07 compute-0 bold_rubin[356944]:             "vg_name": "ceph_vg2"
Dec 13 08:51:07 compute-0 bold_rubin[356944]:         }
Dec 13 08:51:07 compute-0 bold_rubin[356944]:     ]
Dec 13 08:51:07 compute-0 bold_rubin[356944]: }
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.343 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:07 compute-0 systemd[1]: libpod-26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be.scope: Deactivated successfully.
Dec 13 08:51:07 compute-0 podman[356928]: 2025-12-13 08:51:07.351873061 +0000 UTC m=+0.489463004 container died 26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 08:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebeaf5996d3eb0e89ea2dd0804d0390aaeeba807d71c9a287e4391316e174129-merged.mount: Deactivated successfully.
Dec 13 08:51:07 compute-0 podman[356928]: 2025-12-13 08:51:07.465449379 +0000 UTC m=+0.603039312 container remove 26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Dec 13 08:51:07 compute-0 sudo[356832]: pam_unix(sudo:session): session closed for user root
Dec 13 08:51:07 compute-0 systemd[1]: libpod-conmon-26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be.scope: Deactivated successfully.
Dec 13 08:51:07 compute-0 sudo[356967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:51:07 compute-0 sudo[356967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:51:07 compute-0 sudo[356967]: pam_unix(sudo:session): session closed for user root
Dec 13 08:51:07 compute-0 sudo[356992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:51:07 compute-0 sudo[356992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.660 248514 DEBUG nova.compute.manager [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.660 248514 DEBUG oslo_concurrency.lockutils [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.660 248514 DEBUG oslo_concurrency.lockutils [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.661 248514 DEBUG oslo_concurrency.lockutils [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.661 248514 DEBUG nova.compute.manager [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] No waiting events found dispatching network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.661 248514 WARNING nova.compute.manager [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received unexpected event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff for instance with vm_state deleted and task_state None.
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.661 248514 DEBUG nova.compute.manager [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-vif-deleted-5c3b12ae-3db8-4a4c-b558-19ab89a179ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:51:07 compute-0 nova_compute[248510]: 2025-12-13 08:51:07.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:51:08 compute-0 podman[357028]: 2025-12-13 08:51:07.906893468 +0000 UTC m=+0.021329335 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:51:08 compute-0 podman[357028]: 2025-12-13 08:51:08.051697219 +0000 UTC m=+0.166133056 container create 05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:51:08 compute-0 nova_compute[248510]: 2025-12-13 08:51:08.083 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:51:08 compute-0 nova_compute[248510]: 2025-12-13 08:51:08.083 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:51:08 compute-0 nova_compute[248510]: 2025-12-13 08:51:08.083 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:51:08 compute-0 nova_compute[248510]: 2025-12-13 08:51:08.084 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8d919892-73fd-4a11-be79-2a1a9280a987 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:51:08 compute-0 systemd[1]: Started libpod-conmon-05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e.scope.
Dec 13 08:51:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 163 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Dec 13 08:51:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:51:08 compute-0 podman[357028]: 2025-12-13 08:51:08.21768239 +0000 UTC m=+0.332118257 container init 05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tharp, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 13 08:51:08 compute-0 podman[357028]: 2025-12-13 08:51:08.224931392 +0000 UTC m=+0.339367229 container start 05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 08:51:08 compute-0 hardcore_tharp[357044]: 167 167
Dec 13 08:51:08 compute-0 systemd[1]: libpod-05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e.scope: Deactivated successfully.
Dec 13 08:51:08 compute-0 podman[357028]: 2025-12-13 08:51:08.229032645 +0000 UTC m=+0.343468502 container attach 05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tharp, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 08:51:08 compute-0 podman[357028]: 2025-12-13 08:51:08.230690076 +0000 UTC m=+0.345125923 container died 05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tharp, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:51:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-391ae54c8cc2df406a282594583ab9804f7bb7721424fcb2eee669a48b7b01d7-merged.mount: Deactivated successfully.
Dec 13 08:51:08 compute-0 podman[357028]: 2025-12-13 08:51:08.310840736 +0000 UTC m=+0.425276573 container remove 05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:51:08 compute-0 systemd[1]: libpod-conmon-05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e.scope: Deactivated successfully.
Dec 13 08:51:08 compute-0 podman[357070]: 2025-12-13 08:51:08.518462182 +0000 UTC m=+0.077746330 container create 13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:51:08 compute-0 systemd[1]: Started libpod-conmon-13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f.scope.
Dec 13 08:51:08 compute-0 podman[357070]: 2025-12-13 08:51:08.463397661 +0000 UTC m=+0.022681829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:51:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f29de4a3aea2bc2038ff6ee9e898d3fd85c9663e4ac7f540257a55e535f4dcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f29de4a3aea2bc2038ff6ee9e898d3fd85c9663e4ac7f540257a55e535f4dcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f29de4a3aea2bc2038ff6ee9e898d3fd85c9663e4ac7f540257a55e535f4dcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f29de4a3aea2bc2038ff6ee9e898d3fd85c9663e4ac7f540257a55e535f4dcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:08 compute-0 podman[357070]: 2025-12-13 08:51:08.650968334 +0000 UTC m=+0.210252492 container init 13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:51:08 compute-0 podman[357070]: 2025-12-13 08:51:08.657374045 +0000 UTC m=+0.216658193 container start 13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Dec 13 08:51:08 compute-0 podman[357070]: 2025-12-13 08:51:08.66316034 +0000 UTC m=+0.222444488 container attach 13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dubinsky, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:51:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:09.256 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:51:09 compute-0 nova_compute[248510]: 2025-12-13 08:51:09.257 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:09.259 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:51:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:51:09
Dec 13 08:51:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:51:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:51:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.mgr', 'backups', 'vms', 'default.rgw.log', 'default.rgw.meta', 'images']
Dec 13 08:51:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:51:09 compute-0 lvm[357162]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:51:09 compute-0 lvm[357162]: VG ceph_vg0 finished
Dec 13 08:51:09 compute-0 lvm[357165]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:51:09 compute-0 lvm[357165]: VG ceph_vg1 finished
Dec 13 08:51:09 compute-0 lvm[357167]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:51:09 compute-0 lvm[357167]: VG ceph_vg2 finished
Dec 13 08:51:09 compute-0 lvm[357168]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:51:09 compute-0 lvm[357168]: VG ceph_vg1 finished
Dec 13 08:51:09 compute-0 lvm[357170]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:51:09 compute-0 lvm[357170]: VG ceph_vg1 finished
Dec 13 08:51:09 compute-0 nova_compute[248510]: 2025-12-13 08:51:09.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:09 compute-0 sleepy_dubinsky[357086]: {}
Dec 13 08:51:09 compute-0 systemd[1]: libpod-13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f.scope: Deactivated successfully.
Dec 13 08:51:09 compute-0 systemd[1]: libpod-13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f.scope: Consumed 1.312s CPU time.
Dec 13 08:51:09 compute-0 podman[357070]: 2025-12-13 08:51:09.590662645 +0000 UTC m=+1.149946793 container died 13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dubinsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:51:09 compute-0 nova_compute[248510]: 2025-12-13 08:51:09.776 248514 DEBUG nova.compute.manager [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-changed-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:51:09 compute-0 nova_compute[248510]: 2025-12-13 08:51:09.776 248514 DEBUG nova.compute.manager [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Refreshing instance network info cache due to event network-changed-8d84f494-97c1-4708-b8df-444e42f55484. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:51:09 compute-0 nova_compute[248510]: 2025-12-13 08:51:09.777 248514 DEBUG oslo_concurrency.lockutils [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:51:09 compute-0 ceph-mon[76537]: pgmap v2683: 321 pgs: 321 active+clean; 163 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Dec 13 08:51:09 compute-0 nova_compute[248510]: 2025-12-13 08:51:09.828 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:09 compute-0 nova_compute[248510]: 2025-12-13 08:51:09.829 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:09 compute-0 nova_compute[248510]: 2025-12-13 08:51:09.830 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:09 compute-0 nova_compute[248510]: 2025-12-13 08:51:09.831 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:09 compute-0 nova_compute[248510]: 2025-12-13 08:51:09.831 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:09 compute-0 nova_compute[248510]: 2025-12-13 08:51:09.833 248514 INFO nova.compute.manager [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Terminating instance
Dec 13 08:51:09 compute-0 nova_compute[248510]: 2025-12-13 08:51:09.835 248514 DEBUG nova.compute.manager [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 121 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 1.1 MiB/s wr, 68 op/s
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:51:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.227 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updating instance_info_cache with network_info: [{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.255 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.255 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.256 248514 DEBUG oslo_concurrency.lockutils [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.256 248514 DEBUG nova.network.neutron [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Refreshing network info cache for port 8d84f494-97c1-4708-b8df-444e42f55484 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.257 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:51:10 compute-0 kernel: tap8d84f494-97 (unregistering): left promiscuous mode
Dec 13 08:51:10 compute-0 NetworkManager[50376]: <info>  [1765615870.3573] device (tap8d84f494-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:10 compute-0 ovn_controller[148476]: 2025-12-13T08:51:10Z|01061|binding|INFO|Releasing lport 8d84f494-97c1-4708-b8df-444e42f55484 from this chassis (sb_readonly=0)
Dec 13 08:51:10 compute-0 ovn_controller[148476]: 2025-12-13T08:51:10Z|01062|binding|INFO|Setting lport 8d84f494-97c1-4708-b8df-444e42f55484 down in Southbound
Dec 13 08:51:10 compute-0 ovn_controller[148476]: 2025-12-13T08:51:10Z|01063|binding|INFO|Removing iface tap8d84f494-97 ovn-installed in OVS
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.390 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f29de4a3aea2bc2038ff6ee9e898d3fd85c9663e4ac7f540257a55e535f4dcf-merged.mount: Deactivated successfully.
Dec 13 08:51:10 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Dec 13 08:51:10 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Consumed 14.946s CPU time.
Dec 13 08:51:10 compute-0 systemd-machined[210538]: Machine qemu-133-instance-0000006b terminated.
Dec 13 08:51:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:10.428 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:03:ce 10.100.0.11'], port_security=['fa:16:3e:87:03:ce 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8d919892-73fd-4a11-be79-2a1a9280a987', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c8d25b9-2d2e-4d46-aaac-2d96c7d8db60 8b6ad7fe-fef3-4e9f-8568-5f8d9cabfe04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d2f617a-f802-4ec5-b71a-0c176c8c3ae6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=8d84f494-97c1-4708-b8df-444e42f55484) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:51:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:10.429 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 8d84f494-97c1-4708-b8df-444e42f55484 in datapath abf04c22-5ac7-46ee-bfad-53f95095fba3 unbound from our chassis
Dec 13 08:51:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:10.430 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abf04c22-5ac7-46ee-bfad-53f95095fba3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:51:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:10.430 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[854e6854-7e27-4804-b05d-204bd9dc6f4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:10.431 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3 namespace which is not needed anymore
Dec 13 08:51:10 compute-0 NetworkManager[50376]: <info>  [1765615870.4574] manager: (tap8d84f494-97): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.487 248514 INFO nova.virt.libvirt.driver [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Instance destroyed successfully.
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.488 248514 DEBUG nova.objects.instance [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 8d919892-73fd-4a11-be79-2a1a9280a987 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.515 248514 DEBUG nova.virt.libvirt.vif [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:49:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=107,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr0tqCFuWCJmy9b1oflh+eI2eFf4+EocjRlIktK5eBb1qKCRDnUDy1aaX29KorZ+3GlNFyu3OLFeV0DAzkIQCZVEoFaSpXloiDb56F5zzuYqySsB4TKa4TVSZgtsxz/9w==',key_name='tempest-TestSecurityGroupsBasicOps-305772843',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:50:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-rsplf7ip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:50:09Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=8d919892-73fd-4a11-be79-2a1a9280a987,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.515 248514 DEBUG nova.network.os_vif_util [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.516 248514 DEBUG nova.network.os_vif_util [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.516 248514 DEBUG os_vif [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.518 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.518 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d84f494-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.524 248514 INFO os_vif [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97')
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:51:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:51:10 compute-0 podman[357070]: 2025-12-13 08:51:10.795652318 +0000 UTC m=+2.354936466 container remove 13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:51:10 compute-0 ceph-mon[76537]: pgmap v2684: 321 pgs: 321 active+clean; 121 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 1.1 MiB/s wr, 68 op/s
Dec 13 08:51:10 compute-0 sudo[356992]: pam_unix(sudo:session): session closed for user root
Dec 13 08:51:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:51:10 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:51:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:51:10 compute-0 systemd[1]: libpod-conmon-13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f.scope: Deactivated successfully.
Dec 13 08:51:10 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.913 248514 DEBUG nova.compute.manager [req-507c8d6f-e9c6-4f62-abac-056cf0891277 req-9868a998-a110-4957-ad18-2843e88ef6ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-vif-unplugged-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.914 248514 DEBUG oslo_concurrency.lockutils [req-507c8d6f-e9c6-4f62-abac-056cf0891277 req-9868a998-a110-4957-ad18-2843e88ef6ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.914 248514 DEBUG oslo_concurrency.lockutils [req-507c8d6f-e9c6-4f62-abac-056cf0891277 req-9868a998-a110-4957-ad18-2843e88ef6ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.915 248514 DEBUG oslo_concurrency.lockutils [req-507c8d6f-e9c6-4f62-abac-056cf0891277 req-9868a998-a110-4957-ad18-2843e88ef6ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.915 248514 DEBUG nova.compute.manager [req-507c8d6f-e9c6-4f62-abac-056cf0891277 req-9868a998-a110-4957-ad18-2843e88ef6ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] No waiting events found dispatching network-vif-unplugged-8d84f494-97c1-4708-b8df-444e42f55484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:51:10 compute-0 nova_compute[248510]: 2025-12-13 08:51:10.915 248514 DEBUG nova.compute.manager [req-507c8d6f-e9c6-4f62-abac-056cf0891277 req-9868a998-a110-4957-ad18-2843e88ef6ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-vif-unplugged-8d84f494-97c1-4708-b8df-444e42f55484 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:51:10 compute-0 neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3[355380]: [NOTICE]   (355384) : haproxy version is 2.8.14-c23fe91
Dec 13 08:51:10 compute-0 neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3[355380]: [NOTICE]   (355384) : path to executable is /usr/sbin/haproxy
Dec 13 08:51:10 compute-0 neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3[355380]: [ALERT]    (355384) : Current worker (355386) exited with code 143 (Terminated)
Dec 13 08:51:10 compute-0 neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3[355380]: [WARNING]  (355384) : All workers exited. Exiting... (0)
Dec 13 08:51:10 compute-0 systemd[1]: libpod-0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112.scope: Deactivated successfully.
Dec 13 08:51:10 compute-0 podman[357232]: 2025-12-13 08:51:10.93413716 +0000 UTC m=+0.060580700 container died 0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:51:10 compute-0 sudo[357238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:51:10 compute-0 sudo[357238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:51:10 compute-0 sudo[357238]: pam_unix(sudo:session): session closed for user root
Dec 13 08:51:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112-userdata-shm.mount: Deactivated successfully.
Dec 13 08:51:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7015d7066325922029e49e3e8c4de1d3ef5bbeec39e3ee942291a993c00912b6-merged.mount: Deactivated successfully.
Dec 13 08:51:10 compute-0 podman[357232]: 2025-12-13 08:51:10.985419426 +0000 UTC m=+0.111862966 container cleanup 0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:51:10 compute-0 systemd[1]: libpod-conmon-0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112.scope: Deactivated successfully.
Dec 13 08:51:11 compute-0 podman[357286]: 2025-12-13 08:51:11.063419942 +0000 UTC m=+0.050981580 container remove 0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:51:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.071 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d7507ef4-5fcb-484a-92ed-d180b5e7738c]: (4, ('Sat Dec 13 08:51:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3 (0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112)\n0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112\nSat Dec 13 08:51:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3 (0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112)\n0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.073 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6ddc1c-6a1c-414e-9692-70f90ce9091f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.075 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabf04c22-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:11 compute-0 kernel: tapabf04c22-50: left promiscuous mode
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.085 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52ae25d7-255e-4eff-a814-11a2f52c8199]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.093 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.112 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b807d3a6-4d3c-4b4e-95f1-c783f30931ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.113 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8dcb86f-c269-458a-bdba-f23e4d3886fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.121 248514 INFO nova.virt.libvirt.driver [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Deleting instance files /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987_del
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.122 248514 INFO nova.virt.libvirt.driver [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Deletion of /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987_del complete
Dec 13 08:51:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.131 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[54b47e68-9340-4160-a56e-770c6f55ac73]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838370, 'reachable_time': 25825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357301, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:11 compute-0 systemd[1]: run-netns-ovnmeta\x2dabf04c22\x2d5ac7\x2d46ee\x2dbfad\x2d53f95095fba3.mount: Deactivated successfully.
Dec 13 08:51:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.135 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:51:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.135 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a92c3e-180d-450e-932d-588c3b165192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.189 248514 INFO nova.compute.manager [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Took 1.35 seconds to destroy the instance on the hypervisor.
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.190 248514 DEBUG oslo.service.loopingcall [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.191 248514 DEBUG nova.compute.manager [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.191 248514 DEBUG nova.network.neutron [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.811 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.812 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.849 248514 DEBUG nova.network.neutron [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updated VIF entry in instance network info cache for port 8d84f494-97c1-4708-b8df-444e42f55484. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.850 248514 DEBUG nova.network.neutron [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updating instance_info_cache with network_info: [{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:51:11 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:51:11 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:51:11 compute-0 nova_compute[248510]: 2025-12-13 08:51:11.873 248514 DEBUG oslo_concurrency.lockutils [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:51:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 121 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 57 KiB/s wr, 40 op/s
Dec 13 08:51:12 compute-0 nova_compute[248510]: 2025-12-13 08:51:12.243 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:12 compute-0 nova_compute[248510]: 2025-12-13 08:51:12.254 248514 DEBUG nova.network.neutron [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:51:12 compute-0 nova_compute[248510]: 2025-12-13 08:51:12.277 248514 INFO nova.compute.manager [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Took 1.09 seconds to deallocate network for instance.
Dec 13 08:51:12 compute-0 nova_compute[248510]: 2025-12-13 08:51:12.310 248514 DEBUG nova.compute.manager [req-eeed2041-55e4-4392-9cdd-ae764484f00b req-d8f7805f-d19c-4138-b0dc-ce975cf9bbf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-vif-deleted-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:51:12 compute-0 nova_compute[248510]: 2025-12-13 08:51:12.366 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:12 compute-0 nova_compute[248510]: 2025-12-13 08:51:12.366 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:51:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490537596' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:51:12 compute-0 nova_compute[248510]: 2025-12-13 08:51:12.398 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:51:12 compute-0 nova_compute[248510]: 2025-12-13 08:51:12.593 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:51:12 compute-0 nova_compute[248510]: 2025-12-13 08:51:12.595 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3712MB free_disk=59.94190831575543GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:51:12 compute-0 nova_compute[248510]: 2025-12-13 08:51:12.596 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:12 compute-0 nova_compute[248510]: 2025-12-13 08:51:12.614 248514 DEBUG oslo_concurrency.processutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:51:12 compute-0 ceph-mon[76537]: pgmap v2685: 321 pgs: 321 active+clean; 121 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 57 KiB/s wr, 40 op/s
Dec 13 08:51:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1490537596' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:51:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:51:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684781816' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.192 248514 DEBUG oslo_concurrency.processutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.197 248514 DEBUG nova.compute.manager [req-b469ab80-e09b-42aa-9f54-47495b830c93 req-a0a2fc58-0c46-485c-8767-4d2b028d7b41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.198 248514 DEBUG oslo_concurrency.lockutils [req-b469ab80-e09b-42aa-9f54-47495b830c93 req-a0a2fc58-0c46-485c-8767-4d2b028d7b41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.198 248514 DEBUG oslo_concurrency.lockutils [req-b469ab80-e09b-42aa-9f54-47495b830c93 req-a0a2fc58-0c46-485c-8767-4d2b028d7b41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.199 248514 DEBUG oslo_concurrency.lockutils [req-b469ab80-e09b-42aa-9f54-47495b830c93 req-a0a2fc58-0c46-485c-8767-4d2b028d7b41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.199 248514 DEBUG nova.compute.manager [req-b469ab80-e09b-42aa-9f54-47495b830c93 req-a0a2fc58-0c46-485c-8767-4d2b028d7b41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] No waiting events found dispatching network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.200 248514 WARNING nova.compute.manager [req-b469ab80-e09b-42aa-9f54-47495b830c93 req-a0a2fc58-0c46-485c-8767-4d2b028d7b41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received unexpected event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 for instance with vm_state deleted and task_state None.
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.206 248514 DEBUG nova.compute.provider_tree [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.228 248514 DEBUG nova.scheduler.client.report [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:51:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:13.262 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.306 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.309 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.345 248514 INFO nova.scheduler.client.report [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 8d919892-73fd-4a11-be79-2a1a9280a987
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.394 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.395 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.416 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:51:13 compute-0 nova_compute[248510]: 2025-12-13 08:51:13.458 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2684781816' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:51:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:51:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/987824012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:51:14 compute-0 nova_compute[248510]: 2025-12-13 08:51:14.012 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:51:14 compute-0 nova_compute[248510]: 2025-12-13 08:51:14.018 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:51:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 59 KiB/s wr, 68 op/s
Dec 13 08:51:14 compute-0 nova_compute[248510]: 2025-12-13 08:51:14.222 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:51:14 compute-0 nova_compute[248510]: 2025-12-13 08:51:14.251 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:51:14 compute-0 nova_compute[248510]: 2025-12-13 08:51:14.252 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/987824012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:51:14 compute-0 ceph-mon[76537]: pgmap v2686: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 59 KiB/s wr, 68 op/s
Dec 13 08:51:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:51:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/341224126' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:51:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:51:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/341224126' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:51:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:51:15 compute-0 nova_compute[248510]: 2025-12-13 08:51:15.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/341224126' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:51:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/341224126' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:51:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Dec 13 08:51:16 compute-0 nova_compute[248510]: 2025-12-13 08:51:16.254 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:51:16 compute-0 nova_compute[248510]: 2025-12-13 08:51:16.254 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:51:16 compute-0 ceph-mon[76537]: pgmap v2687: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Dec 13 08:51:17 compute-0 nova_compute[248510]: 2025-12-13 08:51:17.245 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Dec 13 08:51:18 compute-0 nova_compute[248510]: 2025-12-13 08:51:18.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:51:19 compute-0 ceph-mon[76537]: pgmap v2688: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Dec 13 08:51:19 compute-0 nova_compute[248510]: 2025-12-13 08:51:19.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:19 compute-0 nova_compute[248510]: 2025-12-13 08:51:19.527 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615864.527167, 67dc02f2-2430-4cc6-a575-bdfd238a5460 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:51:19 compute-0 nova_compute[248510]: 2025-12-13 08:51:19.528 248514 INFO nova.compute.manager [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] VM Stopped (Lifecycle Event)
Dec 13 08:51:19 compute-0 nova_compute[248510]: 2025-12-13 08:51:19.560 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:19 compute-0 nova_compute[248510]: 2025-12-13 08:51:19.562 248514 DEBUG nova.compute.manager [None req-bae99fae-eecb-4046-90e2-3168abe9b9ee - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:51:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.5 KiB/s wr, 42 op/s
Dec 13 08:51:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:51:20 compute-0 ceph-mon[76537]: pgmap v2689: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.5 KiB/s wr, 42 op/s
Dec 13 08:51:20 compute-0 nova_compute[248510]: 2025-12-13 08:51:20.524 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.3073215557897495e-05 of space, bias 1.0, pg target 0.003921964667369248 quantized to 32 (current 32)
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693612553454349 of space, bias 1.0, pg target 0.2008083766036305 quantized to 32 (current 32)
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.740707038769602e-07 of space, bias 4.0, pg target 0.0006888848446523523 quantized to 16 (current 32)
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:51:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:51:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Dec 13 08:51:22 compute-0 nova_compute[248510]: 2025-12-13 08:51:22.289 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:23 compute-0 ceph-mon[76537]: pgmap v2690: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Dec 13 08:51:23 compute-0 podman[357371]: 2025-12-13 08:51:23.962799542 +0000 UTC m=+0.053642776 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:51:23 compute-0 podman[357372]: 2025-12-13 08:51:23.98466101 +0000 UTC m=+0.062055707 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 08:51:23 compute-0 podman[357370]: 2025-12-13 08:51:23.984687421 +0000 UTC m=+0.072021717 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:51:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Dec 13 08:51:24 compute-0 ceph-mon[76537]: pgmap v2691: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Dec 13 08:51:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:51:25 compute-0 nova_compute[248510]: 2025-12-13 08:51:25.485 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615870.4848216, 8d919892-73fd-4a11-be79-2a1a9280a987 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:51:25 compute-0 nova_compute[248510]: 2025-12-13 08:51:25.486 248514 INFO nova.compute.manager [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] VM Stopped (Lifecycle Event)
Dec 13 08:51:25 compute-0 nova_compute[248510]: 2025-12-13 08:51:25.524 248514 DEBUG nova.compute.manager [None req-fba8d9c3-2733-4074-bd94-914db2bc6961 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:51:25 compute-0 nova_compute[248510]: 2025-12-13 08:51:25.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:27 compute-0 nova_compute[248510]: 2025-12-13 08:51:27.291 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:27 compute-0 ceph-mon[76537]: pgmap v2692: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:28 compute-0 ceph-mon[76537]: pgmap v2693: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:51:30 compute-0 nova_compute[248510]: 2025-12-13 08:51:30.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:31 compute-0 ceph-mon[76537]: pgmap v2694: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:32 compute-0 nova_compute[248510]: 2025-12-13 08:51:32.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:33 compute-0 ceph-mon[76537]: pgmap v2695: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:51:35 compute-0 ceph-mon[76537]: pgmap v2696: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:35 compute-0 nova_compute[248510]: 2025-12-13 08:51:35.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:37 compute-0 ceph-mon[76537]: pgmap v2697: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:37 compute-0 nova_compute[248510]: 2025-12-13 08:51:37.336 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:39 compute-0 ceph-mon[76537]: pgmap v2698: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:51:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:51:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:51:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:51:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:51:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:51:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:51:40 compute-0 nova_compute[248510]: 2025-12-13 08:51:40.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:41 compute-0 ceph-mon[76537]: pgmap v2699: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:41 compute-0 nova_compute[248510]: 2025-12-13 08:51:41.935 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:41 compute-0 nova_compute[248510]: 2025-12-13 08:51:41.935 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:41 compute-0 nova_compute[248510]: 2025-12-13 08:51:41.969 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:51:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:42 compute-0 nova_compute[248510]: 2025-12-13 08:51:42.223 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:42 compute-0 nova_compute[248510]: 2025-12-13 08:51:42.224 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:42 compute-0 nova_compute[248510]: 2025-12-13 08:51:42.233 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:51:42 compute-0 nova_compute[248510]: 2025-12-13 08:51:42.233 248514 INFO nova.compute.claims [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:51:42 compute-0 nova_compute[248510]: 2025-12-13 08:51:42.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:42 compute-0 nova_compute[248510]: 2025-12-13 08:51:42.372 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:51:42 compute-0 ceph-mon[76537]: pgmap v2700: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:51:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2123659522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:51:43 compute-0 nova_compute[248510]: 2025-12-13 08:51:43.075 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.703s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:51:43 compute-0 nova_compute[248510]: 2025-12-13 08:51:43.082 248514 DEBUG nova.compute.provider_tree [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:51:43 compute-0 nova_compute[248510]: 2025-12-13 08:51:43.109 248514 DEBUG nova.scheduler.client.report [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:51:43 compute-0 nova_compute[248510]: 2025-12-13 08:51:43.438 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:43 compute-0 nova_compute[248510]: 2025-12-13 08:51:43.438 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:51:43 compute-0 nova_compute[248510]: 2025-12-13 08:51:43.512 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:51:43 compute-0 nova_compute[248510]: 2025-12-13 08:51:43.512 248514 DEBUG nova.network.neutron [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:51:43 compute-0 nova_compute[248510]: 2025-12-13 08:51:43.542 248514 INFO nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:51:43 compute-0 nova_compute[248510]: 2025-12-13 08:51:43.566 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.066 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:51:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2123659522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.068 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.069 248514 INFO nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Creating image(s)
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.111 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:51:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.136 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.157 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.160 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.203 248514 DEBUG nova.policy [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.243 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.244 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.245 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.245 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.265 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.269 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.581 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.666 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.762 248514 DEBUG nova.objects.instance [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 32f46650-28ec-40b8-8cbb-afb8a34cda45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.780 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.780 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Ensure instance console log exists: /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.781 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.781 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:44 compute-0 nova_compute[248510]: 2025-12-13 08:51:44.781 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:45 compute-0 ceph-mon[76537]: pgmap v2701: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:51:45 compute-0 nova_compute[248510]: 2025-12-13 08:51:45.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2702: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:46 compute-0 ceph-mon[76537]: pgmap v2702: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:51:47 compute-0 nova_compute[248510]: 2025-12-13 08:51:47.067 248514 DEBUG nova.network.neutron [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Successfully created port: 4b6e89d0-3078-468e-ad37-fa04ed14c96a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:51:47 compute-0 nova_compute[248510]: 2025-12-13 08:51:47.339 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 53 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s wr, 0 op/s
Dec 13 08:51:48 compute-0 nova_compute[248510]: 2025-12-13 08:51:48.727 248514 DEBUG nova.network.neutron [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Successfully updated port: 4b6e89d0-3078-468e-ad37-fa04ed14c96a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:51:48 compute-0 nova_compute[248510]: 2025-12-13 08:51:48.745 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:51:48 compute-0 nova_compute[248510]: 2025-12-13 08:51:48.746 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:51:48 compute-0 nova_compute[248510]: 2025-12-13 08:51:48.746 248514 DEBUG nova.network.neutron [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:51:48 compute-0 nova_compute[248510]: 2025-12-13 08:51:48.988 248514 DEBUG nova.compute.manager [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-changed-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:51:48 compute-0 nova_compute[248510]: 2025-12-13 08:51:48.988 248514 DEBUG nova.compute.manager [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Refreshing instance network info cache due to event network-changed-4b6e89d0-3078-468e-ad37-fa04ed14c96a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:51:48 compute-0 nova_compute[248510]: 2025-12-13 08:51:48.989 248514 DEBUG oslo_concurrency.lockutils [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:51:49 compute-0 ceph-mon[76537]: pgmap v2703: 321 pgs: 321 active+clean; 53 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s wr, 0 op/s
Dec 13 08:51:50 compute-0 nova_compute[248510]: 2025-12-13 08:51:50.040 248514 DEBUG nova.network.neutron [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:51:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:51:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:51:50 compute-0 nova_compute[248510]: 2025-12-13 08:51:50.538 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:51 compute-0 ceph-mon[76537]: pgmap v2704: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:51:51 compute-0 sshd-session[357620]: Invalid user sol from 193.32.162.146 port 50146
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.763 248514 DEBUG nova.network.neutron [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updating instance_info_cache with network_info: [{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.845 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.845 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Instance network_info: |[{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.846 248514 DEBUG oslo_concurrency.lockutils [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.846 248514 DEBUG nova.network.neutron [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Refreshing network info cache for port 4b6e89d0-3078-468e-ad37-fa04ed14c96a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.848 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Start _get_guest_xml network_info=[{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:51:51 compute-0 sshd-session[357620]: Connection closed by invalid user sol 193.32.162.146 port 50146 [preauth]
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.862 248514 WARNING nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.882 248514 DEBUG nova.virt.libvirt.host [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.884 248514 DEBUG nova.virt.libvirt.host [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.890 248514 DEBUG nova.virt.libvirt.host [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.890 248514 DEBUG nova.virt.libvirt.host [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.890 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.891 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.891 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.891 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.891 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.892 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.892 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.892 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.892 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.893 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.893 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.893 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:51:51 compute-0 nova_compute[248510]: 2025-12-13 08:51:51.896 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:51:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:51:52 compute-0 nova_compute[248510]: 2025-12-13 08:51:52.381 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:51:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2994822390' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:51:52 compute-0 ceph-mon[76537]: pgmap v2705: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:51:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2994822390' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:51:52 compute-0 nova_compute[248510]: 2025-12-13 08:51:52.514 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:51:52 compute-0 nova_compute[248510]: 2025-12-13 08:51:52.535 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:51:52 compute-0 nova_compute[248510]: 2025-12-13 08:51:52.540 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:51:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:51:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3645111657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.130 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.132 248514 DEBUG nova.virt.libvirt.vif [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:51:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=110,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1EAPswQkN6gGnzWb6nWTEPMlUNbetceQGhufBgelanH3kUDSBVad+EWLTxUJKeHTg22cPL3Ixvag9/dm2M/FjTcKf+ix54cOXq9k631rEiL8V03+5FUWKZfsVC6yBvBw==',key_name='tempest-TestSecurityGroupsBasicOps-451827369',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-lhganq1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:51:43Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=32f46650-28ec-40b8-8cbb-afb8a34cda45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.132 248514 DEBUG nova.network.os_vif_util [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.133 248514 DEBUG nova.network.os_vif_util [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.135 248514 DEBUG nova.objects.instance [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 32f46650-28ec-40b8-8cbb-afb8a34cda45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.154 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:51:53 compute-0 nova_compute[248510]:   <uuid>32f46650-28ec-40b8-8cbb-afb8a34cda45</uuid>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   <name>instance-0000006e</name>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557</nova:name>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:51:51</nova:creationTime>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:51:53 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:51:53 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:51:53 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:51:53 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:51:53 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:51:53 compute-0 nova_compute[248510]:         <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 08:51:53 compute-0 nova_compute[248510]:         <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:51:53 compute-0 nova_compute[248510]:         <nova:port uuid="4b6e89d0-3078-468e-ad37-fa04ed14c96a">
Dec 13 08:51:53 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <system>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <entry name="serial">32f46650-28ec-40b8-8cbb-afb8a34cda45</entry>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <entry name="uuid">32f46650-28ec-40b8-8cbb-afb8a34cda45</entry>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     </system>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   <os>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   </os>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   <features>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   </features>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/32f46650-28ec-40b8-8cbb-afb8a34cda45_disk">
Dec 13 08:51:53 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       </source>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:51:53 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/32f46650-28ec-40b8-8cbb-afb8a34cda45_disk.config">
Dec 13 08:51:53 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       </source>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:51:53 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:c0:e1:26"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <target dev="tap4b6e89d0-30"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/console.log" append="off"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <video>
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     </video>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:51:53 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:51:53 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:51:53 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:51:53 compute-0 nova_compute[248510]: </domain>
Dec 13 08:51:53 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.155 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Preparing to wait for external event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.156 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.156 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.156 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.157 248514 DEBUG nova.virt.libvirt.vif [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:51:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=110,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1EAPswQkN6gGnzWb6nWTEPMlUNbetceQGhufBgelanH3kUDSBVad+EWLTxUJKeHTg22cPL3Ixvag9/dm2M/FjTcKf+ix54cOXq9k631rEiL8V03+5FUWKZfsVC6yBvBw==',key_name='tempest-TestSecurityGroupsBasicOps-451827369',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-lhganq1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:51:43Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=32f46650-28ec-40b8-8cbb-afb8a34cda45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.157 248514 DEBUG nova.network.os_vif_util [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.158 248514 DEBUG nova.network.os_vif_util [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.158 248514 DEBUG os_vif [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.159 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.159 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.160 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.163 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.163 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b6e89d0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.164 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b6e89d0-30, col_values=(('external_ids', {'iface-id': '4b6e89d0-3078-468e-ad37-fa04ed14c96a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:e1:26', 'vm-uuid': '32f46650-28ec-40b8-8cbb-afb8a34cda45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.165 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:53 compute-0 NetworkManager[50376]: <info>  [1765615913.1679] manager: (tap4b6e89d0-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.168 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.175 248514 INFO os_vif [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30')
Dec 13 08:51:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3645111657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.577 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.578 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.578 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:c0:e1:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.578 248514 INFO nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Using config drive
Dec 13 08:51:53 compute-0 nova_compute[248510]: 2025-12-13 08:51:53.597 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:51:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:51:54 compute-0 nova_compute[248510]: 2025-12-13 08:51:54.958 248514 INFO nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Creating config drive at /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/disk.config
Dec 13 08:51:54 compute-0 nova_compute[248510]: 2025-12-13 08:51:54.963 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp44ywgd7f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:51:54 compute-0 podman[357705]: 2025-12-13 08:51:54.977219435 +0000 UTC m=+0.065335389 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:51:54 compute-0 podman[357706]: 2025-12-13 08:51:54.99098498 +0000 UTC m=+0.077914354 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:51:54 compute-0 podman[357704]: 2025-12-13 08:51:54.997237407 +0000 UTC m=+0.085303880 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:51:55 compute-0 ceph-mon[76537]: pgmap v2706: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:51:55 compute-0 nova_compute[248510]: 2025-12-13 08:51:55.108 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp44ywgd7f" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:51:55 compute-0 nova_compute[248510]: 2025-12-13 08:51:55.133 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:51:55 compute-0 nova_compute[248510]: 2025-12-13 08:51:55.138 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/disk.config 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:51:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:51:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:55.430 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:55 compute-0 nova_compute[248510]: 2025-12-13 08:51:55.472 248514 DEBUG nova.network.neutron [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updated VIF entry in instance network info cache for port 4b6e89d0-3078-468e-ad37-fa04ed14c96a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:51:55 compute-0 nova_compute[248510]: 2025-12-13 08:51:55.473 248514 DEBUG nova.network.neutron [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updating instance_info_cache with network_info: [{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:51:55 compute-0 nova_compute[248510]: 2025-12-13 08:51:55.529 248514 DEBUG oslo_concurrency.lockutils [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:51:55 compute-0 nova_compute[248510]: 2025-12-13 08:51:55.913 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/disk.config 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.776s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:51:55 compute-0 nova_compute[248510]: 2025-12-13 08:51:55.914 248514 INFO nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Deleting local config drive /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/disk.config because it was imported into RBD.
Dec 13 08:51:55 compute-0 kernel: tap4b6e89d0-30: entered promiscuous mode
Dec 13 08:51:55 compute-0 NetworkManager[50376]: <info>  [1765615915.9889] manager: (tap4b6e89d0-30): new Tun device (/org/freedesktop/NetworkManager/Devices/440)
Dec 13 08:51:56 compute-0 ovn_controller[148476]: 2025-12-13T08:51:56Z|01064|binding|INFO|Claiming lport 4b6e89d0-3078-468e-ad37-fa04ed14c96a for this chassis.
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.008 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:56 compute-0 ovn_controller[148476]: 2025-12-13T08:51:56Z|01065|binding|INFO|4b6e89d0-3078-468e-ad37-fa04ed14c96a: Claiming fa:16:3e:c0:e1:26 10.100.0.9
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.014 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:56 compute-0 systemd-udevd[357819]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:51:56 compute-0 NetworkManager[50376]: <info>  [1765615916.0381] device (tap4b6e89d0-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:51:56 compute-0 NetworkManager[50376]: <info>  [1765615916.0391] device (tap4b6e89d0-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:51:56 compute-0 ovn_controller[148476]: 2025-12-13T08:51:56Z|01066|binding|INFO|Setting lport 4b6e89d0-3078-468e-ad37-fa04ed14c96a ovn-installed in OVS
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.082 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:56 compute-0 systemd-machined[210538]: New machine qemu-136-instance-0000006e.
Dec 13 08:51:56 compute-0 systemd[1]: Started Virtual Machine qemu-136-instance-0000006e.
Dec 13 08:51:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.178 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:e1:26 10.100.0.9'], port_security=['fa:16:3e:c0:e1:26 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '32f46650-28ec-40b8-8cbb-afb8a34cda45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c6d054d-d66c-470b-bd1c-b4cabcf90c1c a55bf908-21fd-47cc-b7fd-f8685207b408', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=369f5416-159a-497c-b005-e2677c61c320, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4b6e89d0-3078-468e-ad37-fa04ed14c96a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:51:56 compute-0 ovn_controller[148476]: 2025-12-13T08:51:56Z|01067|binding|INFO|Setting lport 4b6e89d0-3078-468e-ad37-fa04ed14c96a up in Southbound
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.181 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4b6e89d0-3078-468e-ad37-fa04ed14c96a in datapath e742c2df-df4d-48f6-8153-8353ec98fe5c bound to our chassis
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.183 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e742c2df-df4d-48f6-8153-8353ec98fe5c
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.203 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2148b79a-2ad5-4622-b861-c148b6bd9a65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.205 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape742c2df-d1 in ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.208 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape742c2df-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.208 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6927761-d9b0-4127-ba90-ea5a80902b32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.210 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef29aa7a-dcef-4a83-b142-f5e432185dc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.220 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4170b7da-8b17-458f-9924-97da400471cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.239 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39ccc8c3-b25a-4050-8c55-8c72b023a7d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.270 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e5f8ee-1b80-4475-96b0-54cd092eb01b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 NetworkManager[50376]: <info>  [1765615916.2797] manager: (tape742c2df-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/441)
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.281 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c420707-d6b7-47c8-98a1-4815e2c10fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.320 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eadc1e36-415d-4cdb-9c61-eb823be30a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.327 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ca432ee4-db57-4a31-949b-2f08ebefa3dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 NetworkManager[50376]: <info>  [1765615916.3603] device (tape742c2df-d0): carrier: link connected
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.367 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8e728c55-a17d-47dc-95d0-478919045b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.385 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e693c7-ead2-48a2-aea5-2c6fc494cb23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape742c2df-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:a4:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 849357, 'reachable_time': 27765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357855, 'error': None, 'target': 'ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.410 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b0b32d-d397-46bc-a3c6-9e6b3f60d914]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:a4f3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 849357, 'tstamp': 849357}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357856, 'error': None, 'target': 'ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.432 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c194cc03-e56a-4858-9c36-ded6c6a49114]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape742c2df-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:a4:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 849357, 'reachable_time': 27765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357857, 'error': None, 'target': 'ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.480 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[268bdc71-a7b3-4122-8f2c-07ba69c8ded3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.548 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6df07c28-5e5b-4e05-8cc4-6bb6faca93e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.550 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape742c2df-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.550 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.551 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape742c2df-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:56 compute-0 kernel: tape742c2df-d0: entered promiscuous mode
Dec 13 08:51:56 compute-0 NetworkManager[50376]: <info>  [1765615916.5530] manager: (tape742c2df-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.555 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape742c2df-d0, col_values=(('external_ids', {'iface-id': 'a0154a44-7c71-4193-bf4a-ea45abd45ba7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:51:56 compute-0 ovn_controller[148476]: 2025-12-13T08:51:56Z|01068|binding|INFO|Releasing lport a0154a44-7c71-4193-bf4a-ea45abd45ba7 from this chassis (sb_readonly=0)
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.572 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e742c2df-df4d-48f6-8153-8353ec98fe5c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e742c2df-df4d-48f6-8153-8353ec98fe5c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e6cd18db-27f5-44bf-81df-b54562a52169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.574 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-e742c2df-df4d-48f6-8153-8353ec98fe5c
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/e742c2df-df4d-48f6-8153-8353ec98fe5c.pid.haproxy
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID e742c2df-df4d-48f6-8153-8353ec98fe5c
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:51:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.575 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'env', 'PROCESS_TAG=haproxy-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e742c2df-df4d-48f6-8153-8353ec98fe5c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.742 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615916.7417943, 32f46650-28ec-40b8-8cbb-afb8a34cda45 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.743 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] VM Started (Lifecycle Event)
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.780 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.785 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615916.7431455, 32f46650-28ec-40b8-8cbb-afb8a34cda45 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.786 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] VM Paused (Lifecycle Event)
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.820 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.824 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:51:56 compute-0 nova_compute[248510]: 2025-12-13 08:51:56.906 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:51:56 compute-0 podman[357931]: 2025-12-13 08:51:56.981965699 +0000 UTC m=+0.072980371 container create ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 08:51:57 compute-0 systemd[1]: Started libpod-conmon-ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d.scope.
Dec 13 08:51:57 compute-0 podman[357931]: 2025-12-13 08:51:56.932473528 +0000 UTC m=+0.023488230 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:51:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82de2f5cdb66864aa371b69f7e8060f09c82097adfafe67c8d328dd59b19987/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:51:57 compute-0 podman[357931]: 2025-12-13 08:51:57.079248948 +0000 UTC m=+0.170263640 container init ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 08:51:57 compute-0 podman[357931]: 2025-12-13 08:51:57.084450358 +0000 UTC m=+0.175465030 container start ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:51:57 compute-0 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [NOTICE]   (357952) : New worker (357954) forked
Dec 13 08:51:57 compute-0 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [NOTICE]   (357952) : Loading success.
Dec 13 08:51:57 compute-0 ceph-mon[76537]: pgmap v2707: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:51:57 compute-0 nova_compute[248510]: 2025-12-13 08:51:57.384 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.585 248514 DEBUG nova.compute.manager [req-d39c2ee5-f484-472c-aa0d-938d06e18bb7 req-6f3cbbbf-434d-4c8b-9d05-b17189f71dbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.586 248514 DEBUG oslo_concurrency.lockutils [req-d39c2ee5-f484-472c-aa0d-938d06e18bb7 req-6f3cbbbf-434d-4c8b-9d05-b17189f71dbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.586 248514 DEBUG oslo_concurrency.lockutils [req-d39c2ee5-f484-472c-aa0d-938d06e18bb7 req-6f3cbbbf-434d-4c8b-9d05-b17189f71dbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.586 248514 DEBUG oslo_concurrency.lockutils [req-d39c2ee5-f484-472c-aa0d-938d06e18bb7 req-6f3cbbbf-434d-4c8b-9d05-b17189f71dbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.586 248514 DEBUG nova.compute.manager [req-d39c2ee5-f484-472c-aa0d-938d06e18bb7 req-6f3cbbbf-434d-4c8b-9d05-b17189f71dbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Processing event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.587 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.594 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615918.5937185, 32f46650-28ec-40b8-8cbb-afb8a34cda45 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.594 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] VM Resumed (Lifecycle Event)
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.596 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.600 248514 INFO nova.virt.libvirt.driver [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Instance spawned successfully.
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.601 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.620 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.627 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.632 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.632 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.633 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.633 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.634 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.634 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.750 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.840 248514 INFO nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Took 14.77 seconds to spawn the instance on the hypervisor.
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.840 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:51:58 compute-0 nova_compute[248510]: 2025-12-13 08:51:58.955 248514 INFO nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Took 16.78 seconds to build instance.
Dec 13 08:51:59 compute-0 nova_compute[248510]: 2025-12-13 08:51:59.015 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:51:59 compute-0 ceph-mon[76537]: pgmap v2708: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:51:59 compute-0 nova_compute[248510]: 2025-12-13 08:51:59.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:52:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 MiB/s wr, 36 op/s
Dec 13 08:52:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:52:01 compute-0 nova_compute[248510]: 2025-12-13 08:52:01.083 248514 DEBUG nova.compute.manager [req-d8dd8e1c-b362-40e1-a536-a195e2fa7f7d req-81c3fe37-e633-4823-b65c-a97544c6b74b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:52:01 compute-0 nova_compute[248510]: 2025-12-13 08:52:01.083 248514 DEBUG oslo_concurrency.lockutils [req-d8dd8e1c-b362-40e1-a536-a195e2fa7f7d req-81c3fe37-e633-4823-b65c-a97544c6b74b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:01 compute-0 nova_compute[248510]: 2025-12-13 08:52:01.083 248514 DEBUG oslo_concurrency.lockutils [req-d8dd8e1c-b362-40e1-a536-a195e2fa7f7d req-81c3fe37-e633-4823-b65c-a97544c6b74b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:01 compute-0 nova_compute[248510]: 2025-12-13 08:52:01.084 248514 DEBUG oslo_concurrency.lockutils [req-d8dd8e1c-b362-40e1-a536-a195e2fa7f7d req-81c3fe37-e633-4823-b65c-a97544c6b74b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:01 compute-0 nova_compute[248510]: 2025-12-13 08:52:01.084 248514 DEBUG nova.compute.manager [req-d8dd8e1c-b362-40e1-a536-a195e2fa7f7d req-81c3fe37-e633-4823-b65c-a97544c6b74b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] No waiting events found dispatching network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:52:01 compute-0 nova_compute[248510]: 2025-12-13 08:52:01.084 248514 WARNING nova.compute.manager [req-d8dd8e1c-b362-40e1-a536-a195e2fa7f7d req-81c3fe37-e633-4823-b65c-a97544c6b74b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received unexpected event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a for instance with vm_state active and task_state None.
Dec 13 08:52:01 compute-0 ceph-mon[76537]: pgmap v2709: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 MiB/s wr, 36 op/s
Dec 13 08:52:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Dec 13 08:52:02 compute-0 nova_compute[248510]: 2025-12-13 08:52:02.386 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:03 compute-0 nova_compute[248510]: 2025-12-13 08:52:03.168 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:03 compute-0 ceph-mon[76537]: pgmap v2710: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Dec 13 08:52:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:52:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 34K writes, 136K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 34K writes, 12K syncs, 2.83 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4214 writes, 14K keys, 4214 commit groups, 1.0 writes per commit group, ingest: 14.64 MB, 0.02 MB/s
                                           Interval WAL: 4215 writes, 1764 syncs, 2.39 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 08:52:03 compute-0 nova_compute[248510]: 2025-12-13 08:52:03.982 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:03 compute-0 NetworkManager[50376]: <info>  [1765615923.9846] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Dec 13 08:52:03 compute-0 NetworkManager[50376]: <info>  [1765615923.9854] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Dec 13 08:52:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:52:04 compute-0 ovn_controller[148476]: 2025-12-13T08:52:04Z|01069|binding|INFO|Releasing lport a0154a44-7c71-4193-bf4a-ea45abd45ba7 from this chassis (sb_readonly=0)
Dec 13 08:52:04 compute-0 ovn_controller[148476]: 2025-12-13T08:52:04Z|01070|binding|INFO|Releasing lport a0154a44-7c71-4193-bf4a-ea45abd45ba7 from this chassis (sb_readonly=0)
Dec 13 08:52:04 compute-0 nova_compute[248510]: 2025-12-13 08:52:04.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:04 compute-0 ceph-mon[76537]: pgmap v2711: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:52:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:52:05 compute-0 nova_compute[248510]: 2025-12-13 08:52:05.716 248514 DEBUG nova.compute.manager [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-changed-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:52:05 compute-0 nova_compute[248510]: 2025-12-13 08:52:05.717 248514 DEBUG nova.compute.manager [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Refreshing instance network info cache due to event network-changed-4b6e89d0-3078-468e-ad37-fa04ed14c96a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:52:05 compute-0 nova_compute[248510]: 2025-12-13 08:52:05.718 248514 DEBUG oslo_concurrency.lockutils [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:52:05 compute-0 nova_compute[248510]: 2025-12-13 08:52:05.718 248514 DEBUG oslo_concurrency.lockutils [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:52:05 compute-0 nova_compute[248510]: 2025-12-13 08:52:05.718 248514 DEBUG nova.network.neutron [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Refreshing network info cache for port 4b6e89d0-3078-468e-ad37-fa04ed14c96a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:52:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2712: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:52:07 compute-0 nova_compute[248510]: 2025-12-13 08:52:07.388 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:07 compute-0 ceph-mon[76537]: pgmap v2712: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:52:07 compute-0 nova_compute[248510]: 2025-12-13 08:52:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:52:07 compute-0 nova_compute[248510]: 2025-12-13 08:52:07.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:52:07 compute-0 nova_compute[248510]: 2025-12-13 08:52:07.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:52:08 compute-0 nova_compute[248510]: 2025-12-13 08:52:08.088 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:52:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:52:08 compute-0 nova_compute[248510]: 2025-12-13 08:52:08.132 248514 DEBUG nova.network.neutron [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updated VIF entry in instance network info cache for port 4b6e89d0-3078-468e-ad37-fa04ed14c96a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:52:08 compute-0 nova_compute[248510]: 2025-12-13 08:52:08.136 248514 DEBUG nova.network.neutron [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updating instance_info_cache with network_info: [{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:52:08 compute-0 nova_compute[248510]: 2025-12-13 08:52:08.164 248514 DEBUG oslo_concurrency.lockutils [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:52:08 compute-0 nova_compute[248510]: 2025-12-13 08:52:08.165 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:52:08 compute-0 nova_compute[248510]: 2025-12-13 08:52:08.166 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:52:08 compute-0 nova_compute[248510]: 2025-12-13 08:52:08.166 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 32f46650-28ec-40b8-8cbb-afb8a34cda45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:52:08 compute-0 nova_compute[248510]: 2025-12-13 08:52:08.170 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:08 compute-0 ceph-mon[76537]: pgmap v2713: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:52:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:52:09
Dec 13 08:52:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:52:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:52:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['vms', 'images', '.rgw.root', 'volumes', 'default.rgw.control', '.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log']
Dec 13 08:52:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:52:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.399 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updating instance_info_cache with network_info: [{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.435 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.435 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.435 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.500 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.501 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.548 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.657 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.659 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.671 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.671 248514 INFO nova.compute.claims [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:52:10 compute-0 ceph-mon[76537]: pgmap v2714: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:52:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.778 248514 DEBUG nova.scheduler.client.report [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.807 248514 DEBUG nova.scheduler.client.report [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.807 248514 DEBUG nova.compute.provider_tree [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.828 248514 DEBUG nova.scheduler.client.report [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.875 248514 DEBUG nova.scheduler.client.report [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 08:52:10 compute-0 nova_compute[248510]: 2025-12-13 08:52:10.951 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:52:11 compute-0 sudo[357964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:52:11 compute-0 sudo[357964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:52:11 compute-0 sudo[357964]: pam_unix(sudo:session): session closed for user root
Dec 13 08:52:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:52:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4801.8 total, 600.0 interval
                                           Cumulative writes: 35K writes, 135K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.78 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3929 writes, 15K keys, 3929 commit groups, 1.0 writes per commit group, ingest: 14.64 MB, 0.02 MB/s
                                           Interval WAL: 3928 writes, 1587 syncs, 2.48 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 08:52:11 compute-0 sudo[357990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:52:11 compute-0 sudo[357990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:52:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:52:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/751258432' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.544 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.555 248514 DEBUG nova.compute.provider_tree [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.583 248514 DEBUG nova.scheduler.client.report [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.610 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.611 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.668 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.668 248514 DEBUG nova.network.neutron [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.701 248514 INFO nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.758 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:52:11 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/751258432' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:52:11 compute-0 sudo[357990]: pam_unix(sudo:session): session closed for user root
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.811 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.811 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:52:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:52:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:52:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:52:11 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:52:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:52:11 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:52:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:52:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:52:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:52:11 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:52:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:52:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.931 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.934 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.935 248514 INFO nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Creating image(s)
Dec 13 08:52:11 compute-0 nova_compute[248510]: 2025-12-13 08:52:11.967 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:52:11 compute-0 sudo[358069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:52:11 compute-0 sudo[358069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:52:11 compute-0 sudo[358069]: pam_unix(sudo:session): session closed for user root
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.017 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.053 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:52:12 compute-0 sudo[358138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:52:12 compute-0 sudo[358138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.060 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.112 248514 DEBUG nova.policy [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fa34623cd3de4a47aa57959f09b3ff79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff4d2c6ad4dc4848ac9f55ff1b9e829a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:52:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.164 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.164 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.165 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.165 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.189 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.196 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.390 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:12 compute-0 podman[358241]: 2025-12-13 08:52:12.424969016 +0000 UTC m=+0.098394848 container create b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 08:52:12 compute-0 podman[358241]: 2025-12-13 08:52:12.37208175 +0000 UTC m=+0.045507602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:52:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:52:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/117600970' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.493 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:52:12 compute-0 systemd[1]: Started libpod-conmon-b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a.scope.
Dec 13 08:52:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.594 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.595 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:52:12 compute-0 podman[358241]: 2025-12-13 08:52:12.657514617 +0000 UTC m=+0.330940479 container init b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:52:12 compute-0 podman[358241]: 2025-12-13 08:52:12.66803381 +0000 UTC m=+0.341459632 container start b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:52:12 compute-0 ecstatic_mirzakhani[358261]: 167 167
Dec 13 08:52:12 compute-0 systemd[1]: libpod-b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a.scope: Deactivated successfully.
Dec 13 08:52:12 compute-0 conmon[358261]: conmon b31401648f204b1477c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a.scope/container/memory.events
Dec 13 08:52:12 compute-0 podman[358241]: 2025-12-13 08:52:12.800684536 +0000 UTC m=+0.474110378 container attach b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 08:52:12 compute-0 podman[358241]: 2025-12-13 08:52:12.801381864 +0000 UTC m=+0.474807716 container died b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.837 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.838 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3481MB free_disk=59.96660049352795GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.838 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.839 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:52:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:52:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:52:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:52:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:52:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:52:12 compute-0 ceph-mon[76537]: pgmap v2715: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 13 08:52:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/117600970' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.922 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 32f46650-28ec-40b8-8cbb-afb8a34cda45 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.922 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance fcc617ec-f5f9-41bb-ad4b-86d790622e74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.923 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:52:12 compute-0 nova_compute[248510]: 2025-12-13 08:52:12.923 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:52:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:13.007 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:52:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:13.009 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.017 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:52:13 compute-0 ovn_controller[148476]: 2025-12-13T08:52:13Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:e1:26 10.100.0.9
Dec 13 08:52:13 compute-0 ovn_controller[148476]: 2025-12-13T08:52:13Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:e1:26 10.100.0.9
Dec 13 08:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-645e5c0440744b070078b618d95a01689c91062b7e3c914a4faebc4de817fe72-merged.mount: Deactivated successfully.
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.232 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.299 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.363 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] resizing rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:52:13 compute-0 podman[358241]: 2025-12-13 08:52:13.453449632 +0000 UTC m=+1.126875464 container remove b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:52:13 compute-0 systemd[1]: libpod-conmon-b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a.scope: Deactivated successfully.
Dec 13 08:52:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:52:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1982669787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.674 248514 DEBUG nova.objects.instance [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'migration_context' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.678 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.684 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.704 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.705 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Ensure instance console log exists: /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.707 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.708 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.708 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.710 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:52:13 compute-0 podman[358361]: 2025-12-13 08:52:13.735466613 +0000 UTC m=+0.112752378 container create 95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 08:52:13 compute-0 podman[358361]: 2025-12-13 08:52:13.649156389 +0000 UTC m=+0.026442184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.749 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:52:13 compute-0 nova_compute[248510]: 2025-12-13 08:52:13.750 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:13 compute-0 systemd[1]: Started libpod-conmon-95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa.scope.
Dec 13 08:52:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d76a08cb412284e11cb2b7f48daf4957038a58a2fe62ba91cf86a8b2574ce6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d76a08cb412284e11cb2b7f48daf4957038a58a2fe62ba91cf86a8b2574ce6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d76a08cb412284e11cb2b7f48daf4957038a58a2fe62ba91cf86a8b2574ce6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d76a08cb412284e11cb2b7f48daf4957038a58a2fe62ba91cf86a8b2574ce6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d76a08cb412284e11cb2b7f48daf4957038a58a2fe62ba91cf86a8b2574ce6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:13 compute-0 podman[358361]: 2025-12-13 08:52:13.972416594 +0000 UTC m=+0.349702449 container init 95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bouman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:52:13 compute-0 podman[358361]: 2025-12-13 08:52:13.980015294 +0000 UTC m=+0.357301059 container start 95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:52:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 159 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 130 op/s
Dec 13 08:52:14 compute-0 podman[358361]: 2025-12-13 08:52:14.216400231 +0000 UTC m=+0.593686016 container attach 95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:52:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1982669787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:52:14 compute-0 nova_compute[248510]: 2025-12-13 08:52:14.225 248514 DEBUG nova.network.neutron [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Successfully created port: b2143648-4c23-49b5-8777-433a5b34c7ce _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:52:14 compute-0 goofy_bouman[358398]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:52:14 compute-0 goofy_bouman[358398]: --> All data devices are unavailable
Dec 13 08:52:14 compute-0 systemd[1]: libpod-95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa.scope: Deactivated successfully.
Dec 13 08:52:14 compute-0 podman[358418]: 2025-12-13 08:52:14.54102641 +0000 UTC m=+0.026834913 container died 95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bouman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:52:14 compute-0 nova_compute[248510]: 2025-12-13 08:52:14.750 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:52:14 compute-0 nova_compute[248510]: 2025-12-13 08:52:14.751 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:52:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8d76a08cb412284e11cb2b7f48daf4957038a58a2fe62ba91cf86a8b2574ce6-merged.mount: Deactivated successfully.
Dec 13 08:52:14 compute-0 podman[358418]: 2025-12-13 08:52:14.942754053 +0000 UTC m=+0.428562506 container remove 95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:52:14 compute-0 systemd[1]: libpod-conmon-95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa.scope: Deactivated successfully.
Dec 13 08:52:14 compute-0 sudo[358138]: pam_unix(sudo:session): session closed for user root
Dec 13 08:52:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:15.011 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:52:15 compute-0 sudo[358433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:52:15 compute-0 sudo[358433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:52:15 compute-0 sudo[358433]: pam_unix(sudo:session): session closed for user root
Dec 13 08:52:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:52:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3748475953' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:52:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:52:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3748475953' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:52:15 compute-0 sudo[358458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:52:15 compute-0 sudo[358458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:52:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:52:15 compute-0 podman[358496]: 2025-12-13 08:52:15.442239236 +0000 UTC m=+0.023640813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:52:15 compute-0 ceph-mon[76537]: pgmap v2716: 321 pgs: 321 active+clean; 159 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 130 op/s
Dec 13 08:52:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3748475953' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:52:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3748475953' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:52:15 compute-0 podman[358496]: 2025-12-13 08:52:15.673834923 +0000 UTC m=+0.255236490 container create 2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:52:15 compute-0 nova_compute[248510]: 2025-12-13 08:52:15.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:52:15 compute-0 nova_compute[248510]: 2025-12-13 08:52:15.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:52:16 compute-0 systemd[1]: Started libpod-conmon-2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce.scope.
Dec 13 08:52:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:52:16 compute-0 podman[358496]: 2025-12-13 08:52:16.076510729 +0000 UTC m=+0.657912306 container init 2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 08:52:16 compute-0 podman[358496]: 2025-12-13 08:52:16.083593597 +0000 UTC m=+0.664995144 container start 2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 08:52:16 compute-0 admiring_burnell[358512]: 167 167
Dec 13 08:52:16 compute-0 systemd[1]: libpod-2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce.scope: Deactivated successfully.
Dec 13 08:52:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 159 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 245 KiB/s rd, 3.8 MiB/s wr, 66 op/s
Dec 13 08:52:16 compute-0 podman[358496]: 2025-12-13 08:52:16.146142495 +0000 UTC m=+0.727544052 container attach 2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 08:52:16 compute-0 podman[358496]: 2025-12-13 08:52:16.146594546 +0000 UTC m=+0.727996103 container died 2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 08:52:16 compute-0 nova_compute[248510]: 2025-12-13 08:52:16.200 248514 DEBUG nova.network.neutron [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Successfully updated port: b2143648-4c23-49b5-8777-433a5b34c7ce _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:52:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-998edd489d72cafcb72f30530f93fc419d01c24864bc4ffb05f21b6632771eca-merged.mount: Deactivated successfully.
Dec 13 08:52:16 compute-0 nova_compute[248510]: 2025-12-13 08:52:16.275 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:52:16 compute-0 nova_compute[248510]: 2025-12-13 08:52:16.276 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:52:16 compute-0 nova_compute[248510]: 2025-12-13 08:52:16.276 248514 DEBUG nova.network.neutron [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:52:16 compute-0 nova_compute[248510]: 2025-12-13 08:52:16.379 248514 DEBUG nova.compute.manager [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:52:16 compute-0 nova_compute[248510]: 2025-12-13 08:52:16.379 248514 DEBUG nova.compute.manager [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing instance network info cache due to event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:52:16 compute-0 nova_compute[248510]: 2025-12-13 08:52:16.379 248514 DEBUG oslo_concurrency.lockutils [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:52:16 compute-0 podman[358496]: 2025-12-13 08:52:16.44069135 +0000 UTC m=+1.022092897 container remove 2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:52:16 compute-0 systemd[1]: libpod-conmon-2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce.scope: Deactivated successfully.
Dec 13 08:52:16 compute-0 nova_compute[248510]: 2025-12-13 08:52:16.649 248514 DEBUG nova.network.neutron [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:52:16 compute-0 podman[358540]: 2025-12-13 08:52:16.622620462 +0000 UTC m=+0.026401663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:52:16 compute-0 podman[358540]: 2025-12-13 08:52:16.941716341 +0000 UTC m=+0.345497522 container create 15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:52:17 compute-0 ceph-mon[76537]: pgmap v2717: 321 pgs: 321 active+clean; 159 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 245 KiB/s rd, 3.8 MiB/s wr, 66 op/s
Dec 13 08:52:17 compute-0 systemd[1]: Started libpod-conmon-15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd.scope.
Dec 13 08:52:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31feae384e36765e6db510118cc359668ad9b32646beed1d92c5ae5ba48eae7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31feae384e36765e6db510118cc359668ad9b32646beed1d92c5ae5ba48eae7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31feae384e36765e6db510118cc359668ad9b32646beed1d92c5ae5ba48eae7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31feae384e36765e6db510118cc359668ad9b32646beed1d92c5ae5ba48eae7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:17 compute-0 podman[358540]: 2025-12-13 08:52:17.175174615 +0000 UTC m=+0.578955816 container init 15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feynman, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:52:17 compute-0 podman[358540]: 2025-12-13 08:52:17.185295898 +0000 UTC m=+0.589077079 container start 15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feynman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 08:52:17 compute-0 podman[358540]: 2025-12-13 08:52:17.202613313 +0000 UTC m=+0.606394524 container attach 15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 08:52:17 compute-0 nova_compute[248510]: 2025-12-13 08:52:17.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]: {
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:     "0": [
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:         {
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "devices": [
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "/dev/loop3"
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             ],
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_name": "ceph_lv0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_size": "21470642176",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "name": "ceph_lv0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "tags": {
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.cluster_name": "ceph",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.crush_device_class": "",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.encrypted": "0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.objectstore": "bluestore",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.osd_id": "0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.type": "block",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.vdo": "0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.with_tpm": "0"
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             },
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "type": "block",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "vg_name": "ceph_vg0"
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:         }
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:     ],
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:     "1": [
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:         {
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "devices": [
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "/dev/loop4"
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             ],
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_name": "ceph_lv1",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_size": "21470642176",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "name": "ceph_lv1",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "tags": {
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.cluster_name": "ceph",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.crush_device_class": "",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.encrypted": "0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.objectstore": "bluestore",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.osd_id": "1",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.type": "block",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.vdo": "0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.with_tpm": "0"
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             },
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "type": "block",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "vg_name": "ceph_vg1"
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:         }
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:     ],
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:     "2": [
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:         {
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "devices": [
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "/dev/loop5"
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             ],
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_name": "ceph_lv2",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_size": "21470642176",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "name": "ceph_lv2",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "tags": {
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.cluster_name": "ceph",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.crush_device_class": "",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.encrypted": "0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.objectstore": "bluestore",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.osd_id": "2",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.type": "block",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.vdo": "0",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:                 "ceph.with_tpm": "0"
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             },
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "type": "block",
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:             "vg_name": "ceph_vg2"
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:         }
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]:     ]
Dec 13 08:52:17 compute-0 relaxed_feynman[358557]: }
Dec 13 08:52:17 compute-0 systemd[1]: libpod-15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd.scope: Deactivated successfully.
Dec 13 08:52:17 compute-0 conmon[358557]: conmon 15095555ef4af562efe8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd.scope/container/memory.events
Dec 13 08:52:17 compute-0 podman[358540]: 2025-12-13 08:52:17.545703515 +0000 UTC m=+0.949484716 container died 15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:52:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-31feae384e36765e6db510118cc359668ad9b32646beed1d92c5ae5ba48eae7d-merged.mount: Deactivated successfully.
Dec 13 08:52:17 compute-0 podman[358540]: 2025-12-13 08:52:17.785569739 +0000 UTC m=+1.189350920 container remove 15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 08:52:17 compute-0 systemd[1]: libpod-conmon-15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd.scope: Deactivated successfully.
Dec 13 08:52:17 compute-0 ovn_controller[148476]: 2025-12-13T08:52:17Z|01071|binding|INFO|Releasing lport a0154a44-7c71-4193-bf4a-ea45abd45ba7 from this chassis (sb_readonly=0)
Dec 13 08:52:17 compute-0 sudo[358458]: pam_unix(sudo:session): session closed for user root
Dec 13 08:52:17 compute-0 nova_compute[248510]: 2025-12-13 08:52:17.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:17 compute-0 sudo[358576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:52:17 compute-0 sudo[358576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:52:17 compute-0 sudo[358576]: pam_unix(sudo:session): session closed for user root
Dec 13 08:52:17 compute-0 sudo[358601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:52:17 compute-0 sudo[358601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:52:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 163 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 3.8 MiB/s wr, 77 op/s
Dec 13 08:52:18 compute-0 nova_compute[248510]: 2025-12-13 08:52:18.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:18 compute-0 podman[358638]: 2025-12-13 08:52:18.375171152 +0000 UTC m=+0.077056073 container create 176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 08:52:18 compute-0 podman[358638]: 2025-12-13 08:52:18.322114562 +0000 UTC m=+0.023999523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:52:18 compute-0 systemd[1]: Started libpod-conmon-176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f.scope.
Dec 13 08:52:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:52:18 compute-0 podman[358638]: 2025-12-13 08:52:18.654791493 +0000 UTC m=+0.356676434 container init 176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cray, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 08:52:18 compute-0 podman[358638]: 2025-12-13 08:52:18.662423645 +0000 UTC m=+0.364308566 container start 176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cray, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:52:18 compute-0 affectionate_cray[358654]: 167 167
Dec 13 08:52:18 compute-0 systemd[1]: libpod-176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f.scope: Deactivated successfully.
Dec 13 08:52:18 compute-0 podman[358638]: 2025-12-13 08:52:18.778444744 +0000 UTC m=+0.480329655 container attach 176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cray, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:52:18 compute-0 podman[358638]: 2025-12-13 08:52:18.779981742 +0000 UTC m=+0.481866663 container died 176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cray, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 08:52:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b191b7ade96d6cc4c83c07e6d47bb47cc0d2901416d2e9ae8ecdc834a1c185d4-merged.mount: Deactivated successfully.
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.160 248514 DEBUG nova.network.neutron [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.185 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.186 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance network_info: |[{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.188 248514 DEBUG oslo_concurrency.lockutils [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.188 248514 DEBUG nova.network.neutron [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.191 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Start _get_guest_xml network_info=[{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.195 248514 WARNING nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.201 248514 DEBUG nova.virt.libvirt.host [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.203 248514 DEBUG nova.virt.libvirt.host [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.209 248514 DEBUG nova.virt.libvirt.host [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.210 248514 DEBUG nova.virt.libvirt.host [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.210 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.211 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.211 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.211 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.211 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.212 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.212 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.212 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.212 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.213 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.213 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.213 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.216 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:52:19 compute-0 ceph-mon[76537]: pgmap v2718: 321 pgs: 321 active+clean; 163 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 3.8 MiB/s wr, 77 op/s
Dec 13 08:52:19 compute-0 podman[358638]: 2025-12-13 08:52:19.417173718 +0000 UTC m=+1.119058649 container remove 176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cray, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 08:52:19 compute-0 systemd[1]: libpod-conmon-176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f.scope: Deactivated successfully.
Dec 13 08:52:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 08:52:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.2 total, 600.0 interval
                                           Cumulative writes: 28K writes, 114K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
                                           Cumulative WAL: 28K writes, 9954 syncs, 2.88 writes per sync, written: 0.11 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3291 writes, 12K keys, 3291 commit groups, 1.0 writes per commit group, ingest: 13.13 MB, 0.02 MB/s
                                           Interval WAL: 3291 writes, 1338 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 08:52:19 compute-0 podman[358701]: 2025-12-13 08:52:19.603812198 +0000 UTC m=+0.027286935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:52:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:52:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/103592188' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.864 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.889 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:52:19 compute-0 nova_compute[248510]: 2025-12-13 08:52:19.893 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:52:20 compute-0 podman[358701]: 2025-12-13 08:52:20.022876315 +0000 UTC m=+0.446351032 container create 22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:52:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 167 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Dec 13 08:52:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:52:20 compute-0 systemd[1]: Started libpod-conmon-22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6.scope.
Dec 13 08:52:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e2b5a9cafa3512546ead1382e8841912a692d6775ac38b2f4a4914b927541/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e2b5a9cafa3512546ead1382e8841912a692d6775ac38b2f4a4914b927541/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e2b5a9cafa3512546ead1382e8841912a692d6775ac38b2f4a4914b927541/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e2b5a9cafa3512546ead1382e8841912a692d6775ac38b2f4a4914b927541/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/103592188' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:52:20 compute-0 podman[358701]: 2025-12-13 08:52:20.620752155 +0000 UTC m=+1.044226882 container init 22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 08:52:20 compute-0 podman[358701]: 2025-12-13 08:52:20.632457608 +0000 UTC m=+1.055932325 container start 22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:52:20 compute-0 nova_compute[248510]: 2025-12-13 08:52:20.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011156126648185698 of space, bias 1.0, pg target 0.3346837994455709 quantized to 32 (current 32)
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693546727135707 of space, bias 1.0, pg target 0.2008064018140712 quantized to 32 (current 32)
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.736360017727181e-07 of space, bias 4.0, pg target 0.0006883632021272618 quantized to 16 (current 32)
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:52:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:52:21 compute-0 podman[358701]: 2025-12-13 08:52:21.293626995 +0000 UTC m=+1.717101702 container attach 22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:52:21 compute-0 lvm[358836]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:52:21 compute-0 lvm[358837]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:52:21 compute-0 lvm[358836]: VG ceph_vg1 finished
Dec 13 08:52:21 compute-0 lvm[358837]: VG ceph_vg0 finished
Dec 13 08:52:21 compute-0 lvm[358839]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:52:21 compute-0 lvm[358839]: VG ceph_vg2 finished
Dec 13 08:52:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:52:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2711485502' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.432 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.434 248514 DEBUG nova.virt.libvirt.vif [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:52:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1747506169',display_name='tempest-TestShelveInstance-server-1747506169',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1747506169',id=111,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAebkVWpSolmv9rvqo0lcOY35Sse8ZKcOxj7o5K69+ccF5rX0zOaHwOkKpIPYjoJDZ0lGFUDC6z1VdzfkWYgp+l6o2jOhZfRbhSP9Loovk747mSM12eSN6XwaL50YLDn8Q==',key_name='tempest-TestShelveInstance-1284595260',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff4d2c6ad4dc4848ac9f55ff1b9e829a',ramdisk_id='',reservation_id='r-wfxenjt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-2105398574',owner_user_name='tempest-TestShelveInstance-2105398574-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:52:11Z,user_data=None,user_id='fa34623cd3de4a47aa57959f09b3ff79',uuid=fcc617ec-f5f9-41bb-ad4b-86d790622e74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.434 248514 DEBUG nova.network.os_vif_util [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converting VIF {"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.435 248514 DEBUG nova.network.os_vif_util [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.436 248514 DEBUG nova.objects.instance [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'pci_devices' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.458 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:52:21 compute-0 nova_compute[248510]:   <uuid>fcc617ec-f5f9-41bb-ad4b-86d790622e74</uuid>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   <name>instance-0000006f</name>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <nova:name>tempest-TestShelveInstance-server-1747506169</nova:name>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:52:19</nova:creationTime>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:52:21 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:52:21 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:52:21 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:52:21 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:52:21 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:52:21 compute-0 nova_compute[248510]:         <nova:user uuid="fa34623cd3de4a47aa57959f09b3ff79">tempest-TestShelveInstance-2105398574-project-member</nova:user>
Dec 13 08:52:21 compute-0 nova_compute[248510]:         <nova:project uuid="ff4d2c6ad4dc4848ac9f55ff1b9e829a">tempest-TestShelveInstance-2105398574</nova:project>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:52:21 compute-0 nova_compute[248510]:         <nova:port uuid="b2143648-4c23-49b5-8777-433a5b34c7ce">
Dec 13 08:52:21 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <system>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <entry name="serial">fcc617ec-f5f9-41bb-ad4b-86d790622e74</entry>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <entry name="uuid">fcc617ec-f5f9-41bb-ad4b-86d790622e74</entry>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     </system>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   <os>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   </os>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   <features>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   </features>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk">
Dec 13 08:52:21 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       </source>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:52:21 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config">
Dec 13 08:52:21 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       </source>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:52:21 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:02:b7:87"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <target dev="tapb2143648-4c"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/console.log" append="off"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <video>
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     </video>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:52:21 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:52:21 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:52:21 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:52:21 compute-0 nova_compute[248510]: </domain>
Dec 13 08:52:21 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.459 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Preparing to wait for external event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.459 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.459 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.459 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.460 248514 DEBUG nova.virt.libvirt.vif [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:52:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1747506169',display_name='tempest-TestShelveInstance-server-1747506169',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1747506169',id=111,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAebkVWpSolmv9rvqo0lcOY35Sse8ZKcOxj7o5K69+ccF5rX0zOaHwOkKpIPYjoJDZ0lGFUDC6z1VdzfkWYgp+l6o2jOhZfRbhSP9Loovk747mSM12eSN6XwaL50YLDn8Q==',key_name='tempest-TestShelveInstance-1284595260',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff4d2c6ad4dc4848ac9f55ff1b9e829a',ramdisk_id='',reservation_id='r-wfxenjt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-2105398574',owner_user_name='tempest-TestShelveInstance-2105398574-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:52:11Z,user_data=None,user_id='fa34623cd3de4a47aa57959f09b3ff79',uuid=fcc617ec-f5f9-41bb-ad4b-86d790622e74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.460 248514 DEBUG nova.network.os_vif_util [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converting VIF {"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.461 248514 DEBUG nova.network.os_vif_util [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.461 248514 DEBUG os_vif [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.462 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.462 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.462 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:52:21 compute-0 musing_banach[358757]: {}
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.466 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.466 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2143648-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.466 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2143648-4c, col_values=(('external_ids', {'iface-id': 'b2143648-4c23-49b5-8777-433a5b34c7ce', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:b7:87', 'vm-uuid': 'fcc617ec-f5f9-41bb-ad4b-86d790622e74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:21 compute-0 NetworkManager[50376]: <info>  [1765615941.4708] manager: (tapb2143648-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.478 248514 INFO os_vif [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c')
Dec 13 08:52:21 compute-0 systemd[1]: libpod-22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6.scope: Deactivated successfully.
Dec 13 08:52:21 compute-0 systemd[1]: libpod-22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6.scope: Consumed 1.462s CPU time.
Dec 13 08:52:21 compute-0 podman[358845]: 2025-12-13 08:52:21.529251613 +0000 UTC m=+0.024800433 container died 22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.915 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.915 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.915 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No VIF found with MAC fa:16:3e:02:b7:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.916 248514 INFO nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Using config drive
Dec 13 08:52:21 compute-0 nova_compute[248510]: 2025-12-13 08:52:21.966 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:52:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 167 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Dec 13 08:52:22 compute-0 ceph-mon[76537]: pgmap v2719: 321 pgs: 321 active+clean; 167 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Dec 13 08:52:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2711485502' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-914e2b5a9cafa3512546ead1382e8841912a692d6775ac38b2f4a4914b927541-merged.mount: Deactivated successfully.
Dec 13 08:52:22 compute-0 nova_compute[248510]: 2025-12-13 08:52:22.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:22 compute-0 podman[358845]: 2025-12-13 08:52:22.421954375 +0000 UTC m=+0.917503175 container remove 22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 08:52:22 compute-0 systemd[1]: libpod-conmon-22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6.scope: Deactivated successfully.
Dec 13 08:52:22 compute-0 sudo[358601]: pam_unix(sudo:session): session closed for user root
Dec 13 08:52:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:52:22 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:52:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:52:22 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:52:22 compute-0 sudo[358876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:52:22 compute-0 sudo[358876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:52:22 compute-0 sudo[358876]: pam_unix(sudo:session): session closed for user root
Dec 13 08:52:22 compute-0 nova_compute[248510]: 2025-12-13 08:52:22.921 248514 DEBUG nova.network.neutron [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updated VIF entry in instance network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:52:22 compute-0 nova_compute[248510]: 2025-12-13 08:52:22.923 248514 DEBUG nova.network.neutron [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:52:22 compute-0 nova_compute[248510]: 2025-12-13 08:52:22.952 248514 DEBUG oslo_concurrency.lockutils [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:52:23 compute-0 nova_compute[248510]: 2025-12-13 08:52:23.005 248514 INFO nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Creating config drive at /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config
Dec 13 08:52:23 compute-0 nova_compute[248510]: 2025-12-13 08:52:23.011 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcrtstmjd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:52:23 compute-0 nova_compute[248510]: 2025-12-13 08:52:23.179 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcrtstmjd" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:52:23 compute-0 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 08:52:23 compute-0 nova_compute[248510]: 2025-12-13 08:52:23.517 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:52:23 compute-0 nova_compute[248510]: 2025-12-13 08:52:23.521 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:52:23 compute-0 ceph-mon[76537]: pgmap v2720: 321 pgs: 321 active+clean; 167 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Dec 13 08:52:23 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:52:23 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:52:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 13 08:52:24 compute-0 nova_compute[248510]: 2025-12-13 08:52:24.410 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.889s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:52:24 compute-0 nova_compute[248510]: 2025-12-13 08:52:24.412 248514 INFO nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deleting local config drive /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config because it was imported into RBD.
Dec 13 08:52:24 compute-0 kernel: tapb2143648-4c: entered promiscuous mode
Dec 13 08:52:24 compute-0 NetworkManager[50376]: <info>  [1765615944.4992] manager: (tapb2143648-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/446)
Dec 13 08:52:24 compute-0 ovn_controller[148476]: 2025-12-13T08:52:24Z|01072|binding|INFO|Claiming lport b2143648-4c23-49b5-8777-433a5b34c7ce for this chassis.
Dec 13 08:52:24 compute-0 ovn_controller[148476]: 2025-12-13T08:52:24Z|01073|binding|INFO|b2143648-4c23-49b5-8777-433a5b34c7ce: Claiming fa:16:3e:02:b7:87 10.100.0.4
Dec 13 08:52:24 compute-0 nova_compute[248510]: 2025-12-13 08:52:24.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.507 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b7:87 10.100.0.4'], port_security=['fa:16:3e:02:b7:87 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fcc617ec-f5f9-41bb-ad4b-86d790622e74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db56c55-78f1-455f-855e-db3acef05ff3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4d2c6ad4dc4848ac9f55ff1b9e829a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9bebfabc-b3c7-415b-9881-5a1028b3b8d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efdd787e-21fc-422f-be64-57ef7368490d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b2143648-4c23-49b5-8777-433a5b34c7ce) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.509 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b2143648-4c23-49b5-8777-433a5b34c7ce in datapath 6db56c55-78f1-455f-855e-db3acef05ff3 bound to our chassis
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.510 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6db56c55-78f1-455f-855e-db3acef05ff3
Dec 13 08:52:24 compute-0 ovn_controller[148476]: 2025-12-13T08:52:24Z|01074|binding|INFO|Setting lport b2143648-4c23-49b5-8777-433a5b34c7ce ovn-installed in OVS
Dec 13 08:52:24 compute-0 ovn_controller[148476]: 2025-12-13T08:52:24Z|01075|binding|INFO|Setting lport b2143648-4c23-49b5-8777-433a5b34c7ce up in Southbound
Dec 13 08:52:24 compute-0 nova_compute[248510]: 2025-12-13 08:52:24.518 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:24 compute-0 nova_compute[248510]: 2025-12-13 08:52:24.522 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.525 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[18bca0c0-6228-45b2-b15b-108cfd7694d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.526 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6db56c55-71 in ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.528 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6db56c55-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.529 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f41a372-5f46-4c7d-a245-b7d18a08a275]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.530 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ca186484-ebcc-4ad0-acfd-5dab4227c68d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 systemd-udevd[358955]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:52:24 compute-0 systemd-machined[210538]: New machine qemu-137-instance-0000006f.
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.545 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[02d96675-384e-4d16-ae99-6e54c6cc9783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 NetworkManager[50376]: <info>  [1765615944.5513] device (tapb2143648-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:52:24 compute-0 NetworkManager[50376]: <info>  [1765615944.5520] device (tapb2143648-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:52:24 compute-0 systemd[1]: Started Virtual Machine qemu-137-instance-0000006f.
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.565 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[27e0e555-1210-4285-be43-aa6c3e7d01a0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.601 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[618021f8-bae8-4452-a346-cc23b32d5d9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 NetworkManager[50376]: <info>  [1765615944.6107] manager: (tap6db56c55-70): new Veth device (/org/freedesktop/NetworkManager/Devices/447)
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.610 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[87f71812-19da-490d-85ee-667e0f91b915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.657 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a2593440-539c-4ac4-a217-4cb810314e41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.661 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2015a230-50fa-418a-819d-fc1160f1286d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 NetworkManager[50376]: <info>  [1765615944.6878] device (tap6db56c55-70): carrier: link connected
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.697 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[15fbee32-87d1-4678-993d-441a6d01e78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.718 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6fbf83-f28a-4636-b7f2-462e94ac6d2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6db56c55-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:4b:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 318], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852190, 'reachable_time': 33294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358989, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.736 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[20ef2a45-0af6-43d7-8c4f-8179cc150a61]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:4b23'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852190, 'tstamp': 852190}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358990, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.755 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[08e39f62-ec16-4bf0-b6e1-dcbfa1666ab7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6db56c55-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:4b:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 318], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852190, 'reachable_time': 33294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358991, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.790 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[600ba002-2331-40c1-86a1-cdc7544a8c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.884 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b93a8a7f-841c-4e9c-a7c9-f5f9121b3630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.886 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6db56c55-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.886 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.886 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6db56c55-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:52:24 compute-0 kernel: tap6db56c55-70: entered promiscuous mode
Dec 13 08:52:24 compute-0 NetworkManager[50376]: <info>  [1765615944.8891] manager: (tap6db56c55-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Dec 13 08:52:24 compute-0 nova_compute[248510]: 2025-12-13 08:52:24.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:24 compute-0 nova_compute[248510]: 2025-12-13 08:52:24.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.924 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6db56c55-70, col_values=(('external_ids', {'iface-id': '401abfe8-06be-4b57-8432-310dcd747a81'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:52:24 compute-0 ovn_controller[148476]: 2025-12-13T08:52:24Z|01076|binding|INFO|Releasing lport 401abfe8-06be-4b57-8432-310dcd747a81 from this chassis (sb_readonly=0)
Dec 13 08:52:24 compute-0 nova_compute[248510]: 2025-12-13 08:52:24.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:24 compute-0 nova_compute[248510]: 2025-12-13 08:52:24.926 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.927 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6db56c55-78f1-455f-855e-db3acef05ff3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6db56c55-78f1-455f-855e-db3acef05ff3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.928 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe16d0cc-ca4b-41ef-8aa5-489b01d88157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.929 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-6db56c55-78f1-455f-855e-db3acef05ff3
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/6db56c55-78f1-455f-855e-db3acef05ff3.pid.haproxy
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 6db56c55-78f1-455f-855e-db3acef05ff3
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:52:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.929 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'env', 'PROCESS_TAG=haproxy-6db56c55-78f1-455f-855e-db3acef05ff3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6db56c55-78f1-455f-855e-db3acef05ff3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:52:24 compute-0 nova_compute[248510]: 2025-12-13 08:52:24.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:52:25 compute-0 ceph-mon[76537]: pgmap v2721: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 13 08:52:25 compute-0 podman[359059]: 2025-12-13 08:52:25.28300925 +0000 UTC m=+0.024870245 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.608 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615945.608165, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.609 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Started (Lifecycle Event)
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.639 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.643 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615945.6095579, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.644 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Paused (Lifecycle Event)
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.925 248514 DEBUG nova.compute.manager [req-f01bc60b-3e00-486f-9737-cf2d1a81c631 req-5414d73b-3097-460f-b477-f59cc1b05df3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.925 248514 DEBUG oslo_concurrency.lockutils [req-f01bc60b-3e00-486f-9737-cf2d1a81c631 req-5414d73b-3097-460f-b477-f59cc1b05df3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.926 248514 DEBUG oslo_concurrency.lockutils [req-f01bc60b-3e00-486f-9737-cf2d1a81c631 req-5414d73b-3097-460f-b477-f59cc1b05df3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.926 248514 DEBUG oslo_concurrency.lockutils [req-f01bc60b-3e00-486f-9737-cf2d1a81c631 req-5414d73b-3097-460f-b477-f59cc1b05df3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.926 248514 DEBUG nova.compute.manager [req-f01bc60b-3e00-486f-9737-cf2d1a81c631 req-5414d73b-3097-460f-b477-f59cc1b05df3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Processing event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.927 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.930 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.935 248514 INFO nova.virt.libvirt.driver [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance spawned successfully.
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.936 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.944 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.947 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615945.9299753, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:52:25 compute-0 nova_compute[248510]: 2025-12-13 08:52:25.948 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Resumed (Lifecycle Event)
Dec 13 08:52:25 compute-0 podman[359059]: 2025-12-13 08:52:25.988358645 +0000 UTC m=+0.730219620 container create b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.011 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.016 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.016 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.017 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.017 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.017 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.018 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.021 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.065 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:52:26 compute-0 systemd[1]: Started libpod-conmon-b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06.scope.
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.089 248514 INFO nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Took 14.16 seconds to spawn the instance on the hypervisor.
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.089 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:52:26 compute-0 podman[359080]: 2025-12-13 08:52:26.123370891 +0000 UTC m=+0.206788716 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 08:52:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:52:26 compute-0 podman[359079]: 2025-12-13 08:52:26.130153811 +0000 UTC m=+0.212927330 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 08:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d101a5c71da0f833ee85e3864334a8ddce618ff29ffdbe45b70d5c22bb3233b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:52:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 103 KiB/s wr, 23 op/s
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.210 248514 INFO nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Took 15.60 seconds to build instance.
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.243 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:26 compute-0 ovn_controller[148476]: 2025-12-13T08:52:26Z|01077|binding|INFO|Releasing lport a0154a44-7c71-4193-bf4a-ea45abd45ba7 from this chassis (sb_readonly=0)
Dec 13 08:52:26 compute-0 ovn_controller[148476]: 2025-12-13T08:52:26Z|01078|binding|INFO|Releasing lport 401abfe8-06be-4b57-8432-310dcd747a81 from this chassis (sb_readonly=0)
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:26 compute-0 podman[359059]: 2025-12-13 08:52:26.365797879 +0000 UTC m=+1.107658894 container init b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 08:52:26 compute-0 podman[359059]: 2025-12-13 08:52:26.371458641 +0000 UTC m=+1.113319626 container start b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 08:52:26 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [NOTICE]   (359147) : New worker (359149) forked
Dec 13 08:52:26 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [NOTICE]   (359147) : Loading success.
Dec 13 08:52:26 compute-0 podman[359078]: 2025-12-13 08:52:26.399824973 +0000 UTC m=+0.487307590 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 13 08:52:26 compute-0 nova_compute[248510]: 2025-12-13 08:52:26.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:27 compute-0 nova_compute[248510]: 2025-12-13 08:52:27.398 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:27 compute-0 ceph-mon[76537]: pgmap v2722: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 103 KiB/s wr, 23 op/s
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.086 248514 DEBUG nova.compute.manager [req-11de4aaf-f597-4bc0-a90c-853a805c0b16 req-c2ad2ffe-c987-4c5a-bbb2-af84d6954cf5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.086 248514 DEBUG oslo_concurrency.lockutils [req-11de4aaf-f597-4bc0-a90c-853a805c0b16 req-c2ad2ffe-c987-4c5a-bbb2-af84d6954cf5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.087 248514 DEBUG oslo_concurrency.lockutils [req-11de4aaf-f597-4bc0-a90c-853a805c0b16 req-c2ad2ffe-c987-4c5a-bbb2-af84d6954cf5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.087 248514 DEBUG oslo_concurrency.lockutils [req-11de4aaf-f597-4bc0-a90c-853a805c0b16 req-c2ad2ffe-c987-4c5a-bbb2-af84d6954cf5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.087 248514 DEBUG nova.compute.manager [req-11de4aaf-f597-4bc0-a90c-853a805c0b16 req-c2ad2ffe-c987-4c5a-bbb2-af84d6954cf5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] No waiting events found dispatching network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.087 248514 WARNING nova.compute.manager [req-11de4aaf-f597-4bc0-a90c-853a805c0b16 req-c2ad2ffe-c987-4c5a-bbb2-af84d6954cf5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received unexpected event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce for instance with vm_state active and task_state None.
Dec 13 08:52:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 115 KiB/s wr, 73 op/s
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.861 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.861 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.861 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.862 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.862 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.863 248514 INFO nova.compute.manager [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Terminating instance
Dec 13 08:52:28 compute-0 nova_compute[248510]: 2025-12-13 08:52:28.864 248514 DEBUG nova.compute.manager [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:52:28 compute-0 ceph-mon[76537]: pgmap v2723: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 115 KiB/s wr, 73 op/s
Dec 13 08:52:29 compute-0 kernel: tap4b6e89d0-30 (unregistering): left promiscuous mode
Dec 13 08:52:29 compute-0 NetworkManager[50376]: <info>  [1765615949.4000] device (tap4b6e89d0-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:29 compute-0 ovn_controller[148476]: 2025-12-13T08:52:29Z|01079|binding|INFO|Releasing lport 4b6e89d0-3078-468e-ad37-fa04ed14c96a from this chassis (sb_readonly=0)
Dec 13 08:52:29 compute-0 ovn_controller[148476]: 2025-12-13T08:52:29Z|01080|binding|INFO|Setting lport 4b6e89d0-3078-468e-ad37-fa04ed14c96a down in Southbound
Dec 13 08:52:29 compute-0 ovn_controller[148476]: 2025-12-13T08:52:29Z|01081|binding|INFO|Removing iface tap4b6e89d0-30 ovn-installed in OVS
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:29.423 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:e1:26 10.100.0.9'], port_security=['fa:16:3e:c0:e1:26 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '32f46650-28ec-40b8-8cbb-afb8a34cda45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c6d054d-d66c-470b-bd1c-b4cabcf90c1c a55bf908-21fd-47cc-b7fd-f8685207b408', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=369f5416-159a-497c-b005-e2677c61c320, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4b6e89d0-3078-468e-ad37-fa04ed14c96a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:52:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:29.424 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4b6e89d0-3078-468e-ad37-fa04ed14c96a in datapath e742c2df-df4d-48f6-8153-8353ec98fe5c unbound from our chassis
Dec 13 08:52:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:29.426 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e742c2df-df4d-48f6-8153-8353ec98fe5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:52:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:29.426 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[48dc9f4d-907b-42f6-819c-dee2a6dab20e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:29.427 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c namespace which is not needed anymore
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.433 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:29 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Dec 13 08:52:29 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006e.scope: Consumed 13.825s CPU time.
Dec 13 08:52:29 compute-0 systemd-machined[210538]: Machine qemu-136-instance-0000006e terminated.
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.503 248514 INFO nova.virt.libvirt.driver [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Instance destroyed successfully.
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.503 248514 DEBUG nova.objects.instance [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 32f46650-28ec-40b8-8cbb-afb8a34cda45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.529 248514 DEBUG nova.virt.libvirt.vif [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:51:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=110,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1EAPswQkN6gGnzWb6nWTEPMlUNbetceQGhufBgelanH3kUDSBVad+EWLTxUJKeHTg22cPL3Ixvag9/dm2M/FjTcKf+ix54cOXq9k631rEiL8V03+5FUWKZfsVC6yBvBw==',key_name='tempest-TestSecurityGroupsBasicOps-451827369',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:51:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-lhganq1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:51:58Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=32f46650-28ec-40b8-8cbb-afb8a34cda45,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.529 248514 DEBUG nova.network.os_vif_util [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.530 248514 DEBUG nova.network.os_vif_util [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.530 248514 DEBUG os_vif [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.533 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b6e89d0-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.534 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:29 compute-0 nova_compute[248510]: 2025-12-13 08:52:29.538 248514 INFO os_vif [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30')
Dec 13 08:52:29 compute-0 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [NOTICE]   (357952) : haproxy version is 2.8.14-c23fe91
Dec 13 08:52:29 compute-0 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [NOTICE]   (357952) : path to executable is /usr/sbin/haproxy
Dec 13 08:52:29 compute-0 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [WARNING]  (357952) : Exiting Master process...
Dec 13 08:52:29 compute-0 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [ALERT]    (357952) : Current worker (357954) exited with code 143 (Terminated)
Dec 13 08:52:29 compute-0 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [WARNING]  (357952) : All workers exited. Exiting... (0)
Dec 13 08:52:29 compute-0 systemd[1]: libpod-ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d.scope: Deactivated successfully.
Dec 13 08:52:29 compute-0 podman[359192]: 2025-12-13 08:52:29.796700602 +0000 UTC m=+0.281142700 container died ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:52:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 108 KiB/s wr, 86 op/s
Dec 13 08:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d-userdata-shm.mount: Deactivated successfully.
Dec 13 08:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a82de2f5cdb66864aa371b69f7e8060f09c82097adfafe67c8d328dd59b19987-merged.mount: Deactivated successfully.
Dec 13 08:52:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:52:30 compute-0 nova_compute[248510]: 2025-12-13 08:52:30.233 248514 DEBUG nova.compute.manager [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-changed-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:52:30 compute-0 nova_compute[248510]: 2025-12-13 08:52:30.233 248514 DEBUG nova.compute.manager [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Refreshing instance network info cache due to event network-changed-4b6e89d0-3078-468e-ad37-fa04ed14c96a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:52:30 compute-0 nova_compute[248510]: 2025-12-13 08:52:30.234 248514 DEBUG oslo_concurrency.lockutils [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:52:30 compute-0 nova_compute[248510]: 2025-12-13 08:52:30.234 248514 DEBUG oslo_concurrency.lockutils [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:52:30 compute-0 nova_compute[248510]: 2025-12-13 08:52:30.234 248514 DEBUG nova.network.neutron [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Refreshing network info cache for port 4b6e89d0-3078-468e-ad37-fa04ed14c96a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:52:30 compute-0 podman[359192]: 2025-12-13 08:52:30.488795515 +0000 UTC m=+0.973237623 container cleanup ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 08:52:30 compute-0 systemd[1]: libpod-conmon-ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d.scope: Deactivated successfully.
Dec 13 08:52:31 compute-0 ceph-mon[76537]: pgmap v2724: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 108 KiB/s wr, 86 op/s
Dec 13 08:52:31 compute-0 podman[359240]: 2025-12-13 08:52:31.852568308 +0000 UTC m=+1.335808823 container remove ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:52:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.862 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a27d4422-64a7-441e-b463-9a4bcd234238]: (4, ('Sat Dec 13 08:52:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c (ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d)\nae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d\nSat Dec 13 08:52:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c (ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d)\nae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.864 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[12192075-fb88-4562-9c29-e27bc309ecd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.865 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape742c2df-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:52:31 compute-0 nova_compute[248510]: 2025-12-13 08:52:31.866 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:31 compute-0 kernel: tape742c2df-d0: left promiscuous mode
Dec 13 08:52:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.874 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3585c421-895b-4ed7-b3b3-13803584c960]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:31 compute-0 nova_compute[248510]: 2025-12-13 08:52:31.890 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.898 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9c5df7-219e-4dae-8585-ebe665e1fb18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.900 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7daddc7-1837-4368-b494-22eada63437f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d6184e9b-6d5a-4bbf-9e86-622072b7bfa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 849348, 'reachable_time': 28673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359256, 'error': None, 'target': 'ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:31 compute-0 systemd[1]: run-netns-ovnmeta\x2de742c2df\x2ddf4d\x2d48f6\x2d8153\x2d8353ec98fe5c.mount: Deactivated successfully.
Dec 13 08:52:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.921 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:52:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.921 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dcea2618-c011-4ec7-a438-73db2fcaca06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.402 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.420 248514 DEBUG nova.compute.manager [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-vif-unplugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.421 248514 DEBUG oslo_concurrency.lockutils [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.421 248514 DEBUG oslo_concurrency.lockutils [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.421 248514 DEBUG oslo_concurrency.lockutils [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.421 248514 DEBUG nova.compute.manager [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] No waiting events found dispatching network-vif-unplugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.422 248514 DEBUG nova.compute.manager [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-vif-unplugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.422 248514 DEBUG nova.compute.manager [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.422 248514 DEBUG oslo_concurrency.lockutils [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.423 248514 DEBUG oslo_concurrency.lockutils [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.424 248514 DEBUG oslo_concurrency.lockutils [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.424 248514 DEBUG nova.compute.manager [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] No waiting events found dispatching network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.425 248514 WARNING nova.compute.manager [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received unexpected event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a for instance with vm_state active and task_state deleting.
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.557 248514 DEBUG nova.network.neutron [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updated VIF entry in instance network info cache for port 4b6e89d0-3078-468e-ad37-fa04ed14c96a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.558 248514 DEBUG nova.network.neutron [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updating instance_info_cache with network_info: [{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:52:32 compute-0 nova_compute[248510]: 2025-12-13 08:52:32.691 248514 DEBUG oslo_concurrency.lockutils [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:52:33 compute-0 ceph-mon[76537]: pgmap v2725: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Dec 13 08:52:33 compute-0 nova_compute[248510]: 2025-12-13 08:52:33.503 248514 DEBUG nova.compute.manager [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:52:33 compute-0 nova_compute[248510]: 2025-12-13 08:52:33.504 248514 DEBUG nova.compute.manager [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing instance network info cache due to event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:52:33 compute-0 nova_compute[248510]: 2025-12-13 08:52:33.504 248514 DEBUG oslo_concurrency.lockutils [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:52:33 compute-0 nova_compute[248510]: 2025-12-13 08:52:33.505 248514 DEBUG oslo_concurrency.lockutils [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:52:33 compute-0 nova_compute[248510]: 2025-12-13 08:52:33.505 248514 DEBUG nova.network.neutron [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:52:33 compute-0 nova_compute[248510]: 2025-12-13 08:52:33.799 248514 INFO nova.virt.libvirt.driver [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Deleting instance files /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45_del
Dec 13 08:52:33 compute-0 nova_compute[248510]: 2025-12-13 08:52:33.801 248514 INFO nova.virt.libvirt.driver [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Deletion of /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45_del complete
Dec 13 08:52:33 compute-0 nova_compute[248510]: 2025-12-13 08:52:33.888 248514 INFO nova.compute.manager [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Took 5.02 seconds to destroy the instance on the hypervisor.
Dec 13 08:52:33 compute-0 nova_compute[248510]: 2025-12-13 08:52:33.889 248514 DEBUG oslo.service.loopingcall [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:52:33 compute-0 nova_compute[248510]: 2025-12-13 08:52:33.890 248514 DEBUG nova.compute.manager [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:52:33 compute-0 nova_compute[248510]: 2025-12-13 08:52:33.890 248514 DEBUG nova.network.neutron [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:52:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 88 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 100 op/s
Dec 13 08:52:34 compute-0 nova_compute[248510]: 2025-12-13 08:52:34.537 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:52:35 compute-0 nova_compute[248510]: 2025-12-13 08:52:35.294 248514 DEBUG nova.network.neutron [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:52:35 compute-0 nova_compute[248510]: 2025-12-13 08:52:35.324 248514 INFO nova.compute.manager [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Took 1.43 seconds to deallocate network for instance.
Dec 13 08:52:35 compute-0 nova_compute[248510]: 2025-12-13 08:52:35.393 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:35 compute-0 nova_compute[248510]: 2025-12-13 08:52:35.394 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:35 compute-0 nova_compute[248510]: 2025-12-13 08:52:35.430 248514 DEBUG nova.compute.manager [req-e34162b2-06cf-4fe2-bd67-0488f3e365ce req-a11d7639-6bbf-44e3-af3d-2f9caf6e8ab8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-vif-deleted-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:52:35 compute-0 ceph-mon[76537]: pgmap v2726: 321 pgs: 321 active+clean; 88 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 100 op/s
Dec 13 08:52:35 compute-0 nova_compute[248510]: 2025-12-13 08:52:35.501 248514 DEBUG oslo_concurrency.processutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:52:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:52:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1544682842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:52:36 compute-0 nova_compute[248510]: 2025-12-13 08:52:36.122 248514 DEBUG oslo_concurrency.processutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:52:36 compute-0 nova_compute[248510]: 2025-12-13 08:52:36.130 248514 DEBUG nova.compute.provider_tree [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:52:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 88 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 100 op/s
Dec 13 08:52:36 compute-0 nova_compute[248510]: 2025-12-13 08:52:36.157 248514 DEBUG nova.scheduler.client.report [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:52:36 compute-0 nova_compute[248510]: 2025-12-13 08:52:36.190 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:36 compute-0 nova_compute[248510]: 2025-12-13 08:52:36.217 248514 INFO nova.scheduler.client.report [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 32f46650-28ec-40b8-8cbb-afb8a34cda45
Dec 13 08:52:36 compute-0 nova_compute[248510]: 2025-12-13 08:52:36.318 248514 DEBUG nova.network.neutron [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updated VIF entry in instance network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:52:36 compute-0 nova_compute[248510]: 2025-12-13 08:52:36.319 248514 DEBUG nova.network.neutron [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:52:36 compute-0 nova_compute[248510]: 2025-12-13 08:52:36.331 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:36 compute-0 nova_compute[248510]: 2025-12-13 08:52:36.345 248514 DEBUG oslo_concurrency.lockutils [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:52:36 compute-0 nova_compute[248510]: 2025-12-13 08:52:36.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:52:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1544682842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:52:36 compute-0 ceph-mon[76537]: pgmap v2727: 321 pgs: 321 active+clean; 88 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 100 op/s
Dec 13 08:52:37 compute-0 nova_compute[248510]: 2025-12-13 08:52:37.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 105 op/s
Dec 13 08:52:39 compute-0 nova_compute[248510]: 2025-12-13 08:52:39.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:52:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:52:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:52:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:52:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:52:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:52:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 88 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 14 KiB/s wr, 60 op/s
Dec 13 08:52:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:52:40 compute-0 ceph-mon[76537]: pgmap v2728: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 105 op/s
Dec 13 08:52:41 compute-0 nova_compute[248510]: 2025-12-13 08:52:41.112 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:41 compute-0 ceph-mon[76537]: pgmap v2729: 321 pgs: 321 active+clean; 88 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 14 KiB/s wr, 60 op/s
Dec 13 08:52:41 compute-0 ovn_controller[148476]: 2025-12-13T08:52:41Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:b7:87 10.100.0.4
Dec 13 08:52:41 compute-0 ovn_controller[148476]: 2025-12-13T08:52:41Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:b7:87 10.100.0.4
Dec 13 08:52:41 compute-0 ovn_controller[148476]: 2025-12-13T08:52:41Z|01082|binding|INFO|Releasing lport 401abfe8-06be-4b57-8432-310dcd747a81 from this chassis (sb_readonly=0)
Dec 13 08:52:41 compute-0 nova_compute[248510]: 2025-12-13 08:52:41.798 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 88 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 14 KiB/s wr, 36 op/s
Dec 13 08:52:42 compute-0 nova_compute[248510]: 2025-12-13 08:52:42.451 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:43 compute-0 ceph-mon[76537]: pgmap v2730: 321 pgs: 321 active+clean; 88 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 14 KiB/s wr, 36 op/s
Dec 13 08:52:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Dec 13 08:52:44 compute-0 nova_compute[248510]: 2025-12-13 08:52:44.499 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615949.4979377, 32f46650-28ec-40b8-8cbb-afb8a34cda45 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:52:44 compute-0 nova_compute[248510]: 2025-12-13 08:52:44.500 248514 INFO nova.compute.manager [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] VM Stopped (Lifecycle Event)
Dec 13 08:52:44 compute-0 nova_compute[248510]: 2025-12-13 08:52:44.525 248514 DEBUG nova.compute.manager [None req-db214919-404c-4918-a1b5-8a8f72fd62aa - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:52:44 compute-0 nova_compute[248510]: 2025-12-13 08:52:44.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:52:45 compute-0 ceph-mon[76537]: pgmap v2731: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Dec 13 08:52:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 13 08:52:46 compute-0 ceph-mon[76537]: pgmap v2732: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 13 08:52:47 compute-0 nova_compute[248510]: 2025-12-13 08:52:47.453 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 08:52:49 compute-0 ceph-mon[76537]: pgmap v2733: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 08:52:49 compute-0 nova_compute[248510]: 2025-12-13 08:52:49.302 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:49 compute-0 nova_compute[248510]: 2025-12-13 08:52:49.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:50 compute-0 nova_compute[248510]: 2025-12-13 08:52:50.051 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:50 compute-0 nova_compute[248510]: 2025-12-13 08:52:50.052 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:50 compute-0 nova_compute[248510]: 2025-12-13 08:52:50.052 248514 INFO nova.compute.manager [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Shelving
Dec 13 08:52:50 compute-0 nova_compute[248510]: 2025-12-13 08:52:50.075 248514 DEBUG nova.virt.libvirt.driver [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 08:52:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Dec 13 08:52:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:52:50 compute-0 nova_compute[248510]: 2025-12-13 08:52:50.405 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:51 compute-0 ceph-mon[76537]: pgmap v2734: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Dec 13 08:52:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Dec 13 08:52:52 compute-0 kernel: tapb2143648-4c (unregistering): left promiscuous mode
Dec 13 08:52:52 compute-0 NetworkManager[50376]: <info>  [1765615972.3476] device (tapb2143648-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:52:52 compute-0 nova_compute[248510]: 2025-12-13 08:52:52.357 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:52 compute-0 ovn_controller[148476]: 2025-12-13T08:52:52Z|01083|binding|INFO|Releasing lport b2143648-4c23-49b5-8777-433a5b34c7ce from this chassis (sb_readonly=0)
Dec 13 08:52:52 compute-0 ovn_controller[148476]: 2025-12-13T08:52:52Z|01084|binding|INFO|Setting lport b2143648-4c23-49b5-8777-433a5b34c7ce down in Southbound
Dec 13 08:52:52 compute-0 ovn_controller[148476]: 2025-12-13T08:52:52Z|01085|binding|INFO|Removing iface tapb2143648-4c ovn-installed in OVS
Dec 13 08:52:52 compute-0 nova_compute[248510]: 2025-12-13 08:52:52.360 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.365 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b7:87 10.100.0.4'], port_security=['fa:16:3e:02:b7:87 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fcc617ec-f5f9-41bb-ad4b-86d790622e74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db56c55-78f1-455f-855e-db3acef05ff3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4d2c6ad4dc4848ac9f55ff1b9e829a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9bebfabc-b3c7-415b-9881-5a1028b3b8d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efdd787e-21fc-422f-be64-57ef7368490d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b2143648-4c23-49b5-8777-433a5b34c7ce) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.367 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b2143648-4c23-49b5-8777-433a5b34c7ce in datapath 6db56c55-78f1-455f-855e-db3acef05ff3 unbound from our chassis
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.368 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db56c55-78f1-455f-855e-db3acef05ff3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.369 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[61a9ed37-94fc-4092-a325-f1d1aa2c0e7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.369 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 namespace which is not needed anymore
Dec 13 08:52:52 compute-0 nova_compute[248510]: 2025-12-13 08:52:52.377 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:52 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Dec 13 08:52:52 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006f.scope: Consumed 14.061s CPU time.
Dec 13 08:52:52 compute-0 systemd-machined[210538]: Machine qemu-137-instance-0000006f terminated.
Dec 13 08:52:52 compute-0 nova_compute[248510]: 2025-12-13 08:52:52.455 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:52 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [NOTICE]   (359147) : haproxy version is 2.8.14-c23fe91
Dec 13 08:52:52 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [NOTICE]   (359147) : path to executable is /usr/sbin/haproxy
Dec 13 08:52:52 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [WARNING]  (359147) : Exiting Master process...
Dec 13 08:52:52 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [ALERT]    (359147) : Current worker (359149) exited with code 143 (Terminated)
Dec 13 08:52:52 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [WARNING]  (359147) : All workers exited. Exiting... (0)
Dec 13 08:52:52 compute-0 systemd[1]: libpod-b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06.scope: Deactivated successfully.
Dec 13 08:52:52 compute-0 podman[359305]: 2025-12-13 08:52:52.519311708 +0000 UTC m=+0.049950223 container died b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 08:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06-userdata-shm.mount: Deactivated successfully.
Dec 13 08:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d101a5c71da0f833ee85e3864334a8ddce618ff29ffdbe45b70d5c22bb3233b-merged.mount: Deactivated successfully.
Dec 13 08:52:52 compute-0 podman[359305]: 2025-12-13 08:52:52.585481717 +0000 UTC m=+0.116120242 container cleanup b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Dec 13 08:52:52 compute-0 systemd[1]: libpod-conmon-b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06.scope: Deactivated successfully.
Dec 13 08:52:52 compute-0 podman[359343]: 2025-12-13 08:52:52.664528849 +0000 UTC m=+0.053907163 container remove b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bcf982-b677-4e53-9b35-e4f0fc2651b1]: (4, ('Sat Dec 13 08:52:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 (b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06)\nb21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06\nSat Dec 13 08:52:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 (b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06)\nb21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.672 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2b086b53-03f3-45ed-a07f-81330416bfee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.674 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6db56c55-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:52:52 compute-0 nova_compute[248510]: 2025-12-13 08:52:52.697 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:52 compute-0 kernel: tap6db56c55-70: left promiscuous mode
Dec 13 08:52:52 compute-0 nova_compute[248510]: 2025-12-13 08:52:52.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.717 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3e0992-de1b-4fb8-8574-9002884763bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.738 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b265c1d-e107-4629-81d3-80c30fb53da0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.739 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d37cadd0-a076-4777-8ee9-fd052c21deb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.758 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0c56412f-ae1d-4d3f-9c0c-94d7a57aabbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852181, 'reachable_time': 29470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359365, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.761 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:52:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.761 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7eab78b6-99e7-4104-b716-db935eb15266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:52:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d6db56c55\x2d78f1\x2d455f\x2d855e\x2ddb3acef05ff3.mount: Deactivated successfully.
Dec 13 08:52:53 compute-0 nova_compute[248510]: 2025-12-13 08:52:53.093 248514 INFO nova.virt.libvirt.driver [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance shutdown successfully after 3 seconds.
Dec 13 08:52:53 compute-0 nova_compute[248510]: 2025-12-13 08:52:53.098 248514 INFO nova.virt.libvirt.driver [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance destroyed successfully.
Dec 13 08:52:53 compute-0 nova_compute[248510]: 2025-12-13 08:52:53.099 248514 DEBUG nova.objects.instance [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'numa_topology' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:52:53 compute-0 nova_compute[248510]: 2025-12-13 08:52:53.427 248514 INFO nova.virt.libvirt.driver [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Beginning cold snapshot process
Dec 13 08:52:53 compute-0 ceph-mon[76537]: pgmap v2735: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Dec 13 08:52:54 compute-0 nova_compute[248510]: 2025-12-13 08:52:54.102 248514 DEBUG nova.virt.libvirt.imagebackend [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:52:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 241 KiB/s rd, 2.2 MiB/s wr, 58 op/s
Dec 13 08:52:54 compute-0 nova_compute[248510]: 2025-12-13 08:52:54.323 248514 DEBUG nova.storage.rbd_utils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] creating snapshot(ef9ff3d602fb4ef884bff796ec97a142) on rbd image(fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:52:54 compute-0 nova_compute[248510]: 2025-12-13 08:52:54.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Dec 13 08:52:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Dec 13 08:52:54 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Dec 13 08:52:55 compute-0 ceph-mon[76537]: pgmap v2736: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 241 KiB/s rd, 2.2 MiB/s wr, 58 op/s
Dec 13 08:52:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:52:55 compute-0 nova_compute[248510]: 2025-12-13 08:52:55.407 248514 DEBUG nova.storage.rbd_utils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] cloning vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk@ef9ff3d602fb4ef884bff796ec97a142 to images/e80a280c-5146-4d78-99c1-0d3591de049e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:52:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:52:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:55 compute-0 nova_compute[248510]: 2025-12-13 08:52:55.701 248514 DEBUG nova.storage.rbd_utils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] flattening images/e80a280c-5146-4d78-99c1-0d3591de049e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:52:55 compute-0 nova_compute[248510]: 2025-12-13 08:52:55.818 248514 DEBUG nova.compute.manager [req-f8ce9988-fa63-4974-b649-c4f9a08f8eb2 req-ad4305a3-5183-42fc-9201-75cf5c3d59d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-unplugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:52:55 compute-0 nova_compute[248510]: 2025-12-13 08:52:55.818 248514 DEBUG oslo_concurrency.lockutils [req-f8ce9988-fa63-4974-b649-c4f9a08f8eb2 req-ad4305a3-5183-42fc-9201-75cf5c3d59d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:55 compute-0 nova_compute[248510]: 2025-12-13 08:52:55.818 248514 DEBUG oslo_concurrency.lockutils [req-f8ce9988-fa63-4974-b649-c4f9a08f8eb2 req-ad4305a3-5183-42fc-9201-75cf5c3d59d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:55 compute-0 nova_compute[248510]: 2025-12-13 08:52:55.819 248514 DEBUG oslo_concurrency.lockutils [req-f8ce9988-fa63-4974-b649-c4f9a08f8eb2 req-ad4305a3-5183-42fc-9201-75cf5c3d59d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:55 compute-0 nova_compute[248510]: 2025-12-13 08:52:55.819 248514 DEBUG nova.compute.manager [req-f8ce9988-fa63-4974-b649-c4f9a08f8eb2 req-ad4305a3-5183-42fc-9201-75cf5c3d59d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] No waiting events found dispatching network-vif-unplugged-b2143648-4c23-49b5-8777-433a5b34c7ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:52:55 compute-0 nova_compute[248510]: 2025-12-13 08:52:55.819 248514 WARNING nova.compute.manager [req-f8ce9988-fa63-4974-b649-c4f9a08f8eb2 req-ad4305a3-5183-42fc-9201-75cf5c3d59d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received unexpected event network-vif-unplugged-b2143648-4c23-49b5-8777-433a5b34c7ce for instance with vm_state active and task_state shelving_image_uploading.
Dec 13 08:52:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2738: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 56 KiB/s wr, 9 op/s
Dec 13 08:52:56 compute-0 ceph-mon[76537]: osdmap e295: 3 total, 3 up, 3 in
Dec 13 08:52:56 compute-0 nova_compute[248510]: 2025-12-13 08:52:56.684 248514 DEBUG nova.storage.rbd_utils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] removing snapshot(ef9ff3d602fb4ef884bff796ec97a142) on rbd image(fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:52:56 compute-0 podman[359491]: 2025-12-13 08:52:56.962041509 +0000 UTC m=+0.052865647 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:52:56 compute-0 podman[359490]: 2025-12-13 08:52:56.967115086 +0000 UTC m=+0.060425326 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 08:52:56 compute-0 podman[359489]: 2025-12-13 08:52:56.991924808 +0000 UTC m=+0.086378977 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:52:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Dec 13 08:52:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Dec 13 08:52:57 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Dec 13 08:52:57 compute-0 ceph-mon[76537]: pgmap v2738: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 56 KiB/s wr, 9 op/s
Dec 13 08:52:57 compute-0 nova_compute[248510]: 2025-12-13 08:52:57.458 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:57 compute-0 nova_compute[248510]: 2025-12-13 08:52:57.578 248514 DEBUG nova.storage.rbd_utils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] creating snapshot(snap) on rbd image(e80a280c-5146-4d78-99c1-0d3591de049e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:52:57 compute-0 nova_compute[248510]: 2025-12-13 08:52:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:52:58 compute-0 nova_compute[248510]: 2025-12-13 08:52:58.110 248514 DEBUG nova.compute.manager [req-7ff21b7c-2121-4bb9-b421-8721351dbc34 req-29ffaadc-1dd8-4fcc-9f80-6b40d9fc7927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:52:58 compute-0 nova_compute[248510]: 2025-12-13 08:52:58.110 248514 DEBUG oslo_concurrency.lockutils [req-7ff21b7c-2121-4bb9-b421-8721351dbc34 req-29ffaadc-1dd8-4fcc-9f80-6b40d9fc7927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:52:58 compute-0 nova_compute[248510]: 2025-12-13 08:52:58.110 248514 DEBUG oslo_concurrency.lockutils [req-7ff21b7c-2121-4bb9-b421-8721351dbc34 req-29ffaadc-1dd8-4fcc-9f80-6b40d9fc7927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:52:58 compute-0 nova_compute[248510]: 2025-12-13 08:52:58.111 248514 DEBUG oslo_concurrency.lockutils [req-7ff21b7c-2121-4bb9-b421-8721351dbc34 req-29ffaadc-1dd8-4fcc-9f80-6b40d9fc7927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:52:58 compute-0 nova_compute[248510]: 2025-12-13 08:52:58.111 248514 DEBUG nova.compute.manager [req-7ff21b7c-2121-4bb9-b421-8721351dbc34 req-29ffaadc-1dd8-4fcc-9f80-6b40d9fc7927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] No waiting events found dispatching network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:52:58 compute-0 nova_compute[248510]: 2025-12-13 08:52:58.111 248514 WARNING nova.compute.manager [req-7ff21b7c-2121-4bb9-b421-8721351dbc34 req-29ffaadc-1dd8-4fcc-9f80-6b40d9fc7927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received unexpected event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce for instance with vm_state active and task_state shelving_image_uploading.
Dec 13 08:52:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2740: 321 pgs: 321 active+clean; 135 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 681 KiB/s wr, 43 op/s
Dec 13 08:52:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Dec 13 08:52:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Dec 13 08:52:58 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Dec 13 08:52:58 compute-0 ceph-mon[76537]: osdmap e296: 3 total, 3 up, 3 in
Dec 13 08:52:58 compute-0 nova_compute[248510]: 2025-12-13 08:52:58.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:52:59 compute-0 ceph-mon[76537]: pgmap v2740: 321 pgs: 321 active+clean; 135 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 681 KiB/s wr, 43 op/s
Dec 13 08:52:59 compute-0 ceph-mon[76537]: osdmap e297: 3 total, 3 up, 3 in
Dec 13 08:52:59 compute-0 nova_compute[248510]: 2025-12-13 08:52:59.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2742: 321 pgs: 321 active+clean; 200 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 150 op/s
Dec 13 08:53:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:53:00 compute-0 ceph-mon[76537]: pgmap v2742: 321 pgs: 321 active+clean; 200 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 150 op/s
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.070 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.070 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.094 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.214 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.215 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.225 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.225 248514 INFO nova.compute.claims [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.444 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.810 248514 INFO nova.virt.libvirt.driver [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Snapshot image upload complete
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.811 248514 DEBUG nova.compute.manager [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.865 248514 INFO nova.compute.manager [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Shelve offloading
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.875 248514 INFO nova.virt.libvirt.driver [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance destroyed successfully.
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.876 248514 DEBUG nova.compute.manager [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.879 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.880 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:01 compute-0 nova_compute[248510]: 2025-12-13 08:53:01.880 248514 DEBUG nova.network.neutron [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:53:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:53:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039086334' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.051 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.057 248514 DEBUG nova.compute.provider_tree [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.077 248514 DEBUG nova.scheduler.client.report [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.106 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.107 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:53:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2743: 321 pgs: 321 active+clean; 200 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.4 MiB/s wr, 123 op/s
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.183 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.184 248514 DEBUG nova.network.neutron [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.210 248514 INFO nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.237 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:53:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4039086334' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.399 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.400 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.400 248514 INFO nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Creating image(s)
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.539 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.560 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.586 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.589 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.636 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.640 248514 DEBUG nova.policy [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.674 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.675 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.675 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.676 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.701 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:02 compute-0 nova_compute[248510]: 2025-12-13 08:53:02.705 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 af2dc023-560c-4c66-b330-e41218a7a4eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:03 compute-0 nova_compute[248510]: 2025-12-13 08:53:03.763 248514 DEBUG nova.network.neutron [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:03 compute-0 nova_compute[248510]: 2025-12-13 08:53:03.820 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:04 compute-0 ceph-mon[76537]: pgmap v2743: 321 pgs: 321 active+clean; 200 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.4 MiB/s wr, 123 op/s
Dec 13 08:53:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2744: 321 pgs: 321 active+clean; 209 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 6.1 MiB/s wr, 135 op/s
Dec 13 08:53:04 compute-0 nova_compute[248510]: 2025-12-13 08:53:04.434 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 af2dc023-560c-4c66-b330-e41218a7a4eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:04 compute-0 nova_compute[248510]: 2025-12-13 08:53:04.493 248514 DEBUG nova.network.neutron [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Successfully created port: 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:53:04 compute-0 nova_compute[248510]: 2025-12-13 08:53:04.498 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:53:04 compute-0 nova_compute[248510]: 2025-12-13 08:53:04.556 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:04 compute-0 nova_compute[248510]: 2025-12-13 08:53:04.746 248514 DEBUG nova.objects.instance [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid af2dc023-560c-4c66-b330-e41218a7a4eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:04 compute-0 nova_compute[248510]: 2025-12-13 08:53:04.797 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:53:04 compute-0 nova_compute[248510]: 2025-12-13 08:53:04.798 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Ensure instance console log exists: /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:53:04 compute-0 nova_compute[248510]: 2025-12-13 08:53:04.799 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:04 compute-0 nova_compute[248510]: 2025-12-13 08:53:04.799 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:04 compute-0 nova_compute[248510]: 2025-12-13 08:53:04.799 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:05 compute-0 ceph-mon[76537]: pgmap v2744: 321 pgs: 321 active+clean; 209 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 6.1 MiB/s wr, 135 op/s
Dec 13 08:53:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:53:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Dec 13 08:53:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Dec 13 08:53:05 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.448 248514 INFO nova.virt.libvirt.driver [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance destroyed successfully.
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.449 248514 DEBUG nova.objects.instance [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'resources' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.466 248514 DEBUG nova.virt.libvirt.vif [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:52:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1747506169',display_name='tempest-TestShelveInstance-server-1747506169',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1747506169',id=111,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAebkVWpSolmv9rvqo0lcOY35Sse8ZKcOxj7o5K69+ccF5rX0zOaHwOkKpIPYjoJDZ0lGFUDC6z1VdzfkWYgp+l6o2jOhZfRbhSP9Loovk747mSM12eSN6XwaL50YLDn8Q==',key_name='tempest-TestShelveInstance-1284595260',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:52:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ff4d2c6ad4dc4848ac9f55ff1b9e829a',ramdisk_id='',reservation_id='r-wfxenjt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-2105398574',owner_user_name='tempest-TestShelveInstance-2105398574-project-member',shelved_at='2025-12-13T08:53:01.811270',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e80a280c-5146-4d78-99c1-0d3591de049e'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:52:53Z,user_data=None,user_id='fa34623cd3de4a47aa57959f09b3ff79',uuid=fcc617ec-f5f9-41bb-ad4b-86d790622e74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.467 248514 DEBUG nova.network.os_vif_util [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converting VIF {"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.468 248514 DEBUG nova.network.os_vif_util [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.470 248514 DEBUG os_vif [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.472 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2143648-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.479 248514 INFO os_vif [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c')
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.573 248514 DEBUG nova.compute.manager [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.574 248514 DEBUG nova.compute.manager [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing instance network info cache due to event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.574 248514 DEBUG oslo_concurrency.lockutils [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.574 248514 DEBUG oslo_concurrency.lockutils [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.575 248514 DEBUG nova.network.neutron [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.850 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.851 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.874 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.974 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.974 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.983 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:53:05 compute-0 nova_compute[248510]: 2025-12-13 08:53:05.984 248514 INFO nova.compute.claims [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:53:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2746: 321 pgs: 321 active+clean; 209 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.5 MiB/s wr, 98 op/s
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.171 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.217 248514 INFO nova.virt.libvirt.driver [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deleting instance files /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74_del
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.218 248514 INFO nova.virt.libvirt.driver [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deletion of /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74_del complete
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.332 248514 INFO nova.scheduler.client.report [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Deleted allocations for instance fcc617ec-f5f9-41bb-ad4b-86d790622e74
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.414 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:06 compute-0 ceph-mon[76537]: osdmap e298: 3 total, 3 up, 3 in
Dec 13 08:53:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:53:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111711744' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.871 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.878 248514 DEBUG nova.compute.provider_tree [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.898 248514 DEBUG nova.scheduler.client.report [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.923 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.924 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.928 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.984 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:53:06 compute-0 nova_compute[248510]: 2025-12-13 08:53:06.985 248514 DEBUG nova.network.neutron [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.001 248514 DEBUG oslo_concurrency.processutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.051 248514 INFO nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.093 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.225 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.228 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.228 248514 INFO nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Creating image(s)
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.257 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.283 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.307 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.312 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.369 248514 DEBUG nova.network.neutron [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Successfully updated port: 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.373 248514 DEBUG nova.policy [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.400 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.400 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.401 248514 DEBUG nova.network.neutron [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.422 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.423 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.423 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.424 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.442 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.445 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.491 248514 DEBUG nova.compute.manager [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-changed-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.491 248514 DEBUG nova.compute.manager [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Refreshing instance network info cache due to event network-changed-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.491 248514 DEBUG oslo_concurrency.lockutils [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.592 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615972.5882154, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.592 248514 INFO nova.compute.manager [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Stopped (Lifecycle Event)
Dec 13 08:53:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:53:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3872345773' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:07 compute-0 ceph-mon[76537]: pgmap v2746: 321 pgs: 321 active+clean; 209 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.5 MiB/s wr, 98 op/s
Dec 13 08:53:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3111711744' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.617 248514 DEBUG nova.compute.manager [None req-b82a58b8-d8e1-427b-a513-05b6739c36f7 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.626 248514 DEBUG oslo_concurrency.processutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.632 248514 DEBUG nova.compute.provider_tree [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.669 248514 DEBUG nova.scheduler.client.report [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.709 248514 DEBUG nova.network.neutron [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.712 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.765 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.791 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.791 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.791 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.832 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.832 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.832 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.834 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.836 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 17.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.840 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.926 248514 DEBUG nova.objects.instance [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 2d2a33c7-0a90-4b64-b291-b268d37dce5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.955 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.955 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Ensure instance console log exists: /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.956 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.956 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:07 compute-0 nova_compute[248510]: 2025-12-13 08:53:07.956 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2747: 321 pgs: 321 active+clean; 194 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 132 op/s
Dec 13 08:53:08 compute-0 nova_compute[248510]: 2025-12-13 08:53:08.346 248514 DEBUG nova.network.neutron [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Successfully created port: 1babd66f-ec6a-4702-8a8f-839d32ba8761 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:53:08 compute-0 nova_compute[248510]: 2025-12-13 08:53:08.623 248514 DEBUG nova.network.neutron [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updated VIF entry in instance network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:53:08 compute-0 nova_compute[248510]: 2025-12-13 08:53:08.623 248514 DEBUG nova.network.neutron [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": null, "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapb2143648-4c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:08 compute-0 nova_compute[248510]: 2025-12-13 08:53:08.646 248514 DEBUG oslo_concurrency.lockutils [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3872345773' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:08 compute-0 ceph-mon[76537]: pgmap v2747: 321 pgs: 321 active+clean; 194 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 132 op/s
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.193739) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615989193864, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1357, "num_deletes": 252, "total_data_size": 2146655, "memory_usage": 2187696, "flush_reason": "Manual Compaction"}
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.214 248514 DEBUG nova.network.neutron [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Successfully updated port: 1babd66f-ec6a-4702-8a8f-839d32ba8761 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615989218182, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 2096873, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53205, "largest_seqno": 54561, "table_properties": {"data_size": 2090460, "index_size": 3616, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13719, "raw_average_key_size": 20, "raw_value_size": 2077476, "raw_average_value_size": 3041, "num_data_blocks": 162, "num_entries": 683, "num_filter_entries": 683, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615860, "oldest_key_time": 1765615860, "file_creation_time": 1765615989, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 24489 microseconds, and 5351 cpu microseconds.
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.218232) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 2096873 bytes OK
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.218254) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.226045) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.226113) EVENT_LOG_v1 {"time_micros": 1765615989226104, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.226139) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 2140576, prev total WAL file size 2140576, number of live WAL files 2.
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.227254) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(2047KB)], [125(8676KB)]
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615989227376, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10981695, "oldest_snapshot_seqno": -1}
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.235 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.235 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.235 248514 DEBUG nova.network.neutron [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.282 248514 DEBUG nova.network.neutron [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updating instance_info_cache with network_info: [{"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:53:09
Dec 13 08:53:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:53:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:53:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.control', 'vms']
Dec 13 08:53:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.313 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.313 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Instance network_info: |[{"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.314 248514 DEBUG oslo_concurrency.lockutils [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.314 248514 DEBUG nova.network.neutron [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Refreshing network info cache for port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.317 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Start _get_guest_xml network_info=[{"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.321 248514 DEBUG nova.compute.manager [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-changed-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.321 248514 DEBUG nova.compute.manager [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Refreshing instance network info cache due to event network-changed-1babd66f-ec6a-4702-8a8f-839d32ba8761. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.321 248514 DEBUG oslo_concurrency.lockutils [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.323 248514 WARNING nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.329 248514 DEBUG nova.virt.libvirt.host [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.329 248514 DEBUG nova.virt.libvirt.host [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.333 248514 DEBUG nova.virt.libvirt.host [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.334 248514 DEBUG nova.virt.libvirt.host [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.334 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.334 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.335 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.335 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.335 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.335 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.336 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.336 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.336 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.336 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.337 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.337 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7414 keys, 9200330 bytes, temperature: kUnknown
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615989338246, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 9200330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9152687, "index_size": 28010, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 194409, "raw_average_key_size": 26, "raw_value_size": 9021865, "raw_average_value_size": 1216, "num_data_blocks": 1086, "num_entries": 7414, "num_filter_entries": 7414, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615989, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.341 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.338459) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 9200330 bytes
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.344558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 99.2 rd, 83.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.5 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(9.6) write-amplify(4.4) OK, records in: 7934, records dropped: 520 output_compression: NoCompression
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.344591) EVENT_LOG_v1 {"time_micros": 1765615989344578, "job": 76, "event": "compaction_finished", "compaction_time_micros": 110722, "compaction_time_cpu_micros": 48275, "output_level": 6, "num_output_files": 1, "total_output_size": 9200330, "num_input_records": 7934, "num_output_records": 7414, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615989345121, "job": 76, "event": "table_file_deletion", "file_number": 127}
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615989346655, "job": 76, "event": "table_file_deletion", "file_number": 125}
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.226996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.346693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.346697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.346699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.346700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:53:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.346702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.511 248514 DEBUG nova.network.neutron [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:53:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:09.646 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:09.648 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated
Dec 13 08:53:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:09.650 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:53:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:09.651 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb37c20-6473-4a5e-b099-2bec28f618cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:53:09 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2912515147' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.928 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.951 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:09 compute-0 nova_compute[248510]: 2025-12-13 08:53:09.956 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2748: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 4.3 MiB/s wr, 110 op/s
Dec 13 08:53:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:53:10 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862715630' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.561 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.564 248514 DEBUG nova.virt.libvirt.vif [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-673770696',display_name='tempest-TestNetworkBasicOps-server-673770696',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-673770696',id=112,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFWWKdP0apeEEX6KLq89U2vRGSHeV3KAUwR7F/v8SOdmJ9w4un8uAKW6W1VsXiUAnc8fLGuX3ip0yk759e6Z6EnqMVZe+COaAk19ulIyzOUeifphXpMKMaa2a+4orpaKaw==',key_name='tempest-TestNetworkBasicOps-724159602',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-ahio6eh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:02Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=af2dc023-560c-4c66-b330-e41218a7a4eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.565 248514 DEBUG nova.network.os_vif_util [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.566 248514 DEBUG nova.network.os_vif_util [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.569 248514 DEBUG nova.objects.instance [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid af2dc023-560c-4c66-b330-e41218a7a4eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.596 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:53:10 compute-0 nova_compute[248510]:   <uuid>af2dc023-560c-4c66-b330-e41218a7a4eb</uuid>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   <name>instance-00000070</name>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-673770696</nova:name>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:53:09</nova:creationTime>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:53:10 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:53:10 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:53:10 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:53:10 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:53:10 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:53:10 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:53:10 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:53:10 compute-0 nova_compute[248510]:         <nova:port uuid="0eac2381-7f12-4f67-bde8-76c8fb9ae0b0">
Dec 13 08:53:10 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <system>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <entry name="serial">af2dc023-560c-4c66-b330-e41218a7a4eb</entry>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <entry name="uuid">af2dc023-560c-4c66-b330-e41218a7a4eb</entry>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     </system>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   <os>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   </os>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   <features>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   </features>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/af2dc023-560c-4c66-b330-e41218a7a4eb_disk">
Dec 13 08:53:10 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       </source>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:53:10 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/af2dc023-560c-4c66-b330-e41218a7a4eb_disk.config">
Dec 13 08:53:10 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       </source>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:53:10 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:6a:eb:91"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <target dev="tap0eac2381-7f"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/console.log" append="off"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <video>
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     </video>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:53:10 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:53:10 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:53:10 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:53:10 compute-0 nova_compute[248510]: </domain>
Dec 13 08:53:10 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.597 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Preparing to wait for external event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.597 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.598 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.598 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.600 248514 DEBUG nova.virt.libvirt.vif [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-673770696',display_name='tempest-TestNetworkBasicOps-server-673770696',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-673770696',id=112,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFWWKdP0apeEEX6KLq89U2vRGSHeV3KAUwR7F/v8SOdmJ9w4un8uAKW6W1VsXiUAnc8fLGuX3ip0yk759e6Z6EnqMVZe+COaAk19ulIyzOUeifphXpMKMaa2a+4orpaKaw==',key_name='tempest-TestNetworkBasicOps-724159602',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-ahio6eh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:02Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=af2dc023-560c-4c66-b330-e41218a7a4eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.600 248514 DEBUG nova.network.os_vif_util [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.601 248514 DEBUG nova.network.os_vif_util [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.602 248514 DEBUG os_vif [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.603 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.604 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.608 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.608 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0eac2381-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.609 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0eac2381-7f, col_values=(('external_ids', {'iface-id': '0eac2381-7f12-4f67-bde8-76c8fb9ae0b0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:eb:91', 'vm-uuid': 'af2dc023-560c-4c66-b330-e41218a7a4eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:10 compute-0 NetworkManager[50376]: <info>  [1765615990.6119] manager: (tap0eac2381-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.613 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.616 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.616 248514 INFO os_vif [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f')
Dec 13 08:53:10 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2912515147' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.702 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.702 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.702 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:6a:eb:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.703 248514 INFO nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Using config drive
Dec 13 08:53:10 compute-0 nova_compute[248510]: 2025-12-13 08:53:10.722 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:53:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.082 248514 DEBUG nova.network.neutron [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updating instance_info_cache with network_info: [{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.108 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.108 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Instance network_info: |[{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.108 248514 DEBUG oslo_concurrency.lockutils [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.109 248514 DEBUG nova.network.neutron [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Refreshing network info cache for port 1babd66f-ec6a-4702-8a8f-839d32ba8761 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.111 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Start _get_guest_xml network_info=[{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.115 248514 WARNING nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.124 248514 DEBUG nova.virt.libvirt.host [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.124 248514 DEBUG nova.virt.libvirt.host [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.132 248514 DEBUG nova.virt.libvirt.host [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.132 248514 DEBUG nova.virt.libvirt.host [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.133 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.133 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.133 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.133 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.134 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.134 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.134 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.134 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.134 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.134 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.135 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.135 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.138 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.307 248514 INFO nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Creating config drive at /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/disk.config
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.312 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnx0mqudc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.457 248514 DEBUG nova.network.neutron [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updated VIF entry in instance network info cache for port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.458 248514 DEBUG nova.network.neutron [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updating instance_info_cache with network_info: [{"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.461 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnx0mqudc" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.486 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.490 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/disk.config af2dc023-560c-4c66-b330-e41218a7a4eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.534 248514 DEBUG oslo_concurrency.lockutils [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:11 compute-0 ceph-mon[76537]: pgmap v2748: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 4.3 MiB/s wr, 110 op/s
Dec 13 08:53:11 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/862715630' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.653 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/disk.config af2dc023-560c-4c66-b330-e41218a7a4eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.653 248514 INFO nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Deleting local config drive /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/disk.config because it was imported into RBD.
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.664 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.668 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.668 248514 INFO nova.compute.manager [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Unshelving
Dec 13 08:53:11 compute-0 kernel: tap0eac2381-7f: entered promiscuous mode
Dec 13 08:53:11 compute-0 ovn_controller[148476]: 2025-12-13T08:53:11Z|01086|binding|INFO|Claiming lport 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 for this chassis.
Dec 13 08:53:11 compute-0 NetworkManager[50376]: <info>  [1765615991.7089] manager: (tap0eac2381-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Dec 13 08:53:11 compute-0 ovn_controller[148476]: 2025-12-13T08:53:11Z|01087|binding|INFO|0eac2381-7f12-4f67-bde8-76c8fb9ae0b0: Claiming fa:16:3e:6a:eb:91 10.100.0.8
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.716 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:eb:91 10.100.0.8'], port_security=['fa:16:3e:6a:eb:91 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'af2dc023-560c-4c66-b330-e41218a7a4eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09b8bd52-e920-467f-994b-646113fcb821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6cedfe2-a795-4750-8f73-fd0610750728', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7e75a11b-9fc0-4a04-84da-8ed3853196e7, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.719 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 in datapath 09b8bd52-e920-467f-994b-646113fcb821 bound to our chassis
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.721 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09b8bd52-e920-467f-994b-646113fcb821
Dec 13 08:53:11 compute-0 ovn_controller[148476]: 2025-12-13T08:53:11Z|01088|binding|INFO|Setting lport 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 ovn-installed in OVS
Dec 13 08:53:11 compute-0 ovn_controller[148476]: 2025-12-13T08:53:11Z|01089|binding|INFO|Setting lport 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 up in Southbound
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.730 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.738 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b37dae6e-b762-4875-8cb2-62223f739a72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.740 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap09b8bd52-e1 in ovnmeta-09b8bd52-e920-467f-994b-646113fcb821 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:53:11 compute-0 systemd-machined[210538]: New machine qemu-138-instance-00000070.
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.743 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap09b8bd52-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.744 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1526e835-221f-4d34-bac4-abd5ba3ac571]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.746 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[adfbaadd-70b3-48b8-bb9b-a80adc1da533]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:11 compute-0 systemd-udevd[360141]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:53:11 compute-0 systemd[1]: Started Virtual Machine qemu-138-instance-00000070.
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.761 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[3534c9a7-a7ac-4482-befe-7b9d40661ca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:11 compute-0 NetworkManager[50376]: <info>  [1765615991.7673] device (tap0eac2381-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:53:11 compute-0 NetworkManager[50376]: <info>  [1765615991.7679] device (tap0eac2381-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:53:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:53:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2374952932' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.776 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.776 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.784 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'pci_requests' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.789 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[61474119-ca60-416a-afd0-5d53f35337a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.798 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.821 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[18ffba9f-72ef-4d20-acb9-152a4c8ef946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.827 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d75155c-647b-4fdf-837c-16f0831f988d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:11 compute-0 NetworkManager[50376]: <info>  [1765615991.8282] manager: (tap09b8bd52-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/451)
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.829 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.838 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.861 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[89621412-6cb4-459b-b854-b70d9e9c915c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.865 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[10c40cd0-4411-45ae-80ee-9498c0265377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:11 compute-0 NetworkManager[50376]: <info>  [1765615991.8903] device (tap09b8bd52-e0): carrier: link connected
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.893 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.895 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[74251966-5e0b-46c1-b65c-19632d969938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.899 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'numa_topology' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db0f2abc-d5ef-43ac-aeb7-28291285a69d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09b8bd52-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:21:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856910, 'reachable_time': 28899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360194, 'error': None, 'target': 'ovnmeta-09b8bd52-e920-467f-994b-646113fcb821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.923 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:53:11 compute-0 nova_compute[248510]: 2025-12-13 08:53:11.924 248514 INFO nova.compute.claims [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.940 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f58d0cbc-4ce6-4c4b-a376-b1e8b080ded4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:213c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856910, 'tstamp': 856910}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360195, 'error': None, 'target': 'ovnmeta-09b8bd52-e920-467f-994b-646113fcb821', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.966 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57538ebc-5408-4ab6-9649-654bf312dadc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09b8bd52-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:21:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856910, 'reachable_time': 28899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360211, 'error': None, 'target': 'ovnmeta-09b8bd52-e920-467f-994b-646113fcb821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.008 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[183b15dc-7c9a-4c35-b3e3-6671db4cd850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.080 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c009e97d-e50e-4d93-b9b6-468235b881a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.083 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09b8bd52-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.084 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.085 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09b8bd52-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.126 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:12 compute-0 NetworkManager[50376]: <info>  [1765615992.1349] manager: (tap09b8bd52-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Dec 13 08:53:12 compute-0 kernel: tap09b8bd52-e0: entered promiscuous mode
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.140 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09b8bd52-e0, col_values=(('external_ids', {'iface-id': 'eef5d4b2-f2d3-4d15-9528-0e68d65ce454'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.144 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/09b8bd52-e920-467f-994b-646113fcb821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/09b8bd52-e920-467f-994b-646113fcb821.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:53:12 compute-0 ovn_controller[148476]: 2025-12-13T08:53:12Z|01090|binding|INFO|Releasing lport eef5d4b2-f2d3-4d15-9528-0e68d65ce454 from this chassis (sb_readonly=0)
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.149 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[346c753e-55ef-4212-9d38-2cf51b400b5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.151 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-09b8bd52-e920-467f-994b-646113fcb821
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/09b8bd52-e920-467f-994b-646113fcb821.pid.haproxy
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 09b8bd52-e920-467f-994b-646113fcb821
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:53:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.151 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-09b8bd52-e920-467f-994b-646113fcb821', 'env', 'PROCESS_TAG=haproxy-09b8bd52-e920-467f-994b-646113fcb821', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/09b8bd52-e920-467f-994b-646113fcb821.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:53:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2749: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 4.3 MiB/s wr, 110 op/s
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.167 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615992.150785, af2dc023-560c-4c66-b330-e41218a7a4eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.168 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] VM Started (Lifecycle Event)
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.171 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.198 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.205 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615992.1513333, af2dc023-560c-4c66-b330-e41218a7a4eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.206 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] VM Paused (Lifecycle Event)
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.228 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.236 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.247 248514 DEBUG nova.compute.manager [req-6433a2a5-cd70-4ee3-a76a-359c8186ae59 req-4f294c77-636a-421a-8d63-8899e0115788 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.248 248514 DEBUG oslo_concurrency.lockutils [req-6433a2a5-cd70-4ee3-a76a-359c8186ae59 req-4f294c77-636a-421a-8d63-8899e0115788 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.248 248514 DEBUG oslo_concurrency.lockutils [req-6433a2a5-cd70-4ee3-a76a-359c8186ae59 req-4f294c77-636a-421a-8d63-8899e0115788 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.248 248514 DEBUG oslo_concurrency.lockutils [req-6433a2a5-cd70-4ee3-a76a-359c8186ae59 req-4f294c77-636a-421a-8d63-8899e0115788 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.248 248514 DEBUG nova.compute.manager [req-6433a2a5-cd70-4ee3-a76a-359c8186ae59 req-4f294c77-636a-421a-8d63-8899e0115788 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Processing event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.249 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.253 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.261 248514 INFO nova.virt.libvirt.driver [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Instance spawned successfully.
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.261 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.268 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.269 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615992.252289, af2dc023-560c-4c66-b330-e41218a7a4eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.269 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] VM Resumed (Lifecycle Event)
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.286 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.288 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.289 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.289 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.289 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.290 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.290 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.319 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.372 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.414 248514 INFO nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Took 10.01 seconds to spawn the instance on the hypervisor.
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.414 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:53:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385961892' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.472 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.473 248514 DEBUG nova.virt.libvirt.vif [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:53:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=113,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGUq/AunPgAiW/gJ0J2O7Gg2tWHaC1FajarqmNhDhKsCdHoif6mzxgzomxd0iSuMUg8lqNyZetu6VpRVmxHZgqpb0ZC3j0IoZa8EwnwWYlucFJrp/EYR/i0MEEr0woib3Q==',key_name='tempest-TestSecurityGroupsBasicOps-33804060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-ogye2bl2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:07Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=2d2a33c7-0a90-4b64-b291-b268d37dce5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.474 248514 DEBUG nova.network.os_vif_util [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.475 248514 DEBUG nova.network.os_vif_util [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.476 248514 DEBUG nova.objects.instance [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d2a33c7-0a90-4b64-b291-b268d37dce5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.531 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:53:12 compute-0 nova_compute[248510]:   <uuid>2d2a33c7-0a90-4b64-b291-b268d37dce5e</uuid>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   <name>instance-00000071</name>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416</nova:name>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:53:11</nova:creationTime>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:53:12 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:53:12 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:53:12 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:53:12 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:53:12 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:53:12 compute-0 nova_compute[248510]:         <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 08:53:12 compute-0 nova_compute[248510]:         <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:53:12 compute-0 nova_compute[248510]:         <nova:port uuid="1babd66f-ec6a-4702-8a8f-839d32ba8761">
Dec 13 08:53:12 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <system>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <entry name="serial">2d2a33c7-0a90-4b64-b291-b268d37dce5e</entry>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <entry name="uuid">2d2a33c7-0a90-4b64-b291-b268d37dce5e</entry>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     </system>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   <os>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   </os>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   <features>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   </features>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk">
Dec 13 08:53:12 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       </source>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:53:12 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk.config">
Dec 13 08:53:12 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       </source>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:53:12 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:b1:81:46"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <target dev="tap1babd66f-ec"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/console.log" append="off"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <video>
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     </video>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:53:12 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:53:12 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:53:12 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:53:12 compute-0 nova_compute[248510]: </domain>
Dec 13 08:53:12 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
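[Editor's note, not part of the log: the domain XML dumped by `_get_guest_xml` above carries Nova's instance metadata under the `http://openstack.org/xmlns/libvirt/nova/1.1` namespace. A minimal sketch of pulling fields out of such a dump with the standard library, using a trimmed copy of the XML from this log record; nothing here is a Nova API, just plain `xml.etree.ElementTree`.]

```python
import xml.etree.ElementTree as ET

# Trimmed reproduction of the domain XML logged by _get_guest_xml above;
# only the elements needed for this demonstration are kept.
DOMAIN_XML = """<domain type="kvm">
  <uuid>2d2a33c7-0a90-4b64-b291-b268d37dce5e</uuid>
  <name>instance-00000071</name>
  <memory>131072</memory>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
      <nova:flavor name="m1.nano">
        <nova:memory>128</nova:memory>
        <nova:vcpus>1</nova:vcpus>
      </nova:flavor>
    </nova:instance>
  </metadata>
</domain>"""

# Namespaced elements must be looked up with an explicit prefix mapping.
NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

root = ET.fromstring(DOMAIN_XML)
uuid = root.findtext("uuid")
flavor = root.find(".//nova:flavor", NS)

print(uuid)                                            # 2d2a33c7-0a90-4b64-b291-b268d37dce5e
print(flavor.get("name"))                              # m1.nano
print(flavor.findtext("nova:memory", namespaces=NS))   # 128
```

[Note that `<memory>` at domain level is in KiB (131072 = 128 MiB), while `<nova:memory>` in the metadata is the flavor's MiB value.]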
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.531 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Preparing to wait for external event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.531 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.532 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.532 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.532 248514 DEBUG nova.virt.libvirt.vif [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:53:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=113,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGUq/AunPgAiW/gJ0J2O7Gg2tWHaC1FajarqmNhDhKsCdHoif6mzxgzomxd0iSuMUg8lqNyZetu6VpRVmxHZgqpb0ZC3j0IoZa8EwnwWYlucFJrp/EYR/i0MEEr0woib3Q==',key_name='tempest-TestSecurityGroupsBasicOps-33804060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-ogye2bl2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:07Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=2d2a33c7-0a90-4b64-b291-b268d37dce5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.533 248514 DEBUG nova.network.os_vif_util [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.533 248514 DEBUG nova.network.os_vif_util [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.534 248514 DEBUG os_vif [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.534 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.535 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.535 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.541 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1babd66f-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.542 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1babd66f-ec, col_values=(('external_ids', {'iface-id': '1babd66f-ec6a-4702-8a8f-839d32ba8761', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:81:46', 'vm-uuid': '2d2a33c7-0a90-4b64-b291-b268d37dce5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:12 compute-0 NetworkManager[50376]: <info>  [1765615992.5441] manager: (tap1babd66f-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.552 248514 INFO os_vif [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec')
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.565 248514 INFO nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Took 11.40 seconds to build instance.
Dec 13 08:53:12 compute-0 podman[360310]: 2025-12-13 08:53:12.56824727 +0000 UTC m=+0.062351744 container create e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.599 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:12 compute-0 systemd[1]: Started libpod-conmon-e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d.scope.
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.621 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.622 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.622 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:b1:81:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.623 248514 INFO nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Using config drive
Dec 13 08:53:12 compute-0 podman[360310]: 2025-12-13 08:53:12.530435382 +0000 UTC m=+0.024539876 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:53:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2374952932' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:12 compute-0 ceph-mon[76537]: pgmap v2749: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 4.3 MiB/s wr, 110 op/s
Dec 13 08:53:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/385961892' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.652 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de9dacfc504e0470aec8b697e93be4e6449bd328978b3076d04e8364d2802e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:12 compute-0 podman[360310]: 2025-12-13 08:53:12.67431901 +0000 UTC m=+0.168423504 container init e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:53:12 compute-0 podman[360310]: 2025-12-13 08:53:12.680959656 +0000 UTC m=+0.175064130 container start e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:53:12 compute-0 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [NOTICE]   (360349) : New worker (360351) forked
Dec 13 08:53:12 compute-0 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [NOTICE]   (360349) : Loading success.
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.797 248514 DEBUG nova.network.neutron [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updated VIF entry in instance network info cache for port 1babd66f-ec6a-4702-8a8f-839d32ba8761. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.797 248514 DEBUG nova.network.neutron [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updating instance_info_cache with network_info: [{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
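[Editor's note, not part of the log: the `network_info` entry written to the instance info cache above is a JSON structure. A minimal sketch of extracting the fixed IP, tap device, and MTU from such an entry with the standard `json` module, against a trimmed copy of the VIF record from this log; the field names are taken verbatim from the log, not from any documented schema.]

```python
import json

# The network_info VIF entry logged above, trimmed to the fields used below.
VIF_JSON = """{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761",
 "address": "fa:16:3e:b1:81:46",
 "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746",
             "subnets": [{"cidr": "10.100.0.0/28",
                          "ips": [{"address": "10.100.0.10", "type": "fixed"}]}],
             "meta": {"mtu": 1442, "tunneled": true}},
 "devname": "tap1babd66f-ec",
 "active": false}"""

vif = json.loads(VIF_JSON)

# Collect every address of type "fixed" across all subnets on the network.
fixed_ips = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"]
             if ip["type"] == "fixed"]

print(vif["devname"])                   # tap1babd66f-ec
print(fixed_ips)                        # ['10.100.0.10']
print(vif["network"]["meta"]["mtu"])    # 1442
```

[The 1442-byte MTU reflects the Geneve tunnel overhead on this OVN deployment, consistent with `"tunneled": true` in the record.]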
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.817 248514 DEBUG oslo_concurrency.lockutils [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:53:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2999988297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.867 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.874 248514 DEBUG nova.compute.provider_tree [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.899 248514 DEBUG nova.scheduler.client.report [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.924 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.928 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.928 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.928 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:53:12 compute-0 nova_compute[248510]: 2025-12-13 08:53:12.929 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.135 248514 INFO nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Creating config drive at /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/disk.config
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.140 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpti75ytre execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.225 248514 INFO nova.network.neutron [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating port b2143648-4c23-49b5-8777-433a5b34c7ce with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.294 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpti75ytre" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.317 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.322 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/disk.config 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.372 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.374 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.373 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:53:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336295226' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.579 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2999988297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/336295226' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.664 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.664 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.670 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.671 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.733 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/disk.config 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.734 248514 INFO nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Deleting local config drive /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/disk.config because it was imported into RBD.
Dec 13 08:53:13 compute-0 kernel: tap1babd66f-ec: entered promiscuous mode
Dec 13 08:53:13 compute-0 systemd-udevd[360188]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:53:13 compute-0 NetworkManager[50376]: <info>  [1765615993.8007] manager: (tap1babd66f-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/454)
Dec 13 08:53:13 compute-0 ovn_controller[148476]: 2025-12-13T08:53:13Z|01091|binding|INFO|Claiming lport 1babd66f-ec6a-4702-8a8f-839d32ba8761 for this chassis.
Dec 13 08:53:13 compute-0 ovn_controller[148476]: 2025-12-13T08:53:13Z|01092|binding|INFO|1babd66f-ec6a-4702-8a8f-839d32ba8761: Claiming fa:16:3e:b1:81:46 10.100.0.10
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.810 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:81:46 10.100.0.10'], port_security=['fa:16:3e:b1:81:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2d2a33c7-0a90-4b64-b291-b268d37dce5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3479ed9a-2670-4333-b282-6f40685ff746', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '57e2154e-1e2d-4537-afe5-11c61b80fdbc ee7df75c-fefa-4bc0-977e-537259cc7755', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9839b158-8451-4098-b558-d860fc6a92ca, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1babd66f-ec6a-4702-8a8f-839d32ba8761) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.811 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1babd66f-ec6a-4702-8a8f-839d32ba8761 in datapath 3479ed9a-2670-4333-b282-6f40685ff746 bound to our chassis
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.813 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3479ed9a-2670-4333-b282-6f40685ff746
Dec 13 08:53:13 compute-0 NetworkManager[50376]: <info>  [1765615993.8179] device (tap1babd66f-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:53:13 compute-0 NetworkManager[50376]: <info>  [1765615993.8191] device (tap1babd66f-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:53:13 compute-0 ovn_controller[148476]: 2025-12-13T08:53:13Z|01093|binding|INFO|Setting lport 1babd66f-ec6a-4702-8a8f-839d32ba8761 ovn-installed in OVS
Dec 13 08:53:13 compute-0 ovn_controller[148476]: 2025-12-13T08:53:13Z|01094|binding|INFO|Setting lport 1babd66f-ec6a-4702-8a8f-839d32ba8761 up in Southbound
Dec 13 08:53:13 compute-0 nova_compute[248510]: 2025-12-13 08:53:13.823 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.831 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[43fc1189-5dda-4053-9151-b7a91f8462b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.833 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3479ed9a-21 in ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.835 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3479ed9a-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.835 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce53cb34-7cb3-4862-bed9-94d5a2b984e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.837 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71801a36-a1ce-49e3-90b8-d4d6896bc640]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.851 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[a191b44d-d86c-42cc-ae4f-74ee96acd85d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:13 compute-0 systemd-machined[210538]: New machine qemu-139-instance-00000071.
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.869 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3245e391-111d-4d5c-9e0a-4c7a70642f53]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:13 compute-0 systemd[1]: Started Virtual Machine qemu-139-instance-00000071.
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.909 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8fe2fe-6f8c-45ab-82ef-4b221930b6bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:13 compute-0 NetworkManager[50376]: <info>  [1765615993.9202] manager: (tap3479ed9a-20): new Veth device (/org/freedesktop/NetworkManager/Devices/455)
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.921 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[00f652fd-887f-4a28-8d2d-f5a71ac9bc39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.966 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8b145423-6918-4ea1-a706-36994331ae59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.971 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[65749387-8a21-4af9-b93c-aa077e162d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:14 compute-0 NetworkManager[50376]: <info>  [1765615994.0004] device (tap3479ed9a-20): carrier: link connected
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.006 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b87cf192-90a1-4611-9a7e-12831297f894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.023 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[03d7d402-2c41-4deb-980b-fffce20be722]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3479ed9a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:9f:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857121, 'reachable_time': 24238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360454, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.066 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.064 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4550e5cf-6260-416a-a2fd-5f5c5df62a8c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3d:9fbc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 857121, 'tstamp': 857121}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360455, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.068 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3512MB free_disk=59.946029658429325GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.069 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.069 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.089 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db3d6e35-b0af-485b-82f3-ab1eb4c34f83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3479ed9a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:9f:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857121, 'reachable_time': 24238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360456, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.152 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b7ce1b-cb55-48a2-9917-887a5ec92425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.157 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance af2dc023-560c-4c66-b330-e41218a7a4eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.158 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2d2a33c7-0a90-4b64-b291-b268d37dce5e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.158 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance fcc617ec-f5f9-41bb-ad4b-86d790622e74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.158 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.159 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:53:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2750: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.1 MiB/s wr, 154 op/s
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.223 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6938b3cc-37c0-4454-ab69-e70f2e55518f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.225 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3479ed9a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.225 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.225 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3479ed9a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:14 compute-0 kernel: tap3479ed9a-20: entered promiscuous mode
Dec 13 08:53:14 compute-0 NetworkManager[50376]: <info>  [1765615994.2281] manager: (tap3479ed9a-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.228 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.230 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3479ed9a-20, col_values=(('external_ids', {'iface-id': 'd8892183-3d82-41f5-b0bd-dc5a1c170b1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:14 compute-0 ovn_controller[148476]: 2025-12-13T08:53:14Z|01095|binding|INFO|Releasing lport d8892183-3d82-41f5-b0bd-dc5a1c170b1a from this chassis (sb_readonly=0)
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.234 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.294 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3479ed9a-2670-4333-b282-6f40685ff746.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3479ed9a-2670-4333-b282-6f40685ff746.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.295 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d2101a19-03f0-4dab-a2fe-ca075593e4d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.296 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-3479ed9a-2670-4333-b282-6f40685ff746
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/3479ed9a-2670-4333-b282-6f40685ff746.pid.haproxy
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 3479ed9a-2670-4333-b282-6f40685ff746
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:53:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.297 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'env', 'PROCESS_TAG=haproxy-3479ed9a-2670-4333-b282-6f40685ff746', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3479ed9a-2670-4333-b282-6f40685ff746.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.295 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:14 compute-0 ceph-mon[76537]: pgmap v2750: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.1 MiB/s wr, 154 op/s
Dec 13 08:53:14 compute-0 podman[360507]: 2025-12-13 08:53:14.76347091 +0000 UTC m=+0.112778629 container create b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 08:53:14 compute-0 podman[360507]: 2025-12-13 08:53:14.676600742 +0000 UTC m=+0.025908491 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:53:14 compute-0 systemd[1]: Started libpod-conmon-b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a.scope.
Dec 13 08:53:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15bfa81170ec655fc806f556ce96cba9f04e8c1b8d6c8296ea616ed5bdca96a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:53:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2191403147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:14 compute-0 podman[360507]: 2025-12-13 08:53:14.862238557 +0000 UTC m=+0.211546306 container init b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 08:53:14 compute-0 podman[360507]: 2025-12-13 08:53:14.871084638 +0000 UTC m=+0.220392357 container start b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.888 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:14 compute-0 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [NOTICE]   (360543) : New worker (360547) forked
Dec 13 08:53:14 compute-0 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [NOTICE]   (360543) : Loading success.
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.897 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.919 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.944 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:53:14 compute-0 nova_compute[248510]: 2025-12-13 08:53:14.944 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:53:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3088222618' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:53:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:53:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3088222618' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:53:15 compute-0 nova_compute[248510]: 2025-12-13 08:53:15.065 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615995.0646038, 2d2a33c7-0a90-4b64-b291-b268d37dce5e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:15 compute-0 nova_compute[248510]: 2025-12-13 08:53:15.065 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] VM Started (Lifecycle Event)
Dec 13 08:53:15 compute-0 nova_compute[248510]: 2025-12-13 08:53:15.095 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:15 compute-0 nova_compute[248510]: 2025-12-13 08:53:15.100 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615995.0702167, 2d2a33c7-0a90-4b64-b291-b268d37dce5e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:15 compute-0 nova_compute[248510]: 2025-12-13 08:53:15.100 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] VM Paused (Lifecycle Event)
Dec 13 08:53:15 compute-0 nova_compute[248510]: 2025-12-13 08:53:15.123 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:15 compute-0 nova_compute[248510]: 2025-12-13 08:53:15.127 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:53:15 compute-0 nova_compute[248510]: 2025-12-13 08:53:15.151 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:53:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:53:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2191403147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3088222618' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:53:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3088222618' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:53:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2751: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.8 MiB/s wr, 144 op/s
Dec 13 08:53:16 compute-0 nova_compute[248510]: 2025-12-13 08:53:16.555 248514 DEBUG nova.compute.manager [req-8ffe3d03-5fd6-4497-a9ed-d343428f5b6d req-96e6711f-7f11-4e76-9757-8cf3d447ee4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:16 compute-0 nova_compute[248510]: 2025-12-13 08:53:16.556 248514 DEBUG oslo_concurrency.lockutils [req-8ffe3d03-5fd6-4497-a9ed-d343428f5b6d req-96e6711f-7f11-4e76-9757-8cf3d447ee4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:16 compute-0 nova_compute[248510]: 2025-12-13 08:53:16.556 248514 DEBUG oslo_concurrency.lockutils [req-8ffe3d03-5fd6-4497-a9ed-d343428f5b6d req-96e6711f-7f11-4e76-9757-8cf3d447ee4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:16 compute-0 nova_compute[248510]: 2025-12-13 08:53:16.556 248514 DEBUG oslo_concurrency.lockutils [req-8ffe3d03-5fd6-4497-a9ed-d343428f5b6d req-96e6711f-7f11-4e76-9757-8cf3d447ee4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:16 compute-0 nova_compute[248510]: 2025-12-13 08:53:16.556 248514 DEBUG nova.compute.manager [req-8ffe3d03-5fd6-4497-a9ed-d343428f5b6d req-96e6711f-7f11-4e76-9757-8cf3d447ee4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] No waiting events found dispatching network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:53:16 compute-0 nova_compute[248510]: 2025-12-13 08:53:16.556 248514 WARNING nova.compute.manager [req-8ffe3d03-5fd6-4497-a9ed-d343428f5b6d req-96e6711f-7f11-4e76-9757-8cf3d447ee4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received unexpected event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 for instance with vm_state active and task_state None.
Dec 13 08:53:17 compute-0 ceph-mon[76537]: pgmap v2751: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.8 MiB/s wr, 144 op/s
Dec 13 08:53:17 compute-0 nova_compute[248510]: 2025-12-13 08:53:17.467 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:17 compute-0 nova_compute[248510]: 2025-12-13 08:53:17.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:17 compute-0 nova_compute[248510]: 2025-12-13 08:53:17.957 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:53:17 compute-0 nova_compute[248510]: 2025-12-13 08:53:17.957 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:53:17 compute-0 nova_compute[248510]: 2025-12-13 08:53:17.958 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:53:17 compute-0 nova_compute[248510]: 2025-12-13 08:53:17.958 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:53:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2752: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 159 op/s
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.239 248514 DEBUG nova.compute.manager [req-7b4049a3-56b2-47c3-a3ef-f496d64a8dc7 req-712cf4c0-add8-4f3b-9a05-96dbfdfa343c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.239 248514 DEBUG oslo_concurrency.lockutils [req-7b4049a3-56b2-47c3-a3ef-f496d64a8dc7 req-712cf4c0-add8-4f3b-9a05-96dbfdfa343c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.239 248514 DEBUG oslo_concurrency.lockutils [req-7b4049a3-56b2-47c3-a3ef-f496d64a8dc7 req-712cf4c0-add8-4f3b-9a05-96dbfdfa343c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.240 248514 DEBUG oslo_concurrency.lockutils [req-7b4049a3-56b2-47c3-a3ef-f496d64a8dc7 req-712cf4c0-add8-4f3b-9a05-96dbfdfa343c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.240 248514 DEBUG nova.compute.manager [req-7b4049a3-56b2-47c3-a3ef-f496d64a8dc7 req-712cf4c0-add8-4f3b-9a05-96dbfdfa343c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Processing event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.240 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.252 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615999.2442622, 2d2a33c7-0a90-4b64-b291-b268d37dce5e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.259 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] VM Resumed (Lifecycle Event)
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.261 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.271 248514 INFO nova.virt.libvirt.driver [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Instance spawned successfully.
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.272 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:53:19 compute-0 ceph-mon[76537]: pgmap v2752: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 159 op/s
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.287 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.291 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.312 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.314 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.314 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.314 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.315 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.315 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.319 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.377 248514 INFO nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Took 12.15 seconds to spawn the instance on the hypervisor.
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.377 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.475 248514 INFO nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Took 13.54 seconds to build instance.
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.481 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.481 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.482 248514 DEBUG nova.network.neutron [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:53:19 compute-0 nova_compute[248510]: 2025-12-13 08:53:19.511 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2753: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 13 08:53:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:20.376 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:53:20 compute-0 ceph-mon[76537]: pgmap v2753: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007091986137092553 of space, bias 1.0, pg target 0.21275958411277657 quantized to 32 (current 32)
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014283697439170003 of space, bias 1.0, pg target 0.4285109231751001 quantized to 32 (current 32)
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.735894265472636e-07 of space, bias 4.0, pg target 0.0006883073118567163 quantized to 16 (current 32)
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:53:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:53:21 compute-0 nova_compute[248510]: 2025-12-13 08:53:21.433 248514 DEBUG nova.compute.manager [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:21 compute-0 nova_compute[248510]: 2025-12-13 08:53:21.434 248514 DEBUG oslo_concurrency.lockutils [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:21 compute-0 nova_compute[248510]: 2025-12-13 08:53:21.435 248514 DEBUG oslo_concurrency.lockutils [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:21 compute-0 nova_compute[248510]: 2025-12-13 08:53:21.435 248514 DEBUG oslo_concurrency.lockutils [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:21 compute-0 nova_compute[248510]: 2025-12-13 08:53:21.435 248514 DEBUG nova.compute.manager [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] No waiting events found dispatching network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:53:21 compute-0 nova_compute[248510]: 2025-12-13 08:53:21.436 248514 WARNING nova.compute.manager [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received unexpected event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 for instance with vm_state active and task_state None.
Dec 13 08:53:21 compute-0 nova_compute[248510]: 2025-12-13 08:53:21.436 248514 DEBUG nova.compute.manager [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:21 compute-0 nova_compute[248510]: 2025-12-13 08:53:21.436 248514 DEBUG nova.compute.manager [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing instance network info cache due to event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:53:21 compute-0 nova_compute[248510]: 2025-12-13 08:53:21.436 248514 DEBUG oslo_concurrency.lockutils [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:21.626 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 2001:db8::f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:21.627 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated
Dec 13 08:53:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:21.629 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:53:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:21.630 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[93b65367-23f4-423c-9fe6-3576e86adba1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:21 compute-0 nova_compute[248510]: 2025-12-13 08:53:21.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:53:22 compute-0 sshd-session[360582]: Invalid user admin from 61.245.11.87 port 58776
Dec 13 08:53:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2754: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 83 op/s
Dec 13 08:53:22 compute-0 sshd-session[360582]: Connection closed by invalid user admin 61.245.11.87 port 58776 [preauth]
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.470 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.561 248514 DEBUG nova.network.neutron [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.592 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.593 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.594 248514 INFO nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Creating image(s)
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.619 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.623 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'trusted_certs' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.625 248514 DEBUG oslo_concurrency.lockutils [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.626 248514 DEBUG nova.network.neutron [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.679 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.710 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.716 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "904114c6e9ea91bfc56a15099c4749b640a96cc9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:22 compute-0 nova_compute[248510]: 2025-12-13 08:53:22.718 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "904114c6e9ea91bfc56a15099c4749b640a96cc9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:23 compute-0 sudo[360638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:53:23 compute-0 sudo[360638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:53:23 compute-0 sudo[360638]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:23 compute-0 nova_compute[248510]: 2025-12-13 08:53:23.071 248514 DEBUG nova.compute.manager [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-changed-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:23 compute-0 nova_compute[248510]: 2025-12-13 08:53:23.072 248514 DEBUG nova.compute.manager [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Refreshing instance network info cache due to event network-changed-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:53:23 compute-0 nova_compute[248510]: 2025-12-13 08:53:23.073 248514 DEBUG oslo_concurrency.lockutils [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:23 compute-0 nova_compute[248510]: 2025-12-13 08:53:23.073 248514 DEBUG oslo_concurrency.lockutils [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:23 compute-0 nova_compute[248510]: 2025-12-13 08:53:23.073 248514 DEBUG nova.network.neutron [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Refreshing network info cache for port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:53:23 compute-0 sudo[360663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:53:23 compute-0 sudo[360663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:53:23 compute-0 ceph-mon[76537]: pgmap v2754: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 83 op/s
Dec 13 08:53:23 compute-0 nova_compute[248510]: 2025-12-13 08:53:23.343 248514 DEBUG nova.virt.libvirt.imagebackend [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/e80a280c-5146-4d78-99c1-0d3591de049e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/e80a280c-5146-4d78-99c1-0d3591de049e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 13 08:53:23 compute-0 nova_compute[248510]: 2025-12-13 08:53:23.397 248514 DEBUG nova.virt.libvirt.imagebackend [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Selected location: {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/e80a280c-5146-4d78-99c1-0d3591de049e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 13 08:53:23 compute-0 nova_compute[248510]: 2025-12-13 08:53:23.398 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] cloning images/e80a280c-5146-4d78-99c1-0d3591de049e@snap to None/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:53:23 compute-0 nova_compute[248510]: 2025-12-13 08:53:23.635 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "904114c6e9ea91bfc56a15099c4749b640a96cc9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:23 compute-0 sudo[360663]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:53:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:53:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:53:23 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:53:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:53:23 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:53:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:53:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:53:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:53:23 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:53:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:53:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:53:23 compute-0 sudo[360824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:53:23 compute-0 sudo[360824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:53:23 compute-0 sudo[360824]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:23 compute-0 nova_compute[248510]: 2025-12-13 08:53:23.856 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'migration_context' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:23 compute-0 sudo[360867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:53:23 compute-0 sudo[360867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:53:23 compute-0 nova_compute[248510]: 2025-12-13 08:53:23.959 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] flattening vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:53:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2755: 321 pgs: 321 active+clean; 230 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.7 MiB/s wr, 208 op/s
Dec 13 08:53:24 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:53:24 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:53:24 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:53:24 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:53:24 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:53:24 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:53:24 compute-0 podman[360939]: 2025-12-13 08:53:24.34203296 +0000 UTC m=+0.088478479 container create 670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_johnson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:53:24 compute-0 podman[360939]: 2025-12-13 08:53:24.282474597 +0000 UTC m=+0.028920146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:53:24 compute-0 systemd[1]: Started libpod-conmon-670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb.scope.
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.439 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Image rbd:vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.440 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.440 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Ensure instance console log exists: /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.441 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.441 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.441 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.443 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Start _get_guest_xml network_info=[{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:52:49Z,direct_url=<?>,disk_format='raw',id=e80a280c-5146-4d78-99c1-0d3591de049e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1747506169-shelved',owner='ff4d2c6ad4dc4848ac9f55ff1b9e829a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:53:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.450 248514 WARNING nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:53:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:53:24 compute-0 podman[360939]: 2025-12-13 08:53:24.477886346 +0000 UTC m=+0.224331895 container init 670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 08:53:24 compute-0 podman[360939]: 2025-12-13 08:53:24.486206635 +0000 UTC m=+0.232652154 container start 670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 08:53:24 compute-0 podman[360939]: 2025-12-13 08:53:24.489620591 +0000 UTC m=+0.236066110 container attach 670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 08:53:24 compute-0 systemd[1]: libpod-670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb.scope: Deactivated successfully.
Dec 13 08:53:24 compute-0 happy_johnson[360955]: 167 167
Dec 13 08:53:24 compute-0 podman[360939]: 2025-12-13 08:53:24.49559933 +0000 UTC m=+0.242044869 container died 670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_johnson, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.506 248514 DEBUG nova.virt.libvirt.host [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.507 248514 DEBUG nova.virt.libvirt.host [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1430df9bd3976963e60d4a8c2bbae54a5f199eb326f57c1b14cbde57cbf515b-merged.mount: Deactivated successfully.
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.544 248514 DEBUG nova.virt.libvirt.host [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.544 248514 DEBUG nova.virt.libvirt.host [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.545 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.545 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:52:49Z,direct_url=<?>,disk_format='raw',id=e80a280c-5146-4d78-99c1-0d3591de049e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1747506169-shelved',owner='ff4d2c6ad4dc4848ac9f55ff1b9e829a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:53:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.545 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.545 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:53:24 compute-0 podman[360939]: 2025-12-13 08:53:24.546130557 +0000 UTC m=+0.292576076 container remove 670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_johnson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.546 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.546 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.546 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.546 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.547 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.547 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.547 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.547 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.547 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'vcpu_model' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:24 compute-0 systemd[1]: libpod-conmon-670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb.scope: Deactivated successfully.
Dec 13 08:53:24 compute-0 nova_compute[248510]: 2025-12-13 08:53:24.698 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:24 compute-0 podman[360980]: 2025-12-13 08:53:24.728865318 +0000 UTC m=+0.045899471 container create 558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goldstine, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 08:53:24 compute-0 systemd[1]: Started libpod-conmon-558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d.scope.
Dec 13 08:53:24 compute-0 podman[360980]: 2025-12-13 08:53:24.709833761 +0000 UTC m=+0.026867934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:53:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239f39c32eff7e354d1c4dce3d63b8dd62835a564f753613dff75ec3521fd71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239f39c32eff7e354d1c4dce3d63b8dd62835a564f753613dff75ec3521fd71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239f39c32eff7e354d1c4dce3d63b8dd62835a564f753613dff75ec3521fd71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239f39c32eff7e354d1c4dce3d63b8dd62835a564f753613dff75ec3521fd71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239f39c32eff7e354d1c4dce3d63b8dd62835a564f753613dff75ec3521fd71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:24 compute-0 podman[360980]: 2025-12-13 08:53:24.852044697 +0000 UTC m=+0.169078860 container init 558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 08:53:24 compute-0 podman[360980]: 2025-12-13 08:53:24.861545135 +0000 UTC m=+0.178579288 container start 558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goldstine, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 08:53:24 compute-0 podman[360980]: 2025-12-13 08:53:24.866376846 +0000 UTC m=+0.183411039 container attach 558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goldstine, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 08:53:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:25.180 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:25.183 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated
Dec 13 08:53:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:25.186 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:53:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:25.187 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1afc8b5d-bb3f-44b9-9e79-55bb7968eeee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:53:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2566843178' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:25 compute-0 nova_compute[248510]: 2025-12-13 08:53:25.338 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:25 compute-0 happy_goldstine[360998]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:53:25 compute-0 happy_goldstine[360998]: --> All data devices are unavailable
Dec 13 08:53:25 compute-0 systemd[1]: libpod-558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d.scope: Deactivated successfully.
Dec 13 08:53:25 compute-0 podman[360980]: 2025-12-13 08:53:25.372242999 +0000 UTC m=+0.689277152 container died 558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Dec 13 08:53:25 compute-0 ceph-mon[76537]: pgmap v2755: 321 pgs: 321 active+clean; 230 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.7 MiB/s wr, 208 op/s
Dec 13 08:53:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2566843178' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:25 compute-0 nova_compute[248510]: 2025-12-13 08:53:25.392 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:25 compute-0 nova_compute[248510]: 2025-12-13 08:53:25.400 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9239f39c32eff7e354d1c4dce3d63b8dd62835a564f753613dff75ec3521fd71-merged.mount: Deactivated successfully.
Dec 13 08:53:25 compute-0 podman[360980]: 2025-12-13 08:53:25.43010556 +0000 UTC m=+0.747139713 container remove 558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 08:53:25 compute-0 systemd[1]: libpod-conmon-558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d.scope: Deactivated successfully.
Dec 13 08:53:25 compute-0 ovn_controller[148476]: 2025-12-13T08:53:25Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:eb:91 10.100.0.8
Dec 13 08:53:25 compute-0 ovn_controller[148476]: 2025-12-13T08:53:25Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:eb:91 10.100.0.8
Dec 13 08:53:25 compute-0 sudo[360867]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:53:25 compute-0 sudo[361070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:53:25 compute-0 sudo[361070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:53:25 compute-0 sudo[361070]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:25 compute-0 nova_compute[248510]: 2025-12-13 08:53:25.569 248514 DEBUG nova.network.neutron [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updated VIF entry in instance network info cache for port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:53:25 compute-0 nova_compute[248510]: 2025-12-13 08:53:25.571 248514 DEBUG nova.network.neutron [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updating instance_info_cache with network_info: [{"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:25 compute-0 sudo[361105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:53:25 compute-0 sudo[361105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:53:25 compute-0 nova_compute[248510]: 2025-12-13 08:53:25.601 248514 DEBUG nova.network.neutron [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updated VIF entry in instance network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:53:25 compute-0 nova_compute[248510]: 2025-12-13 08:53:25.602 248514 DEBUG nova.network.neutron [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:25 compute-0 nova_compute[248510]: 2025-12-13 08:53:25.649 248514 DEBUG oslo_concurrency.lockutils [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:25 compute-0 nova_compute[248510]: 2025-12-13 08:53:25.651 248514 DEBUG oslo_concurrency.lockutils [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:25 compute-0 podman[361151]: 2025-12-13 08:53:25.892106184 +0000 UTC m=+0.035810079 container create e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 08:53:25 compute-0 systemd[1]: Started libpod-conmon-e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81.scope.
Dec 13 08:53:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:53:25 compute-0 podman[361151]: 2025-12-13 08:53:25.876574654 +0000 UTC m=+0.020278569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:53:25 compute-0 podman[361151]: 2025-12-13 08:53:25.991293761 +0000 UTC m=+0.134997676 container init e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 08:53:26 compute-0 podman[361151]: 2025-12-13 08:53:26.00164261 +0000 UTC m=+0.145346515 container start e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 08:53:26 compute-0 podman[361151]: 2025-12-13 08:53:26.005841246 +0000 UTC m=+0.149545161 container attach e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 08:53:26 compute-0 nervous_aryabhata[361167]: 167 167
Dec 13 08:53:26 compute-0 systemd[1]: libpod-e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81.scope: Deactivated successfully.
Dec 13 08:53:26 compute-0 conmon[361167]: conmon e1999cab1bbfa4d4db22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81.scope/container/memory.events
Dec 13 08:53:26 compute-0 podman[361151]: 2025-12-13 08:53:26.009441676 +0000 UTC m=+0.153145581 container died e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:53:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:53:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1515509598' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-200e3598393003f8d37d71505ac742004b7087b005c2874610048f9eb3832b50-merged.mount: Deactivated successfully.
Dec 13 08:53:26 compute-0 podman[361151]: 2025-12-13 08:53:26.051939911 +0000 UTC m=+0.195643806 container remove e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.058 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.060 248514 DEBUG nova.virt.libvirt.vif [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:52:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1747506169',display_name='tempest-TestShelveInstance-server-1747506169',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1747506169',id=111,image_ref='e80a280c-5146-4d78-99c1-0d3591de049e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1284595260',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:52:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ff4d2c6ad4dc4848ac9f55ff1b9e829a',ramdisk_id='',reservation_id='r-wfxenjt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-2105398574',owner_user_name='tempest-TestShelveInstance-2105398574-project-member',shelved_at='2025-12-13T08:53:01.811270',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e80a280c-5146-4d78-99c1-0d3591de049e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:11Z,user_data=None,user_id='fa34623cd3de4a47aa57959f09b3ff79',uuid=fcc617ec-f5f9-41bb-ad4b-86d790622e74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.061 248514 DEBUG nova.network.os_vif_util [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converting VIF {"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.062 248514 DEBUG nova.network.os_vif_util [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:53:26 compute-0 systemd[1]: libpod-conmon-e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81.scope: Deactivated successfully.
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.063 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'pci_devices' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.090 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:53:26 compute-0 nova_compute[248510]:   <uuid>fcc617ec-f5f9-41bb-ad4b-86d790622e74</uuid>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   <name>instance-0000006f</name>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <nova:name>tempest-TestShelveInstance-server-1747506169</nova:name>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:53:24</nova:creationTime>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:53:26 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:53:26 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:53:26 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:53:26 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:53:26 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:53:26 compute-0 nova_compute[248510]:         <nova:user uuid="fa34623cd3de4a47aa57959f09b3ff79">tempest-TestShelveInstance-2105398574-project-member</nova:user>
Dec 13 08:53:26 compute-0 nova_compute[248510]:         <nova:project uuid="ff4d2c6ad4dc4848ac9f55ff1b9e829a">tempest-TestShelveInstance-2105398574</nova:project>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="e80a280c-5146-4d78-99c1-0d3591de049e"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:53:26 compute-0 nova_compute[248510]:         <nova:port uuid="b2143648-4c23-49b5-8777-433a5b34c7ce">
Dec 13 08:53:26 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <system>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <entry name="serial">fcc617ec-f5f9-41bb-ad4b-86d790622e74</entry>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <entry name="uuid">fcc617ec-f5f9-41bb-ad4b-86d790622e74</entry>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     </system>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   <os>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   </os>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   <features>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   </features>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk">
Dec 13 08:53:26 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       </source>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:53:26 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config">
Dec 13 08:53:26 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       </source>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:53:26 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:02:b7:87"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <target dev="tapb2143648-4c"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/console.log" append="off"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <video>
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     </video>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <input type="keyboard" bus="usb"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:53:26 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:53:26 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:53:26 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:53:26 compute-0 nova_compute[248510]: </domain>
Dec 13 08:53:26 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.091 248514 DEBUG nova.compute.manager [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Preparing to wait for external event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.091 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.091 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.092 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.092 248514 DEBUG nova.virt.libvirt.vif [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:52:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1747506169',display_name='tempest-TestShelveInstance-server-1747506169',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1747506169',id=111,image_ref='e80a280c-5146-4d78-99c1-0d3591de049e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1284595260',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:52:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ff4d2c6ad4dc4848ac9f55ff1b9e829a',ramdisk_id='',reservation_id='r-wfxenjt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='vir
tio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-2105398574',owner_user_name='tempest-TestShelveInstance-2105398574-project-member',shelved_at='2025-12-13T08:53:01.811270',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e80a280c-5146-4d78-99c1-0d3591de049e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:11Z,user_data=None,user_id='fa34623cd3de4a47aa57959f09b3ff79',uuid=fcc617ec-f5f9-41bb-ad4b-86d790622e74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.093 248514 DEBUG nova.network.os_vif_util [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converting VIF {"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.093 248514 DEBUG nova.network.os_vif_util [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.094 248514 DEBUG os_vif [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.094 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.094 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.095 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.099 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.099 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2143648-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.099 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2143648-4c, col_values=(('external_ids', {'iface-id': 'b2143648-4c23-49b5-8777-433a5b34c7ce', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:b7:87', 'vm-uuid': 'fcc617ec-f5f9-41bb-ad4b-86d790622e74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.101 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:26 compute-0 NetworkManager[50376]: <info>  [1765616006.1025] manager: (tapb2143648-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.104 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.108 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.108 248514 INFO os_vif [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c')
Dec 13 08:53:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2756: 321 pgs: 321 active+clean; 230 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.7 MiB/s wr, 156 op/s
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.218 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.219 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.219 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No VIF found with MAC fa:16:3e:02:b7:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.220 248514 INFO nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Using config drive
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.239 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.262 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'ec2_ids' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:26 compute-0 podman[361196]: 2025-12-13 08:53:26.219277507 +0000 UTC m=+0.029188093 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:53:26 compute-0 nova_compute[248510]: 2025-12-13 08:53:26.328 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'keypairs' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:27 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1515509598' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:27 compute-0 podman[361196]: 2025-12-13 08:53:27.131051658 +0000 UTC m=+0.940962214 container create fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jemison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:53:27 compute-0 nova_compute[248510]: 2025-12-13 08:53:27.160 248514 DEBUG nova.compute.manager [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-changed-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:27 compute-0 nova_compute[248510]: 2025-12-13 08:53:27.161 248514 DEBUG nova.compute.manager [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Refreshing instance network info cache due to event network-changed-1babd66f-ec6a-4702-8a8f-839d32ba8761. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:53:27 compute-0 nova_compute[248510]: 2025-12-13 08:53:27.161 248514 DEBUG oslo_concurrency.lockutils [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:27 compute-0 nova_compute[248510]: 2025-12-13 08:53:27.161 248514 DEBUG oslo_concurrency.lockutils [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:27 compute-0 nova_compute[248510]: 2025-12-13 08:53:27.161 248514 DEBUG nova.network.neutron [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Refreshing network info cache for port 1babd66f-ec6a-4702-8a8f-839d32ba8761 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:53:27 compute-0 systemd[1]: Started libpod-conmon-fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606.scope.
Dec 13 08:53:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e40a41685922ea8b8ba8ad7363f4998f0553c0bb1de592889901f2c600ce464/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e40a41685922ea8b8ba8ad7363f4998f0553c0bb1de592889901f2c600ce464/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e40a41685922ea8b8ba8ad7363f4998f0553c0bb1de592889901f2c600ce464/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e40a41685922ea8b8ba8ad7363f4998f0553c0bb1de592889901f2c600ce464/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:27 compute-0 podman[361196]: 2025-12-13 08:53:27.233341292 +0000 UTC m=+1.043251878 container init fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:53:27 compute-0 podman[361196]: 2025-12-13 08:53:27.242454811 +0000 UTC m=+1.052365367 container start fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jemison, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:53:27 compute-0 podman[361196]: 2025-12-13 08:53:27.252299998 +0000 UTC m=+1.062210554 container attach fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jemison, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 08:53:27 compute-0 podman[361233]: 2025-12-13 08:53:27.259286763 +0000 UTC m=+0.077101454 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 13 08:53:27 compute-0 podman[361232]: 2025-12-13 08:53:27.272133175 +0000 UTC m=+0.091555477 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:53:27 compute-0 podman[361229]: 2025-12-13 08:53:27.297915701 +0000 UTC m=+0.117478946 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 13 08:53:27 compute-0 nova_compute[248510]: 2025-12-13 08:53:27.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:27 compute-0 nova_compute[248510]: 2025-12-13 08:53:27.495 248514 INFO nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Creating config drive at /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config
Dec 13 08:53:27 compute-0 nova_compute[248510]: 2025-12-13 08:53:27.501 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2s1lrv_2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]: {
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:     "0": [
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:         {
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "devices": [
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "/dev/loop3"
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             ],
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_name": "ceph_lv0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_size": "21470642176",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "name": "ceph_lv0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "tags": {
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.cluster_name": "ceph",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.crush_device_class": "",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.encrypted": "0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.objectstore": "bluestore",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.osd_id": "0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.type": "block",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.vdo": "0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.with_tpm": "0"
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             },
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "type": "block",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "vg_name": "ceph_vg0"
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:         }
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:     ],
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:     "1": [
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:         {
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "devices": [
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "/dev/loop4"
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             ],
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_name": "ceph_lv1",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_size": "21470642176",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "name": "ceph_lv1",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "tags": {
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.cluster_name": "ceph",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.crush_device_class": "",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.encrypted": "0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.objectstore": "bluestore",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.osd_id": "1",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.type": "block",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.vdo": "0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.with_tpm": "0"
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             },
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "type": "block",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "vg_name": "ceph_vg1"
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:         }
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:     ],
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:     "2": [
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:         {
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "devices": [
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "/dev/loop5"
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             ],
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_name": "ceph_lv2",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_size": "21470642176",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "name": "ceph_lv2",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "tags": {
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.cluster_name": "ceph",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.crush_device_class": "",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.encrypted": "0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.objectstore": "bluestore",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.osd_id": "2",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.type": "block",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.vdo": "0",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:                 "ceph.with_tpm": "0"
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             },
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "type": "block",
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:             "vg_name": "ceph_vg2"
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:         }
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]:     ]
Dec 13 08:53:27 compute-0 stupefied_jemison[361234]: }
Dec 13 08:53:27 compute-0 systemd[1]: libpod-fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606.scope: Deactivated successfully.
Dec 13 08:53:27 compute-0 podman[361196]: 2025-12-13 08:53:27.601323549 +0000 UTC m=+1.411234105 container died fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 08:53:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e40a41685922ea8b8ba8ad7363f4998f0553c0bb1de592889901f2c600ce464-merged.mount: Deactivated successfully.
Dec 13 08:53:27 compute-0 nova_compute[248510]: 2025-12-13 08:53:27.661 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2s1lrv_2" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:27 compute-0 podman[361196]: 2025-12-13 08:53:27.681668053 +0000 UTC m=+1.491578609 container remove fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 08:53:27 compute-0 systemd[1]: libpod-conmon-fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606.scope: Deactivated successfully.
Dec 13 08:53:27 compute-0 nova_compute[248510]: 2025-12-13 08:53:27.697 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:27 compute-0 nova_compute[248510]: 2025-12-13 08:53:27.701 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:27 compute-0 sudo[361105]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:27 compute-0 sudo[361333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:53:27 compute-0 sudo[361333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:53:27 compute-0 sudo[361333]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:27 compute-0 sudo[361373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:53:27 compute-0 sudo[361373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:53:28 compute-0 nova_compute[248510]: 2025-12-13 08:53:28.034 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:28 compute-0 nova_compute[248510]: 2025-12-13 08:53:28.036 248514 INFO nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deleting local config drive /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config because it was imported into RBD.
Dec 13 08:53:28 compute-0 kernel: tapb2143648-4c: entered promiscuous mode
Dec 13 08:53:28 compute-0 ovn_controller[148476]: 2025-12-13T08:53:28Z|01096|binding|INFO|Claiming lport b2143648-4c23-49b5-8777-433a5b34c7ce for this chassis.
Dec 13 08:53:28 compute-0 ovn_controller[148476]: 2025-12-13T08:53:28Z|01097|binding|INFO|b2143648-4c23-49b5-8777-433a5b34c7ce: Claiming fa:16:3e:02:b7:87 10.100.0.4
Dec 13 08:53:28 compute-0 nova_compute[248510]: 2025-12-13 08:53:28.099 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:28 compute-0 NetworkManager[50376]: <info>  [1765616008.1035] manager: (tapb2143648-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/458)
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.105 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b7:87 10.100.0.4'], port_security=['fa:16:3e:02:b7:87 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fcc617ec-f5f9-41bb-ad4b-86d790622e74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db56c55-78f1-455f-855e-db3acef05ff3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4d2c6ad4dc4848ac9f55ff1b9e829a', 'neutron:revision_number': '7', 'neutron:security_group_ids': '9bebfabc-b3c7-415b-9881-5a1028b3b8d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efdd787e-21fc-422f-be64-57ef7368490d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b2143648-4c23-49b5-8777-433a5b34c7ce) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.107 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b2143648-4c23-49b5-8777-433a5b34c7ce in datapath 6db56c55-78f1-455f-855e-db3acef05ff3 bound to our chassis
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.109 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6db56c55-78f1-455f-855e-db3acef05ff3
Dec 13 08:53:28 compute-0 ovn_controller[148476]: 2025-12-13T08:53:28Z|01098|binding|INFO|Setting lport b2143648-4c23-49b5-8777-433a5b34c7ce ovn-installed in OVS
Dec 13 08:53:28 compute-0 nova_compute[248510]: 2025-12-13 08:53:28.123 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:28 compute-0 ovn_controller[148476]: 2025-12-13T08:53:28Z|01099|binding|INFO|Setting lport b2143648-4c23-49b5-8777-433a5b34c7ce up in Southbound
Dec 13 08:53:28 compute-0 nova_compute[248510]: 2025-12-13 08:53:28.125 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.131 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0b2531-f3db-4419-afd3-7d49d09e47af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.132 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6db56c55-71 in ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:53:28 compute-0 ceph-mon[76537]: pgmap v2756: 321 pgs: 321 active+clean; 230 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.7 MiB/s wr, 156 op/s
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.134 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6db56c55-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.134 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6de256-b9fb-402d-a0e4-0f51020d9e70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.135 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5590f2a5-2f30-4e0e-8e96-56d8e8796427]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.150 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcae99d-f9b3-4714-b7ad-e833bc757ab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 systemd-udevd[361427]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:53:28 compute-0 systemd-machined[210538]: New machine qemu-140-instance-0000006f.
Dec 13 08:53:28 compute-0 systemd[1]: Started Virtual Machine qemu-140-instance-0000006f.
Dec 13 08:53:28 compute-0 NetworkManager[50376]: <info>  [1765616008.1702] device (tapb2143648-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:53:28 compute-0 NetworkManager[50376]: <info>  [1765616008.1711] device (tapb2143648-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.167 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eded7415-70b3-4606-bbdd-4d55a7c87be2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2757: 321 pgs: 321 active+clean; 281 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.204 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0e38d1e0-509a-4f50-8901-fe085a6ff4ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 NetworkManager[50376]: <info>  [1765616008.2130] manager: (tap6db56c55-70): new Veth device (/org/freedesktop/NetworkManager/Devices/459)
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.211 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7794c3-e653-4fba-b5a7-924cebf73063]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 systemd-udevd[361436]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.251 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d182d9-1d23-4550-9b03-1a68b63c9036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.256 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b88f2769-f2f9-44dc-85fe-97d26b2cf98f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 podman[361425]: 2025-12-13 08:53:28.26821993 +0000 UTC m=+0.104392979 container create 942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:53:28 compute-0 NetworkManager[50376]: <info>  [1765616008.2872] device (tap6db56c55-70): carrier: link connected
Dec 13 08:53:28 compute-0 podman[361425]: 2025-12-13 08:53:28.197779464 +0000 UTC m=+0.033952543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.295 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f611e0ba-2400-4a72-9f63-e4ac0981682c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.315 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eebdb9e1-5792-4137-8103-9e0573512f77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6db56c55-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:4b:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858550, 'reachable_time': 35475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361470, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 systemd[1]: Started libpod-conmon-942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e.scope.
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.341 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1989fd-1ecb-4474-be90-aef5058b1799]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:4b23'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858550, 'tstamp': 858550}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361473, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.360 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0a720e9d-b1c9-4cd7-8a12-d3cf09c380b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6db56c55-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:4b:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858550, 'reachable_time': 35475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361477, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.400 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79311578-0cb8-4adc-80ae-cdc1d7fc1f8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.458 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e85945d4-ae63-4fb1-a0bf-77fcfbfe16ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.459 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6db56c55-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.459 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.460 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6db56c55-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:28 compute-0 nova_compute[248510]: 2025-12-13 08:53:28.462 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:28 compute-0 kernel: tap6db56c55-70: entered promiscuous mode
Dec 13 08:53:28 compute-0 NetworkManager[50376]: <info>  [1765616008.4626] manager: (tap6db56c55-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.471 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6db56c55-70, col_values=(('external_ids', {'iface-id': '401abfe8-06be-4b57-8432-310dcd747a81'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:28 compute-0 ovn_controller[148476]: 2025-12-13T08:53:28Z|01100|binding|INFO|Releasing lport 401abfe8-06be-4b57-8432-310dcd747a81 from this chassis (sb_readonly=0)
Dec 13 08:53:28 compute-0 nova_compute[248510]: 2025-12-13 08:53:28.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:28 compute-0 podman[361425]: 2025-12-13 08:53:28.475187158 +0000 UTC m=+0.311360237 container init 942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_swirles, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 08:53:28 compute-0 podman[361425]: 2025-12-13 08:53:28.483940948 +0000 UTC m=+0.320114027 container start 942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.488 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6db56c55-78f1-455f-855e-db3acef05ff3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6db56c55-78f1-455f-855e-db3acef05ff3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:53:28 compute-0 nova_compute[248510]: 2025-12-13 08:53:28.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:28 compute-0 gifted_swirles[361474]: 167 167
Dec 13 08:53:28 compute-0 systemd[1]: libpod-942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e.scope: Deactivated successfully.
Dec 13 08:53:28 compute-0 conmon[361474]: conmon 942894bcba2677ad5a77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e.scope/container/memory.events
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5385aa5a-61da-4439-8509-d12b307269e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.493 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-6db56c55-78f1-455f-855e-db3acef05ff3
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/6db56c55-78f1-455f-855e-db3acef05ff3.pid.haproxy
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 6db56c55-78f1-455f-855e-db3acef05ff3
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:53:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.493 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'env', 'PROCESS_TAG=haproxy-6db56c55-78f1-455f-855e-db3acef05ff3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6db56c55-78f1-455f-855e-db3acef05ff3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:53:28 compute-0 podman[361425]: 2025-12-13 08:53:28.745547247 +0000 UTC m=+0.581720316 container attach 942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:53:28 compute-0 podman[361425]: 2025-12-13 08:53:28.747017254 +0000 UTC m=+0.583190313 container died 942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 08:53:28 compute-0 nova_compute[248510]: 2025-12-13 08:53:28.862 248514 DEBUG nova.network.neutron [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updated VIF entry in instance network info cache for port 1babd66f-ec6a-4702-8a8f-839d32ba8761. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:53:28 compute-0 nova_compute[248510]: 2025-12-13 08:53:28.863 248514 DEBUG nova.network.neutron [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updating instance_info_cache with network_info: [{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:28 compute-0 nova_compute[248510]: 2025-12-13 08:53:28.892 248514 DEBUG oslo_concurrency.lockutils [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7600f55b210fa74d466a8b1ac569a0c090401c1d0835a102c474385c023cb414-merged.mount: Deactivated successfully.
Dec 13 08:53:29 compute-0 ceph-mon[76537]: pgmap v2757: 321 pgs: 321 active+clean; 281 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Dec 13 08:53:29 compute-0 podman[361425]: 2025-12-13 08:53:29.156215623 +0000 UTC m=+0.992388672 container remove 942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:53:29 compute-0 podman[361522]: 2025-12-13 08:53:29.170207354 +0000 UTC m=+0.119720323 container create 60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 08:53:29 compute-0 podman[361522]: 2025-12-13 08:53:29.076678479 +0000 UTC m=+0.026191478 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:53:29 compute-0 systemd[1]: Started libpod-conmon-60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0.scope.
Dec 13 08:53:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00eeaad21779e0d1e4a21f624f33c14a068fe0d0787499d4a34517514951359/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.280 248514 DEBUG nova.compute.manager [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.282 248514 DEBUG oslo_concurrency.lockutils [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.282 248514 DEBUG oslo_concurrency.lockutils [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.283 248514 DEBUG oslo_concurrency.lockutils [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.283 248514 DEBUG nova.compute.manager [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Processing event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.283 248514 DEBUG nova.compute.manager [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.284 248514 DEBUG oslo_concurrency.lockutils [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.284 248514 DEBUG oslo_concurrency.lockutils [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.284 248514 DEBUG oslo_concurrency.lockutils [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.284 248514 DEBUG nova.compute.manager [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] No waiting events found dispatching network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.285 248514 WARNING nova.compute.manager [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received unexpected event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce for instance with vm_state shelved_offloaded and task_state spawning.
Dec 13 08:53:29 compute-0 podman[361522]: 2025-12-13 08:53:29.308956323 +0000 UTC m=+0.258469322 container init 60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 08:53:29 compute-0 podman[361522]: 2025-12-13 08:53:29.314238025 +0000 UTC m=+0.263750994 container start 60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:53:29 compute-0 systemd[1]: libpod-conmon-942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e.scope: Deactivated successfully.
Dec 13 08:53:29 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [NOTICE]   (361549) : New worker (361560) forked
Dec 13 08:53:29 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [NOTICE]   (361549) : Loading success.
Dec 13 08:53:29 compute-0 podman[361546]: 2025-12-13 08:53:29.378944088 +0000 UTC m=+0.058693843 container create b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 13 08:53:29 compute-0 systemd[1]: Started libpod-conmon-b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380.scope.
Dec 13 08:53:29 compute-0 podman[361546]: 2025-12-13 08:53:29.361972852 +0000 UTC m=+0.041722817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:53:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac544eb799c93be48db0053ecb792d4e1bc104ca717a3e0662d311a3735fa07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac544eb799c93be48db0053ecb792d4e1bc104ca717a3e0662d311a3735fa07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac544eb799c93be48db0053ecb792d4e1bc104ca717a3e0662d311a3735fa07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac544eb799c93be48db0053ecb792d4e1bc104ca717a3e0662d311a3735fa07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:29 compute-0 podman[361546]: 2025-12-13 08:53:29.482425652 +0000 UTC m=+0.162175417 container init b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_maxwell, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 08:53:29 compute-0 podman[361546]: 2025-12-13 08:53:29.489658584 +0000 UTC m=+0.169408349 container start b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_maxwell, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:53:29 compute-0 podman[361546]: 2025-12-13 08:53:29.49309824 +0000 UTC m=+0.172848035 container attach b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_maxwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.756 248514 DEBUG nova.compute.manager [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.756 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616009.7554352, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.757 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Started (Lifecycle Event)
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.763 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.769 248514 INFO nova.virt.libvirt.driver [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance spawned successfully.
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.789 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.792 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.813 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.814 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616009.7557135, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.814 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Paused (Lifecycle Event)
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.838 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.852 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616009.7627423, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.852 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Resumed (Lifecycle Event)
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.880 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.886 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:53:29 compute-0 nova_compute[248510]: 2025-12-13 08:53:29.908 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:53:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Dec 13 08:53:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Dec 13 08:53:30 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Dec 13 08:53:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2759: 321 pgs: 321 active+clean; 325 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 247 op/s
Dec 13 08:53:30 compute-0 lvm[361691]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:53:30 compute-0 lvm[361691]: VG ceph_vg0 finished
Dec 13 08:53:30 compute-0 lvm[361693]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:53:30 compute-0 lvm[361693]: VG ceph_vg1 finished
Dec 13 08:53:30 compute-0 lvm[361694]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:53:30 compute-0 lvm[361694]: VG ceph_vg2 finished
Dec 13 08:53:30 compute-0 compassionate_maxwell[361574]: {}
Dec 13 08:53:30 compute-0 systemd[1]: libpod-b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380.scope: Deactivated successfully.
Dec 13 08:53:30 compute-0 podman[361546]: 2025-12-13 08:53:30.384349266 +0000 UTC m=+1.064099031 container died b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 08:53:30 compute-0 systemd[1]: libpod-b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380.scope: Consumed 1.398s CPU time.
Dec 13 08:53:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ac544eb799c93be48db0053ecb792d4e1bc104ca717a3e0662d311a3735fa07-merged.mount: Deactivated successfully.
Dec 13 08:53:30 compute-0 podman[361546]: 2025-12-13 08:53:30.506123369 +0000 UTC m=+1.185873134 container remove b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:53:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:53:30 compute-0 systemd[1]: libpod-conmon-b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380.scope: Deactivated successfully.
Dec 13 08:53:30 compute-0 sudo[361373]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:53:30 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:53:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:53:30 compute-0 nova_compute[248510]: 2025-12-13 08:53:30.583 248514 DEBUG nova.compute.manager [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:30 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:53:30 compute-0 nova_compute[248510]: 2025-12-13 08:53:30.663 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 18.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:30 compute-0 sudo[361707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:53:30 compute-0 sudo[361707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:53:30 compute-0 sudo[361707]: pam_unix(sudo:session): session closed for user root
Dec 13 08:53:30 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Dec 13 08:53:31 compute-0 nova_compute[248510]: 2025-12-13 08:53:31.101 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:31 compute-0 ceph-mon[76537]: osdmap e299: 3 total, 3 up, 3 in
Dec 13 08:53:31 compute-0 ceph-mon[76537]: pgmap v2759: 321 pgs: 321 active+clean; 325 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 247 op/s
Dec 13 08:53:31 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:53:31 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:53:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:31.471 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:31.473 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated
Dec 13 08:53:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:31.475 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:53:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:31.476 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[743fa8f2-2586-467f-a71f-91dff596ac07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:31 compute-0 nova_compute[248510]: 2025-12-13 08:53:31.892 248514 INFO nova.compute.manager [None req-a00f6f9a-ea03-479b-9d64-e20cdf6ae142 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Get console output
Dec 13 08:53:31 compute-0 nova_compute[248510]: 2025-12-13 08:53:31.899 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 08:53:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2760: 321 pgs: 321 active+clean; 325 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 247 op/s
Dec 13 08:53:32 compute-0 nova_compute[248510]: 2025-12-13 08:53:32.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:32 compute-0 ovn_controller[148476]: 2025-12-13T08:53:32Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b1:81:46 10.100.0.10
Dec 13 08:53:32 compute-0 ovn_controller[148476]: 2025-12-13T08:53:32Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b1:81:46 10.100.0.10
Dec 13 08:53:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:33.185 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:33.186 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated
Dec 13 08:53:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:33.188 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:53:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:33.188 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5e7c98-f02d-4052-bd76-a2987323147f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:33 compute-0 ceph-mon[76537]: pgmap v2760: 321 pgs: 321 active+clean; 325 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 247 op/s
Dec 13 08:53:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2761: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 7.8 MiB/s wr, 281 op/s
Dec 13 08:53:35 compute-0 ceph-mon[76537]: pgmap v2761: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 7.8 MiB/s wr, 281 op/s
Dec 13 08:53:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:53:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Dec 13 08:53:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Dec 13 08:53:35 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Dec 13 08:53:36 compute-0 nova_compute[248510]: 2025-12-13 08:53:36.103 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2763: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 6.4 MiB/s wr, 294 op/s
Dec 13 08:53:36 compute-0 ceph-mon[76537]: osdmap e300: 3 total, 3 up, 3 in
Dec 13 08:53:36 compute-0 ceph-mon[76537]: pgmap v2763: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 6.4 MiB/s wr, 294 op/s
Dec 13 08:53:37 compute-0 nova_compute[248510]: 2025-12-13 08:53:37.479 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:37.686 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:37.687 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated
Dec 13 08:53:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:37.689 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:53:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:37.690 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[878e19b1-e46c-4a58-b6c2-99a3b39b8a82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2764: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 229 op/s
Dec 13 08:53:39 compute-0 ceph-mon[76537]: pgmap v2764: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 229 op/s
Dec 13 08:53:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:53:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:53:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:53:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:53:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:53:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:53:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2765: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Dec 13 08:53:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:53:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:40.650 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:40.651 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated
Dec 13 08:53:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:40.653 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:53:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:40.654 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd44a25-a181-43e4-9ab8-c729792b95dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:41 compute-0 nova_compute[248510]: 2025-12-13 08:53:41.105 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:41 compute-0 ceph-mon[76537]: pgmap v2765: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Dec 13 08:53:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2766: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Dec 13 08:53:42 compute-0 nova_compute[248510]: 2025-12-13 08:53:42.481 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:42 compute-0 ceph-mon[76537]: pgmap v2766: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Dec 13 08:53:43 compute-0 ovn_controller[148476]: 2025-12-13T08:53:43Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:b7:87 10.100.0.4
Dec 13 08:53:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2767: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 436 KiB/s rd, 40 KiB/s wr, 33 op/s
Dec 13 08:53:45 compute-0 nova_compute[248510]: 2025-12-13 08:53:45.109 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:45 compute-0 nova_compute[248510]: 2025-12-13 08:53:45.110 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:45 compute-0 nova_compute[248510]: 2025-12-13 08:53:45.181 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:53:45 compute-0 ceph-mon[76537]: pgmap v2767: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 436 KiB/s rd, 40 KiB/s wr, 33 op/s
Dec 13 08:53:45 compute-0 nova_compute[248510]: 2025-12-13 08:53:45.284 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:45 compute-0 nova_compute[248510]: 2025-12-13 08:53:45.284 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:45 compute-0 nova_compute[248510]: 2025-12-13 08:53:45.296 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:53:45 compute-0 nova_compute[248510]: 2025-12-13 08:53:45.296 248514 INFO nova.compute.claims [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:53:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:53:45 compute-0 nova_compute[248510]: 2025-12-13 08:53:45.663 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:46 compute-0 nova_compute[248510]: 2025-12-13 08:53:46.107 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2768: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 38 KiB/s wr, 31 op/s
Dec 13 08:53:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:53:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1435371036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:46 compute-0 nova_compute[248510]: 2025-12-13 08:53:46.242 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:46 compute-0 nova_compute[248510]: 2025-12-13 08:53:46.249 248514 DEBUG nova.compute.provider_tree [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:53:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1435371036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:46 compute-0 nova_compute[248510]: 2025-12-13 08:53:46.276 248514 DEBUG nova.scheduler.client.report [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:53:46 compute-0 nova_compute[248510]: 2025-12-13 08:53:46.450 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:46 compute-0 nova_compute[248510]: 2025-12-13 08:53:46.450 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:53:46 compute-0 nova_compute[248510]: 2025-12-13 08:53:46.539 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:53:46 compute-0 nova_compute[248510]: 2025-12-13 08:53:46.540 248514 DEBUG nova.network.neutron [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:53:46 compute-0 nova_compute[248510]: 2025-12-13 08:53:46.620 248514 INFO nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:53:46 compute-0 nova_compute[248510]: 2025-12-13 08:53:46.797 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:53:46 compute-0 nova_compute[248510]: 2025-12-13 08:53:46.804 248514 DEBUG nova.policy [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.002 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.003 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.004 248514 INFO nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Creating image(s)
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.030 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.061 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.090 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.094 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.169 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.170 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.171 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.171 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.199 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.202 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 24e9bc91-cab7-4459-921c-5000eb9839b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:47 compute-0 ceph-mon[76537]: pgmap v2768: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 38 KiB/s wr, 31 op/s
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.484 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.511 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 24e9bc91-cab7-4459-921c-5000eb9839b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.565 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.645 248514 DEBUG nova.objects.instance [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 24e9bc91-cab7-4459-921c-5000eb9839b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.677 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.678 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Ensure instance console log exists: /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.678 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.679 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.679 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.941 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.941 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:47 compute-0 nova_compute[248510]: 2025-12-13 08:53:47.964 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.042 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.042 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.050 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.050 248514 INFO nova.compute.claims [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:53:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2769: 321 pgs: 321 active+clean; 300 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 926 KiB/s wr, 47 op/s
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.279 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.647 248514 DEBUG nova.network.neutron [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Successfully created port: 0f4fe885-0823-4b8e-93ad-70a45aba4b2e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:53:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:53:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315581974' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.855 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.862 248514 DEBUG nova.compute.provider_tree [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.889 248514 DEBUG nova.scheduler.client.report [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.923 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.924 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.978 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:53:48 compute-0 nova_compute[248510]: 2025-12-13 08:53:48.978 248514 DEBUG nova.network.neutron [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.005 248514 INFO nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.030 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.153 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.155 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.155 248514 INFO nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Creating image(s)
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.180 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.205 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.226 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.230 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.284 248514 DEBUG nova.policy [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:53:49 compute-0 ceph-mon[76537]: pgmap v2769: 321 pgs: 321 active+clean; 300 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 926 KiB/s wr, 47 op/s
Dec 13 08:53:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/315581974' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.337 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.337 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.338 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.338 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.358 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.362 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:49.463 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 2001:db8::f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:49.465 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated
Dec 13 08:53:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:49.466 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:53:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:49.467 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[feddfe8e-1d70-4994-af02-d36b0a9bffe1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.749 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.819 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.953 248514 DEBUG nova.objects.instance [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid fa7f7cf9-50d4-461e-ab73-21e65aa729a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.985 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.985 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Ensure instance console log exists: /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.986 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.986 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:49 compute-0 nova_compute[248510]: 2025-12-13 08:53:49.987 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2770: 321 pgs: 321 active+clean; 327 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 556 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Dec 13 08:53:50 compute-0 nova_compute[248510]: 2025-12-13 08:53:50.266 248514 DEBUG nova.network.neutron [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Successfully created port: 6585c17a-67f3-4c7c-a637-4acf71e85c4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:53:50 compute-0 nova_compute[248510]: 2025-12-13 08:53:50.435 248514 DEBUG nova.network.neutron [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Successfully updated port: 0f4fe885-0823-4b8e-93ad-70a45aba4b2e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:53:50 compute-0 nova_compute[248510]: 2025-12-13 08:53:50.456 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-24e9bc91-cab7-4459-921c-5000eb9839b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:50 compute-0 nova_compute[248510]: 2025-12-13 08:53:50.456 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-24e9bc91-cab7-4459-921c-5000eb9839b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:50 compute-0 nova_compute[248510]: 2025-12-13 08:53:50.456 248514 DEBUG nova.network.neutron [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:53:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:53:50 compute-0 nova_compute[248510]: 2025-12-13 08:53:50.633 248514 DEBUG nova.compute.manager [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-changed-0f4fe885-0823-4b8e-93ad-70a45aba4b2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:50 compute-0 nova_compute[248510]: 2025-12-13 08:53:50.633 248514 DEBUG nova.compute.manager [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Refreshing instance network info cache due to event network-changed-0f4fe885-0823-4b8e-93ad-70a45aba4b2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:53:50 compute-0 nova_compute[248510]: 2025-12-13 08:53:50.634 248514 DEBUG oslo_concurrency.lockutils [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-24e9bc91-cab7-4459-921c-5000eb9839b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:50 compute-0 nova_compute[248510]: 2025-12-13 08:53:50.686 248514 DEBUG nova.network.neutron [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:51 compute-0 ceph-mon[76537]: pgmap v2770: 321 pgs: 321 active+clean; 327 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 556 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.444 248514 DEBUG nova.network.neutron [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Successfully updated port: 6585c17a-67f3-4c7c-a637-4acf71e85c4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.463 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.464 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.464 248514 DEBUG nova.network.neutron [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.563 248514 DEBUG nova.compute.manager [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-changed-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.564 248514 DEBUG nova.compute.manager [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Refreshing instance network info cache due to event network-changed-6585c17a-67f3-4c7c-a637-4acf71e85c4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.565 248514 DEBUG oslo_concurrency.lockutils [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.745 248514 DEBUG nova.network.neutron [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.909 248514 DEBUG nova.network.neutron [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Updating instance_info_cache with network_info: [{"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.943 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-24e9bc91-cab7-4459-921c-5000eb9839b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.943 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Instance network_info: |[{"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.944 248514 DEBUG oslo_concurrency.lockutils [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-24e9bc91-cab7-4459-921c-5000eb9839b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.944 248514 DEBUG nova.network.neutron [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Refreshing network info cache for port 0f4fe885-0823-4b8e-93ad-70a45aba4b2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.947 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Start _get_guest_xml network_info=[{"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.951 248514 WARNING nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.956 248514 DEBUG nova.virt.libvirt.host [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.957 248514 DEBUG nova.virt.libvirt.host [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.960 248514 DEBUG nova.virt.libvirt.host [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.960 248514 DEBUG nova.virt.libvirt.host [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.961 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.961 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.961 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.962 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.962 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.962 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.962 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.962 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.963 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.963 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.963 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.963 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:53:51 compute-0 nova_compute[248510]: 2025-12-13 08:53:51.967 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2771: 321 pgs: 321 active+clean; 327 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 556 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 13 08:53:52 compute-0 nova_compute[248510]: 2025-12-13 08:53:52.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:53:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324248881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:52 compute-0 nova_compute[248510]: 2025-12-13 08:53:52.588 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:52 compute-0 nova_compute[248510]: 2025-12-13 08:53:52.611 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:52 compute-0 nova_compute[248510]: 2025-12-13 08:53:52.615 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.044 248514 DEBUG nova.network.neutron [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updating instance_info_cache with network_info: [{"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.077 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.077 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Instance network_info: |[{"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.078 248514 DEBUG oslo_concurrency.lockutils [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.078 248514 DEBUG nova.network.neutron [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Refreshing network info cache for port 6585c17a-67f3-4c7c-a637-4acf71e85c4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.081 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Start _get_guest_xml network_info=[{"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.085 248514 WARNING nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.089 248514 DEBUG nova.virt.libvirt.host [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.090 248514 DEBUG nova.virt.libvirt.host [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.097 248514 DEBUG nova.virt.libvirt.host [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.098 248514 DEBUG nova.virt.libvirt.host [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.098 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.098 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.099 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.099 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.099 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.099 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.099 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.100 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.100 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.100 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.100 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.100 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.104 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:53:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2818679212' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.207 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.208 248514 DEBUG nova.virt.libvirt.vif [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1197553871',display_name='tempest-TestNetworkBasicOps-server-1197553871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1197553871',id=114,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN0tz1T7CupVnJjmfwxRYFIICN0QfqtBB3GDRo75b9UqyPvHjXgcUKDczGDgsRsdgI58Js+Fgc15P+M+AHNMFdqZevDxnjQbmKdK1Wi86XTXa0E7byhCNYmQdGF2ON/oDA==',key_name='tempest-TestNetworkBasicOps-982734592',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-03pexmx7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:46Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=24e9bc91-cab7-4459-921c-5000eb9839b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.209 248514 DEBUG nova.network.os_vif_util [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.210 248514 DEBUG nova.network.os_vif_util [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.211 248514 DEBUG nova.objects.instance [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 24e9bc91-cab7-4459-921c-5000eb9839b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.234 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:53:53 compute-0 nova_compute[248510]:   <uuid>24e9bc91-cab7-4459-921c-5000eb9839b7</uuid>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   <name>instance-00000072</name>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-1197553871</nova:name>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:53:51</nova:creationTime>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:53:53 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:53:53 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:53:53 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:53:53 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:53:53 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:53:53 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:53:53 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:53:53 compute-0 nova_compute[248510]:         <nova:port uuid="0f4fe885-0823-4b8e-93ad-70a45aba4b2e">
Dec 13 08:53:53 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <system>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <entry name="serial">24e9bc91-cab7-4459-921c-5000eb9839b7</entry>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <entry name="uuid">24e9bc91-cab7-4459-921c-5000eb9839b7</entry>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     </system>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   <os>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   </os>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   <features>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   </features>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/24e9bc91-cab7-4459-921c-5000eb9839b7_disk">
Dec 13 08:53:53 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       </source>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:53:53 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/24e9bc91-cab7-4459-921c-5000eb9839b7_disk.config">
Dec 13 08:53:53 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       </source>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:53:53 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:23:ed:41"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <target dev="tap0f4fe885-08"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/console.log" append="off"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <video>
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     </video>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:53:53 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:53:53 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:53:53 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:53:53 compute-0 nova_compute[248510]: </domain>
Dec 13 08:53:53 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.235 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Preparing to wait for external event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.235 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.236 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.236 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.237 248514 DEBUG nova.virt.libvirt.vif [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1197553871',display_name='tempest-TestNetworkBasicOps-server-1197553871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1197553871',id=114,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN0tz1T7CupVnJjmfwxRYFIICN0QfqtBB3GDRo75b9UqyPvHjXgcUKDczGDgsRsdgI58Js+Fgc15P+M+AHNMFdqZevDxnjQbmKdK1Wi86XTXa0E7byhCNYmQdGF2ON/oDA==',key_name='tempest-TestNetworkBasicOps-982734592',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-03pexmx7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:46Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=24e9bc91-cab7-4459-921c-5000eb9839b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.237 248514 DEBUG nova.network.os_vif_util [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.238 248514 DEBUG nova.network.os_vif_util [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.239 248514 DEBUG os_vif [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.240 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.240 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.244 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f4fe885-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.244 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f4fe885-08, col_values=(('external_ids', {'iface-id': '0f4fe885-0823-4b8e-93ad-70a45aba4b2e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:ed:41', 'vm-uuid': '24e9bc91-cab7-4459-921c-5000eb9839b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:53 compute-0 NetworkManager[50376]: <info>  [1765616033.2473] manager: (tap0f4fe885-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.257 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.259 248514 INFO os_vif [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08')
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.339 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.339 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.339 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:23:ed:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.340 248514 INFO nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Using config drive
Dec 13 08:53:53 compute-0 ceph-mon[76537]: pgmap v2771: 321 pgs: 321 active+clean; 327 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 556 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 13 08:53:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1324248881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2818679212' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.359 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.474 248514 DEBUG nova.network.neutron [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Updated VIF entry in instance network info cache for port 0f4fe885-0823-4b8e-93ad-70a45aba4b2e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.475 248514 DEBUG nova.network.neutron [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Updating instance_info_cache with network_info: [{"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.496 248514 DEBUG oslo_concurrency.lockutils [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-24e9bc91-cab7-4459-921c-5000eb9839b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:53:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3522719023' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.700 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.726 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:53 compute-0 nova_compute[248510]: 2025-12-13 08:53:53.730 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.091 248514 INFO nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Creating config drive at /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/disk.config
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.096 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3k2pccjs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2772: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.238 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3k2pccjs" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.262 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.266 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/disk.config 24e9bc91-cab7-4459-921c-5000eb9839b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:53:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2450293502' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.327 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.329 248514 DEBUG nova.virt.libvirt.vif [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:53:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=115,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGUq/AunPgAiW/gJ0J2O7Gg2tWHaC1FajarqmNhDhKsCdHoif6mzxgzomxd0iSuMUg8lqNyZetu6VpRVmxHZgqpb0ZC3j0IoZa8EwnwWYlucFJrp/EYR/i0MEEr0woib3Q==',key_name='tempest-TestSecurityGroupsBasicOps-33804060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-e5gqnowb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:49Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=fa7f7cf9-50d4-461e-ab73-21e65aa729a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.330 248514 DEBUG nova.network.os_vif_util [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.331 248514 DEBUG nova.network.os_vif_util [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.332 248514 DEBUG nova.objects.instance [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid fa7f7cf9-50d4-461e-ab73-21e65aa729a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.352 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:53:54 compute-0 nova_compute[248510]:   <uuid>fa7f7cf9-50d4-461e-ab73-21e65aa729a4</uuid>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   <name>instance-00000073</name>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191</nova:name>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:53:53</nova:creationTime>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:53:54 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:53:54 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:53:54 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:53:54 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:53:54 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:53:54 compute-0 nova_compute[248510]:         <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 08:53:54 compute-0 nova_compute[248510]:         <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:53:54 compute-0 nova_compute[248510]:         <nova:port uuid="6585c17a-67f3-4c7c-a637-4acf71e85c4f">
Dec 13 08:53:54 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <system>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <entry name="serial">fa7f7cf9-50d4-461e-ab73-21e65aa729a4</entry>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <entry name="uuid">fa7f7cf9-50d4-461e-ab73-21e65aa729a4</entry>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     </system>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   <os>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   </os>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   <features>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   </features>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk">
Dec 13 08:53:54 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       </source>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:53:54 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk.config">
Dec 13 08:53:54 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       </source>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:53:54 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:f5:d3:da"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <target dev="tap6585c17a-67"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/console.log" append="off"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <video>
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     </video>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:53:54 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:53:54 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:53:54 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:53:54 compute-0 nova_compute[248510]: </domain>
Dec 13 08:53:54 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.353 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Preparing to wait for external event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.354 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.354 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.354 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.355 248514 DEBUG nova.virt.libvirt.vif [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:53:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=115,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGUq/AunPgAiW/gJ0J2O7Gg2tWHaC1FajarqmNhDhKsCdHoif6mzxgzomxd0iSuMUg8lqNyZetu6VpRVmxHZgqpb0ZC3j0IoZa8EwnwWYlucFJrp/EYR/i0MEEr0woib3Q==',key_name='tempest-TestSecurityGroupsBasicOps-33804060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-e5gqnowb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:49Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=fa7f7cf9-50d4-461e-ab73-21e65aa729a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.355 248514 DEBUG nova.network.os_vif_util [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.356 248514 DEBUG nova.network.os_vif_util [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.356 248514 DEBUG os_vif [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.357 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.358 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.358 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.360 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.361 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6585c17a-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.361 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6585c17a-67, col_values=(('external_ids', {'iface-id': '6585c17a-67f3-4c7c-a637-4acf71e85c4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:d3:da', 'vm-uuid': 'fa7f7cf9-50d4-461e-ab73-21e65aa729a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.363 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:54 compute-0 NetworkManager[50376]: <info>  [1765616034.3646] manager: (tap6585c17a-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:53:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3522719023' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2450293502' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.377 248514 INFO os_vif [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67')
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.418 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/disk.config 24e9bc91-cab7-4459-921c-5000eb9839b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.418 248514 INFO nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Deleting local config drive /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/disk.config because it was imported into RBD.
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.446 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.446 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.446 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:f5:d3:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.447 248514 INFO nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Using config drive
Dec 13 08:53:54 compute-0 kernel: tap0f4fe885-08: entered promiscuous mode
Dec 13 08:53:54 compute-0 NetworkManager[50376]: <info>  [1765616034.4691] manager: (tap0f4fe885-08): new Tun device (/org/freedesktop/NetworkManager/Devices/463)
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.471 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:54 compute-0 ovn_controller[148476]: 2025-12-13T08:53:54Z|01101|binding|INFO|Claiming lport 0f4fe885-0823-4b8e-93ad-70a45aba4b2e for this chassis.
Dec 13 08:53:54 compute-0 ovn_controller[148476]: 2025-12-13T08:53:54Z|01102|binding|INFO|0f4fe885-0823-4b8e-93ad-70a45aba4b2e: Claiming fa:16:3e:23:ed:41 10.100.0.19
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.485 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:ed:41 10.100.0.19'], port_security=['fa:16:3e:23:ed:41 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '24e9bc91-cab7-4459-921c-5000eb9839b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2d56de33-8a42-4b7a-a729-f5a7b63e022f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19856422-45a4-439c-b584-d577928b61a5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0f4fe885-0823-4b8e-93ad-70a45aba4b2e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.486 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0f4fe885-0823-4b8e-93ad-70a45aba4b2e in datapath 2e91b4b4-aa39-4f2a-bb46-9126feb64b26 bound to our chassis
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.488 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e91b4b4-aa39-4f2a-bb46-9126feb64b26
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.500 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7155bc-4e13-49d7-aa70-d46cdfbc261e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.501 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2e91b4b4-a1 in ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.503 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2e91b4b4-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.503 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[02f072bb-30e4-4e71-a869-a100407ef68f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.504 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd647922-1d6c-4326-bd75-227384e06ebf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 systemd-udevd[362328]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.514 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[95d7cb35-c5ef-47d3-89b9-7c0707580626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 systemd-machined[210538]: New machine qemu-141-instance-00000072.
Dec 13 08:53:54 compute-0 NetworkManager[50376]: <info>  [1765616034.5260] device (tap0f4fe885-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:53:54 compute-0 NetworkManager[50376]: <info>  [1765616034.5266] device (tap0f4fe885-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:53:54 compute-0 ovn_controller[148476]: 2025-12-13T08:53:54Z|01103|binding|INFO|Setting lport 0f4fe885-0823-4b8e-93ad-70a45aba4b2e ovn-installed in OVS
Dec 13 08:53:54 compute-0 ovn_controller[148476]: 2025-12-13T08:53:54Z|01104|binding|INFO|Setting lport 0f4fe885-0823-4b8e-93ad-70a45aba4b2e up in Southbound
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:54 compute-0 systemd[1]: Started Virtual Machine qemu-141-instance-00000072.
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.531 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce534d93-f0ff-4dbf-89cb-640a9b5f8df5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.558 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[92fa9ea4-e249-4a0d-8466-73ddfe0c2dc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 NetworkManager[50376]: <info>  [1765616034.5652] manager: (tap2e91b4b4-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/464)
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.565 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[96bd98c2-e45a-45e7-83d7-2171b7503242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.601 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[62e23a2f-7644-445b-94a9-d360eccbcfd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.605 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9de645-f18d-4fe9-a305-49a4ebfb6296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 NetworkManager[50376]: <info>  [1765616034.6323] device (tap2e91b4b4-a0): carrier: link connected
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.638 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8a75ee-90b7-4310-963c-4b2189156d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.657 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc21fa1-ed68-44e3-a948-fad35ff7fa65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e91b4b4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:eb:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861184, 'reachable_time': 20409, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362365, 'error': None, 'target': 'ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.679 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[733019c9-0dbe-4d4e-b0b5-980473591a7c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:ebfd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 861184, 'tstamp': 861184}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362366, 'error': None, 'target': 'ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.699 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[31c6c457-f496-46af-9fab-a124e99ce7fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e91b4b4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:eb:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861184, 'reachable_time': 20409, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362367, 'error': None, 'target': 'ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.737 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9987e2-a9dc-4ae0-9f29-9422dc473739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.797 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d28700dc-e35b-4d46-8176-97b9927de3b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.799 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e91b4b4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.799 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.799 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e91b4b4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.806 248514 DEBUG nova.compute.manager [req-b674d2ec-e1aa-4df5-92ce-3af8ff8f71f9 req-1232f7a3-afaa-4c2a-9fb4-5199cdc7f010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.806 248514 DEBUG oslo_concurrency.lockutils [req-b674d2ec-e1aa-4df5-92ce-3af8ff8f71f9 req-1232f7a3-afaa-4c2a-9fb4-5199cdc7f010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.806 248514 DEBUG oslo_concurrency.lockutils [req-b674d2ec-e1aa-4df5-92ce-3af8ff8f71f9 req-1232f7a3-afaa-4c2a-9fb4-5199cdc7f010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.807 248514 DEBUG oslo_concurrency.lockutils [req-b674d2ec-e1aa-4df5-92ce-3af8ff8f71f9 req-1232f7a3-afaa-4c2a-9fb4-5199cdc7f010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.807 248514 DEBUG nova.compute.manager [req-b674d2ec-e1aa-4df5-92ce-3af8ff8f71f9 req-1232f7a3-afaa-4c2a-9fb4-5199cdc7f010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Processing event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.847 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:54 compute-0 NetworkManager[50376]: <info>  [1765616034.8479] manager: (tap2e91b4b4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Dec 13 08:53:54 compute-0 kernel: tap2e91b4b4-a0: entered promiscuous mode
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.852 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.855 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e91b4b4-a0, col_values=(('external_ids', {'iface-id': '924f5729-755e-4be7-818a-17dd23445f7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:54 compute-0 ovn_controller[148476]: 2025-12-13T08:53:54Z|01105|binding|INFO|Releasing lport 924f5729-755e-4be7-818a-17dd23445f7d from this chassis (sb_readonly=0)
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.872 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.877 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.879 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2e91b4b4-aa39-4f2a-bb46-9126feb64b26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2e91b4b4-aa39-4f2a-bb46-9126feb64b26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.880 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d0154801-7470-4335-8dab-696838f2c136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.882 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-2e91b4b4-aa39-4f2a-bb46-9126feb64b26
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/2e91b4b4-aa39-4f2a-bb46-9126feb64b26.pid.haproxy
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 2e91b4b4-aa39-4f2a-bb46-9126feb64b26
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:53:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.883 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'env', 'PROCESS_TAG=haproxy-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2e91b4b4-aa39-4f2a-bb46-9126feb64b26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.936 248514 INFO nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Creating config drive at /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/disk.config
Dec 13 08:53:54 compute-0 nova_compute[248510]: 2025-12-13 08:53:54.941 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywqikfqb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.087 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywqikfqb" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.114 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.124 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/disk.config fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.175 248514 DEBUG nova.network.neutron [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updated VIF entry in instance network info cache for port 6585c17a-67f3-4c7c-a637-4acf71e85c4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.176 248514 DEBUG nova.network.neutron [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updating instance_info_cache with network_info: [{"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.274 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/disk.config fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.275 248514 INFO nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Deleting local config drive /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/disk.config because it was imported into RBD.
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.307 248514 DEBUG oslo_concurrency.lockutils [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:53:55 compute-0 podman[362452]: 2025-12-13 08:53:55.342275267 +0000 UTC m=+0.065168635 container create 5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:53:55 compute-0 NetworkManager[50376]: <info>  [1765616035.3457] manager: (tap6585c17a-67): new Tun device (/org/freedesktop/NetworkManager/Devices/466)
Dec 13 08:53:55 compute-0 systemd-udevd[362346]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:53:55 compute-0 kernel: tap6585c17a-67: entered promiscuous mode
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.354 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:55 compute-0 ovn_controller[148476]: 2025-12-13T08:53:55Z|01106|binding|INFO|Claiming lport 6585c17a-67f3-4c7c-a637-4acf71e85c4f for this chassis.
Dec 13 08:53:55 compute-0 ovn_controller[148476]: 2025-12-13T08:53:55Z|01107|binding|INFO|6585c17a-67f3-4c7c-a637-4acf71e85c4f: Claiming fa:16:3e:f5:d3:da 10.100.0.11
Dec 13 08:53:55 compute-0 NetworkManager[50376]: <info>  [1765616035.3615] device (tap6585c17a-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:53:55 compute-0 NetworkManager[50376]: <info>  [1765616035.3628] device (tap6585c17a-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.362 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:d3:da 10.100.0.11'], port_security=['fa:16:3e:f5:d3:da 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fa7f7cf9-50d4-461e-ab73-21e65aa729a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3479ed9a-2670-4333-b282-6f40685ff746', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee7df75c-fefa-4bc0-977e-537259cc7755', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9839b158-8451-4098-b558-d860fc6a92ca, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6585c17a-67f3-4c7c-a637-4acf71e85c4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:53:55 compute-0 ceph-mon[76537]: pgmap v2772: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Dec 13 08:53:55 compute-0 ovn_controller[148476]: 2025-12-13T08:53:55Z|01108|binding|INFO|Setting lport 6585c17a-67f3-4c7c-a637-4acf71e85c4f ovn-installed in OVS
Dec 13 08:53:55 compute-0 ovn_controller[148476]: 2025-12-13T08:53:55Z|01109|binding|INFO|Setting lport 6585c17a-67f3-4c7c-a637-4acf71e85c4f up in Southbound
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.377 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:55 compute-0 systemd[1]: Started libpod-conmon-5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d.scope.
Dec 13 08:53:55 compute-0 systemd-machined[210538]: New machine qemu-142-instance-00000073.
Dec 13 08:53:55 compute-0 podman[362452]: 2025-12-13 08:53:55.308450589 +0000 UTC m=+0.031343977 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:53:55 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 08:53:55 compute-0 systemd[1]: Started Virtual Machine qemu-142-instance-00000073.
Dec 13 08:53:55 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 08:53:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c970440edf2f59bc003d5f5f6092186ad69c47ce3109c48710c40f04e5794b1f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.432 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.433 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:55 compute-0 podman[362452]: 2025-12-13 08:53:55.443952246 +0000 UTC m=+0.166845624 container init 5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:53:55 compute-0 podman[362452]: 2025-12-13 08:53:55.450468279 +0000 UTC m=+0.173361647 container start 5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 08:53:55 compute-0 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [NOTICE]   (362518) : New worker (362524) forked
Dec 13 08:53:55 compute-0 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [NOTICE]   (362518) : Loading success.
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.504 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.506 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616035.5039809, 24e9bc91-cab7-4459-921c-5000eb9839b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.506 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] VM Started (Lifecycle Event)
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.511 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.515 248514 INFO nova.virt.libvirt.driver [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Instance spawned successfully.
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.516 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:53:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.544 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.554 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6585c17a-67f3-4c7c-a637-4acf71e85c4f in datapath 3479ed9a-2670-4333-b282-6f40685ff746 unbound from our chassis
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.554 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.556 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3479ed9a-2670-4333-b282-6f40685ff746
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.565 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.566 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.567 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.567 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.568 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.568 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.581 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8035f2-65bc-4d15-ae9a-98c0c2b45820]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.604 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.604 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616035.5052018, 24e9bc91-cab7-4459-921c-5000eb9839b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.605 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] VM Paused (Lifecycle Event)
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.618 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[111c214e-1b48-407f-88d6-556fec01d122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.621 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a3724390-e9f8-4178-a53f-e7f7bb205263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.641 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.645 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616035.5105426, 24e9bc91-cab7-4459-921c-5000eb9839b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.646 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] VM Resumed (Lifecycle Event)
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.650 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed1e944-b2b4-42c6-909d-17e654002743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.657 248514 INFO nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Took 8.66 seconds to spawn the instance on the hypervisor.
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.658 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.668 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.669 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[856b93aa-0ea4-4b46-8cfd-1134e069bff7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3479ed9a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:9f:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857121, 'reachable_time': 24238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362539, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.672 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9d65761d-faf3-4454-a668-fc54ed048d51]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3479ed9a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 857140, 'tstamp': 857140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362547, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3479ed9a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 857143, 'tstamp': 857143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362547, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.688 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3479ed9a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.690 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.691 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.691 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3479ed9a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.692 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.692 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3479ed9a-20, col_values=(('external_ids', {'iface-id': 'd8892183-3d82-41f5-b0bd-dc5a1c170b1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:53:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.693 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.708 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.726 248514 INFO nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Took 10.47 seconds to build instance.
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.745 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.837 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616035.836684, fa7f7cf9-50d4-461e-ab73-21e65aa729a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.837 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] VM Started (Lifecycle Event)
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.866 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.871 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616035.83682, fa7f7cf9-50d4-461e-ab73-21e65aa729a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.871 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] VM Paused (Lifecycle Event)
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.910 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.915 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:53:55 compute-0 nova_compute[248510]: 2025-12-13 08:53:55.952 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:53:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2773: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 3.6 MiB/s wr, 72 op/s
Dec 13 08:53:57 compute-0 ceph-mon[76537]: pgmap v2773: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 3.6 MiB/s wr, 72 op/s
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.439 248514 DEBUG nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.440 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.441 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.441 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.441 248514 DEBUG nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] No waiting events found dispatching network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.442 248514 WARNING nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received unexpected event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e for instance with vm_state active and task_state None.
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.442 248514 DEBUG nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.442 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.443 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.443 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.443 248514 DEBUG nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Processing event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.444 248514 DEBUG nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.444 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.444 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.445 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.445 248514 DEBUG nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] No waiting events found dispatching network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.445 248514 WARNING nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received unexpected event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f for instance with vm_state building and task_state spawning.
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.446 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.450 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616037.4500172, fa7f7cf9-50d4-461e-ab73-21e65aa729a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.450 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] VM Resumed (Lifecycle Event)
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.452 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.457 248514 INFO nova.virt.libvirt.driver [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Instance spawned successfully.
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.458 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.542 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.550 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.553 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.554 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.554 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.555 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.555 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.556 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.623 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.689 248514 INFO nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Took 8.54 seconds to spawn the instance on the hypervisor.
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.689 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.819 248514 INFO nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Took 9.81 seconds to build instance.
Dec 13 08:53:57 compute-0 nova_compute[248510]: 2025-12-13 08:53:57.928 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:53:57 compute-0 podman[362585]: 2025-12-13 08:53:57.980302608 +0000 UTC m=+0.064374755 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 08:53:57 compute-0 podman[362584]: 2025-12-13 08:53:57.991963541 +0000 UTC m=+0.078270544 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 08:53:58 compute-0 podman[362583]: 2025-12-13 08:53:58.044131449 +0000 UTC m=+0.131771575 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller)
Dec 13 08:53:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2774: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 115 op/s
Dec 13 08:53:59 compute-0 nova_compute[248510]: 2025-12-13 08:53:59.364 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:53:59 compute-0 ceph-mon[76537]: pgmap v2774: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 115 op/s
Dec 13 08:53:59 compute-0 nova_compute[248510]: 2025-12-13 08:53:59.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:54:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2775: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 180 op/s
Dec 13 08:54:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:54:00 compute-0 ceph-mon[76537]: pgmap v2775: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 180 op/s
Dec 13 08:54:01 compute-0 nova_compute[248510]: 2025-12-13 08:54:01.318 248514 DEBUG nova.compute.manager [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-changed-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:01 compute-0 nova_compute[248510]: 2025-12-13 08:54:01.319 248514 DEBUG nova.compute.manager [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Refreshing instance network info cache due to event network-changed-6585c17a-67f3-4c7c-a637-4acf71e85c4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:54:01 compute-0 nova_compute[248510]: 2025-12-13 08:54:01.319 248514 DEBUG oslo_concurrency.lockutils [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:54:01 compute-0 nova_compute[248510]: 2025-12-13 08:54:01.319 248514 DEBUG oslo_concurrency.lockutils [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:54:01 compute-0 nova_compute[248510]: 2025-12-13 08:54:01.319 248514 DEBUG nova.network.neutron [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Refreshing network info cache for port 6585c17a-67f3-4c7c-a637-4acf71e85c4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:54:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2776: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Dec 13 08:54:02 compute-0 nova_compute[248510]: 2025-12-13 08:54:02.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:02 compute-0 nova_compute[248510]: 2025-12-13 08:54:02.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:54:03 compute-0 nova_compute[248510]: 2025-12-13 08:54:03.026 248514 DEBUG nova.network.neutron [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updated VIF entry in instance network info cache for port 6585c17a-67f3-4c7c-a637-4acf71e85c4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:54:03 compute-0 nova_compute[248510]: 2025-12-13 08:54:03.027 248514 DEBUG nova.network.neutron [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updating instance_info_cache with network_info: [{"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:54:03 compute-0 nova_compute[248510]: 2025-12-13 08:54:03.051 248514 DEBUG oslo_concurrency.lockutils [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:54:03 compute-0 ceph-mon[76537]: pgmap v2776: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Dec 13 08:54:03 compute-0 nova_compute[248510]: 2025-12-13 08:54:03.413 248514 DEBUG nova.compute.manager [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-changed-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:03 compute-0 nova_compute[248510]: 2025-12-13 08:54:03.414 248514 DEBUG nova.compute.manager [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Refreshing instance network info cache due to event network-changed-6585c17a-67f3-4c7c-a637-4acf71e85c4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:54:03 compute-0 nova_compute[248510]: 2025-12-13 08:54:03.415 248514 DEBUG oslo_concurrency.lockutils [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:54:03 compute-0 nova_compute[248510]: 2025-12-13 08:54:03.415 248514 DEBUG oslo_concurrency.lockutils [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:54:03 compute-0 nova_compute[248510]: 2025-12-13 08:54:03.415 248514 DEBUG nova.network.neutron [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Refreshing network info cache for port 6585c17a-67f3-4c7c-a637-4acf71e85c4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:54:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2777: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 235 op/s
Dec 13 08:54:04 compute-0 nova_compute[248510]: 2025-12-13 08:54:04.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:05 compute-0 ceph-mon[76537]: pgmap v2777: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 235 op/s
Dec 13 08:54:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:54:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2778: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 208 op/s
Dec 13 08:54:07 compute-0 ceph-mon[76537]: pgmap v2778: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 208 op/s
Dec 13 08:54:07 compute-0 nova_compute[248510]: 2025-12-13 08:54:07.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:07 compute-0 nova_compute[248510]: 2025-12-13 08:54:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:54:07 compute-0 nova_compute[248510]: 2025-12-13 08:54:07.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:54:07 compute-0 nova_compute[248510]: 2025-12-13 08:54:07.775 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:54:07 compute-0 nova_compute[248510]: 2025-12-13 08:54:07.885 248514 DEBUG nova.network.neutron [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updated VIF entry in instance network info cache for port 6585c17a-67f3-4c7c-a637-4acf71e85c4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:54:07 compute-0 nova_compute[248510]: 2025-12-13 08:54:07.886 248514 DEBUG nova.network.neutron [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updating instance_info_cache with network_info: [{"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:54:07 compute-0 nova_compute[248510]: 2025-12-13 08:54:07.911 248514 DEBUG oslo_concurrency.lockutils [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:54:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2779: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 32 KiB/s wr, 212 op/s
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.508 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.510 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.510 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.511 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.643 248514 DEBUG nova.compute.manager [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.645 248514 DEBUG nova.compute.manager [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing instance network info cache due to event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.645 248514 DEBUG oslo_concurrency.lockutils [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.716 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.716 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.717 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.717 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.718 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.719 248514 INFO nova.compute.manager [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Terminating instance
Dec 13 08:54:08 compute-0 nova_compute[248510]: 2025-12-13 08:54:08.720 248514 DEBUG nova.compute.manager [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:54:09 compute-0 kernel: tapb2143648-4c (unregistering): left promiscuous mode
Dec 13 08:54:09 compute-0 NetworkManager[50376]: <info>  [1765616049.1512] device (tapb2143648-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:54:09 compute-0 ovn_controller[148476]: 2025-12-13T08:54:09Z|01110|binding|INFO|Releasing lport b2143648-4c23-49b5-8777-433a5b34c7ce from this chassis (sb_readonly=0)
Dec 13 08:54:09 compute-0 ovn_controller[148476]: 2025-12-13T08:54:09Z|01111|binding|INFO|Setting lport b2143648-4c23-49b5-8777-433a5b34c7ce down in Southbound
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.163 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:09 compute-0 ovn_controller[148476]: 2025-12-13T08:54:09Z|01112|binding|INFO|Removing iface tapb2143648-4c ovn-installed in OVS
Dec 13 08:54:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:09.173 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b7:87 10.100.0.4'], port_security=['fa:16:3e:02:b7:87 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fcc617ec-f5f9-41bb-ad4b-86d790622e74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db56c55-78f1-455f-855e-db3acef05ff3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4d2c6ad4dc4848ac9f55ff1b9e829a', 'neutron:revision_number': '9', 'neutron:security_group_ids': '9bebfabc-b3c7-415b-9881-5a1028b3b8d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efdd787e-21fc-422f-be64-57ef7368490d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b2143648-4c23-49b5-8777-433a5b34c7ce) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:54:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:09.175 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b2143648-4c23-49b5-8777-433a5b34c7ce in datapath 6db56c55-78f1-455f-855e-db3acef05ff3 unbound from our chassis
Dec 13 08:54:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:09.182 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db56c55-78f1-455f-855e-db3acef05ff3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:54:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:09.183 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3e83a02c-b790-45b0-b781-6a369f137c23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:09.184 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 namespace which is not needed anymore
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:09 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Dec 13 08:54:09 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006f.scope: Consumed 17.066s CPU time.
Dec 13 08:54:09 compute-0 systemd-machined[210538]: Machine qemu-140-instance-0000006f terminated.
Dec 13 08:54:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:54:09
Dec 13 08:54:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:54:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:54:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'default.rgw.control']
Dec 13 08:54:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.381 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.394 248514 INFO nova.virt.libvirt.driver [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance destroyed successfully.
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.394 248514 DEBUG nova.objects.instance [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'resources' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.415 248514 DEBUG nova.virt.libvirt.vif [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:52:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1747506169',display_name='tempest-TestShelveInstance-server-1747506169',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1747506169',id=111,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAebkVWpSolmv9rvqo0lcOY35Sse8ZKcOxj7o5K69+ccF5rX0zOaHwOkKpIPYjoJDZ0lGFUDC6z1VdzfkWYgp+l6o2jOhZfRbhSP9Loovk747mSM12eSN6XwaL50YLDn8Q==',key_name='tempest-TestShelveInstance-1284595260',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:53:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff4d2c6ad4dc4848ac9f55ff1b9e829a',ramdisk_id='',reservation_id='r-wfxenjt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-2105398574',owner_user_name='tempest-TestShelveInstance-2105398574-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:53:30Z,user_data=None,user_id='fa34623cd3de4a47aa57959f09b3ff79',uuid=fcc617ec-f5f9-41bb-ad4b-86d790622e74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.416 248514 DEBUG nova.network.os_vif_util [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converting VIF {"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.417 248514 DEBUG nova.network.os_vif_util [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.417 248514 DEBUG os_vif [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.418 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.419 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2143648-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.420 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.424 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.427 248514 INFO os_vif [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c')
Dec 13 08:54:09 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [NOTICE]   (361549) : haproxy version is 2.8.14-c23fe91
Dec 13 08:54:09 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [NOTICE]   (361549) : path to executable is /usr/sbin/haproxy
Dec 13 08:54:09 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [WARNING]  (361549) : Exiting Master process...
Dec 13 08:54:09 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [ALERT]    (361549) : Current worker (361560) exited with code 143 (Terminated)
Dec 13 08:54:09 compute-0 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [WARNING]  (361549) : All workers exited. Exiting... (0)
Dec 13 08:54:09 compute-0 systemd[1]: libpod-60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0.scope: Deactivated successfully.
Dec 13 08:54:09 compute-0 podman[362667]: 2025-12-13 08:54:09.515239629 +0000 UTC m=+0.230008228 container died 60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:54:09 compute-0 ceph-mon[76537]: pgmap v2779: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 32 KiB/s wr, 212 op/s
Dec 13 08:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0-userdata-shm.mount: Deactivated successfully.
Dec 13 08:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-d00eeaad21779e0d1e4a21f624f33c14a068fe0d0787499d4a34517514951359-merged.mount: Deactivated successfully.
Dec 13 08:54:09 compute-0 podman[362667]: 2025-12-13 08:54:09.801751513 +0000 UTC m=+0.516520112 container cleanup 60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:54:09 compute-0 systemd[1]: libpod-conmon-60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0.scope: Deactivated successfully.
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.926 248514 DEBUG nova.compute.manager [req-fbbfa0c9-7344-429d-bb60-831f3a5e3d73 req-68ef9488-b103-4ae4-ae72-a92cfd25e3ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-unplugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.927 248514 DEBUG oslo_concurrency.lockutils [req-fbbfa0c9-7344-429d-bb60-831f3a5e3d73 req-68ef9488-b103-4ae4-ae72-a92cfd25e3ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.927 248514 DEBUG oslo_concurrency.lockutils [req-fbbfa0c9-7344-429d-bb60-831f3a5e3d73 req-68ef9488-b103-4ae4-ae72-a92cfd25e3ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.927 248514 DEBUG oslo_concurrency.lockutils [req-fbbfa0c9-7344-429d-bb60-831f3a5e3d73 req-68ef9488-b103-4ae4-ae72-a92cfd25e3ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.928 248514 DEBUG nova.compute.manager [req-fbbfa0c9-7344-429d-bb60-831f3a5e3d73 req-68ef9488-b103-4ae4-ae72-a92cfd25e3ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] No waiting events found dispatching network-vif-unplugged-b2143648-4c23-49b5-8777-433a5b34c7ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:54:09 compute-0 nova_compute[248510]: 2025-12-13 08:54:09.928 248514 DEBUG nova.compute.manager [req-fbbfa0c9-7344-429d-bb60-831f3a5e3d73 req-68ef9488-b103-4ae4-ae72-a92cfd25e3ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-unplugged-b2143648-4c23-49b5-8777-433a5b34c7ce for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:54:10 compute-0 podman[362723]: 2025-12-13 08:54:10.11075132 +0000 UTC m=+0.283403306 container remove 60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:54:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.126 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[633222af-5b6e-4af8-b452-145cde6ebdef]: (4, ('Sat Dec 13 08:54:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 (60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0)\n60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0\nSat Dec 13 08:54:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 (60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0)\n60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.131 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5952eda4-5e68-453e-9fa0-f2b4ad4c9b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.132 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6db56c55-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:54:10 compute-0 nova_compute[248510]: 2025-12-13 08:54:10.135 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:10 compute-0 kernel: tap6db56c55-70: left promiscuous mode
Dec 13 08:54:10 compute-0 nova_compute[248510]: 2025-12-13 08:54:10.150 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:10 compute-0 nova_compute[248510]: 2025-12-13 08:54:10.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.156 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[84cd24fc-ebf9-45d2-a17c-dc27adcab0d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.170 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3122cdbd-e800-45a0-b4c1-3ba2df1a6530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.174 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8f8bd8-6873-4dd4-8286-ae87c02b2120]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.197 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86ee3b01-60f2-4be5-9fc1-b1170c41a62b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858541, 'reachable_time': 37108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362738, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d6db56c55\x2d78f1\x2d455f\x2d855e\x2ddb3acef05ff3.mount: Deactivated successfully.
Dec 13 08:54:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.200 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:54:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.201 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d885ffe2-a325-40f5-bafa-7c5b18f5abb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2780: 321 pgs: 321 active+clean; 378 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 442 KiB/s wr, 182 op/s
Dec 13 08:54:10 compute-0 nova_compute[248510]: 2025-12-13 08:54:10.378 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:54:10 compute-0 nova_compute[248510]: 2025-12-13 08:54:10.403 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:54:10 compute-0 nova_compute[248510]: 2025-12-13 08:54:10.403 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:54:10 compute-0 nova_compute[248510]: 2025-12-13 08:54:10.403 248514 DEBUG oslo_concurrency.lockutils [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:54:10 compute-0 nova_compute[248510]: 2025-12-13 08:54:10.403 248514 DEBUG nova.network.neutron [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:54:10 compute-0 nova_compute[248510]: 2025-12-13 08:54:10.404 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:54:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:54:10 compute-0 ceph-mon[76537]: pgmap v2780: 321 pgs: 321 active+clean; 378 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 442 KiB/s wr, 182 op/s
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:54:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:54:10 compute-0 ovn_controller[148476]: 2025-12-13T08:54:10Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:ed:41 10.100.0.19
Dec 13 08:54:10 compute-0 ovn_controller[148476]: 2025-12-13T08:54:10Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:ed:41 10.100.0.19
Dec 13 08:54:11 compute-0 nova_compute[248510]: 2025-12-13 08:54:11.215 248514 INFO nova.virt.libvirt.driver [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deleting instance files /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74_del
Dec 13 08:54:11 compute-0 nova_compute[248510]: 2025-12-13 08:54:11.217 248514 INFO nova.virt.libvirt.driver [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deletion of /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74_del complete
Dec 13 08:54:11 compute-0 nova_compute[248510]: 2025-12-13 08:54:11.289 248514 INFO nova.compute.manager [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Took 2.57 seconds to destroy the instance on the hypervisor.
Dec 13 08:54:11 compute-0 nova_compute[248510]: 2025-12-13 08:54:11.292 248514 DEBUG oslo.service.loopingcall [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:54:11 compute-0 nova_compute[248510]: 2025-12-13 08:54:11.294 248514 DEBUG nova.compute.manager [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:54:11 compute-0 nova_compute[248510]: 2025-12-13 08:54:11.294 248514 DEBUG nova.network.neutron [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.026 248514 DEBUG nova.compute.manager [req-c3e99845-31c9-4712-b15d-49a7e997978d req-dceef5af-2de8-409a-acca-55a7658a9fd9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.027 248514 DEBUG oslo_concurrency.lockutils [req-c3e99845-31c9-4712-b15d-49a7e997978d req-dceef5af-2de8-409a-acca-55a7658a9fd9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.027 248514 DEBUG oslo_concurrency.lockutils [req-c3e99845-31c9-4712-b15d-49a7e997978d req-dceef5af-2de8-409a-acca-55a7658a9fd9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.027 248514 DEBUG oslo_concurrency.lockutils [req-c3e99845-31c9-4712-b15d-49a7e997978d req-dceef5af-2de8-409a-acca-55a7658a9fd9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.028 248514 DEBUG nova.compute.manager [req-c3e99845-31c9-4712-b15d-49a7e997978d req-dceef5af-2de8-409a-acca-55a7658a9fd9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] No waiting events found dispatching network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.028 248514 WARNING nova.compute.manager [req-c3e99845-31c9-4712-b15d-49a7e997978d req-dceef5af-2de8-409a-acca-55a7658a9fd9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received unexpected event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce for instance with vm_state active and task_state deleting.
Dec 13 08:54:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2781: 321 pgs: 321 active+clean; 378 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 911 KiB/s rd, 440 KiB/s wr, 97 op/s
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.436 248514 DEBUG nova.network.neutron [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updated VIF entry in instance network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.437 248514 DEBUG nova.network.neutron [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.467 248514 DEBUG oslo_concurrency.lockutils [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.538 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.689 248514 DEBUG nova.network.neutron [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.711 248514 INFO nova.compute.manager [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Took 1.42 seconds to deallocate network for instance.
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.864 248514 DEBUG nova.compute.manager [req-68d59fe8-a22b-41bf-8f29-40298b2088c8 req-fd8ff017-dfc1-4563-8df9-420f240f9cbd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-deleted-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.900 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.901 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.901 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.901 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.902 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.944 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:12 compute-0 nova_compute[248510]: 2025-12-13 08:54:12.945 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.110 248514 DEBUG oslo_concurrency.processutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:54:13 compute-0 ceph-mon[76537]: pgmap v2781: 321 pgs: 321 active+clean; 378 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 911 KiB/s rd, 440 KiB/s wr, 97 op/s
Dec 13 08:54:13 compute-0 ovn_controller[148476]: 2025-12-13T08:54:13Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f5:d3:da 10.100.0.11
Dec 13 08:54:13 compute-0 ovn_controller[148476]: 2025-12-13T08:54:13Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f5:d3:da 10.100.0.11
Dec 13 08:54:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:54:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/309752790' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:13.512 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:54:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:13.512 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.514 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.527 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.645 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.646 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.650 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.650 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.654 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.655 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.659 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.659 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:54:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:54:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/468648036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.712 248514 DEBUG oslo_concurrency.processutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.719 248514 DEBUG nova.compute.provider_tree [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.740 248514 DEBUG nova.scheduler.client.report [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.766 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.794 248514 INFO nova.scheduler.client.report [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Deleted allocations for instance fcc617ec-f5f9-41bb-ad4b-86d790622e74
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.873 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.938 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.940 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2831MB free_disk=59.802959844470024GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.940 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:13 compute-0 nova_compute[248510]: 2025-12-13 08:54:13.940 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.017 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance af2dc023-560c-4c66-b330-e41218a7a4eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.017 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2d2a33c7-0a90-4b64-b291-b268d37dce5e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.017 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 24e9bc91-cab7-4459-921c-5000eb9839b7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.018 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance fa7f7cf9-50d4-461e-ab73-21e65aa729a4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.018 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.018 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.120 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:54:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2782: 321 pgs: 321 active+clean; 355 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.3 MiB/s wr, 237 op/s
Dec 13 08:54:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/309752790' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/468648036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.421 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:54:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1321916222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.759 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.766 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.786 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.811 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:54:14 compute-0 nova_compute[248510]: 2025-12-13 08:54:14.812 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:54:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2636081950' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:54:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:54:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2636081950' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:54:15 compute-0 ceph-mon[76537]: pgmap v2782: 321 pgs: 321 active+clean; 355 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.3 MiB/s wr, 237 op/s
Dec 13 08:54:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1321916222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2636081950' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:54:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2636081950' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:54:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:54:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2783: 321 pgs: 321 active+clean; 355 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 867 KiB/s rd, 4.3 MiB/s wr, 157 op/s
Dec 13 08:54:16 compute-0 nova_compute[248510]: 2025-12-13 08:54:16.813 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:54:16 compute-0 nova_compute[248510]: 2025-12-13 08:54:16.813 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:54:16 compute-0 nova_compute[248510]: 2025-12-13 08:54:16.813 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:54:16 compute-0 nova_compute[248510]: 2025-12-13 08:54:16.813 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:54:17 compute-0 ceph-mon[76537]: pgmap v2783: 321 pgs: 321 active+clean; 355 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 867 KiB/s rd, 4.3 MiB/s wr, 157 op/s
Dec 13 08:54:17 compute-0 nova_compute[248510]: 2025-12-13 08:54:17.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2784: 321 pgs: 321 active+clean; 359 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 4.3 MiB/s wr, 161 op/s
Dec 13 08:54:19 compute-0 nova_compute[248510]: 2025-12-13 08:54:19.423 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:19 compute-0 ceph-mon[76537]: pgmap v2784: 321 pgs: 321 active+clean; 359 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 4.3 MiB/s wr, 161 op/s
Dec 13 08:54:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2785: 321 pgs: 321 active+clean; 359 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 772 KiB/s rd, 4.3 MiB/s wr, 159 op/s
Dec 13 08:54:20 compute-0 ceph-mon[76537]: pgmap v2785: 321 pgs: 321 active+clean; 359 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 772 KiB/s rd, 4.3 MiB/s wr, 159 op/s
Dec 13 08:54:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0030516155096611615 of space, bias 1.0, pg target 0.9154846528983485 quantized to 32 (current 32)
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697195585548564 of space, bias 1.0, pg target 0.2009158675664569 quantized to 32 (current 32)
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.738067775993847e-07 of space, bias 4.0, pg target 0.0006885681331192617 quantized to 16 (current 32)
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:54:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:54:21 compute-0 nova_compute[248510]: 2025-12-13 08:54:21.374 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:21 compute-0 nova_compute[248510]: 2025-12-13 08:54:21.375 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:21 compute-0 nova_compute[248510]: 2025-12-13 08:54:21.375 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:21 compute-0 nova_compute[248510]: 2025-12-13 08:54:21.375 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:21 compute-0 nova_compute[248510]: 2025-12-13 08:54:21.375 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:21 compute-0 nova_compute[248510]: 2025-12-13 08:54:21.377 248514 INFO nova.compute.manager [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Terminating instance
Dec 13 08:54:21 compute-0 nova_compute[248510]: 2025-12-13 08:54:21.378 248514 DEBUG nova.compute.manager [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:54:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.515 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:21 compute-0 nova_compute[248510]: 2025-12-13 08:54:21.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:54:21 compute-0 kernel: tap6585c17a-67 (unregistering): left promiscuous mode
Dec 13 08:54:21 compute-0 NetworkManager[50376]: <info>  [1765616061.8967] device (tap6585c17a-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:54:21 compute-0 ovn_controller[148476]: 2025-12-13T08:54:21Z|01113|binding|INFO|Releasing lport 6585c17a-67f3-4c7c-a637-4acf71e85c4f from this chassis (sb_readonly=0)
Dec 13 08:54:21 compute-0 ovn_controller[148476]: 2025-12-13T08:54:21Z|01114|binding|INFO|Setting lport 6585c17a-67f3-4c7c-a637-4acf71e85c4f down in Southbound
Dec 13 08:54:21 compute-0 nova_compute[248510]: 2025-12-13 08:54:21.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:21 compute-0 ovn_controller[148476]: 2025-12-13T08:54:21Z|01115|binding|INFO|Removing iface tap6585c17a-67 ovn-installed in OVS
Dec 13 08:54:21 compute-0 nova_compute[248510]: 2025-12-13 08:54:21.909 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.923 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:d3:da 10.100.0.11', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fa7f7cf9-50d4-461e-ab73-21e65aa729a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3479ed9a-2670-4333-b282-6f40685ff746', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9839b158-8451-4098-b558-d860fc6a92ca, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6585c17a-67f3-4c7c-a637-4acf71e85c4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:54:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.926 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6585c17a-67f3-4c7c-a637-4acf71e85c4f in datapath 3479ed9a-2670-4333-b282-6f40685ff746 unbound from our chassis
Dec 13 08:54:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.929 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3479ed9a-2670-4333-b282-6f40685ff746
Dec 13 08:54:21 compute-0 nova_compute[248510]: 2025-12-13 08:54:21.938 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.960 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[001b701f-4db9-4424-a2a0-8c3b94592a2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.990 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1f705b3d-54e7-406f-957f-62e9d58a21c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:21 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000073.scope: Deactivated successfully.
Dec 13 08:54:21 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000073.scope: Consumed 15.477s CPU time.
Dec 13 08:54:21 compute-0 systemd-machined[210538]: Machine qemu-142-instance-00000073 terminated.
Dec 13 08:54:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.995 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[72584c34-8caf-4a5e-b9ef-45945342b359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.022 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2432e567-6925-42e1-94b7-6a914d4056cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.039 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e2745099-e6d7-42d1-a7da-d5d568687e96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3479ed9a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:9f:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1482, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1482, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857121, 'reachable_time': 24238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 14, 'inoctets': 992, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 14, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 992, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 14, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362820, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.062 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.062 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.063 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.063 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.063 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.063 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6404d0f1-5aca-478f-bc84-17172e52e2de]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3479ed9a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 857140, 'tstamp': 857140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362821, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3479ed9a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 857143, 'tstamp': 857143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362821, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.064 248514 INFO nova.compute.manager [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Terminating instance
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.065 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3479ed9a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.065 248514 DEBUG nova.compute.manager [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.066 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.072 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3479ed9a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.072 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.073 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3479ed9a-20, col_values=(('external_ids', {'iface-id': 'd8892183-3d82-41f5-b0bd-dc5a1c170b1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.073 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:54:22 compute-0 kernel: tap0f4fe885-08 (unregistering): left promiscuous mode
Dec 13 08:54:22 compute-0 NetworkManager[50376]: <info>  [1765616062.1299] device (tap0f4fe885-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:54:22 compute-0 ovn_controller[148476]: 2025-12-13T08:54:22Z|01116|binding|INFO|Releasing lport 0f4fe885-0823-4b8e-93ad-70a45aba4b2e from this chassis (sb_readonly=0)
Dec 13 08:54:22 compute-0 ovn_controller[148476]: 2025-12-13T08:54:22Z|01117|binding|INFO|Setting lport 0f4fe885-0823-4b8e-93ad-70a45aba4b2e down in Southbound
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.141 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:22 compute-0 ovn_controller[148476]: 2025-12-13T08:54:22Z|01118|binding|INFO|Removing iface tap0f4fe885-08 ovn-installed in OVS
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.143 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.147 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:ed:41 10.100.0.19'], port_security=['fa:16:3e:23:ed:41 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '24e9bc91-cab7-4459-921c-5000eb9839b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2d56de33-8a42-4b7a-a729-f5a7b63e022f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19856422-45a4-439c-b584-d577928b61a5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0f4fe885-0823-4b8e-93ad-70a45aba4b2e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.149 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0f4fe885-0823-4b8e-93ad-70a45aba4b2e in datapath 2e91b4b4-aa39-4f2a-bb46-9126feb64b26 unbound from our chassis
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.151 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2e91b4b4-aa39-4f2a-bb46-9126feb64b26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.152 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e3efacda-dfc3-4754-8c8b-b48816d70add]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.153 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26 namespace which is not needed anymore
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.160 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:22 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000072.scope: Deactivated successfully.
Dec 13 08:54:22 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000072.scope: Consumed 16.204s CPU time.
Dec 13 08:54:22 compute-0 systemd-machined[210538]: Machine qemu-141-instance-00000072 terminated.
Dec 13 08:54:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2786: 321 pgs: 321 active+clean; 359 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.215 248514 INFO nova.virt.libvirt.driver [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Instance destroyed successfully.
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.216 248514 DEBUG nova.objects.instance [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid fa7f7cf9-50d4-461e-ab73-21e65aa729a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.236 248514 DEBUG nova.virt.libvirt.vif [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:53:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=115,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGUq/AunPgAiW/gJ0J2O7Gg2tWHaC1FajarqmNhDhKsCdHoif6mzxgzomxd0iSuMUg8lqNyZetu6VpRVmxHZgqpb0ZC3j0IoZa8EwnwWYlucFJrp/EYR/i0MEEr0woib3Q==',key_name='tempest-TestSecurityGroupsBasicOps-33804060',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:53:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-e5gqnowb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:53:57Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=fa7f7cf9-50d4-461e-ab73-21e65aa729a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.236 248514 DEBUG nova.network.os_vif_util [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.237 248514 DEBUG nova.network.os_vif_util [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.238 248514 DEBUG os_vif [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.239 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6585c17a-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.241 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.247 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.250 248514 INFO os_vif [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67')
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.329 248514 INFO nova.virt.libvirt.driver [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Instance destroyed successfully.
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.330 248514 DEBUG nova.objects.instance [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 24e9bc91-cab7-4459-921c-5000eb9839b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.344 248514 DEBUG nova.virt.libvirt.vif [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1197553871',display_name='tempest-TestNetworkBasicOps-server-1197553871',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1197553871',id=114,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN0tz1T7CupVnJjmfwxRYFIICN0QfqtBB3GDRo75b9UqyPvHjXgcUKDczGDgsRsdgI58Js+Fgc15P+M+AHNMFdqZevDxnjQbmKdK1Wi86XTXa0E7byhCNYmQdGF2ON/oDA==',key_name='tempest-TestNetworkBasicOps-982734592',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:53:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-03pexmx7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:53:55Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=24e9bc91-cab7-4459-921c-5000eb9839b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.344 248514 DEBUG nova.network.os_vif_util [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.345 248514 DEBUG nova.network.os_vif_util [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.345 248514 DEBUG os_vif [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.347 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f4fe885-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.349 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.350 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.353 248514 INFO os_vif [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08')
Dec 13 08:54:22 compute-0 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [NOTICE]   (362518) : haproxy version is 2.8.14-c23fe91
Dec 13 08:54:22 compute-0 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [NOTICE]   (362518) : path to executable is /usr/sbin/haproxy
Dec 13 08:54:22 compute-0 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [WARNING]  (362518) : Exiting Master process...
Dec 13 08:54:22 compute-0 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [WARNING]  (362518) : Exiting Master process...
Dec 13 08:54:22 compute-0 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [ALERT]    (362518) : Current worker (362524) exited with code 143 (Terminated)
Dec 13 08:54:22 compute-0 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [WARNING]  (362518) : All workers exited. Exiting... (0)
Dec 13 08:54:22 compute-0 systemd[1]: libpod-5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d.scope: Deactivated successfully.
Dec 13 08:54:22 compute-0 podman[362858]: 2025-12-13 08:54:22.442686423 +0000 UTC m=+0.187233375 container died 5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.591 248514 DEBUG nova.compute.manager [req-62f67993-2368-45c9-b4ce-22d24d490a5e req-d9d7e890-e84d-437f-aa70-71c4291d8022 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-vif-unplugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.591 248514 DEBUG oslo_concurrency.lockutils [req-62f67993-2368-45c9-b4ce-22d24d490a5e req-d9d7e890-e84d-437f-aa70-71c4291d8022 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.592 248514 DEBUG oslo_concurrency.lockutils [req-62f67993-2368-45c9-b4ce-22d24d490a5e req-d9d7e890-e84d-437f-aa70-71c4291d8022 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.592 248514 DEBUG oslo_concurrency.lockutils [req-62f67993-2368-45c9-b4ce-22d24d490a5e req-d9d7e890-e84d-437f-aa70-71c4291d8022 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.592 248514 DEBUG nova.compute.manager [req-62f67993-2368-45c9-b4ce-22d24d490a5e req-d9d7e890-e84d-437f-aa70-71c4291d8022 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] No waiting events found dispatching network-vif-unplugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.592 248514 DEBUG nova.compute.manager [req-62f67993-2368-45c9-b4ce-22d24d490a5e req-d9d7e890-e84d-437f-aa70-71c4291d8022 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-vif-unplugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:54:22 compute-0 ovn_controller[148476]: 2025-12-13T08:54:22Z|01119|binding|INFO|Releasing lport d8892183-3d82-41f5-b0bd-dc5a1c170b1a from this chassis (sb_readonly=0)
Dec 13 08:54:22 compute-0 ovn_controller[148476]: 2025-12-13T08:54:22Z|01120|binding|INFO|Releasing lport eef5d4b2-f2d3-4d15-9528-0e68d65ce454 from this chassis (sb_readonly=0)
Dec 13 08:54:22 compute-0 ovn_controller[148476]: 2025-12-13T08:54:22Z|01121|binding|INFO|Releasing lport 924f5729-755e-4be7-818a-17dd23445f7d from this chassis (sb_readonly=0)
Dec 13 08:54:22 compute-0 nova_compute[248510]: 2025-12-13 08:54:22.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d-userdata-shm.mount: Deactivated successfully.
Dec 13 08:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c970440edf2f59bc003d5f5f6092186ad69c47ce3109c48710c40f04e5794b1f-merged.mount: Deactivated successfully.
Dec 13 08:54:23 compute-0 ceph-mon[76537]: pgmap v2786: 321 pgs: 321 active+clean; 359 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Dec 13 08:54:23 compute-0 podman[362858]: 2025-12-13 08:54:23.503155143 +0000 UTC m=+1.247702105 container cleanup 5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 08:54:23 compute-0 systemd[1]: libpod-conmon-5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d.scope: Deactivated successfully.
Dec 13 08:54:23 compute-0 podman[362939]: 2025-12-13 08:54:23.654379644 +0000 UTC m=+0.124847191 container remove 5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:54:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.660 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e359c7-a9a0-4e43-959b-fa75bf07367f]: (4, ('Sat Dec 13 08:54:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26 (5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d)\n5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d\nSat Dec 13 08:54:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26 (5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d)\n5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.662 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e4688b7e-6dc4-4154-a724-d061e38bc45c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.663 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e91b4b4-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:23 compute-0 nova_compute[248510]: 2025-12-13 08:54:23.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:23 compute-0 kernel: tap2e91b4b4-a0: left promiscuous mode
Dec 13 08:54:23 compute-0 nova_compute[248510]: 2025-12-13 08:54:23.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.684 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[24140a55-c054-4de6-a987-9a7b4e5a1db2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.701 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d62f610c-3d8d-4554-94a2-fda71b254706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.703 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8477655b-b702-47df-9c4d-78423fdd28ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.725 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[67ea16b8-d63e-4454-8565-7930e67da868]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861176, 'reachable_time': 18940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362955, 'error': None, 'target': 'ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.727 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:54:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.728 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[817ed9c3-86b6-496b-9381-88a98c6937f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d2e91b4b4\x2daa39\x2d4f2a\x2dbb46\x2d9126feb64b26.mount: Deactivated successfully.
Dec 13 08:54:23 compute-0 nova_compute[248510]: 2025-12-13 08:54:23.921 248514 INFO nova.virt.libvirt.driver [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Deleting instance files /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4_del
Dec 13 08:54:23 compute-0 nova_compute[248510]: 2025-12-13 08:54:23.922 248514 INFO nova.virt.libvirt.driver [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Deletion of /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4_del complete
Dec 13 08:54:23 compute-0 nova_compute[248510]: 2025-12-13 08:54:23.931 248514 INFO nova.virt.libvirt.driver [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Deleting instance files /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7_del
Dec 13 08:54:23 compute-0 nova_compute[248510]: 2025-12-13 08:54:23.932 248514 INFO nova.virt.libvirt.driver [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Deletion of /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7_del complete
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.047 248514 INFO nova.compute.manager [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Took 1.98 seconds to destroy the instance on the hypervisor.
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.048 248514 DEBUG oslo.service.loopingcall [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.048 248514 DEBUG nova.compute.manager [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.048 248514 DEBUG nova.network.neutron [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.072 248514 INFO nova.compute.manager [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Took 2.69 seconds to destroy the instance on the hypervisor.
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.073 248514 DEBUG oslo.service.loopingcall [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.073 248514 DEBUG nova.compute.manager [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.073 248514 DEBUG nova.network.neutron [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:54:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2787: 321 pgs: 321 active+clean; 266 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 3.9 MiB/s wr, 177 op/s
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.393 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616049.3917692, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.393 248514 INFO nova.compute.manager [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Stopped (Lifecycle Event)
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.425 248514 DEBUG nova.compute.manager [None req-eccedc19-d0b2-4dbf-a796-f7b480405e59 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:54:24 compute-0 ceph-mon[76537]: pgmap v2787: 321 pgs: 321 active+clean; 266 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 3.9 MiB/s wr, 177 op/s
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.756 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.757 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.757 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.757 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.758 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] No waiting events found dispatching network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.758 248514 WARNING nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received unexpected event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f for instance with vm_state active and task_state deleting.
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.758 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-vif-unplugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.758 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.759 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.759 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.759 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] No waiting events found dispatching network-vif-unplugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.759 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-vif-unplugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.760 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.760 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.760 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.760 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.761 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] No waiting events found dispatching network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:54:24 compute-0 nova_compute[248510]: 2025-12-13 08:54:24.761 248514 WARNING nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received unexpected event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e for instance with vm_state active and task_state deleting.
Dec 13 08:54:25 compute-0 nova_compute[248510]: 2025-12-13 08:54:25.442 248514 DEBUG nova.network.neutron [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:54:25 compute-0 nova_compute[248510]: 2025-12-13 08:54:25.479 248514 DEBUG nova.network.neutron [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:54:25 compute-0 nova_compute[248510]: 2025-12-13 08:54:25.500 248514 INFO nova.compute.manager [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Took 1.45 seconds to deallocate network for instance.
Dec 13 08:54:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:54:25 compute-0 nova_compute[248510]: 2025-12-13 08:54:25.594 248514 INFO nova.compute.manager [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Took 1.52 seconds to deallocate network for instance.
Dec 13 08:54:25 compute-0 nova_compute[248510]: 2025-12-13 08:54:25.730 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:25 compute-0 nova_compute[248510]: 2025-12-13 08:54:25.731 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:25 compute-0 nova_compute[248510]: 2025-12-13 08:54:25.877 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:25 compute-0 nova_compute[248510]: 2025-12-13 08:54:25.919 248514 DEBUG nova.compute.manager [req-41de5a8d-8ff7-4991-b2ed-5f920c8e663f req-7e77f453-c30c-4ead-a4a7-2019edb98290 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-vif-deleted-0f4fe885-0823-4b8e-93ad-70a45aba4b2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:26 compute-0 nova_compute[248510]: 2025-12-13 08:54:26.027 248514 DEBUG oslo_concurrency.processutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:54:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2788: 321 pgs: 321 active+clean; 266 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 44 KiB/s wr, 36 op/s
Dec 13 08:54:26 compute-0 ceph-mon[76537]: pgmap v2788: 321 pgs: 321 active+clean; 266 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 44 KiB/s wr, 36 op/s
Dec 13 08:54:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:54:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/742887035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:26 compute-0 nova_compute[248510]: 2025-12-13 08:54:26.740 248514 DEBUG oslo_concurrency.processutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.713s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:54:26 compute-0 nova_compute[248510]: 2025-12-13 08:54:26.749 248514 DEBUG nova.compute.provider_tree [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:54:26 compute-0 nova_compute[248510]: 2025-12-13 08:54:26.798 248514 DEBUG nova.scheduler.client.report [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:54:26 compute-0 nova_compute[248510]: 2025-12-13 08:54:26.885 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:26 compute-0 nova_compute[248510]: 2025-12-13 08:54:26.888 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:26 compute-0 nova_compute[248510]: 2025-12-13 08:54:26.951 248514 INFO nova.scheduler.client.report [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 24e9bc91-cab7-4459-921c-5000eb9839b7
Dec 13 08:54:27 compute-0 nova_compute[248510]: 2025-12-13 08:54:27.067 248514 DEBUG oslo_concurrency.processutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:54:27 compute-0 nova_compute[248510]: 2025-12-13 08:54:27.118 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:27 compute-0 nova_compute[248510]: 2025-12-13 08:54:27.350 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:27 compute-0 nova_compute[248510]: 2025-12-13 08:54:27.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:27 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/742887035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:54:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225026825' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:27 compute-0 nova_compute[248510]: 2025-12-13 08:54:27.744 248514 DEBUG oslo_concurrency.processutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:54:27 compute-0 nova_compute[248510]: 2025-12-13 08:54:27.750 248514 DEBUG nova.compute.provider_tree [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:54:27 compute-0 nova_compute[248510]: 2025-12-13 08:54:27.809 248514 DEBUG nova.scheduler.client.report [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:54:27 compute-0 nova_compute[248510]: 2025-12-13 08:54:27.920 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:28 compute-0 nova_compute[248510]: 2025-12-13 08:54:28.128 248514 DEBUG nova.compute.manager [req-abd24438-a2f6-42b9-88dd-a70f3a663452 req-684ce535-7b6c-4373-8876-10a6e1c6d357 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-vif-deleted-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:28 compute-0 nova_compute[248510]: 2025-12-13 08:54:28.131 248514 INFO nova.scheduler.client.report [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance fa7f7cf9-50d4-461e-ab73-21e65aa729a4
Dec 13 08:54:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2789: 321 pgs: 321 active+clean; 200 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 45 KiB/s wr, 62 op/s
Dec 13 08:54:28 compute-0 nova_compute[248510]: 2025-12-13 08:54:28.389 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/225026825' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:28 compute-0 ceph-mon[76537]: pgmap v2789: 321 pgs: 321 active+clean; 200 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 45 KiB/s wr, 62 op/s
Dec 13 08:54:28 compute-0 podman[363004]: 2025-12-13 08:54:28.974940816 +0000 UTC m=+0.060429616 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 13 08:54:28 compute-0 podman[363003]: 2025-12-13 08:54:28.983172813 +0000 UTC m=+0.066961480 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:54:29 compute-0 podman[363002]: 2025-12-13 08:54:29.013268517 +0000 UTC m=+0.097856734 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 08:54:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2790: 321 pgs: 321 active+clean; 200 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 47 KiB/s wr, 59 op/s
Dec 13 08:54:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:54:30 compute-0 sudo[363064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:54:30 compute-0 sudo[363064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:54:30 compute-0 sudo[363064]: pam_unix(sudo:session): session closed for user root
Dec 13 08:54:30 compute-0 sudo[363089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:54:30 compute-0 sudo[363089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:54:31 compute-0 ceph-mon[76537]: pgmap v2790: 321 pgs: 321 active+clean; 200 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 47 KiB/s wr, 59 op/s
Dec 13 08:54:31 compute-0 sudo[363089]: pam_unix(sudo:session): session closed for user root
Dec 13 08:54:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:54:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:54:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:54:31 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:54:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:54:31 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:54:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:54:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:54:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:54:31 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:54:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:54:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:54:31 compute-0 sudo[363145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:54:31 compute-0 sudo[363145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:54:31 compute-0 sudo[363145]: pam_unix(sudo:session): session closed for user root
Dec 13 08:54:31 compute-0 sudo[363170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:54:31 compute-0 sudo[363170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:54:31 compute-0 ovn_controller[148476]: 2025-12-13T08:54:31Z|01122|binding|INFO|Releasing lport d8892183-3d82-41f5-b0bd-dc5a1c170b1a from this chassis (sb_readonly=0)
Dec 13 08:54:31 compute-0 ovn_controller[148476]: 2025-12-13T08:54:31Z|01123|binding|INFO|Releasing lport eef5d4b2-f2d3-4d15-9528-0e68d65ce454 from this chassis (sb_readonly=0)
Dec 13 08:54:31 compute-0 nova_compute[248510]: 2025-12-13 08:54:31.746 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:31 compute-0 podman[363208]: 2025-12-13 08:54:31.920790118 +0000 UTC m=+0.041254756 container create 5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_austin, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 08:54:31 compute-0 systemd[1]: Started libpod-conmon-5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918.scope.
Dec 13 08:54:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:54:31 compute-0 podman[363208]: 2025-12-13 08:54:31.902264213 +0000 UTC m=+0.022728881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:54:32 compute-0 podman[363208]: 2025-12-13 08:54:32.009965734 +0000 UTC m=+0.130430392 container init 5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:54:32 compute-0 podman[363208]: 2025-12-13 08:54:32.021574565 +0000 UTC m=+0.142039203 container start 5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_austin, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:54:32 compute-0 podman[363208]: 2025-12-13 08:54:32.025585685 +0000 UTC m=+0.146050333 container attach 5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_austin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:54:32 compute-0 stupefied_austin[363224]: 167 167
Dec 13 08:54:32 compute-0 systemd[1]: libpod-5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918.scope: Deactivated successfully.
Dec 13 08:54:32 compute-0 conmon[363224]: conmon 5a7a0b452dbaa768b1ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918.scope/container/memory.events
Dec 13 08:54:32 compute-0 podman[363208]: 2025-12-13 08:54:32.029504783 +0000 UTC m=+0.149969431 container died 5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:54:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9383458ef939be1283d2cdc5362744be178b85fc5bf2cd2471978c1b22b7f876-merged.mount: Deactivated successfully.
Dec 13 08:54:32 compute-0 podman[363208]: 2025-12-13 08:54:32.142518257 +0000 UTC m=+0.262982895 container remove 5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_austin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:54:32 compute-0 systemd[1]: libpod-conmon-5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918.scope: Deactivated successfully.
Dec 13 08:54:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2791: 321 pgs: 321 active+clean; 200 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 57 op/s
Dec 13 08:54:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:54:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:54:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:54:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:54:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:54:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:54:32 compute-0 podman[363249]: 2025-12-13 08:54:32.3344858 +0000 UTC m=+0.060236461 container create b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:32 compute-0 systemd[1]: Started libpod-conmon-b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1.scope.
Dec 13 08:54:32 compute-0 podman[363249]: 2025-12-13 08:54:32.299092553 +0000 UTC m=+0.024843234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:54:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:54:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7ff9388716a3b5bb0e72b660ba5a5428d9ad9d3c05e3cf2dcb016e3aeb1169/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7ff9388716a3b5bb0e72b660ba5a5428d9ad9d3c05e3cf2dcb016e3aeb1169/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7ff9388716a3b5bb0e72b660ba5a5428d9ad9d3c05e3cf2dcb016e3aeb1169/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7ff9388716a3b5bb0e72b660ba5a5428d9ad9d3c05e3cf2dcb016e3aeb1169/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7ff9388716a3b5bb0e72b660ba5a5428d9ad9d3c05e3cf2dcb016e3aeb1169/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:32 compute-0 podman[363249]: 2025-12-13 08:54:32.450617512 +0000 UTC m=+0.176368193 container init b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:54:32 compute-0 podman[363249]: 2025-12-13 08:54:32.458692105 +0000 UTC m=+0.184442766 container start b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_swartz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:54:32 compute-0 podman[363249]: 2025-12-13 08:54:32.490984954 +0000 UTC m=+0.216735635 container attach b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.547 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.713 248514 DEBUG nova.compute.manager [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-changed-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.714 248514 DEBUG nova.compute.manager [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Refreshing instance network info cache due to event network-changed-1babd66f-ec6a-4702-8a8f-839d32ba8761. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.714 248514 DEBUG oslo_concurrency.lockutils [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.714 248514 DEBUG oslo_concurrency.lockutils [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.715 248514 DEBUG nova.network.neutron [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Refreshing network info cache for port 1babd66f-ec6a-4702-8a8f-839d32ba8761 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.815 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.816 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.817 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.817 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.817 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.818 248514 INFO nova.compute.manager [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Terminating instance
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.819 248514 DEBUG nova.compute.manager [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:54:32 compute-0 kernel: tap1babd66f-ec (unregistering): left promiscuous mode
Dec 13 08:54:32 compute-0 NetworkManager[50376]: <info>  [1765616072.9504] device (tap1babd66f-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:54:32 compute-0 ovn_controller[148476]: 2025-12-13T08:54:32Z|01124|binding|INFO|Releasing lport 1babd66f-ec6a-4702-8a8f-839d32ba8761 from this chassis (sb_readonly=0)
Dec 13 08:54:32 compute-0 ovn_controller[148476]: 2025-12-13T08:54:32Z|01125|binding|INFO|Setting lport 1babd66f-ec6a-4702-8a8f-839d32ba8761 down in Southbound
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.959 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:32 compute-0 ovn_controller[148476]: 2025-12-13T08:54:32Z|01126|binding|INFO|Removing iface tap1babd66f-ec ovn-installed in OVS
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.962 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:32 compute-0 sleepy_swartz[363265]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:54:32 compute-0 sleepy_swartz[363265]: --> All data devices are unavailable
Dec 13 08:54:32 compute-0 nova_compute[248510]: 2025-12-13 08:54:32.978 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:33 compute-0 systemd[1]: libpod-b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1.scope: Deactivated successfully.
Dec 13 08:54:33 compute-0 podman[363249]: 2025-12-13 08:54:33.003103584 +0000 UTC m=+0.728854245 container died b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Dec 13 08:54:33 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000071.scope: Deactivated successfully.
Dec 13 08:54:33 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000071.scope: Consumed 16.542s CPU time.
Dec 13 08:54:33 compute-0 systemd-machined[210538]: Machine qemu-139-instance-00000071 terminated.
Dec 13 08:54:33 compute-0 nova_compute[248510]: 2025-12-13 08:54:33.062 248514 INFO nova.virt.libvirt.driver [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Instance destroyed successfully.
Dec 13 08:54:33 compute-0 nova_compute[248510]: 2025-12-13 08:54:33.063 248514 DEBUG nova.objects.instance [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 2d2a33c7-0a90-4b64-b291-b268d37dce5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:54:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:33.109 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:81:46 10.100.0.10'], port_security=['fa:16:3e:b1:81:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2d2a33c7-0a90-4b64-b291-b268d37dce5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3479ed9a-2670-4333-b282-6f40685ff746', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '57e2154e-1e2d-4537-afe5-11c61b80fdbc ee7df75c-fefa-4bc0-977e-537259cc7755', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9839b158-8451-4098-b558-d860fc6a92ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1babd66f-ec6a-4702-8a8f-839d32ba8761) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:54:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:33.110 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1babd66f-ec6a-4702-8a8f-839d32ba8761 in datapath 3479ed9a-2670-4333-b282-6f40685ff746 unbound from our chassis
Dec 13 08:54:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:33.111 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3479ed9a-2670-4333-b282-6f40685ff746, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:54:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:33.113 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[174c950a-a2fa-418e-b6aa-7adfad7ba150]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:33.114 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746 namespace which is not needed anymore
Dec 13 08:54:33 compute-0 nova_compute[248510]: 2025-12-13 08:54:33.128 248514 DEBUG nova.virt.libvirt.vif [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:53:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=113,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGUq/AunPgAiW/gJ0J2O7Gg2tWHaC1FajarqmNhDhKsCdHoif6mzxgzomxd0iSuMUg8lqNyZetu6VpRVmxHZgqpb0ZC3j0IoZa8EwnwWYlucFJrp/EYR/i0MEEr0woib3Q==',key_name='tempest-TestSecurityGroupsBasicOps-33804060',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:53:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-ogye2bl2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:53:19Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=2d2a33c7-0a90-4b64-b291-b268d37dce5e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:54:33 compute-0 nova_compute[248510]: 2025-12-13 08:54:33.129 248514 DEBUG nova.network.os_vif_util [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:54:33 compute-0 nova_compute[248510]: 2025-12-13 08:54:33.129 248514 DEBUG nova.network.os_vif_util [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:54:33 compute-0 nova_compute[248510]: 2025-12-13 08:54:33.130 248514 DEBUG os_vif [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:54:33 compute-0 nova_compute[248510]: 2025-12-13 08:54:33.132 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:33 compute-0 nova_compute[248510]: 2025-12-13 08:54:33.132 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1babd66f-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:33 compute-0 nova_compute[248510]: 2025-12-13 08:54:33.134 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:33 compute-0 nova_compute[248510]: 2025-12-13 08:54:33.136 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:54:33 compute-0 nova_compute[248510]: 2025-12-13 08:54:33.138 248514 INFO os_vif [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec')
Dec 13 08:54:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb7ff9388716a3b5bb0e72b660ba5a5428d9ad9d3c05e3cf2dcb016e3aeb1169-merged.mount: Deactivated successfully.
Dec 13 08:54:33 compute-0 ceph-mon[76537]: pgmap v2791: 321 pgs: 321 active+clean; 200 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 57 op/s
Dec 13 08:54:33 compute-0 podman[363249]: 2025-12-13 08:54:33.742617666 +0000 UTC m=+1.468368327 container remove b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_swartz, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:54:33 compute-0 systemd[1]: libpod-conmon-b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1.scope: Deactivated successfully.
Dec 13 08:54:33 compute-0 sudo[363170]: pam_unix(sudo:session): session closed for user root
Dec 13 08:54:33 compute-0 sudo[363347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:54:33 compute-0 sudo[363347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:54:33 compute-0 sudo[363347]: pam_unix(sudo:session): session closed for user root
Dec 13 08:54:33 compute-0 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [NOTICE]   (360543) : haproxy version is 2.8.14-c23fe91
Dec 13 08:54:33 compute-0 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [NOTICE]   (360543) : path to executable is /usr/sbin/haproxy
Dec 13 08:54:33 compute-0 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [WARNING]  (360543) : Exiting Master process...
Dec 13 08:54:33 compute-0 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [WARNING]  (360543) : Exiting Master process...
Dec 13 08:54:33 compute-0 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [ALERT]    (360543) : Current worker (360547) exited with code 143 (Terminated)
Dec 13 08:54:33 compute-0 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [WARNING]  (360543) : All workers exited. Exiting... (0)
Dec 13 08:54:33 compute-0 systemd[1]: libpod-b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a.scope: Deactivated successfully.
Dec 13 08:54:33 compute-0 podman[363372]: 2025-12-13 08:54:33.912473325 +0000 UTC m=+0.050121838 container died b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 08:54:33 compute-0 sudo[363380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:54:33 compute-0 sudo[363380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:54:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a-userdata-shm.mount: Deactivated successfully.
Dec 13 08:54:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-15bfa81170ec655fc806f556ce96cba9f04e8c1b8d6c8296ea616ed5bdca96a0-merged.mount: Deactivated successfully.
Dec 13 08:54:34 compute-0 podman[363372]: 2025-12-13 08:54:34.184324391 +0000 UTC m=+0.321972894 container cleanup b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:54:34 compute-0 systemd[1]: libpod-conmon-b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a.scope: Deactivated successfully.
Dec 13 08:54:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2792: 321 pgs: 321 active+clean; 168 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 14 KiB/s wr, 61 op/s
Dec 13 08:54:34 compute-0 podman[363438]: 2025-12-13 08:54:34.265609429 +0000 UTC m=+0.057787170 container remove b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:54:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.272 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eaadab1d-ba3a-42cf-be0b-7b834c50545b]: (4, ('Sat Dec 13 08:54:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746 (b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a)\nb2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a\nSat Dec 13 08:54:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746 (b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a)\nb2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.274 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f88f5d0e-f7bb-45c5-a04c-5ae9683613d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.275 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3479ed9a-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:34 compute-0 podman[363439]: 2025-12-13 08:54:34.275687982 +0000 UTC m=+0.069522804 container create de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 08:54:34 compute-0 kernel: tap3479ed9a-20: left promiscuous mode
Dec 13 08:54:34 compute-0 nova_compute[248510]: 2025-12-13 08:54:34.278 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:34 compute-0 nova_compute[248510]: 2025-12-13 08:54:34.293 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.295 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[abc18ef4-c31e-4e53-aeda-77d9e51fbbd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.311 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb6fa77-4de5-470f-b583-fc341e2f7cf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.312 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d7003c-0bcf-4535-93ad-32fad9eaee71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:34 compute-0 podman[363439]: 2025-12-13 08:54:34.231514764 +0000 UTC m=+0.025349616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:54:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.329 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[228cff81-c5ce-49a0-8e76-cfe6737fcd1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857111, 'reachable_time': 36220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363467, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.332 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:54:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.332 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[bde75909-b5b5-42ab-82ac-9480c753f266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d3479ed9a\x2d2670\x2d4333\x2db282\x2d6f40685ff746.mount: Deactivated successfully.
Dec 13 08:54:34 compute-0 systemd[1]: Started libpod-conmon-de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac.scope.
Dec 13 08:54:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:54:34 compute-0 ceph-mon[76537]: pgmap v2792: 321 pgs: 321 active+clean; 168 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 14 KiB/s wr, 61 op/s
Dec 13 08:54:34 compute-0 podman[363439]: 2025-12-13 08:54:34.589377327 +0000 UTC m=+0.383212189 container init de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 08:54:34 compute-0 podman[363439]: 2025-12-13 08:54:34.597707356 +0000 UTC m=+0.391542188 container start de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elion, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:54:34 compute-0 dazzling_elion[363470]: 167 167
Dec 13 08:54:34 compute-0 systemd[1]: libpod-de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac.scope: Deactivated successfully.
Dec 13 08:54:34 compute-0 nova_compute[248510]: 2025-12-13 08:54:34.649 248514 DEBUG nova.compute.manager [req-86e29c42-1e83-4c32-9d52-d18e09806654 req-203e37a5-2aea-48a9-be18-aa33c9864770 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-vif-unplugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:34 compute-0 nova_compute[248510]: 2025-12-13 08:54:34.650 248514 DEBUG oslo_concurrency.lockutils [req-86e29c42-1e83-4c32-9d52-d18e09806654 req-203e37a5-2aea-48a9-be18-aa33c9864770 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:34 compute-0 nova_compute[248510]: 2025-12-13 08:54:34.650 248514 DEBUG oslo_concurrency.lockutils [req-86e29c42-1e83-4c32-9d52-d18e09806654 req-203e37a5-2aea-48a9-be18-aa33c9864770 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:34 compute-0 nova_compute[248510]: 2025-12-13 08:54:34.651 248514 DEBUG oslo_concurrency.lockutils [req-86e29c42-1e83-4c32-9d52-d18e09806654 req-203e37a5-2aea-48a9-be18-aa33c9864770 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:34 compute-0 nova_compute[248510]: 2025-12-13 08:54:34.651 248514 DEBUG nova.compute.manager [req-86e29c42-1e83-4c32-9d52-d18e09806654 req-203e37a5-2aea-48a9-be18-aa33c9864770 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] No waiting events found dispatching network-vif-unplugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:54:34 compute-0 nova_compute[248510]: 2025-12-13 08:54:34.651 248514 DEBUG nova.compute.manager [req-86e29c42-1e83-4c32-9d52-d18e09806654 req-203e37a5-2aea-48a9-be18-aa33c9864770 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-vif-unplugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:54:34 compute-0 podman[363439]: 2025-12-13 08:54:34.708606206 +0000 UTC m=+0.502441038 container attach de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 08:54:34 compute-0 podman[363439]: 2025-12-13 08:54:34.7091399 +0000 UTC m=+0.502974732 container died de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 08:54:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-92ac515acc6bf0a20b0985c124b8eb1fcc7ef2345ba74df4b0fe39e7bd1bd4ca-merged.mount: Deactivated successfully.
Dec 13 08:54:34 compute-0 podman[363439]: 2025-12-13 08:54:34.859339935 +0000 UTC m=+0.653174777 container remove de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 08:54:34 compute-0 systemd[1]: libpod-conmon-de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac.scope: Deactivated successfully.
Dec 13 08:54:34 compute-0 nova_compute[248510]: 2025-12-13 08:54:34.967 248514 INFO nova.virt.libvirt.driver [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Deleting instance files /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e_del
Dec 13 08:54:34 compute-0 nova_compute[248510]: 2025-12-13 08:54:34.969 248514 INFO nova.virt.libvirt.driver [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Deletion of /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e_del complete
Dec 13 08:54:35 compute-0 podman[363496]: 2025-12-13 08:54:35.054934429 +0000 UTC m=+0.051273076 container create f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_einstein, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.087 248514 DEBUG nova.compute.manager [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-changed-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.087 248514 DEBUG nova.compute.manager [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Refreshing instance network info cache due to event network-changed-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.087 248514 DEBUG oslo_concurrency.lockutils [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.087 248514 DEBUG oslo_concurrency.lockutils [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.088 248514 DEBUG nova.network.neutron [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Refreshing network info cache for port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:54:35 compute-0 systemd[1]: Started libpod-conmon-f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744.scope.
Dec 13 08:54:35 compute-0 podman[363496]: 2025-12-13 08:54:35.027348838 +0000 UTC m=+0.023687505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:54:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:54:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff8362eb3112d90c4b1a03f4c209e0db3d1108ec59cc5e446f93f3224e27c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff8362eb3112d90c4b1a03f4c209e0db3d1108ec59cc5e446f93f3224e27c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff8362eb3112d90c4b1a03f4c209e0db3d1108ec59cc5e446f93f3224e27c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff8362eb3112d90c4b1a03f4c209e0db3d1108ec59cc5e446f93f3224e27c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.144 248514 INFO nova.compute.manager [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Took 2.32 seconds to destroy the instance on the hypervisor.
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.144 248514 DEBUG oslo.service.loopingcall [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.144 248514 DEBUG nova.compute.manager [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.145 248514 DEBUG nova.network.neutron [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.159 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.160 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.160 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.160 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.160 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:35 compute-0 podman[363496]: 2025-12-13 08:54:35.16144272 +0000 UTC m=+0.157781387 container init f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_einstein, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.161 248514 INFO nova.compute.manager [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Terminating instance
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.162 248514 DEBUG nova.compute.manager [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:54:35 compute-0 podman[363496]: 2025-12-13 08:54:35.170329043 +0000 UTC m=+0.166667690 container start f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:54:35 compute-0 podman[363496]: 2025-12-13 08:54:35.370816779 +0000 UTC m=+0.367155466 container attach f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_einstein, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 08:54:35 compute-0 awesome_einstein[363513]: {
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:     "0": [
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:         {
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "devices": [
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "/dev/loop3"
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             ],
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_name": "ceph_lv0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_size": "21470642176",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "name": "ceph_lv0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "tags": {
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.cluster_name": "ceph",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.crush_device_class": "",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.encrypted": "0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.objectstore": "bluestore",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.osd_id": "0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.type": "block",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.vdo": "0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.with_tpm": "0"
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             },
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "type": "block",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "vg_name": "ceph_vg0"
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:         }
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:     ],
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:     "1": [
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:         {
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "devices": [
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "/dev/loop4"
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             ],
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_name": "ceph_lv1",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_size": "21470642176",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "name": "ceph_lv1",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "tags": {
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.cluster_name": "ceph",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.crush_device_class": "",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.encrypted": "0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.objectstore": "bluestore",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.osd_id": "1",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.type": "block",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.vdo": "0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.with_tpm": "0"
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             },
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "type": "block",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "vg_name": "ceph_vg1"
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:         }
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:     ],
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:     "2": [
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:         {
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "devices": [
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "/dev/loop5"
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             ],
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_name": "ceph_lv2",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_size": "21470642176",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "name": "ceph_lv2",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "tags": {
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.cluster_name": "ceph",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.crush_device_class": "",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.encrypted": "0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.objectstore": "bluestore",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.osd_id": "2",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.type": "block",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.vdo": "0",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:                 "ceph.with_tpm": "0"
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             },
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "type": "block",
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:             "vg_name": "ceph_vg2"
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:         }
Dec 13 08:54:35 compute-0 awesome_einstein[363513]:     ]
Dec 13 08:54:35 compute-0 awesome_einstein[363513]: }
Dec 13 08:54:35 compute-0 systemd[1]: libpod-f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744.scope: Deactivated successfully.
Dec 13 08:54:35 compute-0 podman[363496]: 2025-12-13 08:54:35.473650198 +0000 UTC m=+0.469988845 container died f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_einstein, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 08:54:35 compute-0 kernel: tap0eac2381-7f (unregistering): left promiscuous mode
Dec 13 08:54:35 compute-0 NetworkManager[50376]: <info>  [1765616075.5193] device (tap0eac2381-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:54:35 compute-0 ovn_controller[148476]: 2025-12-13T08:54:35Z|01127|binding|INFO|Releasing lport 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 from this chassis (sb_readonly=0)
Dec 13 08:54:35 compute-0 ovn_controller[148476]: 2025-12-13T08:54:35Z|01128|binding|INFO|Setting lport 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 down in Southbound
Dec 13 08:54:35 compute-0 ovn_controller[148476]: 2025-12-13T08:54:35Z|01129|binding|INFO|Removing iface tap0eac2381-7f ovn-installed in OVS
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:35.574 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:eb:91 10.100.0.8'], port_security=['fa:16:3e:6a:eb:91 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'af2dc023-560c-4c66-b330-e41218a7a4eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09b8bd52-e920-467f-994b-646113fcb821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6cedfe2-a795-4750-8f73-fd0610750728', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7e75a11b-9fc0-4a04-84da-8ed3853196e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:54:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:35.575 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 in datapath 09b8bd52-e920-467f-994b-646113fcb821 unbound from our chassis
Dec 13 08:54:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:35.577 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 09b8bd52-e920-467f-994b-646113fcb821, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:54:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:35.578 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[941a0877-994b-4932-81c1-ebe3a417da37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:35.579 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-09b8bd52-e920-467f-994b-646113fcb821 namespace which is not needed anymore
Dec 13 08:54:35 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d00000070.scope: Deactivated successfully.
Dec 13 08:54:35 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d00000070.scope: Consumed 15.836s CPU time.
Dec 13 08:54:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:54:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fff8362eb3112d90c4b1a03f4c209e0db3d1108ec59cc5e446f93f3224e27c1-merged.mount: Deactivated successfully.
Dec 13 08:54:35 compute-0 systemd-machined[210538]: Machine qemu-138-instance-00000070 terminated.
Dec 13 08:54:35 compute-0 podman[363496]: 2025-12-13 08:54:35.69866839 +0000 UTC m=+0.695007037 container remove f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_einstein, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:54:35 compute-0 systemd[1]: libpod-conmon-f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744.scope: Deactivated successfully.
Dec 13 08:54:35 compute-0 sudo[363380]: pam_unix(sudo:session): session closed for user root
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.762 248514 DEBUG nova.network.neutron [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updated VIF entry in instance network info cache for port 1babd66f-ec6a-4702-8a8f-839d32ba8761. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.763 248514 DEBUG nova.network.neutron [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updating instance_info_cache with network_info: [{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:54:35 compute-0 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [NOTICE]   (360349) : haproxy version is 2.8.14-c23fe91
Dec 13 08:54:35 compute-0 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [NOTICE]   (360349) : path to executable is /usr/sbin/haproxy
Dec 13 08:54:35 compute-0 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [WARNING]  (360349) : Exiting Master process...
Dec 13 08:54:35 compute-0 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [WARNING]  (360349) : Exiting Master process...
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.789 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:35 compute-0 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [ALERT]    (360349) : Current worker (360351) exited with code 143 (Terminated)
Dec 13 08:54:35 compute-0 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [WARNING]  (360349) : All workers exited. Exiting... (0)
Dec 13 08:54:35 compute-0 systemd[1]: libpod-e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d.scope: Deactivated successfully.
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.799 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:35 compute-0 podman[363557]: 2025-12-13 08:54:35.8031808 +0000 UTC m=+0.063763160 container died e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.809 248514 INFO nova.virt.libvirt.driver [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Instance destroyed successfully.
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.809 248514 DEBUG nova.objects.instance [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid af2dc023-560c-4c66-b330-e41218a7a4eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:54:35 compute-0 sudo[363566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:54:35 compute-0 sudo[363566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:54:35 compute-0 sudo[363566]: pam_unix(sudo:session): session closed for user root
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.888 248514 DEBUG nova.virt.libvirt.vif [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-673770696',display_name='tempest-TestNetworkBasicOps-server-673770696',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-673770696',id=112,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFWWKdP0apeEEX6KLq89U2vRGSHeV3KAUwR7F/v8SOdmJ9w4un8uAKW6W1VsXiUAnc8fLGuX3ip0yk759e6Z6EnqMVZe+COaAk19ulIyzOUeifphXpMKMaa2a+4orpaKaw==',key_name='tempest-TestNetworkBasicOps-724159602',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:53:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-ahio6eh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:53:12Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=af2dc023-560c-4c66-b330-e41218a7a4eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.889 248514 DEBUG nova.network.os_vif_util [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.890 248514 DEBUG nova.network.os_vif_util [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.891 248514 DEBUG os_vif [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.893 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.893 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0eac2381-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.898 248514 INFO os_vif [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f')
Dec 13 08:54:35 compute-0 sudo[363619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:54:35 compute-0 sudo[363619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:54:35 compute-0 nova_compute[248510]: 2025-12-13 08:54:35.914 248514 DEBUG oslo_concurrency.lockutils [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:54:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d-userdata-shm.mount: Deactivated successfully.
Dec 13 08:54:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8de9dacfc504e0470aec8b697e93be4e6449bd328978b3076d04e8364d2802e5-merged.mount: Deactivated successfully.
Dec 13 08:54:35 compute-0 podman[363557]: 2025-12-13 08:54:35.97986647 +0000 UTC m=+0.240448800 container cleanup e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 08:54:35 compute-0 systemd[1]: libpod-conmon-e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d.scope: Deactivated successfully.
Dec 13 08:54:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2793: 321 pgs: 321 active+clean; 168 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 9.8 KiB/s wr, 31 op/s
Dec 13 08:54:36 compute-0 podman[363664]: 2025-12-13 08:54:36.309158335 +0000 UTC m=+0.301558511 container remove e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 08:54:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.315 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e80e60bb-109d-4ddb-9a82-4256d6100c21]: (4, ('Sat Dec 13 08:54:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821 (e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d)\ne1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d\nSat Dec 13 08:54:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821 (e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d)\ne1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.317 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[41ad0d82-b5a4-4f0c-a830-0576f8f92945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.318 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09b8bd52-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.319 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:36 compute-0 kernel: tap09b8bd52-e0: left promiscuous mode
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.336 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.339 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[145e0cc3-69dc-4588-a7e7-99f32354503b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.353 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7a295e-3f28-4fec-b2b9-eb774f5c443d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.355 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f6afc58e-f936-44b3-b2d6-4fb68f04b16b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.373 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8d567227-3b7f-4e4b-9695-0bf8388e6093]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856903, 'reachable_time': 40139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363701, 'error': None, 'target': 'ovnmeta-09b8bd52-e920-467f-994b-646113fcb821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d09b8bd52\x2de920\x2d467f\x2d994b\x2d646113fcb821.mount: Deactivated successfully.
Dec 13 08:54:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.376 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-09b8bd52-e920-467f-994b-646113fcb821 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:54:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.376 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[8ddc12f1-f279-494b-8d33-944d480b85f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:36 compute-0 podman[363688]: 2025-12-13 08:54:36.407514671 +0000 UTC m=+0.100007148 container create 31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:54:36 compute-0 podman[363688]: 2025-12-13 08:54:36.333735021 +0000 UTC m=+0.026227528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:54:36 compute-0 systemd[1]: Started libpod-conmon-31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b.scope.
Dec 13 08:54:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:54:36 compute-0 podman[363688]: 2025-12-13 08:54:36.587779671 +0000 UTC m=+0.280272238 container init 31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:54:36 compute-0 podman[363688]: 2025-12-13 08:54:36.597367981 +0000 UTC m=+0.289860469 container start 31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_tharp, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:54:36 compute-0 agitated_tharp[363709]: 167 167
Dec 13 08:54:36 compute-0 systemd[1]: libpod-31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b.scope: Deactivated successfully.
Dec 13 08:54:36 compute-0 podman[363688]: 2025-12-13 08:54:36.650569125 +0000 UTC m=+0.343061612 container attach 31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 08:54:36 compute-0 podman[363688]: 2025-12-13 08:54:36.65116687 +0000 UTC m=+0.343659347 container died 31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.843 248514 DEBUG nova.network.neutron [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.862 248514 INFO nova.compute.manager [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Took 1.72 seconds to deallocate network for instance.
Dec 13 08:54:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5d9d265d2552d906e1e709f82daaf6ae3ca8b15fd8aa36f5257c8cd1024ee9f-merged.mount: Deactivated successfully.
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.909 248514 DEBUG nova.compute.manager [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.909 248514 DEBUG oslo_concurrency.lockutils [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.910 248514 DEBUG oslo_concurrency.lockutils [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.910 248514 DEBUG oslo_concurrency.lockutils [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.910 248514 DEBUG nova.compute.manager [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] No waiting events found dispatching network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.910 248514 WARNING nova.compute.manager [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received unexpected event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 for instance with vm_state active and task_state deleting.
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.911 248514 DEBUG nova.compute.manager [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-vif-unplugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.911 248514 DEBUG oslo_concurrency.lockutils [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.911 248514 DEBUG oslo_concurrency.lockutils [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.912 248514 DEBUG oslo_concurrency.lockutils [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.912 248514 DEBUG nova.compute.manager [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] No waiting events found dispatching network-vif-unplugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.912 248514 DEBUG nova.compute.manager [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-vif-unplugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.923 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:36 compute-0 nova_compute[248510]: 2025-12-13 08:54:36.924 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:36 compute-0 podman[363688]: 2025-12-13 08:54:36.944518455 +0000 UTC m=+0.637010943 container remove 31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_tharp, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:54:36 compute-0 systemd[1]: libpod-conmon-31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b.scope: Deactivated successfully.
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.014 248514 DEBUG oslo_concurrency.processutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:54:37 compute-0 podman[363737]: 2025-12-13 08:54:37.168375568 +0000 UTC m=+0.084796407 container create 222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 08:54:37 compute-0 podman[363737]: 2025-12-13 08:54:37.108720552 +0000 UTC m=+0.025141411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:54:37 compute-0 systemd[1]: Started libpod-conmon-222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c.scope.
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.214 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616062.2129524, fa7f7cf9-50d4-461e-ab73-21e65aa729a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.215 248514 INFO nova.compute.manager [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] VM Stopped (Lifecycle Event)
Dec 13 08:54:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b42cbd1e3a447c23341df03fdaf73a114c8a32494b7ddc4723dc2122d31b00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.244 248514 DEBUG nova.compute.manager [None req-88a6f8da-c6f3-46af-ab82-23ce71795305 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b42cbd1e3a447c23341df03fdaf73a114c8a32494b7ddc4723dc2122d31b00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b42cbd1e3a447c23341df03fdaf73a114c8a32494b7ddc4723dc2122d31b00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b42cbd1e3a447c23341df03fdaf73a114c8a32494b7ddc4723dc2122d31b00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:54:37 compute-0 podman[363737]: 2025-12-13 08:54:37.256776155 +0000 UTC m=+0.173196994 container init 222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:54:37 compute-0 podman[363737]: 2025-12-13 08:54:37.267793791 +0000 UTC m=+0.184214630 container start 222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 08:54:37 compute-0 podman[363737]: 2025-12-13 08:54:37.271562555 +0000 UTC m=+0.187983394 container attach 222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.271 248514 INFO nova.virt.libvirt.driver [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Deleting instance files /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb_del
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.271 248514 INFO nova.virt.libvirt.driver [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Deletion of /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb_del complete
Dec 13 08:54:37 compute-0 ceph-mon[76537]: pgmap v2793: 321 pgs: 321 active+clean; 168 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 9.8 KiB/s wr, 31 op/s
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.326 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616062.2975323, 24e9bc91-cab7-4459-921c-5000eb9839b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.326 248514 INFO nova.compute.manager [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] VM Stopped (Lifecycle Event)
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.331 248514 INFO nova.compute.manager [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Took 2.17 seconds to destroy the instance on the hypervisor.
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.332 248514 DEBUG oslo.service.loopingcall [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.332 248514 DEBUG nova.compute.manager [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.333 248514 DEBUG nova.network.neutron [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.348 248514 DEBUG nova.compute.manager [None req-9f5000d1-baea-47e8-add8-9b3aa2274ff4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:54:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159612600' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.646 248514 DEBUG oslo_concurrency.processutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.655 248514 DEBUG nova.compute.provider_tree [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.679 248514 DEBUG nova.scheduler.client.report [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.714 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.752 248514 INFO nova.scheduler.client.report [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 2d2a33c7-0a90-4b64-b291-b268d37dce5e
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.823 248514 DEBUG nova.network.neutron [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updated VIF entry in instance network info cache for port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.823 248514 DEBUG nova.network.neutron [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updating instance_info_cache with network_info: [{"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.865 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:37 compute-0 nova_compute[248510]: 2025-12-13 08:54:37.901 248514 DEBUG oslo_concurrency.lockutils [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:54:38 compute-0 lvm[363849]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:54:38 compute-0 lvm[363849]: VG ceph_vg0 finished
Dec 13 08:54:38 compute-0 lvm[363851]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:54:38 compute-0 lvm[363851]: VG ceph_vg1 finished
Dec 13 08:54:38 compute-0 lvm[363852]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:54:38 compute-0 lvm[363852]: VG ceph_vg2 finished
Dec 13 08:54:38 compute-0 adoring_euclid[363772]: {}
Dec 13 08:54:38 compute-0 nova_compute[248510]: 2025-12-13 08:54:38.180 248514 DEBUG nova.network.neutron [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:54:38 compute-0 nova_compute[248510]: 2025-12-13 08:54:38.204 248514 INFO nova.compute.manager [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Took 0.87 seconds to deallocate network for instance.
Dec 13 08:54:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2794: 321 pgs: 321 active+clean; 132 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 11 KiB/s wr, 57 op/s
Dec 13 08:54:38 compute-0 systemd[1]: libpod-222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c.scope: Deactivated successfully.
Dec 13 08:54:38 compute-0 systemd[1]: libpod-222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c.scope: Consumed 1.484s CPU time.
Dec 13 08:54:38 compute-0 podman[363737]: 2025-12-13 08:54:38.230797756 +0000 UTC m=+1.147218605 container died 222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 08:54:38 compute-0 nova_compute[248510]: 2025-12-13 08:54:38.255 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:38 compute-0 nova_compute[248510]: 2025-12-13 08:54:38.256 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-81b42cbd1e3a447c23341df03fdaf73a114c8a32494b7ddc4723dc2122d31b00-merged.mount: Deactivated successfully.
Dec 13 08:54:38 compute-0 podman[363737]: 2025-12-13 08:54:38.280642766 +0000 UTC m=+1.197063605 container remove 222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_euclid, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 08:54:38 compute-0 systemd[1]: libpod-conmon-222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c.scope: Deactivated successfully.
Dec 13 08:54:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4159612600' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:38 compute-0 nova_compute[248510]: 2025-12-13 08:54:38.310 248514 DEBUG oslo_concurrency.processutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:54:38 compute-0 sudo[363619]: pam_unix(sudo:session): session closed for user root
Dec 13 08:54:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:54:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:54:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:54:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:54:38 compute-0 sudo[363867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:54:38 compute-0 sudo[363867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:54:38 compute-0 sudo[363867]: pam_unix(sudo:session): session closed for user root
Dec 13 08:54:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:54:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3781666245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:38 compute-0 nova_compute[248510]: 2025-12-13 08:54:38.886 248514 DEBUG oslo_concurrency.processutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:54:38 compute-0 nova_compute[248510]: 2025-12-13 08:54:38.893 248514 DEBUG nova.compute.provider_tree [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:54:39 compute-0 ceph-mon[76537]: pgmap v2794: 321 pgs: 321 active+clean; 132 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 11 KiB/s wr, 57 op/s
Dec 13 08:54:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:54:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:54:39 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3781666245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:54:39 compute-0 nova_compute[248510]: 2025-12-13 08:54:39.464 248514 DEBUG nova.compute.manager [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:39 compute-0 nova_compute[248510]: 2025-12-13 08:54:39.464 248514 DEBUG oslo_concurrency.lockutils [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:39 compute-0 nova_compute[248510]: 2025-12-13 08:54:39.465 248514 DEBUG oslo_concurrency.lockutils [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:39 compute-0 nova_compute[248510]: 2025-12-13 08:54:39.465 248514 DEBUG oslo_concurrency.lockutils [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:39 compute-0 nova_compute[248510]: 2025-12-13 08:54:39.465 248514 DEBUG nova.compute.manager [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] No waiting events found dispatching network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:54:39 compute-0 nova_compute[248510]: 2025-12-13 08:54:39.465 248514 WARNING nova.compute.manager [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received unexpected event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 for instance with vm_state deleted and task_state None.
Dec 13 08:54:39 compute-0 nova_compute[248510]: 2025-12-13 08:54:39.466 248514 DEBUG nova.compute.manager [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-vif-deleted-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:39 compute-0 nova_compute[248510]: 2025-12-13 08:54:39.466 248514 DEBUG nova.compute.manager [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-vif-deleted-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:54:39 compute-0 nova_compute[248510]: 2025-12-13 08:54:39.484 248514 DEBUG nova.scheduler.client.report [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:54:39 compute-0 nova_compute[248510]: 2025-12-13 08:54:39.515 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:39 compute-0 nova_compute[248510]: 2025-12-13 08:54:39.825 248514 INFO nova.scheduler.client.report [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance af2dc023-560c-4c66-b330-e41218a7a4eb
Dec 13 08:54:40 compute-0 nova_compute[248510]: 2025-12-13 08:54:40.107 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:54:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:54:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:54:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:54:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:54:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:54:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2795: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 11 KiB/s wr, 56 op/s
Dec 13 08:54:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:54:40 compute-0 nova_compute[248510]: 2025-12-13 08:54:40.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:54:40 compute-0 nova_compute[248510]: 2025-12-13 08:54:40.895 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:41.039 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 2001:db8:0:1:f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:54:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:41.040 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated
Dec 13 08:54:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:41.041 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:54:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:41.042 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[19cdb1be-caca-4228-9b13-e10d9f699f5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:54:41 compute-0 ceph-mon[76537]: pgmap v2795: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 11 KiB/s wr, 56 op/s
Dec 13 08:54:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2796: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 KiB/s wr, 55 op/s
Dec 13 08:54:42 compute-0 nova_compute[248510]: 2025-12-13 08:54:42.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:43 compute-0 ceph-mon[76537]: pgmap v2796: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 KiB/s wr, 55 op/s
Dec 13 08:54:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2797: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 KiB/s wr, 55 op/s
Dec 13 08:54:45 compute-0 ceph-mon[76537]: pgmap v2797: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 KiB/s wr, 55 op/s
Dec 13 08:54:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:54:45 compute-0 nova_compute[248510]: 2025-12-13 08:54:45.897 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2798: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.2 KiB/s wr, 51 op/s
Dec 13 08:54:47 compute-0 nova_compute[248510]: 2025-12-13 08:54:47.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:47 compute-0 nova_compute[248510]: 2025-12-13 08:54:47.348 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:47 compute-0 ceph-mon[76537]: pgmap v2798: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.2 KiB/s wr, 51 op/s
Dec 13 08:54:47 compute-0 nova_compute[248510]: 2025-12-13 08:54:47.554 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:48 compute-0 nova_compute[248510]: 2025-12-13 08:54:48.060 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616073.0588737, 2d2a33c7-0a90-4b64-b291-b268d37dce5e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:54:48 compute-0 nova_compute[248510]: 2025-12-13 08:54:48.060 248514 INFO nova.compute.manager [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] VM Stopped (Lifecycle Event)
Dec 13 08:54:48 compute-0 nova_compute[248510]: 2025-12-13 08:54:48.118 248514 DEBUG nova.compute.manager [None req-c0f17856-9737-46af-8db9-df2719d861ce - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:54:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2799: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.2 KiB/s wr, 51 op/s
Dec 13 08:54:48 compute-0 ceph-mon[76537]: pgmap v2799: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.2 KiB/s wr, 51 op/s
Dec 13 08:54:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2800: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 511 B/s wr, 25 op/s
Dec 13 08:54:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:54:50 compute-0 nova_compute[248510]: 2025-12-13 08:54:50.806 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616075.8042293, af2dc023-560c-4c66-b330-e41218a7a4eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:54:50 compute-0 nova_compute[248510]: 2025-12-13 08:54:50.806 248514 INFO nova.compute.manager [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] VM Stopped (Lifecycle Event)
Dec 13 08:54:50 compute-0 nova_compute[248510]: 2025-12-13 08:54:50.827 248514 DEBUG nova.compute.manager [None req-df016397-2540-4195-939a-5fdcaff1b5bf - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:54:50 compute-0 nova_compute[248510]: 2025-12-13 08:54:50.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:51 compute-0 ceph-mon[76537]: pgmap v2800: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 511 B/s wr, 25 op/s
Dec 13 08:54:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2801: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:54:52 compute-0 nova_compute[248510]: 2025-12-13 08:54:52.557 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:53 compute-0 ceph-mon[76537]: pgmap v2801: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:54:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2802: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:54:54 compute-0 ceph-mon[76537]: pgmap v2802: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:54:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:54:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:55.432 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:54:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:54:55.432 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:54:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:54:55 compute-0 nova_compute[248510]: 2025-12-13 08:54:55.901 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2803: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:54:57 compute-0 nova_compute[248510]: 2025-12-13 08:54:57.556 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:54:57 compute-0 ceph-mon[76537]: pgmap v2803: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:54:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2804: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:54:58 compute-0 ceph-mon[76537]: pgmap v2804: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:54:59 compute-0 podman[363916]: 2025-12-13 08:54:59.980517867 +0000 UTC m=+0.064527309 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 13 08:55:00 compute-0 podman[363914]: 2025-12-13 08:55:00.014635263 +0000 UTC m=+0.103789123 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:55:00 compute-0 podman[363915]: 2025-12-13 08:55:00.014449478 +0000 UTC m=+0.098688555 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 13 08:55:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2805: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:55:00 compute-0 nova_compute[248510]: 2025-12-13 08:55:00.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:55:00 compute-0 nova_compute[248510]: 2025-12-13 08:55:00.904 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:01 compute-0 ceph-mon[76537]: pgmap v2805: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2806: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:02 compute-0 nova_compute[248510]: 2025-12-13 08:55:02.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:02 compute-0 nova_compute[248510]: 2025-12-13 08:55:02.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:55:03 compute-0 ceph-mon[76537]: pgmap v2806: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2807: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:04 compute-0 ceph-mon[76537]: pgmap v2807: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:55:05 compute-0 nova_compute[248510]: 2025-12-13 08:55:05.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2808: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:07 compute-0 ceph-mon[76537]: pgmap v2808: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:07 compute-0 nova_compute[248510]: 2025-12-13 08:55:07.560 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:08 compute-0 sshd-session[363976]: Invalid user sol from 193.32.162.146 port 32946
Dec 13 08:55:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2809: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:08 compute-0 sshd-session[363976]: Connection closed by invalid user sol 193.32.162.146 port 32946 [preauth]
Dec 13 08:55:08 compute-0 nova_compute[248510]: 2025-12-13 08:55:08.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:55:08 compute-0 nova_compute[248510]: 2025-12-13 08:55:08.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:55:09 compute-0 nova_compute[248510]: 2025-12-13 08:55:09.122 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:55:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:55:09
Dec 13 08:55:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:55:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:55:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'backups']
Dec 13 08:55:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:55:09 compute-0 ceph-mon[76537]: pgmap v2809: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2810: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:10 compute-0 ceph-mon[76537]: pgmap v2810: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:55:10 compute-0 nova_compute[248510]: 2025-12-13 08:55:10.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:55:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:55:10 compute-0 nova_compute[248510]: 2025-12-13 08:55:10.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2811: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:12 compute-0 nova_compute[248510]: 2025-12-13 08:55:12.562 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:13 compute-0 ceph-mon[76537]: pgmap v2811: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:13.818 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:55:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:13.819 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:55:13 compute-0 nova_compute[248510]: 2025-12-13 08:55:13.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.211 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.211 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2812: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.236 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.336 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.337 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.345 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.345 248514 INFO nova.compute.claims [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.361 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.361 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.393 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.501 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.528 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:55:14 compute-0 nova_compute[248510]: 2025-12-13 08:55:14.802 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:14.822 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:55:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1607725416' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:55:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:55:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1607725416' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:55:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:55:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570311342' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.141 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.148 248514 DEBUG nova.compute.provider_tree [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.169 248514 DEBUG nova.scheduler.client.report [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.194 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.194 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.196 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.205 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.206 248514 INFO nova.compute.claims [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.278 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.278 248514 DEBUG nova.network.neutron [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.331 248514 INFO nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.368 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:55:15 compute-0 ceph-mon[76537]: pgmap v2812: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1607725416' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:55:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1607725416' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:55:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3570311342' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.405 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.514 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.516 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.516 248514 INFO nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Creating image(s)
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.547 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.580 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.604 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.609 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.663 248514 DEBUG nova.policy [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.707 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.708 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.709 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.710 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.734 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.738 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2dacd79d-d668-430f-89e3-bd607a8298ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:15 compute-0 nova_compute[248510]: 2025-12-13 08:55:15.911 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:55:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2386839979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.035 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.043 248514 DEBUG nova.compute.provider_tree [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.065 248514 DEBUG nova.scheduler.client.report [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.088 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.089 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.091 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2dacd79d-d668-430f-89e3-bd607a8298ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.092 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.092 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.092 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.093 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.203 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.204 248514 DEBUG nova.network.neutron [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.213 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:55:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2813: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.243 248514 INFO nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.267 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.303 248514 DEBUG nova.objects.instance [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 2dacd79d-d668-430f-89e3-bd607a8298ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.328 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.328 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Ensure instance console log exists: /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.329 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.329 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.329 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.372 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.373 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.374 248514 INFO nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Creating image(s)
Dec 13 08:55:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2386839979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.403 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.427 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.459 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.467 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.567 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.569 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.570 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.571 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.601 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.606 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5c900cfb-46bb-436b-a574-7985be3447da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:55:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/719210447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.677 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.695 248514 DEBUG nova.policy [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.905 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.906 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3579MB free_disk=59.987517456524074GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.906 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.906 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:16 compute-0 nova_compute[248510]: 2025-12-13 08:55:16.997 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5c900cfb-46bb-436b-a574-7985be3447da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.022 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2dacd79d-d668-430f-89e3-bd607a8298ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.022 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 5c900cfb-46bb-436b-a574-7985be3447da actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.023 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.023 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.060 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.103 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.187 248514 DEBUG nova.objects.instance [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.205 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.205 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Ensure instance console log exists: /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.206 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.206 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.206 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:17 compute-0 ceph-mon[76537]: pgmap v2813: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:55:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/719210447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.395 248514 DEBUG nova.network.neutron [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Successfully created port: 59834f67-f81d-41bf-9bec-95eea737178e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.565 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:55:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3648480722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.709 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.715 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.751 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.817 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.818 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:17 compute-0 nova_compute[248510]: 2025-12-13 08:55:17.973 248514 DEBUG nova.network.neutron [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Successfully created port: a010c1a2-26e3-477b-9539-f12ad28801ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:55:18 compute-0 nova_compute[248510]: 2025-12-13 08:55:18.197 248514 DEBUG nova.network.neutron [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Successfully updated port: 59834f67-f81d-41bf-9bec-95eea737178e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:55:18 compute-0 nova_compute[248510]: 2025-12-13 08:55:18.229 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:18 compute-0 nova_compute[248510]: 2025-12-13 08:55:18.230 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:55:18 compute-0 nova_compute[248510]: 2025-12-13 08:55:18.230 248514 DEBUG nova.network.neutron [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:55:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2814: 321 pgs: 321 active+clean; 74 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.3 MiB/s wr, 12 op/s
Dec 13 08:55:18 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3648480722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:55:18 compute-0 nova_compute[248510]: 2025-12-13 08:55:18.788 248514 DEBUG nova.network.neutron [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:55:18 compute-0 nova_compute[248510]: 2025-12-13 08:55:18.819 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:55:18 compute-0 nova_compute[248510]: 2025-12-13 08:55:18.819 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:55:18 compute-0 nova_compute[248510]: 2025-12-13 08:55:18.819 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:55:18 compute-0 nova_compute[248510]: 2025-12-13 08:55:18.819 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:55:19 compute-0 ceph-mon[76537]: pgmap v2814: 321 pgs: 321 active+clean; 74 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.3 MiB/s wr, 12 op/s
Dec 13 08:55:19 compute-0 nova_compute[248510]: 2025-12-13 08:55:19.416 248514 DEBUG nova.compute.manager [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-changed-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:19 compute-0 nova_compute[248510]: 2025-12-13 08:55:19.416 248514 DEBUG nova.compute.manager [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Refreshing instance network info cache due to event network-changed-59834f67-f81d-41bf-9bec-95eea737178e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:55:19 compute-0 nova_compute[248510]: 2025-12-13 08:55:19.417 248514 DEBUG oslo_concurrency.lockutils [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:19 compute-0 nova_compute[248510]: 2025-12-13 08:55:19.435 248514 DEBUG nova.network.neutron [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Successfully updated port: a010c1a2-26e3-477b-9539-f12ad28801ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:55:19 compute-0 nova_compute[248510]: 2025-12-13 08:55:19.452 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:19 compute-0 nova_compute[248510]: 2025-12-13 08:55:19.453 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:55:19 compute-0 nova_compute[248510]: 2025-12-13 08:55:19.453 248514 DEBUG nova.network.neutron [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:55:19 compute-0 nova_compute[248510]: 2025-12-13 08:55:19.617 248514 DEBUG nova.compute.manager [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-changed-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:19 compute-0 nova_compute[248510]: 2025-12-13 08:55:19.617 248514 DEBUG nova.compute.manager [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing instance network info cache due to event network-changed-a010c1a2-26e3-477b-9539-f12ad28801ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:55:19 compute-0 nova_compute[248510]: 2025-12-13 08:55:19.618 248514 DEBUG oslo_concurrency.lockutils [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:19 compute-0 nova_compute[248510]: 2025-12-13 08:55:19.698 248514 DEBUG nova.network.neutron [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.006 248514 DEBUG nova.network.neutron [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updating instance_info_cache with network_info: [{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.028 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.029 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Instance network_info: |[{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.029 248514 DEBUG oslo_concurrency.lockutils [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.030 248514 DEBUG nova.network.neutron [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Refreshing network info cache for port 59834f67-f81d-41bf-9bec-95eea737178e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.032 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Start _get_guest_xml network_info=[{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.036 248514 WARNING nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.043 248514 DEBUG nova.virt.libvirt.host [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.044 248514 DEBUG nova.virt.libvirt.host [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.047 248514 DEBUG nova.virt.libvirt.host [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.048 248514 DEBUG nova.virt.libvirt.host [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.049 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.049 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.050 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.050 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.051 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.051 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.051 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.052 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.052 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.053 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.053 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.053 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.058 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2815: 321 pgs: 321 active+clean; 134 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Dec 13 08:55:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:55:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:55:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1599522550' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.644 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.665 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.668 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.811 248514 DEBUG nova.network.neutron [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.831 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.831 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Instance network_info: |[{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.832 248514 DEBUG oslo_concurrency.lockutils [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.832 248514 DEBUG nova.network.neutron [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing network info cache for port a010c1a2-26e3-477b-9539-f12ad28801ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.835 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Start _get_guest_xml network_info=[{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.838 248514 WARNING nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.845 248514 DEBUG nova.virt.libvirt.host [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.846 248514 DEBUG nova.virt.libvirt.host [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.849 248514 DEBUG nova.virt.libvirt.host [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.849 248514 DEBUG nova.virt.libvirt.host [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.850 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.850 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.850 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.851 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.851 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.851 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.851 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.851 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.852 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.852 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.852 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.852 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.857 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:20 compute-0 nova_compute[248510]: 2025-12-13 08:55:20.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007053977648106654 of space, bias 1.0, pg target 0.2116193294431996 quantized to 32 (current 32)
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697138298021254 of space, bias 1.0, pg target 0.2009141489406376 quantized to 32 (current 32)
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.738067775993847e-07 of space, bias 4.0, pg target 0.0006885681331192617 quantized to 16 (current 32)
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:55:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:55:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:55:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2339603879' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.290 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.293 248514 DEBUG nova.virt.libvirt.vif [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=116,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHu0mZ4KdPQNIQDmStfb8oGr9IxxGZIxLFalN/4uGlZNxfEdmbl3D4ueCINh2ZhW4F33mK6Ev46I8YbLOH/wB4QDYG50l143kirQCN41tCbmF+vCIghQWMs2Kzj6YH+pg==',key_name='tempest-TestSecurityGroupsBasicOps-398328978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-9i8zv62t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:15Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=2dacd79d-d668-430f-89e3-bd607a8298ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.294 248514 DEBUG nova.network.os_vif_util [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.295 248514 DEBUG nova.network.os_vif_util [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.296 248514 DEBUG nova.objects.instance [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2dacd79d-d668-430f-89e3-bd607a8298ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.315 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:55:21 compute-0 nova_compute[248510]:   <uuid>2dacd79d-d668-430f-89e3-bd607a8298ba</uuid>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   <name>instance-00000074</name>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889</nova:name>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:55:20</nova:creationTime>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:55:21 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:55:21 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:55:21 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:55:21 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:55:21 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:55:21 compute-0 nova_compute[248510]:         <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 08:55:21 compute-0 nova_compute[248510]:         <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:55:21 compute-0 nova_compute[248510]:         <nova:port uuid="59834f67-f81d-41bf-9bec-95eea737178e">
Dec 13 08:55:21 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <system>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <entry name="serial">2dacd79d-d668-430f-89e3-bd607a8298ba</entry>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <entry name="uuid">2dacd79d-d668-430f-89e3-bd607a8298ba</entry>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     </system>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   <os>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   </os>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   <features>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   </features>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2dacd79d-d668-430f-89e3-bd607a8298ba_disk">
Dec 13 08:55:21 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       </source>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:55:21 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/2dacd79d-d668-430f-89e3-bd607a8298ba_disk.config">
Dec 13 08:55:21 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       </source>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:55:21 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:fc:dc:ee"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <target dev="tap59834f67-f8"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/console.log" append="off"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <video>
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     </video>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:55:21 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:55:21 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:55:21 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:55:21 compute-0 nova_compute[248510]: </domain>
Dec 13 08:55:21 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.317 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Preparing to wait for external event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.317 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.318 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.318 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.319 248514 DEBUG nova.virt.libvirt.vif [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=116,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHu0mZ4KdPQNIQDmStfb8oGr9IxxGZIxLFalN/4uGlZNxfEdmbl3D4ueCINh2ZhW4F33mK6Ev46I8YbLOH/wB4QDYG50l143kirQCN41tCbmF+vCIghQWMs2Kzj6YH+pg==',key_name='tempest-TestSecurityGroupsBasicOps-398328978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-9i8zv62t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:15Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=2dacd79d-d668-430f-89e3-bd607a8298ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.320 248514 DEBUG nova.network.os_vif_util [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.320 248514 DEBUG nova.network.os_vif_util [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.321 248514 DEBUG os_vif [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.323 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.324 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.327 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.328 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59834f67-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.328 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap59834f67-f8, col_values=(('external_ids', {'iface-id': '59834f67-f81d-41bf-9bec-95eea737178e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:dc:ee', 'vm-uuid': '2dacd79d-d668-430f-89e3-bd607a8298ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.330 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:21 compute-0 NetworkManager[50376]: <info>  [1765616121.3311] manager: (tap59834f67-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.338 248514 INFO os_vif [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8')
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.394 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.394 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.394 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:fc:dc:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.395 248514 INFO nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Using config drive
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.416 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:21 compute-0 ceph-mon[76537]: pgmap v2815: 321 pgs: 321 active+clean; 134 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Dec 13 08:55:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1599522550' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:55:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2339603879' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:55:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:55:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/427886782' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.504 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.528 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.533 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.572 248514 DEBUG nova.network.neutron [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updated VIF entry in instance network info cache for port 59834f67-f81d-41bf-9bec-95eea737178e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.574 248514 DEBUG nova.network.neutron [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updating instance_info_cache with network_info: [{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.603 248514 DEBUG oslo_concurrency.lockutils [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:55:21 compute-0 nova_compute[248510]: 2025-12-13 08:55:21.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.033 248514 INFO nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Creating config drive at /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/disk.config
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.038 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu32c49wo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:55:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2292909053' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.112 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.114 248514 DEBUG nova.virt.libvirt.vif [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:16Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.114 248514 DEBUG nova.network.os_vif_util [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.115 248514 DEBUG nova.network.os_vif_util [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.116 248514 DEBUG nova.objects.instance [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.133 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:55:22 compute-0 nova_compute[248510]:   <uuid>5c900cfb-46bb-436b-a574-7985be3447da</uuid>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   <name>instance-00000075</name>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:55:20</nova:creationTime>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:55:22 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:55:22 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:55:22 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:55:22 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:55:22 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:55:22 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:55:22 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:55:22 compute-0 nova_compute[248510]:         <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 08:55:22 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <system>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <entry name="serial">5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <entry name="uuid">5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     </system>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   <os>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   </os>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   <features>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   </features>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5c900cfb-46bb-436b-a574-7985be3447da_disk">
Dec 13 08:55:22 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       </source>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:55:22 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5c900cfb-46bb-436b-a574-7985be3447da_disk.config">
Dec 13 08:55:22 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       </source>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:55:22 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:c9:87:a3"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <target dev="tapa010c1a2-26"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log" append="off"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <video>
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     </video>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:55:22 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:55:22 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:55:22 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:55:22 compute-0 nova_compute[248510]: </domain>
Dec 13 08:55:22 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.134 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Preparing to wait for external event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.134 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.135 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.135 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.135 248514 DEBUG nova.virt.libvirt.vif [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:16Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.136 248514 DEBUG nova.network.os_vif_util [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.136 248514 DEBUG nova.network.os_vif_util [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.137 248514 DEBUG os_vif [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.137 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.138 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.138 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.140 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa010c1a2-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.141 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa010c1a2-26, col_values=(('external_ids', {'iface-id': 'a010c1a2-26e3-477b-9539-f12ad28801ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:87:a3', 'vm-uuid': '5c900cfb-46bb-436b-a574-7985be3447da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.142 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:22 compute-0 NetworkManager[50376]: <info>  [1765616122.1432] manager: (tapa010c1a2-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.149 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.150 248514 INFO os_vif [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26')
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.194 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu32c49wo" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.217 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.221 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/disk.config 2dacd79d-d668-430f-89e3-bd607a8298ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2816: 321 pgs: 321 active+clean; 134 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.260 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.261 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.261 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:c9:87:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.262 248514 INFO nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Using config drive
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.287 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.361 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/disk.config 2dacd79d-d668-430f-89e3-bd607a8298ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.362 248514 INFO nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Deleting local config drive /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/disk.config because it was imported into RBD.
Dec 13 08:55:22 compute-0 NetworkManager[50376]: <info>  [1765616122.4089] manager: (tap59834f67-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/469)
Dec 13 08:55:22 compute-0 kernel: tap59834f67-f8: entered promiscuous mode
Dec 13 08:55:22 compute-0 ovn_controller[148476]: 2025-12-13T08:55:22Z|01130|binding|INFO|Claiming lport 59834f67-f81d-41bf-9bec-95eea737178e for this chassis.
Dec 13 08:55:22 compute-0 ovn_controller[148476]: 2025-12-13T08:55:22Z|01131|binding|INFO|59834f67-f81d-41bf-9bec-95eea737178e: Claiming fa:16:3e:fc:dc:ee 10.100.0.7
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.412 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.425 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:dc:ee 10.100.0.7'], port_security=['fa:16:3e:fc:dc:ee 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2dacd79d-d668-430f-89e3-bd607a8298ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49b9fdf8-f095-49d6-8a3e-6b41045e0020 daf1c258-d9fc-43cc-a960-fdfffc57ef37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ef6794c-20c0-44ef-9932-95bf5c168e3e, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=59834f67-f81d-41bf-9bec-95eea737178e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.427 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 59834f67-f81d-41bf-9bec-95eea737178e in datapath d62e4a11-9334-4dbd-978f-dcabebeb9f79 bound to our chassis
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.429 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d62e4a11-9334-4dbd-978f-dcabebeb9f79
Dec 13 08:55:22 compute-0 systemd-udevd[364618]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.438 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[961fe3b8-cd00-4f34-92e4-625071dbeea2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.439 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd62e4a11-91 in ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:55:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/427886782' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:55:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2292909053' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.441 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd62e4a11-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.441 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[77bc4996-b7f7-4c6e-928d-850d15e9917a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.442 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8a718843-adfd-4611-a87a-7aec9a655dfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 NetworkManager[50376]: <info>  [1765616122.4538] device (tap59834f67-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:55:22 compute-0 NetworkManager[50376]: <info>  [1765616122.4558] device (tap59834f67-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.455 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[84afdf5a-bd34-4711-8b6d-e37e3f6d481b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 systemd-machined[210538]: New machine qemu-143-instance-00000074.
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.481 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5240100-8de9-40ef-b5d1-191fa6829a93]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.490 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:22 compute-0 systemd[1]: Started Virtual Machine qemu-143-instance-00000074.
Dec 13 08:55:22 compute-0 ovn_controller[148476]: 2025-12-13T08:55:22Z|01132|binding|INFO|Setting lport 59834f67-f81d-41bf-9bec-95eea737178e ovn-installed in OVS
Dec 13 08:55:22 compute-0 ovn_controller[148476]: 2025-12-13T08:55:22Z|01133|binding|INFO|Setting lport 59834f67-f81d-41bf-9bec-95eea737178e up in Southbound
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.501 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.520 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba06fbd-5221-4b9d-a38f-5cd5fe8ca307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.526 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46a7bec6-9f3f-4f96-aecb-2c21ffa4d917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 NetworkManager[50376]: <info>  [1765616122.5279] manager: (tapd62e4a11-90): new Veth device (/org/freedesktop/NetworkManager/Devices/470)
Dec 13 08:55:22 compute-0 systemd-udevd[364622]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.567 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.571 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b16c7f53-9104-4334-90f1-ed5e6b2b5ae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.574 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[14db448d-17d7-4e87-b3d3-4e7fcfef3caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 NetworkManager[50376]: <info>  [1765616122.6006] device (tapd62e4a11-90): carrier: link connected
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.606 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1cb1e9-3f4a-49b4-ae34-83a842e164f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.627 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[03440b92-cf18-4be7-8f6d-7a38d523e1fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62e4a11-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869981, 'reachable_time': 20363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364656, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.647 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35903600-c192-44f7-83a3-87801e99501d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:643c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869981, 'tstamp': 869981}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364657, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[150bba2e-7661-46c9-ba2d-ffd80bbe4030]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62e4a11-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869981, 'reachable_time': 20363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364658, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.702 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd4a760-9ae9-4045-9bec-b4d60205301a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.756 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[22df6ace-c48e-4f78-a0c8-8cf04597355f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.758 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62e4a11-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.758 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.758 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd62e4a11-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:22 compute-0 NetworkManager[50376]: <info>  [1765616122.7607] manager: (tapd62e4a11-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.761 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:22 compute-0 kernel: tapd62e4a11-90: entered promiscuous mode
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.766 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd62e4a11-90, col_values=(('external_ids', {'iface-id': '3d979ee9-5b95-4edf-8ffc-3de7e778c5ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.767 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:22 compute-0 ovn_controller[148476]: 2025-12-13T08:55:22Z|01134|binding|INFO|Releasing lport 3d979ee9-5b95-4edf-8ffc-3de7e778c5ff from this chassis (sb_readonly=0)
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:22 compute-0 nova_compute[248510]: 2025-12-13 08:55:22.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.786 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d62e4a11-9334-4dbd-978f-dcabebeb9f79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d62e4a11-9334-4dbd-978f-dcabebeb9f79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.787 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[76bd67fd-43a7-4e37-b063-ee94fbe9ec86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.788 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-d62e4a11-9334-4dbd-978f-dcabebeb9f79
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/d62e4a11-9334-4dbd-978f-dcabebeb9f79.pid.haproxy
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID d62e4a11-9334-4dbd-978f-dcabebeb9f79
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:55:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.789 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'env', 'PROCESS_TAG=haproxy-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d62e4a11-9334-4dbd-978f-dcabebeb9f79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:55:23 compute-0 podman[364693]: 2025-12-13 08:55:23.16845949 +0000 UTC m=+0.053323488 container create 88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:55:23 compute-0 systemd[1]: Started libpod-conmon-88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55.scope.
Dec 13 08:55:23 compute-0 podman[364693]: 2025-12-13 08:55:23.140683773 +0000 UTC m=+0.025547791 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:55:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03dea7b690d3db119f67c61ad462e71c1d2570616b62b014daca58c0b50d9d7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:23 compute-0 podman[364693]: 2025-12-13 08:55:23.262139889 +0000 UTC m=+0.147003907 container init 88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:55:23 compute-0 podman[364693]: 2025-12-13 08:55:23.268935589 +0000 UTC m=+0.153799597 container start 88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.285 248514 DEBUG nova.network.neutron [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updated VIF entry in instance network info cache for port a010c1a2-26e3-477b-9539-f12ad28801ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.285 248514 DEBUG nova.network.neutron [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:55:23 compute-0 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [NOTICE]   (364730) : New worker (364747) forked
Dec 13 08:55:23 compute-0 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [NOTICE]   (364730) : Loading success.
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.307 248514 DEBUG oslo_concurrency.lockutils [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.408 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616123.4079223, 2dacd79d-d668-430f-89e3-bd607a8298ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.408 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] VM Started (Lifecycle Event)
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.436 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.441 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616123.4099493, 2dacd79d-d668-430f-89e3-bd607a8298ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.442 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] VM Paused (Lifecycle Event)
Dec 13 08:55:23 compute-0 ceph-mon[76537]: pgmap v2816: 321 pgs: 321 active+clean; 134 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.480 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.484 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.510 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.593 248514 INFO nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Creating config drive at /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/disk.config
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.598 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzea2pki2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.743 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzea2pki2" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.770 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.774 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/disk.config 5c900cfb-46bb-436b-a574-7985be3447da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.919 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/disk.config 5c900cfb-46bb-436b-a574-7985be3447da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.920 248514 INFO nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Deleting local config drive /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/disk.config because it was imported into RBD.
Dec 13 08:55:23 compute-0 kernel: tapa010c1a2-26: entered promiscuous mode
Dec 13 08:55:23 compute-0 NetworkManager[50376]: <info>  [1765616123.9759] manager: (tapa010c1a2-26): new Tun device (/org/freedesktop/NetworkManager/Devices/472)
Dec 13 08:55:23 compute-0 ovn_controller[148476]: 2025-12-13T08:55:23Z|01135|binding|INFO|Claiming lport a010c1a2-26e3-477b-9539-f12ad28801ca for this chassis.
Dec 13 08:55:23 compute-0 ovn_controller[148476]: 2025-12-13T08:55:23Z|01136|binding|INFO|a010c1a2-26e3-477b-9539-f12ad28801ca: Claiming fa:16:3e:c9:87:a3 10.100.0.8
Dec 13 08:55:23 compute-0 systemd-udevd[364635]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.976 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:23 compute-0 nova_compute[248510]: 2025-12-13 08:55:23.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:23 compute-0 NetworkManager[50376]: <info>  [1765616123.9930] device (tapa010c1a2-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:55:23 compute-0 NetworkManager[50376]: <info>  [1765616123.9960] device (tapa010c1a2-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.046 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:24 compute-0 ovn_controller[148476]: 2025-12-13T08:55:24Z|01137|binding|INFO|Setting lport a010c1a2-26e3-477b-9539-f12ad28801ca ovn-installed in OVS
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.050 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:24 compute-0 ovn_controller[148476]: 2025-12-13T08:55:24Z|01138|binding|INFO|Setting lport a010c1a2-26e3-477b-9539-f12ad28801ca up in Southbound
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.067 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:87:a3 10.100.0.8'], port_security=['fa:16:3e:c9:87:a3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5c900cfb-46bb-436b-a574-7985be3447da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd2e8c6a7-9a03-4a93-a9dd-d4be82f5297d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d20f24f-0d1e-4b3a-97e8-eb661209feb7, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a010c1a2-26e3-477b-9539-f12ad28801ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.070 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a010c1a2-26e3-477b-9539-f12ad28801ca in datapath b2d9e215-0c32-4abc-92a1-ad5f852b369d bound to our chassis
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.073 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2d9e215-0c32-4abc-92a1-ad5f852b369d
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.084 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[add1a308-a0aa-4a6e-a671-be6f5a1d92f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.085 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2d9e215-01 in ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.087 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2d9e215-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.087 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f558e84c-b330-49f5-9159-2260abcdb073]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.088 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d277cebb-cc3a-4fa8-8f79-c93db76eebb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 systemd-machined[210538]: New machine qemu-144-instance-00000075.
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.098 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[40faf508-3539-483a-8baf-43dd304da564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 systemd[1]: Started Virtual Machine qemu-144-instance-00000075.
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.112 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[adf0eba1-77e6-47aa-8961-85513784d04d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.136 248514 DEBUG nova.compute.manager [req-b5a3cdc3-49c3-4d14-b5de-12a492b1490c req-3dd670a8-220c-4a01-b3a3-3a5b40d4ec05 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.137 248514 DEBUG oslo_concurrency.lockutils [req-b5a3cdc3-49c3-4d14-b5de-12a492b1490c req-3dd670a8-220c-4a01-b3a3-3a5b40d4ec05 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.137 248514 DEBUG oslo_concurrency.lockutils [req-b5a3cdc3-49c3-4d14-b5de-12a492b1490c req-3dd670a8-220c-4a01-b3a3-3a5b40d4ec05 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.137 248514 DEBUG oslo_concurrency.lockutils [req-b5a3cdc3-49c3-4d14-b5de-12a492b1490c req-3dd670a8-220c-4a01-b3a3-3a5b40d4ec05 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.138 248514 DEBUG nova.compute.manager [req-b5a3cdc3-49c3-4d14-b5de-12a492b1490c req-3dd670a8-220c-4a01-b3a3-3a5b40d4ec05 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Processing event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.138 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.147 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616124.147333, 2dacd79d-d668-430f-89e3-bd607a8298ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.148 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] VM Resumed (Lifecycle Event)
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.152 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.153 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b7d7b1-844a-46a6-8a5c-4adf0af15313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.160 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc27da14-239c-43b7-94dc-dfc5ef001393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 NetworkManager[50376]: <info>  [1765616124.1622] manager: (tapb2d9e215-00): new Veth device (/org/freedesktop/NetworkManager/Devices/473)
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.163 248514 INFO nova.virt.libvirt.driver [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Instance spawned successfully.
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.164 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.186 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.192 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.196 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.196 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.197 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.197 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.198 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.198 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.201 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[28b0cedd-4319-4c39-b424-5cd53df7d185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.204 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b293b59f-fe1c-413d-be37-d99bf521f7ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.231 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:55:24 compute-0 NetworkManager[50376]: <info>  [1765616124.2321] device (tapb2d9e215-00): carrier: link connected
Dec 13 08:55:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2817: 321 pgs: 321 active+clean; 134 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.242 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9b58fa18-686b-4e8d-8474-5b848292869c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.259 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[689afcc8-cc9b-4b10-bdc0-4d642833e59f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2d9e215-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:a0:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870144, 'reachable_time': 38470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364837, 'error': None, 'target': 'ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.276 248514 INFO nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Took 8.76 seconds to spawn the instance on the hypervisor.
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.278 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.279 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1f7318-7b93-42fd-a6ff-25aa86e1a50e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:a0fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870144, 'tstamp': 870144}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364838, 'error': None, 'target': 'ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.299 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c85e3034-d5df-4f2c-bb7f-a4252cbd50cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2d9e215-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:a0:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870144, 'reachable_time': 38470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364839, 'error': None, 'target': 'ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.335 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5567cb88-02d9-4f5f-a785-9fd01b381c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.351 248514 INFO nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Took 10.04 seconds to build instance.
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.369 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.384 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f9380591-3307-46e5-bf81-c49b2810b052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.385 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2d9e215-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.385 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.386 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2d9e215-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:24 compute-0 kernel: tapb2d9e215-00: entered promiscuous mode
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.416 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:24 compute-0 NetworkManager[50376]: <info>  [1765616124.4226] manager: (tapb2d9e215-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/474)
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.423 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.425 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2d9e215-00, col_values=(('external_ids', {'iface-id': '091240c0-aa08-4e16-a096-0471c0ff1f24'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:24 compute-0 ovn_controller[148476]: 2025-12-13T08:55:24Z|01139|binding|INFO|Releasing lport 091240c0-aa08-4e16-a096-0471c0ff1f24 from this chassis (sb_readonly=0)
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.430 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.429 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2d9e215-0c32-4abc-92a1-ad5f852b369d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2d9e215-0c32-4abc-92a1-ad5f852b369d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.432 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[90f1c49d-f1fc-472c-82c7-731916cc08cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.432 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-b2d9e215-0c32-4abc-92a1-ad5f852b369d
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/b2d9e215-0c32-4abc-92a1-ad5f852b369d.pid.haproxy
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID b2d9e215-0c32-4abc-92a1-ad5f852b369d
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:55:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.434 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'env', 'PROCESS_TAG=haproxy-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2d9e215-0c32-4abc-92a1-ad5f852b369d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.690 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616124.6895673, 5c900cfb-46bb-436b-a574-7985be3447da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.690 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] VM Started (Lifecycle Event)
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.721 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.727 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616124.6897125, 5c900cfb-46bb-436b-a574-7985be3447da => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.728 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] VM Paused (Lifecycle Event)
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.759 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.763 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:55:24 compute-0 nova_compute[248510]: 2025-12-13 08:55:24.793 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:55:24 compute-0 podman[364910]: 2025-12-13 08:55:24.835980259 +0000 UTC m=+0.054929198 container create d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:55:24 compute-0 systemd[1]: Started libpod-conmon-d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6.scope.
Dec 13 08:55:24 compute-0 podman[364910]: 2025-12-13 08:55:24.809918176 +0000 UTC m=+0.028867125 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:55:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761b232b1aa41b0d759478588d2b31a2caafbf1df47bdacf3a2927b25a18515d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:24 compute-0 podman[364910]: 2025-12-13 08:55:24.927947215 +0000 UTC m=+0.146896164 container init d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 08:55:24 compute-0 podman[364910]: 2025-12-13 08:55:24.933609307 +0000 UTC m=+0.152558246 container start d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:55:24 compute-0 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [NOTICE]   (364929) : New worker (364931) forked
Dec 13 08:55:24 compute-0 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [NOTICE]   (364929) : Loading success.
Dec 13 08:55:25 compute-0 ceph-mon[76537]: pgmap v2817: 321 pgs: 321 active+clean; 134 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Dec 13 08:55:25 compute-0 nova_compute[248510]: 2025-12-13 08:55:25.489 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:25 compute-0 nova_compute[248510]: 2025-12-13 08:55:25.491 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:25 compute-0 nova_compute[248510]: 2025-12-13 08:55:25.510 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:55:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:55:25 compute-0 nova_compute[248510]: 2025-12-13 08:55:25.617 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:25 compute-0 nova_compute[248510]: 2025-12-13 08:55:25.618 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:25 compute-0 nova_compute[248510]: 2025-12-13 08:55:25.628 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:55:25 compute-0 nova_compute[248510]: 2025-12-13 08:55:25.629 248514 INFO nova.compute.claims [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:55:25 compute-0 nova_compute[248510]: 2025-12-13 08:55:25.825 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2818: 321 pgs: 321 active+clean; 134 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Dec 13 08:55:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:55:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2357623257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.672 248514 DEBUG nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.673 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.673 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.674 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.674 248514 DEBUG nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] No waiting events found dispatching network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.674 248514 WARNING nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received unexpected event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e for instance with vm_state active and task_state None.
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.674 248514 DEBUG nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.675 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.675 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.675 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.676 248514 DEBUG nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Processing event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.676 248514 DEBUG nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.676 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.676 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.677 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.677 248514 DEBUG nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.677 248514 WARNING nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received unexpected event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca for instance with vm_state building and task_state spawning.
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.678 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:55:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2357623257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.886 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616126.8864288, 5c900cfb-46bb-436b-a574-7985be3447da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.888 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] VM Resumed (Lifecycle Event)
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.891 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.894 248514 INFO nova.virt.libvirt.driver [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Instance spawned successfully.
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.895 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.903 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.908 248514 DEBUG nova.compute.provider_tree [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.915 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.918 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.932 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.935 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.936 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.936 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.936 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.937 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.943 248514 DEBUG nova.scheduler.client.report [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.946 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.982 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:26 compute-0 nova_compute[248510]: 2025-12-13 08:55:26.983 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.050 248514 INFO nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Took 10.68 seconds to spawn the instance on the hypervisor.
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.051 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.092 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.092 248514 DEBUG nova.network.neutron [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.131 248514 INFO nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.143 248514 INFO nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Took 12.67 seconds to build instance.
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.145 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.155 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.167 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.255 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.259 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.259 248514 INFO nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Creating image(s)
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.284 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.307 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.329 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.336 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.417 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.418 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.419 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.420 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.441 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.446 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 75f348ef-4044-47a1-ba1b-f1b66513450c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.569 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.584 248514 DEBUG nova.policy [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '81fb01d9d08845c3b626079ab726db7a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6c21c2eb2d0c4465ae562381f358fbd8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.843 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 75f348ef-4044-47a1-ba1b-f1b66513450c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:27 compute-0 ceph-mon[76537]: pgmap v2818: 321 pgs: 321 active+clean; 134 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Dec 13 08:55:27 compute-0 nova_compute[248510]: 2025-12-13 08:55:27.945 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] resizing rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:55:28 compute-0 nova_compute[248510]: 2025-12-13 08:55:28.032 248514 DEBUG nova.objects.instance [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lazy-loading 'migration_context' on Instance uuid 75f348ef-4044-47a1-ba1b-f1b66513450c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:55:28 compute-0 nova_compute[248510]: 2025-12-13 08:55:28.049 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:55:28 compute-0 nova_compute[248510]: 2025-12-13 08:55:28.050 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Ensure instance console log exists: /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:55:28 compute-0 nova_compute[248510]: 2025-12-13 08:55:28.050 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:28 compute-0 nova_compute[248510]: 2025-12-13 08:55:28.051 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:28 compute-0 nova_compute[248510]: 2025-12-13 08:55:28.051 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2819: 321 pgs: 321 active+clean; 148 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.3 MiB/s wr, 122 op/s
Dec 13 08:55:28 compute-0 nova_compute[248510]: 2025-12-13 08:55:28.794 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:55:28 compute-0 nova_compute[248510]: 2025-12-13 08:55:28.795 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 08:55:28 compute-0 nova_compute[248510]: 2025-12-13 08:55:28.853 248514 DEBUG nova.network.neutron [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Successfully created port: 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:55:28 compute-0 ceph-mon[76537]: pgmap v2819: 321 pgs: 321 active+clean; 148 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.3 MiB/s wr, 122 op/s
Dec 13 08:55:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2820: 321 pgs: 321 active+clean; 181 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.1 MiB/s wr, 212 op/s
Dec 13 08:55:30 compute-0 nova_compute[248510]: 2025-12-13 08:55:30.542 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:30 compute-0 NetworkManager[50376]: <info>  [1765616130.5455] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/475)
Dec 13 08:55:30 compute-0 NetworkManager[50376]: <info>  [1765616130.5463] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/476)
Dec 13 08:55:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:55:30 compute-0 nova_compute[248510]: 2025-12-13 08:55:30.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:30 compute-0 ovn_controller[148476]: 2025-12-13T08:55:30Z|01140|binding|INFO|Releasing lport 091240c0-aa08-4e16-a096-0471c0ff1f24 from this chassis (sb_readonly=0)
Dec 13 08:55:30 compute-0 ovn_controller[148476]: 2025-12-13T08:55:30Z|01141|binding|INFO|Releasing lport 3d979ee9-5b95-4edf-8ffc-3de7e778c5ff from this chassis (sb_readonly=0)
Dec 13 08:55:30 compute-0 nova_compute[248510]: 2025-12-13 08:55:30.692 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:30 compute-0 podman[365130]: 2025-12-13 08:55:30.977579344 +0000 UTC m=+0.063255467 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 13 08:55:30 compute-0 podman[365131]: 2025-12-13 08:55:30.992584051 +0000 UTC m=+0.076218572 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:55:31 compute-0 podman[365129]: 2025-12-13 08:55:31.005157216 +0000 UTC m=+0.092240764 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:55:31 compute-0 ceph-mon[76537]: pgmap v2820: 321 pgs: 321 active+clean; 181 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.1 MiB/s wr, 212 op/s
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.302 248514 DEBUG nova.compute.manager [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-changed-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.303 248514 DEBUG nova.compute.manager [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Refreshing instance network info cache due to event network-changed-59834f67-f81d-41bf-9bec-95eea737178e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.303 248514 DEBUG oslo_concurrency.lockutils [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.304 248514 DEBUG oslo_concurrency.lockutils [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.304 248514 DEBUG nova.network.neutron [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Refreshing network info cache for port 59834f67-f81d-41bf-9bec-95eea737178e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.347 248514 DEBUG nova.network.neutron [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Successfully updated port: 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.369 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.369 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquired lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.370 248514 DEBUG nova.network.neutron [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.712 248514 DEBUG nova.network.neutron [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.921 248514 DEBUG nova.compute.manager [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-changed-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.921 248514 DEBUG nova.compute.manager [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing instance network info cache due to event network-changed-a010c1a2-26e3-477b-9539-f12ad28801ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.922 248514 DEBUG oslo_concurrency.lockutils [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.922 248514 DEBUG oslo_concurrency.lockutils [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:55:31 compute-0 nova_compute[248510]: 2025-12-13 08:55:31.923 248514 DEBUG nova.network.neutron [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing network info cache for port a010c1a2-26e3-477b-9539-f12ad28801ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:55:32 compute-0 nova_compute[248510]: 2025-12-13 08:55:32.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2821: 321 pgs: 321 active+clean; 181 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Dec 13 08:55:32 compute-0 nova_compute[248510]: 2025-12-13 08:55:32.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.112 248514 DEBUG nova.network.neutron [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updating instance_info_cache with network_info: [{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.145 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Releasing lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.149 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Instance network_info: |[{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.152 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Start _get_guest_xml network_info=[{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.159 248514 WARNING nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.166 248514 DEBUG nova.virt.libvirt.host [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.166 248514 DEBUG nova.virt.libvirt.host [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.170 248514 DEBUG nova.virt.libvirt.host [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.170 248514 DEBUG nova.virt.libvirt.host [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.171 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.171 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.171 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.172 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.172 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.172 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.173 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.173 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.173 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.174 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.174 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.174 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.178 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.295 248514 DEBUG nova.network.neutron [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updated VIF entry in instance network info cache for port 59834f67-f81d-41bf-9bec-95eea737178e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.296 248514 DEBUG nova.network.neutron [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updating instance_info_cache with network_info: [{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:55:33 compute-0 ceph-mon[76537]: pgmap v2821: 321 pgs: 321 active+clean; 181 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.355 248514 DEBUG oslo_concurrency.lockutils [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.443 248514 DEBUG nova.compute.manager [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-changed-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.444 248514 DEBUG nova.compute.manager [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Refreshing instance network info cache due to event network-changed-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.444 248514 DEBUG oslo_concurrency.lockutils [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.444 248514 DEBUG oslo_concurrency.lockutils [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.445 248514 DEBUG nova.network.neutron [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Refreshing network info cache for port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.673 248514 DEBUG nova.network.neutron [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updated VIF entry in instance network info cache for port a010c1a2-26e3-477b-9539-f12ad28801ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.674 248514 DEBUG nova.network.neutron [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.703 248514 DEBUG oslo_concurrency.lockutils [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:55:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:55:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4271251360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.789 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.825 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.831 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.872 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.873 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 08:55:33 compute-0 nova_compute[248510]: 2025-12-13 08:55:33.903 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 08:55:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2822: 321 pgs: 321 active+clean; 181 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Dec 13 08:55:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4271251360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:55:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:55:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3066197622' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.423 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.425 248514 DEBUG nova.virt.libvirt.vif [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-608841865',display_name='tempest-TestSnapshotPattern-server-608841865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-608841865',id=118,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMAc1KgPQ7YZO/wn247Z4w7R3iPp09OVvor/iF1+jUHxvgVaItQTtOvsdjy6HTDjQXAsMtHqR3YUaichzeZD01aIcPKJkcVch2R8490ZPNprDBfXeJ2j92c1nlW1ddLVg==',key_name='tempest-TestSnapshotPattern-2059242288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c21c2eb2d0c4465ae562381f358fbd8',ramdisk_id='',reservation_id='r-dx9z8wv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1494512648',owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='81fb01d9d08845c3b626079ab726db7a',uuid=75f348ef-4044-47a1-ba1b-f1b66513450c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.425 248514 DEBUG nova.network.os_vif_util [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converting VIF {"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.426 248514 DEBUG nova.network.os_vif_util [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.427 248514 DEBUG nova.objects.instance [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 75f348ef-4044-47a1-ba1b-f1b66513450c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.465 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:55:34 compute-0 nova_compute[248510]:   <uuid>75f348ef-4044-47a1-ba1b-f1b66513450c</uuid>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   <name>instance-00000076</name>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <nova:name>tempest-TestSnapshotPattern-server-608841865</nova:name>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:55:33</nova:creationTime>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:55:34 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:55:34 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:55:34 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:55:34 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:55:34 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:55:34 compute-0 nova_compute[248510]:         <nova:user uuid="81fb01d9d08845c3b626079ab726db7a">tempest-TestSnapshotPattern-1494512648-project-member</nova:user>
Dec 13 08:55:34 compute-0 nova_compute[248510]:         <nova:project uuid="6c21c2eb2d0c4465ae562381f358fbd8">tempest-TestSnapshotPattern-1494512648</nova:project>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:55:34 compute-0 nova_compute[248510]:         <nova:port uuid="63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed">
Dec 13 08:55:34 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <system>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <entry name="serial">75f348ef-4044-47a1-ba1b-f1b66513450c</entry>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <entry name="uuid">75f348ef-4044-47a1-ba1b-f1b66513450c</entry>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     </system>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   <os>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   </os>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   <features>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   </features>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/75f348ef-4044-47a1-ba1b-f1b66513450c_disk">
Dec 13 08:55:34 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       </source>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:55:34 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/75f348ef-4044-47a1-ba1b-f1b66513450c_disk.config">
Dec 13 08:55:34 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       </source>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:55:34 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:7d:e7:bf"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <target dev="tap63a84e8b-c1"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/console.log" append="off"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <video>
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     </video>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:55:34 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:55:34 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:55:34 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:55:34 compute-0 nova_compute[248510]: </domain>
Dec 13 08:55:34 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.470 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Preparing to wait for external event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.470 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.471 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.471 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.472 248514 DEBUG nova.virt.libvirt.vif [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-608841865',display_name='tempest-TestSnapshotPattern-server-608841865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-608841865',id=118,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMAc1KgPQ7YZO/wn247Z4w7R3iPp09OVvor/iF1+jUHxvgVaItQTtOvsdjy6HTDjQXAsMtHqR3YUaichzeZD01aIcPKJkcVch2R8490ZPNprDBfXeJ2j92c1nlW1ddLVg==',key_name='tempest-TestSnapshotPattern-2059242288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c21c2eb2d0c4465ae562381f358fbd8',ramdisk_id='',reservation_id='r-dx9z8wv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1494512648',owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='81fb01d9d08845c3b626079ab726db7a',uuid=75f348ef-4044-47a1-ba1b-f1b66513450c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.472 248514 DEBUG nova.network.os_vif_util [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converting VIF {"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.473 248514 DEBUG nova.network.os_vif_util [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.473 248514 DEBUG os_vif [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.474 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.475 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.478 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63a84e8b-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.479 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63a84e8b-c1, col_values=(('external_ids', {'iface-id': '63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:e7:bf', 'vm-uuid': '75f348ef-4044-47a1-ba1b-f1b66513450c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.480 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:34 compute-0 NetworkManager[50376]: <info>  [1765616134.4810] manager: (tap63a84e8b-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.482 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.487 248514 INFO os_vif [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1')
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.556 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.569 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.569 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No VIF found with MAC fa:16:3e:7d:e7:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.570 248514 INFO nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Using config drive
Dec 13 08:55:34 compute-0 nova_compute[248510]: 2025-12-13 08:55:34.599 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:35 compute-0 ceph-mon[76537]: pgmap v2822: 321 pgs: 321 active+clean; 181 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Dec 13 08:55:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3066197622' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:55:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:55:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2823: 321 pgs: 321 active+clean; 181 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Dec 13 08:55:36 compute-0 nova_compute[248510]: 2025-12-13 08:55:36.654 248514 INFO nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Creating config drive at /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/disk.config
Dec 13 08:55:36 compute-0 nova_compute[248510]: 2025-12-13 08:55:36.660 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gtvb51z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:36 compute-0 ceph-mon[76537]: pgmap v2823: 321 pgs: 321 active+clean; 181 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Dec 13 08:55:36 compute-0 nova_compute[248510]: 2025-12-13 08:55:36.804 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gtvb51z" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:36 compute-0 nova_compute[248510]: 2025-12-13 08:55:36.832 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:36 compute-0 nova_compute[248510]: 2025-12-13 08:55:36.835 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/disk.config 75f348ef-4044-47a1-ba1b-f1b66513450c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:36 compute-0 nova_compute[248510]: 2025-12-13 08:55:36.978 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/disk.config 75f348ef-4044-47a1-ba1b-f1b66513450c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:36 compute-0 nova_compute[248510]: 2025-12-13 08:55:36.979 248514 INFO nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Deleting local config drive /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/disk.config because it was imported into RBD.
Dec 13 08:55:37 compute-0 kernel: tap63a84e8b-c1: entered promiscuous mode
Dec 13 08:55:37 compute-0 NetworkManager[50376]: <info>  [1765616137.0244] manager: (tap63a84e8b-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/478)
Dec 13 08:55:37 compute-0 ovn_controller[148476]: 2025-12-13T08:55:37Z|01142|binding|INFO|Claiming lport 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed for this chassis.
Dec 13 08:55:37 compute-0 ovn_controller[148476]: 2025-12-13T08:55:37Z|01143|binding|INFO|63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed: Claiming fa:16:3e:7d:e7:bf 10.100.0.3
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.032 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:37 compute-0 ovn_controller[148476]: 2025-12-13T08:55:37Z|01144|binding|INFO|Setting lport 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed ovn-installed in OVS
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.045 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:37 compute-0 systemd-machined[210538]: New machine qemu-145-instance-00000076.
Dec 13 08:55:37 compute-0 systemd[1]: Started Virtual Machine qemu-145-instance-00000076.
Dec 13 08:55:37 compute-0 systemd-udevd[365326]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:55:37 compute-0 ovn_controller[148476]: 2025-12-13T08:55:37Z|01145|binding|INFO|Setting lport 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed up in Southbound
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.101 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:e7:bf 10.100.0.3'], port_security=['fa:16:3e:7d:e7:bf 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '75f348ef-4044-47a1-ba1b-f1b66513450c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c21c2eb2d0c4465ae562381f358fbd8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e1e2d18-e674-4ee6-b553-a8676e40259f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c30b2fc-21fd-4778-91da-98384d5e05df, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.103 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed in datapath d2ff4cff-54cc-40c6-a486-7e7532c2462b bound to our chassis
Dec 13 08:55:37 compute-0 NetworkManager[50376]: <info>  [1765616137.1064] device (tap63a84e8b-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:55:37 compute-0 NetworkManager[50376]: <info>  [1765616137.1069] device (tap63a84e8b-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.106 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2ff4cff-54cc-40c6-a486-7e7532c2462b
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.118 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c72d57ae-3b15-4d13-a9d6-798f602da9d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.119 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2ff4cff-51 in ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.121 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2ff4cff-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.121 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd223605-cb24-44a0-aaf1-d9e7b8f697da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.122 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c2255a2a-c379-411b-a8ce-b675caddcf5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.133 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0164c1a5-51de-4bf7-9605-6fc36549b6f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_controller[148476]: 2025-12-13T08:55:37Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:dc:ee 10.100.0.7
Dec 13 08:55:37 compute-0 ovn_controller[148476]: 2025-12-13T08:55:37Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:dc:ee 10.100.0.7
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.146 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[21cb71be-e2ad-4a86-9420-fb1895d74a4e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.173 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8df202c4-a6c5-410b-bba4-33f3be40d90a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 NetworkManager[50376]: <info>  [1765616137.1913] manager: (tapd2ff4cff-50): new Veth device (/org/freedesktop/NetworkManager/Devices/479)
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.193 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[00fb54ce-0a90-4687-ac04-4b5e688cb6e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.227 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef13cd9-545d-421a-ac93-c0c36d6a407b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.230 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[960f8656-5c45-41c2-b380-5d8fc7fadebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 NetworkManager[50376]: <info>  [1765616137.2601] device (tapd2ff4cff-50): carrier: link connected
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.270 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[04fc2aaa-f9c2-485e-9d6e-e86ee13def12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.287 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8ccd7f90-a8d3-461a-a385-cfb2da0bf118]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2ff4cff-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:85:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871447, 'reachable_time': 40005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365361, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.303 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[610031df-31ef-4f20-aa96-2c1393bdc619]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:856b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 871447, 'tstamp': 871447}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365362, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.319 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ca490901-a5bb-4ebd-a362-524529be4acc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2ff4cff-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:85:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871447, 'reachable_time': 40005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365363, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.352 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd048956-11aa-433a-911a-4286aa0fe7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.397 248514 DEBUG nova.compute.manager [req-ea51372b-7d85-4b87-af64-abed7c5c5f5d req-48598077-8147-48d7-82d6-1538c2afe8e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.397 248514 DEBUG oslo_concurrency.lockutils [req-ea51372b-7d85-4b87-af64-abed7c5c5f5d req-48598077-8147-48d7-82d6-1538c2afe8e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.397 248514 DEBUG oslo_concurrency.lockutils [req-ea51372b-7d85-4b87-af64-abed7c5c5f5d req-48598077-8147-48d7-82d6-1538c2afe8e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.398 248514 DEBUG oslo_concurrency.lockutils [req-ea51372b-7d85-4b87-af64-abed7c5c5f5d req-48598077-8147-48d7-82d6-1538c2afe8e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.398 248514 DEBUG nova.compute.manager [req-ea51372b-7d85-4b87-af64-abed7c5c5f5d req-48598077-8147-48d7-82d6-1538c2afe8e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Processing event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.421 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ee240e56-e84f-454d-a5ac-b6377e847f27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.424 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2ff4cff-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.424 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.424 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2ff4cff-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:37 compute-0 NetworkManager[50376]: <info>  [1765616137.4273] manager: (tapd2ff4cff-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Dec 13 08:55:37 compute-0 kernel: tapd2ff4cff-50: entered promiscuous mode
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.437 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2ff4cff-50, col_values=(('external_ids', {'iface-id': '47f45749-b232-4d0c-bf37-be042ea606c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:37 compute-0 ovn_controller[148476]: 2025-12-13T08:55:37Z|01146|binding|INFO|Releasing lport 47f45749-b232-4d0c-bf37-be042ea606c8 from this chassis (sb_readonly=0)
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.446 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2ff4cff-54cc-40c6-a486-7e7532c2462b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2ff4cff-54cc-40c6-a486-7e7532c2462b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.447 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[81375666-fbab-4161-a1cd-972b3c2800a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.447 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-d2ff4cff-54cc-40c6-a486-7e7532c2462b
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/d2ff4cff-54cc-40c6-a486-7e7532c2462b.pid.haproxy
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID d2ff4cff-54cc-40c6-a486-7e7532c2462b
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:55:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.448 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'env', 'PROCESS_TAG=haproxy-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2ff4cff-54cc-40c6-a486-7e7532c2462b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.455 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.726 248514 DEBUG nova.network.neutron [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updated VIF entry in instance network info cache for port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.728 248514 DEBUG nova.network.neutron [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updating instance_info_cache with network_info: [{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:55:37 compute-0 nova_compute[248510]: 2025-12-13 08:55:37.746 248514 DEBUG oslo_concurrency.lockutils [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:55:37 compute-0 podman[365403]: 2025-12-13 08:55:37.963311675 +0000 UTC m=+0.108441460 container create 5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 08:55:37 compute-0 podman[365403]: 2025-12-13 08:55:37.895694419 +0000 UTC m=+0.040824234 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:55:38 compute-0 systemd[1]: Started libpod-conmon-5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e.scope.
Dec 13 08:55:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:55:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffb7936c063d506819829d9345338322d52c45114e0f00e692ced85a23dd5f4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.075 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616138.0748463, 75f348ef-4044-47a1-ba1b-f1b66513450c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.076 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] VM Started (Lifecycle Event)
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.079 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.084 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.091 248514 INFO nova.virt.libvirt.driver [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Instance spawned successfully.
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.092 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:55:38 compute-0 podman[365403]: 2025-12-13 08:55:38.098776961 +0000 UTC m=+0.243906776 container init 5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 08:55:38 compute-0 podman[365403]: 2025-12-13 08:55:38.10790754 +0000 UTC m=+0.253037325 container start 5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.126 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.133 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:55:38 compute-0 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [NOTICE]   (365455) : New worker (365457) forked
Dec 13 08:55:38 compute-0 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [NOTICE]   (365455) : Loading success.
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.140 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.140 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.141 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.141 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.142 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.142 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.179 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.180 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616138.0751743, 75f348ef-4044-47a1-ba1b-f1b66513450c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.180 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] VM Paused (Lifecycle Event)
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.209 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.214 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616138.0829155, 75f348ef-4044-47a1-ba1b-f1b66513450c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.215 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] VM Resumed (Lifecycle Event)
Dec 13 08:55:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2824: 321 pgs: 321 active+clean; 192 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 190 op/s
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.441 248514 INFO nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Took 11.18 seconds to spawn the instance on the hypervisor.
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.442 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.444 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.450 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.481 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:55:38 compute-0 sudo[365466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:55:38 compute-0 sudo[365466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:55:38 compute-0 sudo[365466]: pam_unix(sudo:session): session closed for user root
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.511 248514 INFO nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Took 12.93 seconds to build instance.
Dec 13 08:55:38 compute-0 nova_compute[248510]: 2025-12-13 08:55:38.530 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:38 compute-0 sudo[365491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 08:55:38 compute-0 sudo[365491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:55:38 compute-0 sudo[365491]: pam_unix(sudo:session): session closed for user root
Dec 13 08:55:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:55:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:55:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:55:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:55:38 compute-0 sudo[365536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:55:39 compute-0 sudo[365536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:55:39 compute-0 sudo[365536]: pam_unix(sudo:session): session closed for user root
Dec 13 08:55:39 compute-0 sudo[365561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:55:39 compute-0 sudo[365561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:55:39 compute-0 ovn_controller[148476]: 2025-12-13T08:55:39Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:87:a3 10.100.0.8
Dec 13 08:55:39 compute-0 ovn_controller[148476]: 2025-12-13T08:55:39Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:87:a3 10.100.0.8
Dec 13 08:55:39 compute-0 ceph-mon[76537]: pgmap v2824: 321 pgs: 321 active+clean; 192 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 190 op/s
Dec 13 08:55:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:55:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:55:39 compute-0 nova_compute[248510]: 2025-12-13 08:55:39.481 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:39 compute-0 nova_compute[248510]: 2025-12-13 08:55:39.516 248514 DEBUG nova.compute.manager [req-ac099ad8-fbc0-432f-928d-fc88fa5e8e91 req-c13fc052-850f-4ce9-bab8-871831bd9b9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:39 compute-0 nova_compute[248510]: 2025-12-13 08:55:39.517 248514 DEBUG oslo_concurrency.lockutils [req-ac099ad8-fbc0-432f-928d-fc88fa5e8e91 req-c13fc052-850f-4ce9-bab8-871831bd9b9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:39 compute-0 nova_compute[248510]: 2025-12-13 08:55:39.517 248514 DEBUG oslo_concurrency.lockutils [req-ac099ad8-fbc0-432f-928d-fc88fa5e8e91 req-c13fc052-850f-4ce9-bab8-871831bd9b9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:39 compute-0 nova_compute[248510]: 2025-12-13 08:55:39.517 248514 DEBUG oslo_concurrency.lockutils [req-ac099ad8-fbc0-432f-928d-fc88fa5e8e91 req-c13fc052-850f-4ce9-bab8-871831bd9b9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:39 compute-0 nova_compute[248510]: 2025-12-13 08:55:39.517 248514 DEBUG nova.compute.manager [req-ac099ad8-fbc0-432f-928d-fc88fa5e8e91 req-c13fc052-850f-4ce9-bab8-871831bd9b9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] No waiting events found dispatching network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:55:39 compute-0 nova_compute[248510]: 2025-12-13 08:55:39.517 248514 WARNING nova.compute.manager [req-ac099ad8-fbc0-432f-928d-fc88fa5e8e91 req-c13fc052-850f-4ce9-bab8-871831bd9b9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received unexpected event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed for instance with vm_state active and task_state None.
Dec 13 08:55:39 compute-0 sudo[365561]: pam_unix(sudo:session): session closed for user root
Dec 13 08:55:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:55:39 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:55:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:55:39 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:55:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:55:39 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:55:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:55:39 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:55:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:55:39 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:55:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:55:39 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:55:39 compute-0 sudo[365617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:55:39 compute-0 sudo[365617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:55:39 compute-0 sudo[365617]: pam_unix(sudo:session): session closed for user root
Dec 13 08:55:39 compute-0 sudo[365642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:55:39 compute-0 sudo[365642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:55:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:55:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:55:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:55:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:55:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:55:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:55:40 compute-0 podman[365679]: 2025-12-13 08:55:40.185526402 +0000 UTC m=+0.036426955 container create 3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:55:40 compute-0 systemd[1]: Started libpod-conmon-3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe.scope.
Dec 13 08:55:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2825: 321 pgs: 321 active+clean; 231 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.9 MiB/s wr, 241 op/s
Dec 13 08:55:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:55:40 compute-0 podman[365679]: 2025-12-13 08:55:40.169761857 +0000 UTC m=+0.020662410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:55:40 compute-0 podman[365679]: 2025-12-13 08:55:40.265678531 +0000 UTC m=+0.116579124 container init 3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 13 08:55:40 compute-0 podman[365679]: 2025-12-13 08:55:40.278435121 +0000 UTC m=+0.129335674 container start 3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:55:40 compute-0 charming_bhaskara[365693]: 167 167
Dec 13 08:55:40 compute-0 systemd[1]: libpod-3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe.scope: Deactivated successfully.
Dec 13 08:55:40 compute-0 podman[365679]: 2025-12-13 08:55:40.286471643 +0000 UTC m=+0.137372216 container attach 3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bhaskara, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 08:55:40 compute-0 podman[365679]: 2025-12-13 08:55:40.287097288 +0000 UTC m=+0.137997871 container died 3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bhaskara, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:55:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebb5885c065a1855bfde3261867530d1e691054171e261ec4be75d51f4724e66-merged.mount: Deactivated successfully.
Dec 13 08:55:40 compute-0 podman[365679]: 2025-12-13 08:55:40.335269036 +0000 UTC m=+0.186169589 container remove 3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:55:40 compute-0 systemd[1]: libpod-conmon-3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe.scope: Deactivated successfully.
Dec 13 08:55:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:55:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:55:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:55:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:55:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:55:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:55:40 compute-0 podman[365716]: 2025-12-13 08:55:40.536111922 +0000 UTC m=+0.047698617 container create 05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_haibt, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 08:55:40 compute-0 systemd[1]: Started libpod-conmon-05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513.scope.
Dec 13 08:55:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:55:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da812d6b83d18d6df049483f79be7b133a44f57921e3f7b7bedcf3b2e14f896e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:40 compute-0 podman[365716]: 2025-12-13 08:55:40.515321991 +0000 UTC m=+0.026908716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da812d6b83d18d6df049483f79be7b133a44f57921e3f7b7bedcf3b2e14f896e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da812d6b83d18d6df049483f79be7b133a44f57921e3f7b7bedcf3b2e14f896e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da812d6b83d18d6df049483f79be7b133a44f57921e3f7b7bedcf3b2e14f896e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da812d6b83d18d6df049483f79be7b133a44f57921e3f7b7bedcf3b2e14f896e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:40 compute-0 podman[365716]: 2025-12-13 08:55:40.636316864 +0000 UTC m=+0.147903579 container init 05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:55:40 compute-0 podman[365716]: 2025-12-13 08:55:40.64451831 +0000 UTC m=+0.156105005 container start 05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 08:55:40 compute-0 podman[365716]: 2025-12-13 08:55:40.648405557 +0000 UTC m=+0.159992252 container attach 05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 08:55:41 compute-0 angry_haibt[365733]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:55:41 compute-0 angry_haibt[365733]: --> All data devices are unavailable
Dec 13 08:55:41 compute-0 systemd[1]: libpod-05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513.scope: Deactivated successfully.
Dec 13 08:55:41 compute-0 podman[365716]: 2025-12-13 08:55:41.173776099 +0000 UTC m=+0.685362794 container died 05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_haibt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:55:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-da812d6b83d18d6df049483f79be7b133a44f57921e3f7b7bedcf3b2e14f896e-merged.mount: Deactivated successfully.
Dec 13 08:55:41 compute-0 podman[365716]: 2025-12-13 08:55:41.221182068 +0000 UTC m=+0.732768763 container remove 05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_haibt, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:55:41 compute-0 systemd[1]: libpod-conmon-05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513.scope: Deactivated successfully.
Dec 13 08:55:41 compute-0 sudo[365642]: pam_unix(sudo:session): session closed for user root
Dec 13 08:55:41 compute-0 sudo[365766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:55:41 compute-0 sudo[365766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:55:41 compute-0 sudo[365766]: pam_unix(sudo:session): session closed for user root
Dec 13 08:55:41 compute-0 ceph-mon[76537]: pgmap v2825: 321 pgs: 321 active+clean; 231 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.9 MiB/s wr, 241 op/s
Dec 13 08:55:41 compute-0 sudo[365791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:55:41 compute-0 sudo[365791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:55:41 compute-0 podman[365828]: 2025-12-13 08:55:41.719170664 +0000 UTC m=+0.042617360 container create 72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lalande, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Dec 13 08:55:41 compute-0 systemd[1]: Started libpod-conmon-72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37.scope.
Dec 13 08:55:41 compute-0 podman[365828]: 2025-12-13 08:55:41.699322026 +0000 UTC m=+0.022768732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:55:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:55:41 compute-0 podman[365828]: 2025-12-13 08:55:41.818784831 +0000 UTC m=+0.142231547 container init 72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 08:55:41 compute-0 podman[365828]: 2025-12-13 08:55:41.827353306 +0000 UTC m=+0.150799992 container start 72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lalande, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:55:41 compute-0 festive_lalande[365844]: 167 167
Dec 13 08:55:41 compute-0 podman[365828]: 2025-12-13 08:55:41.832718411 +0000 UTC m=+0.156165097 container attach 72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lalande, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Dec 13 08:55:41 compute-0 systemd[1]: libpod-72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37.scope: Deactivated successfully.
Dec 13 08:55:41 compute-0 podman[365828]: 2025-12-13 08:55:41.834541096 +0000 UTC m=+0.157987782 container died 72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lalande, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:55:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a154a1c4401a97e1c1542382c49e95b3c57b7f45a656aaa8e6aee4cc111e2f1e-merged.mount: Deactivated successfully.
Dec 13 08:55:41 compute-0 podman[365828]: 2025-12-13 08:55:41.881282918 +0000 UTC m=+0.204729604 container remove 72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:55:41 compute-0 systemd[1]: libpod-conmon-72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37.scope: Deactivated successfully.
Dec 13 08:55:42 compute-0 podman[365868]: 2025-12-13 08:55:42.065086677 +0000 UTC m=+0.030281621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:55:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2826: 321 pgs: 321 active+clean; 231 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 139 op/s
Dec 13 08:55:42 compute-0 podman[365868]: 2025-12-13 08:55:42.283861332 +0000 UTC m=+0.249056256 container create bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lichterman, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:55:42 compute-0 systemd[1]: Started libpod-conmon-bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266.scope.
Dec 13 08:55:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f8fa33b6908cee47c05336400d0f2fcbe03763e241e71379b39155fdb9e316/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f8fa33b6908cee47c05336400d0f2fcbe03763e241e71379b39155fdb9e316/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f8fa33b6908cee47c05336400d0f2fcbe03763e241e71379b39155fdb9e316/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f8fa33b6908cee47c05336400d0f2fcbe03763e241e71379b39155fdb9e316/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:42 compute-0 podman[365868]: 2025-12-13 08:55:42.508203717 +0000 UTC m=+0.473398671 container init bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 08:55:42 compute-0 podman[365868]: 2025-12-13 08:55:42.515530631 +0000 UTC m=+0.480725555 container start bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lichterman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:55:42 compute-0 podman[365868]: 2025-12-13 08:55:42.523039789 +0000 UTC m=+0.488234733 container attach bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lichterman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:55:42 compute-0 nova_compute[248510]: 2025-12-13 08:55:42.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]: {
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:     "0": [
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:         {
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "devices": [
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "/dev/loop3"
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             ],
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_name": "ceph_lv0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_size": "21470642176",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "name": "ceph_lv0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "tags": {
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.cluster_name": "ceph",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.crush_device_class": "",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.encrypted": "0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.objectstore": "bluestore",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.osd_id": "0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.type": "block",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.vdo": "0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.with_tpm": "0"
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             },
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "type": "block",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "vg_name": "ceph_vg0"
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:         }
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:     ],
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:     "1": [
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:         {
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "devices": [
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "/dev/loop4"
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             ],
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_name": "ceph_lv1",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_size": "21470642176",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "name": "ceph_lv1",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "tags": {
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.cluster_name": "ceph",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.crush_device_class": "",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.encrypted": "0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.objectstore": "bluestore",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.osd_id": "1",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.type": "block",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.vdo": "0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.with_tpm": "0"
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             },
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "type": "block",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "vg_name": "ceph_vg1"
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:         }
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:     ],
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:     "2": [
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:         {
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "devices": [
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "/dev/loop5"
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             ],
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_name": "ceph_lv2",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_size": "21470642176",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "name": "ceph_lv2",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "tags": {
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.cluster_name": "ceph",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.crush_device_class": "",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.encrypted": "0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.objectstore": "bluestore",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.osd_id": "2",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.type": "block",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.vdo": "0",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:                 "ceph.with_tpm": "0"
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             },
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "type": "block",
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:             "vg_name": "ceph_vg2"
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:         }
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]:     ]
Dec 13 08:55:42 compute-0 trusting_lichterman[365885]: }
Dec 13 08:55:42 compute-0 systemd[1]: libpod-bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266.scope: Deactivated successfully.
Dec 13 08:55:42 compute-0 podman[365868]: 2025-12-13 08:55:42.849311419 +0000 UTC m=+0.814506343 container died bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lichterman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:55:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4f8fa33b6908cee47c05336400d0f2fcbe03763e241e71379b39155fdb9e316-merged.mount: Deactivated successfully.
Dec 13 08:55:42 compute-0 podman[365868]: 2025-12-13 08:55:42.952539148 +0000 UTC m=+0.917734072 container remove bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:55:42 compute-0 systemd[1]: libpod-conmon-bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266.scope: Deactivated successfully.
Dec 13 08:55:43 compute-0 sudo[365791]: pam_unix(sudo:session): session closed for user root
Dec 13 08:55:43 compute-0 sudo[365905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:55:43 compute-0 sudo[365905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:55:43 compute-0 sudo[365905]: pam_unix(sudo:session): session closed for user root
Dec 13 08:55:43 compute-0 sudo[365930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:55:43 compute-0 sudo[365930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:55:43 compute-0 ceph-mon[76537]: pgmap v2826: 321 pgs: 321 active+clean; 231 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 139 op/s
Dec 13 08:55:43 compute-0 podman[365968]: 2025-12-13 08:55:43.419402723 +0000 UTC m=+0.050076706 container create be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_banach, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 08:55:43 compute-0 systemd[1]: Started libpod-conmon-be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada.scope.
Dec 13 08:55:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:55:43 compute-0 podman[365968]: 2025-12-13 08:55:43.398649703 +0000 UTC m=+0.029323716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:55:43 compute-0 podman[365968]: 2025-12-13 08:55:43.50464347 +0000 UTC m=+0.135317473 container init be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:55:43 compute-0 podman[365968]: 2025-12-13 08:55:43.512845366 +0000 UTC m=+0.143519359 container start be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_banach, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:55:43 compute-0 loving_banach[365986]: 167 167
Dec 13 08:55:43 compute-0 podman[365968]: 2025-12-13 08:55:43.52017179 +0000 UTC m=+0.150845803 container attach be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_banach, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 08:55:43 compute-0 systemd[1]: libpod-be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada.scope: Deactivated successfully.
Dec 13 08:55:43 compute-0 podman[365968]: 2025-12-13 08:55:43.52178329 +0000 UTC m=+0.152457273 container died be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:55:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ce08ebe4d8283237fdcf0287261c36c40283b164276d072f0458629e6df175c-merged.mount: Deactivated successfully.
Dec 13 08:55:43 compute-0 podman[365968]: 2025-12-13 08:55:43.576699447 +0000 UTC m=+0.207373430 container remove be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:55:43 compute-0 systemd[1]: libpod-conmon-be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada.scope: Deactivated successfully.
Dec 13 08:55:43 compute-0 podman[366010]: 2025-12-13 08:55:43.7722423 +0000 UTC m=+0.044835805 container create 42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 08:55:43 compute-0 systemd[1]: Started libpod-conmon-42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c.scope.
Dec 13 08:55:43 compute-0 podman[366010]: 2025-12-13 08:55:43.752048694 +0000 UTC m=+0.024642219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:55:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c62a26d13ab40f05f63ffaeabdc04bfc74811ffcfd2cc1676a69606f749293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c62a26d13ab40f05f63ffaeabdc04bfc74811ffcfd2cc1676a69606f749293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c62a26d13ab40f05f63ffaeabdc04bfc74811ffcfd2cc1676a69606f749293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c62a26d13ab40f05f63ffaeabdc04bfc74811ffcfd2cc1676a69606f749293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:43 compute-0 podman[366010]: 2025-12-13 08:55:43.879246223 +0000 UTC m=+0.151839738 container init 42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:55:43 compute-0 podman[366010]: 2025-12-13 08:55:43.886476004 +0000 UTC m=+0.159069499 container start 42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:55:43 compute-0 podman[366010]: 2025-12-13 08:55:43.890478394 +0000 UTC m=+0.163071909 container attach 42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 08:55:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2827: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 206 op/s
Dec 13 08:55:44 compute-0 nova_compute[248510]: 2025-12-13 08:55:44.483 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:44 compute-0 lvm[366104]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:55:44 compute-0 lvm[366105]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:55:44 compute-0 lvm[366104]: VG ceph_vg0 finished
Dec 13 08:55:44 compute-0 lvm[366105]: VG ceph_vg1 finished
Dec 13 08:55:44 compute-0 lvm[366107]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:55:44 compute-0 lvm[366107]: VG ceph_vg2 finished
Dec 13 08:55:44 compute-0 vigilant_poincare[366026]: {}
Dec 13 08:55:44 compute-0 systemd[1]: libpod-42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c.scope: Deactivated successfully.
Dec 13 08:55:44 compute-0 systemd[1]: libpod-42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c.scope: Consumed 1.400s CPU time.
Dec 13 08:55:44 compute-0 podman[366010]: 2025-12-13 08:55:44.779699009 +0000 UTC m=+1.052292504 container died 42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Dec 13 08:55:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4c62a26d13ab40f05f63ffaeabdc04bfc74811ffcfd2cc1676a69606f749293-merged.mount: Deactivated successfully.
Dec 13 08:55:44 compute-0 podman[366010]: 2025-12-13 08:55:44.825976659 +0000 UTC m=+1.098570154 container remove 42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 08:55:44 compute-0 systemd[1]: libpod-conmon-42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c.scope: Deactivated successfully.
Dec 13 08:55:44 compute-0 sudo[365930]: pam_unix(sudo:session): session closed for user root
Dec 13 08:55:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:55:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:55:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:55:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:55:44 compute-0 sudo[366121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:55:44 compute-0 sudo[366121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:55:44 compute-0 sudo[366121]: pam_unix(sudo:session): session closed for user root
Dec 13 08:55:45 compute-0 nova_compute[248510]: 2025-12-13 08:55:45.108 248514 DEBUG nova.compute.manager [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-changed-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:45 compute-0 nova_compute[248510]: 2025-12-13 08:55:45.109 248514 DEBUG nova.compute.manager [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Refreshing instance network info cache due to event network-changed-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:55:45 compute-0 nova_compute[248510]: 2025-12-13 08:55:45.109 248514 DEBUG oslo_concurrency.lockutils [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:45 compute-0 nova_compute[248510]: 2025-12-13 08:55:45.109 248514 DEBUG oslo_concurrency.lockutils [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:55:45 compute-0 nova_compute[248510]: 2025-12-13 08:55:45.109 248514 DEBUG nova.network.neutron [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Refreshing network info cache for port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:55:45 compute-0 ceph-mon[76537]: pgmap v2827: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 206 op/s
Dec 13 08:55:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:55:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:55:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:55:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2828: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Dec 13 08:55:47 compute-0 nova_compute[248510]: 2025-12-13 08:55:47.122 248514 INFO nova.compute.manager [None req-ac1e67fe-5ffe-4f5a-956c-dfb234f7e89d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Get console output
Dec 13 08:55:47 compute-0 nova_compute[248510]: 2025-12-13 08:55:47.128 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 08:55:47 compute-0 ceph-mon[76537]: pgmap v2828: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Dec 13 08:55:47 compute-0 nova_compute[248510]: 2025-12-13 08:55:47.580 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:47 compute-0 nova_compute[248510]: 2025-12-13 08:55:47.989 248514 DEBUG nova.network.neutron [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updated VIF entry in instance network info cache for port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:55:47 compute-0 nova_compute[248510]: 2025-12-13 08:55:47.989 248514 DEBUG nova.network.neutron [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updating instance_info_cache with network_info: [{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:55:48 compute-0 nova_compute[248510]: 2025-12-13 08:55:48.016 248514 DEBUG oslo_concurrency.lockutils [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:55:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2829: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Dec 13 08:55:49 compute-0 ceph-mon[76537]: pgmap v2829: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Dec 13 08:55:49 compute-0 nova_compute[248510]: 2025-12-13 08:55:49.486 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2830: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.1 MiB/s wr, 177 op/s
Dec 13 08:55:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:55:50 compute-0 ovn_controller[148476]: 2025-12-13T08:55:50Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:e7:bf 10.100.0.3
Dec 13 08:55:50 compute-0 ovn_controller[148476]: 2025-12-13T08:55:50Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:e7:bf 10.100.0.3
Dec 13 08:55:51 compute-0 nova_compute[248510]: 2025-12-13 08:55:51.328 248514 DEBUG oslo_concurrency.lockutils [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "interface-5c900cfb-46bb-436b-a574-7985be3447da-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:51 compute-0 nova_compute[248510]: 2025-12-13 08:55:51.328 248514 DEBUG oslo_concurrency.lockutils [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-5c900cfb-46bb-436b-a574-7985be3447da-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:51 compute-0 nova_compute[248510]: 2025-12-13 08:55:51.328 248514 DEBUG nova.objects.instance [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'flavor' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:55:51 compute-0 ceph-mon[76537]: pgmap v2830: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.1 MiB/s wr, 177 op/s
Dec 13 08:55:51 compute-0 nova_compute[248510]: 2025-12-13 08:55:51.851 248514 DEBUG nova.objects.instance [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_requests' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:55:51 compute-0 nova_compute[248510]: 2025-12-13 08:55:51.870 248514 DEBUG nova.network.neutron [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:55:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2831: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 377 KiB/s wr, 68 op/s
Dec 13 08:55:52 compute-0 nova_compute[248510]: 2025-12-13 08:55:52.283 248514 DEBUG nova.policy [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:55:52 compute-0 nova_compute[248510]: 2025-12-13 08:55:52.346 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:52 compute-0 nova_compute[248510]: 2025-12-13 08:55:52.347 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:52 compute-0 nova_compute[248510]: 2025-12-13 08:55:52.365 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:55:52 compute-0 nova_compute[248510]: 2025-12-13 08:55:52.455 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:52 compute-0 nova_compute[248510]: 2025-12-13 08:55:52.456 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:52 compute-0 nova_compute[248510]: 2025-12-13 08:55:52.468 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:55:52 compute-0 nova_compute[248510]: 2025-12-13 08:55:52.469 248514 INFO nova.compute.claims [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:55:52 compute-0 ceph-mon[76537]: pgmap v2831: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 377 KiB/s wr, 68 op/s
Dec 13 08:55:52 compute-0 nova_compute[248510]: 2025-12-13 08:55:52.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:52 compute-0 nova_compute[248510]: 2025-12-13 08:55:52.665 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:52 compute-0 nova_compute[248510]: 2025-12-13 08:55:52.905 248514 DEBUG nova.network.neutron [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Successfully created port: e7c59bd7-06ff-4220-ac42-ec02c9e22e2c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:55:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:55:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594600232' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.250 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.256 248514 DEBUG nova.compute.provider_tree [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.276 248514 DEBUG nova.scheduler.client.report [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.307 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.307 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.363 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.364 248514 DEBUG nova.network.neutron [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.387 248514 INFO nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.404 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.495 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.497 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.498 248514 INFO nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Creating image(s)
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.523 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2594600232' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.658 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.816 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.821 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.864 248514 DEBUG nova.policy [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.901 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.902 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.903 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.903 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.930 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.935 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 822fa9f6-0a5d-490e-89d7-446df19a068b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.979 248514 DEBUG nova.network.neutron [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Successfully updated port: e7c59bd7-06ff-4220-ac42-ec02c9e22e2c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.996 248514 DEBUG oslo_concurrency.lockutils [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.996 248514 DEBUG oslo_concurrency.lockutils [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:55:53 compute-0 nova_compute[248510]: 2025-12-13 08:55:53.997 248514 DEBUG nova.network.neutron [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:55:54 compute-0 nova_compute[248510]: 2025-12-13 08:55:54.079 248514 DEBUG nova.compute.manager [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-changed-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:54 compute-0 nova_compute[248510]: 2025-12-13 08:55:54.079 248514 DEBUG nova.compute.manager [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing instance network info cache due to event network-changed-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:55:54 compute-0 nova_compute[248510]: 2025-12-13 08:55:54.080 248514 DEBUG oslo_concurrency.lockutils [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2832: 321 pgs: 321 active+clean; 279 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.5 MiB/s wr, 130 op/s
Dec 13 08:55:54 compute-0 nova_compute[248510]: 2025-12-13 08:55:54.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:54 compute-0 ceph-mon[76537]: pgmap v2832: 321 pgs: 321 active+clean; 279 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.5 MiB/s wr, 130 op/s
Dec 13 08:55:55 compute-0 nova_compute[248510]: 2025-12-13 08:55:55.054 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 822fa9f6-0a5d-490e-89d7-446df19a068b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:55:55 compute-0 nova_compute[248510]: 2025-12-13 08:55:55.122 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:55:55 compute-0 nova_compute[248510]: 2025-12-13 08:55:55.309 248514 DEBUG nova.objects.instance [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 822fa9f6-0a5d-490e-89d7-446df19a068b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:55:55 compute-0 nova_compute[248510]: 2025-12-13 08:55:55.328 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:55:55 compute-0 nova_compute[248510]: 2025-12-13 08:55:55.329 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Ensure instance console log exists: /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:55:55 compute-0 nova_compute[248510]: 2025-12-13 08:55:55.329 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:55 compute-0 nova_compute[248510]: 2025-12-13 08:55:55.329 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:55 compute-0 nova_compute[248510]: 2025-12-13 08:55:55.330 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:55 compute-0 nova_compute[248510]: 2025-12-13 08:55:55.345 248514 DEBUG nova.network.neutron [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Successfully created port: f069307e-1a47-4342-b244-d88f04ff512b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:55:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:55.432 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:55.434 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:55.435 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:55:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2833: 321 pgs: 321 active+clean; 279 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 08:55:57 compute-0 ceph-mon[76537]: pgmap v2833: 321 pgs: 321 active+clean; 279 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 08:55:57 compute-0 nova_compute[248510]: 2025-12-13 08:55:57.623 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.047 248514 DEBUG nova.network.neutron [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Successfully updated port: f069307e-1a47-4342-b244-d88f04ff512b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.064 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.065 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.065 248514 DEBUG nova.network.neutron [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.126 248514 DEBUG nova.network.neutron [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.165 248514 DEBUG oslo_concurrency.lockutils [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.166 248514 DEBUG oslo_concurrency.lockutils [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.166 248514 DEBUG nova.network.neutron [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing network info cache for port e7c59bd7-06ff-4220-ac42-ec02c9e22e2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.169 248514 DEBUG nova.virt.libvirt.vif [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.169 248514 DEBUG nova.network.os_vif_util [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.170 248514 DEBUG nova.network.os_vif_util [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.170 248514 DEBUG os_vif [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.171 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.171 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.171 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.176 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.176 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7c59bd7-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.176 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7c59bd7-06, col_values=(('external_ids', {'iface-id': 'e7c59bd7-06ff-4220-ac42-ec02c9e22e2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:f1:bc', 'vm-uuid': '5c900cfb-46bb-436b-a574-7985be3447da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:58 compute-0 NetworkManager[50376]: <info>  [1765616158.2174] manager: (tape7c59bd7-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.218 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.227 248514 INFO os_vif [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06')
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.227 248514 DEBUG nova.virt.libvirt.vif [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.228 248514 DEBUG nova.network.os_vif_util [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.228 248514 DEBUG nova.network.os_vif_util [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.231 248514 DEBUG nova.virt.libvirt.guest [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] attach device xml: <interface type="ethernet">
Dec 13 08:55:58 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:1d:f1:bc"/>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   <target dev="tape7c59bd7-06"/>
Dec 13 08:55:58 compute-0 nova_compute[248510]: </interface>
Dec 13 08:55:58 compute-0 nova_compute[248510]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 08:55:58 compute-0 kernel: tape7c59bd7-06: entered promiscuous mode
Dec 13 08:55:58 compute-0 NetworkManager[50376]: <info>  [1765616158.2470] manager: (tape7c59bd7-06): new Tun device (/org/freedesktop/NetworkManager/Devices/482)
Dec 13 08:55:58 compute-0 ovn_controller[148476]: 2025-12-13T08:55:58Z|01147|binding|INFO|Claiming lport e7c59bd7-06ff-4220-ac42-ec02c9e22e2c for this chassis.
Dec 13 08:55:58 compute-0 ovn_controller[148476]: 2025-12-13T08:55:58Z|01148|binding|INFO|e7c59bd7-06ff-4220-ac42-ec02c9e22e2c: Claiming fa:16:3e:1d:f1:bc 10.100.0.18
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2834: 321 pgs: 321 active+clean; 297 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.9 MiB/s wr, 87 op/s
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.256 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:f1:bc 10.100.0.18'], port_security=['fa:16:3e:1d:f1:bc 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': '5c900cfb-46bb-436b-a574-7985be3447da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96311132-fe8d-4c3e-acff-901d94244605', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=482a9af5-c769-4836-8604-7eb97e93b8de, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.257 158419 INFO neutron.agent.ovn.metadata.agent [-] Port e7c59bd7-06ff-4220-ac42-ec02c9e22e2c in datapath 96311132-fe8d-4c3e-acff-901d94244605 bound to our chassis
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.259 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 96311132-fe8d-4c3e-acff-901d94244605
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.275 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[83f7df94-49fc-4551-be29-bded5e1ebf17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.276 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap96311132-f1 in ovnmeta-96311132-fe8d-4c3e-acff-901d94244605 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.279 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap96311132-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.279 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9063495-cba7-423e-aa41-d7167952ec91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.279 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[08f165cf-520d-4a22-9140-d6af84a88289]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 systemd-udevd[366341]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.293 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ff973fa4-d796-4470-bc5c-10a1fbe4879c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.296 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:58 compute-0 ovn_controller[148476]: 2025-12-13T08:55:58Z|01149|binding|INFO|Setting lport e7c59bd7-06ff-4220-ac42-ec02c9e22e2c ovn-installed in OVS
Dec 13 08:55:58 compute-0 ovn_controller[148476]: 2025-12-13T08:55:58Z|01150|binding|INFO|Setting lport e7c59bd7-06ff-4220-ac42-ec02c9e22e2c up in Southbound
Dec 13 08:55:58 compute-0 NetworkManager[50376]: <info>  [1765616158.2995] device (tape7c59bd7-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.300 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:58 compute-0 NetworkManager[50376]: <info>  [1765616158.3014] device (tape7c59bd7-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.308 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35c947de-fa6b-44f3-acfa-cdfe88f5a1b0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.337 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[60c652da-65e6-4007-a085-e38676b073e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.344 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe1b430-7529-41a5-af13-246694ec0764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 systemd-udevd[366344]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:55:58 compute-0 NetworkManager[50376]: <info>  [1765616158.3448] manager: (tap96311132-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/483)
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.362 248514 DEBUG nova.virt.libvirt.driver [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.362 248514 DEBUG nova.virt.libvirt.driver [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.363 248514 DEBUG nova.virt.libvirt.driver [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:c9:87:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.363 248514 DEBUG nova.virt.libvirt.driver [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:1d:f1:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.377 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[968a1e9b-6567-4613-a5f9-ec10add872ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.381 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa9552b-be68-4a77-8a45-cb6030aa5522]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.382 248514 DEBUG nova.network.neutron [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:55:58 compute-0 NetworkManager[50376]: <info>  [1765616158.4037] device (tap96311132-f0): carrier: link connected
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.407 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6174a6a9-4dbf-47a8-9ffe-80d779b04173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.413 248514 DEBUG nova.virt.libvirt.guest [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:55:58 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:55:58</nova:creationTime>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:55:58 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:55:58 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:55:58 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:55:58 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:55:58 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:55:58 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:55:58 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:55:58 compute-0 nova_compute[248510]:     <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 08:55:58 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:55:58 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:55:58 compute-0 nova_compute[248510]:     <nova:port uuid="e7c59bd7-06ff-4220-ac42-ec02c9e22e2c">
Dec 13 08:55:58 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Dec 13 08:55:58 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:55:58 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:55:58 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:55:58 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.428 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab732f64-2d26-4fac-b758-7ef1ddc5b999]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96311132-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:48:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873561, 'reachable_time': 22765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366367, 'error': None, 'target': 'ovnmeta-96311132-fe8d-4c3e-acff-901d94244605', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.444 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da9c7440-b0f6-473c-a2fd-1df4d49a9775]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:4848'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 873561, 'tstamp': 873561}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366368, 'error': None, 'target': 'ovnmeta-96311132-fe8d-4c3e-acff-901d94244605', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.448 248514 DEBUG oslo_concurrency.lockutils [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-5c900cfb-46bb-436b-a574-7985be3447da-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.462 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f0fc8c81-a73d-4ac4-9aca-cc25a64b2c92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96311132-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:48:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873561, 'reachable_time': 22765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366369, 'error': None, 'target': 'ovnmeta-96311132-fe8d-4c3e-acff-901d94244605', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.500 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb731a66-6125-4f48-b01a-b0e841bebfb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.545 248514 DEBUG nova.compute.manager [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-changed-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.546 248514 DEBUG nova.compute.manager [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Refreshing instance network info cache due to event network-changed-f069307e-1a47-4342-b244-d88f04ff512b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.546 248514 DEBUG oslo_concurrency.lockutils [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.568 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e342a79c-4635-4fca-97c7-f49140830a54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.569 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96311132-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.570 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.570 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96311132-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:58 compute-0 kernel: tap96311132-f0: entered promiscuous mode
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.572 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:58 compute-0 NetworkManager[50376]: <info>  [1765616158.5744] manager: (tap96311132-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.575 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap96311132-f0, col_values=(('external_ids', {'iface-id': '48362829-5f00-4442-8cbd-a68a3c85a0da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:58 compute-0 ovn_controller[148476]: 2025-12-13T08:55:58Z|01151|binding|INFO|Releasing lport 48362829-5f00-4442-8cbd-a68a3c85a0da from this chassis (sb_readonly=0)
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.591 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.592 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/96311132-fe8d-4c3e-acff-901d94244605.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/96311132-fe8d-4c3e-acff-901d94244605.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.593 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11eb1ca3-80aa-4561-89ba-5e035d21f051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.594 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-96311132-fe8d-4c3e-acff-901d94244605
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/96311132-fe8d-4c3e-acff-901d94244605.pid.haproxy
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 96311132-fe8d-4c3e-acff-901d94244605
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:55:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.595 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-96311132-fe8d-4c3e-acff-901d94244605', 'env', 'PROCESS_TAG=haproxy-96311132-fe8d-4c3e-acff-901d94244605', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/96311132-fe8d-4c3e-acff-901d94244605.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.641 248514 DEBUG nova.compute.manager [req-3c9003e0-6980-4e73-897a-d60c414302eb req-24c47067-d7cb-48cf-be3c-927968520b6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.642 248514 DEBUG oslo_concurrency.lockutils [req-3c9003e0-6980-4e73-897a-d60c414302eb req-24c47067-d7cb-48cf-be3c-927968520b6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.642 248514 DEBUG oslo_concurrency.lockutils [req-3c9003e0-6980-4e73-897a-d60c414302eb req-24c47067-d7cb-48cf-be3c-927968520b6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.643 248514 DEBUG oslo_concurrency.lockutils [req-3c9003e0-6980-4e73-897a-d60c414302eb req-24c47067-d7cb-48cf-be3c-927968520b6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.643 248514 DEBUG nova.compute.manager [req-3c9003e0-6980-4e73-897a-d60c414302eb req-24c47067-d7cb-48cf-be3c-927968520b6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:55:58 compute-0 nova_compute[248510]: 2025-12-13 08:55:58.643 248514 WARNING nova.compute.manager [req-3c9003e0-6980-4e73-897a-d60c414302eb req-24c47067-d7cb-48cf-be3c-927968520b6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received unexpected event network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c for instance with vm_state active and task_state None.
Dec 13 08:55:59 compute-0 podman[366401]: 2025-12-13 08:55:59.009439107 +0000 UTC m=+0.058875097 container create b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:55:59 compute-0 systemd[1]: Started libpod-conmon-b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512.scope.
Dec 13 08:55:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb244b2b4cd5882665305b63ffc666cdd1d1c64088549bf395ad8f99e02bfcdc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:55:59 compute-0 podman[366401]: 2025-12-13 08:55:58.975585228 +0000 UTC m=+0.025021238 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:55:59 compute-0 podman[366401]: 2025-12-13 08:55:59.082845357 +0000 UTC m=+0.132281377 container init b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:55:59 compute-0 podman[366401]: 2025-12-13 08:55:59.087997067 +0000 UTC m=+0.137433057 container start b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:55:59 compute-0 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [NOTICE]   (366421) : New worker (366423) forked
Dec 13 08:55:59 compute-0 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [NOTICE]   (366421) : Loading success.
Dec 13 08:55:59 compute-0 ceph-mon[76537]: pgmap v2834: 321 pgs: 321 active+clean; 297 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.9 MiB/s wr, 87 op/s
Dec 13 08:55:59 compute-0 ovn_controller[148476]: 2025-12-13T08:55:59Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:f1:bc 10.100.0.18
Dec 13 08:55:59 compute-0 ovn_controller[148476]: 2025-12-13T08:55:59Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:f1:bc 10.100.0.18
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.066 248514 DEBUG nova.network.neutron [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Updating instance_info_cache with network_info: [{"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.091 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.093 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Instance network_info: |[{"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.094 248514 DEBUG oslo_concurrency.lockutils [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.094 248514 DEBUG nova.network.neutron [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Refreshing network info cache for port f069307e-1a47-4342-b244-d88f04ff512b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.097 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Start _get_guest_xml network_info=[{"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.103 248514 WARNING nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.112 248514 DEBUG nova.virt.libvirt.host [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.113 248514 DEBUG nova.virt.libvirt.host [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.122 248514 DEBUG nova.virt.libvirt.host [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.123 248514 DEBUG nova.virt.libvirt.host [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.123 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.123 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.124 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.124 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.124 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.124 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.124 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.124 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.125 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.125 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.125 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.125 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.128 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2835: 321 pgs: 321 active+clean; 326 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 13 08:56:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:56:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:56:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019380606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.710 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.737 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:00 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.741 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:00.999 248514 DEBUG nova.compute.manager [req-ab486c4a-c6d8-40cb-be98-8b1c2a7af543 req-372ad8ad-3f5c-4f38-8eca-000cc0ee441b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.000 248514 DEBUG oslo_concurrency.lockutils [req-ab486c4a-c6d8-40cb-be98-8b1c2a7af543 req-372ad8ad-3f5c-4f38-8eca-000cc0ee441b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.000 248514 DEBUG oslo_concurrency.lockutils [req-ab486c4a-c6d8-40cb-be98-8b1c2a7af543 req-372ad8ad-3f5c-4f38-8eca-000cc0ee441b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.000 248514 DEBUG oslo_concurrency.lockutils [req-ab486c4a-c6d8-40cb-be98-8b1c2a7af543 req-372ad8ad-3f5c-4f38-8eca-000cc0ee441b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.001 248514 DEBUG nova.compute.manager [req-ab486c4a-c6d8-40cb-be98-8b1c2a7af543 req-372ad8ad-3f5c-4f38-8eca-000cc0ee441b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.001 248514 WARNING nova.compute.manager [req-ab486c4a-c6d8-40cb-be98-8b1c2a7af543 req-372ad8ad-3f5c-4f38-8eca-000cc0ee441b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received unexpected event network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c for instance with vm_state active and task_state None.
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.123 248514 DEBUG oslo_concurrency.lockutils [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "interface-5c900cfb-46bb-436b-a574-7985be3447da-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.124 248514 DEBUG oslo_concurrency.lockutils [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-5c900cfb-46bb-436b-a574-7985be3447da-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.142 248514 DEBUG nova.network.neutron [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updated VIF entry in instance network info cache for port e7c59bd7-06ff-4220-ac42-ec02c9e22e2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.142 248514 DEBUG nova.network.neutron [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.148 248514 DEBUG nova.objects.instance [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'flavor' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.175 248514 DEBUG oslo_concurrency.lockutils [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.180 248514 DEBUG nova.virt.libvirt.vif [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.180 248514 DEBUG nova.network.os_vif_util [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.181 248514 DEBUG nova.network.os_vif_util [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.185 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.187 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.190 248514 DEBUG nova.virt.libvirt.driver [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Attempting to detach device tape7c59bd7-06 from instance 5c900cfb-46bb-436b-a574-7985be3447da from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.190 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] detach device xml: <interface type="ethernet">
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:1d:f1:bc"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <target dev="tape7c59bd7-06"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]: </interface>
Dec 13 08:56:01 compute-0 nova_compute[248510]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.195 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.198 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface>not found in domain: <domain type='kvm' id='144'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <name>instance-00000075</name>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <uuid>5c900cfb-46bb-436b-a574-7985be3447da</uuid>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:55:58</nova:creationTime>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:port uuid="e7c59bd7-06ff-4220-ac42-ec02c9e22e2c">
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:56:01 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <resource>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </resource>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <system>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name='serial'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name='uuid'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </system>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <os>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </os>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <features>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </features>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk' index='2'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk.config' index='1'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:c9:87:a3'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target dev='tapa010c1a2-26'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:1d:f1:bc'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target dev='tape7c59bd7-06'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='net1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <source path='/dev/pts/1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </target>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/1'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <source path='/dev/pts/1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </console>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </input>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </input>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </input>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </graphics>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <video>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </video>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c584,c742</label>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c584,c742</imagelabel>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:56:01 compute-0 nova_compute[248510]: </domain>
Dec 13 08:56:01 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.198 248514 INFO nova.virt.libvirt.driver [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully detached device tape7c59bd7-06 from instance 5c900cfb-46bb-436b-a574-7985be3447da from the persistent domain config.
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.198 248514 DEBUG nova.virt.libvirt.driver [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] (1/8): Attempting to detach device tape7c59bd7-06 with device alias net1 from instance 5c900cfb-46bb-436b-a574-7985be3447da from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.199 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] detach device xml: <interface type="ethernet">
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:1d:f1:bc"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <target dev="tape7c59bd7-06"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]: </interface>
Dec 13 08:56:01 compute-0 nova_compute[248510]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 08:56:01 compute-0 kernel: tape7c59bd7-06 (unregistering): left promiscuous mode
Dec 13 08:56:01 compute-0 NetworkManager[50376]: <info>  [1765616161.3221] device (tape7c59bd7-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:56:01 compute-0 ceph-mon[76537]: pgmap v2835: 321 pgs: 321 active+clean; 326 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 13 08:56:01 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4019380606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.336 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Received event <DeviceRemovedEvent: 1765616161.3357909, 5c900cfb-46bb-436b-a574-7985be3447da => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 08:56:01 compute-0 ovn_controller[148476]: 2025-12-13T08:56:01Z|01152|binding|INFO|Releasing lport e7c59bd7-06ff-4220-ac42-ec02c9e22e2c from this chassis (sb_readonly=0)
Dec 13 08:56:01 compute-0 ovn_controller[148476]: 2025-12-13T08:56:01Z|01153|binding|INFO|Setting lport e7c59bd7-06ff-4220-ac42-ec02c9e22e2c down in Southbound
Dec 13 08:56:01 compute-0 ovn_controller[148476]: 2025-12-13T08:56:01Z|01154|binding|INFO|Removing iface tape7c59bd7-06 ovn-installed in OVS
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.348 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.353 248514 DEBUG nova.virt.libvirt.driver [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Start waiting for the detach event from libvirt for device tape7c59bd7-06 with device alias net1 for instance 5c900cfb-46bb-436b-a574-7985be3447da _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.354 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.354 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:f1:bc 10.100.0.18'], port_security=['fa:16:3e:1d:f1:bc 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': '5c900cfb-46bb-436b-a574-7985be3447da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96311132-fe8d-4c3e-acff-901d94244605', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=482a9af5-c769-4836-8604-7eb97e93b8de, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.356 158419 INFO neutron.agent.ovn.metadata.agent [-] Port e7c59bd7-06ff-4220-ac42-ec02c9e22e2c in datapath 96311132-fe8d-4c3e-acff-901d94244605 unbound from our chassis
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.358 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96311132-fe8d-4c3e-acff-901d94244605, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.359 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[70f98c7a-17d0-4fa6-b38f-c2216152d6b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.360 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-96311132-fe8d-4c3e-acff-901d94244605 namespace which is not needed anymore
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.368 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface>not found in domain: <domain type='kvm' id='144'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <name>instance-00000075</name>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <uuid>5c900cfb-46bb-436b-a574-7985be3447da</uuid>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:55:58</nova:creationTime>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:port uuid="e7c59bd7-06ff-4220-ac42-ec02c9e22e2c">
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:56:01 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <resource>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </resource>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <system>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name='serial'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name='uuid'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </system>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <os>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </os>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <features>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </features>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk' index='2'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk.config' index='1'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:c9:87:a3'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target dev='tapa010c1a2-26'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <source path='/dev/pts/1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </target>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/1'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <source path='/dev/pts/1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </console>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </input>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </input>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </input>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </graphics>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <video>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </video>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c584,c742</label>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c584,c742</imagelabel>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:56:01 compute-0 nova_compute[248510]: </domain>
Dec 13 08:56:01 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.369 248514 INFO nova.virt.libvirt.driver [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully detached device tape7c59bd7-06 from instance 5c900cfb-46bb-436b-a574-7985be3447da from the live domain config.
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.370 248514 DEBUG nova.virt.libvirt.vif [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.370 248514 DEBUG nova.network.os_vif_util [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.370 248514 DEBUG nova.network.os_vif_util [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.371 248514 DEBUG os_vif [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.373 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.374 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7c59bd7-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:56:01 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/992201784' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.377 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.379 248514 INFO os_vif [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06')
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.380 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:56:01</nova:creationTime>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:56:01 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:56:01 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.423 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.424 248514 DEBUG nova.virt.libvirt.vif [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=119,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHu0mZ4KdPQNIQDmStfb8oGr9IxxGZIxLFalN/4uGlZNxfEdmbl3D4ueCINh2ZhW4F33mK6Ev46I8YbLOH/wB4QDYG50l143kirQCN41tCbmF+vCIghQWMs2Kzj6YH+pg==',key_name='tempest-TestSecurityGroupsBasicOps-398328978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-hzbxz9gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:53Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=822fa9f6-0a5d-490e-89d7-446df19a068b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.425 248514 DEBUG nova.network.os_vif_util [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.425 248514 DEBUG nova.network.os_vif_util [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.426 248514 DEBUG nova.objects.instance [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 822fa9f6-0a5d-490e-89d7-446df19a068b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.429 248514 DEBUG nova.compute.manager [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.445 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <uuid>822fa9f6-0a5d-490e-89d7-446df19a068b</uuid>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <name>instance-00000077</name>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999</nova:name>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:56:00</nova:creationTime>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <nova:port uuid="f069307e-1a47-4342-b244-d88f04ff512b">
Dec 13 08:56:01 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <system>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name="serial">822fa9f6-0a5d-490e-89d7-446df19a068b</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name="uuid">822fa9f6-0a5d-490e-89d7-446df19a068b</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </system>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <os>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </os>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <features>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </features>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/822fa9f6-0a5d-490e-89d7-446df19a068b_disk">
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/822fa9f6-0a5d-490e-89d7-446df19a068b_disk.config">
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:56:01 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:dd:ef:3d"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <target dev="tapf069307e-1a"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/console.log" append="off"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <video>
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </video>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:56:01 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:56:01 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:56:01 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:56:01 compute-0 nova_compute[248510]: </domain>
Dec 13 08:56:01 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.445 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Preparing to wait for external event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.445 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.446 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.446 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:01 compute-0 podman[366496]: 2025-12-13 08:56:01.447951918 +0000 UTC m=+0.097103526 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.447 248514 DEBUG nova.virt.libvirt.vif [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=119,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHu0mZ4KdPQNIQDmStfb8oGr9IxxGZIxLFalN/4uGlZNxfEdmbl3D4ueCINh2ZhW4F33mK6Ev46I8YbLOH/wB4QDYG50l143kirQCN41tCbmF+vCIghQWMs2Kzj6YH+pg==',key_name='tempest-TestSecurityGroupsBasicOps-398328978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-hzbxz9gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:53Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=822fa9f6-0a5d-490e-89d7-446df19a068b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.447 248514 DEBUG nova.network.os_vif_util [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.447 248514 DEBUG nova.network.os_vif_util [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.448 248514 DEBUG os_vif [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.449 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.449 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.451 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.452 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf069307e-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.452 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf069307e-1a, col_values=(('external_ids', {'iface-id': 'f069307e-1a47-4342-b244-d88f04ff512b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:ef:3d', 'vm-uuid': '822fa9f6-0a5d-490e-89d7-446df19a068b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.454 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:01 compute-0 NetworkManager[50376]: <info>  [1765616161.4556] manager: (tapf069307e-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.456 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.463 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.464 248514 INFO os_vif [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a')
Dec 13 08:56:01 compute-0 podman[366495]: 2025-12-13 08:56:01.481152541 +0000 UTC m=+0.134261758 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.492 248514 INFO nova.compute.manager [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] instance snapshotting
Dec 13 08:56:01 compute-0 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [NOTICE]   (366421) : haproxy version is 2.8.14-c23fe91
Dec 13 08:56:01 compute-0 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [NOTICE]   (366421) : path to executable is /usr/sbin/haproxy
Dec 13 08:56:01 compute-0 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [WARNING]  (366421) : Exiting Master process...
Dec 13 08:56:01 compute-0 podman[366492]: 2025-12-13 08:56:01.497940651 +0000 UTC m=+0.146136175 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:56:01 compute-0 systemd[1]: libpod-b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512.scope: Deactivated successfully.
Dec 13 08:56:01 compute-0 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [ALERT]    (366421) : Current worker (366423) exited with code 143 (Terminated)
Dec 13 08:56:01 compute-0 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [WARNING]  (366421) : All workers exited. Exiting... (0)
Dec 13 08:56:01 compute-0 conmon[366417]: conmon b3630e20ea88cdd8cebd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512.scope/container/memory.events
Dec 13 08:56:01 compute-0 podman[366572]: 2025-12-13 08:56:01.507672566 +0000 UTC m=+0.046915348 container died b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.523 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.523 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.524 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:dd:ef:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.524 248514 INFO nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Using config drive
Dec 13 08:56:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb244b2b4cd5882665305b63ffc666cdd1d1c64088549bf395ad8f99e02bfcdc-merged.mount: Deactivated successfully.
Dec 13 08:56:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512-userdata-shm.mount: Deactivated successfully.
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.552 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:01 compute-0 podman[366572]: 2025-12-13 08:56:01.554882199 +0000 UTC m=+0.094124981 container cleanup b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 08:56:01 compute-0 systemd[1]: libpod-conmon-b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512.scope: Deactivated successfully.
Dec 13 08:56:01 compute-0 podman[366623]: 2025-12-13 08:56:01.622161196 +0000 UTC m=+0.044233360 container remove b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3287c2-be09-4925-83d7-f41ffa774be5]: (4, ('Sat Dec 13 08:56:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605 (b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512)\nb3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512\nSat Dec 13 08:56:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605 (b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512)\nb3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.630 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3f2b3f-8752-4e24-b385-bed7f5aba1ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.631 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96311132-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:01 compute-0 kernel: tap96311132-f0: left promiscuous mode
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.633 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.654 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e244cf-d8cb-4f5f-9c0f-a892d52230b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.669 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7727b6a5-b3aa-4a96-be33-59394aa549dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e48d3961-4575-45db-b2e7-2d882bdf2cd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.688 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[df70c662-23ce-4ada-b30b-7fbb9ddfe80a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873554, 'reachable_time': 41631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366641, 'error': None, 'target': 'ovnmeta-96311132-fe8d-4c3e-acff-901d94244605', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.691 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-96311132-fe8d-4c3e-acff-901d94244605 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:56:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.691 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[61235515-6c81-44b9-b0ee-d0706fdbb5f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d96311132\x2dfe8d\x2d4c3e\x2dacff\x2d901d94244605.mount: Deactivated successfully.
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.803 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.845 248514 INFO nova.virt.libvirt.driver [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Beginning live snapshot process
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.986 248514 INFO nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Creating config drive at /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/disk.config
Dec 13 08:56:01 compute-0 nova_compute[248510]: 2025-12-13 08:56:01.992 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8gl_1ect execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.062 248514 DEBUG nova.virt.libvirt.imagebackend [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.166 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8gl_1ect" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.193 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.198 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/disk.config 822fa9f6-0a5d-490e-89d7-446df19a068b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2836: 321 pgs: 321 active+clean; 326 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Dec 13 08:56:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/992201784' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.344 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/disk.config 822fa9f6-0a5d-490e-89d7-446df19a068b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.345 248514 INFO nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Deleting local config drive /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/disk.config because it was imported into RBD.
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.364 248514 DEBUG nova.storage.rbd_utils [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] creating snapshot(5af9316f0a95453cbbf8adf24d1ded6b) on rbd image(75f348ef-4044-47a1-ba1b-f1b66513450c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:56:02 compute-0 kernel: tapf069307e-1a: entered promiscuous mode
Dec 13 08:56:02 compute-0 ovn_controller[148476]: 2025-12-13T08:56:02Z|01155|binding|INFO|Claiming lport f069307e-1a47-4342-b244-d88f04ff512b for this chassis.
Dec 13 08:56:02 compute-0 ovn_controller[148476]: 2025-12-13T08:56:02Z|01156|binding|INFO|f069307e-1a47-4342-b244-d88f04ff512b: Claiming fa:16:3e:dd:ef:3d 10.100.0.10
Dec 13 08:56:02 compute-0 systemd-udevd[366527]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:56:02 compute-0 NetworkManager[50376]: <info>  [1765616162.4007] manager: (tapf069307e-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/486)
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.405 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.407 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:ef:3d 10.100.0.10'], port_security=['fa:16:3e:dd:ef:3d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '822fa9f6-0a5d-490e-89d7-446df19a068b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49b9fdf8-f095-49d6-8a3e-6b41045e0020', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ef6794c-20c0-44ef-9932-95bf5c168e3e, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f069307e-1a47-4342-b244-d88f04ff512b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.408 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f069307e-1a47-4342-b244-d88f04ff512b in datapath d62e4a11-9334-4dbd-978f-dcabebeb9f79 bound to our chassis
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.410 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d62e4a11-9334-4dbd-978f-dcabebeb9f79
Dec 13 08:56:02 compute-0 NetworkManager[50376]: <info>  [1765616162.4131] device (tapf069307e-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:56:02 compute-0 NetworkManager[50376]: <info>  [1765616162.4141] device (tapf069307e-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:56:02 compute-0 ovn_controller[148476]: 2025-12-13T08:56:02Z|01157|binding|INFO|Setting lport f069307e-1a47-4342-b244-d88f04ff512b ovn-installed in OVS
Dec 13 08:56:02 compute-0 ovn_controller[148476]: 2025-12-13T08:56:02Z|01158|binding|INFO|Setting lport f069307e-1a47-4342-b244-d88f04ff512b up in Southbound
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.428 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[149396da-7c7e-4d57-864d-5803928e1ea2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:02 compute-0 systemd-machined[210538]: New machine qemu-146-instance-00000077.
Dec 13 08:56:02 compute-0 systemd[1]: Started Virtual Machine qemu-146-instance-00000077.
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.463 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d8ce85-1646-40c6-ad51-7b009b771a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.466 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[56d7445a-385e-4cb5-b18a-b309769a5989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.493 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[54cc44e0-448c-4947-a28d-67d7d59ece10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.510 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c631881d-a1ff-4ec7-aafd-aa310676c5ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62e4a11-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869981, 'reachable_time': 20363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366756, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.531 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8bff13ca-7454-4e68-86e5-2a98a5f31682]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd62e4a11-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869993, 'tstamp': 869993}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366758, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd62e4a11-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869996, 'tstamp': 869996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366758, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.533 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62e4a11-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.537 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd62e4a11-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.537 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.537 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd62e4a11-90, col_values=(('external_ids', {'iface-id': '3d979ee9-5b95-4edf-8ffc-3de7e778c5ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.538 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.625 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.684 248514 DEBUG nova.network.neutron [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Updated VIF entry in instance network info cache for port f069307e-1a47-4342-b244-d88f04ff512b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.684 248514 DEBUG nova.network.neutron [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Updating instance_info_cache with network_info: [{"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.709 248514 DEBUG oslo_concurrency.lockutils [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.835 248514 DEBUG oslo_concurrency.lockutils [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.835 248514 DEBUG oslo_concurrency.lockutils [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:56:02 compute-0 nova_compute[248510]: 2025-12-13 08:56:02.835 248514 DEBUG nova.network.neutron [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.095 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616163.0952287, 822fa9f6-0a5d-490e-89d7-446df19a068b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.096 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] VM Started (Lifecycle Event)
Dec 13 08:56:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Dec 13 08:56:03 compute-0 ceph-mon[76537]: pgmap v2836: 321 pgs: 321 active+clean; 326 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Dec 13 08:56:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Dec 13 08:56:03 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.520 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.525 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616163.0953505, 822fa9f6-0a5d-490e-89d7-446df19a068b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.525 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] VM Paused (Lifecycle Event)
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.545 248514 DEBUG nova.compute.manager [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-deleted-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.545 248514 INFO nova.compute.manager [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Neutron deleted interface e7c59bd7-06ff-4220-ac42-ec02c9e22e2c; detaching it from the instance and deleting it from the info cache
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.546 248514 DEBUG nova.network.neutron [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.550 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.553 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.576 248514 DEBUG nova.storage.rbd_utils [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] cloning vms/75f348ef-4044-47a1-ba1b-f1b66513450c_disk@5af9316f0a95453cbbf8adf24d1ded6b to images/bc45ce83-2d30-4107-90ce-9a9307d84fab clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.607 248514 DEBUG nova.compute.manager [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-unplugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.607 248514 DEBUG oslo_concurrency.lockutils [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.608 248514 DEBUG oslo_concurrency.lockutils [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.608 248514 DEBUG oslo_concurrency.lockutils [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.608 248514 DEBUG nova.compute.manager [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-unplugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.608 248514 WARNING nova.compute.manager [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received unexpected event network-vif-unplugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c for instance with vm_state active and task_state None.
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.608 248514 DEBUG nova.compute.manager [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.609 248514 DEBUG oslo_concurrency.lockutils [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.609 248514 DEBUG oslo_concurrency.lockutils [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.609 248514 DEBUG oslo_concurrency.lockutils [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.609 248514 DEBUG nova.compute.manager [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.609 248514 WARNING nova.compute.manager [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received unexpected event network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c for instance with vm_state active and task_state None.
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.611 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.612 248514 DEBUG nova.objects.instance [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'system_metadata' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.637 248514 DEBUG nova.objects.instance [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'flavor' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.659 248514 DEBUG nova.virt.libvirt.vif [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.659 248514 DEBUG nova.network.os_vif_util [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.660 248514 DEBUG nova.network.os_vif_util [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.663 248514 DEBUG nova.virt.libvirt.guest [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.666 248514 DEBUG nova.virt.libvirt.guest [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface>not found in domain: <domain type='kvm' id='144'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <name>instance-00000075</name>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <uuid>5c900cfb-46bb-436b-a574-7985be3447da</uuid>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:56:01</nova:creationTime>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:56:03 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <resource>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </resource>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <system>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <entry name='serial'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <entry name='uuid'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </system>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <os>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </os>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <features>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </features>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk' index='2'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk.config' index='1'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:c9:87:a3'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target dev='tapa010c1a2-26'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <source path='/dev/pts/1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       </target>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/1'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <source path='/dev/pts/1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </console>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </input>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </input>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </input>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </graphics>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <video>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </video>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c584,c742</label>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c584,c742</imagelabel>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:56:03 compute-0 nova_compute[248510]: </domain>
Dec 13 08:56:03 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.666 248514 DEBUG nova.virt.libvirt.guest [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.671 248514 DEBUG nova.virt.libvirt.guest [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface>not found in domain: <domain type='kvm' id='144'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <name>instance-00000075</name>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <uuid>5c900cfb-46bb-436b-a574-7985be3447da</uuid>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:56:01</nova:creationTime>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:56:03 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <resource>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </resource>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <system>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <entry name='serial'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <entry name='uuid'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </system>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <os>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </os>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <features>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </features>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk' index='2'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk.config' index='1'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </controller>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:c9:87:a3'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target dev='tapa010c1a2-26'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <source path='/dev/pts/1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       </target>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/1'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <source path='/dev/pts/1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </console>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </input>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </input>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </input>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </graphics>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <video>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </video>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c584,c742</label>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c584,c742</imagelabel>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 08:56:03 compute-0 nova_compute[248510]: </domain>
Dec 13 08:56:03 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.672 248514 WARNING nova.virt.libvirt.driver [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Detaching interface fa:16:3e:1d:f1:bc failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tape7c59bd7-06' not found.
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.673 248514 DEBUG nova.virt.libvirt.vif [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.673 248514 DEBUG nova.network.os_vif_util [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.674 248514 DEBUG nova.network.os_vif_util [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.675 248514 DEBUG os_vif [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.677 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7c59bd7-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.678 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.680 248514 INFO os_vif [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06')
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.681 248514 DEBUG nova.virt.libvirt.guest [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:56:03</nova:creationTime>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 08:56:03 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 08:56:03 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:56:03 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:56:03 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:56:03 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:56:03 compute-0 nova_compute[248510]: 2025-12-13 08:56:03.893 248514 DEBUG nova.storage.rbd_utils [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] flattening images/bc45ce83-2d30-4107-90ce-9a9307d84fab flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:56:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2838: 321 pgs: 321 active+clean; 326 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 08:56:04 compute-0 ceph-mon[76537]: osdmap e301: 3 total, 3 up, 3 in
Dec 13 08:56:04 compute-0 nova_compute[248510]: 2025-12-13 08:56:04.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:56:04 compute-0 ovn_controller[148476]: 2025-12-13T08:56:04Z|01159|binding|INFO|Releasing lport 47f45749-b232-4d0c-bf37-be042ea606c8 from this chassis (sb_readonly=0)
Dec 13 08:56:04 compute-0 ovn_controller[148476]: 2025-12-13T08:56:04Z|01160|binding|INFO|Releasing lport 091240c0-aa08-4e16-a096-0471c0ff1f24 from this chassis (sb_readonly=0)
Dec 13 08:56:04 compute-0 ovn_controller[148476]: 2025-12-13T08:56:04Z|01161|binding|INFO|Releasing lport 3d979ee9-5b95-4edf-8ffc-3de7e778c5ff from this chassis (sb_readonly=0)
Dec 13 08:56:04 compute-0 nova_compute[248510]: 2025-12-13 08:56:04.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.198 248514 INFO nova.network.neutron [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Port e7c59bd7-06ff-4220-ac42-ec02c9e22e2c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.198 248514 DEBUG nova.network.neutron [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.230 248514 DEBUG oslo_concurrency.lockutils [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:05 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.271 248514 DEBUG oslo_concurrency.lockutils [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-5c900cfb-46bb-436b-a574-7985be3447da-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.720 248514 DEBUG nova.compute.manager [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.721 248514 DEBUG oslo_concurrency.lockutils [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.722 248514 DEBUG oslo_concurrency.lockutils [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.723 248514 DEBUG oslo_concurrency.lockutils [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.723 248514 DEBUG nova.compute.manager [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Processing event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.724 248514 DEBUG nova.compute.manager [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.724 248514 DEBUG oslo_concurrency.lockutils [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.725 248514 DEBUG oslo_concurrency.lockutils [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.725 248514 DEBUG oslo_concurrency.lockutils [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.726 248514 DEBUG nova.compute.manager [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] No waiting events found dispatching network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.727 248514 WARNING nova.compute.manager [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received unexpected event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b for instance with vm_state building and task_state spawning.
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.728 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.735 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616165.7348769, 822fa9f6-0a5d-490e-89d7-446df19a068b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.735 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] VM Resumed (Lifecycle Event)
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.739 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.744 248514 INFO nova.virt.libvirt.driver [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Instance spawned successfully.
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.745 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.763 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.769 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.772 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.772 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.772 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.773 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.773 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.773 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:56:05 compute-0 ceph-mon[76537]: pgmap v2838: 321 pgs: 321 active+clean; 326 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.804 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.842 248514 INFO nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Took 12.35 seconds to spawn the instance on the hypervisor.
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.843 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.911 248514 INFO nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Took 13.49 seconds to build instance.
Dec 13 08:56:05 compute-0 nova_compute[248510]: 2025-12-13 08:56:05.929 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:06 compute-0 nova_compute[248510]: 2025-12-13 08:56:06.044 248514 DEBUG nova.storage.rbd_utils [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] removing snapshot(5af9316f0a95453cbbf8adf24d1ded6b) on rbd image(75f348ef-4044-47a1-ba1b-f1b66513450c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:56:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2839: 321 pgs: 321 active+clean; 326 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 08:56:06 compute-0 nova_compute[248510]: 2025-12-13 08:56:06.454 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.019 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.020 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.020 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.020 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.021 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.023 248514 INFO nova.compute.manager [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Terminating instance
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.024 248514 DEBUG nova.compute.manager [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:56:07 compute-0 ceph-mon[76537]: pgmap v2839: 321 pgs: 321 active+clean; 326 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.627 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Dec 13 08:56:07 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Dec 13 08:56:07 compute-0 kernel: tapa010c1a2-26 (unregistering): left promiscuous mode
Dec 13 08:56:07 compute-0 NetworkManager[50376]: <info>  [1765616167.7979] device (tapa010c1a2-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:56:07 compute-0 ovn_controller[148476]: 2025-12-13T08:56:07Z|01162|binding|INFO|Releasing lport a010c1a2-26e3-477b-9539-f12ad28801ca from this chassis (sb_readonly=0)
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.810 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:07 compute-0 ovn_controller[148476]: 2025-12-13T08:56:07Z|01163|binding|INFO|Setting lport a010c1a2-26e3-477b-9539-f12ad28801ca down in Southbound
Dec 13 08:56:07 compute-0 ovn_controller[148476]: 2025-12-13T08:56:07Z|01164|binding|INFO|Removing iface tapa010c1a2-26 ovn-installed in OVS
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.818 248514 DEBUG nova.compute.manager [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-changed-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.818 248514 DEBUG nova.compute.manager [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing instance network info cache due to event network-changed-a010c1a2-26e3-477b-9539-f12ad28801ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:56:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:07.818 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:87:a3 10.100.0.8'], port_security=['fa:16:3e:c9:87:a3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5c900cfb-46bb-436b-a574-7985be3447da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2e8c6a7-9a03-4a93-a9dd-d4be82f5297d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d20f24f-0d1e-4b3a-97e8-eb661209feb7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a010c1a2-26e3-477b-9539-f12ad28801ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.819 248514 DEBUG oslo_concurrency.lockutils [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.819 248514 DEBUG oslo_concurrency.lockutils [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.819 248514 DEBUG nova.network.neutron [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing network info cache for port a010c1a2-26e3-477b-9539-f12ad28801ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:56:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:07.820 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a010c1a2-26e3-477b-9539-f12ad28801ca in datapath b2d9e215-0c32-4abc-92a1-ad5f852b369d unbound from our chassis
Dec 13 08:56:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:07.823 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2d9e215-0c32-4abc-92a1-ad5f852b369d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:56:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:07.824 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5770f2-439b-4c58-aa99-8c8d7f2a0f83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:07.825 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d namespace which is not needed anymore
Dec 13 08:56:07 compute-0 nova_compute[248510]: 2025-12-13 08:56:07.833 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:07 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000075.scope: Deactivated successfully.
Dec 13 08:56:07 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000075.scope: Consumed 14.477s CPU time.
Dec 13 08:56:07 compute-0 systemd-machined[210538]: Machine qemu-144-instance-00000075 terminated.
Dec 13 08:56:08 compute-0 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [NOTICE]   (364929) : haproxy version is 2.8.14-c23fe91
Dec 13 08:56:08 compute-0 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [NOTICE]   (364929) : path to executable is /usr/sbin/haproxy
Dec 13 08:56:08 compute-0 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [WARNING]  (364929) : Exiting Master process...
Dec 13 08:56:08 compute-0 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [ALERT]    (364929) : Current worker (364931) exited with code 143 (Terminated)
Dec 13 08:56:08 compute-0 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [WARNING]  (364929) : All workers exited. Exiting... (0)
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.040 248514 DEBUG nova.storage.rbd_utils [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] creating snapshot(snap) on rbd image(bc45ce83-2d30-4107-90ce-9a9307d84fab) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:56:08 compute-0 systemd[1]: libpod-d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6.scope: Deactivated successfully.
Dec 13 08:56:08 compute-0 podman[366897]: 2025-12-13 08:56:08.054349547 +0000 UTC m=+0.105701381 container died d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.173 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6-userdata-shm.mount: Deactivated successfully.
Dec 13 08:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-761b232b1aa41b0d759478588d2b31a2caafbf1df47bdacf3a2927b25a18515d-merged.mount: Deactivated successfully.
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.187 248514 INFO nova.virt.libvirt.driver [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Instance destroyed successfully.
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.188 248514 DEBUG nova.objects.instance [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.207 248514 DEBUG nova.virt.libvirt.vif [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.208 248514 DEBUG nova.network.os_vif_util [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.209 248514 DEBUG nova.network.os_vif_util [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.210 248514 DEBUG os_vif [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.212 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa010c1a2-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.251 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2841: 321 pgs: 321 active+clean; 364 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.1 MiB/s wr, 104 op/s
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.262 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:56:08 compute-0 nova_compute[248510]: 2025-12-13 08:56:08.265 248514 INFO os_vif [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26')
Dec 13 08:56:08 compute-0 podman[366897]: 2025-12-13 08:56:08.503818736 +0000 UTC m=+0.555170580 container cleanup d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 08:56:08 compute-0 systemd[1]: libpod-conmon-d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6.scope: Deactivated successfully.
Dec 13 08:56:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Dec 13 08:56:08 compute-0 ceph-mon[76537]: osdmap e302: 3 total, 3 up, 3 in
Dec 13 08:56:08 compute-0 ceph-mon[76537]: pgmap v2841: 321 pgs: 321 active+clean; 364 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.1 MiB/s wr, 104 op/s
Dec 13 08:56:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Dec 13 08:56:08 compute-0 podman[366972]: 2025-12-13 08:56:08.995166096 +0000 UTC m=+0.461146853 container remove d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:56:09 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Dec 13 08:56:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.013 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eaef195d-1b6d-466e-a791-3e25036e2a30]: (4, ('Sat Dec 13 08:56:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d (d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6)\nd6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6\nSat Dec 13 08:56:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d (d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6)\nd6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.020 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c280c01a-3b54-4f66-8dea-eeec0b66884e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.024 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2d9e215-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.029 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:09 compute-0 kernel: tapb2d9e215-00: left promiscuous mode
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.047 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.055 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5332e4-19cd-449e-aae8-cb9b72870d15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.074 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6a42deb9-907f-4fbd-a4a1-a45088d372e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.078 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f16493bc-97d0-42b3-b33d-e95a461a3f46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.107 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4538e68f-e204-46e5-a4f7-d0928718fcea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870136, 'reachable_time': 29451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366985, 'error': None, 'target': 'ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.111 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:56:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.111 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[68cb5833-ffdb-4f14-8f49-01a5a4b8d38c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:09 compute-0 systemd[1]: run-netns-ovnmeta\x2db2d9e215\x2d0c32\x2d4abc\x2d92a1\x2dad5f852b369d.mount: Deactivated successfully.
Dec 13 08:56:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:56:09
Dec 13 08:56:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:56:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:56:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.mgr', 'backups', 'volumes']
Dec 13 08:56:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.509 248514 INFO nova.virt.libvirt.driver [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Deleting instance files /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da_del
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.511 248514 INFO nova.virt.libvirt.driver [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Deletion of /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da_del complete
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.638 248514 INFO nova.compute.manager [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Took 2.61 seconds to destroy the instance on the hypervisor.
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.639 248514 DEBUG oslo.service.loopingcall [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.640 248514 DEBUG nova.compute.manager [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.640 248514 DEBUG nova.network.neutron [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.814 248514 DEBUG nova.network.neutron [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updated VIF entry in instance network info cache for port a010c1a2-26e3-477b-9539-f12ad28801ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.816 248514 DEBUG nova.network.neutron [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.907 248514 DEBUG oslo_concurrency.lockutils [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.951 248514 DEBUG nova.compute.manager [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-unplugged-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.952 248514 DEBUG oslo_concurrency.lockutils [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.953 248514 DEBUG oslo_concurrency.lockutils [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.953 248514 DEBUG oslo_concurrency.lockutils [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.953 248514 DEBUG nova.compute.manager [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-unplugged-a010c1a2-26e3-477b-9539-f12ad28801ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.953 248514 DEBUG nova.compute.manager [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-unplugged-a010c1a2-26e3-477b-9539-f12ad28801ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.954 248514 DEBUG nova.compute.manager [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.954 248514 DEBUG oslo_concurrency.lockutils [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.954 248514 DEBUG oslo_concurrency.lockutils [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.955 248514 DEBUG oslo_concurrency.lockutils [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.955 248514 DEBUG nova.compute.manager [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:56:09 compute-0 nova_compute[248510]: 2025-12-13 08:56:09.955 248514 WARNING nova.compute.manager [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received unexpected event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca for instance with vm_state active and task_state deleting.
Dec 13 08:56:09 compute-0 ceph-mon[76537]: osdmap e303: 3 total, 3 up, 3 in
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2843: 321 pgs: 321 active+clean; 405 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 6.9 MiB/s wr, 266 op/s
Dec 13 08:56:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:56:10 compute-0 nova_compute[248510]: 2025-12-13 08:56:10.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:56:10 compute-0 nova_compute[248510]: 2025-12-13 08:56:10.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:56:10 compute-0 nova_compute[248510]: 2025-12-13 08:56:10.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:56:10 compute-0 nova_compute[248510]: 2025-12-13 08:56:10.809 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:56:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:56:10 compute-0 nova_compute[248510]: 2025-12-13 08:56:10.885 248514 INFO nova.virt.libvirt.driver [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Snapshot image upload complete
Dec 13 08:56:10 compute-0 nova_compute[248510]: 2025-12-13 08:56:10.886 248514 INFO nova.compute.manager [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Took 9.39 seconds to snapshot the instance on the hypervisor.
Dec 13 08:56:10 compute-0 ceph-mon[76537]: pgmap v2843: 321 pgs: 321 active+clean; 405 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 6.9 MiB/s wr, 266 op/s
Dec 13 08:56:11 compute-0 nova_compute[248510]: 2025-12-13 08:56:11.038 248514 DEBUG nova.network.neutron [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:11 compute-0 nova_compute[248510]: 2025-12-13 08:56:11.065 248514 INFO nova.compute.manager [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Took 1.43 seconds to deallocate network for instance.
Dec 13 08:56:11 compute-0 nova_compute[248510]: 2025-12-13 08:56:11.071 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:56:11 compute-0 nova_compute[248510]: 2025-12-13 08:56:11.071 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:56:11 compute-0 nova_compute[248510]: 2025-12-13 08:56:11.071 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:56:11 compute-0 nova_compute[248510]: 2025-12-13 08:56:11.072 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2dacd79d-d668-430f-89e3-bd607a8298ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:56:11 compute-0 nova_compute[248510]: 2025-12-13 08:56:11.202 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:11 compute-0 nova_compute[248510]: 2025-12-13 08:56:11.204 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:11 compute-0 nova_compute[248510]: 2025-12-13 08:56:11.441 248514 DEBUG oslo_concurrency.processutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:56:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159425532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.035 248514 DEBUG oslo_concurrency.processutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.043 248514 DEBUG nova.compute.provider_tree [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.068 248514 DEBUG nova.scheduler.client.report [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:56:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2159425532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.075 248514 DEBUG nova.compute.manager [req-788509ca-8c78-4127-852e-9c48ba9f69d0 req-83acf6e9-b05f-48b5-9062-a687db55b599 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-deleted-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.094 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.163 248514 INFO nova.scheduler.client.report [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 5c900cfb-46bb-436b-a574-7985be3447da
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.248 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2844: 321 pgs: 321 active+clean; 405 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 5.8 MiB/s wr, 192 op/s
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.631 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.860 248514 DEBUG nova.compute.manager [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-changed-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.861 248514 DEBUG nova.compute.manager [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Refreshing instance network info cache due to event network-changed-f069307e-1a47-4342-b244-d88f04ff512b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.861 248514 DEBUG oslo_concurrency.lockutils [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.862 248514 DEBUG oslo_concurrency.lockutils [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:56:12 compute-0 nova_compute[248510]: 2025-12-13 08:56:12.862 248514 DEBUG nova.network.neutron [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Refreshing network info cache for port f069307e-1a47-4342-b244-d88f04ff512b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:56:13 compute-0 ceph-mon[76537]: pgmap v2844: 321 pgs: 321 active+clean; 405 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 5.8 MiB/s wr, 192 op/s
Dec 13 08:56:13 compute-0 nova_compute[248510]: 2025-12-13 08:56:13.284 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:13 compute-0 nova_compute[248510]: 2025-12-13 08:56:13.691 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updating instance_info_cache with network_info: [{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:13 compute-0 nova_compute[248510]: 2025-12-13 08:56:13.720 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:13 compute-0 nova_compute[248510]: 2025-12-13 08:56:13.720 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:56:13 compute-0 nova_compute[248510]: 2025-12-13 08:56:13.721 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:56:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2845: 321 pgs: 321 active+clean; 326 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 245 op/s
Dec 13 08:56:14 compute-0 nova_compute[248510]: 2025-12-13 08:56:14.901 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:14.900 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:56:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:14.900 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:56:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:14.901 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:56:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1102281189' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:56:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:56:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1102281189' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:56:15 compute-0 ceph-mon[76537]: pgmap v2845: 321 pgs: 321 active+clean; 326 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 245 op/s
Dec 13 08:56:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1102281189' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:56:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1102281189' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:56:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:56:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Dec 13 08:56:15 compute-0 nova_compute[248510]: 2025-12-13 08:56:15.664 248514 DEBUG nova.network.neutron [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Updated VIF entry in instance network info cache for port f069307e-1a47-4342-b244-d88f04ff512b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:56:15 compute-0 nova_compute[248510]: 2025-12-13 08:56:15.665 248514 DEBUG nova.network.neutron [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Updating instance_info_cache with network_info: [{"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:15 compute-0 nova_compute[248510]: 2025-12-13 08:56:15.688 248514 DEBUG oslo_concurrency.lockutils [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:15 compute-0 nova_compute[248510]: 2025-12-13 08:56:15.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:56:15 compute-0 nova_compute[248510]: 2025-12-13 08:56:15.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:56:15 compute-0 nova_compute[248510]: 2025-12-13 08:56:15.815 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:15 compute-0 nova_compute[248510]: 2025-12-13 08:56:15.816 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:15 compute-0 nova_compute[248510]: 2025-12-13 08:56:15.816 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:15 compute-0 nova_compute[248510]: 2025-12-13 08:56:15.816 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:56:15 compute-0 nova_compute[248510]: 2025-12-13 08:56:15.816 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Dec 13 08:56:15 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.208 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.209 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.231 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:56:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2847: 321 pgs: 321 active+clean; 326 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.8 MiB/s wr, 184 op/s
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.383 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.384 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:56:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3903390185' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.395 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.396 248514 INFO nova.compute.claims [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.414 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.745 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.745 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.754 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.754 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.760 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.761 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:56:16 compute-0 nova_compute[248510]: 2025-12-13 08:56:16.893 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:16 compute-0 ceph-mon[76537]: osdmap e304: 3 total, 3 up, 3 in
Dec 13 08:56:16 compute-0 ceph-mon[76537]: pgmap v2847: 321 pgs: 321 active+clean; 326 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.8 MiB/s wr, 184 op/s
Dec 13 08:56:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3903390185' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.036 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.039 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3021MB free_disk=59.87539869081229GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.040 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:17 compute-0 ovn_controller[148476]: 2025-12-13T08:56:17Z|01165|binding|INFO|Releasing lport 47f45749-b232-4d0c-bf37-be042ea606c8 from this chassis (sb_readonly=0)
Dec 13 08:56:17 compute-0 ovn_controller[148476]: 2025-12-13T08:56:17Z|01166|binding|INFO|Releasing lport 3d979ee9-5b95-4edf-8ffc-3de7e778c5ff from this chassis (sb_readonly=0)
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:56:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417424684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.489 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.495 248514 DEBUG nova.compute.provider_tree [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.516 248514 DEBUG nova.scheduler.client.report [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.555 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.556 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.558 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.644 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.645 248514 DEBUG nova.network.neutron [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.665 248514 INFO nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.682 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.686 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2dacd79d-d668-430f-89e3-bd607a8298ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.686 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 75f348ef-4044-47a1-ba1b-f1b66513450c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 822fa9f6-0a5d-490e-89d7-446df19a068b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 4887eb43-1570-49a5-b20e-326af1e84a7b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.799 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.801 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.802 248514 INFO nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Creating image(s)
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.825 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 4887eb43-1570-49a5-b20e-326af1e84a7b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.850 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 4887eb43-1570-49a5-b20e-326af1e84a7b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.874 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 4887eb43-1570-49a5-b20e-326af1e84a7b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.878 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "d52579aebd0b024759c7e8234ab3d7cdd411a8c3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.879 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "d52579aebd0b024759c7e8234ab3d7cdd411a8c3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.885 248514 DEBUG nova.policy [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '81fb01d9d08845c3b626079ab726db7a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6c21c2eb2d0c4465ae562381f358fbd8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:56:17 compute-0 nova_compute[248510]: 2025-12-13 08:56:17.889 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/417424684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2848: 321 pgs: 321 active+clean; 326 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.4 MiB/s wr, 159 op/s
Dec 13 08:56:18 compute-0 nova_compute[248510]: 2025-12-13 08:56:18.286 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:18 compute-0 nova_compute[248510]: 2025-12-13 08:56:18.295 248514 DEBUG nova.virt.libvirt.imagebackend [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/bc45ce83-2d30-4107-90ce-9a9307d84fab/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/bc45ce83-2d30-4107-90ce-9a9307d84fab/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 13 08:56:18 compute-0 nova_compute[248510]: 2025-12-13 08:56:18.359 248514 DEBUG nova.virt.libvirt.imagebackend [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Selected location: {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/bc45ce83-2d30-4107-90ce-9a9307d84fab/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 13 08:56:18 compute-0 nova_compute[248510]: 2025-12-13 08:56:18.360 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] cloning images/bc45ce83-2d30-4107-90ce-9a9307d84fab@snap to None/4887eb43-1570-49a5-b20e-326af1e84a7b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:56:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:56:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726082217' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:18 compute-0 nova_compute[248510]: 2025-12-13 08:56:18.523 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:18 compute-0 nova_compute[248510]: 2025-12-13 08:56:18.529 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:56:18 compute-0 nova_compute[248510]: 2025-12-13 08:56:18.560 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:56:18 compute-0 nova_compute[248510]: 2025-12-13 08:56:18.594 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:56:18 compute-0 nova_compute[248510]: 2025-12-13 08:56:18.594 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:19 compute-0 ceph-mon[76537]: pgmap v2848: 321 pgs: 321 active+clean; 326 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.4 MiB/s wr, 159 op/s
Dec 13 08:56:19 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/726082217' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2849: 321 pgs: 321 active+clean; 331 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 673 KiB/s wr, 57 op/s
Dec 13 08:56:20 compute-0 nova_compute[248510]: 2025-12-13 08:56:20.441 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "d52579aebd0b024759c7e8234ab3d7cdd411a8c3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:20 compute-0 nova_compute[248510]: 2025-12-13 08:56:20.493 248514 DEBUG nova.network.neutron [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Successfully created port: 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:56:20 compute-0 nova_compute[248510]: 2025-12-13 08:56:20.594 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:56:20 compute-0 nova_compute[248510]: 2025-12-13 08:56:20.594 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:56:20 compute-0 nova_compute[248510]: 2025-12-13 08:56:20.595 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:56:20 compute-0 nova_compute[248510]: 2025-12-13 08:56:20.601 248514 DEBUG nova.objects.instance [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lazy-loading 'migration_context' on Instance uuid 4887eb43-1570-49a5-b20e-326af1e84a7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:56:20 compute-0 nova_compute[248510]: 2025-12-13 08:56:20.618 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:56:20 compute-0 nova_compute[248510]: 2025-12-13 08:56:20.618 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Ensure instance console log exists: /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:56:20 compute-0 nova_compute[248510]: 2025-12-13 08:56:20.618 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:20 compute-0 nova_compute[248510]: 2025-12-13 08:56:20.619 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:20 compute-0 nova_compute[248510]: 2025-12-13 08:56:20.619 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019915082332499054 of space, bias 1.0, pg target 0.5974524699749716 quantized to 32 (current 32)
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014274388759359915 of space, bias 1.0, pg target 0.42823166278079744 quantized to 32 (current 32)
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.729684235412037e-07 of space, bias 4.0, pg target 0.0006875621082494445 quantized to 16 (current 32)
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:56:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:56:21 compute-0 nova_compute[248510]: 2025-12-13 08:56:21.366 248514 DEBUG nova.network.neutron [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Successfully updated port: 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:56:21 compute-0 nova_compute[248510]: 2025-12-13 08:56:21.398 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:56:21 compute-0 nova_compute[248510]: 2025-12-13 08:56:21.399 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquired lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:56:21 compute-0 nova_compute[248510]: 2025-12-13 08:56:21.399 248514 DEBUG nova.network.neutron [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:56:21 compute-0 ceph-mon[76537]: pgmap v2849: 321 pgs: 321 active+clean; 331 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 673 KiB/s wr, 57 op/s
Dec 13 08:56:21 compute-0 nova_compute[248510]: 2025-12-13 08:56:21.526 248514 DEBUG nova.compute.manager [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-changed-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:21 compute-0 nova_compute[248510]: 2025-12-13 08:56:21.527 248514 DEBUG nova.compute.manager [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Refreshing instance network info cache due to event network-changed-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:56:21 compute-0 nova_compute[248510]: 2025-12-13 08:56:21.529 248514 DEBUG oslo_concurrency.lockutils [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:56:21 compute-0 nova_compute[248510]: 2025-12-13 08:56:21.631 248514 DEBUG nova.network.neutron [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:56:21 compute-0 ovn_controller[148476]: 2025-12-13T08:56:21Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:ef:3d 10.100.0.10
Dec 13 08:56:21 compute-0 ovn_controller[148476]: 2025-12-13T08:56:21Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:ef:3d 10.100.0.10
Dec 13 08:56:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2850: 321 pgs: 321 active+clean; 331 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 673 KiB/s wr, 57 op/s
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.633 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:22 compute-0 ceph-mon[76537]: pgmap v2850: 321 pgs: 321 active+clean; 331 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 673 KiB/s wr, 57 op/s
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.927 248514 DEBUG nova.network.neutron [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updating instance_info_cache with network_info: [{"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.971 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Releasing lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.971 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Instance network_info: |[{"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.971 248514 DEBUG oslo_concurrency.lockutils [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.972 248514 DEBUG nova.network.neutron [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Refreshing network info cache for port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.975 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Start _get_guest_xml network_info=[{"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:56:01Z,direct_url=<?>,disk_format='raw',id=bc45ce83-2d30-4107-90ce-9a9307d84fab,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-485800617',owner='6c21c2eb2d0c4465ae562381f358fbd8',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:56:11Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': 'bc45ce83-2d30-4107-90ce-9a9307d84fab'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.980 248514 WARNING nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.987 248514 DEBUG nova.virt.libvirt.host [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.988 248514 DEBUG nova.virt.libvirt.host [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.992 248514 DEBUG nova.virt.libvirt.host [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.993 248514 DEBUG nova.virt.libvirt.host [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.994 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.994 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:56:01Z,direct_url=<?>,disk_format='raw',id=bc45ce83-2d30-4107-90ce-9a9307d84fab,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-485800617',owner='6c21c2eb2d0c4465ae562381f358fbd8',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:56:11Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.995 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.996 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.997 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.997 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.998 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:56:22 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.999 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:22.999 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:23.000 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:23.000 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:23.001 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:23.009 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:23.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:23.177 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616168.0633242, 5c900cfb-46bb-436b-a574-7985be3447da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:23.178 248514 INFO nova.compute.manager [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] VM Stopped (Lifecycle Event)
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:23.251 248514 DEBUG nova.compute.manager [None req-258c2fee-b5ef-478e-9bd2-74489e4771bd - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:23.288 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:56:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905031900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:23.635 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:23.670 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 4887eb43-1570-49a5-b20e-326af1e84a7b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:23 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/905031900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:56:23 compute-0 nova_compute[248510]: 2025-12-13 08:56:23.676 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:56:24 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148729958' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:56:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2851: 321 pgs: 321 active+clean; 358 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 2.6 MiB/s wr, 111 op/s
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.279 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.281 248514 DEBUG nova.virt.libvirt.vif [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1968716864',display_name='tempest-TestSnapshotPattern-server-1968716864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1968716864',id=120,image_ref='bc45ce83-2d30-4107-90ce-9a9307d84fab',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMAc1KgPQ7YZO/wn247Z4w7R3iPp09OVvor/iF1+jUHxvgVaItQTtOvsdjy6HTDjQXAsMtHqR3YUaichzeZD01aIcPKJkcVch2R8490ZPNprDBfXeJ2j92c1nlW1ddLVg==',key_name='tempest-TestSnapshotPattern-2059242288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c21c2eb2d0c4465ae562381f358fbd8',ramdisk_id='',reservation_id='r-ejrogks3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='75f348ef-4044-47a1-ba1b-f1b66513450c',image_min_disk='1',image_min_ram='0',image_owner_id='6c21c2eb2d0c4465ae562381f358fbd8',image_owner_project_name='tempest-TestSnapshotPattern-1494512648',image_owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member',image_user_id='81fb01d9d08845c3b626079ab726db7a',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1494512648',owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:56:17Z,user_data=None,user_id='81fb01d9d08845c3b626079ab726db7a',uuid=4887eb43-1570-49a5-b20e-326af1e84a7b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.282 248514 DEBUG nova.network.os_vif_util [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converting VIF {"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.283 248514 DEBUG nova.network.os_vif_util [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.285 248514 DEBUG nova.objects.instance [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4887eb43-1570-49a5-b20e-326af1e84a7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.330 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:56:24 compute-0 nova_compute[248510]:   <uuid>4887eb43-1570-49a5-b20e-326af1e84a7b</uuid>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   <name>instance-00000078</name>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <nova:name>tempest-TestSnapshotPattern-server-1968716864</nova:name>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:56:22</nova:creationTime>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:56:24 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:56:24 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:56:24 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:56:24 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:56:24 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:56:24 compute-0 nova_compute[248510]:         <nova:user uuid="81fb01d9d08845c3b626079ab726db7a">tempest-TestSnapshotPattern-1494512648-project-member</nova:user>
Dec 13 08:56:24 compute-0 nova_compute[248510]:         <nova:project uuid="6c21c2eb2d0c4465ae562381f358fbd8">tempest-TestSnapshotPattern-1494512648</nova:project>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="bc45ce83-2d30-4107-90ce-9a9307d84fab"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:56:24 compute-0 nova_compute[248510]:         <nova:port uuid="0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a">
Dec 13 08:56:24 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <system>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <entry name="serial">4887eb43-1570-49a5-b20e-326af1e84a7b</entry>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <entry name="uuid">4887eb43-1570-49a5-b20e-326af1e84a7b</entry>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     </system>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   <os>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   </os>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   <features>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   </features>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/4887eb43-1570-49a5-b20e-326af1e84a7b_disk">
Dec 13 08:56:24 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:56:24 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/4887eb43-1570-49a5-b20e-326af1e84a7b_disk.config">
Dec 13 08:56:24 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:56:24 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:da:e5:67"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <target dev="tap0b3a1c67-12"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/console.log" append="off"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <video>
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     </video>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <input type="keyboard" bus="usb"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:56:24 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:56:24 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:56:24 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:56:24 compute-0 nova_compute[248510]: </domain>
Dec 13 08:56:24 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.330 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Preparing to wait for external event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.330 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.331 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.331 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.331 248514 DEBUG nova.virt.libvirt.vif [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1968716864',display_name='tempest-TestSnapshotPattern-server-1968716864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1968716864',id=120,image_ref='bc45ce83-2d30-4107-90ce-9a9307d84fab',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMAc1KgPQ7YZO/wn247Z4w7R3iPp09OVvor/iF1+jUHxvgVaItQTtOvsdjy6HTDjQXAsMtHqR3YUaichzeZD01aIcPKJkcVch2R8490ZPNprDBfXeJ2j92c1nlW1ddLVg==',key_name='tempest-TestSnapshotPattern-2059242288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c21c2eb2d0c4465ae562381f358fbd8',ramdisk_id='',reservation_id='r-ejrogks3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='75f348ef-4044-47a1-ba1b-f1b66513450c',image_min_disk='1',image_min_ram='0',image_owner_id='6c21c2eb2d0c4465ae562381f358fbd8',image_owner_project_name='tempest-TestSnapshotPattern-1494512648',image_owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member',image_user_id='81fb01d9d08845c3b626079ab726db7a',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1494512648',owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:56:17Z,user_data=None,user_id='81fb01d9d08845c3b626079ab726db7a',uu
id=4887eb43-1570-49a5-b20e-326af1e84a7b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.332 248514 DEBUG nova.network.os_vif_util [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converting VIF {"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.332 248514 DEBUG nova.network.os_vif_util [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.332 248514 DEBUG os_vif [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.333 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.334 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.336 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b3a1c67-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.336 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b3a1c67-12, col_values=(('external_ids', {'iface-id': '0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:e5:67', 'vm-uuid': '4887eb43-1570-49a5-b20e-326af1e84a7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:24 compute-0 NetworkManager[50376]: <info>  [1765616184.3384] manager: (tap0b3a1c67-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/487)
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.339 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.343 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.343 248514 INFO os_vif [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12')
Dec 13 08:56:24 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2148729958' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:56:24 compute-0 ceph-mon[76537]: pgmap v2851: 321 pgs: 321 active+clean; 358 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 2.6 MiB/s wr, 111 op/s
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.692 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.693 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.694 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No VIF found with MAC fa:16:3e:da:e5:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.695 248514 INFO nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Using config drive
Dec 13 08:56:24 compute-0 nova_compute[248510]: 2025-12-13 08:56:24.732 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 4887eb43-1570-49a5-b20e-326af1e84a7b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:56:25 compute-0 nova_compute[248510]: 2025-12-13 08:56:25.945 248514 INFO nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Creating config drive at /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/disk.config
Dec 13 08:56:25 compute-0 nova_compute[248510]: 2025-12-13 08:56:25.953 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphoyps3ma execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.109 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphoyps3ma" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.133 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 4887eb43-1570-49a5-b20e-326af1e84a7b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.138 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/disk.config 4887eb43-1570-49a5-b20e-326af1e84a7b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2852: 321 pgs: 321 active+clean; 358 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.5 MiB/s wr, 108 op/s
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.296 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/disk.config 4887eb43-1570-49a5-b20e-326af1e84a7b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.297 248514 INFO nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Deleting local config drive /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/disk.config because it was imported into RBD.
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.331 248514 DEBUG nova.network.neutron [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updated VIF entry in instance network info cache for port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.332 248514 DEBUG nova.network.neutron [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updating instance_info_cache with network_info: [{"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:26 compute-0 kernel: tap0b3a1c67-12: entered promiscuous mode
Dec 13 08:56:26 compute-0 NetworkManager[50376]: <info>  [1765616186.3537] manager: (tap0b3a1c67-12): new Tun device (/org/freedesktop/NetworkManager/Devices/488)
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:26 compute-0 ovn_controller[148476]: 2025-12-13T08:56:26Z|01167|binding|INFO|Claiming lport 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a for this chassis.
Dec 13 08:56:26 compute-0 ovn_controller[148476]: 2025-12-13T08:56:26Z|01168|binding|INFO|0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a: Claiming fa:16:3e:da:e5:67 10.100.0.9
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.361 248514 DEBUG oslo_concurrency.lockutils [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:26 compute-0 ovn_controller[148476]: 2025-12-13T08:56:26Z|01169|binding|INFO|Setting lport 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a ovn-installed in OVS
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.384 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:26 compute-0 systemd-udevd[367388]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.392 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:e5:67 10.100.0.9'], port_security=['fa:16:3e:da:e5:67 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4887eb43-1570-49a5-b20e-326af1e84a7b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c21c2eb2d0c4465ae562381f358fbd8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e1e2d18-e674-4ee6-b553-a8676e40259f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c30b2fc-21fd-4778-91da-98384d5e05df, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:56:26 compute-0 ovn_controller[148476]: 2025-12-13T08:56:26Z|01170|binding|INFO|Setting lport 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a up in Southbound
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.393 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a in datapath d2ff4cff-54cc-40c6-a486-7e7532c2462b bound to our chassis
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.396 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2ff4cff-54cc-40c6-a486-7e7532c2462b
Dec 13 08:56:26 compute-0 systemd-machined[210538]: New machine qemu-147-instance-00000078.
Dec 13 08:56:26 compute-0 NetworkManager[50376]: <info>  [1765616186.4053] device (tap0b3a1c67-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:56:26 compute-0 NetworkManager[50376]: <info>  [1765616186.4062] device (tap0b3a1c67-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:56:26 compute-0 systemd[1]: Started Virtual Machine qemu-147-instance-00000078.
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.416 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5bcb21a-15b1-4bfe-8b1b-a532e69a7e2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.455 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1b037a-12aa-45f9-812c-a4e66232b69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.460 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fdfd3664-8c0e-4c16-90d1-a5eaf25b8172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.493 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[119b898b-f2c0-4e27-8c26-3498152a773d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.513 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7304c06-0430-4a0c-8a74-54dfde1140ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2ff4cff-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:85:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871447, 'reachable_time': 40005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367403, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.528 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2798dcfb-7a9b-4447-93b6-8cc933f74c2c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd2ff4cff-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 871459, 'tstamp': 871459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367404, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd2ff4cff-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 871463, 'tstamp': 871463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367404, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.529 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2ff4cff-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.533 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2ff4cff-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.533 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.534 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2ff4cff-50, col_values=(('external_ids', {'iface-id': '47f45749-b232-4d0c-bf37-be042ea606c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.534 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.918 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616186.917615, 4887eb43-1570-49a5-b20e-326af1e84a7b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.919 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] VM Started (Lifecycle Event)
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.957 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.962 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616186.9188564, 4887eb43-1570-49a5-b20e-326af1e84a7b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.963 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] VM Paused (Lifecycle Event)
Dec 13 08:56:26 compute-0 nova_compute[248510]: 2025-12-13 08:56:26.996 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.001 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.028 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:56:27 compute-0 ceph-mon[76537]: pgmap v2852: 321 pgs: 321 active+clean; 358 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.5 MiB/s wr, 108 op/s
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.343 248514 DEBUG nova.compute.manager [req-1e35f916-d105-45a9-b3c5-92762e33ccb8 req-599f9d48-2e0f-44f2-9fec-ea726948cc37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.343 248514 DEBUG oslo_concurrency.lockutils [req-1e35f916-d105-45a9-b3c5-92762e33ccb8 req-599f9d48-2e0f-44f2-9fec-ea726948cc37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.344 248514 DEBUG oslo_concurrency.lockutils [req-1e35f916-d105-45a9-b3c5-92762e33ccb8 req-599f9d48-2e0f-44f2-9fec-ea726948cc37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.344 248514 DEBUG oslo_concurrency.lockutils [req-1e35f916-d105-45a9-b3c5-92762e33ccb8 req-599f9d48-2e0f-44f2-9fec-ea726948cc37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.344 248514 DEBUG nova.compute.manager [req-1e35f916-d105-45a9-b3c5-92762e33ccb8 req-599f9d48-2e0f-44f2-9fec-ea726948cc37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Processing event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.345 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.348 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616187.3483145, 4887eb43-1570-49a5-b20e-326af1e84a7b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.349 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] VM Resumed (Lifecycle Event)
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.351 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.354 248514 INFO nova.virt.libvirt.driver [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Instance spawned successfully.
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.355 248514 INFO nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Took 9.55 seconds to spawn the instance on the hypervisor.
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.355 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.396 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.399 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.435 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.453 248514 INFO nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Took 11.17 seconds to build instance.
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.475 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:27 compute-0 nova_compute[248510]: 2025-12-13 08:56:27.636 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.215 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.215 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.216 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.216 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.216 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.218 248514 INFO nova.compute.manager [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Terminating instance
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.218 248514 DEBUG nova.compute.manager [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:56:28 compute-0 kernel: tapf069307e-1a (unregistering): left promiscuous mode
Dec 13 08:56:28 compute-0 NetworkManager[50376]: <info>  [1765616188.2547] device (tapf069307e-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.264 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:28 compute-0 ovn_controller[148476]: 2025-12-13T08:56:28Z|01171|binding|INFO|Releasing lport f069307e-1a47-4342-b244-d88f04ff512b from this chassis (sb_readonly=0)
Dec 13 08:56:28 compute-0 ovn_controller[148476]: 2025-12-13T08:56:28Z|01172|binding|INFO|Setting lport f069307e-1a47-4342-b244-d88f04ff512b down in Southbound
Dec 13 08:56:28 compute-0 ovn_controller[148476]: 2025-12-13T08:56:28Z|01173|binding|INFO|Removing iface tapf069307e-1a ovn-installed in OVS
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.266 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2853: 321 pgs: 321 active+clean; 358 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 439 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.271 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:ef:3d 10.100.0.10'], port_security=['fa:16:3e:dd:ef:3d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '822fa9f6-0a5d-490e-89d7-446df19a068b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'bb814d40-cd63-4d1c-97e3-f821733a618f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ef6794c-20c0-44ef-9932-95bf5c168e3e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f069307e-1a47-4342-b244-d88f04ff512b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.272 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f069307e-1a47-4342-b244-d88f04ff512b in datapath d62e4a11-9334-4dbd-978f-dcabebeb9f79 unbound from our chassis
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.274 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d62e4a11-9334-4dbd-978f-dcabebeb9f79
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.282 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.291 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1e44e0da-c55f-4a1d-a616-ea227a1dd144]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.323 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[550809e0-9982-4161-b2c3-282629129388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.327 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[52293860-177f-43b7-9ba1-793e0a9cc80e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:28 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000077.scope: Deactivated successfully.
Dec 13 08:56:28 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000077.scope: Consumed 14.554s CPU time.
Dec 13 08:56:28 compute-0 systemd-machined[210538]: Machine qemu-146-instance-00000077 terminated.
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.371 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fab671fc-fb99-4c76-a379-b3d4489c8bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.394 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe8af72-4f04-40c7-a513-ef72875d2223]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62e4a11-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869981, 'reachable_time': 20363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367456, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.411 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[509d4e6f-0e9c-4700-bcb1-a4787c38fcae]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd62e4a11-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869993, 'tstamp': 869993}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367457, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd62e4a11-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869996, 'tstamp': 869996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367457, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.414 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62e4a11-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.446 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.455 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.456 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd62e4a11-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.456 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.457 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd62e4a11-90, col_values=(('external_ids', {'iface-id': '3d979ee9-5b95-4edf-8ffc-3de7e778c5ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.457 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.464 248514 INFO nova.virt.libvirt.driver [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Instance destroyed successfully.
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.465 248514 DEBUG nova.objects.instance [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 822fa9f6-0a5d-490e-89d7-446df19a068b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.480 248514 DEBUG nova.virt.libvirt.vif [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=119,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHu0mZ4KdPQNIQDmStfb8oGr9IxxGZIxLFalN/4uGlZNxfEdmbl3D4ueCINh2ZhW4F33mK6Ev46I8YbLOH/wB4QDYG50l143kirQCN41tCbmF+vCIghQWMs2Kzj6YH+pg==',key_name='tempest-TestSecurityGroupsBasicOps-398328978',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:56:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-hzbxz9gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:56:05Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=822fa9f6-0a5d-490e-89d7-446df19a068b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.481 248514 DEBUG nova.network.os_vif_util [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.482 248514 DEBUG nova.network.os_vif_util [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.483 248514 DEBUG os_vif [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.485 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.485 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf069307e-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.488 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.490 248514 INFO os_vif [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a')
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.726 248514 INFO nova.virt.libvirt.driver [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Deleting instance files /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b_del
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.728 248514 INFO nova.virt.libvirt.driver [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Deletion of /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b_del complete
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.823 248514 INFO nova.compute.manager [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Took 0.60 seconds to destroy the instance on the hypervisor.
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.823 248514 DEBUG oslo.service.loopingcall [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.824 248514 DEBUG nova.compute.manager [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:56:28 compute-0 nova_compute[248510]: 2025-12-13 08:56:28.824 248514 DEBUG nova.network.neutron [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:56:29 compute-0 ceph-mon[76537]: pgmap v2853: 321 pgs: 321 active+clean; 358 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 439 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Dec 13 08:56:29 compute-0 nova_compute[248510]: 2025-12-13 08:56:29.352 248514 DEBUG nova.compute.manager [req-5fd19515-559d-492d-9764-73f71b971c4e req-cbc7afc5-7d44-431c-bdf1-d1db2116d4c7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-vif-unplugged-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:29 compute-0 nova_compute[248510]: 2025-12-13 08:56:29.352 248514 DEBUG oslo_concurrency.lockutils [req-5fd19515-559d-492d-9764-73f71b971c4e req-cbc7afc5-7d44-431c-bdf1-d1db2116d4c7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:29 compute-0 nova_compute[248510]: 2025-12-13 08:56:29.353 248514 DEBUG oslo_concurrency.lockutils [req-5fd19515-559d-492d-9764-73f71b971c4e req-cbc7afc5-7d44-431c-bdf1-d1db2116d4c7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:29 compute-0 nova_compute[248510]: 2025-12-13 08:56:29.353 248514 DEBUG oslo_concurrency.lockutils [req-5fd19515-559d-492d-9764-73f71b971c4e req-cbc7afc5-7d44-431c-bdf1-d1db2116d4c7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:29 compute-0 nova_compute[248510]: 2025-12-13 08:56:29.353 248514 DEBUG nova.compute.manager [req-5fd19515-559d-492d-9764-73f71b971c4e req-cbc7afc5-7d44-431c-bdf1-d1db2116d4c7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] No waiting events found dispatching network-vif-unplugged-f069307e-1a47-4342-b244-d88f04ff512b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:56:29 compute-0 nova_compute[248510]: 2025-12-13 08:56:29.354 248514 DEBUG nova.compute.manager [req-5fd19515-559d-492d-9764-73f71b971c4e req-cbc7afc5-7d44-431c-bdf1-d1db2116d4c7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-vif-unplugged-f069307e-1a47-4342-b244-d88f04ff512b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:56:29 compute-0 nova_compute[248510]: 2025-12-13 08:56:29.958 248514 DEBUG nova.compute.manager [req-00d49d14-40ea-4fcb-8fb1-a4b4ae45a63e req-1690cf02-5e7e-4bfb-80a2-e1f0d320d741 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:29 compute-0 nova_compute[248510]: 2025-12-13 08:56:29.959 248514 DEBUG oslo_concurrency.lockutils [req-00d49d14-40ea-4fcb-8fb1-a4b4ae45a63e req-1690cf02-5e7e-4bfb-80a2-e1f0d320d741 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:29 compute-0 nova_compute[248510]: 2025-12-13 08:56:29.959 248514 DEBUG oslo_concurrency.lockutils [req-00d49d14-40ea-4fcb-8fb1-a4b4ae45a63e req-1690cf02-5e7e-4bfb-80a2-e1f0d320d741 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:29 compute-0 nova_compute[248510]: 2025-12-13 08:56:29.959 248514 DEBUG oslo_concurrency.lockutils [req-00d49d14-40ea-4fcb-8fb1-a4b4ae45a63e req-1690cf02-5e7e-4bfb-80a2-e1f0d320d741 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:29 compute-0 nova_compute[248510]: 2025-12-13 08:56:29.959 248514 DEBUG nova.compute.manager [req-00d49d14-40ea-4fcb-8fb1-a4b4ae45a63e req-1690cf02-5e7e-4bfb-80a2-e1f0d320d741 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] No waiting events found dispatching network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:56:29 compute-0 nova_compute[248510]: 2025-12-13 08:56:29.960 248514 WARNING nova.compute.manager [req-00d49d14-40ea-4fcb-8fb1-a4b4ae45a63e req-1690cf02-5e7e-4bfb-80a2-e1f0d320d741 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received unexpected event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a for instance with vm_state active and task_state None.
Dec 13 08:56:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2854: 321 pgs: 321 active+clean; 313 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Dec 13 08:56:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:56:30 compute-0 nova_compute[248510]: 2025-12-13 08:56:30.934 248514 DEBUG nova.network.neutron [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:30 compute-0 nova_compute[248510]: 2025-12-13 08:56:30.980 248514 INFO nova.compute.manager [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Took 2.16 seconds to deallocate network for instance.
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.033 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.034 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.189 248514 DEBUG oslo_concurrency.processutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:31 compute-0 ceph-mon[76537]: pgmap v2854: 321 pgs: 321 active+clean; 313 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.488 248514 DEBUG nova.compute.manager [req-a15b0d49-6551-42fc-8944-286091e8485e req-d5802c8b-8947-4d6e-81c5-0054cc74be54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.489 248514 DEBUG oslo_concurrency.lockutils [req-a15b0d49-6551-42fc-8944-286091e8485e req-d5802c8b-8947-4d6e-81c5-0054cc74be54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.489 248514 DEBUG oslo_concurrency.lockutils [req-a15b0d49-6551-42fc-8944-286091e8485e req-d5802c8b-8947-4d6e-81c5-0054cc74be54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.490 248514 DEBUG oslo_concurrency.lockutils [req-a15b0d49-6551-42fc-8944-286091e8485e req-d5802c8b-8947-4d6e-81c5-0054cc74be54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.490 248514 DEBUG nova.compute.manager [req-a15b0d49-6551-42fc-8944-286091e8485e req-d5802c8b-8947-4d6e-81c5-0054cc74be54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] No waiting events found dispatching network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.490 248514 WARNING nova.compute.manager [req-a15b0d49-6551-42fc-8944-286091e8485e req-d5802c8b-8947-4d6e-81c5-0054cc74be54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received unexpected event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b for instance with vm_state deleted and task_state None.
Dec 13 08:56:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:56:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256735896' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.774 248514 DEBUG oslo_concurrency.processutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.780 248514 DEBUG nova.compute.provider_tree [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.801 248514 DEBUG nova.scheduler.client.report [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.827 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.856 248514 INFO nova.scheduler.client.report [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 822fa9f6-0a5d-490e-89d7-446df19a068b
Dec 13 08:56:31 compute-0 nova_compute[248510]: 2025-12-13 08:56:31.943 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:31 compute-0 podman[367510]: 2025-12-13 08:56:31.986702683 +0000 UTC m=+0.065175865 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 13 08:56:31 compute-0 podman[367509]: 2025-12-13 08:56:31.993262198 +0000 UTC m=+0.078929810 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec 13 08:56:32 compute-0 podman[367508]: 2025-12-13 08:56:32.02488093 +0000 UTC m=+0.110753047 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 08:56:32 compute-0 nova_compute[248510]: 2025-12-13 08:56:32.072 248514 DEBUG nova.compute.manager [req-2f39ce16-f544-4cb0-9798-0d09273ed305 req-0cefeaaa-030b-4713-a939-1546cac6b1a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-vif-deleted-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2855: 321 pgs: 321 active+clean; 313 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1001 KiB/s rd, 1.6 MiB/s wr, 124 op/s
Dec 13 08:56:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1256735896' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:32 compute-0 nova_compute[248510]: 2025-12-13 08:56:32.638 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:32 compute-0 nova_compute[248510]: 2025-12-13 08:56:32.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:33 compute-0 ceph-mon[76537]: pgmap v2855: 321 pgs: 321 active+clean; 313 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1001 KiB/s rd, 1.6 MiB/s wr, 124 op/s
Dec 13 08:56:33 compute-0 nova_compute[248510]: 2025-12-13 08:56:33.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2856: 321 pgs: 321 active+clean; 279 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 187 op/s
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.365 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.365 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.366 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.366 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.367 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.368 248514 INFO nova.compute.manager [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Terminating instance
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.369 248514 DEBUG nova.compute.manager [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:56:34 compute-0 kernel: tap59834f67-f8 (unregistering): left promiscuous mode
Dec 13 08:56:34 compute-0 NetworkManager[50376]: <info>  [1765616194.4103] device (tap59834f67-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:56:34 compute-0 ovn_controller[148476]: 2025-12-13T08:56:34Z|01174|binding|INFO|Releasing lport 59834f67-f81d-41bf-9bec-95eea737178e from this chassis (sb_readonly=0)
Dec 13 08:56:34 compute-0 ovn_controller[148476]: 2025-12-13T08:56:34Z|01175|binding|INFO|Setting lport 59834f67-f81d-41bf-9bec-95eea737178e down in Southbound
Dec 13 08:56:34 compute-0 ovn_controller[148476]: 2025-12-13T08:56:34Z|01176|binding|INFO|Removing iface tap59834f67-f8 ovn-installed in OVS
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.419 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.424 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:dc:ee 10.100.0.7'], port_security=['fa:16:3e:fc:dc:ee 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2dacd79d-d668-430f-89e3-bd607a8298ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49b9fdf8-f095-49d6-8a3e-6b41045e0020 daf1c258-d9fc-43cc-a960-fdfffc57ef37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ef6794c-20c0-44ef-9932-95bf5c168e3e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=59834f67-f81d-41bf-9bec-95eea737178e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.425 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 59834f67-f81d-41bf-9bec-95eea737178e in datapath d62e4a11-9334-4dbd-978f-dcabebeb9f79 unbound from our chassis
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.427 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d62e4a11-9334-4dbd-978f-dcabebeb9f79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.429 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb082c0-d088-47ae-8cb2-1daf66647c8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.430 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79 namespace which is not needed anymore
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.443 248514 DEBUG nova.compute.manager [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-changed-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.444 248514 DEBUG nova.compute.manager [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Refreshing instance network info cache due to event network-changed-59834f67-f81d-41bf-9bec-95eea737178e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.444 248514 DEBUG oslo_concurrency.lockutils [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.444 248514 DEBUG oslo_concurrency.lockutils [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.444 248514 DEBUG nova.network.neutron [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Refreshing network info cache for port 59834f67-f81d-41bf-9bec-95eea737178e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:56:34 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000074.scope: Deactivated successfully.
Dec 13 08:56:34 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000074.scope: Consumed 15.634s CPU time.
Dec 13 08:56:34 compute-0 systemd-machined[210538]: Machine qemu-143-instance-00000074 terminated.
Dec 13 08:56:34 compute-0 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [NOTICE]   (364730) : haproxy version is 2.8.14-c23fe91
Dec 13 08:56:34 compute-0 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [NOTICE]   (364730) : path to executable is /usr/sbin/haproxy
Dec 13 08:56:34 compute-0 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [WARNING]  (364730) : Exiting Master process...
Dec 13 08:56:34 compute-0 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [WARNING]  (364730) : Exiting Master process...
Dec 13 08:56:34 compute-0 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [ALERT]    (364730) : Current worker (364747) exited with code 143 (Terminated)
Dec 13 08:56:34 compute-0 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [WARNING]  (364730) : All workers exited. Exiting... (0)
Dec 13 08:56:34 compute-0 systemd[1]: libpod-88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55.scope: Deactivated successfully.
Dec 13 08:56:34 compute-0 podman[367595]: 2025-12-13 08:56:34.572423894 +0000 UTC m=+0.044785594 container died 88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:56:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55-userdata-shm.mount: Deactivated successfully.
Dec 13 08:56:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e03dea7b690d3db119f67c61ad462e71c1d2570616b62b014daca58c0b50d9d7-merged.mount: Deactivated successfully.
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.607 248514 INFO nova.virt.libvirt.driver [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Instance destroyed successfully.
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.609 248514 DEBUG nova.objects.instance [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 2dacd79d-d668-430f-89e3-bd607a8298ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:56:34 compute-0 podman[367595]: 2025-12-13 08:56:34.619926674 +0000 UTC m=+0.092288374 container cleanup 88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 08:56:34 compute-0 systemd[1]: libpod-conmon-88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55.scope: Deactivated successfully.
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.634 248514 DEBUG nova.virt.libvirt.vif [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=116,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHu0mZ4KdPQNIQDmStfb8oGr9IxxGZIxLFalN/4uGlZNxfEdmbl3D4ueCINh2ZhW4F33mK6Ev46I8YbLOH/wB4QDYG50l143kirQCN41tCbmF+vCIghQWMs2Kzj6YH+pg==',key_name='tempest-TestSecurityGroupsBasicOps-398328978',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-9i8zv62t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:24Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=2dacd79d-d668-430f-89e3-bd607a8298ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.635 248514 DEBUG nova.network.os_vif_util [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.636 248514 DEBUG nova.network.os_vif_util [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.636 248514 DEBUG os_vif [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.638 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.638 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59834f67-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.645 248514 INFO os_vif [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8')
Dec 13 08:56:34 compute-0 podman[367634]: 2025-12-13 08:56:34.700503525 +0000 UTC m=+0.056236461 container remove 88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.706 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[651e509e-7425-4da2-b191-f15b62deb68e]: (4, ('Sat Dec 13 08:56:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79 (88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55)\n88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55\nSat Dec 13 08:56:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79 (88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55)\n88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.708 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0644a04-28f5-45ee-8f16-5362a8cc465b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.709 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62e4a11-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:34 compute-0 kernel: tapd62e4a11-90: left promiscuous mode
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.712 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.730 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42020c84-b825-47bf-860c-80c4eab66f3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.755 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c2dbe79a-ddd9-4f85-b94f-5cce3415ff1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.757 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[15e30005-a46b-4a76-9b80-eb54e9d9910c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.778 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[02c044ec-1e9d-4ac6-8528-db2a11449a95]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869972, 'reachable_time': 27229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367667, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:34 compute-0 systemd[1]: run-netns-ovnmeta\x2dd62e4a11\x2d9334\x2d4dbd\x2d978f\x2ddcabebeb9f79.mount: Deactivated successfully.
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.784 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:56:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.784 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[f4802944-b25e-4a27-b45a-1eaffc259bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.947 248514 INFO nova.virt.libvirt.driver [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Deleting instance files /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba_del
Dec 13 08:56:34 compute-0 nova_compute[248510]: 2025-12-13 08:56:34.948 248514 INFO nova.virt.libvirt.driver [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Deletion of /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba_del complete
Dec 13 08:56:35 compute-0 nova_compute[248510]: 2025-12-13 08:56:35.005 248514 INFO nova.compute.manager [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Took 0.64 seconds to destroy the instance on the hypervisor.
Dec 13 08:56:35 compute-0 nova_compute[248510]: 2025-12-13 08:56:35.006 248514 DEBUG oslo.service.loopingcall [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:56:35 compute-0 nova_compute[248510]: 2025-12-13 08:56:35.007 248514 DEBUG nova.compute.manager [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:56:35 compute-0 nova_compute[248510]: 2025-12-13 08:56:35.007 248514 DEBUG nova.network.neutron [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:56:35 compute-0 ceph-mon[76537]: pgmap v2856: 321 pgs: 321 active+clean; 279 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 187 op/s
Dec 13 08:56:35 compute-0 nova_compute[248510]: 2025-12-13 08:56:35.476 248514 DEBUG nova.compute.manager [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-changed-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:35 compute-0 nova_compute[248510]: 2025-12-13 08:56:35.477 248514 DEBUG nova.compute.manager [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Refreshing instance network info cache due to event network-changed-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:56:35 compute-0 nova_compute[248510]: 2025-12-13 08:56:35.477 248514 DEBUG oslo_concurrency.lockutils [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:56:35 compute-0 nova_compute[248510]: 2025-12-13 08:56:35.477 248514 DEBUG oslo_concurrency.lockutils [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:56:35 compute-0 nova_compute[248510]: 2025-12-13 08:56:35.478 248514 DEBUG nova.network.neutron [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Refreshing network info cache for port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:56:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:56:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2857: 321 pgs: 321 active+clean; 279 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 107 op/s
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.600 248514 DEBUG nova.network.neutron [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.626 248514 INFO nova.compute.manager [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Took 1.62 seconds to deallocate network for instance.
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.651 248514 DEBUG nova.compute.manager [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-vif-unplugged-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.651 248514 DEBUG oslo_concurrency.lockutils [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.652 248514 DEBUG oslo_concurrency.lockutils [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.652 248514 DEBUG oslo_concurrency.lockutils [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.652 248514 DEBUG nova.compute.manager [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] No waiting events found dispatching network-vif-unplugged-59834f67-f81d-41bf-9bec-95eea737178e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.653 248514 DEBUG nova.compute.manager [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-vif-unplugged-59834f67-f81d-41bf-9bec-95eea737178e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.653 248514 DEBUG nova.compute.manager [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.653 248514 DEBUG oslo_concurrency.lockutils [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.653 248514 DEBUG oslo_concurrency.lockutils [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.654 248514 DEBUG oslo_concurrency.lockutils [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.654 248514 DEBUG nova.compute.manager [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] No waiting events found dispatching network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.654 248514 WARNING nova.compute.manager [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received unexpected event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e for instance with vm_state active and task_state deleting.
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.691 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.692 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:36 compute-0 nova_compute[248510]: 2025-12-13 08:56:36.789 248514 DEBUG oslo_concurrency.processutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:37 compute-0 nova_compute[248510]: 2025-12-13 08:56:37.347 248514 DEBUG nova.network.neutron [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updated VIF entry in instance network info cache for port 59834f67-f81d-41bf-9bec-95eea737178e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:56:37 compute-0 nova_compute[248510]: 2025-12-13 08:56:37.348 248514 DEBUG nova.network.neutron [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updating instance_info_cache with network_info: [{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:37 compute-0 nova_compute[248510]: 2025-12-13 08:56:37.370 248514 DEBUG oslo_concurrency.lockutils [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:37 compute-0 ceph-mon[76537]: pgmap v2857: 321 pgs: 321 active+clean; 279 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 107 op/s
Dec 13 08:56:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:56:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040174884' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:37 compute-0 nova_compute[248510]: 2025-12-13 08:56:37.504 248514 DEBUG oslo_concurrency.processutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.715s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:37 compute-0 nova_compute[248510]: 2025-12-13 08:56:37.511 248514 DEBUG nova.compute.provider_tree [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:56:37 compute-0 nova_compute[248510]: 2025-12-13 08:56:37.532 248514 DEBUG nova.scheduler.client.report [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:56:37 compute-0 nova_compute[248510]: 2025-12-13 08:56:37.555 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:37 compute-0 nova_compute[248510]: 2025-12-13 08:56:37.588 248514 INFO nova.scheduler.client.report [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 2dacd79d-d668-430f-89e3-bd607a8298ba
Dec 13 08:56:37 compute-0 nova_compute[248510]: 2025-12-13 08:56:37.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:37 compute-0 nova_compute[248510]: 2025-12-13 08:56:37.680 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:38 compute-0 nova_compute[248510]: 2025-12-13 08:56:38.115 248514 DEBUG nova.network.neutron [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updated VIF entry in instance network info cache for port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:56:38 compute-0 nova_compute[248510]: 2025-12-13 08:56:38.116 248514 DEBUG nova.network.neutron [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updating instance_info_cache with network_info: [{"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:38 compute-0 nova_compute[248510]: 2025-12-13 08:56:38.141 248514 DEBUG oslo_concurrency.lockutils [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2858: 321 pgs: 321 active+clean; 254 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 109 op/s
Dec 13 08:56:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4040174884' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.406917) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616198407013, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2100, "num_deletes": 253, "total_data_size": 3514785, "memory_usage": 3569656, "flush_reason": "Manual Compaction"}
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616198433529, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3416669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54562, "largest_seqno": 56661, "table_properties": {"data_size": 3407175, "index_size": 5923, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19871, "raw_average_key_size": 20, "raw_value_size": 3388037, "raw_average_value_size": 3482, "num_data_blocks": 261, "num_entries": 973, "num_filter_entries": 973, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615990, "oldest_key_time": 1765615990, "file_creation_time": 1765616198, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 26683 microseconds, and 8788 cpu microseconds.
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.433594) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3416669 bytes OK
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.433626) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.435984) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.436005) EVENT_LOG_v1 {"time_micros": 1765616198436000, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.436036) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3505928, prev total WAL file size 3505928, number of live WAL files 2.
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.437259) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3336KB)], [128(8984KB)]
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616198437385, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 12616999, "oldest_snapshot_seqno": -1}
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7865 keys, 10765031 bytes, temperature: kUnknown
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616198535727, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10765031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10712991, "index_size": 31255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 204557, "raw_average_key_size": 26, "raw_value_size": 10572908, "raw_average_value_size": 1344, "num_data_blocks": 1220, "num_entries": 7865, "num_filter_entries": 7865, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616198, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.536289) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10765031 bytes
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.540263) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.0 rd, 109.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.8 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 8387, records dropped: 522 output_compression: NoCompression
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.540310) EVENT_LOG_v1 {"time_micros": 1765616198540292, "job": 78, "event": "compaction_finished", "compaction_time_micros": 98603, "compaction_time_cpu_micros": 45573, "output_level": 6, "num_output_files": 1, "total_output_size": 10765031, "num_input_records": 8387, "num_output_records": 7865, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616198541122, "job": 78, "event": "table_file_deletion", "file_number": 130}
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616198542816, "job": 78, "event": "table_file_deletion", "file_number": 128}
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.437057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.542911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.542918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.542920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.542922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:56:38 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.542923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:56:38 compute-0 nova_compute[248510]: 2025-12-13 08:56:38.750 248514 DEBUG nova.compute.manager [req-35aca36c-bcf0-486f-b844-c648bb40189e req-63675479-6bbe-411e-93eb-a0fc4de7572e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-vif-deleted-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:38 compute-0 nova_compute[248510]: 2025-12-13 08:56:38.751 248514 INFO nova.compute.manager [req-35aca36c-bcf0-486f-b844-c648bb40189e req-63675479-6bbe-411e-93eb-a0fc4de7572e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Neutron deleted interface 59834f67-f81d-41bf-9bec-95eea737178e; detaching it from the instance and deleting it from the info cache
Dec 13 08:56:38 compute-0 nova_compute[248510]: 2025-12-13 08:56:38.751 248514 DEBUG nova.network.neutron [req-35aca36c-bcf0-486f-b844-c648bb40189e req-63675479-6bbe-411e-93eb-a0fc4de7572e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Dec 13 08:56:38 compute-0 nova_compute[248510]: 2025-12-13 08:56:38.755 248514 DEBUG nova.compute.manager [req-35aca36c-bcf0-486f-b844-c648bb40189e req-63675479-6bbe-411e-93eb-a0fc4de7572e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Detach interface failed, port_id=59834f67-f81d-41bf-9bec-95eea737178e, reason: Instance 2dacd79d-d668-430f-89e3-bd607a8298ba could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 08:56:39 compute-0 ceph-mon[76537]: pgmap v2858: 321 pgs: 321 active+clean; 254 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 109 op/s
Dec 13 08:56:39 compute-0 nova_compute[248510]: 2025-12-13 08:56:39.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:56:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:56:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:56:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:56:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:56:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:56:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2859: 321 pgs: 321 active+clean; 205 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 315 KiB/s wr, 143 op/s
Dec 13 08:56:40 compute-0 ovn_controller[148476]: 2025-12-13T08:56:40Z|00142|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.9
Dec 13 08:56:40 compute-0 ovn_controller[148476]: 2025-12-13T08:56:40Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:da:e5:67 10.100.0.9
Dec 13 08:56:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:56:41 compute-0 ceph-mon[76537]: pgmap v2859: 321 pgs: 321 active+clean; 205 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 315 KiB/s wr, 143 op/s
Dec 13 08:56:41 compute-0 nova_compute[248510]: 2025-12-13 08:56:41.999 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.000 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.037 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.124 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.125 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.134 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.135 248514 INFO nova.compute.claims [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:56:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2860: 321 pgs: 321 active+clean; 205 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 289 KiB/s wr, 110 op/s
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.333 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:42 compute-0 ovn_controller[148476]: 2025-12-13T08:56:42Z|01177|binding|INFO|Releasing lport 47f45749-b232-4d0c-bf37-be042ea606c8 from this chassis (sb_readonly=0)
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.581 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:56:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:56:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3922969887' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.919 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.925 248514 DEBUG nova.compute.provider_tree [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.945 248514 DEBUG nova.scheduler.client.report [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.993 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:42 compute-0 nova_compute[248510]: 2025-12-13 08:56:42.994 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.045 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.046 248514 DEBUG nova.network.neutron [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.084 248514 INFO nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.117 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.249 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.251 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.251 248514 INFO nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Creating image(s)
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.272 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.292 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.311 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.314 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.386 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.387 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.388 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.389 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.416 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.419 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.470 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616188.4623442, 822fa9f6-0a5d-490e-89d7-446df19a068b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.471 248514 INFO nova.compute.manager [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] VM Stopped (Lifecycle Event)
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.495 248514 DEBUG nova.compute.manager [None req-ddd280ba-695e-4b8a-a320-f809ae1eb37f - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:43 compute-0 nova_compute[248510]: 2025-12-13 08:56:43.699 248514 DEBUG nova.policy [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:56:43 compute-0 ceph-mon[76537]: pgmap v2860: 321 pgs: 321 active+clean; 205 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 289 KiB/s wr, 110 op/s
Dec 13 08:56:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3922969887' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:56:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2861: 321 pgs: 321 active+clean; 214 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 502 KiB/s wr, 143 op/s
Dec 13 08:56:44 compute-0 nova_compute[248510]: 2025-12-13 08:56:44.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:45 compute-0 sudo[367808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:56:45 compute-0 sudo[367808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:56:45 compute-0 sudo[367808]: pam_unix(sudo:session): session closed for user root
Dec 13 08:56:45 compute-0 sudo[367833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:56:45 compute-0 sudo[367833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:56:45 compute-0 ovn_controller[148476]: 2025-12-13T08:56:45Z|00144|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.9
Dec 13 08:56:45 compute-0 ovn_controller[148476]: 2025-12-13T08:56:45Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:da:e5:67 10.100.0.9
Dec 13 08:56:45 compute-0 nova_compute[248510]: 2025-12-13 08:56:45.444 248514 DEBUG nova.network.neutron [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Successfully created port: 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:56:45 compute-0 ovn_controller[148476]: 2025-12-13T08:56:45Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:e5:67 10.100.0.9
Dec 13 08:56:45 compute-0 ovn_controller[148476]: 2025-12-13T08:56:45Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:e5:67 10.100.0.9
Dec 13 08:56:45 compute-0 sudo[367833]: pam_unix(sudo:session): session closed for user root
Dec 13 08:56:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 08:56:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 08:56:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:56:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:56:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:56:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:56:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:56:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:56:45 compute-0 ceph-mon[76537]: pgmap v2861: 321 pgs: 321 active+clean; 214 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 502 KiB/s wr, 143 op/s
Dec 13 08:56:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:56:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:56:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:56:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:56:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:56:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:56:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:56:45 compute-0 sudo[367890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:56:45 compute-0 sudo[367890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:56:45 compute-0 sudo[367890]: pam_unix(sudo:session): session closed for user root
Dec 13 08:56:45 compute-0 nova_compute[248510]: 2025-12-13 08:56:45.964 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:46 compute-0 sudo[367915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:56:46 compute-0 sudo[367915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:56:46 compute-0 nova_compute[248510]: 2025-12-13 08:56:46.036 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:56:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2862: 321 pgs: 321 active+clean; 214 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 80 op/s
Dec 13 08:56:46 compute-0 podman[368007]: 2025-12-13 08:56:46.335754091 +0000 UTC m=+0.086619073 container create 00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 08:56:46 compute-0 podman[368007]: 2025-12-13 08:56:46.273239113 +0000 UTC m=+0.024104115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:56:46 compute-0 systemd[1]: Started libpod-conmon-00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d.scope.
Dec 13 08:56:46 compute-0 nova_compute[248510]: 2025-12-13 08:56:46.377 248514 DEBUG nova.objects.instance [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:56:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:56:46 compute-0 nova_compute[248510]: 2025-12-13 08:56:46.408 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:56:46 compute-0 nova_compute[248510]: 2025-12-13 08:56:46.408 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Ensure instance console log exists: /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:56:46 compute-0 nova_compute[248510]: 2025-12-13 08:56:46.409 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:46 compute-0 nova_compute[248510]: 2025-12-13 08:56:46.409 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:46 compute-0 nova_compute[248510]: 2025-12-13 08:56:46.410 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:46 compute-0 podman[368007]: 2025-12-13 08:56:46.420441534 +0000 UTC m=+0.171306536 container init 00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_lewin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 08:56:46 compute-0 podman[368007]: 2025-12-13 08:56:46.426904266 +0000 UTC m=+0.177769248 container start 00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:56:46 compute-0 elastic_lewin[368041]: 167 167
Dec 13 08:56:46 compute-0 systemd[1]: libpod-00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d.scope: Deactivated successfully.
Dec 13 08:56:46 compute-0 podman[368007]: 2025-12-13 08:56:46.454858177 +0000 UTC m=+0.205723189 container attach 00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_lewin, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:56:46 compute-0 podman[368007]: 2025-12-13 08:56:46.456357804 +0000 UTC m=+0.207222826 container died 00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 08:56:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b02bdb34ab92482b536d9671858fb222b9f14ee24de846db1f46c1492104f69e-merged.mount: Deactivated successfully.
Dec 13 08:56:46 compute-0 podman[368007]: 2025-12-13 08:56:46.498686416 +0000 UTC m=+0.249551398 container remove 00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_lewin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 08:56:46 compute-0 systemd[1]: libpod-conmon-00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d.scope: Deactivated successfully.
Dec 13 08:56:46 compute-0 podman[368067]: 2025-12-13 08:56:46.676851593 +0000 UTC m=+0.044337213 container create b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bartik, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:56:46 compute-0 systemd[1]: Started libpod-conmon-b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4.scope.
Dec 13 08:56:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191ca3c02a556ddf5d36405a9836f5cf8ca4cf1c74bb168277a68a5ad3ac393/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191ca3c02a556ddf5d36405a9836f5cf8ca4cf1c74bb168277a68a5ad3ac393/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191ca3c02a556ddf5d36405a9836f5cf8ca4cf1c74bb168277a68a5ad3ac393/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191ca3c02a556ddf5d36405a9836f5cf8ca4cf1c74bb168277a68a5ad3ac393/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191ca3c02a556ddf5d36405a9836f5cf8ca4cf1c74bb168277a68a5ad3ac393/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:46 compute-0 podman[368067]: 2025-12-13 08:56:46.654336858 +0000 UTC m=+0.021822498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:56:46 compute-0 podman[368067]: 2025-12-13 08:56:46.75770877 +0000 UTC m=+0.125194410 container init b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 08:56:46 compute-0 podman[368067]: 2025-12-13 08:56:46.768027359 +0000 UTC m=+0.135512979 container start b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bartik, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 08:56:46 compute-0 podman[368067]: 2025-12-13 08:56:46.773168068 +0000 UTC m=+0.140653708 container attach b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bartik, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 08:56:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 08:56:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:56:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:56:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:56:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:56:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:56:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:56:46 compute-0 ceph-mon[76537]: pgmap v2862: 321 pgs: 321 active+clean; 214 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 80 op/s
Dec 13 08:56:47 compute-0 intelligent_bartik[368083]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:56:47 compute-0 intelligent_bartik[368083]: --> All data devices are unavailable
Dec 13 08:56:47 compute-0 systemd[1]: libpod-b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4.scope: Deactivated successfully.
Dec 13 08:56:47 compute-0 podman[368067]: 2025-12-13 08:56:47.329514007 +0000 UTC m=+0.696999627 container died b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bartik, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:56:47 compute-0 nova_compute[248510]: 2025-12-13 08:56:47.337 248514 DEBUG nova.network.neutron [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Successfully updated port: 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:56:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8191ca3c02a556ddf5d36405a9836f5cf8ca4cf1c74bb168277a68a5ad3ac393-merged.mount: Deactivated successfully.
Dec 13 08:56:47 compute-0 nova_compute[248510]: 2025-12-13 08:56:47.365 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:56:47 compute-0 nova_compute[248510]: 2025-12-13 08:56:47.366 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:56:47 compute-0 nova_compute[248510]: 2025-12-13 08:56:47.366 248514 DEBUG nova.network.neutron [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:56:47 compute-0 podman[368067]: 2025-12-13 08:56:47.379412988 +0000 UTC m=+0.746898608 container remove b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 08:56:47 compute-0 systemd[1]: libpod-conmon-b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4.scope: Deactivated successfully.
Dec 13 08:56:47 compute-0 sudo[367915]: pam_unix(sudo:session): session closed for user root
Dec 13 08:56:47 compute-0 sudo[368114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:56:47 compute-0 sudo[368114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:56:47 compute-0 sudo[368114]: pam_unix(sudo:session): session closed for user root
Dec 13 08:56:47 compute-0 nova_compute[248510]: 2025-12-13 08:56:47.570 248514 DEBUG nova.compute.manager [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-changed-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:47 compute-0 sudo[368139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:56:47 compute-0 nova_compute[248510]: 2025-12-13 08:56:47.571 248514 DEBUG nova.compute.manager [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Refreshing instance network info cache due to event network-changed-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:56:47 compute-0 nova_compute[248510]: 2025-12-13 08:56:47.571 248514 DEBUG oslo_concurrency.lockutils [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:56:47 compute-0 sudo[368139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:56:47 compute-0 nova_compute[248510]: 2025-12-13 08:56:47.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:47 compute-0 nova_compute[248510]: 2025-12-13 08:56:47.675 248514 DEBUG nova.network.neutron [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:56:47 compute-0 podman[368176]: 2025-12-13 08:56:47.885241241 +0000 UTC m=+0.035920122 container create 11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 08:56:47 compute-0 systemd[1]: Started libpod-conmon-11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e.scope.
Dec 13 08:56:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:56:47 compute-0 podman[368176]: 2025-12-13 08:56:47.963230666 +0000 UTC m=+0.113909587 container init 11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mclaren, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 08:56:47 compute-0 podman[368176]: 2025-12-13 08:56:47.870369058 +0000 UTC m=+0.021047969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:56:47 compute-0 podman[368176]: 2025-12-13 08:56:47.969294848 +0000 UTC m=+0.119973729 container start 11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:56:47 compute-0 podman[368176]: 2025-12-13 08:56:47.972871818 +0000 UTC m=+0.123550729 container attach 11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mclaren, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:56:47 compute-0 sad_mclaren[368192]: 167 167
Dec 13 08:56:47 compute-0 systemd[1]: libpod-11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e.scope: Deactivated successfully.
Dec 13 08:56:47 compute-0 podman[368176]: 2025-12-13 08:56:47.975722289 +0000 UTC m=+0.126401180 container died 11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mclaren, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 08:56:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-54b2fe4435aa9e3d863c434d635b6c60aeadf48b5cee5157ae6eb06612bba167-merged.mount: Deactivated successfully.
Dec 13 08:56:48 compute-0 podman[368176]: 2025-12-13 08:56:48.018595544 +0000 UTC m=+0.169274425 container remove 11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mclaren, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 08:56:48 compute-0 systemd[1]: libpod-conmon-11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e.scope: Deactivated successfully.
Dec 13 08:56:48 compute-0 podman[368215]: 2025-12-13 08:56:48.172840222 +0000 UTC m=+0.038217920 container create 477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 08:56:48 compute-0 systemd[1]: Started libpod-conmon-477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d.scope.
Dec 13 08:56:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:56:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dc7a80cd486709c8030905bb11e9b4c963c9bfbb48931b2a3fb24af47c8f0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dc7a80cd486709c8030905bb11e9b4c963c9bfbb48931b2a3fb24af47c8f0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dc7a80cd486709c8030905bb11e9b4c963c9bfbb48931b2a3fb24af47c8f0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dc7a80cd486709c8030905bb11e9b4c963c9bfbb48931b2a3fb24af47c8f0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:48 compute-0 podman[368215]: 2025-12-13 08:56:48.156459351 +0000 UTC m=+0.021837059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:56:48 compute-0 podman[368215]: 2025-12-13 08:56:48.260594872 +0000 UTC m=+0.125972560 container init 477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:56:48 compute-0 podman[368215]: 2025-12-13 08:56:48.268662594 +0000 UTC m=+0.134040282 container start 477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_borg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:56:48 compute-0 podman[368215]: 2025-12-13 08:56:48.273108426 +0000 UTC m=+0.138486134 container attach 477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 08:56:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2863: 321 pgs: 321 active+clean; 241 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 93 op/s
Dec 13 08:56:48 compute-0 interesting_borg[368231]: {
Dec 13 08:56:48 compute-0 interesting_borg[368231]:     "0": [
Dec 13 08:56:48 compute-0 interesting_borg[368231]:         {
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "devices": [
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "/dev/loop3"
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             ],
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_name": "ceph_lv0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_size": "21470642176",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "name": "ceph_lv0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "tags": {
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.cluster_name": "ceph",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.crush_device_class": "",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.encrypted": "0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.objectstore": "bluestore",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.osd_id": "0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.type": "block",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.vdo": "0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.with_tpm": "0"
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             },
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "type": "block",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "vg_name": "ceph_vg0"
Dec 13 08:56:48 compute-0 interesting_borg[368231]:         }
Dec 13 08:56:48 compute-0 interesting_borg[368231]:     ],
Dec 13 08:56:48 compute-0 interesting_borg[368231]:     "1": [
Dec 13 08:56:48 compute-0 interesting_borg[368231]:         {
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "devices": [
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "/dev/loop4"
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             ],
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_name": "ceph_lv1",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_size": "21470642176",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "name": "ceph_lv1",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "tags": {
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.cluster_name": "ceph",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.crush_device_class": "",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.encrypted": "0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.objectstore": "bluestore",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.osd_id": "1",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.type": "block",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.vdo": "0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.with_tpm": "0"
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             },
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "type": "block",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "vg_name": "ceph_vg1"
Dec 13 08:56:48 compute-0 interesting_borg[368231]:         }
Dec 13 08:56:48 compute-0 interesting_borg[368231]:     ],
Dec 13 08:56:48 compute-0 interesting_borg[368231]:     "2": [
Dec 13 08:56:48 compute-0 interesting_borg[368231]:         {
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "devices": [
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "/dev/loop5"
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             ],
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_name": "ceph_lv2",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_size": "21470642176",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "name": "ceph_lv2",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "tags": {
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.cluster_name": "ceph",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.crush_device_class": "",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.encrypted": "0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.objectstore": "bluestore",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.osd_id": "2",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.type": "block",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.vdo": "0",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:                 "ceph.with_tpm": "0"
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             },
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "type": "block",
Dec 13 08:56:48 compute-0 interesting_borg[368231]:             "vg_name": "ceph_vg2"
Dec 13 08:56:48 compute-0 interesting_borg[368231]:         }
Dec 13 08:56:48 compute-0 interesting_borg[368231]:     ]
Dec 13 08:56:48 compute-0 interesting_borg[368231]: }
Dec 13 08:56:48 compute-0 systemd[1]: libpod-477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d.scope: Deactivated successfully.
Dec 13 08:56:48 compute-0 podman[368215]: 2025-12-13 08:56:48.587794386 +0000 UTC m=+0.453172074 container died 477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_borg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:56:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-72dc7a80cd486709c8030905bb11e9b4c963c9bfbb48931b2a3fb24af47c8f0f-merged.mount: Deactivated successfully.
Dec 13 08:56:48 compute-0 podman[368215]: 2025-12-13 08:56:48.643612975 +0000 UTC m=+0.508990663 container remove 477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_borg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 08:56:48 compute-0 systemd[1]: libpod-conmon-477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d.scope: Deactivated successfully.
Dec 13 08:56:48 compute-0 sudo[368139]: pam_unix(sudo:session): session closed for user root
Dec 13 08:56:48 compute-0 sudo[368252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:56:48 compute-0 sudo[368252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:56:48 compute-0 sudo[368252]: pam_unix(sudo:session): session closed for user root
Dec 13 08:56:48 compute-0 sudo[368277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:56:48 compute-0 sudo[368277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:56:49 compute-0 podman[368312]: 2025-12-13 08:56:49.125713252 +0000 UTC m=+0.038127967 container create 74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_greider, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:56:49 compute-0 systemd[1]: Started libpod-conmon-74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a.scope.
Dec 13 08:56:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:56:49 compute-0 podman[368312]: 2025-12-13 08:56:49.20341085 +0000 UTC m=+0.115825565 container init 74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_greider, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:56:49 compute-0 podman[368312]: 2025-12-13 08:56:49.110253804 +0000 UTC m=+0.022668529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:56:49 compute-0 podman[368312]: 2025-12-13 08:56:49.209370469 +0000 UTC m=+0.121785184 container start 74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_greider, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:56:49 compute-0 podman[368312]: 2025-12-13 08:56:49.213124873 +0000 UTC m=+0.125539578 container attach 74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_greider, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:56:49 compute-0 lucid_greider[368328]: 167 167
Dec 13 08:56:49 compute-0 systemd[1]: libpod-74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a.scope: Deactivated successfully.
Dec 13 08:56:49 compute-0 podman[368312]: 2025-12-13 08:56:49.214643501 +0000 UTC m=+0.127058216 container died 74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_greider, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 08:56:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d41409324cfacffd7f66d59786a9bf61ea68546a13be6ea068a5584181898a7-merged.mount: Deactivated successfully.
Dec 13 08:56:49 compute-0 podman[368312]: 2025-12-13 08:56:49.254200233 +0000 UTC m=+0.166614958 container remove 74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_greider, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 08:56:49 compute-0 systemd[1]: libpod-conmon-74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a.scope: Deactivated successfully.
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.282 248514 DEBUG nova.network.neutron [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updating instance_info_cache with network_info: [{"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.317 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.318 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Instance network_info: |[{"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.318 248514 DEBUG oslo_concurrency.lockutils [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.318 248514 DEBUG nova.network.neutron [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Refreshing network info cache for port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.322 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Start _get_guest_xml network_info=[{"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.328 248514 WARNING nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.334 248514 DEBUG nova.virt.libvirt.host [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.335 248514 DEBUG nova.virt.libvirt.host [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.340 248514 DEBUG nova.virt.libvirt.host [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.341 248514 DEBUG nova.virt.libvirt.host [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.341 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.341 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.342 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.342 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.342 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.342 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.342 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.343 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.343 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.343 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.343 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:56:49 compute-0 ceph-mon[76537]: pgmap v2863: 321 pgs: 321 active+clean; 241 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 93 op/s
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.343 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.346 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:49 compute-0 podman[368352]: 2025-12-13 08:56:49.431272963 +0000 UTC m=+0.045025460 container create 1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 08:56:49 compute-0 systemd[1]: Started libpod-conmon-1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163.scope.
Dec 13 08:56:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5825876dd74443d0a05940844218eb7750298ffe38b1434cf44a3beaca04a82c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5825876dd74443d0a05940844218eb7750298ffe38b1434cf44a3beaca04a82c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5825876dd74443d0a05940844218eb7750298ffe38b1434cf44a3beaca04a82c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5825876dd74443d0a05940844218eb7750298ffe38b1434cf44a3beaca04a82c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:49 compute-0 podman[368352]: 2025-12-13 08:56:49.412193635 +0000 UTC m=+0.025946172 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:56:49 compute-0 podman[368352]: 2025-12-13 08:56:49.512548781 +0000 UTC m=+0.126301298 container init 1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:56:49 compute-0 podman[368352]: 2025-12-13 08:56:49.518935141 +0000 UTC m=+0.132687648 container start 1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:56:49 compute-0 podman[368352]: 2025-12-13 08:56:49.522179722 +0000 UTC m=+0.135932249 container attach 1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.603 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616194.6016686, 2dacd79d-d668-430f-89e3-bd607a8298ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.604 248514 INFO nova.compute.manager [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] VM Stopped (Lifecycle Event)
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.684 248514 DEBUG nova.compute.manager [None req-dc5579ee-1a7d-4b69-b02c-edfb6158fea4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:49 compute-0 sshd-session[368403]: Connection closed by 80.94.92.165 port 42768
Dec 13 08:56:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:56:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3266313267' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.936 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.967 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:49 compute-0 nova_compute[248510]: 2025-12-13 08:56:49.972 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:50 compute-0 lvm[368505]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:56:50 compute-0 lvm[368505]: VG ceph_vg0 finished
Dec 13 08:56:50 compute-0 lvm[368508]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:56:50 compute-0 lvm[368508]: VG ceph_vg1 finished
Dec 13 08:56:50 compute-0 lvm[368510]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:56:50 compute-0 lvm[368510]: VG ceph_vg2 finished
Dec 13 08:56:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2864: 321 pgs: 321 active+clean; 264 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 106 op/s
Dec 13 08:56:50 compute-0 hopeful_goldstine[368371]: {}
Dec 13 08:56:50 compute-0 systemd[1]: libpod-1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163.scope: Deactivated successfully.
Dec 13 08:56:50 compute-0 systemd[1]: libpod-1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163.scope: Consumed 1.274s CPU time.
Dec 13 08:56:50 compute-0 podman[368352]: 2025-12-13 08:56:50.340432258 +0000 UTC m=+0.954184765 container died 1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 08:56:50 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3266313267' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:56:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5825876dd74443d0a05940844218eb7750298ffe38b1434cf44a3beaca04a82c-merged.mount: Deactivated successfully.
Dec 13 08:56:50 compute-0 podman[368352]: 2025-12-13 08:56:50.388907784 +0000 UTC m=+1.002660291 container remove 1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:56:50 compute-0 systemd[1]: libpod-conmon-1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163.scope: Deactivated successfully.
Dec 13 08:56:50 compute-0 sudo[368277]: pam_unix(sudo:session): session closed for user root
Dec 13 08:56:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:56:50 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:56:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:56:50 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:56:50 compute-0 sudo[368525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:56:50 compute-0 sudo[368525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:56:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:56:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3860048685' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:56:50 compute-0 sudo[368525]: pam_unix(sudo:session): session closed for user root
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.537 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.539 248514 DEBUG nova.virt.libvirt.vif [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:56:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1758092435',display_name='tempest-TestNetworkBasicOps-server-1758092435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1758092435',id=121,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDDhA9DlC0XXlTi33TP7442ZgmxgyTTZRyp/5+PtSzz/z4TT06lLY5cCNioPf17m6xj5p3Rza8zGSpra/Ou4pMBK7drw3VX1RTJrfYr/jaVe2RRgvmXLfZfYTeWegMxqwQ==',key_name='tempest-TestNetworkBasicOps-530003057',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-9yn1gbd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:56:43Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.540 248514 DEBUG nova.network.os_vif_util [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.541 248514 DEBUG nova.network.os_vif_util [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.542 248514 DEBUG nova.objects.instance [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.649 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:56:50 compute-0 nova_compute[248510]:   <uuid>1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d</uuid>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   <name>instance-00000079</name>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-1758092435</nova:name>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:56:49</nova:creationTime>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:56:50 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:56:50 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:56:50 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:56:50 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:56:50 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:56:50 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:56:50 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:56:50 compute-0 nova_compute[248510]:         <nova:port uuid="4013e964-3f6f-4aaa-af6d-20b9cb5c2a39">
Dec 13 08:56:50 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <system>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <entry name="serial">1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d</entry>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <entry name="uuid">1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d</entry>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     </system>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   <os>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   </os>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   <features>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   </features>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk">
Dec 13 08:56:50 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:56:50 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk.config">
Dec 13 08:56:50 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       </source>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:56:50 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:7e:a9:be"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <target dev="tap4013e964-3f"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/console.log" append="off"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <video>
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     </video>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:56:50 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:56:50 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:56:50 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:56:50 compute-0 nova_compute[248510]: </domain>
Dec 13 08:56:50 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.650 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Preparing to wait for external event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.650 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.650 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.650 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.651 248514 DEBUG nova.virt.libvirt.vif [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:56:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1758092435',display_name='tempest-TestNetworkBasicOps-server-1758092435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1758092435',id=121,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDDhA9DlC0XXlTi33TP7442ZgmxgyTTZRyp/5+PtSzz/z4TT06lLY5cCNioPf17m6xj5p3Rza8zGSpra/Ou4pMBK7drw3VX1RTJrfYr/jaVe2RRgvmXLfZfYTeWegMxqwQ==',key_name='tempest-TestNetworkBasicOps-530003057',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-9yn1gbd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:56:43Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.651 248514 DEBUG nova.network.os_vif_util [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.652 248514 DEBUG nova.network.os_vif_util [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.652 248514 DEBUG os_vif [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.653 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.653 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.654 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.657 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.658 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4013e964-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.658 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4013e964-3f, col_values=(('external_ids', {'iface-id': '4013e964-3f6f-4aaa-af6d-20b9cb5c2a39', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:a9:be', 'vm-uuid': '1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:50 compute-0 NetworkManager[50376]: <info>  [1765616210.6608] manager: (tap4013e964-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.667 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.668 248514 INFO os_vif [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f')
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.774 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.775 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.775 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:7e:a9:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.776 248514 INFO nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Using config drive
Dec 13 08:56:50 compute-0 nova_compute[248510]: 2025-12-13 08:56:50.807 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:56:51 compute-0 nova_compute[248510]: 2025-12-13 08:56:51.345 248514 INFO nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Creating config drive at /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/disk.config
Dec 13 08:56:51 compute-0 nova_compute[248510]: 2025-12-13 08:56:51.353 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlqlzsdy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:51 compute-0 ceph-mon[76537]: pgmap v2864: 321 pgs: 321 active+clean; 264 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 106 op/s
Dec 13 08:56:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:56:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:56:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3860048685' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:56:51 compute-0 nova_compute[248510]: 2025-12-13 08:56:51.514 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlqlzsdy" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:51 compute-0 nova_compute[248510]: 2025-12-13 08:56:51.536 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:56:51 compute-0 nova_compute[248510]: 2025-12-13 08:56:51.539 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/disk.config 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:56:51 compute-0 nova_compute[248510]: 2025-12-13 08:56:51.684 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/disk.config 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:56:51 compute-0 nova_compute[248510]: 2025-12-13 08:56:51.685 248514 INFO nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Deleting local config drive /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/disk.config because it was imported into RBD.
Dec 13 08:56:51 compute-0 kernel: tap4013e964-3f: entered promiscuous mode
Dec 13 08:56:51 compute-0 NetworkManager[50376]: <info>  [1765616211.7339] manager: (tap4013e964-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/490)
Dec 13 08:56:51 compute-0 ovn_controller[148476]: 2025-12-13T08:56:51Z|01178|binding|INFO|Claiming lport 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 for this chassis.
Dec 13 08:56:51 compute-0 ovn_controller[148476]: 2025-12-13T08:56:51Z|01179|binding|INFO|4013e964-3f6f-4aaa-af6d-20b9cb5c2a39: Claiming fa:16:3e:7e:a9:be 10.100.0.9
Dec 13 08:56:51 compute-0 systemd-udevd[368509]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:56:51 compute-0 nova_compute[248510]: 2025-12-13 08:56:51.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:51 compute-0 NetworkManager[50376]: <info>  [1765616211.7475] device (tap4013e964-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:56:51 compute-0 NetworkManager[50376]: <info>  [1765616211.7482] device (tap4013e964-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:56:51 compute-0 ovn_controller[148476]: 2025-12-13T08:56:51Z|01180|binding|INFO|Setting lport 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 ovn-installed in OVS
Dec 13 08:56:51 compute-0 nova_compute[248510]: 2025-12-13 08:56:51.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:51 compute-0 ovn_controller[148476]: 2025-12-13T08:56:51Z|01181|binding|INFO|Setting lport 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 up in Southbound
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.759 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:a9:be 10.100.0.9'], port_security=['fa:16:3e:7e:a9:be 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c78db00b-677b-4c8b-af80-5bb717876b41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca3b9929-a40c-461d-b2b9-49fd6af07fd3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.761 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 in datapath 793ba3c3-1004-4068-a89a-dc7b4c56fc43 bound to our chassis
Dec 13 08:56:51 compute-0 nova_compute[248510]: 2025-12-13 08:56:51.761 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.763 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 793ba3c3-1004-4068-a89a-dc7b4c56fc43
Dec 13 08:56:51 compute-0 systemd-machined[210538]: New machine qemu-148-instance-00000079.
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.775 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[437ed159-1423-4b38-b76a-e4024186cd6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.776 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap793ba3c3-11 in ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.777 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap793ba3c3-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.777 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d0220d3a-428b-4073-a469-cf31d8a784ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.778 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55072e5f-c4c5-4df1-b3df-0a8cb7b10871]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.790 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4a8107-25ee-4d0f-95e5-f7bbc6891c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 systemd[1]: Started Virtual Machine qemu-148-instance-00000079.
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.813 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d33af3d8-6137-4f59-b0d7-a4154677b838]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.842 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[16231d44-811a-49ca-8884-1f696c40288e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.847 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4b77bf79-7de6-43d3-8b7f-eef95ce735f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 NetworkManager[50376]: <info>  [1765616211.8482] manager: (tap793ba3c3-10): new Veth device (/org/freedesktop/NetworkManager/Devices/491)
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.873 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca5e47c-3d9e-4fb3-8364-6bf4c0acb407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.875 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4537659c-c71a-4e59-8103-132eb1f2f726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 NetworkManager[50376]: <info>  [1765616211.8994] device (tap793ba3c3-10): carrier: link connected
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.904 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a704cbee-8874-4e7c-8384-fee7cd2e687d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.927 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca4457d-dc8e-4d40-8db2-f677edf277c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap793ba3c3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:ed:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 349], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878911, 'reachable_time': 42802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368656, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.943 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e59d8ced-02cd-482d-a107-b9c121bf1c5d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:eddb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878911, 'tstamp': 878911}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368657, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.962 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf56a15-48cc-4a56-bcd0-43086e35d30e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap793ba3c3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:ed:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 349], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878911, 'reachable_time': 42802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368658, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:51 compute-0 nova_compute[248510]: 2025-12-13 08:56:51.991 248514 DEBUG nova.network.neutron [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updated VIF entry in instance network info cache for port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:56:51 compute-0 nova_compute[248510]: 2025-12-13 08:56:51.992 248514 DEBUG nova.network.neutron [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updating instance_info_cache with network_info: [{"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:56:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.999 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef60de09-edd9-4a88-b237-d47229e9ce00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.042 248514 DEBUG oslo_concurrency.lockutils [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.061 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0add1cb0-19c7-4ae8-b399-d060195edaa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.062 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap793ba3c3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.062 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.062 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap793ba3c3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:52 compute-0 NetworkManager[50376]: <info>  [1765616212.0650] manager: (tap793ba3c3-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Dec 13 08:56:52 compute-0 kernel: tap793ba3c3-10: entered promiscuous mode
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.068 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap793ba3c3-10, col_values=(('external_ids', {'iface-id': 'e25ab14d-8bf6-4007-ae4c-085df43b875d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:56:52 compute-0 ovn_controller[148476]: 2025-12-13T08:56:52Z|01182|binding|INFO|Releasing lport e25ab14d-8bf6-4007-ae4c-085df43b875d from this chassis (sb_readonly=0)
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.083 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.086 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/793ba3c3-1004-4068-a89a-dc7b4c56fc43.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/793ba3c3-1004-4068-a89a-dc7b4c56fc43.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.087 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef16ef10-5d26-473a-bc2e-b16c071529a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.088 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-793ba3c3-1004-4068-a89a-dc7b4c56fc43
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/793ba3c3-1004-4068-a89a-dc7b4c56fc43.pid.haproxy
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 793ba3c3-1004-4068-a89a-dc7b4c56fc43
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:56:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.089 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'env', 'PROCESS_TAG=haproxy-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/793ba3c3-1004-4068-a89a-dc7b4c56fc43.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.199 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616212.1988218, 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.199 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] VM Started (Lifecycle Event)
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.272 248514 DEBUG nova.compute.manager [req-0150302b-9299-4ae7-8443-32f4d910d049 req-dcc28e09-55be-41be-a8a9-cafe02804cbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.272 248514 DEBUG oslo_concurrency.lockutils [req-0150302b-9299-4ae7-8443-32f4d910d049 req-dcc28e09-55be-41be-a8a9-cafe02804cbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.273 248514 DEBUG oslo_concurrency.lockutils [req-0150302b-9299-4ae7-8443-32f4d910d049 req-dcc28e09-55be-41be-a8a9-cafe02804cbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.273 248514 DEBUG oslo_concurrency.lockutils [req-0150302b-9299-4ae7-8443-32f4d910d049 req-dcc28e09-55be-41be-a8a9-cafe02804cbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.273 248514 DEBUG nova.compute.manager [req-0150302b-9299-4ae7-8443-32f4d910d049 req-dcc28e09-55be-41be-a8a9-cafe02804cbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Processing event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.274 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:56:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2865: 321 pgs: 321 active+clean; 264 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 765 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.278 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.282 248514 INFO nova.virt.libvirt.driver [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Instance spawned successfully.
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.282 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.294 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.300 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.320 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.321 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.322 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.323 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.324 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.325 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.333 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.334 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616212.1990333, 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.334 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] VM Paused (Lifecycle Event)
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.378 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.382 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616212.277355, 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.383 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] VM Resumed (Lifecycle Event)
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.420 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.424 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.429 248514 INFO nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Took 9.18 seconds to spawn the instance on the hypervisor.
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.430 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.445 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.515 248514 INFO nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Took 10.41 seconds to build instance.
Dec 13 08:56:52 compute-0 podman[368732]: 2025-12-13 08:56:52.530372325 +0000 UTC m=+0.060736874 container create 5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.567 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:52 compute-0 systemd[1]: Started libpod-conmon-5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17.scope.
Dec 13 08:56:52 compute-0 podman[368732]: 2025-12-13 08:56:52.498642849 +0000 UTC m=+0.029007428 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:56:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:56:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad8c437fd7f83a96f7ee83b44aeba75a23697638d72f15b7ac660f7308e1111/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:56:52 compute-0 podman[368732]: 2025-12-13 08:56:52.630961477 +0000 UTC m=+0.161326056 container init 5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 08:56:52 compute-0 podman[368732]: 2025-12-13 08:56:52.636987298 +0000 UTC m=+0.167351847 container start 5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:56:52 compute-0 nova_compute[248510]: 2025-12-13 08:56:52.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:52 compute-0 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [NOTICE]   (368751) : New worker (368753) forked
Dec 13 08:56:52 compute-0 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [NOTICE]   (368751) : Loading success.
Dec 13 08:56:53 compute-0 ceph-mon[76537]: pgmap v2865: 321 pgs: 321 active+clean; 264 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 765 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Dec 13 08:56:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2866: 321 pgs: 321 active+clean; 264 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Dec 13 08:56:54 compute-0 nova_compute[248510]: 2025-12-13 08:56:54.392 248514 DEBUG nova.compute.manager [req-e006aa1d-69c4-4d8c-bd3c-73c8c0e535ba req-0cdcf6dd-00f0-4143-8397-bc33fade5d00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:54 compute-0 nova_compute[248510]: 2025-12-13 08:56:54.392 248514 DEBUG oslo_concurrency.lockutils [req-e006aa1d-69c4-4d8c-bd3c-73c8c0e535ba req-0cdcf6dd-00f0-4143-8397-bc33fade5d00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:54 compute-0 nova_compute[248510]: 2025-12-13 08:56:54.393 248514 DEBUG oslo_concurrency.lockutils [req-e006aa1d-69c4-4d8c-bd3c-73c8c0e535ba req-0cdcf6dd-00f0-4143-8397-bc33fade5d00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:54 compute-0 nova_compute[248510]: 2025-12-13 08:56:54.394 248514 DEBUG oslo_concurrency.lockutils [req-e006aa1d-69c4-4d8c-bd3c-73c8c0e535ba req-0cdcf6dd-00f0-4143-8397-bc33fade5d00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:54 compute-0 nova_compute[248510]: 2025-12-13 08:56:54.394 248514 DEBUG nova.compute.manager [req-e006aa1d-69c4-4d8c-bd3c-73c8c0e535ba req-0cdcf6dd-00f0-4143-8397-bc33fade5d00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] No waiting events found dispatching network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:56:54 compute-0 nova_compute[248510]: 2025-12-13 08:56:54.395 248514 WARNING nova.compute.manager [req-e006aa1d-69c4-4d8c-bd3c-73c8c0e535ba req-0cdcf6dd-00f0-4143-8397-bc33fade5d00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received unexpected event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 for instance with vm_state active and task_state None.
Dec 13 08:56:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:55.434 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:56:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:55.435 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:56:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:56:55.436 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:56:55 compute-0 ceph-mon[76537]: pgmap v2866: 321 pgs: 321 active+clean; 264 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Dec 13 08:56:55 compute-0 nova_compute[248510]: 2025-12-13 08:56:55.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:55 compute-0 nova_compute[248510]: 2025-12-13 08:56:55.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:56:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2867: 321 pgs: 321 active+clean; 264 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Dec 13 08:56:56 compute-0 ceph-mon[76537]: pgmap v2867: 321 pgs: 321 active+clean; 264 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Dec 13 08:56:57 compute-0 nova_compute[248510]: 2025-12-13 08:56:57.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:56:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2868: 321 pgs: 321 active+clean; 264 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Dec 13 08:56:58 compute-0 nova_compute[248510]: 2025-12-13 08:56:58.285 248514 DEBUG nova.compute.manager [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-changed-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:56:58 compute-0 nova_compute[248510]: 2025-12-13 08:56:58.286 248514 DEBUG nova.compute.manager [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Refreshing instance network info cache due to event network-changed-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:56:58 compute-0 nova_compute[248510]: 2025-12-13 08:56:58.286 248514 DEBUG oslo_concurrency.lockutils [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:56:58 compute-0 nova_compute[248510]: 2025-12-13 08:56:58.286 248514 DEBUG oslo_concurrency.lockutils [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:56:58 compute-0 nova_compute[248510]: 2025-12-13 08:56:58.287 248514 DEBUG nova.network.neutron [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Refreshing network info cache for port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:56:59 compute-0 ceph-mon[76537]: pgmap v2868: 321 pgs: 321 active+clean; 264 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Dec 13 08:57:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2869: 321 pgs: 321 active+clean; 268 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 652 KiB/s wr, 94 op/s
Dec 13 08:57:00 compute-0 nova_compute[248510]: 2025-12-13 08:57:00.664 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:57:01 compute-0 nova_compute[248510]: 2025-12-13 08:57:01.001 248514 DEBUG nova.network.neutron [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updated VIF entry in instance network info cache for port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:57:01 compute-0 nova_compute[248510]: 2025-12-13 08:57:01.002 248514 DEBUG nova.network.neutron [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updating instance_info_cache with network_info: [{"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:01 compute-0 nova_compute[248510]: 2025-12-13 08:57:01.052 248514 DEBUG oslo_concurrency.lockutils [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:57:01 compute-0 ceph-mon[76537]: pgmap v2869: 321 pgs: 321 active+clean; 268 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 652 KiB/s wr, 94 op/s
Dec 13 08:57:01 compute-0 nova_compute[248510]: 2025-12-13 08:57:01.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2870: 321 pgs: 321 active+clean; 268 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 208 KiB/s wr, 78 op/s
Dec 13 08:57:02 compute-0 nova_compute[248510]: 2025-12-13 08:57:02.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:02 compute-0 nova_compute[248510]: 2025-12-13 08:57:02.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:57:03 compute-0 podman[368765]: 2025-12-13 08:57:03.072969334 +0000 UTC m=+0.152662607 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 08:57:03 compute-0 podman[368766]: 2025-12-13 08:57:03.073367484 +0000 UTC m=+0.147471877 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:57:03 compute-0 podman[368764]: 2025-12-13 08:57:03.108344392 +0000 UTC m=+0.187434050 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 08:57:03 compute-0 ceph-mon[76537]: pgmap v2870: 321 pgs: 321 active+clean; 268 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 208 KiB/s wr, 78 op/s
Dec 13 08:57:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2871: 321 pgs: 321 active+clean; 278 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 97 op/s
Dec 13 08:57:04 compute-0 ceph-mon[76537]: pgmap v2871: 321 pgs: 321 active+clean; 278 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 97 op/s
Dec 13 08:57:04 compute-0 sshd-session[368762]: Invalid user orangepi from 61.245.11.87 port 41836
Dec 13 08:57:05 compute-0 sshd-session[368762]: Connection closed by invalid user orangepi 61.245.11.87 port 41836 [preauth]
Dec 13 08:57:05 compute-0 ovn_controller[148476]: 2025-12-13T08:57:05Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:a9:be 10.100.0.9
Dec 13 08:57:05 compute-0 ovn_controller[148476]: 2025-12-13T08:57:05Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:a9:be 10.100.0.9
Dec 13 08:57:05 compute-0 nova_compute[248510]: 2025-12-13 08:57:05.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:57:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2872: 321 pgs: 321 active+clean; 278 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 53 op/s
Dec 13 08:57:06 compute-0 nova_compute[248510]: 2025-12-13 08:57:06.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:57:07 compute-0 ceph-mon[76537]: pgmap v2872: 321 pgs: 321 active+clean; 278 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 53 op/s
Dec 13 08:57:07 compute-0 nova_compute[248510]: 2025-12-13 08:57:07.654 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2873: 321 pgs: 321 active+clean; 290 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 89 op/s
Dec 13 08:57:08 compute-0 nova_compute[248510]: 2025-12-13 08:57:08.726 248514 DEBUG nova.compute.manager [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:57:08 compute-0 nova_compute[248510]: 2025-12-13 08:57:08.790 248514 INFO nova.compute.manager [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] instance snapshotting
Dec 13 08:57:09 compute-0 nova_compute[248510]: 2025-12-13 08:57:09.153 248514 INFO nova.virt.libvirt.driver [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Beginning live snapshot process
Dec 13 08:57:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:57:09
Dec 13 08:57:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:57:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:57:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'backups', 'default.rgw.control', 'default.rgw.log', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta']
Dec 13 08:57:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:57:09 compute-0 ceph-mon[76537]: pgmap v2873: 321 pgs: 321 active+clean; 290 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 89 op/s
Dec 13 08:57:09 compute-0 nova_compute[248510]: 2025-12-13 08:57:09.504 248514 DEBUG nova.storage.rbd_utils [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] creating snapshot(adb21fd68fa744dabbb257505c9debf2) on rbd image(4887eb43-1570-49a5-b20e-326af1e84a7b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2874: 321 pgs: 321 active+clean; 300 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 2.3 MiB/s wr, 72 op/s
Dec 13 08:57:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Dec 13 08:57:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Dec 13 08:57:10 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Dec 13 08:57:10 compute-0 nova_compute[248510]: 2025-12-13 08:57:10.491 248514 DEBUG nova.storage.rbd_utils [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] cloning vms/4887eb43-1570-49a5-b20e-326af1e84a7b_disk@adb21fd68fa744dabbb257505c9debf2 to images/67e4474c-b70a-4aca-89b4-597c5be29fd3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 08:57:10 compute-0 nova_compute[248510]: 2025-12-13 08:57:10.600 248514 DEBUG nova.storage.rbd_utils [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] flattening images/67e4474c-b70a-4aca-89b4-597c5be29fd3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 08:57:10 compute-0 nova_compute[248510]: 2025-12-13 08:57:10.670 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:57:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:57:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:57:10 compute-0 nova_compute[248510]: 2025-12-13 08:57:10.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:11 compute-0 nova_compute[248510]: 2025-12-13 08:57:11.269 248514 DEBUG nova.storage.rbd_utils [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] removing snapshot(adb21fd68fa744dabbb257505c9debf2) on rbd image(4887eb43-1570-49a5-b20e-326af1e84a7b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 08:57:11 compute-0 ceph-mon[76537]: pgmap v2874: 321 pgs: 321 active+clean; 300 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 2.3 MiB/s wr, 72 op/s
Dec 13 08:57:11 compute-0 ceph-mon[76537]: osdmap e305: 3 total, 3 up, 3 in
Dec 13 08:57:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Dec 13 08:57:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Dec 13 08:57:11 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Dec 13 08:57:11 compute-0 nova_compute[248510]: 2025-12-13 08:57:11.483 248514 DEBUG nova.storage.rbd_utils [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] creating snapshot(snap) on rbd image(67e4474c-b70a-4aca-89b4-597c5be29fd3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 08:57:11 compute-0 nova_compute[248510]: 2025-12-13 08:57:11.562 248514 INFO nova.compute.manager [None req-7e39d1b3-ccdc-4dd0-bbc2-6a4e83da40e9 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Get console output
Dec 13 08:57:11 compute-0 nova_compute[248510]: 2025-12-13 08:57:11.571 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 08:57:11 compute-0 nova_compute[248510]: 2025-12-13 08:57:11.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:57:11 compute-0 nova_compute[248510]: 2025-12-13 08:57:11.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:57:12 compute-0 nova_compute[248510]: 2025-12-13 08:57:12.058 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:57:12 compute-0 nova_compute[248510]: 2025-12-13 08:57:12.059 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:57:12 compute-0 nova_compute[248510]: 2025-12-13 08:57:12.059 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:57:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2877: 321 pgs: 321 active+clean; 300 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 1.9 MiB/s wr, 74 op/s
Dec 13 08:57:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Dec 13 08:57:12 compute-0 ceph-mon[76537]: osdmap e306: 3 total, 3 up, 3 in
Dec 13 08:57:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Dec 13 08:57:12 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Dec 13 08:57:12 compute-0 nova_compute[248510]: 2025-12-13 08:57:12.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:13 compute-0 ceph-mon[76537]: pgmap v2877: 321 pgs: 321 active+clean; 300 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 1.9 MiB/s wr, 74 op/s
Dec 13 08:57:13 compute-0 ceph-mon[76537]: osdmap e307: 3 total, 3 up, 3 in
Dec 13 08:57:14 compute-0 nova_compute[248510]: 2025-12-13 08:57:14.053 248514 DEBUG nova.compute.manager [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-changed-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:14 compute-0 nova_compute[248510]: 2025-12-13 08:57:14.054 248514 DEBUG nova.compute.manager [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Refreshing instance network info cache due to event network-changed-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:57:14 compute-0 nova_compute[248510]: 2025-12-13 08:57:14.055 248514 DEBUG oslo_concurrency.lockutils [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:57:14 compute-0 nova_compute[248510]: 2025-12-13 08:57:14.055 248514 DEBUG oslo_concurrency.lockutils [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:57:14 compute-0 nova_compute[248510]: 2025-12-13 08:57:14.056 248514 DEBUG nova.network.neutron [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Refreshing network info cache for port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:57:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2879: 321 pgs: 321 active+clean; 402 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 202 op/s
Dec 13 08:57:14 compute-0 ceph-mon[76537]: pgmap v2879: 321 pgs: 321 active+clean; 402 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 202 op/s
Dec 13 08:57:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:57:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2136173477' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:57:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:15.081 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:57:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2136173477' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:57:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:15.084 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.488 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updating instance_info_cache with network_info: [{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.519 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.520 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.520 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.808 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.808 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.808 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.809 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2136173477' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:57:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2136173477' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:57:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:57:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.860 248514 INFO nova.virt.libvirt.driver [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Snapshot image upload complete
Dec 13 08:57:15 compute-0 nova_compute[248510]: 2025-12-13 08:57:15.861 248514 INFO nova.compute.manager [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Took 7.07 seconds to snapshot the instance on the hypervisor.
Dec 13 08:57:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Dec 13 08:57:15 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Dec 13 08:57:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2881: 321 pgs: 321 active+clean; 402 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 180 op/s
Dec 13 08:57:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:57:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3341596593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.454 248514 DEBUG nova.network.neutron [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updated VIF entry in instance network info cache for port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.455 248514 DEBUG nova.network.neutron [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updating instance_info_cache with network_info: [{"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.457 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.483 248514 DEBUG oslo_concurrency.lockutils [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.551 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.551 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.557 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.558 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.562 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.562 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.721 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.723 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2960MB free_disk=59.88791993074119GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.723 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.724 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.817 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 75f348ef-4044-47a1-ba1b-f1b66513450c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.818 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 4887eb43-1570-49a5-b20e-326af1e84a7b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.818 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.819 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.819 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.837 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.856 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.857 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.876 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 08:57:16 compute-0 nova_compute[248510]: 2025-12-13 08:57:16.897 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 08:57:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Dec 13 08:57:17 compute-0 nova_compute[248510]: 2025-12-13 08:57:17.024 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:17.086 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Dec 13 08:57:17 compute-0 ceph-mon[76537]: osdmap e308: 3 total, 3 up, 3 in
Dec 13 08:57:17 compute-0 ceph-mon[76537]: pgmap v2881: 321 pgs: 321 active+clean; 402 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 180 op/s
Dec 13 08:57:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3341596593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:57:17 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Dec 13 08:57:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:57:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1738394164' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:57:17 compute-0 nova_compute[248510]: 2025-12-13 08:57:17.616 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:17 compute-0 nova_compute[248510]: 2025-12-13 08:57:17.622 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:57:17 compute-0 nova_compute[248510]: 2025-12-13 08:57:17.654 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:57:17 compute-0 nova_compute[248510]: 2025-12-13 08:57:17.659 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:17 compute-0 nova_compute[248510]: 2025-12-13 08:57:17.679 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:57:17 compute-0 nova_compute[248510]: 2025-12-13 08:57:17.680 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:18 compute-0 ceph-mon[76537]: osdmap e309: 3 total, 3 up, 3 in
Dec 13 08:57:18 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1738394164' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:57:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2883: 321 pgs: 321 active+clean; 349 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 15 MiB/s wr, 233 op/s
Dec 13 08:57:18 compute-0 nova_compute[248510]: 2025-12-13 08:57:18.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:19 compute-0 ceph-mon[76537]: pgmap v2883: 321 pgs: 321 active+clean; 349 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 15 MiB/s wr, 233 op/s
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.232 248514 DEBUG nova.compute.manager [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-changed-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.233 248514 DEBUG nova.compute.manager [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Refreshing instance network info cache due to event network-changed-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.233 248514 DEBUG oslo_concurrency.lockutils [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.233 248514 DEBUG oslo_concurrency.lockutils [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.234 248514 DEBUG nova.network.neutron [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Refreshing network info cache for port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.348 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.348 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.349 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.349 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.350 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.351 248514 INFO nova.compute.manager [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Terminating instance
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.352 248514 DEBUG nova.compute.manager [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:57:19 compute-0 kernel: tap0b3a1c67-12 (unregistering): left promiscuous mode
Dec 13 08:57:19 compute-0 NetworkManager[50376]: <info>  [1765616239.4059] device (tap0b3a1c67-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:19 compute-0 ovn_controller[148476]: 2025-12-13T08:57:19Z|01183|binding|INFO|Releasing lport 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a from this chassis (sb_readonly=0)
Dec 13 08:57:19 compute-0 ovn_controller[148476]: 2025-12-13T08:57:19Z|01184|binding|INFO|Setting lport 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a down in Southbound
Dec 13 08:57:19 compute-0 ovn_controller[148476]: 2025-12-13T08:57:19Z|01185|binding|INFO|Removing iface tap0b3a1c67-12 ovn-installed in OVS
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.419 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.427 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:e5:67 10.100.0.9'], port_security=['fa:16:3e:da:e5:67 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4887eb43-1570-49a5-b20e-326af1e84a7b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c21c2eb2d0c4465ae562381f358fbd8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e1e2d18-e674-4ee6-b553-a8676e40259f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c30b2fc-21fd-4778-91da-98384d5e05df, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.428 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a in datapath d2ff4cff-54cc-40c6-a486-7e7532c2462b unbound from our chassis
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.430 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2ff4cff-54cc-40c6-a486-7e7532c2462b
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.433 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.449 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4d90637c-a2f1-4190-ad38-657991f08c6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:19 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000078.scope: Deactivated successfully.
Dec 13 08:57:19 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000078.scope: Consumed 15.438s CPU time.
Dec 13 08:57:19 compute-0 systemd-machined[210538]: Machine qemu-147-instance-00000078 terminated.
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.478 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[458bc9d7-e730-45fe-b64a-038842b1089f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.481 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2e40f8eb-3ad5-4cd2-bec5-1a8ab9255bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.507 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf04452-ec5c-4b45-9f97-3f0f28b22b80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.523 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc273975-8cb6-487d-adc8-60f438bbbed5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2ff4cff-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:85:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871447, 'reachable_time': 40005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369027, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.538 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed84153-0743-47ae-a4ad-dcb870d14c46]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd2ff4cff-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 871459, 'tstamp': 871459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369028, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd2ff4cff-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 871463, 'tstamp': 871463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369028, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.539 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2ff4cff-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.545 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2ff4cff-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.545 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.546 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2ff4cff-50, col_values=(('external_ids', {'iface-id': '47f45749-b232-4d0c-bf37-be042ea606c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.546 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.593 248514 INFO nova.virt.libvirt.driver [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Instance destroyed successfully.
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.594 248514 DEBUG nova.objects.instance [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lazy-loading 'resources' on Instance uuid 4887eb43-1570-49a5-b20e-326af1e84a7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.612 248514 DEBUG nova.virt.libvirt.vif [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1968716864',display_name='tempest-TestSnapshotPattern-server-1968716864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1968716864',id=120,image_ref='bc45ce83-2d30-4107-90ce-9a9307d84fab',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMAc1KgPQ7YZO/wn247Z4w7R3iPp09OVvor/iF1+jUHxvgVaItQTtOvsdjy6HTDjQXAsMtHqR3YUaichzeZD01aIcPKJkcVch2R8490ZPNprDBfXeJ2j92c1nlW1ddLVg==',key_name='tempest-TestSnapshotPattern-2059242288',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:56:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6c21c2eb2d0c4465ae562381f358fbd8',ramdisk_id='',reservation_id='r-ejrogks3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='75f348ef-4044-47a1-ba1b-f1b66513450c',image_min_disk='1',image_min_ram='0',image_owner_id='6c21c2eb2d0c4465ae562381f358fbd8',image_owner_project_name='tempest-TestSnapshotPattern-1494512648',image_owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member',image_user_id='81fb01d9d08845c3b626079ab726db7a',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-1494512648',owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:57:15Z,user_data=None,user_id='81fb01d9d08845c3b626079ab726db7a',uuid=4887eb43-1570-49a5-b20e-326af1e84a7b,vcpu_mode
l=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.612 248514 DEBUG nova.network.os_vif_util [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converting VIF {"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.613 248514 DEBUG nova.network.os_vif_util [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.613 248514 DEBUG os_vif [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.615 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b3a1c67-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.621 248514 INFO os_vif [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12')
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.680 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.875 248514 INFO nova.virt.libvirt.driver [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Deleting instance files /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b_del
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.876 248514 INFO nova.virt.libvirt.driver [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Deletion of /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b_del complete
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.955 248514 INFO nova.compute.manager [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Took 0.60 seconds to destroy the instance on the hypervisor.
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.957 248514 DEBUG oslo.service.loopingcall [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.957 248514 DEBUG nova.compute.manager [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:57:19 compute-0 nova_compute[248510]: 2025-12-13 08:57:19.958 248514 DEBUG nova.network.neutron [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:57:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2884: 321 pgs: 321 active+clean; 300 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 11 MiB/s wr, 186 op/s
Dec 13 08:57:20 compute-0 nova_compute[248510]: 2025-12-13 08:57:20.326 248514 DEBUG nova.compute.manager [req-071329f1-1cb9-492c-804d-b652a7fceb07 req-0fb6cb96-587c-4dc2-b593-ce42deab695e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-vif-unplugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:20 compute-0 nova_compute[248510]: 2025-12-13 08:57:20.326 248514 DEBUG oslo_concurrency.lockutils [req-071329f1-1cb9-492c-804d-b652a7fceb07 req-0fb6cb96-587c-4dc2-b593-ce42deab695e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:20 compute-0 nova_compute[248510]: 2025-12-13 08:57:20.327 248514 DEBUG oslo_concurrency.lockutils [req-071329f1-1cb9-492c-804d-b652a7fceb07 req-0fb6cb96-587c-4dc2-b593-ce42deab695e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:20 compute-0 nova_compute[248510]: 2025-12-13 08:57:20.327 248514 DEBUG oslo_concurrency.lockutils [req-071329f1-1cb9-492c-804d-b652a7fceb07 req-0fb6cb96-587c-4dc2-b593-ce42deab695e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:20 compute-0 nova_compute[248510]: 2025-12-13 08:57:20.327 248514 DEBUG nova.compute.manager [req-071329f1-1cb9-492c-804d-b652a7fceb07 req-0fb6cb96-587c-4dc2-b593-ce42deab695e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] No waiting events found dispatching network-vif-unplugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:57:20 compute-0 nova_compute[248510]: 2025-12-13 08:57:20.327 248514 DEBUG nova.compute.manager [req-071329f1-1cb9-492c-804d-b652a7fceb07 req-0fb6cb96-587c-4dc2-b593-ce42deab695e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-vif-unplugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:57:20 compute-0 nova_compute[248510]: 2025-12-13 08:57:20.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:57:20 compute-0 nova_compute[248510]: 2025-12-13 08:57:20.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:57:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:57:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Dec 13 08:57:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Dec 13 08:57:20 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Dec 13 08:57:21 compute-0 nova_compute[248510]: 2025-12-13 08:57:21.004 248514 DEBUG nova.network.neutron [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updated VIF entry in instance network info cache for port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:57:21 compute-0 nova_compute[248510]: 2025-12-13 08:57:21.005 248514 DEBUG nova.network.neutron [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updating instance_info_cache with network_info: [{"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:21 compute-0 nova_compute[248510]: 2025-12-13 08:57:21.032 248514 DEBUG oslo_concurrency.lockutils [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001673713189639438 of space, bias 1.0, pg target 0.5021139568918314 quantized to 32 (current 32)
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014274589188080123 of space, bias 1.0, pg target 0.42823767564240367 quantized to 32 (current 32)
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.734031256454457e-07 of space, bias 4.0, pg target 0.0006880837507745348 quantized to 16 (current 32)
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:57:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:57:21 compute-0 ceph-mon[76537]: pgmap v2884: 321 pgs: 321 active+clean; 300 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 11 MiB/s wr, 186 op/s
Dec 13 08:57:21 compute-0 ceph-mon[76537]: osdmap e310: 3 total, 3 up, 3 in
Dec 13 08:57:21 compute-0 nova_compute[248510]: 2025-12-13 08:57:21.501 248514 DEBUG nova.network.neutron [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:21 compute-0 nova_compute[248510]: 2025-12-13 08:57:21.523 248514 INFO nova.compute.manager [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Took 1.57 seconds to deallocate network for instance.
Dec 13 08:57:21 compute-0 nova_compute[248510]: 2025-12-13 08:57:21.606 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:21 compute-0 nova_compute[248510]: 2025-12-13 08:57:21.606 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2886: 321 pgs: 321 active+clean; 300 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 8.6 KiB/s wr, 62 op/s
Dec 13 08:57:22 compute-0 nova_compute[248510]: 2025-12-13 08:57:22.453 248514 DEBUG nova.compute.manager [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:22 compute-0 nova_compute[248510]: 2025-12-13 08:57:22.454 248514 DEBUG oslo_concurrency.lockutils [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:22 compute-0 nova_compute[248510]: 2025-12-13 08:57:22.454 248514 DEBUG oslo_concurrency.lockutils [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:22 compute-0 nova_compute[248510]: 2025-12-13 08:57:22.454 248514 DEBUG oslo_concurrency.lockutils [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:22 compute-0 nova_compute[248510]: 2025-12-13 08:57:22.454 248514 DEBUG nova.compute.manager [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] No waiting events found dispatching network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:57:22 compute-0 nova_compute[248510]: 2025-12-13 08:57:22.455 248514 WARNING nova.compute.manager [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received unexpected event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a for instance with vm_state deleted and task_state None.
Dec 13 08:57:22 compute-0 nova_compute[248510]: 2025-12-13 08:57:22.455 248514 DEBUG nova.compute.manager [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-vif-deleted-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:22 compute-0 nova_compute[248510]: 2025-12-13 08:57:22.661 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:23 compute-0 nova_compute[248510]: 2025-12-13 08:57:23.019 248514 DEBUG oslo_concurrency.processutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:23 compute-0 ceph-mon[76537]: pgmap v2886: 321 pgs: 321 active+clean; 300 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 8.6 KiB/s wr, 62 op/s
Dec 13 08:57:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:57:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2819567143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:57:23 compute-0 nova_compute[248510]: 2025-12-13 08:57:23.857 248514 DEBUG oslo_concurrency.processutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.838s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:23 compute-0 nova_compute[248510]: 2025-12-13 08:57:23.864 248514 DEBUG nova.compute.provider_tree [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:57:23 compute-0 nova_compute[248510]: 2025-12-13 08:57:23.932 248514 DEBUG nova.scheduler.client.report [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.103 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.108 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.109 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.155 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.161 248514 INFO nova.scheduler.client.report [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Deleted allocations for instance 4887eb43-1570-49a5-b20e-326af1e84a7b
Dec 13 08:57:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2887: 321 pgs: 321 active+clean; 279 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 10 KiB/s wr, 90 op/s
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.497 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.530 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.531 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.541 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.542 248514 INFO nova.compute.claims [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.653 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.708 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:24 compute-0 nova_compute[248510]: 2025-12-13 08:57:24.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:57:24 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2819567143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:57:24 compute-0 ceph-mon[76537]: pgmap v2887: 321 pgs: 321 active+clean; 279 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 10 KiB/s wr, 90 op/s
Dec 13 08:57:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:57:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1554119751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.275 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.282 248514 DEBUG nova.compute.provider_tree [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.305 248514 DEBUG nova.scheduler.client.report [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.341 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.342 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.413 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.414 248514 DEBUG nova.network.neutron [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.441 248514 INFO nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.620 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.721 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.723 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.723 248514 INFO nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Creating image(s)
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.744 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.766 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.790 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.794 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.872 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.873 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.874 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.874 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.897 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:25 compute-0 nova_compute[248510]: 2025-12-13 08:57:25.900 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:26 compute-0 nova_compute[248510]: 2025-12-13 08:57:26.016 248514 DEBUG nova.policy [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:57:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Dec 13 08:57:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1554119751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:57:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Dec 13 08:57:26 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Dec 13 08:57:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2889: 321 pgs: 321 active+clean; 279 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 8.9 KiB/s wr, 46 op/s
Dec 13 08:57:27 compute-0 ceph-mon[76537]: osdmap e311: 3 total, 3 up, 3 in
Dec 13 08:57:27 compute-0 ceph-mon[76537]: pgmap v2889: 321 pgs: 321 active+clean; 279 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 8.9 KiB/s wr, 46 op/s
Dec 13 08:57:27 compute-0 nova_compute[248510]: 2025-12-13 08:57:27.664 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:27 compute-0 nova_compute[248510]: 2025-12-13 08:57:27.934 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:28 compute-0 nova_compute[248510]: 2025-12-13 08:57:28.000 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:57:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2890: 321 pgs: 321 active+clean; 298 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Dec 13 08:57:28 compute-0 nova_compute[248510]: 2025-12-13 08:57:28.389 248514 DEBUG nova.objects.instance [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 5a7142a0-6e82-418c-affe-88fd6beb2ad9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:57:28 compute-0 nova_compute[248510]: 2025-12-13 08:57:28.423 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:57:28 compute-0 nova_compute[248510]: 2025-12-13 08:57:28.424 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Ensure instance console log exists: /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:57:28 compute-0 nova_compute[248510]: 2025-12-13 08:57:28.424 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:28 compute-0 nova_compute[248510]: 2025-12-13 08:57:28.425 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:28 compute-0 nova_compute[248510]: 2025-12-13 08:57:28.425 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:28 compute-0 ceph-mon[76537]: pgmap v2890: 321 pgs: 321 active+clean; 298 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Dec 13 08:57:28 compute-0 nova_compute[248510]: 2025-12-13 08:57:28.771 248514 DEBUG nova.network.neutron [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Successfully created port: 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:57:29 compute-0 nova_compute[248510]: 2025-12-13 08:57:29.655 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2891: 321 pgs: 321 active+clean; 285 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.3 MiB/s wr, 92 op/s
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.519 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.519 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.519 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.520 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.520 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.521 248514 INFO nova.compute.manager [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Terminating instance
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.522 248514 DEBUG nova.compute.manager [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.524 248514 DEBUG nova.network.neutron [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Successfully updated port: 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.541 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.542 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.542 248514 DEBUG nova.network.neutron [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:57:30 compute-0 kernel: tap63a84e8b-c1 (unregistering): left promiscuous mode
Dec 13 08:57:30 compute-0 NetworkManager[50376]: <info>  [1765616250.5765] device (tap63a84e8b-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:57:30 compute-0 ovn_controller[148476]: 2025-12-13T08:57:30Z|01186|binding|INFO|Releasing lport 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed from this chassis (sb_readonly=0)
Dec 13 08:57:30 compute-0 ovn_controller[148476]: 2025-12-13T08:57:30Z|01187|binding|INFO|Setting lport 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed down in Southbound
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:30 compute-0 ovn_controller[148476]: 2025-12-13T08:57:30Z|01188|binding|INFO|Removing iface tap63a84e8b-c1 ovn-installed in OVS
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.599 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:e7:bf 10.100.0.3'], port_security=['fa:16:3e:7d:e7:bf 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '75f348ef-4044-47a1-ba1b-f1b66513450c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c21c2eb2d0c4465ae562381f358fbd8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e1e2d18-e674-4ee6-b553-a8676e40259f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c30b2fc-21fd-4778-91da-98384d5e05df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.600 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed in datapath d2ff4cff-54cc-40c6-a486-7e7532c2462b unbound from our chassis
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.601 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2ff4cff-54cc-40c6-a486-7e7532c2462b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.602 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[47f6db1d-7e87-4c94-a7ab-943594c11fa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.603 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b namespace which is not needed anymore
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.623 248514 DEBUG nova.compute.manager [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-changed-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.623 248514 DEBUG nova.compute.manager [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Refreshing instance network info cache due to event network-changed-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.624 248514 DEBUG oslo_concurrency.lockutils [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.624 248514 DEBUG oslo_concurrency.lockutils [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.624 248514 DEBUG nova.network.neutron [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Refreshing network info cache for port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:57:30 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000076.scope: Deactivated successfully.
Dec 13 08:57:30 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000076.scope: Consumed 17.089s CPU time.
Dec 13 08:57:30 compute-0 systemd-machined[210538]: Machine qemu-145-instance-00000076 terminated.
Dec 13 08:57:30 compute-0 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [NOTICE]   (365455) : haproxy version is 2.8.14-c23fe91
Dec 13 08:57:30 compute-0 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [NOTICE]   (365455) : path to executable is /usr/sbin/haproxy
Dec 13 08:57:30 compute-0 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [WARNING]  (365455) : Exiting Master process...
Dec 13 08:57:30 compute-0 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [ALERT]    (365455) : Current worker (365457) exited with code 143 (Terminated)
Dec 13 08:57:30 compute-0 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [WARNING]  (365455) : All workers exited. Exiting... (0)
Dec 13 08:57:30 compute-0 systemd[1]: libpod-5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e.scope: Deactivated successfully.
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.744 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.747 248514 DEBUG nova.network.neutron [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:57:30 compute-0 podman[369298]: 2025-12-13 08:57:30.749730042 +0000 UTC m=+0.051444380 container died 5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.759 248514 INFO nova.virt.libvirt.driver [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Instance destroyed successfully.
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.760 248514 DEBUG nova.objects.instance [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lazy-loading 'resources' on Instance uuid 75f348ef-4044-47a1-ba1b-f1b66513450c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.780 248514 DEBUG nova.virt.libvirt.vif [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-608841865',display_name='tempest-TestSnapshotPattern-server-608841865',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-608841865',id=118,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMAc1KgPQ7YZO/wn247Z4w7R3iPp09OVvor/iF1+jUHxvgVaItQTtOvsdjy6HTDjQXAsMtHqR3YUaichzeZD01aIcPKJkcVch2R8490ZPNprDBfXeJ2j92c1nlW1ddLVg==',key_name='tempest-TestSnapshotPattern-2059242288',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6c21c2eb2d0c4465ae562381f358fbd8',ramdisk_id='',reservation_id='r-dx9z8wv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1494512648',owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:56:10Z,user_data=None,user_id='81fb01d9d08845c3b626079ab726db7a',uuid=75f348ef-4044-47a1-ba1b-f1b66513450c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.781 248514 DEBUG nova.network.os_vif_util [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converting VIF {"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.782 248514 DEBUG nova.network.os_vif_util [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.782 248514 DEBUG os_vif [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.784 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63a84e8b-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.790 248514 INFO os_vif [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1')
Dec 13 08:57:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e-userdata-shm.mount: Deactivated successfully.
Dec 13 08:57:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fffb7936c063d506819829d9345338322d52c45114e0f00e692ced85a23dd5f4-merged.mount: Deactivated successfully.
Dec 13 08:57:30 compute-0 podman[369298]: 2025-12-13 08:57:30.807877655 +0000 UTC m=+0.109591963 container cleanup 5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:57:30 compute-0 systemd[1]: libpod-conmon-5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e.scope: Deactivated successfully.
Dec 13 08:57:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:57:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Dec 13 08:57:30 compute-0 podman[369353]: 2025-12-13 08:57:30.881053006 +0000 UTC m=+0.049332628 container remove 5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:57:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Dec 13 08:57:30 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.892 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af702c3e-4b37-43dc-b556-9bda2d5d5d5c]: (4, ('Sat Dec 13 08:57:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b (5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e)\n5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e\nSat Dec 13 08:57:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b (5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e)\n5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.894 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e288925a-0433-4556-909e-9e03d7492ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.895 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2ff4cff-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.897 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:30 compute-0 kernel: tapd2ff4cff-50: left promiscuous mode
Dec 13 08:57:30 compute-0 nova_compute[248510]: 2025-12-13 08:57:30.913 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[516a6828-be0d-469d-b3d7-0dae56af2e00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.937 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[085413bd-0f7d-4dda-96ff-85a486b3a239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.938 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9564dc0e-16e4-45f8-93a5-5906ceb09aa2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.958 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2c283a-9017-4e4c-aa48-1d6afd1847dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871438, 'reachable_time': 37993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369372, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:30 compute-0 systemd[1]: run-netns-ovnmeta\x2dd2ff4cff\x2d54cc\x2d40c6\x2da486\x2d7e7532c2462b.mount: Deactivated successfully.
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.962 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:57:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.962 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[f5988209-9b6d-4cf4-8664-11e9fff4e714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:31 compute-0 nova_compute[248510]: 2025-12-13 08:57:31.075 248514 INFO nova.virt.libvirt.driver [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Deleting instance files /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c_del
Dec 13 08:57:31 compute-0 nova_compute[248510]: 2025-12-13 08:57:31.076 248514 INFO nova.virt.libvirt.driver [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Deletion of /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c_del complete
Dec 13 08:57:31 compute-0 nova_compute[248510]: 2025-12-13 08:57:31.133 248514 INFO nova.compute.manager [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Took 0.61 seconds to destroy the instance on the hypervisor.
Dec 13 08:57:31 compute-0 nova_compute[248510]: 2025-12-13 08:57:31.133 248514 DEBUG oslo.service.loopingcall [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:57:31 compute-0 nova_compute[248510]: 2025-12-13 08:57:31.134 248514 DEBUG nova.compute.manager [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:57:31 compute-0 nova_compute[248510]: 2025-12-13 08:57:31.134 248514 DEBUG nova.network.neutron [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:57:31 compute-0 ceph-mon[76537]: pgmap v2891: 321 pgs: 321 active+clean; 285 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.3 MiB/s wr, 92 op/s
Dec 13 08:57:31 compute-0 ceph-mon[76537]: osdmap e312: 3 total, 3 up, 3 in
Dec 13 08:57:31 compute-0 nova_compute[248510]: 2025-12-13 08:57:31.738 248514 DEBUG nova.compute.manager [req-3fe769da-c1e8-475c-8787-5195d6301f28 req-9a48f7bc-a15f-4614-862a-d5ec54fb21bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-vif-unplugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:31 compute-0 nova_compute[248510]: 2025-12-13 08:57:31.738 248514 DEBUG oslo_concurrency.lockutils [req-3fe769da-c1e8-475c-8787-5195d6301f28 req-9a48f7bc-a15f-4614-862a-d5ec54fb21bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:31 compute-0 nova_compute[248510]: 2025-12-13 08:57:31.739 248514 DEBUG oslo_concurrency.lockutils [req-3fe769da-c1e8-475c-8787-5195d6301f28 req-9a48f7bc-a15f-4614-862a-d5ec54fb21bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:31 compute-0 nova_compute[248510]: 2025-12-13 08:57:31.740 248514 DEBUG oslo_concurrency.lockutils [req-3fe769da-c1e8-475c-8787-5195d6301f28 req-9a48f7bc-a15f-4614-862a-d5ec54fb21bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:31 compute-0 nova_compute[248510]: 2025-12-13 08:57:31.740 248514 DEBUG nova.compute.manager [req-3fe769da-c1e8-475c-8787-5195d6301f28 req-9a48f7bc-a15f-4614-862a-d5ec54fb21bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] No waiting events found dispatching network-vif-unplugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:57:31 compute-0 nova_compute[248510]: 2025-12-13 08:57:31.741 248514 DEBUG nova.compute.manager [req-3fe769da-c1e8-475c-8787-5195d6301f28 req-9a48f7bc-a15f-4614-862a-d5ec54fb21bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-vif-unplugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.132 248514 DEBUG nova.network.neutron [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Updating instance_info_cache with network_info: [{"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.223 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.223 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Instance network_info: |[{"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.225 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Start _get_guest_xml network_info=[{"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.229 248514 WARNING nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.234 248514 DEBUG nova.virt.libvirt.host [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.235 248514 DEBUG nova.virt.libvirt.host [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.238 248514 DEBUG nova.virt.libvirt.host [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.238 248514 DEBUG nova.virt.libvirt.host [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.239 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.239 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.240 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.240 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.240 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.240 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.241 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.241 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.241 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.241 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.242 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.242 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.246 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2893: 321 pgs: 321 active+clean; 285 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 2.7 MiB/s wr, 68 op/s
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.665 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.671 248514 DEBUG nova.network.neutron [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.724 248514 INFO nova.compute.manager [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Took 1.59 seconds to deallocate network for instance.
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.787 248514 DEBUG nova.compute.manager [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-changed-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.788 248514 DEBUG nova.compute.manager [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Refreshing instance network info cache due to event network-changed-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.788 248514 DEBUG oslo_concurrency.lockutils [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.789 248514 DEBUG oslo_concurrency.lockutils [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.789 248514 DEBUG nova.network.neutron [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Refreshing network info cache for port 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.793 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.794 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:57:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2919445828' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.820 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.848 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.858 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.908 248514 DEBUG nova.network.neutron [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updated VIF entry in instance network info cache for port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:57:32 compute-0 nova_compute[248510]: 2025-12-13 08:57:32.909 248514 DEBUG nova.network.neutron [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updating instance_info_cache with network_info: [{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.009 248514 DEBUG oslo_concurrency.processutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.188 248514 DEBUG oslo_concurrency.lockutils [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.243 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.244 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.282 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.350 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:33 compute-0 ceph-mon[76537]: pgmap v2893: 321 pgs: 321 active+clean; 285 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 2.7 MiB/s wr, 68 op/s
Dec 13 08:57:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2919445828' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:57:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:57:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3503960443' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.503 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.505 248514 DEBUG nova.virt.libvirt.vif [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-182179822',display_name='tempest-TestNetworkBasicOps-server-182179822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-182179822',id=122,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIyOWHhK5PGqeOlzFwokovmTuf3HTihMwOQrzfuGYU+/TrdkTdWDTQvnoNZ7qiFrzCGlnIvswkbj8TaejN4nwLPFUx3mjtQULdplgXkj1ea+cO+RfMC1iM+NaDk/WgBTsg==',key_name='tempest-TestNetworkBasicOps-1885276769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-ly8zv4h3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:57:25Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5a7142a0-6e82-418c-affe-88fd6beb2ad9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.506 248514 DEBUG nova.network.os_vif_util [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.507 248514 DEBUG nova.network.os_vif_util [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.509 248514 DEBUG nova.objects.instance [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a7142a0-6e82-418c-affe-88fd6beb2ad9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.565 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:57:33 compute-0 nova_compute[248510]:   <uuid>5a7142a0-6e82-418c-affe-88fd6beb2ad9</uuid>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   <name>instance-0000007a</name>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-182179822</nova:name>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:57:32</nova:creationTime>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:57:33 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:57:33 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:57:33 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:57:33 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:57:33 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:57:33 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:57:33 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:57:33 compute-0 nova_compute[248510]:         <nova:port uuid="8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2">
Dec 13 08:57:33 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <system>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <entry name="serial">5a7142a0-6e82-418c-affe-88fd6beb2ad9</entry>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <entry name="uuid">5a7142a0-6e82-418c-affe-88fd6beb2ad9</entry>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     </system>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   <os>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   </os>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   <features>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   </features>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk">
Dec 13 08:57:33 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       </source>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:57:33 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk.config">
Dec 13 08:57:33 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       </source>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:57:33 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:ed:3b:59"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <target dev="tap8ee3e1ae-47"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/console.log" append="off"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <video>
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     </video>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:57:33 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:57:33 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:57:33 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:57:33 compute-0 nova_compute[248510]: </domain>
Dec 13 08:57:33 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.566 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Preparing to wait for external event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.567 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.567 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.567 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.568 248514 DEBUG nova.virt.libvirt.vif [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-182179822',display_name='tempest-TestNetworkBasicOps-server-182179822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-182179822',id=122,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIyOWHhK5PGqeOlzFwokovmTuf3HTihMwOQrzfuGYU+/TrdkTdWDTQvnoNZ7qiFrzCGlnIvswkbj8TaejN4nwLPFUx3mjtQULdplgXkj1ea+cO+RfMC1iM+NaDk/WgBTsg==',key_name='tempest-TestNetworkBasicOps-1885276769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-ly8zv4h3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:57:25Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5a7142a0-6e82-418c-affe-88fd6beb2ad9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.568 248514 DEBUG nova.network.os_vif_util [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.569 248514 DEBUG nova.network.os_vif_util [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.569 248514 DEBUG os_vif [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.570 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.571 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.574 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ee3e1ae-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.574 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ee3e1ae-47, col_values=(('external_ids', {'iface-id': '8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:3b:59', 'vm-uuid': '5a7142a0-6e82-418c-affe-88fd6beb2ad9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:33 compute-0 NetworkManager[50376]: <info>  [1765616253.5893] manager: (tap8ee3e1ae-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/493)
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.590 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.593 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.594 248514 INFO os_vif [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47')
Dec 13 08:57:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:57:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/243712752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.655 248514 DEBUG oslo_concurrency.processutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.664 248514 DEBUG nova.compute.provider_tree [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.675 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.676 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.676 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:ed:3b:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.677 248514 INFO nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Using config drive
Dec 13 08:57:33 compute-0 podman[369461]: 2025-12-13 08:57:33.689216043 +0000 UTC m=+0.060424660 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 08:57:33 compute-0 podman[369460]: 2025-12-13 08:57:33.699401083 +0000 UTC m=+0.067932144 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.707 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.714 248514 DEBUG nova.scheduler.client.report [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:57:33 compute-0 podman[369459]: 2025-12-13 08:57:33.735821644 +0000 UTC m=+0.104809116 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.796 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.798 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.804 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.804 248514 INFO nova.compute.claims [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.862 248514 INFO nova.scheduler.client.report [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Deleted allocations for instance 75f348ef-4044-47a1-ba1b-f1b66513450c
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.974 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:33 compute-0 nova_compute[248510]: 2025-12-13 08:57:33.988 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.141 248514 DEBUG nova.compute.manager [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.142 248514 DEBUG oslo_concurrency.lockutils [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.142 248514 DEBUG oslo_concurrency.lockutils [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.143 248514 DEBUG oslo_concurrency.lockutils [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.143 248514 DEBUG nova.compute.manager [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] No waiting events found dispatching network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.143 248514 WARNING nova.compute.manager [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received unexpected event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed for instance with vm_state deleted and task_state None.
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.144 248514 DEBUG nova.compute.manager [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-vif-deleted-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.144 248514 INFO nova.compute.manager [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Neutron deleted interface 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed; detaching it from the instance and deleting it from the info cache
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.144 248514 DEBUG nova.network.neutron [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.148 248514 DEBUG nova.compute.manager [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Detach interface failed, port_id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed, reason: Instance 75f348ef-4044-47a1-ba1b-f1b66513450c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 08:57:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2894: 321 pgs: 321 active+clean; 167 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 2.7 MiB/s wr, 116 op/s
Dec 13 08:57:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3503960443' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:57:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/243712752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:57:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:57:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554201659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.570 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.578 248514 DEBUG nova.compute.provider_tree [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.590 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616239.5896945, 4887eb43-1570-49a5-b20e-326af1e84a7b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.591 248514 INFO nova.compute.manager [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] VM Stopped (Lifecycle Event)
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.646 248514 DEBUG nova.scheduler.client.report [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.653 248514 DEBUG nova.compute.manager [None req-d4c0beb6-d755-4e5c-aa45-2cff72c0b851 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.907 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:34 compute-0 nova_compute[248510]: 2025-12-13 08:57:34.908 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.158 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.158 248514 DEBUG nova.network.neutron [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.167 248514 INFO nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Creating config drive at /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/disk.config
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.174 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcz02201c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.222 248514 INFO nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.248 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.327 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcz02201c" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.359 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.365 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/disk.config 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.416 248514 DEBUG nova.policy [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '095998dc2eb348e8a90c866d4106cd74', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '30f07149729142048436dbfbb8bf2742', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.419 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.421 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.422 248514 INFO nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Creating image(s)
Dec 13 08:57:35 compute-0 ceph-mon[76537]: pgmap v2894: 321 pgs: 321 active+clean; 167 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 2.7 MiB/s wr, 116 op/s
Dec 13 08:57:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3554201659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.447 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.475 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.505 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.510 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.552 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/disk.config 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.553 248514 INFO nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Deleting local config drive /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/disk.config because it was imported into RBD.
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.592 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.592 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.593 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.593 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:35 compute-0 kernel: tap8ee3e1ae-47: entered promiscuous mode
Dec 13 08:57:35 compute-0 NetworkManager[50376]: <info>  [1765616255.6106] manager: (tap8ee3e1ae-47): new Tun device (/org/freedesktop/NetworkManager/Devices/494)
Dec 13 08:57:35 compute-0 ovn_controller[148476]: 2025-12-13T08:57:35Z|01189|binding|INFO|Claiming lport 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 for this chassis.
Dec 13 08:57:35 compute-0 ovn_controller[148476]: 2025-12-13T08:57:35Z|01190|binding|INFO|8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2: Claiming fa:16:3e:ed:3b:59 10.100.0.13
Dec 13 08:57:35 compute-0 ovn_controller[148476]: 2025-12-13T08:57:35Z|01191|binding|INFO|Setting lport 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 ovn-installed in OVS
Dec 13 08:57:35 compute-0 systemd-udevd[369691]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.642 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:35 compute-0 NetworkManager[50376]: <info>  [1765616255.6538] device (tap8ee3e1ae-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.653 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 393f815d-d124-4aea-98c0-126aed0744bd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:35 compute-0 NetworkManager[50376]: <info>  [1765616255.6545] device (tap8ee3e1ae-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:57:35 compute-0 systemd-machined[210538]: New machine qemu-149-instance-0000007a.
Dec 13 08:57:35 compute-0 systemd[1]: Started Virtual Machine qemu-149-instance-0000007a.
Dec 13 08:57:35 compute-0 ovn_controller[148476]: 2025-12-13T08:57:35Z|01192|binding|INFO|Setting lport 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 up in Southbound
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.685 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:3b:59 10.100.0.13'], port_security=['fa:16:3e:ed:3b:59 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5a7142a0-6e82-418c-affe-88fd6beb2ad9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8944c356-661d-4684-a169-e2ad4b13e098', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca3b9929-a40c-461d-b2b9-49fd6af07fd3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.686 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 in datapath 793ba3c3-1004-4068-a89a-dc7b4c56fc43 bound to our chassis
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.688 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 793ba3c3-1004-4068-a89a-dc7b4c56fc43
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.703 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.708 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[50cec198-324e-4273-aba4-008c28e677b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.759 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7dfbd3-30e0-4f64-ad0b-44090d69e0a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.763 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5a70906e-6a33-439d-9924-42499fc910cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.797 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad28ca3-961f-4d21-af12-288fe4b32614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.823 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f00fd20-af9d-4459-9094-6e3ae1ee4838]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap793ba3c3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:ed:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 349], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878911, 'reachable_time': 42802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369727, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.847 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1759bbd-07e5-4822-a365-b5461c3dd4e1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap793ba3c3-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878923, 'tstamp': 878923}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369728, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap793ba3c3-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878927, 'tstamp': 878927}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369728, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.850 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap793ba3c3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.865 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap793ba3c3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.865 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.866 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap793ba3c3-10, col_values=(('external_ids', {'iface-id': 'e25ab14d-8bf6-4007-ae4c-085df43b875d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.867 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.871 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:57:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.920 248514 DEBUG nova.network.neutron [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Updated VIF entry in instance network info cache for port 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:57:35 compute-0 nova_compute[248510]: 2025-12-13 08:57:35.920 248514 DEBUG nova.network.neutron [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Updating instance_info_cache with network_info: [{"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.009 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 393f815d-d124-4aea-98c0-126aed0744bd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.072 248514 DEBUG oslo_concurrency.lockutils [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.079 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] resizing rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.113 248514 DEBUG nova.compute.manager [req-3ed6942e-a0fa-4c72-8e78-afc39468ba78 req-a3237614-b909-4f77-9069-ef0626cf8247 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.114 248514 DEBUG oslo_concurrency.lockutils [req-3ed6942e-a0fa-4c72-8e78-afc39468ba78 req-a3237614-b909-4f77-9069-ef0626cf8247 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.114 248514 DEBUG oslo_concurrency.lockutils [req-3ed6942e-a0fa-4c72-8e78-afc39468ba78 req-a3237614-b909-4f77-9069-ef0626cf8247 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.115 248514 DEBUG oslo_concurrency.lockutils [req-3ed6942e-a0fa-4c72-8e78-afc39468ba78 req-a3237614-b909-4f77-9069-ef0626cf8247 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.115 248514 DEBUG nova.compute.manager [req-3ed6942e-a0fa-4c72-8e78-afc39468ba78 req-a3237614-b909-4f77-9069-ef0626cf8247 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Processing event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.159 248514 DEBUG nova.objects.instance [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lazy-loading 'migration_context' on Instance uuid 393f815d-d124-4aea-98c0-126aed0744bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.187 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.188 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Ensure instance console log exists: /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.188 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.188 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.189 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2895: 321 pgs: 321 active+clean; 167 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 13 08:57:36 compute-0 ceph-mon[76537]: pgmap v2895: 321 pgs: 321 active+clean; 167 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.703 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.704 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616256.70275, 5a7142a0-6e82-418c-affe-88fd6beb2ad9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.704 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] VM Started (Lifecycle Event)
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.707 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.710 248514 INFO nova.virt.libvirt.driver [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Instance spawned successfully.
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.711 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.731 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.738 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.744 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.745 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.746 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.746 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.746 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.747 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.769 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.769 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616256.703979, 5a7142a0-6e82-418c-affe-88fd6beb2ad9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.769 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] VM Paused (Lifecycle Event)
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.837 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.841 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616256.70753, 5a7142a0-6e82-418c-affe-88fd6beb2ad9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.841 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] VM Resumed (Lifecycle Event)
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.880 248514 INFO nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Took 11.16 seconds to spawn the instance on the hypervisor.
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.880 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.889 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.892 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.945 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.973 248514 INFO nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Took 12.49 seconds to build instance.
Dec 13 08:57:36 compute-0 nova_compute[248510]: 2025-12-13 08:57:36.990 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:37 compute-0 nova_compute[248510]: 2025-12-13 08:57:37.159 248514 DEBUG nova.network.neutron [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Successfully created port: 6b86528d-c1b7-4776-809f-1f8b37569b6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:57:37 compute-0 nova_compute[248510]: 2025-12-13 08:57:37.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2896: 321 pgs: 321 active+clean; 187 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 129 op/s
Dec 13 08:57:38 compute-0 nova_compute[248510]: 2025-12-13 08:57:38.589 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:38 compute-0 nova_compute[248510]: 2025-12-13 08:57:38.712 248514 DEBUG nova.compute.manager [req-3e898b5b-a9e0-4c2f-a250-a312e62bf120 req-7be942bf-9c95-475d-b106-48ef8e16f710 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:38 compute-0 nova_compute[248510]: 2025-12-13 08:57:38.712 248514 DEBUG oslo_concurrency.lockutils [req-3e898b5b-a9e0-4c2f-a250-a312e62bf120 req-7be942bf-9c95-475d-b106-48ef8e16f710 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:38 compute-0 nova_compute[248510]: 2025-12-13 08:57:38.712 248514 DEBUG oslo_concurrency.lockutils [req-3e898b5b-a9e0-4c2f-a250-a312e62bf120 req-7be942bf-9c95-475d-b106-48ef8e16f710 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:38 compute-0 nova_compute[248510]: 2025-12-13 08:57:38.712 248514 DEBUG oslo_concurrency.lockutils [req-3e898b5b-a9e0-4c2f-a250-a312e62bf120 req-7be942bf-9c95-475d-b106-48ef8e16f710 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:38 compute-0 nova_compute[248510]: 2025-12-13 08:57:38.713 248514 DEBUG nova.compute.manager [req-3e898b5b-a9e0-4c2f-a250-a312e62bf120 req-7be942bf-9c95-475d-b106-48ef8e16f710 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] No waiting events found dispatching network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:57:38 compute-0 nova_compute[248510]: 2025-12-13 08:57:38.713 248514 WARNING nova.compute.manager [req-3e898b5b-a9e0-4c2f-a250-a312e62bf120 req-7be942bf-9c95-475d-b106-48ef8e16f710 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received unexpected event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 for instance with vm_state active and task_state None.
Dec 13 08:57:39 compute-0 ceph-mon[76537]: pgmap v2896: 321 pgs: 321 active+clean; 187 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 129 op/s
Dec 13 08:57:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:57:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:57:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:57:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:57:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:57:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:57:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2897: 321 pgs: 321 active+clean; 213 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 160 op/s
Dec 13 08:57:40 compute-0 nova_compute[248510]: 2025-12-13 08:57:40.391 248514 DEBUG nova.network.neutron [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Successfully updated port: 6b86528d-c1b7-4776-809f-1f8b37569b6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:57:40 compute-0 nova_compute[248510]: 2025-12-13 08:57:40.441 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:57:40 compute-0 nova_compute[248510]: 2025-12-13 08:57:40.441 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquired lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:57:40 compute-0 nova_compute[248510]: 2025-12-13 08:57:40.441 248514 DEBUG nova.network.neutron [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:57:40 compute-0 nova_compute[248510]: 2025-12-13 08:57:40.545 248514 DEBUG nova.compute.manager [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-changed-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:40 compute-0 nova_compute[248510]: 2025-12-13 08:57:40.546 248514 DEBUG nova.compute.manager [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Refreshing instance network info cache due to event network-changed-6b86528d-c1b7-4776-809f-1f8b37569b6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:57:40 compute-0 nova_compute[248510]: 2025-12-13 08:57:40.546 248514 DEBUG oslo_concurrency.lockutils [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:57:40 compute-0 nova_compute[248510]: 2025-12-13 08:57:40.770 248514 DEBUG nova.network.neutron [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:57:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:57:41 compute-0 nova_compute[248510]: 2025-12-13 08:57:41.351 248514 DEBUG nova.compute.manager [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-changed-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:41 compute-0 nova_compute[248510]: 2025-12-13 08:57:41.351 248514 DEBUG nova.compute.manager [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Refreshing instance network info cache due to event network-changed-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:57:41 compute-0 nova_compute[248510]: 2025-12-13 08:57:41.352 248514 DEBUG oslo_concurrency.lockutils [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:57:41 compute-0 nova_compute[248510]: 2025-12-13 08:57:41.352 248514 DEBUG oslo_concurrency.lockutils [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:57:41 compute-0 nova_compute[248510]: 2025-12-13 08:57:41.352 248514 DEBUG nova.network.neutron [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Refreshing network info cache for port 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:57:41 compute-0 ceph-mon[76537]: pgmap v2897: 321 pgs: 321 active+clean; 213 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 160 op/s
Dec 13 08:57:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2898: 321 pgs: 321 active+clean; 213 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 140 op/s
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.451 248514 DEBUG nova.network.neutron [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Updating instance_info_cache with network_info: [{"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.670 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.740 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Releasing lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.741 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Instance network_info: |[{"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.741 248514 DEBUG oslo_concurrency.lockutils [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.741 248514 DEBUG nova.network.neutron [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Refreshing network info cache for port 6b86528d-c1b7-4776-809f-1f8b37569b6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.745 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Start _get_guest_xml network_info=[{"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.750 248514 WARNING nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.757 248514 DEBUG nova.virt.libvirt.host [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.758 248514 DEBUG nova.virt.libvirt.host [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.763 248514 DEBUG nova.virt.libvirt.host [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.763 248514 DEBUG nova.virt.libvirt.host [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.763 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.763 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.764 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.764 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.764 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.764 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.764 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.765 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.765 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.765 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.765 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.765 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:57:42 compute-0 nova_compute[248510]: 2025-12-13 08:57:42.768 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:57:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/547996344' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:57:43 compute-0 nova_compute[248510]: 2025-12-13 08:57:43.375 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:43 compute-0 ceph-mon[76537]: pgmap v2898: 321 pgs: 321 active+clean; 213 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 140 op/s
Dec 13 08:57:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/547996344' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:57:43 compute-0 nova_compute[248510]: 2025-12-13 08:57:43.408 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:43 compute-0 nova_compute[248510]: 2025-12-13 08:57:43.413 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:43 compute-0 nova_compute[248510]: 2025-12-13 08:57:43.592 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:43 compute-0 nova_compute[248510]: 2025-12-13 08:57:43.864 248514 DEBUG nova.network.neutron [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Updated VIF entry in instance network info cache for port 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:57:43 compute-0 nova_compute[248510]: 2025-12-13 08:57:43.864 248514 DEBUG nova.network.neutron [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Updating instance_info_cache with network_info: [{"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:43 compute-0 nova_compute[248510]: 2025-12-13 08:57:43.894 248514 DEBUG oslo_concurrency.lockutils [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:57:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:57:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321302842' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.019 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.021 248514 DEBUG nova.virt.libvirt.vif [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:57:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1358982584',display_name='tempest-TestServerBasicOps-server-1358982584',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1358982584',id=123,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLfROCcHYxf8uvMCUymzKleNDXyw2tmzRbGzC//Zjm3WjKvoC/StL1LDTb6SGUu3s96IMOyEDfcMq8YnCwGooxmPETH1rv9aB87uxpvFNJYtgkaVxxoxt7V/jUmpT5M6g==',key_name='tempest-TestServerBasicOps-1980425304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30f07149729142048436dbfbb8bf2742',ramdisk_id='',reservation_id='r-sw1yv1h4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1870679034',owner_user_name='tempest-TestServerBasicOps-1870679034-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:57:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='095998dc2eb348e8a90c866d4106cd74',uuid=393f815d-d124-4aea-98c0-126aed0744bd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.021 248514 DEBUG nova.network.os_vif_util [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Converting VIF {"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.022 248514 DEBUG nova.network.os_vif_util [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.024 248514 DEBUG nova.objects.instance [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lazy-loading 'pci_devices' on Instance uuid 393f815d-d124-4aea-98c0-126aed0744bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.116 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:57:44 compute-0 nova_compute[248510]:   <uuid>393f815d-d124-4aea-98c0-126aed0744bd</uuid>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   <name>instance-0000007b</name>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <nova:name>tempest-TestServerBasicOps-server-1358982584</nova:name>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:57:42</nova:creationTime>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:57:44 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:57:44 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:57:44 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:57:44 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:57:44 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:57:44 compute-0 nova_compute[248510]:         <nova:user uuid="095998dc2eb348e8a90c866d4106cd74">tempest-TestServerBasicOps-1870679034-project-member</nova:user>
Dec 13 08:57:44 compute-0 nova_compute[248510]:         <nova:project uuid="30f07149729142048436dbfbb8bf2742">tempest-TestServerBasicOps-1870679034</nova:project>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:57:44 compute-0 nova_compute[248510]:         <nova:port uuid="6b86528d-c1b7-4776-809f-1f8b37569b6f">
Dec 13 08:57:44 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <system>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <entry name="serial">393f815d-d124-4aea-98c0-126aed0744bd</entry>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <entry name="uuid">393f815d-d124-4aea-98c0-126aed0744bd</entry>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     </system>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   <os>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   </os>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   <features>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   </features>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/393f815d-d124-4aea-98c0-126aed0744bd_disk">
Dec 13 08:57:44 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       </source>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:57:44 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/393f815d-d124-4aea-98c0-126aed0744bd_disk.config">
Dec 13 08:57:44 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       </source>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:57:44 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:05:d0:ff"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <target dev="tap6b86528d-c1"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/console.log" append="off"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <video>
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     </video>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:57:44 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:57:44 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:57:44 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:57:44 compute-0 nova_compute[248510]: </domain>
Dec 13 08:57:44 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.117 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Preparing to wait for external event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.118 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.118 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.118 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.119 248514 DEBUG nova.virt.libvirt.vif [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:57:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1358982584',display_name='tempest-TestServerBasicOps-server-1358982584',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1358982584',id=123,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLfROCcHYxf8uvMCUymzKleNDXyw2tmzRbGzC//Zjm3WjKvoC/StL1LDTb6SGUu3s96IMOyEDfcMq8YnCwGooxmPETH1rv9aB87uxpvFNJYtgkaVxxoxt7V/jUmpT5M6g==',key_name='tempest-TestServerBasicOps-1980425304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30f07149729142048436dbfbb8bf2742',ramdisk_id='',reservation_id='r-sw1yv1h4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1870679034',owner_user_name='tempest-TestServerBasicOps-1870679034-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:57:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='095998dc2eb348e8a90c866d4106cd74',uuid=393f815d-d124-4aea-98c0-126aed0744bd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.119 248514 DEBUG nova.network.os_vif_util [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Converting VIF {"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.120 248514 DEBUG nova.network.os_vif_util [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.121 248514 DEBUG os_vif [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.121 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.122 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.122 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.125 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.125 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b86528d-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.126 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b86528d-c1, col_values=(('external_ids', {'iface-id': '6b86528d-c1b7-4776-809f-1f8b37569b6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:d0:ff', 'vm-uuid': '393f815d-d124-4aea-98c0-126aed0744bd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:44 compute-0 NetworkManager[50376]: <info>  [1765616264.1295] manager: (tap6b86528d-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/495)
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.135 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.135 248514 INFO os_vif [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1')
Dec 13 08:57:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2899: 321 pgs: 321 active+clean; 213 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Dec 13 08:57:44 compute-0 ovn_controller[148476]: 2025-12-13T08:57:44Z|01193|binding|INFO|Releasing lport e25ab14d-8bf6-4007-ae4c-085df43b875d from this chassis (sb_readonly=0)
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.349 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.350 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.350 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] No VIF found with MAC fa:16:3e:05:d0:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.351 248514 INFO nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Using config drive
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.376 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3321302842' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.405 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.758 248514 DEBUG nova.network.neutron [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Updated VIF entry in instance network info cache for port 6b86528d-c1b7-4776-809f-1f8b37569b6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.759 248514 DEBUG nova.network.neutron [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Updating instance_info_cache with network_info: [{"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:44 compute-0 nova_compute[248510]: 2025-12-13 08:57:44.861 248514 DEBUG oslo_concurrency.lockutils [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.070 248514 INFO nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Creating config drive at /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/disk.config
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.077 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ykuwpmu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.235 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ykuwpmu" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.263 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.268 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/disk.config 393f815d-d124-4aea-98c0-126aed0744bd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:57:45 compute-0 ceph-mon[76537]: pgmap v2899: 321 pgs: 321 active+clean; 213 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.665 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/disk.config 393f815d-d124-4aea-98c0-126aed0744bd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.666 248514 INFO nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Deleting local config drive /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/disk.config because it was imported into RBD.
Dec 13 08:57:45 compute-0 NetworkManager[50376]: <info>  [1765616265.7391] manager: (tap6b86528d-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/496)
Dec 13 08:57:45 compute-0 kernel: tap6b86528d-c1: entered promiscuous mode
Dec 13 08:57:45 compute-0 ovn_controller[148476]: 2025-12-13T08:57:45Z|01194|binding|INFO|Claiming lport 6b86528d-c1b7-4776-809f-1f8b37569b6f for this chassis.
Dec 13 08:57:45 compute-0 ovn_controller[148476]: 2025-12-13T08:57:45Z|01195|binding|INFO|6b86528d-c1b7-4776-809f-1f8b37569b6f: Claiming fa:16:3e:05:d0:ff 10.100.0.11
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.744 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.757 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616250.7558303, 75f348ef-4044-47a1-ba1b-f1b66513450c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.757 248514 INFO nova.compute.manager [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] VM Stopped (Lifecycle Event)
Dec 13 08:57:45 compute-0 ovn_controller[148476]: 2025-12-13T08:57:45Z|01196|binding|INFO|Setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f ovn-installed in OVS
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.765 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:45 compute-0 systemd-udevd[369976]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:45 compute-0 NetworkManager[50376]: <info>  [1765616265.7836] device (tap6b86528d-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:57:45 compute-0 NetworkManager[50376]: <info>  [1765616265.7845] device (tap6b86528d-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:57:45 compute-0 systemd-machined[210538]: New machine qemu-150-instance-0000007b.
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.797 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:d0:ff 10.100.0.11'], port_security=['fa:16:3e:05:d0:ff 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '393f815d-d124-4aea-98c0-126aed0744bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30f07149729142048436dbfbb8bf2742', 'neutron:revision_number': '2', 'neutron:security_group_ids': '339be054-74c6-402b-b71a-0e37379fa825 59357d23-9217-43e7-99ef-d4349c0de8ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77d611ab-66b1-4445-8d3d-978ee23b13d9, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b86528d-c1b7-4776-809f-1f8b37569b6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.799 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b86528d-c1b7-4776-809f-1f8b37569b6f in datapath b9992fbc-9a6f-4e82-84b9-be47eb5816aa bound to our chassis
Dec 13 08:57:45 compute-0 ovn_controller[148476]: 2025-12-13T08:57:45Z|01197|binding|INFO|Setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f up in Southbound
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.801 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9992fbc-9a6f-4e82-84b9-be47eb5816aa
Dec 13 08:57:45 compute-0 systemd[1]: Started Virtual Machine qemu-150-instance-0000007b.
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.823 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4873d8d3-3e4d-4582-87f9-c24f27a7514d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.824 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb9992fbc-91 in ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.827 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb9992fbc-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.827 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cdc183-ba5f-4762-a621-dd82bbedbf5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.828 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0ece53-afe9-4f31-9e03-53b669f0b885]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:45 compute-0 nova_compute[248510]: 2025-12-13 08:57:45.828 248514 DEBUG nova.compute.manager [None req-23bba313-05ce-4fce-b9aa-7ffb333b8af7 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.842 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7aff8ab5-76b3-4dfc-bb8f-19b820ace7fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.860 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6baa6887-5a77-4eb6-a8b1-1fa74c828160]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.899 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[01e3c8cd-4a42-4eb8-95dd-6c2f4cf9762f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:45 compute-0 NetworkManager[50376]: <info>  [1765616265.9103] manager: (tapb9992fbc-90): new Veth device (/org/freedesktop/NetworkManager/Devices/497)
Dec 13 08:57:45 compute-0 systemd-udevd[369981]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.911 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[99ca4dcd-593c-4ae1-bfdb-679fe58569cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.964 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ec39f191-a42e-43c5-8dae-9fb7a5d1bf59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.968 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2ea67d-40ac-4970-a428-a1177b478bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:46 compute-0 NetworkManager[50376]: <info>  [1765616265.9998] device (tapb9992fbc-90): carrier: link connected
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.007 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7f4f13e8-ffac-4c0e-836f-a83d65c99d14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.031 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e755666a-7969-47cf-a53e-4789560b67ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9992fbc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:30:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 354], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884321, 'reachable_time': 37298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370012, 'error': None, 'target': 'ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.055 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f30d4ff-7c11-4ed8-a989-4b04330572a9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9c:3026'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 884321, 'tstamp': 884321}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370013, 'error': None, 'target': 'ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.077 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf5e387-8a8a-4957-91a2-634f839a5a4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9992fbc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:30:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 354], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884321, 'reachable_time': 37298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370014, 'error': None, 'target': 'ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.114 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d27f23b8-d266-4850-8444-2c8cc80249a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:46 compute-0 nova_compute[248510]: 2025-12-13 08:57:46.222 248514 DEBUG nova.compute.manager [req-19ad9ed2-3bba-4417-b256-0a0d33e1ac38 req-58e7c1af-fa81-403c-aef9-f5b284904a3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:46 compute-0 nova_compute[248510]: 2025-12-13 08:57:46.223 248514 DEBUG oslo_concurrency.lockutils [req-19ad9ed2-3bba-4417-b256-0a0d33e1ac38 req-58e7c1af-fa81-403c-aef9-f5b284904a3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:46 compute-0 nova_compute[248510]: 2025-12-13 08:57:46.223 248514 DEBUG oslo_concurrency.lockutils [req-19ad9ed2-3bba-4417-b256-0a0d33e1ac38 req-58e7c1af-fa81-403c-aef9-f5b284904a3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:46 compute-0 nova_compute[248510]: 2025-12-13 08:57:46.223 248514 DEBUG oslo_concurrency.lockutils [req-19ad9ed2-3bba-4417-b256-0a0d33e1ac38 req-58e7c1af-fa81-403c-aef9-f5b284904a3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:46 compute-0 nova_compute[248510]: 2025-12-13 08:57:46.223 248514 DEBUG nova.compute.manager [req-19ad9ed2-3bba-4417-b256-0a0d33e1ac38 req-58e7c1af-fa81-403c-aef9-f5b284904a3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Processing event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.230 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1636b00-5b40-45c9-be74-4a427f01a03a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.232 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9992fbc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.233 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.234 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9992fbc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:46 compute-0 nova_compute[248510]: 2025-12-13 08:57:46.235 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:46 compute-0 NetworkManager[50376]: <info>  [1765616266.2364] manager: (tapb9992fbc-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Dec 13 08:57:46 compute-0 kernel: tapb9992fbc-90: entered promiscuous mode
Dec 13 08:57:46 compute-0 nova_compute[248510]: 2025-12-13 08:57:46.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.241 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9992fbc-90, col_values=(('external_ids', {'iface-id': 'a5f80d22-8b41-40d9-b8ff-ddee53f45af0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:46 compute-0 nova_compute[248510]: 2025-12-13 08:57:46.242 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:46 compute-0 nova_compute[248510]: 2025-12-13 08:57:46.243 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:46 compute-0 ovn_controller[148476]: 2025-12-13T08:57:46Z|01198|binding|INFO|Releasing lport a5f80d22-8b41-40d9-b8ff-ddee53f45af0 from this chassis (sb_readonly=0)
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.244 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b9992fbc-9a6f-4e82-84b9-be47eb5816aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b9992fbc-9a6f-4e82-84b9-be47eb5816aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.246 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eda56a37-1132-45ef-8486-d27188a6f348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.246 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-b9992fbc-9a6f-4e82-84b9-be47eb5816aa
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/b9992fbc-9a6f-4e82-84b9-be47eb5816aa.pid.haproxy
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID b9992fbc-9a6f-4e82-84b9-be47eb5816aa
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:57:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.249 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'env', 'PROCESS_TAG=haproxy-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b9992fbc-9a6f-4e82-84b9-be47eb5816aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:57:46 compute-0 nova_compute[248510]: 2025-12-13 08:57:46.259 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2900: 321 pgs: 321 active+clean; 213 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 13 08:57:46 compute-0 ceph-mon[76537]: pgmap v2900: 321 pgs: 321 active+clean; 213 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 13 08:57:46 compute-0 podman[370046]: 2025-12-13 08:57:46.686931246 +0000 UTC m=+0.061654490 container create 852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 08:57:46 compute-0 systemd[1]: Started libpod-conmon-852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5.scope.
Dec 13 08:57:46 compute-0 podman[370046]: 2025-12-13 08:57:46.653797985 +0000 UTC m=+0.028521299 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:57:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5948b445140de39f593e110d7d860347d27dfb64a06ca5afb4dbd7d1143ccfb3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:46 compute-0 podman[370046]: 2025-12-13 08:57:46.7876212 +0000 UTC m=+0.162344484 container init 852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:57:46 compute-0 podman[370046]: 2025-12-13 08:57:46.793017503 +0000 UTC m=+0.167740757 container start 852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:57:46 compute-0 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [NOTICE]   (370065) : New worker (370067) forked
Dec 13 08:57:46 compute-0 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [NOTICE]   (370065) : Loading success.
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.065 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616267.0644548, 393f815d-d124-4aea-98c0-126aed0744bd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.065 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] VM Started (Lifecycle Event)
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.068 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.072 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.076 248514 INFO nova.virt.libvirt.driver [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Instance spawned successfully.
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.076 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.102 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.109 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.110 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.110 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.111 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.111 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.112 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.117 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.145 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.146 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616267.0647352, 393f815d-d124-4aea-98c0-126aed0744bd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.146 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] VM Paused (Lifecycle Event)
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.180 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.185 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616267.0710046, 393f815d-d124-4aea-98c0-126aed0744bd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.185 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] VM Resumed (Lifecycle Event)
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.194 248514 INFO nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Took 11.77 seconds to spawn the instance on the hypervisor.
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.195 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.206 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.210 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.246 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.286 248514 INFO nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Took 13.95 seconds to build instance.
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.307 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:47 compute-0 nova_compute[248510]: 2025-12-13 08:57:47.673 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:48 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Dec 13 08:57:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2901: 321 pgs: 321 active+clean; 217 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Dec 13 08:57:48 compute-0 nova_compute[248510]: 2025-12-13 08:57:48.680 248514 DEBUG nova.compute.manager [req-01728757-5cb4-4be1-ad7f-73300ade60a6 req-c9d40c10-d8e6-4a9b-a8c0-da91d3dd626a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:48 compute-0 nova_compute[248510]: 2025-12-13 08:57:48.681 248514 DEBUG oslo_concurrency.lockutils [req-01728757-5cb4-4be1-ad7f-73300ade60a6 req-c9d40c10-d8e6-4a9b-a8c0-da91d3dd626a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:48 compute-0 nova_compute[248510]: 2025-12-13 08:57:48.681 248514 DEBUG oslo_concurrency.lockutils [req-01728757-5cb4-4be1-ad7f-73300ade60a6 req-c9d40c10-d8e6-4a9b-a8c0-da91d3dd626a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:48 compute-0 nova_compute[248510]: 2025-12-13 08:57:48.681 248514 DEBUG oslo_concurrency.lockutils [req-01728757-5cb4-4be1-ad7f-73300ade60a6 req-c9d40c10-d8e6-4a9b-a8c0-da91d3dd626a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:48 compute-0 nova_compute[248510]: 2025-12-13 08:57:48.681 248514 DEBUG nova.compute.manager [req-01728757-5cb4-4be1-ad7f-73300ade60a6 req-c9d40c10-d8e6-4a9b-a8c0-da91d3dd626a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] No waiting events found dispatching network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:57:48 compute-0 nova_compute[248510]: 2025-12-13 08:57:48.681 248514 WARNING nova.compute.manager [req-01728757-5cb4-4be1-ad7f-73300ade60a6 req-c9d40c10-d8e6-4a9b-a8c0-da91d3dd626a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received unexpected event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f for instance with vm_state active and task_state None.
Dec 13 08:57:48 compute-0 ovn_controller[148476]: 2025-12-13T08:57:48Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ed:3b:59 10.100.0.13
Dec 13 08:57:48 compute-0 ovn_controller[148476]: 2025-12-13T08:57:48Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ed:3b:59 10.100.0.13
Dec 13 08:57:49 compute-0 nova_compute[248510]: 2025-12-13 08:57:49.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:49 compute-0 ovn_controller[148476]: 2025-12-13T08:57:49Z|01199|binding|INFO|Releasing lport a5f80d22-8b41-40d9-b8ff-ddee53f45af0 from this chassis (sb_readonly=0)
Dec 13 08:57:49 compute-0 ovn_controller[148476]: 2025-12-13T08:57:49Z|01200|binding|INFO|Releasing lport e25ab14d-8bf6-4007-ae4c-085df43b875d from this chassis (sb_readonly=0)
Dec 13 08:57:49 compute-0 nova_compute[248510]: 2025-12-13 08:57:49.309 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:49 compute-0 ceph-mon[76537]: pgmap v2901: 321 pgs: 321 active+clean; 217 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Dec 13 08:57:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2902: 321 pgs: 321 active+clean; 227 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 134 op/s
Dec 13 08:57:50 compute-0 sudo[370118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:57:50 compute-0 sudo[370118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:57:50 compute-0 sudo[370118]: pam_unix(sudo:session): session closed for user root
Dec 13 08:57:50 compute-0 sudo[370143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 08:57:50 compute-0 sudo[370143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:57:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.895388) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616270895545, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 937, "num_deletes": 257, "total_data_size": 1287404, "memory_usage": 1312152, "flush_reason": "Manual Compaction"}
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616270908279, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 1265071, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56662, "largest_seqno": 57598, "table_properties": {"data_size": 1260333, "index_size": 2325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10511, "raw_average_key_size": 19, "raw_value_size": 1250656, "raw_average_value_size": 2346, "num_data_blocks": 103, "num_entries": 533, "num_filter_entries": 533, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616199, "oldest_key_time": 1765616199, "file_creation_time": 1765616270, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 12949 microseconds, and 6433 cpu microseconds.
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.908352) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 1265071 bytes OK
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.908385) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.910674) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.910690) EVENT_LOG_v1 {"time_micros": 1765616270910685, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.910725) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1282796, prev total WAL file size 1282796, number of live WAL files 2.
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.911445) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323631' seq:72057594037927935, type:22 .. '6C6F676D0032353132' seq:0, type:0; will stop at (end)
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(1235KB)], [131(10MB)]
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616270911547, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 12030102, "oldest_snapshot_seqno": -1}
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7868 keys, 11905093 bytes, temperature: kUnknown
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616270997504, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 11905093, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11851258, "index_size": 33050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 205662, "raw_average_key_size": 26, "raw_value_size": 11709368, "raw_average_value_size": 1488, "num_data_blocks": 1294, "num_entries": 7868, "num_filter_entries": 7868, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616270, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:57:50 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.997770) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 11905093 bytes
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.999994) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.8 rd, 138.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 10.3 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(18.9) write-amplify(9.4) OK, records in: 8398, records dropped: 530 output_compression: NoCompression
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:51.000034) EVENT_LOG_v1 {"time_micros": 1765616271000017, "job": 80, "event": "compaction_finished", "compaction_time_micros": 86034, "compaction_time_cpu_micros": 29977, "output_level": 6, "num_output_files": 1, "total_output_size": 11905093, "num_input_records": 8398, "num_output_records": 7868, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616271000589, "job": 80, "event": "table_file_deletion", "file_number": 133}
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616271002924, "job": 80, "event": "table_file_deletion", "file_number": 131}
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.911360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:51.003037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:51.003041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:51.003043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:51.003045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:57:51 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:51.003048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:57:51 compute-0 podman[370211]: 2025-12-13 08:57:51.15362989 +0000 UTC m=+0.071886081 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 08:57:51 compute-0 podman[370211]: 2025-12-13 08:57:51.256225271 +0000 UTC m=+0.174481432 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 08:57:51 compute-0 ceph-mon[76537]: pgmap v2902: 321 pgs: 321 active+clean; 227 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 134 op/s
Dec 13 08:57:51 compute-0 nova_compute[248510]: 2025-12-13 08:57:51.911 248514 DEBUG nova.compute.manager [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-changed-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:51 compute-0 nova_compute[248510]: 2025-12-13 08:57:51.914 248514 DEBUG nova.compute.manager [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Refreshing instance network info cache due to event network-changed-6b86528d-c1b7-4776-809f-1f8b37569b6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:57:51 compute-0 nova_compute[248510]: 2025-12-13 08:57:51.915 248514 DEBUG oslo_concurrency.lockutils [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:57:51 compute-0 nova_compute[248510]: 2025-12-13 08:57:51.916 248514 DEBUG oslo_concurrency.lockutils [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:57:51 compute-0 nova_compute[248510]: 2025-12-13 08:57:51.916 248514 DEBUG nova.network.neutron [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Refreshing network info cache for port 6b86528d-c1b7-4776-809f-1f8b37569b6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:57:52 compute-0 sudo[370143]: pam_unix(sudo:session): session closed for user root
Dec 13 08:57:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:57:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:57:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:57:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:57:52 compute-0 sudo[370392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:57:52 compute-0 sudo[370392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:57:52 compute-0 sudo[370392]: pam_unix(sudo:session): session closed for user root
Dec 13 08:57:52 compute-0 sudo[370417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:57:52 compute-0 sudo[370417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:57:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2903: 321 pgs: 321 active+clean; 227 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 87 op/s
Dec 13 08:57:52 compute-0 nova_compute[248510]: 2025-12-13 08:57:52.676 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:52 compute-0 sudo[370417]: pam_unix(sudo:session): session closed for user root
Dec 13 08:57:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:57:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:57:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:57:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:57:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:57:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:57:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:57:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:57:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:57:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:57:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:57:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:57:52 compute-0 sudo[370473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:57:52 compute-0 sudo[370473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:57:52 compute-0 sudo[370473]: pam_unix(sudo:session): session closed for user root
Dec 13 08:57:53 compute-0 sudo[370498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:57:53 compute-0 sudo[370498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:57:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:57:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:57:53 compute-0 ceph-mon[76537]: pgmap v2903: 321 pgs: 321 active+clean; 227 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 87 op/s
Dec 13 08:57:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:57:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:57:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:57:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:57:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:57:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:57:53 compute-0 podman[370536]: 2025-12-13 08:57:53.284678742 +0000 UTC m=+0.042712066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:57:53 compute-0 podman[370536]: 2025-12-13 08:57:53.434799587 +0000 UTC m=+0.192832911 container create e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:57:53 compute-0 systemd[1]: Started libpod-conmon-e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473.scope.
Dec 13 08:57:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:57:53 compute-0 podman[370536]: 2025-12-13 08:57:53.672156797 +0000 UTC m=+0.430190161 container init e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 08:57:53 compute-0 podman[370536]: 2025-12-13 08:57:53.683412092 +0000 UTC m=+0.441445446 container start e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 08:57:53 compute-0 pedantic_davinci[370552]: 167 167
Dec 13 08:57:53 compute-0 systemd[1]: libpod-e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473.scope: Deactivated successfully.
Dec 13 08:57:53 compute-0 podman[370536]: 2025-12-13 08:57:53.736401619 +0000 UTC m=+0.494434923 container attach e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 13 08:57:53 compute-0 podman[370536]: 2025-12-13 08:57:53.737676651 +0000 UTC m=+0.495709995 container died e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 08:57:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3f0bde0bdc992be0f244f608f9f74c2df85eb3a12a20106be3b62281c1d7dbe-merged.mount: Deactivated successfully.
Dec 13 08:57:54 compute-0 podman[370536]: 2025-12-13 08:57:54.03300792 +0000 UTC m=+0.791041234 container remove e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 08:57:54 compute-0 systemd[1]: libpod-conmon-e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473.scope: Deactivated successfully.
Dec 13 08:57:54 compute-0 nova_compute[248510]: 2025-12-13 08:57:54.146 248514 DEBUG nova.network.neutron [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Updated VIF entry in instance network info cache for port 6b86528d-c1b7-4776-809f-1f8b37569b6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:57:54 compute-0 nova_compute[248510]: 2025-12-13 08:57:54.149 248514 DEBUG nova.network.neutron [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Updating instance_info_cache with network_info: [{"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:57:54 compute-0 nova_compute[248510]: 2025-12-13 08:57:54.152 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:54 compute-0 nova_compute[248510]: 2025-12-13 08:57:54.275 248514 DEBUG oslo_concurrency.lockutils [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:57:54 compute-0 nova_compute[248510]: 2025-12-13 08:57:54.282 248514 INFO nova.compute.manager [None req-b384d5e4-3937-44c0-b52a-ccb10269a664 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Get console output
Dec 13 08:57:54 compute-0 nova_compute[248510]: 2025-12-13 08:57:54.290 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 08:57:54 compute-0 podman[370577]: 2025-12-13 08:57:54.293340682 +0000 UTC m=+0.071807339 container create 77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_newton, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:57:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2904: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 13 08:57:54 compute-0 systemd[1]: Started libpod-conmon-77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b.scope.
Dec 13 08:57:54 compute-0 podman[370577]: 2025-12-13 08:57:54.251930198 +0000 UTC m=+0.030396905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:57:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc86f3c7521091a8e63422aa4806e5e22a753a36b8e7f645623a34f02806960/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc86f3c7521091a8e63422aa4806e5e22a753a36b8e7f645623a34f02806960/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc86f3c7521091a8e63422aa4806e5e22a753a36b8e7f645623a34f02806960/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc86f3c7521091a8e63422aa4806e5e22a753a36b8e7f645623a34f02806960/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc86f3c7521091a8e63422aa4806e5e22a753a36b8e7f645623a34f02806960/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:54 compute-0 podman[370577]: 2025-12-13 08:57:54.590615839 +0000 UTC m=+0.369082536 container init 77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_newton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:57:54 compute-0 podman[370577]: 2025-12-13 08:57:54.603920734 +0000 UTC m=+0.382387391 container start 77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_newton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:57:54 compute-0 podman[370577]: 2025-12-13 08:57:54.607875121 +0000 UTC m=+0.386341838 container attach 77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_newton, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 08:57:54 compute-0 ceph-mon[76537]: pgmap v2904: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 13 08:57:55 compute-0 heuristic_newton[370594]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:57:55 compute-0 heuristic_newton[370594]: --> All data devices are unavailable
Dec 13 08:57:55 compute-0 systemd[1]: libpod-77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b.scope: Deactivated successfully.
Dec 13 08:57:55 compute-0 podman[370577]: 2025-12-13 08:57:55.13034529 +0000 UTC m=+0.908811947 container died 77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.199 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.200 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.200 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.202 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.203 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.204 248514 INFO nova.compute.manager [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Terminating instance
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.205 248514 DEBUG nova.compute.manager [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fc86f3c7521091a8e63422aa4806e5e22a753a36b8e7f645623a34f02806960-merged.mount: Deactivated successfully.
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.435 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.436 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.437 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:55 compute-0 podman[370577]: 2025-12-13 08:57:55.622311462 +0000 UTC m=+1.400778159 container remove 77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 08:57:55 compute-0 kernel: tap8ee3e1ae-47 (unregistering): left promiscuous mode
Dec 13 08:57:55 compute-0 NetworkManager[50376]: <info>  [1765616275.6399] device (tap8ee3e1ae-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:57:55 compute-0 systemd[1]: libpod-conmon-77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b.scope: Deactivated successfully.
Dec 13 08:57:55 compute-0 sudo[370498]: pam_unix(sudo:session): session closed for user root
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:55 compute-0 ovn_controller[148476]: 2025-12-13T08:57:55Z|01201|binding|INFO|Releasing lport 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 from this chassis (sb_readonly=0)
Dec 13 08:57:55 compute-0 ovn_controller[148476]: 2025-12-13T08:57:55Z|01202|binding|INFO|Setting lport 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 down in Southbound
Dec 13 08:57:55 compute-0 ovn_controller[148476]: 2025-12-13T08:57:55Z|01203|binding|INFO|Removing iface tap8ee3e1ae-47 ovn-installed in OVS
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.721 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:55 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Dec 13 08:57:55 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d0000007a.scope: Consumed 13.738s CPU time.
Dec 13 08:57:55 compute-0 systemd-machined[210538]: Machine qemu-149-instance-0000007a terminated.
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.758 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:3b:59 10.100.0.13'], port_security=['fa:16:3e:ed:3b:59 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5a7142a0-6e82-418c-affe-88fd6beb2ad9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8944c356-661d-4684-a169-e2ad4b13e098', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca3b9929-a40c-461d-b2b9-49fd6af07fd3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.760 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 in datapath 793ba3c3-1004-4068-a89a-dc7b4c56fc43 unbound from our chassis
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.763 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 793ba3c3-1004-4068-a89a-dc7b4c56fc43
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.783 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f41b49-7a4f-47d8-958b-8076ca16382f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:55 compute-0 sudo[370630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:57:55 compute-0 sudo[370630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:57:55 compute-0 sudo[370630]: pam_unix(sudo:session): session closed for user root
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.820 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7ee9c5-5a11-43f0-90e1-3dd85591a04a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.824 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1c335d60-fa85-4f46-a189-2605e33eb351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.835 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.853 248514 INFO nova.virt.libvirt.driver [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Instance destroyed successfully.
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.854 248514 DEBUG nova.objects.instance [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 5a7142a0-6e82-418c-affe-88fd6beb2ad9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.870 248514 DEBUG nova.virt.libvirt.vif [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-182179822',display_name='tempest-TestNetworkBasicOps-server-182179822',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-182179822',id=122,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIyOWHhK5PGqeOlzFwokovmTuf3HTihMwOQrzfuGYU+/TrdkTdWDTQvnoNZ7qiFrzCGlnIvswkbj8TaejN4nwLPFUx3mjtQULdplgXkj1ea+cO+RfMC1iM+NaDk/WgBTsg==',key_name='tempest-TestNetworkBasicOps-1885276769',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-ly8zv4h3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:57:36Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5a7142a0-6e82-418c-affe-88fd6beb2ad9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.871 248514 DEBUG nova.network.os_vif_util [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.872 248514 DEBUG nova.network.os_vif_util [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.871 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eb7b76c7-d90e-48c0-ba9a-bf736e835b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.872 248514 DEBUG os_vif [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.874 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ee3e1ae-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:55 compute-0 sudo[370664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:57:55 compute-0 sudo[370664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.890 248514 INFO os_vif [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47')
Dec 13 08:57:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.908 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75d85142-62c0-4ef0-9fca-425d9eb304a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap793ba3c3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:ed:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 349], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878911, 'reachable_time': 42802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370699, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.936 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71881209-143f-4cef-b0f9-b35f0b1f769f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap793ba3c3-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878923, 'tstamp': 878923}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370716, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap793ba3c3-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878927, 'tstamp': 878927}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370716, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.938 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap793ba3c3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:55 compute-0 nova_compute[248510]: 2025-12-13 08:57:55.941 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.942 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap793ba3c3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.942 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.942 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap793ba3c3-10, col_values=(('external_ids', {'iface-id': 'e25ab14d-8bf6-4007-ae4c-085df43b875d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:57:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.943 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:57:56 compute-0 podman[370732]: 2025-12-13 08:57:56.174201241 +0000 UTC m=+0.026438718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:57:56 compute-0 podman[370732]: 2025-12-13 08:57:56.296938165 +0000 UTC m=+0.149175642 container create b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:57:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2905: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 13 08:57:56 compute-0 nova_compute[248510]: 2025-12-13 08:57:56.338 248514 DEBUG nova.compute.manager [req-a72b99c4-d674-4f30-bced-7f461f864a2a req-a81a7e3b-6ec9-4f47-87ac-660ead63bf00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-vif-unplugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:56 compute-0 nova_compute[248510]: 2025-12-13 08:57:56.339 248514 DEBUG oslo_concurrency.lockutils [req-a72b99c4-d674-4f30-bced-7f461f864a2a req-a81a7e3b-6ec9-4f47-87ac-660ead63bf00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:56 compute-0 nova_compute[248510]: 2025-12-13 08:57:56.339 248514 DEBUG oslo_concurrency.lockutils [req-a72b99c4-d674-4f30-bced-7f461f864a2a req-a81a7e3b-6ec9-4f47-87ac-660ead63bf00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:56 compute-0 nova_compute[248510]: 2025-12-13 08:57:56.339 248514 DEBUG oslo_concurrency.lockutils [req-a72b99c4-d674-4f30-bced-7f461f864a2a req-a81a7e3b-6ec9-4f47-87ac-660ead63bf00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:56 compute-0 nova_compute[248510]: 2025-12-13 08:57:56.339 248514 DEBUG nova.compute.manager [req-a72b99c4-d674-4f30-bced-7f461f864a2a req-a81a7e3b-6ec9-4f47-87ac-660ead63bf00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] No waiting events found dispatching network-vif-unplugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:57:56 compute-0 nova_compute[248510]: 2025-12-13 08:57:56.339 248514 DEBUG nova.compute.manager [req-a72b99c4-d674-4f30-bced-7f461f864a2a req-a81a7e3b-6ec9-4f47-87ac-660ead63bf00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-vif-unplugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:57:56 compute-0 ceph-mon[76537]: pgmap v2905: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 13 08:57:56 compute-0 systemd[1]: Started libpod-conmon-b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4.scope.
Dec 13 08:57:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:57:56 compute-0 podman[370732]: 2025-12-13 08:57:56.777440386 +0000 UTC m=+0.629677903 container init b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_dubinsky, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:57:56 compute-0 podman[370732]: 2025-12-13 08:57:56.790421604 +0000 UTC m=+0.642659101 container start b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_dubinsky, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:57:56 compute-0 vigorous_dubinsky[370749]: 167 167
Dec 13 08:57:56 compute-0 systemd[1]: libpod-b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4.scope: Deactivated successfully.
Dec 13 08:57:56 compute-0 podman[370732]: 2025-12-13 08:57:56.793912219 +0000 UTC m=+0.646149736 container attach b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 08:57:56 compute-0 podman[370732]: 2025-12-13 08:57:56.803592816 +0000 UTC m=+0.655830303 container died b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_dubinsky, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 08:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c287a890b2ad78dc3f75eeca2019211e4af85016e4db29219d9340510199a581-merged.mount: Deactivated successfully.
Dec 13 08:57:56 compute-0 podman[370732]: 2025-12-13 08:57:56.848410693 +0000 UTC m=+0.700648170 container remove b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_dubinsky, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:57:56 compute-0 systemd[1]: libpod-conmon-b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4.scope: Deactivated successfully.
Dec 13 08:57:56 compute-0 nova_compute[248510]: 2025-12-13 08:57:56.971 248514 INFO nova.virt.libvirt.driver [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Deleting instance files /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9_del
Dec 13 08:57:56 compute-0 nova_compute[248510]: 2025-12-13 08:57:56.971 248514 INFO nova.virt.libvirt.driver [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Deletion of /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9_del complete
Dec 13 08:57:57 compute-0 nova_compute[248510]: 2025-12-13 08:57:57.034 248514 INFO nova.compute.manager [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Took 1.83 seconds to destroy the instance on the hypervisor.
Dec 13 08:57:57 compute-0 nova_compute[248510]: 2025-12-13 08:57:57.035 248514 DEBUG oslo.service.loopingcall [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:57:57 compute-0 nova_compute[248510]: 2025-12-13 08:57:57.035 248514 DEBUG nova.compute.manager [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:57:57 compute-0 nova_compute[248510]: 2025-12-13 08:57:57.035 248514 DEBUG nova.network.neutron [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:57:57 compute-0 podman[370773]: 2025-12-13 08:57:57.059046169 +0000 UTC m=+0.049553414 container create 3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:57:57 compute-0 systemd[1]: Started libpod-conmon-3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4.scope.
Dec 13 08:57:57 compute-0 podman[370773]: 2025-12-13 08:57:57.037632085 +0000 UTC m=+0.028139330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:57:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c779bda6682a772f0dba6aa2f1b35c81f66b0e65968a49d5299b862caf679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c779bda6682a772f0dba6aa2f1b35c81f66b0e65968a49d5299b862caf679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c779bda6682a772f0dba6aa2f1b35c81f66b0e65968a49d5299b862caf679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c779bda6682a772f0dba6aa2f1b35c81f66b0e65968a49d5299b862caf679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:57 compute-0 podman[370773]: 2025-12-13 08:57:57.158586686 +0000 UTC m=+0.149093911 container init 3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:57:57 compute-0 podman[370773]: 2025-12-13 08:57:57.166594682 +0000 UTC m=+0.157101907 container start 3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_haibt, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 08:57:57 compute-0 podman[370773]: 2025-12-13 08:57:57.16980176 +0000 UTC m=+0.160308985 container attach 3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_haibt, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:57:57 compute-0 quirky_haibt[370790]: {
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:     "0": [
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:         {
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "devices": [
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "/dev/loop3"
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             ],
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_name": "ceph_lv0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_size": "21470642176",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "name": "ceph_lv0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "tags": {
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.cluster_name": "ceph",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.crush_device_class": "",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.encrypted": "0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.objectstore": "bluestore",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.osd_id": "0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.type": "block",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.vdo": "0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.with_tpm": "0"
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             },
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "type": "block",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "vg_name": "ceph_vg0"
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:         }
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:     ],
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:     "1": [
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:         {
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "devices": [
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "/dev/loop4"
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             ],
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_name": "ceph_lv1",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_size": "21470642176",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "name": "ceph_lv1",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "tags": {
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.cluster_name": "ceph",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.crush_device_class": "",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.encrypted": "0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.objectstore": "bluestore",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.osd_id": "1",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.type": "block",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.vdo": "0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.with_tpm": "0"
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             },
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "type": "block",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "vg_name": "ceph_vg1"
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:         }
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:     ],
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:     "2": [
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:         {
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "devices": [
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "/dev/loop5"
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             ],
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_name": "ceph_lv2",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_size": "21470642176",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "name": "ceph_lv2",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "tags": {
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.cluster_name": "ceph",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.crush_device_class": "",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.encrypted": "0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.objectstore": "bluestore",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.osd_id": "2",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.type": "block",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.vdo": "0",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:                 "ceph.with_tpm": "0"
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             },
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "type": "block",
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:             "vg_name": "ceph_vg2"
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:         }
Dec 13 08:57:57 compute-0 quirky_haibt[370790]:     ]
Dec 13 08:57:57 compute-0 quirky_haibt[370790]: }
Dec 13 08:57:57 compute-0 systemd[1]: libpod-3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4.scope: Deactivated successfully.
Dec 13 08:57:57 compute-0 podman[370773]: 2025-12-13 08:57:57.502046473 +0000 UTC m=+0.492553698 container died 3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_haibt, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 08:57:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f55c779bda6682a772f0dba6aa2f1b35c81f66b0e65968a49d5299b862caf679-merged.mount: Deactivated successfully.
Dec 13 08:57:57 compute-0 podman[370773]: 2025-12-13 08:57:57.550912299 +0000 UTC m=+0.541419524 container remove 3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:57:57 compute-0 systemd[1]: libpod-conmon-3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4.scope: Deactivated successfully.
Dec 13 08:57:57 compute-0 sudo[370664]: pam_unix(sudo:session): session closed for user root
Dec 13 08:57:57 compute-0 sudo[370811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:57:57 compute-0 sudo[370811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:57:57 compute-0 sudo[370811]: pam_unix(sudo:session): session closed for user root
Dec 13 08:57:57 compute-0 nova_compute[248510]: 2025-12-13 08:57:57.678 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:57:57 compute-0 sudo[370836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:57:57 compute-0 sudo[370836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:57:58 compute-0 podman[370871]: 2025-12-13 08:57:58.060709968 +0000 UTC m=+0.040186515 container create 08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:57:58 compute-0 systemd[1]: Started libpod-conmon-08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f.scope.
Dec 13 08:57:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:57:58 compute-0 podman[370871]: 2025-12-13 08:57:58.127929483 +0000 UTC m=+0.107406060 container init 08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_feistel, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:57:58 compute-0 podman[370871]: 2025-12-13 08:57:58.134402911 +0000 UTC m=+0.113879458 container start 08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_feistel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 08:57:58 compute-0 podman[370871]: 2025-12-13 08:57:58.041392645 +0000 UTC m=+0.020869192 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:57:58 compute-0 podman[370871]: 2025-12-13 08:57:58.13843709 +0000 UTC m=+0.117913657 container attach 08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:57:58 compute-0 busy_feistel[370887]: 167 167
Dec 13 08:57:58 compute-0 systemd[1]: libpod-08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f.scope: Deactivated successfully.
Dec 13 08:57:58 compute-0 podman[370871]: 2025-12-13 08:57:58.140027399 +0000 UTC m=+0.119503946 container died 08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:57:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a9ace429bab01ab8b65d1ec7a7dd7e79320851e21d4e8250e40099ff38a050a-merged.mount: Deactivated successfully.
Dec 13 08:57:58 compute-0 podman[370871]: 2025-12-13 08:57:58.178953052 +0000 UTC m=+0.158429599 container remove 08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:57:58 compute-0 systemd[1]: libpod-conmon-08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f.scope: Deactivated successfully.
Dec 13 08:57:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2906: 321 pgs: 321 active+clean; 213 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 149 op/s
Dec 13 08:57:58 compute-0 podman[370911]: 2025-12-13 08:57:58.393443902 +0000 UTC m=+0.047566775 container create d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chebyshev, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 08:57:58 compute-0 systemd[1]: Started libpod-conmon-d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450.scope.
Dec 13 08:57:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:57:58 compute-0 podman[370911]: 2025-12-13 08:57:58.376265462 +0000 UTC m=+0.030388365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68530751667ade0f76d13871f41d545b01703d0d80a4db7b4031352e671e84d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68530751667ade0f76d13871f41d545b01703d0d80a4db7b4031352e671e84d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68530751667ade0f76d13871f41d545b01703d0d80a4db7b4031352e671e84d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68530751667ade0f76d13871f41d545b01703d0d80a4db7b4031352e671e84d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:57:58 compute-0 podman[370911]: 2025-12-13 08:57:58.492902287 +0000 UTC m=+0.147025180 container init d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 08:57:58 compute-0 nova_compute[248510]: 2025-12-13 08:57:58.497 248514 DEBUG nova.compute.manager [req-88da77ac-52cb-4ca8-a3c1-347381dcb04a req-81fea4bd-654e-4bc7-9665-3ba9d8c484b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:57:58 compute-0 nova_compute[248510]: 2025-12-13 08:57:58.500 248514 DEBUG oslo_concurrency.lockutils [req-88da77ac-52cb-4ca8-a3c1-347381dcb04a req-81fea4bd-654e-4bc7-9665-3ba9d8c484b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:57:58 compute-0 nova_compute[248510]: 2025-12-13 08:57:58.500 248514 DEBUG oslo_concurrency.lockutils [req-88da77ac-52cb-4ca8-a3c1-347381dcb04a req-81fea4bd-654e-4bc7-9665-3ba9d8c484b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:57:58 compute-0 nova_compute[248510]: 2025-12-13 08:57:58.500 248514 DEBUG oslo_concurrency.lockutils [req-88da77ac-52cb-4ca8-a3c1-347381dcb04a req-81fea4bd-654e-4bc7-9665-3ba9d8c484b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:57:58 compute-0 nova_compute[248510]: 2025-12-13 08:57:58.500 248514 DEBUG nova.compute.manager [req-88da77ac-52cb-4ca8-a3c1-347381dcb04a req-81fea4bd-654e-4bc7-9665-3ba9d8c484b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] No waiting events found dispatching network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:57:58 compute-0 nova_compute[248510]: 2025-12-13 08:57:58.501 248514 WARNING nova.compute.manager [req-88da77ac-52cb-4ca8-a3c1-347381dcb04a req-81fea4bd-654e-4bc7-9665-3ba9d8c484b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received unexpected event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 for instance with vm_state active and task_state deleting.
Dec 13 08:57:58 compute-0 podman[370911]: 2025-12-13 08:57:58.504009059 +0000 UTC m=+0.158131932 container start d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 08:57:58 compute-0 podman[370911]: 2025-12-13 08:57:58.524408528 +0000 UTC m=+0.178531491 container attach d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chebyshev, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 08:57:59 compute-0 lvm[371006]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:57:59 compute-0 lvm[371005]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:57:59 compute-0 lvm[371005]: VG ceph_vg0 finished
Dec 13 08:57:59 compute-0 lvm[371006]: VG ceph_vg1 finished
Dec 13 08:57:59 compute-0 lvm[371008]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:57:59 compute-0 lvm[371008]: VG ceph_vg2 finished
Dec 13 08:57:59 compute-0 vigilant_chebyshev[370927]: {}
Dec 13 08:57:59 compute-0 ceph-mon[76537]: pgmap v2906: 321 pgs: 321 active+clean; 213 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 149 op/s
Dec 13 08:57:59 compute-0 systemd[1]: libpod-d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450.scope: Deactivated successfully.
Dec 13 08:57:59 compute-0 systemd[1]: libpod-d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450.scope: Consumed 1.371s CPU time.
Dec 13 08:57:59 compute-0 podman[370911]: 2025-12-13 08:57:59.386584982 +0000 UTC m=+1.040707855 container died d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chebyshev, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c68530751667ade0f76d13871f41d545b01703d0d80a4db7b4031352e671e84d-merged.mount: Deactivated successfully.
Dec 13 08:57:59 compute-0 podman[370911]: 2025-12-13 08:57:59.444297135 +0000 UTC m=+1.098420008 container remove d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 08:57:59 compute-0 systemd[1]: libpod-conmon-d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450.scope: Deactivated successfully.
Dec 13 08:57:59 compute-0 sudo[370836]: pam_unix(sudo:session): session closed for user root
Dec 13 08:57:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:57:59 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:57:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:57:59 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:57:59 compute-0 sudo[371022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:57:59 compute-0 sudo[371022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:57:59 compute-0 sudo[371022]: pam_unix(sudo:session): session closed for user root
Dec 13 08:57:59 compute-0 ovn_controller[148476]: 2025-12-13T08:57:59Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:d0:ff 10.100.0.11
Dec 13 08:57:59 compute-0 ovn_controller[148476]: 2025-12-13T08:57:59Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:d0:ff 10.100.0.11
Dec 13 08:58:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2907: 321 pgs: 321 active+clean; 182 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 185 op/s
Dec 13 08:58:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:58:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:58:00 compute-0 nova_compute[248510]: 2025-12-13 08:58:00.509 248514 DEBUG nova.network.neutron [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:58:00 compute-0 nova_compute[248510]: 2025-12-13 08:58:00.548 248514 INFO nova.compute.manager [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Took 3.51 seconds to deallocate network for instance.
Dec 13 08:58:00 compute-0 nova_compute[248510]: 2025-12-13 08:58:00.649 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:00 compute-0 nova_compute[248510]: 2025-12-13 08:58:00.650 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:00 compute-0 nova_compute[248510]: 2025-12-13 08:58:00.760 248514 DEBUG nova.compute.manager [req-82a5cedd-f32e-4625-aca5-621eea7ea5cb req-d2b300ff-3d96-42e6-aa6b-3f1760a83068 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-vif-deleted-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:58:00 compute-0 nova_compute[248510]: 2025-12-13 08:58:00.769 248514 DEBUG oslo_concurrency.processutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:58:00 compute-0 nova_compute[248510]: 2025-12-13 08:58:00.880 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:58:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:58:01 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193198550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:58:01 compute-0 nova_compute[248510]: 2025-12-13 08:58:01.393 248514 DEBUG oslo_concurrency.processutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:58:01 compute-0 nova_compute[248510]: 2025-12-13 08:58:01.402 248514 DEBUG nova.compute.provider_tree [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:58:01 compute-0 nova_compute[248510]: 2025-12-13 08:58:01.424 248514 DEBUG nova.scheduler.client.report [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:58:01 compute-0 nova_compute[248510]: 2025-12-13 08:58:01.454 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:01 compute-0 nova_compute[248510]: 2025-12-13 08:58:01.486 248514 INFO nova.scheduler.client.report [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 5a7142a0-6e82-418c-affe-88fd6beb2ad9
Dec 13 08:58:01 compute-0 ceph-mon[76537]: pgmap v2907: 321 pgs: 321 active+clean; 182 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 185 op/s
Dec 13 08:58:01 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1193198550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:58:01 compute-0 nova_compute[248510]: 2025-12-13 08:58:01.570 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2908: 321 pgs: 321 active+clean; 182 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 110 op/s
Dec 13 08:58:02 compute-0 ceph-mon[76537]: pgmap v2908: 321 pgs: 321 active+clean; 182 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 110 op/s
Dec 13 08:58:02 compute-0 nova_compute[248510]: 2025-12-13 08:58:02.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.299 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.300 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.300 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.300 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.301 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.302 248514 INFO nova.compute.manager [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Terminating instance
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.303 248514 DEBUG nova.compute.manager [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:58:03 compute-0 kernel: tap4013e964-3f (unregistering): left promiscuous mode
Dec 13 08:58:03 compute-0 NetworkManager[50376]: <info>  [1765616283.3539] device (tap4013e964-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.361 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:03 compute-0 ovn_controller[148476]: 2025-12-13T08:58:03Z|01204|binding|INFO|Releasing lport 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 from this chassis (sb_readonly=0)
Dec 13 08:58:03 compute-0 ovn_controller[148476]: 2025-12-13T08:58:03Z|01205|binding|INFO|Setting lport 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 down in Southbound
Dec 13 08:58:03 compute-0 ovn_controller[148476]: 2025-12-13T08:58:03Z|01206|binding|INFO|Removing iface tap4013e964-3f ovn-installed in OVS
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.372 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:a9:be 10.100.0.9'], port_security=['fa:16:3e:7e:a9:be 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c78db00b-677b-4c8b-af80-5bb717876b41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca3b9929-a40c-461d-b2b9-49fd6af07fd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.373 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 in datapath 793ba3c3-1004-4068-a89a-dc7b4c56fc43 unbound from our chassis
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.375 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 793ba3c3-1004-4068-a89a-dc7b4c56fc43, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.377 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[829b0229-a1a1-4eeb-8d68-40615b02ba7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.379 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43 namespace which is not needed anymore
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.382 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:03 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000079.scope: Deactivated successfully.
Dec 13 08:58:03 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000079.scope: Consumed 14.994s CPU time.
Dec 13 08:58:03 compute-0 systemd-machined[210538]: Machine qemu-148-instance-00000079 terminated.
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.545 248514 INFO nova.virt.libvirt.driver [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Instance destroyed successfully.
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.545 248514 DEBUG nova.objects.instance [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:58:03 compute-0 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [NOTICE]   (368751) : haproxy version is 2.8.14-c23fe91
Dec 13 08:58:03 compute-0 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [NOTICE]   (368751) : path to executable is /usr/sbin/haproxy
Dec 13 08:58:03 compute-0 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [WARNING]  (368751) : Exiting Master process...
Dec 13 08:58:03 compute-0 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [ALERT]    (368751) : Current worker (368753) exited with code 143 (Terminated)
Dec 13 08:58:03 compute-0 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [WARNING]  (368751) : All workers exited. Exiting... (0)
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.563 248514 DEBUG nova.virt.libvirt.vif [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:56:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1758092435',display_name='tempest-TestNetworkBasicOps-server-1758092435',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1758092435',id=121,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDDhA9DlC0XXlTi33TP7442ZgmxgyTTZRyp/5+PtSzz/z4TT06lLY5cCNioPf17m6xj5p3Rza8zGSpra/Ou4pMBK7drw3VX1RTJrfYr/jaVe2RRgvmXLfZfYTeWegMxqwQ==',key_name='tempest-TestNetworkBasicOps-530003057',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:56:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-9yn1gbd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:56:52Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.563 248514 DEBUG nova.network.os_vif_util [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.566 248514 DEBUG nova.network.os_vif_util [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:58:03 compute-0 systemd[1]: libpod-5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17.scope: Deactivated successfully.
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.567 248514 DEBUG os_vif [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.570 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4013e964-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:58:03 compute-0 podman[371093]: 2025-12-13 08:58:03.571163071 +0000 UTC m=+0.058917813 container died 5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.579 248514 INFO os_vif [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f')
Dec 13 08:58:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17-userdata-shm.mount: Deactivated successfully.
Dec 13 08:58:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-aad8c437fd7f83a96f7ee83b44aeba75a23697638d72f15b7ac660f7308e1111-merged.mount: Deactivated successfully.
Dec 13 08:58:03 compute-0 podman[371093]: 2025-12-13 08:58:03.616649265 +0000 UTC m=+0.104404007 container cleanup 5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.625 248514 DEBUG nova.compute.manager [req-eec9fecb-9e52-42ad-a959-4f9108946e02 req-f7462493-4052-44ee-b07c-16327d58fbe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-vif-unplugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.625 248514 DEBUG oslo_concurrency.lockutils [req-eec9fecb-9e52-42ad-a959-4f9108946e02 req-f7462493-4052-44ee-b07c-16327d58fbe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.626 248514 DEBUG oslo_concurrency.lockutils [req-eec9fecb-9e52-42ad-a959-4f9108946e02 req-f7462493-4052-44ee-b07c-16327d58fbe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.626 248514 DEBUG oslo_concurrency.lockutils [req-eec9fecb-9e52-42ad-a959-4f9108946e02 req-f7462493-4052-44ee-b07c-16327d58fbe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.626 248514 DEBUG nova.compute.manager [req-eec9fecb-9e52-42ad-a959-4f9108946e02 req-f7462493-4052-44ee-b07c-16327d58fbe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] No waiting events found dispatching network-vif-unplugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.627 248514 DEBUG nova.compute.manager [req-eec9fecb-9e52-42ad-a959-4f9108946e02 req-f7462493-4052-44ee-b07c-16327d58fbe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-vif-unplugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:58:03 compute-0 systemd[1]: libpod-conmon-5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17.scope: Deactivated successfully.
Dec 13 08:58:03 compute-0 podman[371149]: 2025-12-13 08:58:03.691145318 +0000 UTC m=+0.047459393 container remove 5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.697 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55558e1d-1a5e-451d-be7f-aba67b3c1cf0]: (4, ('Sat Dec 13 08:58:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43 (5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17)\n5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17\nSat Dec 13 08:58:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43 (5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17)\n5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.699 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e99657c5-3ec0-4cca-8316-ecef893d6fa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.701 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap793ba3c3-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.703 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:03 compute-0 kernel: tap793ba3c3-10: left promiscuous mode
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.720 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cfee50c6-5ec3-4905-9e15-f13dd3810e94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.736 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3848fafd-fee8-4f84-9cbe-a2d68e9c889f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.738 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8c0f1c-93f7-4a53-b311-d2e4dbc45428]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.759 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb517c52-4b7d-4730-8310-a01bbda7c382]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878905, 'reachable_time': 38688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371172, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d793ba3c3\x2d1004\x2d4068\x2da89a\x2ddc7b4c56fc43.mount: Deactivated successfully.
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.770 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:58:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.770 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4db225dd-66e0-4e7b-bfd2-a5de3515714b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:58:03 compute-0 podman[371162]: 2025-12-13 08:58:03.812097449 +0000 UTC m=+0.067453012 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 08:58:03 compute-0 podman[371164]: 2025-12-13 08:58:03.83012201 +0000 UTC m=+0.083373912 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 08:58:03 compute-0 podman[371184]: 2025-12-13 08:58:03.881133509 +0000 UTC m=+0.090096737 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.881 248514 INFO nova.virt.libvirt.driver [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Deleting instance files /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_del
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.882 248514 INFO nova.virt.libvirt.driver [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Deletion of /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_del complete
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.943 248514 INFO nova.compute.manager [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Took 0.64 seconds to destroy the instance on the hypervisor.
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.943 248514 DEBUG oslo.service.loopingcall [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.944 248514 DEBUG nova.compute.manager [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:58:03 compute-0 nova_compute[248510]: 2025-12-13 08:58:03.944 248514 DEBUG nova.network.neutron [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:58:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2909: 321 pgs: 321 active+clean; 160 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.8 MiB/s wr, 158 op/s
Dec 13 08:58:04 compute-0 nova_compute[248510]: 2025-12-13 08:58:04.540 248514 DEBUG nova.network.neutron [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:58:04 compute-0 nova_compute[248510]: 2025-12-13 08:58:04.557 248514 INFO nova.compute.manager [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Took 0.61 seconds to deallocate network for instance.
Dec 13 08:58:04 compute-0 nova_compute[248510]: 2025-12-13 08:58:04.603 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:04 compute-0 nova_compute[248510]: 2025-12-13 08:58:04.604 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:04 compute-0 nova_compute[248510]: 2025-12-13 08:58:04.677 248514 DEBUG oslo_concurrency.processutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:58:04 compute-0 nova_compute[248510]: 2025-12-13 08:58:04.724 248514 DEBUG nova.compute.manager [req-2f622381-4451-4840-8aba-0bfd3b6bc2d9 req-d2a32941-7dbd-40e1-97e9-f89ba1ffe769 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-vif-deleted-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:58:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:58:05 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577693289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:58:05 compute-0 nova_compute[248510]: 2025-12-13 08:58:05.232 248514 DEBUG oslo_concurrency.processutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:58:05 compute-0 nova_compute[248510]: 2025-12-13 08:58:05.240 248514 DEBUG nova.compute.provider_tree [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:58:05 compute-0 nova_compute[248510]: 2025-12-13 08:58:05.269 248514 DEBUG nova.scheduler.client.report [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:58:05 compute-0 nova_compute[248510]: 2025-12-13 08:58:05.296 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:05 compute-0 ceph-mon[76537]: pgmap v2909: 321 pgs: 321 active+clean; 160 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.8 MiB/s wr, 158 op/s
Dec 13 08:58:05 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1577693289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:58:05 compute-0 nova_compute[248510]: 2025-12-13 08:58:05.408 248514 INFO nova.scheduler.client.report [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d
Dec 13 08:58:05 compute-0 nova_compute[248510]: 2025-12-13 08:58:05.498 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:05 compute-0 nova_compute[248510]: 2025-12-13 08:58:05.840 248514 DEBUG nova.compute.manager [req-be67be78-33ae-4639-b554-6b78512cde6d req-dd279db2-9e20-4802-9701-0e88995a2941 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:58:05 compute-0 nova_compute[248510]: 2025-12-13 08:58:05.840 248514 DEBUG oslo_concurrency.lockutils [req-be67be78-33ae-4639-b554-6b78512cde6d req-dd279db2-9e20-4802-9701-0e88995a2941 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:05 compute-0 nova_compute[248510]: 2025-12-13 08:58:05.841 248514 DEBUG oslo_concurrency.lockutils [req-be67be78-33ae-4639-b554-6b78512cde6d req-dd279db2-9e20-4802-9701-0e88995a2941 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:05 compute-0 nova_compute[248510]: 2025-12-13 08:58:05.841 248514 DEBUG oslo_concurrency.lockutils [req-be67be78-33ae-4639-b554-6b78512cde6d req-dd279db2-9e20-4802-9701-0e88995a2941 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:05 compute-0 nova_compute[248510]: 2025-12-13 08:58:05.842 248514 DEBUG nova.compute.manager [req-be67be78-33ae-4639-b554-6b78512cde6d req-dd279db2-9e20-4802-9701-0e88995a2941 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] No waiting events found dispatching network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:58:05 compute-0 nova_compute[248510]: 2025-12-13 08:58:05.842 248514 WARNING nova.compute.manager [req-be67be78-33ae-4639-b554-6b78512cde6d req-dd279db2-9e20-4802-9701-0e88995a2941 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received unexpected event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 for instance with vm_state deleted and task_state None.
Dec 13 08:58:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:58:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2910: 321 pgs: 321 active+clean; 160 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec 13 08:58:06 compute-0 nova_compute[248510]: 2025-12-13 08:58:06.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:58:07 compute-0 ceph-mon[76537]: pgmap v2910: 321 pgs: 321 active+clean; 160 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec 13 08:58:07 compute-0 nova_compute[248510]: 2025-12-13 08:58:07.684 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2911: 321 pgs: 321 active+clean; 121 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 13 08:58:08 compute-0 nova_compute[248510]: 2025-12-13 08:58:08.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:58:09
Dec 13 08:58:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:58:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:58:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'backups', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec 13 08:58:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:58:09 compute-0 ceph-mon[76537]: pgmap v2911: 321 pgs: 321 active+clean; 121 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 13 08:58:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:09.785 158745 DEBUG eventlet.wsgi.server [-] (158745) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Dec 13 08:58:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:09.787 158745 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Dec 13 08:58:09 compute-0 ovn_metadata_agent[158414]: Accept: */*
Dec 13 08:58:09 compute-0 ovn_metadata_agent[158414]: Connection: close
Dec 13 08:58:09 compute-0 ovn_metadata_agent[158414]: Content-Type: text/plain
Dec 13 08:58:09 compute-0 ovn_metadata_agent[158414]: Host: 169.254.169.254
Dec 13 08:58:09 compute-0 ovn_metadata_agent[158414]: User-Agent: curl/7.84.0
Dec 13 08:58:09 compute-0 ovn_metadata_agent[158414]: X-Forwarded-For: 10.100.0.11
Dec 13 08:58:09 compute-0 ovn_metadata_agent[158414]: X-Ovn-Network-Id: b9992fbc-9a6f-4e82-84b9-be47eb5816aa __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2912: 321 pgs: 321 active+clean; 121 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 13 08:58:10 compute-0 nova_compute[248510]: 2025-12-13 08:58:10.849 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616275.848521, 5a7142a0-6e82-418c-affe-88fd6beb2ad9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:58:10 compute-0 nova_compute[248510]: 2025-12-13 08:58:10.850 248514 INFO nova.compute.manager [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] VM Stopped (Lifecycle Event)
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:58:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:58:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:58:11 compute-0 nova_compute[248510]: 2025-12-13 08:58:11.010 248514 DEBUG nova.compute.manager [None req-10080ed7-5648-46bd-ad9a-c49275b676e9 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:58:11 compute-0 ceph-mon[76537]: pgmap v2912: 321 pgs: 321 active+clean; 121 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 13 08:58:11 compute-0 nova_compute[248510]: 2025-12-13 08:58:11.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:58:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:11.911 158745 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Dec 13 08:58:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:11.911 158745 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 2.1242702
Dec 13 08:58:11 compute-0 haproxy-metadata-proxy-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370067]: 10.100.0.11:35396 [13/Dec/2025:08:58:09.783] listener listener/metadata 0/0/0/2127/2127 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:12.027 158745 DEBUG eventlet.wsgi.server [-] (158745) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:12.028 158745 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: Accept: */*
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: Connection: close
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: Content-Length: 100
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: Content-Type: application/x-www-form-urlencoded
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: Host: 169.254.169.254
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: User-Agent: curl/7.84.0
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: X-Forwarded-For: 10.100.0.11
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: X-Ovn-Network-Id: b9992fbc-9a6f-4e82-84b9-be47eb5816aa
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:12.256 158745 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Dec 13 08:58:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:12.256 158745 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2287345
Dec 13 08:58:12 compute-0 haproxy-metadata-proxy-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370067]: 10.100.0.11:60354 [13/Dec/2025:08:58:12.026] listener listener/metadata 0/0/0/230/230 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec 13 08:58:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2913: 321 pgs: 321 active+clean; 121 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 169 KiB/s rd, 678 KiB/s wr, 62 op/s
Dec 13 08:58:12 compute-0 nova_compute[248510]: 2025-12-13 08:58:12.686 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:13 compute-0 ceph-mon[76537]: pgmap v2913: 321 pgs: 321 active+clean; 121 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 169 KiB/s rd, 678 KiB/s wr, 62 op/s
Dec 13 08:58:13 compute-0 nova_compute[248510]: 2025-12-13 08:58:13.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:13 compute-0 nova_compute[248510]: 2025-12-13 08:58:13.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:58:13 compute-0 nova_compute[248510]: 2025-12-13 08:58:13.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:58:13 compute-0 nova_compute[248510]: 2025-12-13 08:58:13.871 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.197 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.198 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.198 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.198 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.199 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.200 248514 INFO nova.compute.manager [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Terminating instance
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.201 248514 DEBUG nova.compute.manager [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:58:14 compute-0 kernel: tap6b86528d-c1 (unregistering): left promiscuous mode
Dec 13 08:58:14 compute-0 NetworkManager[50376]: <info>  [1765616294.2848] device (tap6b86528d-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01207|binding|INFO|Releasing lport 6b86528d-c1b7-4776-809f-1f8b37569b6f from this chassis (sb_readonly=0)
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.290 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01208|binding|INFO|Setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f down in Southbound
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01209|binding|INFO|Removing iface tap6b86528d-c1 ovn-installed in OVS
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.293 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2914: 321 pgs: 321 active+clean; 121 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 176 KiB/s rd, 679 KiB/s wr, 64 op/s
Dec 13 08:58:14 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Dec 13 08:58:14 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d0000007b.scope: Consumed 14.534s CPU time.
Dec 13 08:58:14 compute-0 systemd-machined[210538]: Machine qemu-150-instance-0000007b terminated.
Dec 13 08:58:14 compute-0 kernel: tap6b86528d-c1: entered promiscuous mode
Dec 13 08:58:14 compute-0 kernel: tap6b86528d-c1 (unregistering): left promiscuous mode
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01210|if_status|INFO|Not updating pb chassis for 6b86528d-c1b7-4776-809f-1f8b37569b6f now as sb is readonly
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01211|binding|INFO|Claiming lport 6b86528d-c1b7-4776-809f-1f8b37569b6f for this chassis.
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01212|binding|INFO|6b86528d-c1b7-4776-809f-1f8b37569b6f: Claiming fa:16:3e:05:d0:ff 10.100.0.11
Dec 13 08:58:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.429 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:d0:ff 10.100.0.11'], port_security=['fa:16:3e:05:d0:ff 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '393f815d-d124-4aea-98c0-126aed0744bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30f07149729142048436dbfbb8bf2742', 'neutron:revision_number': '4', 'neutron:security_group_ids': '339be054-74c6-402b-b71a-0e37379fa825 59357d23-9217-43e7-99ef-d4349c0de8ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77d611ab-66b1-4445-8d3d-978ee23b13d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b86528d-c1b7-4776-809f-1f8b37569b6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:58:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.430 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b86528d-c1b7-4776-809f-1f8b37569b6f in datapath b9992fbc-9a6f-4e82-84b9-be47eb5816aa unbound from our chassis
Dec 13 08:58:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.431 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9992fbc-9a6f-4e82-84b9-be47eb5816aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:58:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.433 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68f2e81d-f915-4c2b-b97b-38ab83206a45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.433 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa namespace which is not needed anymore
Dec 13 08:58:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.441 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:d0:ff 10.100.0.11'], port_security=['fa:16:3e:05:d0:ff 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '393f815d-d124-4aea-98c0-126aed0744bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30f07149729142048436dbfbb8bf2742', 'neutron:revision_number': '4', 'neutron:security_group_ids': '339be054-74c6-402b-b71a-0e37379fa825 59357d23-9217-43e7-99ef-d4349c0de8ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77d611ab-66b1-4445-8d3d-978ee23b13d9, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b86528d-c1b7-4776-809f-1f8b37569b6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.445 248514 INFO nova.virt.libvirt.driver [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Instance destroyed successfully.
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.446 248514 DEBUG nova.objects.instance [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lazy-loading 'resources' on Instance uuid 393f815d-d124-4aea-98c0-126aed0744bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01213|binding|INFO|Setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f ovn-installed in OVS
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01214|binding|INFO|Setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f up in Southbound
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01215|binding|INFO|Releasing lport 6b86528d-c1b7-4776-809f-1f8b37569b6f from this chassis (sb_readonly=1)
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01216|if_status|INFO|Dropped 2 log messages in last 769 seconds (most recently, 769 seconds ago) due to excessive rate
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01217|if_status|INFO|Not setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f down as sb is readonly
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01218|binding|INFO|Removing iface tap6b86528d-c1 ovn-installed in OVS
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.451 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.467 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01219|binding|INFO|Releasing lport a5f80d22-8b41-40d9-b8ff-ddee53f45af0 from this chassis (sb_readonly=0)
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01220|binding|INFO|Releasing lport 6b86528d-c1b7-4776-809f-1f8b37569b6f from this chassis (sb_readonly=0)
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01221|binding|INFO|Setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f down in Southbound
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.513 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.515 248514 DEBUG nova.virt.libvirt.vif [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:57:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1358982584',display_name='tempest-TestServerBasicOps-server-1358982584',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1358982584',id=123,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLfROCcHYxf8uvMCUymzKleNDXyw2tmzRbGzC//Zjm3WjKvoC/StL1LDTb6SGUu3s96IMOyEDfcMq8YnCwGooxmPETH1rv9aB87uxpvFNJYtgkaVxxoxt7V/jUmpT5M6g==',key_name='tempest-TestServerBasicOps-1980425304',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:57:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='30f07149729142048436dbfbb8bf2742',ramdisk_id='',reservation_id='r-sw1yv1h4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1870679034',owner_user_name='tempest-TestServerBasicOps-1870679034-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='095998dc2eb348e8a90c866d4106cd74',uuid=393f815d-d124-4aea-98c0-126aed0744bd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": 
"fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.516 248514 DEBUG nova.network.os_vif_util [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Converting VIF {"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.517 248514 DEBUG nova.network.os_vif_util [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.518 248514 DEBUG os_vif [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:58:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.519 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:d0:ff 10.100.0.11'], port_security=['fa:16:3e:05:d0:ff 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '393f815d-d124-4aea-98c0-126aed0744bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30f07149729142048436dbfbb8bf2742', 'neutron:revision_number': '4', 'neutron:security_group_ids': '339be054-74c6-402b-b71a-0e37379fa825 59357d23-9217-43e7-99ef-d4349c0de8ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77d611ab-66b1-4445-8d3d-978ee23b13d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b86528d-c1b7-4776-809f-1f8b37569b6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.520 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.520 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b86528d-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.524 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.709 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.711 248514 INFO os_vif [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1')
Dec 13 08:58:14 compute-0 ovn_controller[148476]: 2025-12-13T08:58:14Z|01222|binding|INFO|Releasing lport a5f80d22-8b41-40d9-b8ff-ddee53f45af0 from this chassis (sb_readonly=0)
Dec 13 08:58:14 compute-0 nova_compute[248510]: 2025-12-13 08:58:14.732 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:14 compute-0 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [NOTICE]   (370065) : haproxy version is 2.8.14-c23fe91
Dec 13 08:58:14 compute-0 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [NOTICE]   (370065) : path to executable is /usr/sbin/haproxy
Dec 13 08:58:14 compute-0 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [WARNING]  (370065) : Exiting Master process...
Dec 13 08:58:14 compute-0 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [WARNING]  (370065) : Exiting Master process...
Dec 13 08:58:14 compute-0 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [ALERT]    (370065) : Current worker (370067) exited with code 143 (Terminated)
Dec 13 08:58:14 compute-0 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [WARNING]  (370065) : All workers exited. Exiting... (0)
Dec 13 08:58:14 compute-0 systemd[1]: libpod-852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5.scope: Deactivated successfully.
Dec 13 08:58:14 compute-0 podman[371279]: 2025-12-13 08:58:14.770126784 +0000 UTC m=+0.225721755 container died 852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5-userdata-shm.mount: Deactivated successfully.
Dec 13 08:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5948b445140de39f593e110d7d860347d27dfb64a06ca5afb4dbd7d1143ccfb3-merged.mount: Deactivated successfully.
Dec 13 08:58:14 compute-0 podman[371279]: 2025-12-13 08:58:14.928160432 +0000 UTC m=+0.383755403 container cleanup 852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:58:14 compute-0 systemd[1]: libpod-conmon-852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5.scope: Deactivated successfully.
Dec 13 08:58:15 compute-0 podman[371327]: 2025-12-13 08:58:15.039511327 +0000 UTC m=+0.090370313 container remove 852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.047 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[14b36df8-897a-4006-a189-ec43bebe176d]: (4, ('Sat Dec 13 08:58:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa (852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5)\n852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5\nSat Dec 13 08:58:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa (852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5)\n852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.049 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c63bfa44-e7fc-4aab-8b0d-290e51261d0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.050 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9992fbc-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.052 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:15 compute-0 kernel: tapb9992fbc-90: left promiscuous mode
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.086 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3309b089-0c7f-4e4b-9f8e-262a028f910a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:58:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1922456529' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:58:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:58:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1922456529' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.102 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9739a51f-6523-4eca-8442-2a035422f20a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.104 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6e584b66-194e-4353-8663-1d8ef78b7f29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.125 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f578d666-5372-4d87-b2b1-69b9a23a30b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884310, 'reachable_time': 41239, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371343, 'error': None, 'target': 'ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.128 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.129 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[817127e4-d372-405c-a1ca-2716a0f59509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.129 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b86528d-c1b7-4776-809f-1f8b37569b6f in datapath b9992fbc-9a6f-4e82-84b9-be47eb5816aa unbound from our chassis
Dec 13 08:58:15 compute-0 systemd[1]: run-netns-ovnmeta\x2db9992fbc\x2d9a6f\x2d4e82\x2d84b9\x2dbe47eb5816aa.mount: Deactivated successfully.
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.131 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9992fbc-9a6f-4e82-84b9-be47eb5816aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.132 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5e71895c-8238-44b3-8598-9a0dfbcfb433]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.133 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b86528d-c1b7-4776-809f-1f8b37569b6f in datapath b9992fbc-9a6f-4e82-84b9-be47eb5816aa unbound from our chassis
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.134 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9992fbc-9a6f-4e82-84b9-be47eb5816aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:58:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.135 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[30be4b7d-f89e-4124-8f93-2b9daa684ba6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:15 compute-0 ceph-mon[76537]: pgmap v2914: 321 pgs: 321 active+clean; 121 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 176 KiB/s rd, 679 KiB/s wr, 64 op/s
Dec 13 08:58:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1922456529' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:58:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1922456529' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.500 248514 DEBUG nova.compute.manager [req-f7d04e41-d6e4-4b57-80bb-9341f31f309b req-5331e619-fa72-4101-885a-897eaf83400d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-vif-unplugged-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.500 248514 DEBUG oslo_concurrency.lockutils [req-f7d04e41-d6e4-4b57-80bb-9341f31f309b req-5331e619-fa72-4101-885a-897eaf83400d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.500 248514 DEBUG oslo_concurrency.lockutils [req-f7d04e41-d6e4-4b57-80bb-9341f31f309b req-5331e619-fa72-4101-885a-897eaf83400d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.501 248514 DEBUG oslo_concurrency.lockutils [req-f7d04e41-d6e4-4b57-80bb-9341f31f309b req-5331e619-fa72-4101-885a-897eaf83400d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.501 248514 DEBUG nova.compute.manager [req-f7d04e41-d6e4-4b57-80bb-9341f31f309b req-5331e619-fa72-4101-885a-897eaf83400d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] No waiting events found dispatching network-vif-unplugged-6b86528d-c1b7-4776-809f-1f8b37569b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.501 248514 DEBUG nova.compute.manager [req-f7d04e41-d6e4-4b57-80bb-9341f31f309b req-5331e619-fa72-4101-885a-897eaf83400d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-vif-unplugged-6b86528d-c1b7-4776-809f-1f8b37569b6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.552 248514 INFO nova.virt.libvirt.driver [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Deleting instance files /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd_del
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.553 248514 INFO nova.virt.libvirt.driver [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Deletion of /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd_del complete
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.639 248514 INFO nova.compute.manager [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Took 1.44 seconds to destroy the instance on the hypervisor.
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.640 248514 DEBUG oslo.service.loopingcall [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.640 248514 DEBUG nova.compute.manager [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:58:15 compute-0 nova_compute[248510]: 2025-12-13 08:58:15.640 248514 DEBUG nova.network.neutron [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:58:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:58:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2915: 321 pgs: 321 active+clean; 121 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 15 KiB/s wr, 16 op/s
Dec 13 08:58:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:17.001 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.002 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:17.002 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:58:17 compute-0 ceph-mon[76537]: pgmap v2915: 321 pgs: 321 active+clean; 121 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 15 KiB/s wr, 16 op/s
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.600 248514 DEBUG nova.compute.manager [req-847a7178-ad08-42fb-9c19-4576685980f5 req-9f93f9af-14e2-49c0-b063-d421691c8555 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.600 248514 DEBUG oslo_concurrency.lockutils [req-847a7178-ad08-42fb-9c19-4576685980f5 req-9f93f9af-14e2-49c0-b063-d421691c8555 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.601 248514 DEBUG oslo_concurrency.lockutils [req-847a7178-ad08-42fb-9c19-4576685980f5 req-9f93f9af-14e2-49c0-b063-d421691c8555 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.601 248514 DEBUG oslo_concurrency.lockutils [req-847a7178-ad08-42fb-9c19-4576685980f5 req-9f93f9af-14e2-49c0-b063-d421691c8555 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.601 248514 DEBUG nova.compute.manager [req-847a7178-ad08-42fb-9c19-4576685980f5 req-9f93f9af-14e2-49c0-b063-d421691c8555 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] No waiting events found dispatching network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.602 248514 WARNING nova.compute.manager [req-847a7178-ad08-42fb-9c19-4576685980f5 req-9f93f9af-14e2-49c0-b063-d421691c8555 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received unexpected event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f for instance with vm_state active and task_state deleting.
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.803 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:58:17 compute-0 nova_compute[248510]: 2025-12-13 08:58:17.804 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.048 248514 DEBUG nova.network.neutron [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.078 248514 INFO nova.compute.manager [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Took 2.44 seconds to deallocate network for instance.
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.147 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.148 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.168 248514 DEBUG nova.compute.manager [req-e43550be-e273-4bbb-8735-94666908e0c6 req-e0ae867d-232f-4375-9dcb-dc13f3232dd3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-vif-deleted-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.221 248514 DEBUG oslo_concurrency.processutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:58:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2916: 321 pgs: 321 active+clean; 91 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 16 KiB/s wr, 27 op/s
Dec 13 08:58:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:58:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636236204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.394 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.542 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616283.5401127, 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.542 248514 INFO nova.compute.manager [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] VM Stopped (Lifecycle Event)
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.571 248514 DEBUG nova.compute.manager [None req-8f688a4a-e898-4753-94e7-3ea44117b378 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.593 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:58:18 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3636236204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.595 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3516MB free_disk=59.94185414817184GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.596 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:58:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1348939361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.835 248514 DEBUG oslo_concurrency.processutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.840 248514 DEBUG nova.compute.provider_tree [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.869 248514 DEBUG nova.scheduler.client.report [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.900 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.903 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.933 248514 INFO nova.scheduler.client.report [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Deleted allocations for instance 393f815d-d124-4aea-98c0-126aed0744bd
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.973 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.973 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:58:18 compute-0 nova_compute[248510]: 2025-12-13 08:58:18.991 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:58:19 compute-0 nova_compute[248510]: 2025-12-13 08:58:19.033 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:58:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/925900602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:58:19 compute-0 nova_compute[248510]: 2025-12-13 08:58:19.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:19 compute-0 nova_compute[248510]: 2025-12-13 08:58:19.526 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:58:19 compute-0 nova_compute[248510]: 2025-12-13 08:58:19.532 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:58:19 compute-0 nova_compute[248510]: 2025-12-13 08:58:19.572 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:58:19 compute-0 ceph-mon[76537]: pgmap v2916: 321 pgs: 321 active+clean; 91 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 16 KiB/s wr, 27 op/s
Dec 13 08:58:19 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1348939361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:58:19 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/925900602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:58:19 compute-0 nova_compute[248510]: 2025-12-13 08:58:19.616 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:58:19 compute-0 nova_compute[248510]: 2025-12-13 08:58:19.617 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2917: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 3.8 KiB/s wr, 30 op/s
Dec 13 08:58:20 compute-0 ceph-mon[76537]: pgmap v2917: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 3.8 KiB/s wr, 30 op/s
Dec 13 08:58:20 compute-0 nova_compute[248510]: 2025-12-13 08:58:20.619 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:58:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4147892310034525e-05 of space, bias 1.0, pg target 0.004244367693010357 quantized to 32 (current 32)
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697139695278017 of space, bias 1.0, pg target 0.20091419085834053 quantized to 32 (current 32)
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.734031256454457e-07 of space, bias 4.0, pg target 0.0006880837507745348 quantized to 16 (current 32)
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:58:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:58:21 compute-0 nova_compute[248510]: 2025-12-13 08:58:21.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:58:21 compute-0 nova_compute[248510]: 2025-12-13 08:58:21.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:58:21 compute-0 nova_compute[248510]: 2025-12-13 08:58:21.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:58:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2918: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Dec 13 08:58:22 compute-0 nova_compute[248510]: 2025-12-13 08:58:22.689 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:23.004 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:58:23 compute-0 ceph-mon[76537]: pgmap v2918: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Dec 13 08:58:23 compute-0 sshd-session[371413]: Invalid user solana from 193.32.162.146 port 43980
Dec 13 08:58:23 compute-0 sshd-session[371413]: Connection closed by invalid user solana 193.32.162.146 port 43980 [preauth]
Dec 13 08:58:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2919: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Dec 13 08:58:24 compute-0 nova_compute[248510]: 2025-12-13 08:58:24.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:24 compute-0 ceph-mon[76537]: pgmap v2919: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Dec 13 08:58:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:58:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2920: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:58:26 compute-0 nova_compute[248510]: 2025-12-13 08:58:26.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:58:26 compute-0 ceph-mon[76537]: pgmap v2920: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:58:27 compute-0 nova_compute[248510]: 2025-12-13 08:58:27.693 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2921: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:58:28 compute-0 ceph-mon[76537]: pgmap v2921: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 08:58:29 compute-0 nova_compute[248510]: 2025-12-13 08:58:29.443 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616294.4412525, 393f815d-d124-4aea-98c0-126aed0744bd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:58:29 compute-0 nova_compute[248510]: 2025-12-13 08:58:29.444 248514 INFO nova.compute.manager [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] VM Stopped (Lifecycle Event)
Dec 13 08:58:29 compute-0 nova_compute[248510]: 2025-12-13 08:58:29.468 248514 DEBUG nova.compute.manager [None req-e8ccf8cb-bdc1-4f23-a48c-49dde0cc9dc2 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:58:29 compute-0 nova_compute[248510]: 2025-12-13 08:58:29.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2922: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 680 B/s wr, 17 op/s
Dec 13 08:58:30 compute-0 ceph-mon[76537]: pgmap v2922: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 680 B/s wr, 17 op/s
Dec 13 08:58:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:58:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2923: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:58:32 compute-0 ceph-mon[76537]: pgmap v2923: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:58:32 compute-0 nova_compute[248510]: 2025-12-13 08:58:32.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:33 compute-0 podman[371416]: 2025-12-13 08:58:33.95900448 +0000 UTC m=+0.052730451 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 08:58:33 compute-0 podman[371415]: 2025-12-13 08:58:33.988170874 +0000 UTC m=+0.083152466 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Dec 13 08:58:33 compute-0 podman[371417]: 2025-12-13 08:58:33.990795398 +0000 UTC m=+0.079281731 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:58:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2924: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:58:34 compute-0 nova_compute[248510]: 2025-12-13 08:58:34.580 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:58:35 compute-0 ceph-mon[76537]: pgmap v2924: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:58:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2925: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:58:36 compute-0 ceph-mon[76537]: pgmap v2925: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:58:37 compute-0 nova_compute[248510]: 2025-12-13 08:58:37.695 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2926: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:58:39 compute-0 ceph-mon[76537]: pgmap v2926: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:58:39 compute-0 nova_compute[248510]: 2025-12-13 08:58:39.583 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:40 compute-0 nova_compute[248510]: 2025-12-13 08:58:40.054 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:40 compute-0 nova_compute[248510]: 2025-12-13 08:58:40.055 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:40 compute-0 nova_compute[248510]: 2025-12-13 08:58:40.075 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:58:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:58:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:58:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:58:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:58:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:58:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:58:40 compute-0 nova_compute[248510]: 2025-12-13 08:58:40.162 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:40 compute-0 nova_compute[248510]: 2025-12-13 08:58:40.163 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:40 compute-0 nova_compute[248510]: 2025-12-13 08:58:40.172 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:58:40 compute-0 nova_compute[248510]: 2025-12-13 08:58:40.173 248514 INFO nova.compute.claims [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:58:40 compute-0 nova_compute[248510]: 2025-12-13 08:58:40.276 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:58:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2927: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:58:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:58:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4266125456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:58:40 compute-0 nova_compute[248510]: 2025-12-13 08:58:40.857 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:58:40 compute-0 nova_compute[248510]: 2025-12-13 08:58:40.864 248514 DEBUG nova.compute.provider_tree [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:58:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.318 248514 DEBUG nova.scheduler.client.report [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.361 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.362 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.422 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.423 248514 DEBUG nova.network.neutron [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:58:41 compute-0 ceph-mon[76537]: pgmap v2927: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:58:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4266125456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.445 248514 INFO nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.468 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.639 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.641 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.642 248514 INFO nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Creating image(s)
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.674 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.699 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.731 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.736 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.825 248514 DEBUG nova.policy [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.829 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.829 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.830 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.831 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.856 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:58:41 compute-0 nova_compute[248510]: 2025-12-13 08:58:41.860 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:58:42 compute-0 nova_compute[248510]: 2025-12-13 08:58:42.180 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:58:42 compute-0 nova_compute[248510]: 2025-12-13 08:58:42.243 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:58:42 compute-0 nova_compute[248510]: 2025-12-13 08:58:42.321 248514 DEBUG nova.objects.instance [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:58:42 compute-0 nova_compute[248510]: 2025-12-13 08:58:42.338 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:58:42 compute-0 nova_compute[248510]: 2025-12-13 08:58:42.339 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Ensure instance console log exists: /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:58:42 compute-0 nova_compute[248510]: 2025-12-13 08:58:42.339 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:42 compute-0 nova_compute[248510]: 2025-12-13 08:58:42.339 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:42 compute-0 nova_compute[248510]: 2025-12-13 08:58:42.339 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2928: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:58:42 compute-0 nova_compute[248510]: 2025-12-13 08:58:42.697 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:42 compute-0 nova_compute[248510]: 2025-12-13 08:58:42.871 248514 DEBUG nova.network.neutron [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Successfully created port: 3abb490c-6aad-47d4-8200-febd480ac7db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:58:43 compute-0 ceph-mon[76537]: pgmap v2928: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 08:58:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2929: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:58:44 compute-0 nova_compute[248510]: 2025-12-13 08:58:44.396 248514 DEBUG nova.network.neutron [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Successfully updated port: 3abb490c-6aad-47d4-8200-febd480ac7db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:58:44 compute-0 nova_compute[248510]: 2025-12-13 08:58:44.414 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:58:44 compute-0 nova_compute[248510]: 2025-12-13 08:58:44.415 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:58:44 compute-0 nova_compute[248510]: 2025-12-13 08:58:44.415 248514 DEBUG nova.network.neutron [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:58:44 compute-0 nova_compute[248510]: 2025-12-13 08:58:44.517 248514 DEBUG nova.compute.manager [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-changed-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:58:44 compute-0 nova_compute[248510]: 2025-12-13 08:58:44.517 248514 DEBUG nova.compute.manager [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing instance network info cache due to event network-changed-3abb490c-6aad-47d4-8200-febd480ac7db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:58:44 compute-0 nova_compute[248510]: 2025-12-13 08:58:44.517 248514 DEBUG oslo_concurrency.lockutils [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:58:44 compute-0 nova_compute[248510]: 2025-12-13 08:58:44.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:45 compute-0 nova_compute[248510]: 2025-12-13 08:58:45.037 248514 DEBUG nova.network.neutron [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:58:45 compute-0 ceph-mon[76537]: pgmap v2929: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:58:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:58:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2930: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.437 248514 DEBUG nova.network.neutron [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.462 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.462 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Instance network_info: |[{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.463 248514 DEBUG oslo_concurrency.lockutils [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.463 248514 DEBUG nova.network.neutron [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing network info cache for port 3abb490c-6aad-47d4-8200-febd480ac7db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.466 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Start _get_guest_xml network_info=[{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.469 248514 WARNING nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.477 248514 DEBUG nova.virt.libvirt.host [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.478 248514 DEBUG nova.virt.libvirt.host [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.484 248514 DEBUG nova.virt.libvirt.host [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.484 248514 DEBUG nova.virt.libvirt.host [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.485 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.485 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.485 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.486 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.486 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.486 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.486 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.487 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.487 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.487 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.487 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.488 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.490 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:58:46 compute-0 nova_compute[248510]: 2025-12-13 08:58:46.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:58:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:58:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3206972685' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.059 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.083 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.087 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:58:47 compute-0 ceph-mon[76537]: pgmap v2930: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:58:47 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3206972685' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:58:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:58:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2755890126' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.664 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.666 248514 DEBUG nova.virt.libvirt.vif [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:58:41Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.667 248514 DEBUG nova.network.os_vif_util [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.668 248514 DEBUG nova.network.os_vif_util [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.669 248514 DEBUG nova.objects.instance [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.691 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:58:47 compute-0 nova_compute[248510]:   <uuid>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</uuid>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   <name>instance-0000007c</name>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:58:46</nova:creationTime>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:58:47 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:58:47 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:58:47 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:58:47 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:58:47 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:58:47 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:58:47 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:58:47 compute-0 nova_compute[248510]:         <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 08:58:47 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <system>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <entry name="serial">3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <entry name="uuid">3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     </system>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   <os>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   </os>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   <features>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   </features>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk">
Dec 13 08:58:47 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       </source>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:58:47 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config">
Dec 13 08:58:47 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       </source>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:58:47 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:a8:9a:b3"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <target dev="tap3abb490c-6a"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log" append="off"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <video>
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     </video>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:58:47 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:58:47 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:58:47 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:58:47 compute-0 nova_compute[248510]: </domain>
Dec 13 08:58:47 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.692 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Preparing to wait for external event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.693 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.693 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.693 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.694 248514 DEBUG nova.virt.libvirt.vif [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:58:41Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.694 248514 DEBUG nova.network.os_vif_util [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.695 248514 DEBUG nova.network.os_vif_util [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.695 248514 DEBUG os_vif [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.696 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.696 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.696 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.700 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.700 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3abb490c-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.701 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3abb490c-6a, col_values=(('external_ids', {'iface-id': '3abb490c-6aad-47d4-8200-febd480ac7db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:9a:b3', 'vm-uuid': '3a1e4b45-e8c6-41c4-8890-cb0775cb5786'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.702 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:47 compute-0 NetworkManager[50376]: <info>  [1765616327.7037] manager: (tap3abb490c-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.708 248514 INFO os_vif [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a')
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.770 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.771 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.772 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:a8:9a:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.773 248514 INFO nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Using config drive
Dec 13 08:58:47 compute-0 nova_compute[248510]: 2025-12-13 08:58:47.799 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:58:48 compute-0 nova_compute[248510]: 2025-12-13 08:58:48.256 248514 INFO nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Creating config drive at /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/disk.config
Dec 13 08:58:48 compute-0 nova_compute[248510]: 2025-12-13 08:58:48.265 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxy7nph9e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:58:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2931: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:58:48 compute-0 nova_compute[248510]: 2025-12-13 08:58:48.426 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxy7nph9e" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:58:48 compute-0 nova_compute[248510]: 2025-12-13 08:58:48.459 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:58:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2755890126' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:58:48 compute-0 nova_compute[248510]: 2025-12-13 08:58:48.464 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/disk.config 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:58:48 compute-0 nova_compute[248510]: 2025-12-13 08:58:48.656 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/disk.config 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:58:48 compute-0 nova_compute[248510]: 2025-12-13 08:58:48.657 248514 INFO nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Deleting local config drive /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/disk.config because it was imported into RBD.
Dec 13 08:58:48 compute-0 kernel: tap3abb490c-6a: entered promiscuous mode
Dec 13 08:58:48 compute-0 NetworkManager[50376]: <info>  [1765616328.7213] manager: (tap3abb490c-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/500)
Dec 13 08:58:48 compute-0 nova_compute[248510]: 2025-12-13 08:58:48.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:48 compute-0 ovn_controller[148476]: 2025-12-13T08:58:48Z|01223|binding|INFO|Claiming lport 3abb490c-6aad-47d4-8200-febd480ac7db for this chassis.
Dec 13 08:58:48 compute-0 ovn_controller[148476]: 2025-12-13T08:58:48Z|01224|binding|INFO|3abb490c-6aad-47d4-8200-febd480ac7db: Claiming fa:16:3e:a8:9a:b3 10.100.0.12
Dec 13 08:58:48 compute-0 nova_compute[248510]: 2025-12-13 08:58:48.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.735 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:9a:b3 10.100.0.12'], port_security=['fa:16:3e:a8:9a:b3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3a1e4b45-e8c6-41c4-8890-cb0775cb5786', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a303ed13-8629-4259-965d-e42689484f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03b6e291-05b2-4f8e-8278-d579b0a4e692', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b539ac67-be97-4028-97af-147cf6ca090d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3abb490c-6aad-47d4-8200-febd480ac7db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.736 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3abb490c-6aad-47d4-8200-febd480ac7db in datapath a303ed13-8629-4259-965d-e42689484f38 bound to our chassis
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.737 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a303ed13-8629-4259-965d-e42689484f38
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.751 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0bddc89c-220c-4650-a11d-6488acb9d627]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:48 compute-0 systemd-udevd[371802]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.752 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa303ed13-81 in ovnmeta-a303ed13-8629-4259-965d-e42689484f38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.755 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa303ed13-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.756 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d828fb27-6f0a-40b4-a349-080d48ef1bea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.758 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e41584f7-3f1d-49e9-be21-765f0ea4b14e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:48 compute-0 systemd-machined[210538]: New machine qemu-151-instance-0000007c.
Dec 13 08:58:48 compute-0 NetworkManager[50376]: <info>  [1765616328.7680] device (tap3abb490c-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:58:48 compute-0 NetworkManager[50376]: <info>  [1765616328.7689] device (tap3abb490c-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.776 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[546a19c0-67ae-495d-83fc-a65f4d7d4014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:48 compute-0 nova_compute[248510]: 2025-12-13 08:58:48.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:48 compute-0 systemd[1]: Started Virtual Machine qemu-151-instance-0000007c.
Dec 13 08:58:48 compute-0 ovn_controller[148476]: 2025-12-13T08:58:48Z|01225|binding|INFO|Setting lport 3abb490c-6aad-47d4-8200-febd480ac7db ovn-installed in OVS
Dec 13 08:58:48 compute-0 ovn_controller[148476]: 2025-12-13T08:58:48Z|01226|binding|INFO|Setting lport 3abb490c-6aad-47d4-8200-febd480ac7db up in Southbound
Dec 13 08:58:48 compute-0 nova_compute[248510]: 2025-12-13 08:58:48.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.803 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a09f2a-133c-479a-8e98-881ab509cfce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.835 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8e586074-429a-4505-9525-0000065369a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.840 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8167a85-6745-483f-8f7b-947a02b27fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:48 compute-0 NetworkManager[50376]: <info>  [1765616328.8412] manager: (tapa303ed13-80): new Veth device (/org/freedesktop/NetworkManager/Devices/501)
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.873 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[570460a3-fcd3-4a71-8044-18d26b2a36c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.877 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e12e2f5c-7163-481d-80bf-2106fbb400e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:48 compute-0 NetworkManager[50376]: <info>  [1765616328.9024] device (tapa303ed13-80): carrier: link connected
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.915 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2e83f44b-b15e-45b6-b765-945b65d193a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.946 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[480c7518-80dc-4143-8985-b61e4ac04c24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa303ed13-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:86:13:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890611, 'reachable_time': 17646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371835, 'error': None, 'target': 'ovnmeta-a303ed13-8629-4259-965d-e42689484f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.973 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcae4e2-daa1-4b44-8d4f-ff8f961ed3d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe86:13ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890611, 'tstamp': 890611}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371836, 'error': None, 'target': 'ovnmeta-a303ed13-8629-4259-965d-e42689484f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.004 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa21fa4-087d-4982-8070-f45b45f50dff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa303ed13-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:86:13:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890611, 'reachable_time': 17646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371837, 'error': None, 'target': 'ovnmeta-a303ed13-8629-4259-965d-e42689484f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.050 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[88ec1044-1950-4fec-8ae5-af9f0d47021a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.070 248514 DEBUG nova.network.neutron [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updated VIF entry in instance network info cache for port 3abb490c-6aad-47d4-8200-febd480ac7db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.072 248514 DEBUG nova.network.neutron [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.095 248514 DEBUG oslo_concurrency.lockutils [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.143 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a646bad2-5172-4ab4-b0aa-c0b41ceb300c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.145 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa303ed13-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.145 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.146 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa303ed13-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:49 compute-0 kernel: tapa303ed13-80: entered promiscuous mode
Dec 13 08:58:49 compute-0 NetworkManager[50376]: <info>  [1765616329.1503] manager: (tapa303ed13-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/502)
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.151 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa303ed13-80, col_values=(('external_ids', {'iface-id': '4a84c557-65ab-478a-95fb-44e9a95becd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.152 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:49 compute-0 ovn_controller[148476]: 2025-12-13T08:58:49Z|01227|binding|INFO|Releasing lport 4a84c557-65ab-478a-95fb-44e9a95becd6 from this chassis (sb_readonly=0)
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.167 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a303ed13-8629-4259-965d-e42689484f38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a303ed13-8629-4259-965d-e42689484f38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.168 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f2efb0d-fa07-4e30-b1a3-060496e7879b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.169 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-a303ed13-8629-4259-965d-e42689484f38
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/a303ed13-8629-4259-965d-e42689484f38.pid.haproxy
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID a303ed13-8629-4259-965d-e42689484f38
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:58:49 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.171 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a303ed13-8629-4259-965d-e42689484f38', 'env', 'PROCESS_TAG=haproxy-a303ed13-8629-4259-965d-e42689484f38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a303ed13-8629-4259-965d-e42689484f38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.212 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616329.2111478, 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.212 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] VM Started (Lifecycle Event)
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.232 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.236 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616329.2113965, 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.236 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] VM Paused (Lifecycle Event)
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.262 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.266 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.290 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.436 248514 DEBUG nova.compute.manager [req-c9e6fe68-14bc-480c-9ed6-6eb3b3e913bd req-9cdb2ec6-4fba-413e-b199-bfb1516f75de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.437 248514 DEBUG oslo_concurrency.lockutils [req-c9e6fe68-14bc-480c-9ed6-6eb3b3e913bd req-9cdb2ec6-4fba-413e-b199-bfb1516f75de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.437 248514 DEBUG oslo_concurrency.lockutils [req-c9e6fe68-14bc-480c-9ed6-6eb3b3e913bd req-9cdb2ec6-4fba-413e-b199-bfb1516f75de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.438 248514 DEBUG oslo_concurrency.lockutils [req-c9e6fe68-14bc-480c-9ed6-6eb3b3e913bd req-9cdb2ec6-4fba-413e-b199-bfb1516f75de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.438 248514 DEBUG nova.compute.manager [req-c9e6fe68-14bc-480c-9ed6-6eb3b3e913bd req-9cdb2ec6-4fba-413e-b199-bfb1516f75de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Processing event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.439 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.445 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616329.444818, 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.446 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] VM Resumed (Lifecycle Event)
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.448 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.454 248514 INFO nova.virt.libvirt.driver [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Instance spawned successfully.
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.455 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:58:49 compute-0 ceph-mon[76537]: pgmap v2931: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.483 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.493 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.500 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.501 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.502 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.503 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.504 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.505 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.535 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:58:49 compute-0 podman[371909]: 2025-12-13 08:58:49.583550141 +0000 UTC m=+0.048802346 container create 443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.584 248514 INFO nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Took 7.94 seconds to spawn the instance on the hypervisor.
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.584 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:58:49 compute-0 systemd[1]: Started libpod-conmon-443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565.scope.
Dec 13 08:58:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e10e5c7fc25c7b540b3e7c168ca8f4252eb841eb00fc201c536aa1133b3617a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:58:49 compute-0 podman[371909]: 2025-12-13 08:58:49.555499464 +0000 UTC m=+0.020751689 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.663 248514 INFO nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Took 9.53 seconds to build instance.
Dec 13 08:58:49 compute-0 podman[371909]: 2025-12-13 08:58:49.666239715 +0000 UTC m=+0.131491910 container init 443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 08:58:49 compute-0 podman[371909]: 2025-12-13 08:58:49.675245515 +0000 UTC m=+0.140497710 container start 443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 08:58:49 compute-0 nova_compute[248510]: 2025-12-13 08:58:49.695 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:49 compute-0 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [NOTICE]   (371928) : New worker (371930) forked
Dec 13 08:58:49 compute-0 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [NOTICE]   (371928) : Loading success.
Dec 13 08:58:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2932: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 13 08:58:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:58:51 compute-0 ceph-mon[76537]: pgmap v2932: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 13 08:58:51 compute-0 nova_compute[248510]: 2025-12-13 08:58:51.536 248514 DEBUG nova.compute.manager [req-0070644e-576d-4163-9208-0d6107e68cc5 req-199a5d21-37d0-4b07-9694-b9c2496e37b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:58:51 compute-0 nova_compute[248510]: 2025-12-13 08:58:51.536 248514 DEBUG oslo_concurrency.lockutils [req-0070644e-576d-4163-9208-0d6107e68cc5 req-199a5d21-37d0-4b07-9694-b9c2496e37b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:51 compute-0 nova_compute[248510]: 2025-12-13 08:58:51.536 248514 DEBUG oslo_concurrency.lockutils [req-0070644e-576d-4163-9208-0d6107e68cc5 req-199a5d21-37d0-4b07-9694-b9c2496e37b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:51 compute-0 nova_compute[248510]: 2025-12-13 08:58:51.537 248514 DEBUG oslo_concurrency.lockutils [req-0070644e-576d-4163-9208-0d6107e68cc5 req-199a5d21-37d0-4b07-9694-b9c2496e37b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:51 compute-0 nova_compute[248510]: 2025-12-13 08:58:51.537 248514 DEBUG nova.compute.manager [req-0070644e-576d-4163-9208-0d6107e68cc5 req-199a5d21-37d0-4b07-9694-b9c2496e37b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] No waiting events found dispatching network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:58:51 compute-0 nova_compute[248510]: 2025-12-13 08:58:51.537 248514 WARNING nova.compute.manager [req-0070644e-576d-4163-9208-0d6107e68cc5 req-199a5d21-37d0-4b07-9694-b9c2496e37b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received unexpected event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db for instance with vm_state active and task_state None.
Dec 13 08:58:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2933: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 13 08:58:52 compute-0 nova_compute[248510]: 2025-12-13 08:58:52.701 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:53 compute-0 ceph-mon[76537]: pgmap v2933: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 13 08:58:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2934: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 08:58:55 compute-0 nova_compute[248510]: 2025-12-13 08:58:55.256 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:55 compute-0 NetworkManager[50376]: <info>  [1765616335.2575] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Dec 13 08:58:55 compute-0 NetworkManager[50376]: <info>  [1765616335.2589] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/504)
Dec 13 08:58:55 compute-0 nova_compute[248510]: 2025-12-13 08:58:55.336 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:55 compute-0 ovn_controller[148476]: 2025-12-13T08:58:55Z|01228|binding|INFO|Releasing lport 4a84c557-65ab-478a-95fb-44e9a95becd6 from this chassis (sb_readonly=0)
Dec 13 08:58:55 compute-0 nova_compute[248510]: 2025-12-13 08:58:55.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:55.436 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:58:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:55.438 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:58:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:58:55.440 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:58:55 compute-0 ceph-mon[76537]: pgmap v2934: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 08:58:55 compute-0 nova_compute[248510]: 2025-12-13 08:58:55.828 248514 DEBUG nova.compute.manager [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-changed-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:58:55 compute-0 nova_compute[248510]: 2025-12-13 08:58:55.829 248514 DEBUG nova.compute.manager [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing instance network info cache due to event network-changed-3abb490c-6aad-47d4-8200-febd480ac7db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:58:55 compute-0 nova_compute[248510]: 2025-12-13 08:58:55.829 248514 DEBUG oslo_concurrency.lockutils [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:58:55 compute-0 nova_compute[248510]: 2025-12-13 08:58:55.829 248514 DEBUG oslo_concurrency.lockutils [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:58:55 compute-0 nova_compute[248510]: 2025-12-13 08:58:55.830 248514 DEBUG nova.network.neutron [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing network info cache for port 3abb490c-6aad-47d4-8200-febd480ac7db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:58:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:58:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2935: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:58:57 compute-0 ceph-mon[76537]: pgmap v2935: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:58:57 compute-0 nova_compute[248510]: 2025-12-13 08:58:57.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:58 compute-0 nova_compute[248510]: 2025-12-13 08:58:58.124 248514 DEBUG nova.network.neutron [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updated VIF entry in instance network info cache for port 3abb490c-6aad-47d4-8200-febd480ac7db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:58:58 compute-0 nova_compute[248510]: 2025-12-13 08:58:58.125 248514 DEBUG nova.network.neutron [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:58:58 compute-0 nova_compute[248510]: 2025-12-13 08:58:58.152 248514 DEBUG oslo_concurrency.lockutils [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:58:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2936: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:58:58 compute-0 ceph-mon[76537]: pgmap v2936: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:58:59 compute-0 nova_compute[248510]: 2025-12-13 08:58:59.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:58:59 compute-0 sudo[371940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:58:59 compute-0 sudo[371940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:58:59 compute-0 sudo[371940]: pam_unix(sudo:session): session closed for user root
Dec 13 08:58:59 compute-0 sudo[371965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 08:58:59 compute-0 sudo[371965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:59:00 compute-0 sudo[371965]: pam_unix(sudo:session): session closed for user root
Dec 13 08:59:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:59:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:59:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 08:59:00 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:59:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 08:59:00 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:59:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 08:59:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:59:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2937: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:59:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 08:59:00 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:59:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 08:59:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:59:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:59:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 08:59:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:59:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 08:59:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 08:59:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 08:59:00 compute-0 sudo[372020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:59:00 compute-0 sudo[372020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:59:00 compute-0 sudo[372020]: pam_unix(sudo:session): session closed for user root
Dec 13 08:59:00 compute-0 sudo[372045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 08:59:00 compute-0 sudo[372045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:59:00 compute-0 podman[372083]: 2025-12-13 08:59:00.819408087 +0000 UTC m=+0.050640711 container create 22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:59:00 compute-0 systemd[1]: Started libpod-conmon-22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674.scope.
Dec 13 08:59:00 compute-0 podman[372083]: 2025-12-13 08:59:00.796136247 +0000 UTC m=+0.027368901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:59:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:59:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:59:00 compute-0 podman[372083]: 2025-12-13 08:59:00.921232709 +0000 UTC m=+0.152465353 container init 22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:59:00 compute-0 podman[372083]: 2025-12-13 08:59:00.928190719 +0000 UTC m=+0.159423343 container start 22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:59:00 compute-0 brave_stonebraker[372099]: 167 167
Dec 13 08:59:00 compute-0 systemd[1]: libpod-22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674.scope: Deactivated successfully.
Dec 13 08:59:00 compute-0 podman[372083]: 2025-12-13 08:59:00.937309743 +0000 UTC m=+0.168542407 container attach 22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 08:59:00 compute-0 podman[372083]: 2025-12-13 08:59:00.938343438 +0000 UTC m=+0.169576072 container died 22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:59:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8c46a029027a47be49c4a701616fdb1758579ad943b8502e2822561f404a59-merged.mount: Deactivated successfully.
Dec 13 08:59:00 compute-0 podman[372083]: 2025-12-13 08:59:00.995711372 +0000 UTC m=+0.226944006 container remove 22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:59:01 compute-0 systemd[1]: libpod-conmon-22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674.scope: Deactivated successfully.
Dec 13 08:59:01 compute-0 podman[372123]: 2025-12-13 08:59:01.210609451 +0000 UTC m=+0.055054048 container create d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 08:59:01 compute-0 systemd[1]: Started libpod-conmon-d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163.scope.
Dec 13 08:59:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e18790b133a0004cc896a0a0d5da8bd0910ddc49a74a90cf6a4725a04895d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e18790b133a0004cc896a0a0d5da8bd0910ddc49a74a90cf6a4725a04895d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e18790b133a0004cc896a0a0d5da8bd0910ddc49a74a90cf6a4725a04895d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e18790b133a0004cc896a0a0d5da8bd0910ddc49a74a90cf6a4725a04895d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e18790b133a0004cc896a0a0d5da8bd0910ddc49a74a90cf6a4725a04895d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:01 compute-0 podman[372123]: 2025-12-13 08:59:01.187495096 +0000 UTC m=+0.031939733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:59:01 compute-0 podman[372123]: 2025-12-13 08:59:01.291592844 +0000 UTC m=+0.136037461 container init d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_albattani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 08:59:01 compute-0 podman[372123]: 2025-12-13 08:59:01.299063827 +0000 UTC m=+0.143508414 container start d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:59:01 compute-0 podman[372123]: 2025-12-13 08:59:01.302009729 +0000 UTC m=+0.146454336 container attach d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_albattani, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 08:59:01 compute-0 ceph-mon[76537]: pgmap v2937: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 08:59:01 compute-0 reverent_albattani[372138]: --> passed data devices: 0 physical, 3 LVM
Dec 13 08:59:01 compute-0 reverent_albattani[372138]: --> All data devices are unavailable
Dec 13 08:59:01 compute-0 systemd[1]: libpod-d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163.scope: Deactivated successfully.
Dec 13 08:59:01 compute-0 podman[372123]: 2025-12-13 08:59:01.88218513 +0000 UTC m=+0.726629717 container died d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_albattani, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:59:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8e18790b133a0004cc896a0a0d5da8bd0910ddc49a74a90cf6a4725a04895d7-merged.mount: Deactivated successfully.
Dec 13 08:59:01 compute-0 podman[372123]: 2025-12-13 08:59:01.946001992 +0000 UTC m=+0.790446579 container remove d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_albattani, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:59:01 compute-0 systemd[1]: libpod-conmon-d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163.scope: Deactivated successfully.
Dec 13 08:59:01 compute-0 sudo[372045]: pam_unix(sudo:session): session closed for user root
Dec 13 08:59:02 compute-0 sudo[372170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:59:02 compute-0 sudo[372170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:59:02 compute-0 sudo[372170]: pam_unix(sudo:session): session closed for user root
Dec 13 08:59:02 compute-0 sudo[372195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 08:59:02 compute-0 sudo[372195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:59:02 compute-0 ovn_controller[148476]: 2025-12-13T08:59:02Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:9a:b3 10.100.0.12
Dec 13 08:59:02 compute-0 ovn_controller[148476]: 2025-12-13T08:59:02Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:9a:b3 10.100.0.12
Dec 13 08:59:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2938: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 69 op/s
Dec 13 08:59:02 compute-0 podman[372231]: 2025-12-13 08:59:02.460925876 +0000 UTC m=+0.047184076 container create 48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:59:02 compute-0 systemd[1]: Started libpod-conmon-48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832.scope.
Dec 13 08:59:02 compute-0 podman[372231]: 2025-12-13 08:59:02.437618236 +0000 UTC m=+0.023876436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:59:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:59:02 compute-0 podman[372231]: 2025-12-13 08:59:02.562466792 +0000 UTC m=+0.148724982 container init 48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:59:02 compute-0 podman[372231]: 2025-12-13 08:59:02.5758904 +0000 UTC m=+0.162148560 container start 48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 08:59:02 compute-0 frosty_morse[372247]: 167 167
Dec 13 08:59:02 compute-0 systemd[1]: libpod-48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832.scope: Deactivated successfully.
Dec 13 08:59:02 compute-0 podman[372231]: 2025-12-13 08:59:02.583964908 +0000 UTC m=+0.170223068 container attach 48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_morse, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:59:02 compute-0 podman[372231]: 2025-12-13 08:59:02.584398989 +0000 UTC m=+0.170657149 container died 48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_morse, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 13 08:59:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b87c14ed2798f45c63a7fe2d0007b8eadef639e10c99da9187a88d1267192eb9-merged.mount: Deactivated successfully.
Dec 13 08:59:02 compute-0 podman[372231]: 2025-12-13 08:59:02.629908323 +0000 UTC m=+0.216166483 container remove 48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_morse, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 08:59:02 compute-0 systemd[1]: libpod-conmon-48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832.scope: Deactivated successfully.
Dec 13 08:59:02 compute-0 nova_compute[248510]: 2025-12-13 08:59:02.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:02 compute-0 nova_compute[248510]: 2025-12-13 08:59:02.709 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:02 compute-0 podman[372271]: 2025-12-13 08:59:02.804859465 +0000 UTC m=+0.048823746 container create eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 08:59:02 compute-0 systemd[1]: Started libpod-conmon-eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7.scope.
Dec 13 08:59:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ddf2167f01cc678bc5c1de60a3911305cac235c8f70855a08183743e7a9d179/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ddf2167f01cc678bc5c1de60a3911305cac235c8f70855a08183743e7a9d179/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ddf2167f01cc678bc5c1de60a3911305cac235c8f70855a08183743e7a9d179/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ddf2167f01cc678bc5c1de60a3911305cac235c8f70855a08183743e7a9d179/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:02 compute-0 podman[372271]: 2025-12-13 08:59:02.783377509 +0000 UTC m=+0.027341820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:59:02 compute-0 podman[372271]: 2025-12-13 08:59:02.906719358 +0000 UTC m=+0.150683659 container init eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Dec 13 08:59:02 compute-0 podman[372271]: 2025-12-13 08:59:02.913517145 +0000 UTC m=+0.157481426 container start eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:59:02 compute-0 podman[372271]: 2025-12-13 08:59:02.922831763 +0000 UTC m=+0.166796044 container attach eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 08:59:03 compute-0 nova_compute[248510]: 2025-12-13 08:59:03.185 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:03 compute-0 nova_compute[248510]: 2025-12-13 08:59:03.187 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:03 compute-0 nova_compute[248510]: 2025-12-13 08:59:03.207 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:59:03 compute-0 focused_leakey[372287]: {
Dec 13 08:59:03 compute-0 focused_leakey[372287]:     "0": [
Dec 13 08:59:03 compute-0 focused_leakey[372287]:         {
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "devices": [
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "/dev/loop3"
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             ],
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_name": "ceph_lv0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_size": "21470642176",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "name": "ceph_lv0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "tags": {
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.cluster_name": "ceph",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.crush_device_class": "",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.encrypted": "0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.objectstore": "bluestore",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.osd_id": "0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.type": "block",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.vdo": "0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.with_tpm": "0"
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             },
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "type": "block",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "vg_name": "ceph_vg0"
Dec 13 08:59:03 compute-0 focused_leakey[372287]:         }
Dec 13 08:59:03 compute-0 focused_leakey[372287]:     ],
Dec 13 08:59:03 compute-0 focused_leakey[372287]:     "1": [
Dec 13 08:59:03 compute-0 focused_leakey[372287]:         {
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "devices": [
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "/dev/loop4"
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             ],
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_name": "ceph_lv1",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_size": "21470642176",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "name": "ceph_lv1",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "tags": {
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.cluster_name": "ceph",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.crush_device_class": "",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.encrypted": "0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.objectstore": "bluestore",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.osd_id": "1",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.type": "block",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.vdo": "0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.with_tpm": "0"
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             },
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "type": "block",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "vg_name": "ceph_vg1"
Dec 13 08:59:03 compute-0 focused_leakey[372287]:         }
Dec 13 08:59:03 compute-0 focused_leakey[372287]:     ],
Dec 13 08:59:03 compute-0 focused_leakey[372287]:     "2": [
Dec 13 08:59:03 compute-0 focused_leakey[372287]:         {
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "devices": [
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "/dev/loop5"
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             ],
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_name": "ceph_lv2",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_size": "21470642176",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "name": "ceph_lv2",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "tags": {
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.cluster_name": "ceph",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.crush_device_class": "",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.encrypted": "0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.objectstore": "bluestore",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.osd_id": "2",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.type": "block",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.vdo": "0",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:                 "ceph.with_tpm": "0"
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             },
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "type": "block",
Dec 13 08:59:03 compute-0 focused_leakey[372287]:             "vg_name": "ceph_vg2"
Dec 13 08:59:03 compute-0 focused_leakey[372287]:         }
Dec 13 08:59:03 compute-0 focused_leakey[372287]:     ]
Dec 13 08:59:03 compute-0 focused_leakey[372287]: }
Dec 13 08:59:03 compute-0 systemd[1]: libpod-eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7.scope: Deactivated successfully.
Dec 13 08:59:03 compute-0 podman[372271]: 2025-12-13 08:59:03.25976987 +0000 UTC m=+0.503734151 container died eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 08:59:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ddf2167f01cc678bc5c1de60a3911305cac235c8f70855a08183743e7a9d179-merged.mount: Deactivated successfully.
Dec 13 08:59:03 compute-0 podman[372271]: 2025-12-13 08:59:03.309145079 +0000 UTC m=+0.553109360 container remove eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:59:03 compute-0 nova_compute[248510]: 2025-12-13 08:59:03.313 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:03 compute-0 nova_compute[248510]: 2025-12-13 08:59:03.313 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:03 compute-0 systemd[1]: libpod-conmon-eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7.scope: Deactivated successfully.
Dec 13 08:59:03 compute-0 nova_compute[248510]: 2025-12-13 08:59:03.325 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:59:03 compute-0 nova_compute[248510]: 2025-12-13 08:59:03.325 248514 INFO nova.compute.claims [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:59:03 compute-0 sudo[372195]: pam_unix(sudo:session): session closed for user root
Dec 13 08:59:03 compute-0 sudo[372307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 08:59:03 compute-0 sudo[372307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:59:03 compute-0 sudo[372307]: pam_unix(sudo:session): session closed for user root
Dec 13 08:59:03 compute-0 ceph-mon[76537]: pgmap v2938: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 69 op/s
Dec 13 08:59:03 compute-0 nova_compute[248510]: 2025-12-13 08:59:03.466 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:03 compute-0 sudo[372332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 08:59:03 compute-0 sudo[372332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:59:03 compute-0 podman[372388]: 2025-12-13 08:59:03.818431915 +0000 UTC m=+0.056157486 container create e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:59:03 compute-0 systemd[1]: Started libpod-conmon-e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617.scope.
Dec 13 08:59:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:59:03 compute-0 podman[372388]: 2025-12-13 08:59:03.789429275 +0000 UTC m=+0.027154866 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:59:03 compute-0 podman[372388]: 2025-12-13 08:59:03.906819759 +0000 UTC m=+0.144545360 container init e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:59:03 compute-0 podman[372388]: 2025-12-13 08:59:03.914864296 +0000 UTC m=+0.152589867 container start e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 08:59:03 compute-0 podman[372388]: 2025-12-13 08:59:03.920837372 +0000 UTC m=+0.158562983 container attach e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 08:59:03 compute-0 youthful_pasteur[372405]: 167 167
Dec 13 08:59:03 compute-0 systemd[1]: libpod-e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617.scope: Deactivated successfully.
Dec 13 08:59:03 compute-0 podman[372388]: 2025-12-13 08:59:03.925140237 +0000 UTC m=+0.162865808 container died e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 13 08:59:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e625005514a5cbe9380e7eca3a8fadddea3cf3d55c2345b35dae3a77b8c4bf0-merged.mount: Deactivated successfully.
Dec 13 08:59:03 compute-0 podman[372388]: 2025-12-13 08:59:03.985895434 +0000 UTC m=+0.223620995 container remove e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 08:59:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:59:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/244800968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:59:04 compute-0 systemd[1]: libpod-conmon-e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617.scope: Deactivated successfully.
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.052 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.062 248514 DEBUG nova.compute.provider_tree [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.082 248514 DEBUG nova.scheduler.client.report [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.119 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.120 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:59:04 compute-0 podman[372428]: 2025-12-13 08:59:04.144172329 +0000 UTC m=+0.079446306 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 08:59:04 compute-0 podman[372429]: 2025-12-13 08:59:04.156009128 +0000 UTC m=+0.091675895 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:59:04 compute-0 podman[372426]: 2025-12-13 08:59:04.174908361 +0000 UTC m=+0.108952068 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.175 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.175 248514 DEBUG nova.network.neutron [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:59:04 compute-0 podman[372485]: 2025-12-13 08:59:04.19816087 +0000 UTC m=+0.053824888 container create d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ride, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.198 248514 INFO nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.218 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:59:04 compute-0 systemd[1]: Started libpod-conmon-d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1.scope.
Dec 13 08:59:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b58f4f7c30495239d1e4973d368a8696284684136a2bfd257c14be496ac67c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b58f4f7c30495239d1e4973d368a8696284684136a2bfd257c14be496ac67c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b58f4f7c30495239d1e4973d368a8696284684136a2bfd257c14be496ac67c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b58f4f7c30495239d1e4973d368a8696284684136a2bfd257c14be496ac67c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:04 compute-0 podman[372485]: 2025-12-13 08:59:04.175995177 +0000 UTC m=+0.031659205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 08:59:04 compute-0 podman[372485]: 2025-12-13 08:59:04.279660365 +0000 UTC m=+0.135324393 container init d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 08:59:04 compute-0 podman[372485]: 2025-12-13 08:59:04.28639186 +0000 UTC m=+0.142055878 container start d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ride, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 08:59:04 compute-0 podman[372485]: 2025-12-13 08:59:04.291928465 +0000 UTC m=+0.147592503 container attach d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ride, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.318 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.319 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.320 248514 INFO nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Creating image(s)
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.343 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.368 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2939: 321 pgs: 321 active+clean; 121 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.390 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.393 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:04 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/244800968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.464 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.465 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.466 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.466 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.488 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.491 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.923 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:04 compute-0 nova_compute[248510]: 2025-12-13 08:59:04.978 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] resizing rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:59:05 compute-0 lvm[372732]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 08:59:05 compute-0 lvm[372732]: VG ceph_vg0 finished
Dec 13 08:59:05 compute-0 lvm[372735]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:59:05 compute-0 lvm[372735]: VG ceph_vg1 finished
Dec 13 08:59:05 compute-0 nova_compute[248510]: 2025-12-13 08:59:05.063 248514 DEBUG nova.objects.instance [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'migration_context' on Instance uuid 48db77d9-f4d5-44dd-852e-aa10f98ace90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:59:05 compute-0 lvm[372755]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 08:59:05 compute-0 lvm[372755]: VG ceph_vg2 finished
Dec 13 08:59:05 compute-0 nova_compute[248510]: 2025-12-13 08:59:05.086 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:59:05 compute-0 nova_compute[248510]: 2025-12-13 08:59:05.087 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Ensure instance console log exists: /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:59:05 compute-0 nova_compute[248510]: 2025-12-13 08:59:05.087 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:05 compute-0 nova_compute[248510]: 2025-12-13 08:59:05.088 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:05 compute-0 nova_compute[248510]: 2025-12-13 08:59:05.088 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:05 compute-0 lvm[372756]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 08:59:05 compute-0 lvm[372756]: VG ceph_vg1 finished
Dec 13 08:59:05 compute-0 nova_compute[248510]: 2025-12-13 08:59:05.101 248514 DEBUG nova.policy [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5fd410579fa429ba0f7f680590cd86a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '77d40177a4804671aa9c5da343bc2ed4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:59:05 compute-0 eloquent_ride[372508]: {}
Dec 13 08:59:05 compute-0 systemd[1]: libpod-d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1.scope: Deactivated successfully.
Dec 13 08:59:05 compute-0 systemd[1]: libpod-d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1.scope: Consumed 1.386s CPU time.
Dec 13 08:59:05 compute-0 podman[372485]: 2025-12-13 08:59:05.194919367 +0000 UTC m=+1.050583385 container died d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ride, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 08:59:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-16b58f4f7c30495239d1e4973d368a8696284684136a2bfd257c14be496ac67c-merged.mount: Deactivated successfully.
Dec 13 08:59:05 compute-0 podman[372485]: 2025-12-13 08:59:05.247003282 +0000 UTC m=+1.102667320 container remove d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 08:59:05 compute-0 systemd[1]: libpod-conmon-d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1.scope: Deactivated successfully.
Dec 13 08:59:05 compute-0 sudo[372332]: pam_unix(sudo:session): session closed for user root
Dec 13 08:59:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 08:59:05 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:59:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 08:59:05 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:59:05 compute-0 sudo[372770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 08:59:05 compute-0 sudo[372770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 08:59:05 compute-0 sudo[372770]: pam_unix(sudo:session): session closed for user root
Dec 13 08:59:05 compute-0 ceph-mon[76537]: pgmap v2939: 321 pgs: 321 active+clean; 121 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 13 08:59:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:59:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 08:59:05 compute-0 nova_compute[248510]: 2025-12-13 08:59:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:59:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:59:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2940: 321 pgs: 321 active+clean; 121 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 08:59:06 compute-0 nova_compute[248510]: 2025-12-13 08:59:06.661 248514 DEBUG nova.network.neutron [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Successfully created port: 558f49fb-1002-4c28-8ba2-ea32384811d7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:59:06 compute-0 ceph-mon[76537]: pgmap v2940: 321 pgs: 321 active+clean; 121 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 08:59:07 compute-0 nova_compute[248510]: 2025-12-13 08:59:07.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:07 compute-0 nova_compute[248510]: 2025-12-13 08:59:07.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:59:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2941: 321 pgs: 321 active+clean; 151 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 3.3 MiB/s wr, 66 op/s
Dec 13 08:59:08 compute-0 ceph-mon[76537]: pgmap v2941: 321 pgs: 321 active+clean; 151 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 3.3 MiB/s wr, 66 op/s
Dec 13 08:59:09 compute-0 nova_compute[248510]: 2025-12-13 08:59:09.191 248514 DEBUG nova.network.neutron [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Successfully updated port: 558f49fb-1002-4c28-8ba2-ea32384811d7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:59:09 compute-0 nova_compute[248510]: 2025-12-13 08:59:09.306 248514 INFO nova.compute.manager [None req-3bc97e41-a435-403f-a4b6-a0c83f2d2d83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Get console output
Dec 13 08:59:09 compute-0 nova_compute[248510]: 2025-12-13 08:59:09.313 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 08:59:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:59:09
Dec 13 08:59:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 08:59:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 08:59:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'images', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec 13 08:59:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 08:59:09 compute-0 nova_compute[248510]: 2025-12-13 08:59:09.331 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:59:09 compute-0 nova_compute[248510]: 2025-12-13 08:59:09.332 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:59:09 compute-0 nova_compute[248510]: 2025-12-13 08:59:09.332 248514 DEBUG nova.network.neutron [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:59:09 compute-0 nova_compute[248510]: 2025-12-13 08:59:09.488 248514 DEBUG nova.compute.manager [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-changed-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:09 compute-0 nova_compute[248510]: 2025-12-13 08:59:09.489 248514 DEBUG nova.compute.manager [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Refreshing instance network info cache due to event network-changed-558f49fb-1002-4c28-8ba2-ea32384811d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:59:09 compute-0 nova_compute[248510]: 2025-12-13 08:59:09.490 248514 DEBUG oslo_concurrency.lockutils [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:59:10 compute-0 nova_compute[248510]: 2025-12-13 08:59:10.042 248514 DEBUG nova.network.neutron [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2942: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Dec 13 08:59:10 compute-0 ceph-mon[76537]: pgmap v2942: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 08:59:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 08:59:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.444 248514 DEBUG nova.network.neutron [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updating instance_info_cache with network_info: [{"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.473 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.474 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Instance network_info: |[{"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.474 248514 DEBUG oslo_concurrency.lockutils [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.474 248514 DEBUG nova.network.neutron [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Refreshing network info cache for port 558f49fb-1002-4c28-8ba2-ea32384811d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.477 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Start _get_guest_xml network_info=[{"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.484 248514 WARNING nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.491 248514 DEBUG nova.virt.libvirt.host [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.492 248514 DEBUG nova.virt.libvirt.host [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.496 248514 DEBUG nova.virt.libvirt.host [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.496 248514 DEBUG nova.virt.libvirt.host [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.496 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.497 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.497 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.497 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.498 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.498 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.498 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.498 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.499 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.499 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.499 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.499 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:59:11 compute-0 nova_compute[248510]: 2025-12-13 08:59:11.502 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:59:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/874647145' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.134 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/874647145' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.167 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.175 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2943: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:59:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3782659699' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.759 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.761 248514 DEBUG nova.virt.libvirt.vif [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:59:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1185653133',display_name='tempest-TestNetworkAdvancedServerOps-server-1185653133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1185653133',id=125,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBErdAvAxyanLSk6zlk6UR7jIkt03qXx3MfH8+NhdcAyZSFQd+bfa57m4HcmVco5q1ZITwwE9Zaq6j0nsDhbNKnt90E6ScVaQ/3xa8beJWwD79hCpdjT8jJu+dOAQXgrxSQ==',key_name='tempest-TestNetworkAdvancedServerOps-1501689960',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-rf02ewgw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:59:04Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=48db77d9-f4d5-44dd-852e-aa10f98ace90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.761 248514 DEBUG nova.network.os_vif_util [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.762 248514 DEBUG nova.network.os_vif_util [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.763 248514 DEBUG nova.objects.instance [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 48db77d9-f4d5-44dd-852e-aa10f98ace90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.980 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:59:12 compute-0 nova_compute[248510]:   <uuid>48db77d9-f4d5-44dd-852e-aa10f98ace90</uuid>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   <name>instance-0000007d</name>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1185653133</nova:name>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:59:11</nova:creationTime>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:59:12 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:59:12 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:59:12 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:59:12 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:59:12 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:59:12 compute-0 nova_compute[248510]:         <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 08:59:12 compute-0 nova_compute[248510]:         <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:59:12 compute-0 nova_compute[248510]:         <nova:port uuid="558f49fb-1002-4c28-8ba2-ea32384811d7">
Dec 13 08:59:12 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <system>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <entry name="serial">48db77d9-f4d5-44dd-852e-aa10f98ace90</entry>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <entry name="uuid">48db77d9-f4d5-44dd-852e-aa10f98ace90</entry>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     </system>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   <os>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   </os>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   <features>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   </features>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/48db77d9-f4d5-44dd-852e-aa10f98ace90_disk">
Dec 13 08:59:12 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       </source>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:59:12 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/48db77d9-f4d5-44dd-852e-aa10f98ace90_disk.config">
Dec 13 08:59:12 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       </source>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:59:12 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:ba:ad:5d"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <target dev="tap558f49fb-10"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/console.log" append="off"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <video>
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     </video>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:59:12 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:59:12 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:59:12 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:59:12 compute-0 nova_compute[248510]: </domain>
Dec 13 08:59:12 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.981 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Preparing to wait for external event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.983 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.984 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.984 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.986 248514 DEBUG nova.virt.libvirt.vif [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:59:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1185653133',display_name='tempest-TestNetworkAdvancedServerOps-server-1185653133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1185653133',id=125,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBErdAvAxyanLSk6zlk6UR7jIkt03qXx3MfH8+NhdcAyZSFQd+bfa57m4HcmVco5q1ZITwwE9Zaq6j0nsDhbNKnt90E6ScVaQ/3xa8beJWwD79hCpdjT8jJu+dOAQXgrxSQ==',key_name='tempest-TestNetworkAdvancedServerOps-1501689960',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-rf02ewgw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:59:04Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=48db77d9-f4d5-44dd-852e-aa10f98ace90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.987 248514 DEBUG nova.network.os_vif_util [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.988 248514 DEBUG nova.network.os_vif_util [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.988 248514 DEBUG os_vif [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.989 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.990 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.991 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.996 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap558f49fb-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.997 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap558f49fb-10, col_values=(('external_ids', {'iface-id': '558f49fb-1002-4c28-8ba2-ea32384811d7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:ad:5d', 'vm-uuid': '48db77d9-f4d5-44dd-852e-aa10f98ace90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:12 compute-0 nova_compute[248510]: 2025-12-13 08:59:12.999 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:13 compute-0 NetworkManager[50376]: <info>  [1765616353.0011] manager: (tap558f49fb-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Dec 13 08:59:13 compute-0 nova_compute[248510]: 2025-12-13 08:59:13.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:59:13 compute-0 nova_compute[248510]: 2025-12-13 08:59:13.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:13 compute-0 nova_compute[248510]: 2025-12-13 08:59:13.010 248514 INFO os_vif [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10')
Dec 13 08:59:13 compute-0 ceph-mon[76537]: pgmap v2943: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Dec 13 08:59:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3782659699' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:59:13 compute-0 nova_compute[248510]: 2025-12-13 08:59:13.535 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:59:13 compute-0 nova_compute[248510]: 2025-12-13 08:59:13.535 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:59:13 compute-0 nova_compute[248510]: 2025-12-13 08:59:13.535 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No VIF found with MAC fa:16:3e:ba:ad:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:59:13 compute-0 nova_compute[248510]: 2025-12-13 08:59:13.536 248514 INFO nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Using config drive
Dec 13 08:59:13 compute-0 nova_compute[248510]: 2025-12-13 08:59:13.558 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.184 248514 DEBUG nova.network.neutron [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updated VIF entry in instance network info cache for port 558f49fb-1002-4c28-8ba2-ea32384811d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.185 248514 DEBUG nova.network.neutron [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updating instance_info_cache with network_info: [{"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.328 248514 INFO nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Creating config drive at /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/disk.config
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.334 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr6quna_n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2944: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.495 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr6quna_n" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.527 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.532 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/disk.config 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.668 248514 DEBUG oslo_concurrency.lockutils [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.702 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/disk.config 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.702 248514 INFO nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Deleting local config drive /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/disk.config because it was imported into RBD.
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 08:59:14 compute-0 kernel: tap558f49fb-10: entered promiscuous mode
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 08:59:14 compute-0 NetworkManager[50376]: <info>  [1765616354.7759] manager: (tap558f49fb-10): new Tun device (/org/freedesktop/NetworkManager/Devices/506)
Dec 13 08:59:14 compute-0 ovn_controller[148476]: 2025-12-13T08:59:14Z|01229|binding|INFO|Claiming lport 558f49fb-1002-4c28-8ba2-ea32384811d7 for this chassis.
Dec 13 08:59:14 compute-0 ovn_controller[148476]: 2025-12-13T08:59:14Z|01230|binding|INFO|558f49fb-1002-4c28-8ba2-ea32384811d7: Claiming fa:16:3e:ba:ad:5d 10.100.0.5
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.779 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:14 compute-0 ovn_controller[148476]: 2025-12-13T08:59:14Z|01231|binding|INFO|Setting lport 558f49fb-1002-4c28-8ba2-ea32384811d7 ovn-installed in OVS
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.794 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:14 compute-0 nova_compute[248510]: 2025-12-13 08:59:14.795 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:14 compute-0 systemd-udevd[372930]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:59:14 compute-0 systemd-machined[210538]: New machine qemu-152-instance-0000007d.
Dec 13 08:59:14 compute-0 NetworkManager[50376]: <info>  [1765616354.8241] device (tap558f49fb-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:59:14 compute-0 NetworkManager[50376]: <info>  [1765616354.8251] device (tap558f49fb-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:59:14 compute-0 systemd[1]: Started Virtual Machine qemu-152-instance-0000007d.
Dec 13 08:59:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.957 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:ad:5d 10.100.0.5'], port_security=['fa:16:3e:ba:ad:5d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '48db77d9-f4d5-44dd-852e-aa10f98ace90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67032c28-66fa-4d75-99b8-37c3e61c4140', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af406687-c2e5-4e03-9415-af4130e7b9af, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=558f49fb-1002-4c28-8ba2-ea32384811d7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:59:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.959 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 558f49fb-1002-4c28-8ba2-ea32384811d7 in datapath 4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 bound to our chassis
Dec 13 08:59:14 compute-0 ovn_controller[148476]: 2025-12-13T08:59:14Z|01232|binding|INFO|Setting lport 558f49fb-1002-4c28-8ba2-ea32384811d7 up in Southbound
Dec 13 08:59:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.961 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4977afa6-1a1c-43aa-9d3f-7b6747f5eb37
Dec 13 08:59:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.974 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c41f574f-600c-4831-85da-a71aa38e557d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.975 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4977afa6-11 in ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:59:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.977 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4977afa6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:59:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.977 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[07c4480b-fce7-4035-9537-10e5bf2c24cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.978 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c027f0f4-2338-4774-893f-df2027ad3fbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.998 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[12083d34-6f81-4915-866d-552da9b33c56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.014 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7f6d5b-8328-4821-8267-16ce1bc3e18a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.044 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[95e35800-9c9a-4c6c-97b2-5dc23ce7e65d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 NetworkManager[50376]: <info>  [1765616355.0508] manager: (tap4977afa6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/507)
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.052 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9f44d1-5a60-4903-9a71-0b6e11cde0ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.091 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5016dd5c-39ca-4ce4-9a26-806138e6f56a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.094 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[223685dc-49d7-43a6-970f-6eeeb385df18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 NetworkManager[50376]: <info>  [1765616355.1199] device (tap4977afa6-10): carrier: link connected
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.124 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd8094e-2990-4a99-a299-1a1de2862ecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.140 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.141 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c9bde7-a46e-4e55-a436-3301740fd31f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4977afa6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:01:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 893233, 'reachable_time': 22249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372964, 'error': None, 'target': 'ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.161 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42fb8274-1630-4693-bb06-f8c0ea37443f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:1f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 893233, 'tstamp': 893233}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372965, 'error': None, 'target': 'ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.180 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[95cc0e49-f936-4aa1-b6fa-e14c7923b88e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4977afa6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:01:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 893233, 'reachable_time': 22249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372966, 'error': None, 'target': 'ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.218 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f99cc681-3ef4-4cc1-8249-a7b06c547767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.286 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec6c0bb-ed8d-46ce-884b-6d69c345a614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.288 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4977afa6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.288 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.288 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4977afa6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:15 compute-0 kernel: tap4977afa6-10: entered promiscuous mode
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.330 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:15 compute-0 NetworkManager[50376]: <info>  [1765616355.3312] manager: (tap4977afa6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.334 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4977afa6-10, col_values=(('external_ids', {'iface-id': '9bf5ac40-d652-492d-a8ac-2cb23dbb3344'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:15 compute-0 ovn_controller[148476]: 2025-12-13T08:59:15Z|01233|binding|INFO|Releasing lport 9bf5ac40-d652-492d-a8ac-2cb23dbb3344 from this chassis (sb_readonly=0)
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.350 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.351 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4977afa6-1a1c-43aa-9d3f-7b6747f5eb37.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4977afa6-1a1c-43aa-9d3f-7b6747f5eb37.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.352 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1972fc2e-2230-4c67-8991-28372a460b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.353 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/4977afa6-1a1c-43aa-9d3f-7b6747f5eb37.pid.haproxy
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 4977afa6-1a1c-43aa-9d3f-7b6747f5eb37
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:59:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.354 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'env', 'PROCESS_TAG=haproxy-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4977afa6-1a1c-43aa-9d3f-7b6747f5eb37.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:59:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 08:59:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1196940316' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:59:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 08:59:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1196940316' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.399 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616355.3981373, 48db77d9-f4d5-44dd-852e-aa10f98ace90 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.400 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] VM Started (Lifecycle Event)
Dec 13 08:59:15 compute-0 ceph-mon[76537]: pgmap v2944: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Dec 13 08:59:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1196940316' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 08:59:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1196940316' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.670 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.674 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616355.3987756, 48db77d9-f4d5-44dd-852e-aa10f98ace90 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.674 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] VM Paused (Lifecycle Event)
Dec 13 08:59:15 compute-0 podman[373040]: 2025-12-13 08:59:15.760380977 +0000 UTC m=+0.049036872 container create cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 08:59:15 compute-0 systemd[1]: Started libpod-conmon-cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c.scope.
Dec 13 08:59:15 compute-0 podman[373040]: 2025-12-13 08:59:15.734802251 +0000 UTC m=+0.023458166 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:59:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86652b75459e4a25df06217dc80b414d5f5e77aaf98604cf5669cd3f9ba9632a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:15 compute-0 podman[373040]: 2025-12-13 08:59:15.858121319 +0000 UTC m=+0.146777254 container init cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 08:59:15 compute-0 podman[373040]: 2025-12-13 08:59:15.863082521 +0000 UTC m=+0.151738416 container start cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:59:15 compute-0 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [NOTICE]   (373059) : New worker (373061) forked
Dec 13 08:59:15 compute-0 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [NOTICE]   (373059) : Loading success.
Dec 13 08:59:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.913564) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616355913720, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 977, "num_deletes": 250, "total_data_size": 1443813, "memory_usage": 1463928, "flush_reason": "Manual Compaction"}
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616355924156, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 900113, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57599, "largest_seqno": 58575, "table_properties": {"data_size": 896233, "index_size": 1531, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10418, "raw_average_key_size": 20, "raw_value_size": 887864, "raw_average_value_size": 1768, "num_data_blocks": 70, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616271, "oldest_key_time": 1765616271, "file_creation_time": 1765616355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 10642 microseconds, and 6027 cpu microseconds.
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.924215) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 900113 bytes OK
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.924238) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.925461) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.925484) EVENT_LOG_v1 {"time_micros": 1765616355925477, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.925506) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 1439124, prev total WAL file size 1439124, number of live WAL files 2.
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.926310) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323631' seq:72057594037927935, type:22 .. '6D6772737461740032353132' seq:0, type:0; will stop at (end)
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(879KB)], [134(11MB)]
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616355926368, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 12805206, "oldest_snapshot_seqno": -1}
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.958 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.962 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7894 keys, 9838205 bytes, temperature: kUnknown
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616355986746, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 9838205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9787888, "index_size": 29468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19781, "raw_key_size": 206407, "raw_average_key_size": 26, "raw_value_size": 9649191, "raw_average_value_size": 1222, "num_data_blocks": 1148, "num_entries": 7894, "num_filter_entries": 7894, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.987065) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9838205 bytes
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.988401) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.6 rd, 162.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.4 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(25.2) write-amplify(10.9) OK, records in: 8370, records dropped: 476 output_compression: NoCompression
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.988442) EVENT_LOG_v1 {"time_micros": 1765616355988426, "job": 82, "event": "compaction_finished", "compaction_time_micros": 60506, "compaction_time_cpu_micros": 23523, "output_level": 6, "num_output_files": 1, "total_output_size": 9838205, "num_input_records": 8370, "num_output_records": 7894, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616355988984, "job": 82, "event": "table_file_deletion", "file_number": 136}
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616355992898, "job": 82, "event": "table_file_deletion", "file_number": 134}
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.926253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.992987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.992994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.992997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.993000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:59:15 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.993003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 08:59:15 compute-0 nova_compute[248510]: 2025-12-13 08:59:15.998 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:59:16 compute-0 nova_compute[248510]: 2025-12-13 08:59:16.051 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:59:16 compute-0 nova_compute[248510]: 2025-12-13 08:59:16.052 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:59:16 compute-0 nova_compute[248510]: 2025-12-13 08:59:16.053 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 08:59:16 compute-0 nova_compute[248510]: 2025-12-13 08:59:16.053 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:59:16 compute-0 nova_compute[248510]: 2025-12-13 08:59:16.279 248514 DEBUG oslo_concurrency.lockutils [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "interface-3a1e4b45-e8c6-41c4-8890-cb0775cb5786-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:16 compute-0 nova_compute[248510]: 2025-12-13 08:59:16.280 248514 DEBUG oslo_concurrency.lockutils [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-3a1e4b45-e8c6-41c4-8890-cb0775cb5786-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:16 compute-0 nova_compute[248510]: 2025-12-13 08:59:16.281 248514 DEBUG nova.objects.instance [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'flavor' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:59:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2945: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:59:16 compute-0 ceph-mon[76537]: pgmap v2945: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 08:59:17 compute-0 nova_compute[248510]: 2025-12-13 08:59:17.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:17.999 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.100 248514 DEBUG nova.objects.instance [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_requests' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.129 248514 DEBUG nova.network.neutron [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.264 248514 DEBUG nova.compute.manager [req-ecdf282b-1e26-4bbc-9cef-18f2cd53c121 req-3fe27cec-1ac9-403d-a38a-a30d4206f7bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.264 248514 DEBUG oslo_concurrency.lockutils [req-ecdf282b-1e26-4bbc-9cef-18f2cd53c121 req-3fe27cec-1ac9-403d-a38a-a30d4206f7bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.264 248514 DEBUG oslo_concurrency.lockutils [req-ecdf282b-1e26-4bbc-9cef-18f2cd53c121 req-3fe27cec-1ac9-403d-a38a-a30d4206f7bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.264 248514 DEBUG oslo_concurrency.lockutils [req-ecdf282b-1e26-4bbc-9cef-18f2cd53c121 req-3fe27cec-1ac9-403d-a38a-a30d4206f7bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.265 248514 DEBUG nova.compute.manager [req-ecdf282b-1e26-4bbc-9cef-18f2cd53c121 req-3fe27cec-1ac9-403d-a38a-a30d4206f7bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Processing event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.265 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.271 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616358.2708058, 48db77d9-f4d5-44dd-852e-aa10f98ace90 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.271 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] VM Resumed (Lifecycle Event)
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.273 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.277 248514 INFO nova.virt.libvirt.driver [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Instance spawned successfully.
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.278 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.297 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.303 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.307 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.307 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.308 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.308 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.308 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.309 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.336 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:59:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2946: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.386 248514 INFO nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Took 14.07 seconds to spawn the instance on the hypervisor.
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.387 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.470 248514 INFO nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Took 15.19 seconds to build instance.
Dec 13 08:59:18 compute-0 nova_compute[248510]: 2025-12-13 08:59:18.494 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:19 compute-0 nova_compute[248510]: 2025-12-13 08:59:19.130 248514 DEBUG nova.policy [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:59:19 compute-0 ceph-mon[76537]: pgmap v2946: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 08:59:19 compute-0 nova_compute[248510]: 2025-12-13 08:59:19.933 248514 DEBUG nova.network.neutron [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Successfully created port: a30b0da9-1ee1-4092-a86b-5fa66fe76492 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.164 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.220 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.220 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.220 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.221 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.221 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.260 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.261 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.261 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.261 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.261 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2947: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 627 KiB/s wr, 45 op/s
Dec 13 08:59:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:59:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562676911' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.874 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.990 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.991 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.998 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:59:20 compute-0 nova_compute[248510]: 2025-12-13 08:59:20.999 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.211 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.213 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3178MB free_disk=59.92096870671958GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.213 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.214 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011220948804967251 of space, bias 1.0, pg target 0.33662846414901754 quantized to 32 (current 32)
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697035056271496 of space, bias 1.0, pg target 0.20091105168814488 quantized to 32 (current 32)
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.729684235412037e-07 of space, bias 4.0, pg target 0.0006875621082494445 quantized to 16 (current 32)
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 08:59:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 08:59:21 compute-0 ceph-mon[76537]: pgmap v2947: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 627 KiB/s wr, 45 op/s
Dec 13 08:59:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/562676911' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.533 248514 DEBUG nova.compute.manager [req-44d1463d-3ab7-4dcf-9fb0-082691f024fd req-89bb49fd-f99c-4f2a-9be7-a52a193c0398 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.534 248514 DEBUG oslo_concurrency.lockutils [req-44d1463d-3ab7-4dcf-9fb0-082691f024fd req-89bb49fd-f99c-4f2a-9be7-a52a193c0398 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.534 248514 DEBUG oslo_concurrency.lockutils [req-44d1463d-3ab7-4dcf-9fb0-082691f024fd req-89bb49fd-f99c-4f2a-9be7-a52a193c0398 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.535 248514 DEBUG oslo_concurrency.lockutils [req-44d1463d-3ab7-4dcf-9fb0-082691f024fd req-89bb49fd-f99c-4f2a-9be7-a52a193c0398 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.535 248514 DEBUG nova.compute.manager [req-44d1463d-3ab7-4dcf-9fb0-082691f024fd req-89bb49fd-f99c-4f2a-9be7-a52a193c0398 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] No waiting events found dispatching network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.536 248514 WARNING nova.compute.manager [req-44d1463d-3ab7-4dcf-9fb0-082691f024fd req-89bb49fd-f99c-4f2a-9be7-a52a193c0398 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received unexpected event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 for instance with vm_state active and task_state None.
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.538 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.539 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 48db77d9-f4d5-44dd-852e-aa10f98ace90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.539 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.539 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.625 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:21.761 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:59:21 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:21.762 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 08:59:21 compute-0 nova_compute[248510]: 2025-12-13 08:59:21.763 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:59:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1045510146' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.207 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.213 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.233 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.274 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.275 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.328 248514 DEBUG nova.network.neutron [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Successfully updated port: a30b0da9-1ee1-4092-a86b-5fa66fe76492 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.344 248514 DEBUG oslo_concurrency.lockutils [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.344 248514 DEBUG oslo_concurrency.lockutils [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.345 248514 DEBUG nova.network.neutron [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:59:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2948: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 17 KiB/s wr, 19 op/s
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.486 248514 DEBUG nova.compute.manager [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-changed-a30b0da9-1ee1-4092-a86b-5fa66fe76492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.488 248514 DEBUG nova.compute.manager [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing instance network info cache due to event network-changed-a30b0da9-1ee1-4092-a86b-5fa66fe76492. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.489 248514 DEBUG oslo_concurrency.lockutils [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:59:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1045510146' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:59:22 compute-0 nova_compute[248510]: 2025-12-13 08:59:22.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:23 compute-0 nova_compute[248510]: 2025-12-13 08:59:23.001 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:23 compute-0 ceph-mon[76537]: pgmap v2948: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 17 KiB/s wr, 19 op/s
Dec 13 08:59:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2949: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 74 op/s
Dec 13 08:59:24 compute-0 ceph-mon[76537]: pgmap v2949: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 74 op/s
Dec 13 08:59:24 compute-0 nova_compute[248510]: 2025-12-13 08:59:24.825 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:59:24 compute-0 nova_compute[248510]: 2025-12-13 08:59:24.827 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:59:24 compute-0 nova_compute[248510]: 2025-12-13 08:59:24.827 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.161 248514 DEBUG nova.compute.manager [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-changed-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.162 248514 DEBUG nova.compute.manager [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Refreshing instance network info cache due to event network-changed-558f49fb-1002-4c28-8ba2-ea32384811d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.162 248514 DEBUG oslo_concurrency.lockutils [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.163 248514 DEBUG oslo_concurrency.lockutils [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.163 248514 DEBUG nova.network.neutron [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Refreshing network info cache for port 558f49fb-1002-4c28-8ba2-ea32384811d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.397 248514 DEBUG nova.network.neutron [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.424 248514 DEBUG oslo_concurrency.lockutils [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.426 248514 DEBUG oslo_concurrency.lockutils [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.427 248514 DEBUG nova.network.neutron [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing network info cache for port a30b0da9-1ee1-4092-a86b-5fa66fe76492 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.431 248514 DEBUG nova.virt.libvirt.vif [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.431 248514 DEBUG nova.network.os_vif_util [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.432 248514 DEBUG nova.network.os_vif_util [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.433 248514 DEBUG os_vif [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.434 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.435 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.435 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.439 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa30b0da9-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.440 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa30b0da9-1e, col_values=(('external_ids', {'iface-id': 'a30b0da9-1ee1-4092-a86b-5fa66fe76492', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:10:e4', 'vm-uuid': '3a1e4b45-e8c6-41c4-8890-cb0775cb5786'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:25 compute-0 NetworkManager[50376]: <info>  [1765616365.4446] manager: (tapa30b0da9-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.444 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.451 248514 INFO os_vif [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e')
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.453 248514 DEBUG nova.virt.libvirt.vif [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.453 248514 DEBUG nova.network.os_vif_util [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.454 248514 DEBUG nova.network.os_vif_util [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.457 248514 DEBUG nova.virt.libvirt.guest [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] attach device xml: <interface type="ethernet">
Dec 13 08:59:25 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:65:10:e4"/>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   <target dev="tapa30b0da9-1e"/>
Dec 13 08:59:25 compute-0 nova_compute[248510]: </interface>
Dec 13 08:59:25 compute-0 nova_compute[248510]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 08:59:25 compute-0 kernel: tapa30b0da9-1e: entered promiscuous mode
Dec 13 08:59:25 compute-0 ovn_controller[148476]: 2025-12-13T08:59:25Z|01234|binding|INFO|Claiming lport a30b0da9-1ee1-4092-a86b-5fa66fe76492 for this chassis.
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:25 compute-0 ovn_controller[148476]: 2025-12-13T08:59:25Z|01235|binding|INFO|a30b0da9-1ee1-4092-a86b-5fa66fe76492: Claiming fa:16:3e:65:10:e4 10.100.0.18
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.480 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:10:e4 10.100.0.18'], port_security=['fa:16:3e:65:10:e4 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': '3a1e4b45-e8c6-41c4-8890-cb0775cb5786', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16fae4da-5722-4e42-b101-44d9ef244421', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9deccf5-6c40-469a-b45c-630adf312a35, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a30b0da9-1ee1-4092-a86b-5fa66fe76492) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.484 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a30b0da9-1ee1-4092-a86b-5fa66fe76492 in datapath 16fae4da-5722-4e42-b101-44d9ef244421 bound to our chassis
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.486 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16fae4da-5722-4e42-b101-44d9ef244421
Dec 13 08:59:25 compute-0 NetworkManager[50376]: <info>  [1765616365.4892] manager: (tapa30b0da9-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/510)
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.499 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc4762a-468f-4e82-a1fa-d205f4e79c6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.500 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap16fae4da-51 in ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.502 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap16fae4da-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.502 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[edebb7af-5364-449c-ad27-d99024fd1039]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.503 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ce9431-cc00-4c2a-8b6a-587f37d8edf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.516 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[757a04fa-80e2-420f-be79-ab8829ed7d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.517 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:25 compute-0 ovn_controller[148476]: 2025-12-13T08:59:25Z|01236|binding|INFO|Setting lport a30b0da9-1ee1-4092-a86b-5fa66fe76492 ovn-installed in OVS
Dec 13 08:59:25 compute-0 ovn_controller[148476]: 2025-12-13T08:59:25Z|01237|binding|INFO|Setting lport a30b0da9-1ee1-4092-a86b-5fa66fe76492 up in Southbound
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.533 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a5abe9-ddb3-45d0-ab12-554cba14959e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 systemd-udevd[373126]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:59:25 compute-0 NetworkManager[50376]: <info>  [1765616365.5570] device (tapa30b0da9-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:59:25 compute-0 NetworkManager[50376]: <info>  [1765616365.5578] device (tapa30b0da9-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.566 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0d19c9c1-92ff-4dfd-9a8c-ba51c6ffbe93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 NetworkManager[50376]: <info>  [1765616365.5730] manager: (tap16fae4da-50): new Veth device (/org/freedesktop/NetworkManager/Devices/511)
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.572 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b6232eba-a4f2-4082-8c16-40f329fcef92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.604 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[05fdf637-171e-4b2a-ad7e-ded9ec28f7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.608 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b685db-09c7-46fb-a026-fe9f40505cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.624 248514 DEBUG nova.virt.libvirt.driver [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.624 248514 DEBUG nova.virt.libvirt.driver [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.624 248514 DEBUG nova.virt.libvirt.driver [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:a8:9a:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.625 248514 DEBUG nova.virt.libvirt.driver [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:65:10:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:59:25 compute-0 NetworkManager[50376]: <info>  [1765616365.6336] device (tap16fae4da-50): carrier: link connected
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.641 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[780fcd4b-b856-4c3f-a310-9389455af041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.652 248514 DEBUG nova.virt.libvirt.guest [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:59:25 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:59:25</nova:creationTime>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 08:59:25 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 08:59:25 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 08:59:25 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 08:59:25 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:59:25 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 08:59:25 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:59:25 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 08:59:25 compute-0 nova_compute[248510]:     <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 08:59:25 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 08:59:25 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:59:25 compute-0 nova_compute[248510]:     <nova:port uuid="a30b0da9-1ee1-4092-a86b-5fa66fe76492">
Dec 13 08:59:25 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Dec 13 08:59:25 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 08:59:25 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 08:59:25 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 08:59:25 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.659 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7746aaf6-cf85-4356-9403-782d559ad541]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16fae4da-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:9f:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 894284, 'reachable_time': 25312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373149, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.674 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[681f9295-c0a4-406f-877d-66df489f8b5b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb3:9f41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 894284, 'tstamp': 894284}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373150, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.692 248514 DEBUG oslo_concurrency.lockutils [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-3a1e4b45-e8c6-41c4-8890-cb0775cb5786-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.698 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1b772db0-50a6-4687-9e58-918f6f288ca1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16fae4da-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:9f:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 894284, 'reachable_time': 25312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373151, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.728 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c316c8-747d-4ce4-b86f-191b0bd71939]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.800 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4bdf4b70-471a-4acc-873b-43235225d331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.801 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16fae4da-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.802 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.802 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16fae4da-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.804 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:25 compute-0 NetworkManager[50376]: <info>  [1765616365.8049] manager: (tap16fae4da-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Dec 13 08:59:25 compute-0 kernel: tap16fae4da-50: entered promiscuous mode
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.807 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.807 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16fae4da-50, col_values=(('external_ids', {'iface-id': '7a0536ce-73f9-4aa4-8989-da712c91214d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:25 compute-0 ovn_controller[148476]: 2025-12-13T08:59:25Z|01238|binding|INFO|Releasing lport 7a0536ce-73f9-4aa4-8989-da712c91214d from this chassis (sb_readonly=0)
Dec 13 08:59:25 compute-0 nova_compute[248510]: 2025-12-13 08:59:25.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.824 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/16fae4da-5722-4e42-b101-44d9ef244421.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/16fae4da-5722-4e42-b101-44d9ef244421.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.825 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[56adeef9-1663-42be-84a3-b0d50870b940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.825 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: global
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-16fae4da-5722-4e42-b101-44d9ef244421
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/16fae4da-5722-4e42-b101-44d9ef244421.pid.haproxy
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 16fae4da-5722-4e42-b101-44d9ef244421
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 08:59:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.827 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'env', 'PROCESS_TAG=haproxy-16fae4da-5722-4e42-b101-44d9ef244421', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/16fae4da-5722-4e42-b101-44d9ef244421.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 08:59:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:59:26 compute-0 podman[373182]: 2025-12-13 08:59:26.229771471 +0000 UTC m=+0.055782926 container create 221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:59:26 compute-0 systemd[1]: Started libpod-conmon-221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040.scope.
Dec 13 08:59:26 compute-0 podman[373182]: 2025-12-13 08:59:26.203752105 +0000 UTC m=+0.029763610 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 08:59:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 08:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b75e401be1e6964868d0a53502eed4a247827032e021c62c59365aba03a75fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 08:59:26 compute-0 podman[373182]: 2025-12-13 08:59:26.326918559 +0000 UTC m=+0.152930094 container init 221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 08:59:26 compute-0 podman[373182]: 2025-12-13 08:59:26.337735184 +0000 UTC m=+0.163746669 container start 221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 08:59:26 compute-0 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [NOTICE]   (373202) : New worker (373204) forked
Dec 13 08:59:26 compute-0 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [NOTICE]   (373202) : Loading success.
Dec 13 08:59:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2950: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 13 08:59:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:26.765 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:26 compute-0 nova_compute[248510]: 2025-12-13 08:59:26.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.288 248514 DEBUG nova.network.neutron [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updated VIF entry in instance network info cache for port a30b0da9-1ee1-4092-a86b-5fa66fe76492. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.289 248514 DEBUG nova.network.neutron [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.320 248514 DEBUG oslo_concurrency.lockutils [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.331 248514 DEBUG nova.compute.manager [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-plugged-a30b0da9-1ee1-4092-a86b-5fa66fe76492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.331 248514 DEBUG oslo_concurrency.lockutils [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.331 248514 DEBUG oslo_concurrency.lockutils [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.332 248514 DEBUG oslo_concurrency.lockutils [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.332 248514 DEBUG nova.compute.manager [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] No waiting events found dispatching network-vif-plugged-a30b0da9-1ee1-4092-a86b-5fa66fe76492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.332 248514 WARNING nova.compute.manager [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received unexpected event network-vif-plugged-a30b0da9-1ee1-4092-a86b-5fa66fe76492 for instance with vm_state active and task_state None.
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.332 248514 DEBUG nova.compute.manager [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-plugged-a30b0da9-1ee1-4092-a86b-5fa66fe76492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.333 248514 DEBUG oslo_concurrency.lockutils [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.333 248514 DEBUG oslo_concurrency.lockutils [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.333 248514 DEBUG oslo_concurrency.lockutils [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.333 248514 DEBUG nova.compute.manager [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] No waiting events found dispatching network-vif-plugged-a30b0da9-1ee1-4092-a86b-5fa66fe76492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.334 248514 WARNING nova.compute.manager [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received unexpected event network-vif-plugged-a30b0da9-1ee1-4092-a86b-5fa66fe76492 for instance with vm_state active and task_state None.
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.457 248514 DEBUG nova.network.neutron [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updated VIF entry in instance network info cache for port 558f49fb-1002-4c28-8ba2-ea32384811d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.458 248514 DEBUG nova.network.neutron [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updating instance_info_cache with network_info: [{"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:59:27 compute-0 ceph-mon[76537]: pgmap v2950: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.480 248514 DEBUG oslo_concurrency.lockutils [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:59:27 compute-0 ovn_controller[148476]: 2025-12-13T08:59:27Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:10:e4 10.100.0.18
Dec 13 08:59:27 compute-0 ovn_controller[148476]: 2025-12-13T08:59:27Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:10:e4 10.100.0.18
Dec 13 08:59:27 compute-0 nova_compute[248510]: 2025-12-13 08:59:27.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2951: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 13 08:59:29 compute-0 ceph-mon[76537]: pgmap v2951: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 13 08:59:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2952: 321 pgs: 321 active+clean; 172 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 381 KiB/s wr, 83 op/s
Dec 13 08:59:30 compute-0 nova_compute[248510]: 2025-12-13 08:59:30.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:30 compute-0 ovn_controller[148476]: 2025-12-13T08:59:30Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:ad:5d 10.100.0.5
Dec 13 08:59:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:59:30 compute-0 ovn_controller[148476]: 2025-12-13T08:59:30Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:ad:5d 10.100.0.5
Dec 13 08:59:31 compute-0 ceph-mon[76537]: pgmap v2952: 321 pgs: 321 active+clean; 172 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 381 KiB/s wr, 83 op/s
Dec 13 08:59:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2953: 321 pgs: 321 active+clean; 172 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 381 KiB/s wr, 64 op/s
Dec 13 08:59:32 compute-0 nova_compute[248510]: 2025-12-13 08:59:32.722 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:33 compute-0 nova_compute[248510]: 2025-12-13 08:59:33.323 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:33 compute-0 nova_compute[248510]: 2025-12-13 08:59:33.323 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:33 compute-0 nova_compute[248510]: 2025-12-13 08:59:33.350 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 08:59:33 compute-0 nova_compute[248510]: 2025-12-13 08:59:33.448 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:33 compute-0 nova_compute[248510]: 2025-12-13 08:59:33.448 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:33 compute-0 nova_compute[248510]: 2025-12-13 08:59:33.461 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 08:59:33 compute-0 nova_compute[248510]: 2025-12-13 08:59:33.461 248514 INFO nova.compute.claims [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Claim successful on node compute-0.ctlplane.example.com
Dec 13 08:59:33 compute-0 nova_compute[248510]: 2025-12-13 08:59:33.616 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:33 compute-0 ceph-mon[76537]: pgmap v2953: 321 pgs: 321 active+clean; 172 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 381 KiB/s wr, 64 op/s
Dec 13 08:59:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:59:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2178757918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.213 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.219 248514 DEBUG nova.compute.provider_tree [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.237 248514 DEBUG nova.scheduler.client.report [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.259 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.260 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.319 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.319 248514 DEBUG nova.network.neutron [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.339 248514 INFO nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.369 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 08:59:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2954: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.529 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.530 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.530 248514 INFO nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Creating image(s)
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.556 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.583 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.608 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.613 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.704 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.705 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.705 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.706 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.727 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:34 compute-0 nova_compute[248510]: 2025-12-13 08:59:34.731 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2178757918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:59:34 compute-0 ceph-mon[76537]: pgmap v2954: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 13 08:59:34 compute-0 podman[373332]: 2025-12-13 08:59:34.984822162 +0000 UTC m=+0.069143974 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:59:34 compute-0 podman[373331]: 2025-12-13 08:59:34.998128568 +0000 UTC m=+0.085450903 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 13 08:59:35 compute-0 podman[373330]: 2025-12-13 08:59:35.023728584 +0000 UTC m=+0.113234063 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 08:59:35 compute-0 nova_compute[248510]: 2025-12-13 08:59:35.081 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:35 compute-0 nova_compute[248510]: 2025-12-13 08:59:35.116 248514 DEBUG nova.policy [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 08:59:35 compute-0 nova_compute[248510]: 2025-12-13 08:59:35.156 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 08:59:35 compute-0 nova_compute[248510]: 2025-12-13 08:59:35.234 248514 DEBUG nova.objects.instance [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid decfc3d0-e424-4304-8b3c-51daa9bd0fb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:59:35 compute-0 nova_compute[248510]: 2025-12-13 08:59:35.258 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 08:59:35 compute-0 nova_compute[248510]: 2025-12-13 08:59:35.258 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Ensure instance console log exists: /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 08:59:35 compute-0 nova_compute[248510]: 2025-12-13 08:59:35.258 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:35 compute-0 nova_compute[248510]: 2025-12-13 08:59:35.259 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:35 compute-0 nova_compute[248510]: 2025-12-13 08:59:35.259 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:35 compute-0 nova_compute[248510]: 2025-12-13 08:59:35.445 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:59:36 compute-0 nova_compute[248510]: 2025-12-13 08:59:36.117 248514 INFO nova.compute.manager [None req-e800da01-0f44-4459-8172-b36ee7c61edd a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Get console output
Dec 13 08:59:36 compute-0 nova_compute[248510]: 2025-12-13 08:59:36.123 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 08:59:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2955: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 08:59:36 compute-0 nova_compute[248510]: 2025-12-13 08:59:36.454 248514 INFO nova.compute.manager [None req-61a90f97-9a65-4f8f-8cc4-c8dab41913cd a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Pausing
Dec 13 08:59:36 compute-0 nova_compute[248510]: 2025-12-13 08:59:36.455 248514 DEBUG nova.objects.instance [None req-61a90f97-9a65-4f8f-8cc4-c8dab41913cd a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'flavor' on Instance uuid 48db77d9-f4d5-44dd-852e-aa10f98ace90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:59:36 compute-0 nova_compute[248510]: 2025-12-13 08:59:36.505 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616376.5042372, 48db77d9-f4d5-44dd-852e-aa10f98ace90 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:59:36 compute-0 nova_compute[248510]: 2025-12-13 08:59:36.506 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] VM Paused (Lifecycle Event)
Dec 13 08:59:36 compute-0 nova_compute[248510]: 2025-12-13 08:59:36.510 248514 DEBUG nova.compute.manager [None req-61a90f97-9a65-4f8f-8cc4-c8dab41913cd a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:59:36 compute-0 nova_compute[248510]: 2025-12-13 08:59:36.539 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:59:36 compute-0 nova_compute[248510]: 2025-12-13 08:59:36.543 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:59:37 compute-0 nova_compute[248510]: 2025-12-13 08:59:37.437 248514 DEBUG nova.network.neutron [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Successfully created port: d03635fc-13b4-44c2-baca-088d0efb07d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 08:59:37 compute-0 ceph-mon[76537]: pgmap v2955: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 08:59:37 compute-0 nova_compute[248510]: 2025-12-13 08:59:37.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:38 compute-0 nova_compute[248510]: 2025-12-13 08:59:38.138 248514 DEBUG nova.network.neutron [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Successfully updated port: d03635fc-13b4-44c2-baca-088d0efb07d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 08:59:38 compute-0 nova_compute[248510]: 2025-12-13 08:59:38.167 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-decfc3d0-e424-4304-8b3c-51daa9bd0fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:59:38 compute-0 nova_compute[248510]: 2025-12-13 08:59:38.167 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-decfc3d0-e424-4304-8b3c-51daa9bd0fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:59:38 compute-0 nova_compute[248510]: 2025-12-13 08:59:38.167 248514 DEBUG nova.network.neutron [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 08:59:38 compute-0 nova_compute[248510]: 2025-12-13 08:59:38.279 248514 DEBUG nova.compute.manager [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-changed-d03635fc-13b4-44c2-baca-088d0efb07d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:38 compute-0 nova_compute[248510]: 2025-12-13 08:59:38.280 248514 DEBUG nova.compute.manager [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Refreshing instance network info cache due to event network-changed-d03635fc-13b4-44c2-baca-088d0efb07d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:59:38 compute-0 nova_compute[248510]: 2025-12-13 08:59:38.280 248514 DEBUG oslo_concurrency.lockutils [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-decfc3d0-e424-4304-8b3c-51daa9bd0fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:59:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2956: 321 pgs: 321 active+clean; 229 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 3.1 MiB/s wr, 79 op/s
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.074 248514 DEBUG nova.network.neutron [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 08:59:39 compute-0 ceph-mon[76537]: pgmap v2956: 321 pgs: 321 active+clean; 229 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 3.1 MiB/s wr, 79 op/s
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.906 248514 DEBUG nova.network.neutron [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Updating instance_info_cache with network_info: [{"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.931 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-decfc3d0-e424-4304-8b3c-51daa9bd0fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.931 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Instance network_info: |[{"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.932 248514 DEBUG oslo_concurrency.lockutils [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-decfc3d0-e424-4304-8b3c-51daa9bd0fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.932 248514 DEBUG nova.network.neutron [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Refreshing network info cache for port d03635fc-13b4-44c2-baca-088d0efb07d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.934 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Start _get_guest_xml network_info=[{"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.941 248514 WARNING nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.947 248514 DEBUG nova.virt.libvirt.host [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.947 248514 DEBUG nova.virt.libvirt.host [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.955 248514 DEBUG nova.virt.libvirt.host [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.956 248514 DEBUG nova.virt.libvirt.host [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.956 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.957 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.957 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.957 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.958 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.958 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.958 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.958 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.958 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.958 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.959 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.959 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 08:59:39 compute-0 nova_compute[248510]: 2025-12-13 08:59:39.961 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:59:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:59:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:59:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:59:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 08:59:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 08:59:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2957: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Dec 13 08:59:40 compute-0 nova_compute[248510]: 2025-12-13 08:59:40.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:59:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4178981125' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:59:40 compute-0 nova_compute[248510]: 2025-12-13 08:59:40.584 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:40 compute-0 nova_compute[248510]: 2025-12-13 08:59:40.606 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:40 compute-0 nova_compute[248510]: 2025-12-13 08:59:40.609 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:40 compute-0 nova_compute[248510]: 2025-12-13 08:59:40.671 248514 INFO nova.compute.manager [None req-d912cc48-9eed-4f45-8d32-9b0a43897626 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Get console output
Dec 13 08:59:40 compute-0 nova_compute[248510]: 2025-12-13 08:59:40.683 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 08:59:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:59:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 08:59:41 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1530065640' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.161 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.162 248514 DEBUG nova.virt.libvirt.vif [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:59:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-828624011',display_name='tempest-TestNetworkBasicOps-server-828624011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-828624011',id=126,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbfdn0vI5uiRW56FmHbdPgrEcVctEQCQ0JrumFh8Rkx0TdD9XCLn6dyTpTncQ+yr075mpR5CmHJQEBcBXMTv9XR0fwQ9NFmvKHwa7Jo4OFodBgAjBRwq8dmNkZu2ZP1tQ==',key_name='tempest-TestNetworkBasicOps-1642652749',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-19q60cz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:59:34Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=decfc3d0-e424-4304-8b3c-51daa9bd0fb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.163 248514 DEBUG nova.network.os_vif_util [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.164 248514 DEBUG nova.network.os_vif_util [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.165 248514 DEBUG nova.objects.instance [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid decfc3d0-e424-4304-8b3c-51daa9bd0fb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.182 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 08:59:41 compute-0 nova_compute[248510]:   <uuid>decfc3d0-e424-4304-8b3c-51daa9bd0fb6</uuid>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   <name>instance-0000007e</name>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   <metadata>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-828624011</nova:name>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 08:59:39</nova:creationTime>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 08:59:41 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 08:59:41 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 08:59:41 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 08:59:41 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 08:59:41 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 08:59:41 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 08:59:41 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 08:59:41 compute-0 nova_compute[248510]:         <nova:port uuid="d03635fc-13b4-44c2-baca-088d0efb07d9">
Dec 13 08:59:41 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   </metadata>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <system>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <entry name="serial">decfc3d0-e424-4304-8b3c-51daa9bd0fb6</entry>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <entry name="uuid">decfc3d0-e424-4304-8b3c-51daa9bd0fb6</entry>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     </system>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   <os>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   </os>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   <features>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <apic/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   </features>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   </clock>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   </cpu>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   <devices>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk">
Dec 13 08:59:41 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       </source>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:59:41 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk.config">
Dec 13 08:59:41 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       </source>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 08:59:41 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       </auth>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     </disk>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:c0:f2:de"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <target dev="tapd03635fc-13"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     </interface>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/console.log" append="off"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     </serial>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <video>
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     </video>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     </rng>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 08:59:41 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 08:59:41 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 08:59:41 compute-0 nova_compute[248510]:   </devices>
Dec 13 08:59:41 compute-0 nova_compute[248510]: </domain>
Dec 13 08:59:41 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.184 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Preparing to wait for external event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.185 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.185 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.185 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.186 248514 DEBUG nova.virt.libvirt.vif [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:59:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-828624011',display_name='tempest-TestNetworkBasicOps-server-828624011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-828624011',id=126,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbfdn0vI5uiRW56FmHbdPgrEcVctEQCQ0JrumFh8Rkx0TdD9XCLn6dyTpTncQ+yr075mpR5CmHJQEBcBXMTv9XR0fwQ9NFmvKHwa7Jo4OFodBgAjBRwq8dmNkZu2ZP1tQ==',key_name='tempest-TestNetworkBasicOps-1642652749',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-19q60cz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:59:34Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=decfc3d0-e424-4304-8b3c-51daa9bd0fb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.186 248514 DEBUG nova.network.os_vif_util [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.187 248514 DEBUG nova.network.os_vif_util [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.187 248514 DEBUG os_vif [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.187 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.188 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.188 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.191 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.191 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd03635fc-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.191 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd03635fc-13, col_values=(('external_ids', {'iface-id': 'd03635fc-13b4-44c2-baca-088d0efb07d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:f2:de', 'vm-uuid': 'decfc3d0-e424-4304-8b3c-51daa9bd0fb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:41 compute-0 NetworkManager[50376]: <info>  [1765616381.1939] manager: (tapd03635fc-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/513)
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.201 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.203 248514 INFO os_vif [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13')
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.224 248514 INFO nova.compute.manager [None req-133157d0-81db-44a2-bcae-84bf4be13972 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Unpausing
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.226 248514 DEBUG nova.objects.instance [None req-133157d0-81db-44a2-bcae-84bf4be13972 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'flavor' on Instance uuid 48db77d9-f4d5-44dd-852e-aa10f98ace90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.263 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616381.2628398, 48db77d9-f4d5-44dd-852e-aa10f98ace90 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.263 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] VM Resumed (Lifecycle Event)
Dec 13 08:59:41 compute-0 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.270 248514 DEBUG nova.virt.libvirt.guest [None req-133157d0-81db-44a2-bcae-84bf4be13972 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.271 248514 DEBUG nova.compute.manager [None req-133157d0-81db-44a2-bcae-84bf4be13972 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.275 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.275 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.276 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:c0:f2:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.276 248514 INFO nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Using config drive
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.303 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.316 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.323 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:59:41 compute-0 nova_compute[248510]: 2025-12-13 08:59:41.360 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] During sync_power_state the instance has a pending task (unpausing). Skip.
Dec 13 08:59:41 compute-0 ceph-mon[76537]: pgmap v2957: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Dec 13 08:59:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4178981125' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:59:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1530065640' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.272 248514 INFO nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Creating config drive at /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/disk.config
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.277 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46uu5iw9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.380 248514 DEBUG nova.network.neutron [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Updated VIF entry in instance network info cache for port d03635fc-13b4-44c2-baca-088d0efb07d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.381 248514 DEBUG nova.network.neutron [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Updating instance_info_cache with network_info: [{"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:59:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2958: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.401 248514 DEBUG oslo_concurrency.lockutils [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-decfc3d0-e424-4304-8b3c-51daa9bd0fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.450 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46uu5iw9" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.482 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.486 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/disk.config decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.640 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/disk.config decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.641 248514 INFO nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Deleting local config drive /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/disk.config because it was imported into RBD.
Dec 13 08:59:42 compute-0 kernel: tapd03635fc-13: entered promiscuous mode
Dec 13 08:59:42 compute-0 NetworkManager[50376]: <info>  [1765616382.7074] manager: (tapd03635fc-13): new Tun device (/org/freedesktop/NetworkManager/Devices/514)
Dec 13 08:59:42 compute-0 ovn_controller[148476]: 2025-12-13T08:59:42Z|01239|binding|INFO|Claiming lport d03635fc-13b4-44c2-baca-088d0efb07d9 for this chassis.
Dec 13 08:59:42 compute-0 ovn_controller[148476]: 2025-12-13T08:59:42Z|01240|binding|INFO|d03635fc-13b4-44c2-baca-088d0efb07d9: Claiming fa:16:3e:c0:f2:de 10.100.0.30
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.709 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.717 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:f2:de 10.100.0.30'], port_security=['fa:16:3e:c0:f2:de 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': 'decfc3d0-e424-4304-8b3c-51daa9bd0fb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16fae4da-5722-4e42-b101-44d9ef244421', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'af860b42-00a6-4e57-908f-190713e2b805', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9deccf5-6c40-469a-b45c-630adf312a35, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d03635fc-13b4-44c2-baca-088d0efb07d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.719 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d03635fc-13b4-44c2-baca-088d0efb07d9 in datapath 16fae4da-5722-4e42-b101-44d9ef244421 bound to our chassis
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.720 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16fae4da-5722-4e42-b101-44d9ef244421
Dec 13 08:59:42 compute-0 ovn_controller[148476]: 2025-12-13T08:59:42Z|01241|binding|INFO|Setting lport d03635fc-13b4-44c2-baca-088d0efb07d9 ovn-installed in OVS
Dec 13 08:59:42 compute-0 ovn_controller[148476]: 2025-12-13T08:59:42Z|01242|binding|INFO|Setting lport d03635fc-13b4-44c2-baca-088d0efb07d9 up in Southbound
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.730 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.745 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e6db272d-f85b-4490-80c8-b08e487f2b2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:42 compute-0 systemd-udevd[373600]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 08:59:42 compute-0 systemd-machined[210538]: New machine qemu-153-instance-0000007e.
Dec 13 08:59:42 compute-0 systemd[1]: Started Virtual Machine qemu-153-instance-0000007e.
Dec 13 08:59:42 compute-0 NetworkManager[50376]: <info>  [1765616382.7689] device (tapd03635fc-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 08:59:42 compute-0 NetworkManager[50376]: <info>  [1765616382.7697] device (tapd03635fc-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.788 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[01c6b78e-4ae0-4642-882f-dba9dfc78a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.793 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4ced4a09-2918-4555-a7fa-5c3025bedf34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.832 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1c94a9ba-d0ff-49f7-8b9d-2d9dd0816d10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.856 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cbaed246-bd99-4bed-8d14-c1736a54468d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16fae4da-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:9f:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 894284, 'reachable_time': 25312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373612, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.883 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42a0b589-da14-4581-b060-aa16be6a887f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16fae4da-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 894297, 'tstamp': 894297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373614, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap16fae4da-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 894300, 'tstamp': 894300}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373614, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.884 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16fae4da-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:42 compute-0 nova_compute[248510]: 2025-12-13 08:59:42.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.888 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16fae4da-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.888 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.889 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16fae4da-50, col_values=(('external_ids', {'iface-id': '7a0536ce-73f9-4aa4-8989-da712c91214d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.889 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.119 248514 DEBUG nova.compute.manager [req-6cc5871f-f575-445e-aa43-867f2fcf7324 req-266c524e-6c2b-45e6-a543-f2e96dbebc03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.120 248514 DEBUG oslo_concurrency.lockutils [req-6cc5871f-f575-445e-aa43-867f2fcf7324 req-266c524e-6c2b-45e6-a543-f2e96dbebc03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.120 248514 DEBUG oslo_concurrency.lockutils [req-6cc5871f-f575-445e-aa43-867f2fcf7324 req-266c524e-6c2b-45e6-a543-f2e96dbebc03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.121 248514 DEBUG oslo_concurrency.lockutils [req-6cc5871f-f575-445e-aa43-867f2fcf7324 req-266c524e-6c2b-45e6-a543-f2e96dbebc03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.121 248514 DEBUG nova.compute.manager [req-6cc5871f-f575-445e-aa43-867f2fcf7324 req-266c524e-6c2b-45e6-a543-f2e96dbebc03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Processing event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.131 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616383.1313186, decfc3d0-e424-4304-8b3c-51daa9bd0fb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.132 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] VM Started (Lifecycle Event)
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.134 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.138 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.142 248514 INFO nova.virt.libvirt.driver [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Instance spawned successfully.
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.142 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.162 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.175 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.176 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.177 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.177 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.178 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.179 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.184 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.215 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.216 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616383.131516, decfc3d0-e424-4304-8b3c-51daa9bd0fb6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.216 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] VM Paused (Lifecycle Event)
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.250 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.254 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616383.1369686, decfc3d0-e424-4304-8b3c-51daa9bd0fb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.255 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] VM Resumed (Lifecycle Event)
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.259 248514 INFO nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Took 8.73 seconds to spawn the instance on the hypervisor.
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.260 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.290 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.295 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.327 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.344 248514 INFO nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Took 9.92 seconds to build instance.
Dec 13 08:59:43 compute-0 nova_compute[248510]: 2025-12-13 08:59:43.365 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:43 compute-0 ceph-mon[76537]: pgmap v2958: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Dec 13 08:59:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2959: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Dec 13 08:59:44 compute-0 nova_compute[248510]: 2025-12-13 08:59:44.453 248514 INFO nova.compute.manager [None req-0830799e-1a68-4b7c-884d-6ba13fb4e75b a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Get console output
Dec 13 08:59:44 compute-0 nova_compute[248510]: 2025-12-13 08:59:44.460 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.238 248514 DEBUG nova.compute.manager [req-b842a7f2-b3f8-410d-8086-441911ab79f5 req-9a257872-98e6-474f-8aae-f476d7ca5ce1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.239 248514 DEBUG oslo_concurrency.lockutils [req-b842a7f2-b3f8-410d-8086-441911ab79f5 req-9a257872-98e6-474f-8aae-f476d7ca5ce1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.239 248514 DEBUG oslo_concurrency.lockutils [req-b842a7f2-b3f8-410d-8086-441911ab79f5 req-9a257872-98e6-474f-8aae-f476d7ca5ce1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.240 248514 DEBUG oslo_concurrency.lockutils [req-b842a7f2-b3f8-410d-8086-441911ab79f5 req-9a257872-98e6-474f-8aae-f476d7ca5ce1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.241 248514 DEBUG nova.compute.manager [req-b842a7f2-b3f8-410d-8086-441911ab79f5 req-9a257872-98e6-474f-8aae-f476d7ca5ce1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] No waiting events found dispatching network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.242 248514 WARNING nova.compute.manager [req-b842a7f2-b3f8-410d-8086-441911ab79f5 req-9a257872-98e6-474f-8aae-f476d7ca5ce1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received unexpected event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 for instance with vm_state active and task_state None.
Dec 13 08:59:45 compute-0 ceph-mon[76537]: pgmap v2959: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.660 248514 DEBUG nova.compute.manager [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-changed-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.661 248514 DEBUG nova.compute.manager [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Refreshing instance network info cache due to event network-changed-558f49fb-1002-4c28-8ba2-ea32384811d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.661 248514 DEBUG oslo_concurrency.lockutils [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.661 248514 DEBUG oslo_concurrency.lockutils [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.662 248514 DEBUG nova.network.neutron [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Refreshing network info cache for port 558f49fb-1002-4c28-8ba2-ea32384811d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.738 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.739 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.739 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.739 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.739 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.740 248514 INFO nova.compute.manager [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Terminating instance
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.741 248514 DEBUG nova.compute.manager [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 08:59:45 compute-0 kernel: tap558f49fb-10 (unregistering): left promiscuous mode
Dec 13 08:59:45 compute-0 NetworkManager[50376]: <info>  [1765616385.7953] device (tap558f49fb-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 08:59:45 compute-0 ovn_controller[148476]: 2025-12-13T08:59:45Z|01243|binding|INFO|Releasing lport 558f49fb-1002-4c28-8ba2-ea32384811d7 from this chassis (sb_readonly=0)
Dec 13 08:59:45 compute-0 ovn_controller[148476]: 2025-12-13T08:59:45Z|01244|binding|INFO|Setting lport 558f49fb-1002-4c28-8ba2-ea32384811d7 down in Southbound
Dec 13 08:59:45 compute-0 ovn_controller[148476]: 2025-12-13T08:59:45Z|01245|binding|INFO|Removing iface tap558f49fb-10 ovn-installed in OVS
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.806 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:45.813 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:ad:5d 10.100.0.5'], port_security=['fa:16:3e:ba:ad:5d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '48db77d9-f4d5-44dd-852e-aa10f98ace90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67032c28-66fa-4d75-99b8-37c3e61c4140', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af406687-c2e5-4e03-9415-af4130e7b9af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=558f49fb-1002-4c28-8ba2-ea32384811d7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 08:59:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:45.814 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 558f49fb-1002-4c28-8ba2-ea32384811d7 in datapath 4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 unbound from our chassis
Dec 13 08:59:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:45.816 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 08:59:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:45.818 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c44e1e3-c72e-4f3c-a575-e087f2620851]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:45.819 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 namespace which is not needed anymore
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:45 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Dec 13 08:59:45 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d0000007d.scope: Consumed 12.858s CPU time.
Dec 13 08:59:45 compute-0 systemd-machined[210538]: Machine qemu-152-instance-0000007d terminated.
Dec 13 08:59:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:59:45 compute-0 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [NOTICE]   (373059) : haproxy version is 2.8.14-c23fe91
Dec 13 08:59:45 compute-0 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [NOTICE]   (373059) : path to executable is /usr/sbin/haproxy
Dec 13 08:59:45 compute-0 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [WARNING]  (373059) : Exiting Master process...
Dec 13 08:59:45 compute-0 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [WARNING]  (373059) : Exiting Master process...
Dec 13 08:59:45 compute-0 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [ALERT]    (373059) : Current worker (373061) exited with code 143 (Terminated)
Dec 13 08:59:45 compute-0 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [WARNING]  (373059) : All workers exited. Exiting... (0)
Dec 13 08:59:45 compute-0 systemd[1]: libpod-cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c.scope: Deactivated successfully.
Dec 13 08:59:45 compute-0 conmon[373055]: conmon cd1426da206dbb959906 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c.scope/container/memory.events
Dec 13 08:59:45 compute-0 podman[373680]: 2025-12-13 08:59:45.97033975 +0000 UTC m=+0.063853224 container died cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.976 248514 INFO nova.virt.libvirt.driver [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Instance destroyed successfully.
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.977 248514 DEBUG nova.objects.instance [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid 48db77d9-f4d5-44dd-852e-aa10f98ace90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.992 248514 DEBUG nova.virt.libvirt.vif [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:59:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1185653133',display_name='tempest-TestNetworkAdvancedServerOps-server-1185653133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1185653133',id=125,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBErdAvAxyanLSk6zlk6UR7jIkt03qXx3MfH8+NhdcAyZSFQd+bfa57m4HcmVco5q1ZITwwE9Zaq6j0nsDhbNKnt90E6ScVaQ/3xa8beJWwD79hCpdjT8jJu+dOAQXgrxSQ==',key_name='tempest-TestNetworkAdvancedServerOps-1501689960',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:59:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-rf02ewgw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:59:41Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=48db77d9-f4d5-44dd-852e-aa10f98ace90,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.993 248514 DEBUG nova.network.os_vif_util [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.994 248514 DEBUG nova.network.os_vif_util [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.994 248514 DEBUG os_vif [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.996 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.996 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap558f49fb-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:45 compute-0 nova_compute[248510]: 2025-12-13 08:59:45.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:46 compute-0 nova_compute[248510]: 2025-12-13 08:59:46.001 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 08:59:46 compute-0 nova_compute[248510]: 2025-12-13 08:59:46.002 248514 INFO os_vif [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10')
Dec 13 08:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c-userdata-shm.mount: Deactivated successfully.
Dec 13 08:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-86652b75459e4a25df06217dc80b414d5f5e77aaf98604cf5669cd3f9ba9632a-merged.mount: Deactivated successfully.
Dec 13 08:59:46 compute-0 podman[373680]: 2025-12-13 08:59:46.099964823 +0000 UTC m=+0.193478297 container cleanup cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 08:59:46 compute-0 systemd[1]: libpod-conmon-cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c.scope: Deactivated successfully.
Dec 13 08:59:46 compute-0 podman[373739]: 2025-12-13 08:59:46.263903626 +0000 UTC m=+0.144479527 container remove cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 08:59:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.271 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46556867-67e1-450c-b1ae-11072364a159]: (4, ('Sat Dec 13 08:59:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 (cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c)\ncd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c\nSat Dec 13 08:59:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 (cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c)\ncd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.273 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7fdb88eb-80f9-4143-9aaf-0cd97374688f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.275 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4977afa6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 08:59:46 compute-0 kernel: tap4977afa6-10: left promiscuous mode
Dec 13 08:59:46 compute-0 nova_compute[248510]: 2025-12-13 08:59:46.277 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.283 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5feab448-5da2-45b1-bbf1-406f6e13193b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:46 compute-0 nova_compute[248510]: 2025-12-13 08:59:46.296 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.300 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1587b85a-4d37-4000-82bf-a3cbf5217387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.302 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db7abea4-51c1-4ae9-9af0-80806e0caaae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.326 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb25327d-82cd-45b2-aa88-a74475c60a58]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 893225, 'reachable_time': 27619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373754, 'error': None, 'target': 'ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d4977afa6\x2d1a1c\x2d43aa\x2d9d3f\x2d7b6747f5eb37.mount: Deactivated successfully.
Dec 13 08:59:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.332 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 08:59:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.332 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[9b919c30-5774-41c7-ae48-e3d721ee45f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 08:59:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2960: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Dec 13 08:59:46 compute-0 ceph-mon[76537]: pgmap v2960: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Dec 13 08:59:46 compute-0 nova_compute[248510]: 2025-12-13 08:59:46.805 248514 INFO nova.virt.libvirt.driver [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Deleting instance files /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90_del
Dec 13 08:59:46 compute-0 nova_compute[248510]: 2025-12-13 08:59:46.806 248514 INFO nova.virt.libvirt.driver [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Deletion of /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90_del complete
Dec 13 08:59:46 compute-0 nova_compute[248510]: 2025-12-13 08:59:46.870 248514 INFO nova.compute.manager [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Took 1.13 seconds to destroy the instance on the hypervisor.
Dec 13 08:59:46 compute-0 nova_compute[248510]: 2025-12-13 08:59:46.871 248514 DEBUG oslo.service.loopingcall [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 08:59:46 compute-0 nova_compute[248510]: 2025-12-13 08:59:46.872 248514 DEBUG nova.compute.manager [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 08:59:46 compute-0 nova_compute[248510]: 2025-12-13 08:59:46.872 248514 DEBUG nova.network.neutron [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.413 248514 DEBUG nova.compute.manager [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-vif-unplugged-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.414 248514 DEBUG oslo_concurrency.lockutils [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.414 248514 DEBUG oslo_concurrency.lockutils [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.415 248514 DEBUG oslo_concurrency.lockutils [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.415 248514 DEBUG nova.compute.manager [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] No waiting events found dispatching network-vif-unplugged-558f49fb-1002-4c28-8ba2-ea32384811d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.416 248514 DEBUG nova.compute.manager [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-vif-unplugged-558f49fb-1002-4c28-8ba2-ea32384811d7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.416 248514 DEBUG nova.compute.manager [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.417 248514 DEBUG oslo_concurrency.lockutils [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.417 248514 DEBUG oslo_concurrency.lockutils [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.418 248514 DEBUG oslo_concurrency.lockutils [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.418 248514 DEBUG nova.compute.manager [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] No waiting events found dispatching network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.419 248514 WARNING nova.compute.manager [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received unexpected event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 for instance with vm_state active and task_state deleting.
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.549 248514 DEBUG nova.network.neutron [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.567 248514 INFO nova.compute.manager [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Took 0.69 seconds to deallocate network for instance.
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.619 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.620 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.692 248514 DEBUG nova.network.neutron [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updated VIF entry in instance network info cache for port 558f49fb-1002-4c28-8ba2-ea32384811d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.695 248514 DEBUG nova.network.neutron [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updating instance_info_cache with network_info: [{"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.718 248514 DEBUG oslo_concurrency.lockutils [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.719 248514 DEBUG oslo_concurrency.processutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 08:59:47 compute-0 nova_compute[248510]: 2025-12-13 08:59:47.759 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 08:59:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/58622239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:59:48 compute-0 nova_compute[248510]: 2025-12-13 08:59:48.259 248514 DEBUG oslo_concurrency.processutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 08:59:48 compute-0 nova_compute[248510]: 2025-12-13 08:59:48.265 248514 DEBUG nova.compute.provider_tree [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 08:59:48 compute-0 nova_compute[248510]: 2025-12-13 08:59:48.290 248514 DEBUG nova.scheduler.client.report [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 08:59:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/58622239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 08:59:48 compute-0 nova_compute[248510]: 2025-12-13 08:59:48.319 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:48 compute-0 nova_compute[248510]: 2025-12-13 08:59:48.348 248514 INFO nova.scheduler.client.report [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Deleted allocations for instance 48db77d9-f4d5-44dd-852e-aa10f98ace90
Dec 13 08:59:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2961: 321 pgs: 321 active+clean; 205 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Dec 13 08:59:48 compute-0 nova_compute[248510]: 2025-12-13 08:59:48.411 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:49 compute-0 ceph-mon[76537]: pgmap v2961: 321 pgs: 321 active+clean; 205 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Dec 13 08:59:49 compute-0 nova_compute[248510]: 2025-12-13 08:59:49.529 248514 DEBUG nova.compute.manager [req-e72a6be7-7ba7-43e4-9757-3d33135bc89f req-b1b9a7cb-7450-431d-8dc0-f5687139dfe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-vif-deleted-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 08:59:49 compute-0 nova_compute[248510]: 2025-12-13 08:59:49.530 248514 INFO nova.compute.manager [req-e72a6be7-7ba7-43e4-9757-3d33135bc89f req-b1b9a7cb-7450-431d-8dc0-f5687139dfe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Neutron deleted interface 558f49fb-1002-4c28-8ba2-ea32384811d7; detaching it from the instance and deleting it from the info cache
Dec 13 08:59:49 compute-0 nova_compute[248510]: 2025-12-13 08:59:49.530 248514 DEBUG nova.network.neutron [req-e72a6be7-7ba7-43e4-9757-3d33135bc89f req-b1b9a7cb-7450-431d-8dc0-f5687139dfe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Dec 13 08:59:49 compute-0 nova_compute[248510]: 2025-12-13 08:59:49.533 248514 DEBUG nova.compute.manager [req-e72a6be7-7ba7-43e4-9757-3d33135bc89f req-b1b9a7cb-7450-431d-8dc0-f5687139dfe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Detach interface failed, port_id=558f49fb-1002-4c28-8ba2-ea32384811d7, reason: Instance 48db77d9-f4d5-44dd-852e-aa10f98ace90 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 08:59:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2962: 321 pgs: 321 active+clean; 167 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 828 KiB/s wr, 115 op/s
Dec 13 08:59:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:59:50 compute-0 nova_compute[248510]: 2025-12-13 08:59:50.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:51 compute-0 ceph-mon[76537]: pgmap v2962: 321 pgs: 321 active+clean; 167 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 828 KiB/s wr, 115 op/s
Dec 13 08:59:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2963: 321 pgs: 321 active+clean; 167 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 102 op/s
Dec 13 08:59:52 compute-0 ovn_controller[148476]: 2025-12-13T08:59:52Z|01246|binding|INFO|Releasing lport 4a84c557-65ab-478a-95fb-44e9a95becd6 from this chassis (sb_readonly=0)
Dec 13 08:59:52 compute-0 ovn_controller[148476]: 2025-12-13T08:59:52Z|01247|binding|INFO|Releasing lport 7a0536ce-73f9-4aa4-8989-da712c91214d from this chassis (sb_readonly=0)
Dec 13 08:59:52 compute-0 nova_compute[248510]: 2025-12-13 08:59:52.663 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:52 compute-0 nova_compute[248510]: 2025-12-13 08:59:52.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:53 compute-0 ceph-mon[76537]: pgmap v2963: 321 pgs: 321 active+clean; 167 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 102 op/s
Dec 13 08:59:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2964: 321 pgs: 321 active+clean; 169 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 137 KiB/s wr, 107 op/s
Dec 13 08:59:54 compute-0 ovn_controller[148476]: 2025-12-13T08:59:54Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:f2:de 10.100.0.30
Dec 13 08:59:54 compute-0 ovn_controller[148476]: 2025-12-13T08:59:54Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:f2:de 10.100.0.30
Dec 13 08:59:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:55.437 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 08:59:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:55.438 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 08:59:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 08:59:55.439 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 08:59:55 compute-0 ceph-mon[76537]: pgmap v2964: 321 pgs: 321 active+clean; 169 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 137 KiB/s wr, 107 op/s
Dec 13 08:59:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 08:59:56 compute-0 nova_compute[248510]: 2025-12-13 08:59:56.002 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2965: 321 pgs: 321 active+clean; 169 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 124 KiB/s wr, 88 op/s
Dec 13 08:59:56 compute-0 ceph-mon[76537]: pgmap v2965: 321 pgs: 321 active+clean; 169 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 124 KiB/s wr, 88 op/s
Dec 13 08:59:57 compute-0 nova_compute[248510]: 2025-12-13 08:59:57.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 08:59:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2966: 321 pgs: 321 active+clean; 176 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 135 op/s
Dec 13 08:59:59 compute-0 ceph-mon[76537]: pgmap v2966: 321 pgs: 321 active+clean; 176 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 135 op/s
Dec 13 09:00:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2967: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec 13 09:00:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:00:00 compute-0 nova_compute[248510]: 2025-12-13 09:00:00.975 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616385.974272, 48db77d9-f4d5-44dd-852e-aa10f98ace90 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:00:00 compute-0 nova_compute[248510]: 2025-12-13 09:00:00.975 248514 INFO nova.compute.manager [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] VM Stopped (Lifecycle Event)
Dec 13 09:00:01 compute-0 nova_compute[248510]: 2025-12-13 09:00:01.003 248514 DEBUG nova.compute.manager [None req-2d1d20df-757b-493a-aab7-5d84ef82a2a9 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:01 compute-0 nova_compute[248510]: 2025-12-13 09:00:01.005 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:01 compute-0 ceph-mon[76537]: pgmap v2967: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec 13 09:00:02 compute-0 nova_compute[248510]: 2025-12-13 09:00:02.285 248514 DEBUG nova.compute.manager [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-changed-a30b0da9-1ee1-4092-a86b-5fa66fe76492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:02 compute-0 nova_compute[248510]: 2025-12-13 09:00:02.286 248514 DEBUG nova.compute.manager [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing instance network info cache due to event network-changed-a30b0da9-1ee1-4092-a86b-5fa66fe76492. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:00:02 compute-0 nova_compute[248510]: 2025-12-13 09:00:02.286 248514 DEBUG oslo_concurrency.lockutils [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:00:02 compute-0 nova_compute[248510]: 2025-12-13 09:00:02.286 248514 DEBUG oslo_concurrency.lockutils [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:00:02 compute-0 nova_compute[248510]: 2025-12-13 09:00:02.287 248514 DEBUG nova.network.neutron [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing network info cache for port a30b0da9-1ee1-4092-a86b-5fa66fe76492 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:00:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2968: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:00:02 compute-0 nova_compute[248510]: 2025-12-13 09:00:02.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:02 compute-0 nova_compute[248510]: 2025-12-13 09:00:02.741 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:03 compute-0 ceph-mon[76537]: pgmap v2968: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:00:03 compute-0 nova_compute[248510]: 2025-12-13 09:00:03.676 248514 DEBUG nova.network.neutron [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updated VIF entry in instance network info cache for port a30b0da9-1ee1-4092-a86b-5fa66fe76492. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:00:03 compute-0 nova_compute[248510]: 2025-12-13 09:00:03.677 248514 DEBUG nova.network.neutron [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:03 compute-0 nova_compute[248510]: 2025-12-13 09:00:03.709 248514 DEBUG oslo_concurrency.lockutils [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:00:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2969: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Dec 13 09:00:05 compute-0 ceph-mon[76537]: pgmap v2969: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Dec 13 09:00:05 compute-0 sudo[373778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:00:05 compute-0 sudo[373778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:00:05 compute-0 sudo[373778]: pam_unix(sudo:session): session closed for user root
Dec 13 09:00:05 compute-0 podman[373804]: 2025-12-13 09:00:05.628169649 +0000 UTC m=+0.056363860 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 09:00:05 compute-0 sudo[373821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:00:05 compute-0 sudo[373821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:00:05 compute-0 podman[373803]: 2025-12-13 09:00:05.653166271 +0000 UTC m=+0.084347975 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 09:00:05 compute-0 podman[373802]: 2025-12-13 09:00:05.655027236 +0000 UTC m=+0.088785333 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 09:00:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:00:06 compute-0 nova_compute[248510]: 2025-12-13 09:00:06.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:06 compute-0 sudo[373821]: pam_unix(sudo:session): session closed for user root
Dec 13 09:00:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:00:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:00:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:00:06 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:00:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:00:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2970: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Dec 13 09:00:06 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:00:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:00:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:00:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:00:06 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:00:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:00:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:00:06 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:00:06 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:00:06 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:00:06 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:00:06 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:00:06 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:00:06 compute-0 sudo[373919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:00:06 compute-0 sudo[373919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:00:06 compute-0 sudo[373919]: pam_unix(sudo:session): session closed for user root
Dec 13 09:00:06 compute-0 sudo[373944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:00:06 compute-0 sudo[373944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:00:06 compute-0 nova_compute[248510]: 2025-12-13 09:00:06.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:07 compute-0 podman[373981]: 2025-12-13 09:00:07.015842646 +0000 UTC m=+0.065365701 container create 06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 09:00:07 compute-0 systemd[1]: Started libpod-conmon-06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0.scope.
Dec 13 09:00:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:00:07 compute-0 podman[373981]: 2025-12-13 09:00:06.988559458 +0000 UTC m=+0.038082583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:00:07 compute-0 podman[373981]: 2025-12-13 09:00:07.097112176 +0000 UTC m=+0.146635221 container init 06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec 13 09:00:07 compute-0 podman[373981]: 2025-12-13 09:00:07.1070974 +0000 UTC m=+0.156620425 container start 06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 13 09:00:07 compute-0 podman[373981]: 2025-12-13 09:00:07.111279052 +0000 UTC m=+0.160802077 container attach 06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:00:07 compute-0 zen_rosalind[373998]: 167 167
Dec 13 09:00:07 compute-0 systemd[1]: libpod-06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0.scope: Deactivated successfully.
Dec 13 09:00:07 compute-0 conmon[373998]: conmon 06c919bf667da8204a5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0.scope/container/memory.events
Dec 13 09:00:07 compute-0 podman[373981]: 2025-12-13 09:00:07.115153867 +0000 UTC m=+0.164676892 container died 06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 09:00:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c32c908094a803cd7e50696dd71697ae804e2303b7d316080bef7e24494458f-merged.mount: Deactivated successfully.
Dec 13 09:00:07 compute-0 podman[373981]: 2025-12-13 09:00:07.154686295 +0000 UTC m=+0.204209320 container remove 06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:00:07 compute-0 systemd[1]: libpod-conmon-06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0.scope: Deactivated successfully.
Dec 13 09:00:07 compute-0 podman[374021]: 2025-12-13 09:00:07.34160976 +0000 UTC m=+0.043249849 container create 827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_darwin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:00:07 compute-0 systemd[1]: Started libpod-conmon-827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d.scope.
Dec 13 09:00:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0900cab2ab88ea0a4d27935edbd8534b67c2cd89a3bea8c0070a1e44fef970d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0900cab2ab88ea0a4d27935edbd8534b67c2cd89a3bea8c0070a1e44fef970d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0900cab2ab88ea0a4d27935edbd8534b67c2cd89a3bea8c0070a1e44fef970d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:07 compute-0 podman[374021]: 2025-12-13 09:00:07.321504538 +0000 UTC m=+0.023144647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0900cab2ab88ea0a4d27935edbd8534b67c2cd89a3bea8c0070a1e44fef970d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0900cab2ab88ea0a4d27935edbd8534b67c2cd89a3bea8c0070a1e44fef970d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:07 compute-0 podman[374021]: 2025-12-13 09:00:07.427957684 +0000 UTC m=+0.129597803 container init 827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_darwin, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 09:00:07 compute-0 podman[374021]: 2025-12-13 09:00:07.434805662 +0000 UTC m=+0.136445751 container start 827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_darwin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:00:07 compute-0 podman[374021]: 2025-12-13 09:00:07.438805469 +0000 UTC m=+0.140445588 container attach 827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_darwin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:00:07 compute-0 ceph-mon[76537]: pgmap v2970: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Dec 13 09:00:07 compute-0 nova_compute[248510]: 2025-12-13 09:00:07.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:07 compute-0 nova_compute[248510]: 2025-12-13 09:00:07.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:07 compute-0 zealous_darwin[374038]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:00:07 compute-0 zealous_darwin[374038]: --> All data devices are unavailable
Dec 13 09:00:07 compute-0 systemd[1]: libpod-827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d.scope: Deactivated successfully.
Dec 13 09:00:07 compute-0 podman[374021]: 2025-12-13 09:00:07.940701045 +0000 UTC m=+0.642341134 container died 827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 09:00:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0900cab2ab88ea0a4d27935edbd8534b67c2cd89a3bea8c0070a1e44fef970d3-merged.mount: Deactivated successfully.
Dec 13 09:00:07 compute-0 podman[374021]: 2025-12-13 09:00:07.980873198 +0000 UTC m=+0.682513287 container remove 827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:00:08 compute-0 systemd[1]: libpod-conmon-827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d.scope: Deactivated successfully.
Dec 13 09:00:08 compute-0 sudo[373944]: pam_unix(sudo:session): session closed for user root
Dec 13 09:00:08 compute-0 sudo[374069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:00:08 compute-0 sudo[374069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:00:08 compute-0 sudo[374069]: pam_unix(sudo:session): session closed for user root
Dec 13 09:00:08 compute-0 sudo[374094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:00:08 compute-0 sudo[374094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:00:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2971: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Dec 13 09:00:08 compute-0 podman[374131]: 2025-12-13 09:00:08.428822143 +0000 UTC m=+0.040790830 container create df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:00:08 compute-0 systemd[1]: Started libpod-conmon-df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce.scope.
Dec 13 09:00:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:00:08 compute-0 podman[374131]: 2025-12-13 09:00:08.41197724 +0000 UTC m=+0.023945957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:00:08 compute-0 podman[374131]: 2025-12-13 09:00:08.512366538 +0000 UTC m=+0.124335245 container init df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:00:08 compute-0 podman[374131]: 2025-12-13 09:00:08.520469276 +0000 UTC m=+0.132437963 container start df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 09:00:08 compute-0 kind_elbakyan[374148]: 167 167
Dec 13 09:00:08 compute-0 podman[374131]: 2025-12-13 09:00:08.524382662 +0000 UTC m=+0.136351349 container attach df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 09:00:08 compute-0 systemd[1]: libpod-df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce.scope: Deactivated successfully.
Dec 13 09:00:08 compute-0 conmon[374148]: conmon df733168c78489603bc4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce.scope/container/memory.events
Dec 13 09:00:08 compute-0 podman[374131]: 2025-12-13 09:00:08.526161155 +0000 UTC m=+0.138129842 container died df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Dec 13 09:00:08 compute-0 ceph-mon[76537]: pgmap v2971: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Dec 13 09:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fa6599a7aed2909e2ca23c7e79704d413955102d7f49b86c2916b7e5077c882-merged.mount: Deactivated successfully.
Dec 13 09:00:08 compute-0 podman[374131]: 2025-12-13 09:00:08.565495018 +0000 UTC m=+0.177463705 container remove df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 09:00:08 compute-0 systemd[1]: libpod-conmon-df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce.scope: Deactivated successfully.
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.677 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.678 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.678 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.678 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.678 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.679 248514 INFO nova.compute.manager [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Terminating instance
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.680 248514 DEBUG nova.compute.manager [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:00:08 compute-0 kernel: tapd03635fc-13 (unregistering): left promiscuous mode
Dec 13 09:00:08 compute-0 NetworkManager[50376]: <info>  [1765616408.7253] device (tapd03635fc-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:00:08 compute-0 ovn_controller[148476]: 2025-12-13T09:00:08Z|01248|binding|INFO|Releasing lport d03635fc-13b4-44c2-baca-088d0efb07d9 from this chassis (sb_readonly=0)
Dec 13 09:00:08 compute-0 ovn_controller[148476]: 2025-12-13T09:00:08Z|01249|binding|INFO|Setting lport d03635fc-13b4-44c2-baca-088d0efb07d9 down in Southbound
Dec 13 09:00:08 compute-0 ovn_controller[148476]: 2025-12-13T09:00:08Z|01250|binding|INFO|Removing iface tapd03635fc-13 ovn-installed in OVS
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.743 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:f2:de 10.100.0.30'], port_security=['fa:16:3e:c0:f2:de 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': 'decfc3d0-e424-4304-8b3c-51daa9bd0fb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16fae4da-5722-4e42-b101-44d9ef244421', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'af860b42-00a6-4e57-908f-190713e2b805', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9deccf5-6c40-469a-b45c-630adf312a35, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d03635fc-13b4-44c2-baca-088d0efb07d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.744 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d03635fc-13b4-44c2-baca-088d0efb07d9 in datapath 16fae4da-5722-4e42-b101-44d9ef244421 unbound from our chassis
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.745 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16fae4da-5722-4e42-b101-44d9ef244421
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.773 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[054b66e0-5568-4e38-a1f3-ce4fa64d803b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:08 compute-0 podman[374171]: 2025-12-13 09:00:08.782989662 +0000 UTC m=+0.058376310 container create cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pike, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.806 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[06da02d5-3117-4418-a6c5-aa49839a50fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:08 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Dec 13 09:00:08 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007e.scope: Consumed 13.586s CPU time.
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.809 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6985264f-010e-4839-98b5-6abac3078bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:08 compute-0 systemd[1]: Started libpod-conmon-cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a.scope.
Dec 13 09:00:08 compute-0 systemd-machined[210538]: Machine qemu-153-instance-0000007e terminated.
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.838 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b8da30c4-8027-4386-b108-5557a073f92c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.854 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1663da3-95cf-456e-a099-f16610137a8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16fae4da-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:9f:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 790, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 790, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 894284, 'reachable_time': 40385, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 524, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 524, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374201, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:00:08 compute-0 podman[374171]: 2025-12-13 09:00:08.76617616 +0000 UTC m=+0.041562828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7edb21ed41d98a1c73e169f41a3615f19d70751a0c77cc7fe7510c2bc0eda2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7edb21ed41d98a1c73e169f41a3615f19d70751a0c77cc7fe7510c2bc0eda2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7edb21ed41d98a1c73e169f41a3615f19d70751a0c77cc7fe7510c2bc0eda2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7edb21ed41d98a1c73e169f41a3615f19d70751a0c77cc7fe7510c2bc0eda2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.872 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8727d35a-882f-4c88-af17-495d646c6e59]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16fae4da-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 894297, 'tstamp': 894297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374202, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap16fae4da-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 894300, 'tstamp': 894300}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374202, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.874 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16fae4da-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.882 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:08 compute-0 podman[374171]: 2025-12-13 09:00:08.884247311 +0000 UTC m=+0.159633989 container init cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pike, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.884 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16fae4da-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.885 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.885 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16fae4da-50, col_values=(('external_ids', {'iface-id': '7a0536ce-73f9-4aa4-8989-da712c91214d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.885 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:00:08 compute-0 podman[374171]: 2025-12-13 09:00:08.894162683 +0000 UTC m=+0.169549351 container start cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:00:08 compute-0 podman[374171]: 2025-12-13 09:00:08.898688554 +0000 UTC m=+0.174075212 container attach cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pike, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.922 248514 INFO nova.virt.libvirt.driver [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Instance destroyed successfully.
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.923 248514 DEBUG nova.objects.instance [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid decfc3d0-e424-4304-8b3c-51daa9bd0fb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.936 248514 DEBUG nova.virt.libvirt.vif [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:59:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-828624011',display_name='tempest-TestNetworkBasicOps-server-828624011',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-828624011',id=126,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbfdn0vI5uiRW56FmHbdPgrEcVctEQCQ0JrumFh8Rkx0TdD9XCLn6dyTpTncQ+yr075mpR5CmHJQEBcBXMTv9XR0fwQ9NFmvKHwa7Jo4OFodBgAjBRwq8dmNkZu2ZP1tQ==',key_name='tempest-TestNetworkBasicOps-1642652749',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:59:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-19q60cz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:59:43Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=decfc3d0-e424-4304-8b3c-51daa9bd0fb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.936 248514 DEBUG nova.network.os_vif_util [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.937 248514 DEBUG nova.network.os_vif_util [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.938 248514 DEBUG os_vif [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.940 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd03635fc-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.943 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.944 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.947 248514 INFO os_vif [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13')
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.967 248514 DEBUG nova.compute.manager [req-75983850-dffc-4aab-94b0-c2f740e2453f req-358b585c-690a-4cbf-95ae-47943bbfcfc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-vif-unplugged-d03635fc-13b4-44c2-baca-088d0efb07d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.967 248514 DEBUG oslo_concurrency.lockutils [req-75983850-dffc-4aab-94b0-c2f740e2453f req-358b585c-690a-4cbf-95ae-47943bbfcfc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.967 248514 DEBUG oslo_concurrency.lockutils [req-75983850-dffc-4aab-94b0-c2f740e2453f req-358b585c-690a-4cbf-95ae-47943bbfcfc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.967 248514 DEBUG oslo_concurrency.lockutils [req-75983850-dffc-4aab-94b0-c2f740e2453f req-358b585c-690a-4cbf-95ae-47943bbfcfc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.968 248514 DEBUG nova.compute.manager [req-75983850-dffc-4aab-94b0-c2f740e2453f req-358b585c-690a-4cbf-95ae-47943bbfcfc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] No waiting events found dispatching network-vif-unplugged-d03635fc-13b4-44c2-baca-088d0efb07d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:08 compute-0 nova_compute[248510]: 2025-12-13 09:00:08.968 248514 DEBUG nova.compute.manager [req-75983850-dffc-4aab-94b0-c2f740e2453f req-358b585c-690a-4cbf-95ae-47943bbfcfc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-vif-unplugged-d03635fc-13b4-44c2-baca-088d0efb07d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:00:09 compute-0 gifted_pike[374198]: {
Dec 13 09:00:09 compute-0 gifted_pike[374198]:     "0": [
Dec 13 09:00:09 compute-0 gifted_pike[374198]:         {
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "devices": [
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "/dev/loop3"
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             ],
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_name": "ceph_lv0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_size": "21470642176",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "name": "ceph_lv0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "tags": {
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.cluster_name": "ceph",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.crush_device_class": "",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.encrypted": "0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.objectstore": "bluestore",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.osd_id": "0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.type": "block",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.vdo": "0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.with_tpm": "0"
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             },
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "type": "block",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "vg_name": "ceph_vg0"
Dec 13 09:00:09 compute-0 gifted_pike[374198]:         }
Dec 13 09:00:09 compute-0 gifted_pike[374198]:     ],
Dec 13 09:00:09 compute-0 gifted_pike[374198]:     "1": [
Dec 13 09:00:09 compute-0 gifted_pike[374198]:         {
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "devices": [
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "/dev/loop4"
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             ],
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_name": "ceph_lv1",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_size": "21470642176",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "name": "ceph_lv1",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "tags": {
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.cluster_name": "ceph",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.crush_device_class": "",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.encrypted": "0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.objectstore": "bluestore",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.osd_id": "1",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.type": "block",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.vdo": "0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.with_tpm": "0"
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             },
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "type": "block",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "vg_name": "ceph_vg1"
Dec 13 09:00:09 compute-0 gifted_pike[374198]:         }
Dec 13 09:00:09 compute-0 gifted_pike[374198]:     ],
Dec 13 09:00:09 compute-0 gifted_pike[374198]:     "2": [
Dec 13 09:00:09 compute-0 gifted_pike[374198]:         {
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "devices": [
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "/dev/loop5"
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             ],
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_name": "ceph_lv2",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_size": "21470642176",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "name": "ceph_lv2",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "tags": {
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.cluster_name": "ceph",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.crush_device_class": "",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.encrypted": "0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.objectstore": "bluestore",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.osd_id": "2",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.type": "block",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.vdo": "0",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:                 "ceph.with_tpm": "0"
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             },
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "type": "block",
Dec 13 09:00:09 compute-0 gifted_pike[374198]:             "vg_name": "ceph_vg2"
Dec 13 09:00:09 compute-0 gifted_pike[374198]:         }
Dec 13 09:00:09 compute-0 gifted_pike[374198]:     ]
Dec 13 09:00:09 compute-0 gifted_pike[374198]: }
Dec 13 09:00:09 compute-0 systemd[1]: libpod-cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a.scope: Deactivated successfully.
Dec 13 09:00:09 compute-0 podman[374171]: 2025-12-13 09:00:09.213855338 +0000 UTC m=+0.489241996 container died cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pike, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:00:09 compute-0 nova_compute[248510]: 2025-12-13 09:00:09.217 248514 INFO nova.virt.libvirt.driver [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Deleting instance files /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6_del
Dec 13 09:00:09 compute-0 nova_compute[248510]: 2025-12-13 09:00:09.218 248514 INFO nova.virt.libvirt.driver [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Deletion of /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6_del complete
Dec 13 09:00:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea7edb21ed41d98a1c73e169f41a3615f19d70751a0c77cc7fe7510c2bc0eda2-merged.mount: Deactivated successfully.
Dec 13 09:00:09 compute-0 podman[374171]: 2025-12-13 09:00:09.25356445 +0000 UTC m=+0.528951178 container remove cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pike, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:00:09 compute-0 systemd[1]: libpod-conmon-cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a.scope: Deactivated successfully.
Dec 13 09:00:09 compute-0 nova_compute[248510]: 2025-12-13 09:00:09.282 248514 INFO nova.compute.manager [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Took 0.60 seconds to destroy the instance on the hypervisor.
Dec 13 09:00:09 compute-0 nova_compute[248510]: 2025-12-13 09:00:09.283 248514 DEBUG oslo.service.loopingcall [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:00:09 compute-0 nova_compute[248510]: 2025-12-13 09:00:09.285 248514 DEBUG nova.compute.manager [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:00:09 compute-0 nova_compute[248510]: 2025-12-13 09:00:09.285 248514 DEBUG nova.network.neutron [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:00:09 compute-0 sudo[374094]: pam_unix(sudo:session): session closed for user root
Dec 13 09:00:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:00:09
Dec 13 09:00:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:00:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:00:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'vms', '.rgw.root', 'images', 'default.rgw.control', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta']
Dec 13 09:00:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:00:09 compute-0 sudo[374253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:00:09 compute-0 sudo[374253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:00:09 compute-0 sudo[374253]: pam_unix(sudo:session): session closed for user root
Dec 13 09:00:09 compute-0 sudo[374278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:00:09 compute-0 sudo[374278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:00:09 compute-0 podman[374315]: 2025-12-13 09:00:09.738956231 +0000 UTC m=+0.039380145 container create 7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 09:00:09 compute-0 systemd[1]: Started libpod-conmon-7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932.scope.
Dec 13 09:00:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:00:09 compute-0 podman[374315]: 2025-12-13 09:00:09.722398036 +0000 UTC m=+0.022821970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:00:09 compute-0 podman[374315]: 2025-12-13 09:00:09.835477124 +0000 UTC m=+0.135901058 container init 7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:00:09 compute-0 podman[374315]: 2025-12-13 09:00:09.842201368 +0000 UTC m=+0.142625282 container start 7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 09:00:09 compute-0 eloquent_burnell[374331]: 167 167
Dec 13 09:00:09 compute-0 podman[374315]: 2025-12-13 09:00:09.847015726 +0000 UTC m=+0.147439690 container attach 7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_burnell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:00:09 compute-0 systemd[1]: libpod-7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932.scope: Deactivated successfully.
Dec 13 09:00:09 compute-0 podman[374315]: 2025-12-13 09:00:09.848600715 +0000 UTC m=+0.149024639 container died 7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_burnell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 09:00:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8062f44919e17d67c72a84397752dba4da91d79e71ab5b766e92f11cc6e7cb1-merged.mount: Deactivated successfully.
Dec 13 09:00:09 compute-0 podman[374315]: 2025-12-13 09:00:09.896580159 +0000 UTC m=+0.197004073 container remove 7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 09:00:09 compute-0 systemd[1]: libpod-conmon-7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932.scope: Deactivated successfully.
Dec 13 09:00:10 compute-0 podman[374354]: 2025-12-13 09:00:10.105281588 +0000 UTC m=+0.042671986 container create 700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:00:10 compute-0 systemd[1]: Started libpod-conmon-700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e.scope.
Dec 13 09:00:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:00:10 compute-0 podman[374354]: 2025-12-13 09:00:10.087453941 +0000 UTC m=+0.024844359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c011f0ede1dcabdb4b26a8b420bb3f0afad65fba1403bb4fc5ede51f885e921/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c011f0ede1dcabdb4b26a8b420bb3f0afad65fba1403bb4fc5ede51f885e921/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c011f0ede1dcabdb4b26a8b420bb3f0afad65fba1403bb4fc5ede51f885e921/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c011f0ede1dcabdb4b26a8b420bb3f0afad65fba1403bb4fc5ede51f885e921/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:10 compute-0 podman[374354]: 2025-12-13 09:00:10.205688186 +0000 UTC m=+0.143078614 container init 700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bohr, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:00:10 compute-0 podman[374354]: 2025-12-13 09:00:10.214801059 +0000 UTC m=+0.152191477 container start 700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:00:10 compute-0 podman[374354]: 2025-12-13 09:00:10.218500989 +0000 UTC m=+0.155891407 container attach 700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bohr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:00:10 compute-0 nova_compute[248510]: 2025-12-13 09:00:10.384 248514 DEBUG nova.network.neutron [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2972: 321 pgs: 321 active+clean; 156 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1005 KiB/s wr, 28 op/s
Dec 13 09:00:10 compute-0 nova_compute[248510]: 2025-12-13 09:00:10.411 248514 INFO nova.compute.manager [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Took 1.13 seconds to deallocate network for instance.
Dec 13 09:00:10 compute-0 nova_compute[248510]: 2025-12-13 09:00:10.482 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:10 compute-0 nova_compute[248510]: 2025-12-13 09:00:10.482 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:10 compute-0 nova_compute[248510]: 2025-12-13 09:00:10.567 248514 DEBUG oslo_concurrency.processutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:10 compute-0 lvm[374467]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:00:10 compute-0 lvm[374467]: VG ceph_vg0 finished
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:00:10 compute-0 lvm[374469]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:00:10 compute-0 lvm[374469]: VG ceph_vg1 finished
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:00:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:00:10 compute-0 lvm[374471]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:00:10 compute-0 lvm[374471]: VG ceph_vg2 finished
Dec 13 09:00:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:00:10 compute-0 stoic_bohr[374371]: {}
Dec 13 09:00:11 compute-0 systemd[1]: libpod-700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e.scope: Deactivated successfully.
Dec 13 09:00:11 compute-0 systemd[1]: libpod-700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e.scope: Consumed 1.308s CPU time.
Dec 13 09:00:11 compute-0 podman[374354]: 2025-12-13 09:00:11.0164126 +0000 UTC m=+0.953802998 container died 700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bohr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:00:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c011f0ede1dcabdb4b26a8b420bb3f0afad65fba1403bb4fc5ede51f885e921-merged.mount: Deactivated successfully.
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.066 248514 DEBUG nova.compute.manager [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.066 248514 DEBUG oslo_concurrency.lockutils [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.066 248514 DEBUG oslo_concurrency.lockutils [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.067 248514 DEBUG oslo_concurrency.lockutils [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.067 248514 DEBUG nova.compute.manager [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] No waiting events found dispatching network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.067 248514 WARNING nova.compute.manager [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received unexpected event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 for instance with vm_state deleted and task_state None.
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.067 248514 DEBUG nova.compute.manager [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-vif-deleted-d03635fc-13b4-44c2-baca-088d0efb07d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:11 compute-0 podman[374354]: 2025-12-13 09:00:11.081510594 +0000 UTC m=+1.018900992 container remove 700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 09:00:11 compute-0 systemd[1]: libpod-conmon-700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e.scope: Deactivated successfully.
Dec 13 09:00:11 compute-0 sudo[374278]: pam_unix(sudo:session): session closed for user root
Dec 13 09:00:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:00:11 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:00:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:00:11 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:00:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:00:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502021258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.200 248514 DEBUG oslo_concurrency.processutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.207 248514 DEBUG nova.compute.provider_tree [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:00:11 compute-0 sudo[374488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:00:11 compute-0 sudo[374488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.228 248514 DEBUG nova.scheduler.client.report [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:00:11 compute-0 sudo[374488]: pam_unix(sudo:session): session closed for user root
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.257 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.287 248514 INFO nova.scheduler.client.report [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance decfc3d0-e424-4304-8b3c-51daa9bd0fb6
Dec 13 09:00:11 compute-0 nova_compute[248510]: 2025-12-13 09:00:11.395 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:11 compute-0 ceph-mon[76537]: pgmap v2972: 321 pgs: 321 active+clean; 156 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1005 KiB/s wr, 28 op/s
Dec 13 09:00:11 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:00:11 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:00:11 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/502021258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.495800) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616411495881, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 702, "num_deletes": 251, "total_data_size": 909139, "memory_usage": 921944, "flush_reason": "Manual Compaction"}
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616411505680, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 893611, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58576, "largest_seqno": 59277, "table_properties": {"data_size": 889917, "index_size": 1537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8365, "raw_average_key_size": 19, "raw_value_size": 882549, "raw_average_value_size": 2052, "num_data_blocks": 68, "num_entries": 430, "num_filter_entries": 430, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616356, "oldest_key_time": 1765616356, "file_creation_time": 1765616411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 9917 microseconds, and 4614 cpu microseconds.
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.505730) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 893611 bytes OK
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.505751) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.507180) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.507194) EVENT_LOG_v1 {"time_micros": 1765616411507189, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.507213) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 905486, prev total WAL file size 905486, number of live WAL files 2.
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.507937) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(872KB)], [137(9607KB)]
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616411508030, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 10731816, "oldest_snapshot_seqno": -1}
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7811 keys, 8926402 bytes, temperature: kUnknown
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616411574844, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8926402, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8877650, "index_size": 28095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 205369, "raw_average_key_size": 26, "raw_value_size": 8741418, "raw_average_value_size": 1119, "num_data_blocks": 1081, "num_entries": 7811, "num_filter_entries": 7811, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.575151) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8926402 bytes
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.576929) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.4 rd, 133.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.4 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(22.0) write-amplify(10.0) OK, records in: 8324, records dropped: 513 output_compression: NoCompression
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.576951) EVENT_LOG_v1 {"time_micros": 1765616411576941, "job": 84, "event": "compaction_finished", "compaction_time_micros": 66894, "compaction_time_cpu_micros": 23673, "output_level": 6, "num_output_files": 1, "total_output_size": 8926402, "num_input_records": 8324, "num_output_records": 7811, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616411577270, "job": 84, "event": "table_file_deletion", "file_number": 139}
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616411579607, "job": 84, "event": "table_file_deletion", "file_number": 137}
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.507799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.579701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.579709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.579710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.579712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:00:11 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.579714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.085 248514 DEBUG oslo_concurrency.lockutils [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "interface-3a1e4b45-e8c6-41c4-8890-cb0775cb5786-a30b0da9-1ee1-4092-a86b-5fa66fe76492" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.086 248514 DEBUG oslo_concurrency.lockutils [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-3a1e4b45-e8c6-41c4-8890-cb0775cb5786-a30b0da9-1ee1-4092-a86b-5fa66fe76492" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.106 248514 DEBUG nova.objects.instance [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'flavor' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.128 248514 DEBUG nova.virt.libvirt.vif [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.129 248514 DEBUG nova.network.os_vif_util [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.130 248514 DEBUG nova.network.os_vif_util [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.135 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.138 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.143 248514 DEBUG nova.virt.libvirt.driver [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Attempting to detach device tapa30b0da9-1e from instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.143 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] detach device xml: <interface type="ethernet">
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:65:10:e4"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <target dev="tapa30b0da9-1e"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]: </interface>
Dec 13 09:00:12 compute-0 nova_compute[248510]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.150 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.157 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.158 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.160 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface>not found in domain: <domain type='kvm' id='151'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <name>instance-0000007c</name>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <uuid>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</uuid>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:59:25</nova:creationTime>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:port uuid="a30b0da9-1ee1-4092-a86b-5fa66fe76492">
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 09:00:12 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <resource>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </resource>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <system>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <entry name='serial'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <entry name='uuid'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </system>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <os>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </os>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <features>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </features>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk' index='2'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       </source>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config' index='1'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       </source>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:a8:9a:b3'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target dev='tap3abb490c-6a'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:65:10:e4'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target dev='tapa30b0da9-1e'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='net1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <source path='/dev/pts/0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       </target>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/0'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <source path='/dev/pts/0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </console>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </input>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </input>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </input>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </graphics>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <video>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </video>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c131,c649</label>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c131,c649</imagelabel>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 09:00:12 compute-0 nova_compute[248510]: </domain>
Dec 13 09:00:12 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.160 248514 INFO nova.virt.libvirt.driver [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully detached device tapa30b0da9-1e from instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 from the persistent domain config.
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.160 248514 DEBUG nova.virt.libvirt.driver [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] (1/8): Attempting to detach device tapa30b0da9-1e with device alias net1 from instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.161 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] detach device xml: <interface type="ethernet">
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <mac address="fa:16:3e:65:10:e4"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <model type="virtio"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <mtu size="1442"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <target dev="tapa30b0da9-1e"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]: </interface>
Dec 13 09:00:12 compute-0 nova_compute[248510]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.176 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.250 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.251 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.259 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.259 248514 INFO nova.compute.claims [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:00:12 compute-0 kernel: tapa30b0da9-1e (unregistering): left promiscuous mode
Dec 13 09:00:12 compute-0 NetworkManager[50376]: <info>  [1765616412.2733] device (tapa30b0da9-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:00:12 compute-0 ovn_controller[148476]: 2025-12-13T09:00:12Z|01251|binding|INFO|Releasing lport a30b0da9-1ee1-4092-a86b-5fa66fe76492 from this chassis (sb_readonly=0)
Dec 13 09:00:12 compute-0 ovn_controller[148476]: 2025-12-13T09:00:12Z|01252|binding|INFO|Setting lport a30b0da9-1ee1-4092-a86b-5fa66fe76492 down in Southbound
Dec 13 09:00:12 compute-0 ovn_controller[148476]: 2025-12-13T09:00:12Z|01253|binding|INFO|Removing iface tapa30b0da9-1e ovn-installed in OVS
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.287 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Received event <DeviceRemovedEvent: 1765616412.2851186, 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.288 248514 DEBUG nova.virt.libvirt.driver [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Start waiting for the detach event from libvirt for device tapa30b0da9-1e with device alias net1 for instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.289 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.292 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface>not found in domain: <domain type='kvm' id='151'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <name>instance-0000007c</name>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <uuid>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</uuid>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 08:59:25</nova:creationTime>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:port uuid="a30b0da9-1ee1-4092-a86b-5fa66fe76492">
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 09:00:12 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <resource>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </resource>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <system>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <entry name='serial'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <entry name='uuid'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </system>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <os>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </os>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <features>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </features>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk' index='2'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       </source>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config' index='1'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       </source>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:a8:9a:b3'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target dev='tap3abb490c-6a'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <source path='/dev/pts/0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       </target>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/0'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <source path='/dev/pts/0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </console>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </input>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </input>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </input>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </graphics>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <video>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </video>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c131,c649</label>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c131,c649</imagelabel>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 09:00:12 compute-0 nova_compute[248510]: </domain>
Dec 13 09:00:12 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.293 248514 INFO nova.virt.libvirt.driver [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully detached device tapa30b0da9-1e from instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 from the live domain config.
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.294 248514 DEBUG nova.virt.libvirt.vif [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.294 248514 DEBUG nova.network.os_vif_util [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.296 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:10:e4 10.100.0.18', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': '3a1e4b45-e8c6-41c4-8890-cb0775cb5786', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16fae4da-5722-4e42-b101-44d9ef244421', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9deccf5-6c40-469a-b45c-630adf312a35, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a30b0da9-1ee1-4092-a86b-5fa66fe76492) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.295 248514 DEBUG nova.network.os_vif_util [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.295 248514 DEBUG os_vif [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.297 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a30b0da9-1ee1-4092-a86b-5fa66fe76492 in datapath 16fae4da-5722-4e42-b101-44d9ef244421 unbound from our chassis
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.299 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.299 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16fae4da-5722-4e42-b101-44d9ef244421, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.300 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa30b0da9-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.300 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[225d2f35-5c80-4aad-9732-eb4369a37875]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.301 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421 namespace which is not needed anymore
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.302 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.307 248514 INFO os_vif [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e')
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.308 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 09:00:12</nova:creationTime>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 09:00:12 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 09:00:12 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 09:00:12 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 09:00:12 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 09:00:12 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 09:00:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2973: 321 pgs: 321 active+clean; 156 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 22 KiB/s wr, 14 op/s
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.415 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:12 compute-0 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [NOTICE]   (373202) : haproxy version is 2.8.14-c23fe91
Dec 13 09:00:12 compute-0 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [NOTICE]   (373202) : path to executable is /usr/sbin/haproxy
Dec 13 09:00:12 compute-0 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [WARNING]  (373202) : Exiting Master process...
Dec 13 09:00:12 compute-0 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [WARNING]  (373202) : Exiting Master process...
Dec 13 09:00:12 compute-0 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [ALERT]    (373202) : Current worker (373204) exited with code 143 (Terminated)
Dec 13 09:00:12 compute-0 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [WARNING]  (373202) : All workers exited. Exiting... (0)
Dec 13 09:00:12 compute-0 systemd[1]: libpod-221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040.scope: Deactivated successfully.
Dec 13 09:00:12 compute-0 conmon[373198]: conmon 221fe09bf91523ff8393 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040.scope/container/memory.events
Dec 13 09:00:12 compute-0 podman[374538]: 2025-12-13 09:00:12.451807555 +0000 UTC m=+0.052262810 container died 221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:00:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040-userdata-shm.mount: Deactivated successfully.
Dec 13 09:00:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b75e401be1e6964868d0a53502eed4a247827032e021c62c59365aba03a75fa-merged.mount: Deactivated successfully.
Dec 13 09:00:12 compute-0 podman[374538]: 2025-12-13 09:00:12.500296452 +0000 UTC m=+0.100751707 container cleanup 221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 09:00:12 compute-0 systemd[1]: libpod-conmon-221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040.scope: Deactivated successfully.
Dec 13 09:00:12 compute-0 podman[374565]: 2025-12-13 09:00:12.562512345 +0000 UTC m=+0.041483606 container remove 221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.571 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4faf5dbb-ba56-405a-b8cc-02e938670f78]: (4, ('Sat Dec 13 09:00:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421 (221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040)\n221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040\nSat Dec 13 09:00:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421 (221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040)\n221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e22784-372f-471e-b4ca-6e18a14d53e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.573 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16fae4da-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:12 compute-0 kernel: tap16fae4da-50: left promiscuous mode
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.578 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.599 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb24145-5a24-40ff-a5af-96b9787c8277]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[97fe68d7-616d-44ac-9d01-8b4b7fbf80a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.615 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a3693d1b-0b98-4898-9ba1-a8266320362e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.635 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78f5a568-0bac-4011-af87-3532fd0138e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 894277, 'reachable_time': 30742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374599, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d16fae4da\x2d5722\x2d4e42\x2db101\x2d44d9ef244421.mount: Deactivated successfully.
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.639 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:00:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.639 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[507c4d7c-528e-4d3c-8303-925adf0d27f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.744 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.927 248514 DEBUG oslo_concurrency.lockutils [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.928 248514 DEBUG oslo_concurrency.lockutils [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.928 248514 DEBUG nova.network.neutron [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:00:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:00:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768523932' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.985 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:12 compute-0 nova_compute[248510]: 2025-12-13 09:00:12.993 248514 DEBUG nova.compute.provider_tree [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.011 248514 DEBUG nova.scheduler.client.report [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.033 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.034 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.078 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.079 248514 DEBUG nova.network.neutron [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.098 248514 INFO nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.126 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.171 248514 DEBUG nova.compute.manager [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-deleted-a30b0da9-1ee1-4092-a86b-5fa66fe76492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.172 248514 INFO nova.compute.manager [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Neutron deleted interface a30b0da9-1ee1-4092-a86b-5fa66fe76492; detaching it from the instance and deleting it from the info cache
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.172 248514 DEBUG nova.network.neutron [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.214 248514 DEBUG nova.objects.instance [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'system_metadata' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.258 248514 DEBUG nova.objects.instance [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'flavor' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.277 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.280 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.281 248514 INFO nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Creating image(s)
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.314 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.343 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.370 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.374 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.424 248514 DEBUG nova.virt.libvirt.vif [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.424 248514 DEBUG nova.network.os_vif_util [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.425 248514 DEBUG nova.network.os_vif_util [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.429 248514 DEBUG nova.virt.libvirt.guest [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.434 248514 DEBUG nova.virt.libvirt.guest [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface>not found in domain: <domain type='kvm' id='151'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <name>instance-0000007c</name>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <uuid>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</uuid>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 09:00:12</nova:creationTime>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 09:00:13 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <resource>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </resource>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <system>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <entry name='serial'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <entry name='uuid'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </system>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <os>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </os>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <features>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </features>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk' index='2'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       </source>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config' index='1'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       </source>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:a8:9a:b3'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target dev='tap3abb490c-6a'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <source path='/dev/pts/0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       </target>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/0'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <source path='/dev/pts/0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </console>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </input>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </input>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </input>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </graphics>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <video>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </video>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c131,c649</label>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c131,c649</imagelabel>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 09:00:13 compute-0 nova_compute[248510]: </domain>
Dec 13 09:00:13 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.435 248514 DEBUG nova.virt.libvirt.guest [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.439 248514 DEBUG nova.virt.libvirt.guest [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface>not found in domain: <domain type='kvm' id='151'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <name>instance-0000007c</name>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <uuid>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</uuid>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 09:00:12</nova:creationTime>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 09:00:13 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <memory unit='KiB'>131072</memory>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <vcpu placement='static'>1</vcpu>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <resource>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <partition>/machine</partition>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </resource>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <sysinfo type='smbios'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <system>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <entry name='manufacturer'>RDO</entry>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <entry name='product'>OpenStack Compute</entry>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <entry name='serial'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <entry name='uuid'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <entry name='family'>Virtual Machine</entry>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </system>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <os>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <boot dev='hd'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <smbios mode='sysinfo'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </os>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <features>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <vmcoreinfo state='on'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </features>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <cpu mode='custom' match='exact' check='full'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <vendor>AMD</vendor>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='x2apic'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc-deadline'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='hypervisor'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='tsc_adjust'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='spec-ctrl'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='stibp'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='ssbd'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='cmp_legacy'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='overflow-recov'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='succor'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='ibrs'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='amd-ssbd'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='virt-ssbd'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='lbrv'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='tsc-scale'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='vmcb-clean'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='flushbyasid'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='pause-filter'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='pfthreshold'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='xsaves'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='svm'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='require' name='topoext'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='npt'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <feature policy='disable' name='nrip-save'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <clock offset='utc'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <timer name='pit' tickpolicy='delay'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <timer name='hpet' present='no'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <on_poweroff>destroy</on_poweroff>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <on_reboot>restart</on_reboot>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <on_crash>destroy</on_crash>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <disk type='network' device='disk'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk' index='2'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       </source>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target dev='vda' bus='virtio'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='virtio-disk0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <disk type='network' device='cdrom'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <driver name='qemu' type='raw' cache='none'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <auth username='openstack'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:         <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config' index='1'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:         <host name='192.168.122.100' port='6789'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       </source>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target dev='sda' bus='sata'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <readonly/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='sata0-0-0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='0' model='pcie-root'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pcie.0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='1' port='0x10'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='2' port='0x11'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.2'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='3' port='0x12'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.3'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='4' port='0x13'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.4'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='5' port='0x14'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.5'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='6' port='0x15'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.6'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='7' port='0x16'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.7'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='8' port='0x17'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.8'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='9' port='0x18'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.9'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='10' port='0x19'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.10'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='11' port='0x1a'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.11'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='12' port='0x1b'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.12'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='13' port='0x1c'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.13'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='14' port='0x1d'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.14'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='15' port='0x1e'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.15'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='16' port='0x1f'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.16'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='17' port='0x20'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.17'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='18' port='0x21'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.18'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='19' port='0x22'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.19'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='20' port='0x23'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.20'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='21' port='0x24'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.21'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='22' port='0x25'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.22'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='23' port='0x26'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.23'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='24' port='0x27'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.24'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-root-port'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target chassis='25' port='0x28'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.25'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model name='pcie-pci-bridge'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='pci.26'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='usb'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <controller type='sata' index='0'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='ide'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </controller>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <interface type='ethernet'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <mac address='fa:16:3e:a8:9a:b3'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target dev='tap3abb490c-6a'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model type='virtio'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <driver name='vhost' rx_queue_size='512'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <mtu size='1442'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='net0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <serial type='pty'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <source path='/dev/pts/0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target type='isa-serial' port='0'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:         <model name='isa-serial'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       </target>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <console type='pty' tty='/dev/pts/0'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <source path='/dev/pts/0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <target type='serial' port='0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='serial0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </console>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <input type='tablet' bus='usb'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='input0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='usb' bus='0' port='1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </input>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <input type='mouse' bus='ps2'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='input1'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </input>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <input type='keyboard' bus='ps2'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='input2'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </input>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <listen type='address' address='::0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </graphics>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <audio id='1' type='none'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <video>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <model type='virtio' heads='1' primary='yes'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='video0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </video>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <watchdog model='itco' action='reset'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='watchdog0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </watchdog>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <memballoon model='virtio'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <stats period='10'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='balloon0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <rng model='virtio'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <backend model='random'>/dev/urandom</backend>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <alias name='rng0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <label>system_u:system_r:svirt_t:s0:c131,c649</label>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c131,c649</imagelabel>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <label>+107:+107</label>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <imagelabel>+107:+107</imagelabel>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </seclabel>
Dec 13 09:00:13 compute-0 nova_compute[248510]: </domain>
Dec 13 09:00:13 compute-0 nova_compute[248510]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.440 248514 WARNING nova.virt.libvirt.driver [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Detaching interface fa:16:3e:65:10:e4 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapa30b0da9-1e' not found.
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.441 248514 DEBUG nova.virt.libvirt.vif [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.441 248514 DEBUG nova.network.os_vif_util [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.441 248514 DEBUG nova.network.os_vif_util [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.442 248514 DEBUG os_vif [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.444 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.444 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa30b0da9-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.444 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.447 248514 INFO os_vif [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e')
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.448 248514 DEBUG nova.virt.libvirt.guest [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:creationTime>2025-12-13 09:00:13</nova:creationTime>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:flavor name="m1.nano">
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:memory>128</nova:memory>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:disk>1</nova:disk>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:swap>0</nova:swap>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:vcpus>1</nova:vcpus>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </nova:flavor>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:owner>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </nova:owner>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   <nova:ports>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 09:00:13 compute-0 nova_compute[248510]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 09:00:13 compute-0 nova_compute[248510]:     </nova:port>
Dec 13 09:00:13 compute-0 nova_compute[248510]:   </nova:ports>
Dec 13 09:00:13 compute-0 nova_compute[248510]: </nova:instance>
Dec 13 09:00:13 compute-0 nova_compute[248510]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.473 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.474 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.475 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.475 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.501 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:13 compute-0 ceph-mon[76537]: pgmap v2973: 321 pgs: 321 active+clean; 156 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 22 KiB/s wr, 14 op/s
Dec 13 09:00:13 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2768523932' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.507 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.802 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.874 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] resizing rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:00:13 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.968 248514 DEBUG nova.objects.instance [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a653f54-8266-4f9c-ac5a-b4991534e9fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:00:14 compute-0 nova_compute[248510]: 2025-12-13 09:00:13.999 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:00:14 compute-0 nova_compute[248510]: 2025-12-13 09:00:14.000 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Ensure instance console log exists: /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:00:14 compute-0 nova_compute[248510]: 2025-12-13 09:00:14.000 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:14 compute-0 nova_compute[248510]: 2025-12-13 09:00:14.001 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:14 compute-0 nova_compute[248510]: 2025-12-13 09:00:14.001 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:14 compute-0 nova_compute[248510]: 2025-12-13 09:00:14.146 248514 DEBUG nova.policy [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5fd410579fa429ba0f7f680590cd86a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '77d40177a4804671aa9c5da343bc2ed4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:00:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2974: 321 pgs: 321 active+clean; 146 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Dec 13 09:00:14 compute-0 nova_compute[248510]: 2025-12-13 09:00:14.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:14 compute-0 nova_compute[248510]: 2025-12-13 09:00:14.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:00:14 compute-0 nova_compute[248510]: 2025-12-13 09:00:14.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:00:14 compute-0 nova_compute[248510]: 2025-12-13 09:00:14.794 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 09:00:14 compute-0 ovn_controller[148476]: 2025-12-13T09:00:14Z|01254|binding|INFO|Releasing lport 4a84c557-65ab-478a-95fb-44e9a95becd6 from this chassis (sb_readonly=0)
Dec 13 09:00:14 compute-0 nova_compute[248510]: 2025-12-13 09:00:14.895 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:15 compute-0 nova_compute[248510]: 2025-12-13 09:00:15.018 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:00:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:00:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1728748001' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:00:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:00:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1728748001' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:00:15 compute-0 ceph-mon[76537]: pgmap v2974: 321 pgs: 321 active+clean; 146 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Dec 13 09:00:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1728748001' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:00:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1728748001' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:00:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.156 248514 INFO nova.network.neutron [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Port a30b0da9-1ee1-4092-a86b-5fa66fe76492 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.157 248514 DEBUG nova.network.neutron [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.181 248514 DEBUG oslo_concurrency.lockutils [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.185 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.185 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.186 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.202 248514 DEBUG nova.network.neutron [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Successfully created port: da5ed241-e9aa-44e8-ac5a-7dcb30431922 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.223 248514 DEBUG oslo_concurrency.lockutils [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-3a1e4b45-e8c6-41c4-8890-cb0775cb5786-a30b0da9-1ee1-4092-a86b-5fa66fe76492" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2975: 321 pgs: 321 active+clean; 146 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.459 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.459 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.460 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.460 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.460 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.462 248514 INFO nova.compute.manager [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Terminating instance
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.463 248514 DEBUG nova.compute.manager [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:00:16 compute-0 kernel: tap3abb490c-6a (unregistering): left promiscuous mode
Dec 13 09:00:16 compute-0 NetworkManager[50376]: <info>  [1765616416.5063] device (tap3abb490c-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.513 248514 DEBUG nova.compute.manager [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-changed-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.513 248514 DEBUG nova.compute.manager [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing instance network info cache due to event network-changed-3abb490c-6aad-47d4-8200-febd480ac7db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.513 248514 DEBUG oslo_concurrency.lockutils [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.514 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:16 compute-0 ovn_controller[148476]: 2025-12-13T09:00:16Z|01255|binding|INFO|Releasing lport 3abb490c-6aad-47d4-8200-febd480ac7db from this chassis (sb_readonly=0)
Dec 13 09:00:16 compute-0 ovn_controller[148476]: 2025-12-13T09:00:16Z|01256|binding|INFO|Setting lport 3abb490c-6aad-47d4-8200-febd480ac7db down in Southbound
Dec 13 09:00:16 compute-0 ovn_controller[148476]: 2025-12-13T09:00:16Z|01257|binding|INFO|Removing iface tap3abb490c-6a ovn-installed in OVS
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.516 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.521 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:9a:b3 10.100.0.12'], port_security=['fa:16:3e:a8:9a:b3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3a1e4b45-e8c6-41c4-8890-cb0775cb5786', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a303ed13-8629-4259-965d-e42689484f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03b6e291-05b2-4f8e-8278-d579b0a4e692', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b539ac67-be97-4028-97af-147cf6ca090d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3abb490c-6aad-47d4-8200-febd480ac7db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.523 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3abb490c-6aad-47d4-8200-febd480ac7db in datapath a303ed13-8629-4259-965d-e42689484f38 unbound from our chassis
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.526 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a303ed13-8629-4259-965d-e42689484f38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.528 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39104070-ddbf-48c0-962f-bca7aca083ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.529 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a303ed13-8629-4259-965d-e42689484f38 namespace which is not needed anymore
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:16 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Dec 13 09:00:16 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d0000007c.scope: Consumed 16.373s CPU time.
Dec 13 09:00:16 compute-0 systemd-machined[210538]: Machine qemu-151-instance-0000007c terminated.
Dec 13 09:00:16 compute-0 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [NOTICE]   (371928) : haproxy version is 2.8.14-c23fe91
Dec 13 09:00:16 compute-0 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [NOTICE]   (371928) : path to executable is /usr/sbin/haproxy
Dec 13 09:00:16 compute-0 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [WARNING]  (371928) : Exiting Master process...
Dec 13 09:00:16 compute-0 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [WARNING]  (371928) : Exiting Master process...
Dec 13 09:00:16 compute-0 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [ALERT]    (371928) : Current worker (371930) exited with code 143 (Terminated)
Dec 13 09:00:16 compute-0 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [WARNING]  (371928) : All workers exited. Exiting... (0)
Dec 13 09:00:16 compute-0 systemd[1]: libpod-443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565.scope: Deactivated successfully.
Dec 13 09:00:16 compute-0 podman[374793]: 2025-12-13 09:00:16.680521414 +0000 UTC m=+0.050613070 container died 443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 09:00:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565-userdata-shm.mount: Deactivated successfully.
Dec 13 09:00:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e10e5c7fc25c7b540b3e7c168ca8f4252eb841eb00fc201c536aa1133b3617a-merged.mount: Deactivated successfully.
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.707 248514 INFO nova.virt.libvirt.driver [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Instance destroyed successfully.
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.712 248514 DEBUG nova.objects.instance [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:00:16 compute-0 podman[374793]: 2025-12-13 09:00:16.721256741 +0000 UTC m=+0.091348387 container cleanup 443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.728 248514 DEBUG nova.virt.libvirt.vif [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.729 248514 DEBUG nova.network.os_vif_util [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.730 248514 DEBUG nova.network.os_vif_util [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.730 248514 DEBUG os_vif [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.732 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.733 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3abb490c-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.734 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.737 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.740 248514 INFO os_vif [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a')
Dec 13 09:00:16 compute-0 systemd[1]: libpod-conmon-443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565.scope: Deactivated successfully.
Dec 13 09:00:16 compute-0 podman[374833]: 2025-12-13 09:00:16.813417897 +0000 UTC m=+0.066024597 container remove 443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.819 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0e07d013-6876-4ca1-85f3-09da3822a0e9]: (4, ('Sat Dec 13 09:00:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38 (443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565)\n443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565\nSat Dec 13 09:00:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38 (443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565)\n443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.821 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2b2b63-bafb-4204-b330-54ee26b4cce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.823 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa303ed13-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.827 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:16 compute-0 kernel: tapa303ed13-80: left promiscuous mode
Dec 13 09:00:16 compute-0 nova_compute[248510]: 2025-12-13 09:00:16.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.844 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[df3cb205-286a-4d1c-a3e2-5921bb399489]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.861 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5de74720-3625-474a-98f6-56cb378fd328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.863 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b52db8f7-6447-4646-8ca4-ad4f8a7c18a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.892 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8a716191-8b84-4ffe-a1f8-ee8cfd495c3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890604, 'reachable_time': 25043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374866, 'error': None, 'target': 'ovnmeta-a303ed13-8629-4259-965d-e42689484f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.897 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a303ed13-8629-4259-965d-e42689484f38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:00:16 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.897 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[be06dc5d-a51e-43ff-9d7e-cc4165e1f8b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:16 compute-0 systemd[1]: run-netns-ovnmeta\x2da303ed13\x2d8629\x2d4259\x2d965d\x2de42689484f38.mount: Deactivated successfully.
Dec 13 09:00:17 compute-0 nova_compute[248510]: 2025-12-13 09:00:17.005 248514 INFO nova.virt.libvirt.driver [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Deleting instance files /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_del
Dec 13 09:00:17 compute-0 nova_compute[248510]: 2025-12-13 09:00:17.007 248514 INFO nova.virt.libvirt.driver [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Deletion of /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_del complete
Dec 13 09:00:17 compute-0 nova_compute[248510]: 2025-12-13 09:00:17.092 248514 INFO nova.compute.manager [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Took 0.63 seconds to destroy the instance on the hypervisor.
Dec 13 09:00:17 compute-0 nova_compute[248510]: 2025-12-13 09:00:17.093 248514 DEBUG oslo.service.loopingcall [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:00:17 compute-0 nova_compute[248510]: 2025-12-13 09:00:17.094 248514 DEBUG nova.compute.manager [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:00:17 compute-0 nova_compute[248510]: 2025-12-13 09:00:17.094 248514 DEBUG nova.network.neutron [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:00:17 compute-0 ceph-mon[76537]: pgmap v2975: 321 pgs: 321 active+clean; 146 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Dec 13 09:00:17 compute-0 nova_compute[248510]: 2025-12-13 09:00:17.781 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.130 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.153 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.153 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.153 248514 DEBUG oslo_concurrency.lockutils [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.154 248514 DEBUG nova.network.neutron [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing network info cache for port 3abb490c-6aad-47d4-8200-febd480ac7db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.155 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.179 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.180 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.180 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.181 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.181 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.238 248514 DEBUG nova.network.neutron [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Successfully updated port: da5ed241-e9aa-44e8-ac5a-7dcb30431922 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.261 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.261 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.262 248514 DEBUG nova.network.neutron [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.301 248514 DEBUG nova.network.neutron [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.311 248514 INFO nova.network.neutron [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Port 3abb490c-6aad-47d4-8200-febd480ac7db from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.312 248514 DEBUG nova.network.neutron [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.322 248514 INFO nova.compute.manager [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Took 1.23 seconds to deallocate network for instance.
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.328 248514 DEBUG oslo_concurrency.lockutils [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.371 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.372 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2976: 321 pgs: 321 active+clean; 139 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.438 248514 DEBUG oslo_concurrency.processutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:18 compute-0 ceph-mon[76537]: pgmap v2976: 321 pgs: 321 active+clean; 139 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.622 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-unplugged-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.623 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.623 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.623 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.623 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] No waiting events found dispatching network-vif-unplugged-3abb490c-6aad-47d4-8200-febd480ac7db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.623 248514 WARNING nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received unexpected event network-vif-unplugged-3abb490c-6aad-47d4-8200-febd480ac7db for instance with vm_state deleted and task_state None.
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.623 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] No waiting events found dispatching network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 WARNING nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received unexpected event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db for instance with vm_state deleted and task_state None.
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-changed-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Refreshing instance network info cache due to event network-changed-da5ed241-e9aa-44e8-ac5a-7dcb30431922. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.625 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:00:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:00:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2468858762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.770 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.989 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.991 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3470MB free_disk=59.939925766550004GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:00:18 compute-0 nova_compute[248510]: 2025-12-13 09:00:18.991 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:00:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451361975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.056 248514 DEBUG oslo_concurrency.processutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.065 248514 DEBUG nova.compute.provider_tree [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.087 248514 DEBUG nova.scheduler.client.report [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.097 248514 DEBUG nova.network.neutron [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.113 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.116 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.149 248514 INFO nova.scheduler.client.report [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.217 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9a653f54-8266-4f9c-ac5a-b4991534e9fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.218 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.218 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.238 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.268 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:19 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2468858762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:00:19 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2451361975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:00:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:00:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868176633' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.837 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.842 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.858 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.883 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:00:19 compute-0 nova_compute[248510]: 2025-12-13 09:00:19.883 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2977: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Dec 13 09:00:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2868176633' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:00:20 compute-0 ceph-mon[76537]: pgmap v2977: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Dec 13 09:00:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.127 248514 DEBUG nova.network.neutron [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.157 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.157 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance network_info: |[{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.159 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.159 248514 DEBUG nova.network.neutron [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Refreshing network info cache for port da5ed241-e9aa-44e8-ac5a-7dcb30431922 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.168 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Start _get_guest_xml network_info=[{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.175 248514 WARNING nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.180 248514 DEBUG nova.virt.libvirt.host [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.181 248514 DEBUG nova.virt.libvirt.host [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.190 248514 DEBUG nova.virt.libvirt.host [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.191 248514 DEBUG nova.virt.libvirt.host [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.192 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.192 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.192 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.193 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.193 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.193 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.194 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.194 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.194 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.195 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.195 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.195 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.200 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003601247509638841 of space, bias 1.0, pg target 0.10803742528916523 quantized to 32 (current 32)
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696810718935558 of space, bias 1.0, pg target 0.20090432156806673 quantized to 32 (current 32)
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.725337214369618e-07 of space, bias 4.0, pg target 0.0006870404657243541 quantized to 16 (current 32)
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:00:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:00:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1491095645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:00:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1491095645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.796 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.818 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:21 compute-0 nova_compute[248510]: 2025-12-13 09:00:21.822 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:00:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771744240' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.408 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2978: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.411 248514 DEBUG nova.virt.libvirt.vif [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:00:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1854403821',display_name='tempest-TestNetworkAdvancedServerOps-server-1854403821',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1854403821',id=127,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWMM/Lkrglzn6TUU8f1dhcCbz/Yw7nIlbXlIesplWBqBJXHooQUykZVdZMxvRqd3+5+420G2QZFN25AFW8hXICmnni45jcvyASiuhi4RAOuTlyQgzYOGb0LTvE5xWl7hA==',key_name='tempest-TestNetworkAdvancedServerOps-279390887',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-azzrjv4t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:00:13Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=9a653f54-8266-4f9c-ac5a-b4991534e9fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.412 248514 DEBUG nova.network.os_vif_util [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.413 248514 DEBUG nova.network.os_vif_util [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.415 248514 DEBUG nova.objects.instance [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a653f54-8266-4f9c-ac5a-b4991534e9fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.442 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:00:22 compute-0 nova_compute[248510]:   <uuid>9a653f54-8266-4f9c-ac5a-b4991534e9fb</uuid>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   <name>instance-0000007f</name>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1854403821</nova:name>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:00:21</nova:creationTime>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:00:22 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:00:22 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:00:22 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:00:22 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:00:22 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:00:22 compute-0 nova_compute[248510]:         <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 09:00:22 compute-0 nova_compute[248510]:         <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:00:22 compute-0 nova_compute[248510]:         <nova:port uuid="da5ed241-e9aa-44e8-ac5a-7dcb30431922">
Dec 13 09:00:22 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <system>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <entry name="serial">9a653f54-8266-4f9c-ac5a-b4991534e9fb</entry>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <entry name="uuid">9a653f54-8266-4f9c-ac5a-b4991534e9fb</entry>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     </system>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   <os>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   </os>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   <features>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   </features>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk">
Dec 13 09:00:22 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       </source>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:00:22 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk.config">
Dec 13 09:00:22 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       </source>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:00:22 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:cc:05:2b"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <target dev="tapda5ed241-e9"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/console.log" append="off"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <video>
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     </video>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:00:22 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:00:22 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:00:22 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:00:22 compute-0 nova_compute[248510]: </domain>
Dec 13 09:00:22 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.444 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Preparing to wait for external event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.445 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.445 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.446 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.446 248514 DEBUG nova.virt.libvirt.vif [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:00:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1854403821',display_name='tempest-TestNetworkAdvancedServerOps-server-1854403821',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1854403821',id=127,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWMM/Lkrglzn6TUU8f1dhcCbz/Yw7nIlbXlIesplWBqBJXHooQUykZVdZMxvRqd3+5+420G2QZFN25AFW8hXICmnni45jcvyASiuhi4RAOuTlyQgzYOGb0LTvE5xWl7hA==',key_name='tempest-TestNetworkAdvancedServerOps-279390887',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-azzrjv4t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:00:13Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=9a653f54-8266-4f9c-ac5a-b4991534e9fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.447 248514 DEBUG nova.network.os_vif_util [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.448 248514 DEBUG nova.network.os_vif_util [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.448 248514 DEBUG os_vif [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.450 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.451 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.455 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.456 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda5ed241-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.456 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda5ed241-e9, col_values=(('external_ids', {'iface-id': 'da5ed241-e9aa-44e8-ac5a-7dcb30431922', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:05:2b', 'vm-uuid': '9a653f54-8266-4f9c-ac5a-b4991534e9fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:22 compute-0 NetworkManager[50376]: <info>  [1765616422.4886] manager: (tapda5ed241-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/515)
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.492 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.493 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.494 248514 INFO os_vif [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9')
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.500 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.547 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.547 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.548 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No VIF found with MAC fa:16:3e:cc:05:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.548 248514 INFO nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Using config drive
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.575 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:22 compute-0 nova_compute[248510]: 2025-12-13 09:00:22.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1771744240' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:00:22 compute-0 ceph-mon[76537]: pgmap v2978: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.151 248514 INFO nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Creating config drive at /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/disk.config
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.156 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7yy9s9q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.313 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7yy9s9q" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.341 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.345 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/disk.config 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.489 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/disk.config 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.490 248514 INFO nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Deleting local config drive /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/disk.config because it was imported into RBD.
Dec 13 09:00:23 compute-0 kernel: tapda5ed241-e9: entered promiscuous mode
Dec 13 09:00:23 compute-0 NetworkManager[50376]: <info>  [1765616423.5413] manager: (tapda5ed241-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/516)
Dec 13 09:00:23 compute-0 ovn_controller[148476]: 2025-12-13T09:00:23Z|01258|binding|INFO|Claiming lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 for this chassis.
Dec 13 09:00:23 compute-0 ovn_controller[148476]: 2025-12-13T09:00:23Z|01259|binding|INFO|da5ed241-e9aa-44e8-ac5a-7dcb30431922: Claiming fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:23 compute-0 ovn_controller[148476]: 2025-12-13T09:00:23Z|01260|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 ovn-installed in OVS
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.557 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:23 compute-0 ovn_controller[148476]: 2025-12-13T09:00:23Z|01261|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 up in Southbound
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.558 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.560 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.560 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f bound to our chassis
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.561 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f
Dec 13 09:00:23 compute-0 systemd-udevd[375069]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.574 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b353702e-6238-4b7a-92e0-663adb3424c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.575 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb9b944cc-b1 in ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.577 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb9b944cc-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.577 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[26e4221e-8d97-46d7-83c4-0fdd31840472]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.578 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11953a34-338d-46ec-9633-79f73ef1bcfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 NetworkManager[50376]: <info>  [1765616423.5870] device (tapda5ed241-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:00:23 compute-0 NetworkManager[50376]: <info>  [1765616423.5879] device (tapda5ed241-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.590 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[850cb3c3-24e0-46cf-8709-8343d313bd53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 systemd-machined[210538]: New machine qemu-154-instance-0000007f.
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.618 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[22d24a78-5774-4dd4-be01-9e37be35934c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 systemd[1]: Started Virtual Machine qemu-154-instance-0000007f.
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.653 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a2261f9b-27e9-48e4-b927-6f943b918463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 systemd-udevd[375072]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.658 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3fed49-900b-48e7-9cda-a3e3c2c90b9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 NetworkManager[50376]: <info>  [1765616423.6597] manager: (tapb9b944cc-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/517)
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.688 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6bffe1-7bb7-4b85-b25a-891bd1ad52f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.691 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b9debf-9c09-44a7-b011-ea4f0df4316f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 NetworkManager[50376]: <info>  [1765616423.7107] device (tapb9b944cc-b0): carrier: link connected
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.714 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[836b2036-5f86-432d-8aec-612941be90cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.729 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f82df206-181b-4fdc-916e-06a79553091f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9b944cc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:4f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900092, 'reachable_time': 43983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375104, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.742 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7481ef09-92a1-4e2b-885c-7ca4bcae416d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:4fac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 900092, 'tstamp': 900092}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375105, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.758 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[37951df6-df8f-4ec2-be72-ff16e07b022e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9b944cc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:4f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900092, 'reachable_time': 43983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375106, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.786 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7c1c80-8ae3-4065-98f3-1c340b49f461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.844 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2f726e-c95b-4daa-84bf-75f7e7001e0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.845 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b944cc-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.845 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.846 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9b944cc-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.847 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:23 compute-0 NetworkManager[50376]: <info>  [1765616423.8480] manager: (tapb9b944cc-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Dec 13 09:00:23 compute-0 kernel: tapb9b944cc-b0: entered promiscuous mode
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.850 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9b944cc-b0, col_values=(('external_ids', {'iface-id': 'dacb29c7-c3ab-43a0-86c5-5add3f769729'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.853 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:23 compute-0 ovn_controller[148476]: 2025-12-13T09:00:23Z|01262|binding|INFO|Releasing lport dacb29c7-c3ab-43a0-86c5-5add3f769729 from this chassis (sb_readonly=0)
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.854 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.866 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[85e41e8e-a11c-41d6-b26a-342c8e99310e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.867 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-b9b944cc-b37e-492b-abd7-b6bfb9227f0f
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.pid.haproxy
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID b9b944cc-b37e-492b-abd7-b6bfb9227f0f
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.868 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.868 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'env', 'PROCESS_TAG=haproxy-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.920 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616408.919502, decfc3d0-e424-4304-8b3c-51daa9bd0fb6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.921 248514 INFO nova.compute.manager [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] VM Stopped (Lifecycle Event)
Dec 13 09:00:23 compute-0 nova_compute[248510]: 2025-12-13 09:00:23.947 248514 DEBUG nova.compute.manager [None req-a0d61e36-476e-469f-b342-12a0631d92a2 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.049 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616424.0490115, 9a653f54-8266-4f9c-ac5a-b4991534e9fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.049 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] VM Started (Lifecycle Event)
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.082 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.088 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616424.0501661, 9a653f54-8266-4f9c-ac5a-b4991534e9fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.089 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] VM Paused (Lifecycle Event)
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.112 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.116 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.141 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.183 248514 DEBUG nova.network.neutron [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updated VIF entry in instance network info cache for port da5ed241-e9aa-44e8-ac5a-7dcb30431922. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.184 248514 DEBUG nova.network.neutron [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.209 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.210 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-deleted-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:24 compute-0 podman[375179]: 2025-12-13 09:00:24.264173013 +0000 UTC m=+0.052441465 container create 47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:00:24 compute-0 systemd[1]: Started libpod-conmon-47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35.scope.
Dec 13 09:00:24 compute-0 podman[375179]: 2025-12-13 09:00:24.237622393 +0000 UTC m=+0.025890875 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:00:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:00:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc161a8edde91d2349ff8deec297f008ca51bfac852b10dd03755608924ccb60/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.360 248514 DEBUG nova.compute.manager [req-0ed38729-67e3-4149-a446-fd3166aeb747 req-80ca50ca-2de3-4495-a27b-6aeea40ba40f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.361 248514 DEBUG oslo_concurrency.lockutils [req-0ed38729-67e3-4149-a446-fd3166aeb747 req-80ca50ca-2de3-4495-a27b-6aeea40ba40f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.362 248514 DEBUG oslo_concurrency.lockutils [req-0ed38729-67e3-4149-a446-fd3166aeb747 req-80ca50ca-2de3-4495-a27b-6aeea40ba40f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.362 248514 DEBUG oslo_concurrency.lockutils [req-0ed38729-67e3-4149-a446-fd3166aeb747 req-80ca50ca-2de3-4495-a27b-6aeea40ba40f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.363 248514 DEBUG nova.compute.manager [req-0ed38729-67e3-4149-a446-fd3166aeb747 req-80ca50ca-2de3-4495-a27b-6aeea40ba40f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Processing event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.364 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:00:24 compute-0 podman[375179]: 2025-12-13 09:00:24.364392736 +0000 UTC m=+0.152661268 container init 47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.369 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616424.3688927, 9a653f54-8266-4f9c-ac5a-b4991534e9fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.369 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] VM Resumed (Lifecycle Event)
Dec 13 09:00:24 compute-0 podman[375179]: 2025-12-13 09:00:24.370871474 +0000 UTC m=+0.159139966 container start 47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.371 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.375 248514 INFO nova.virt.libvirt.driver [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance spawned successfully.
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.375 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:00:24 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [NOTICE]   (375198) : New worker (375200) forked
Dec 13 09:00:24 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [NOTICE]   (375198) : Loading success.
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.403 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2979: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.410 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.413 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.414 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.414 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.415 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.415 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.420 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.480 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.511 248514 INFO nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Took 11.23 seconds to spawn the instance on the hypervisor.
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.512 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.578 248514 INFO nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Took 12.35 seconds to build instance.
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.596 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:24 compute-0 nova_compute[248510]: 2025-12-13 09:00:24.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:00:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:25.367 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:00:25 compute-0 nova_compute[248510]: 2025-12-13 09:00:25.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:25.369 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:00:25 compute-0 ceph-mon[76537]: pgmap v2979: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Dec 13 09:00:25 compute-0 ovn_controller[148476]: 2025-12-13T09:00:25Z|01263|binding|INFO|Releasing lport dacb29c7-c3ab-43a0-86c5-5add3f769729 from this chassis (sb_readonly=0)
Dec 13 09:00:25 compute-0 nova_compute[248510]: 2025-12-13 09:00:25.527 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:25 compute-0 ovn_controller[148476]: 2025-12-13T09:00:25Z|01264|binding|INFO|Releasing lport dacb29c7-c3ab-43a0-86c5-5add3f769729 from this chassis (sb_readonly=0)
Dec 13 09:00:25 compute-0 nova_compute[248510]: 2025-12-13 09:00:25.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:00:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2980: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 370 KiB/s wr, 49 op/s
Dec 13 09:00:26 compute-0 nova_compute[248510]: 2025-12-13 09:00:26.653 248514 DEBUG nova.compute.manager [req-ca894bfb-2f98-4fd7-bb0c-551397b3a452 req-81bafdb2-cdf7-474e-8d24-c61e822ec663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:26 compute-0 nova_compute[248510]: 2025-12-13 09:00:26.654 248514 DEBUG oslo_concurrency.lockutils [req-ca894bfb-2f98-4fd7-bb0c-551397b3a452 req-81bafdb2-cdf7-474e-8d24-c61e822ec663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:26 compute-0 nova_compute[248510]: 2025-12-13 09:00:26.654 248514 DEBUG oslo_concurrency.lockutils [req-ca894bfb-2f98-4fd7-bb0c-551397b3a452 req-81bafdb2-cdf7-474e-8d24-c61e822ec663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:26 compute-0 nova_compute[248510]: 2025-12-13 09:00:26.654 248514 DEBUG oslo_concurrency.lockutils [req-ca894bfb-2f98-4fd7-bb0c-551397b3a452 req-81bafdb2-cdf7-474e-8d24-c61e822ec663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:26 compute-0 nova_compute[248510]: 2025-12-13 09:00:26.655 248514 DEBUG nova.compute.manager [req-ca894bfb-2f98-4fd7-bb0c-551397b3a452 req-81bafdb2-cdf7-474e-8d24-c61e822ec663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:26 compute-0 nova_compute[248510]: 2025-12-13 09:00:26.655 248514 WARNING nova.compute.manager [req-ca894bfb-2f98-4fd7-bb0c-551397b3a452 req-81bafdb2-cdf7-474e-8d24-c61e822ec663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.
Dec 13 09:00:27 compute-0 ceph-mon[76537]: pgmap v2980: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 370 KiB/s wr, 49 op/s
Dec 13 09:00:27 compute-0 nova_compute[248510]: 2025-12-13 09:00:27.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:27 compute-0 nova_compute[248510]: 2025-12-13 09:00:27.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:27 compute-0 nova_compute[248510]: 2025-12-13 09:00:27.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:28.373 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2981: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 370 KiB/s wr, 66 op/s
Dec 13 09:00:29 compute-0 ceph-mon[76537]: pgmap v2981: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 370 KiB/s wr, 66 op/s
Dec 13 09:00:29 compute-0 NetworkManager[50376]: <info>  [1765616429.6773] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Dec 13 09:00:29 compute-0 NetworkManager[50376]: <info>  [1765616429.6779] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/520)
Dec 13 09:00:29 compute-0 nova_compute[248510]: 2025-12-13 09:00:29.676 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:29 compute-0 ovn_controller[148476]: 2025-12-13T09:00:29Z|01265|binding|INFO|Releasing lport dacb29c7-c3ab-43a0-86c5-5add3f769729 from this chassis (sb_readonly=0)
Dec 13 09:00:29 compute-0 nova_compute[248510]: 2025-12-13 09:00:29.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:29 compute-0 ovn_controller[148476]: 2025-12-13T09:00:29Z|01266|binding|INFO|Releasing lport dacb29c7-c3ab-43a0-86c5-5add3f769729 from this chassis (sb_readonly=0)
Dec 13 09:00:29 compute-0 nova_compute[248510]: 2025-12-13 09:00:29.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:30 compute-0 nova_compute[248510]: 2025-12-13 09:00:30.183 248514 DEBUG nova.compute.manager [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-changed-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:30 compute-0 nova_compute[248510]: 2025-12-13 09:00:30.183 248514 DEBUG nova.compute.manager [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Refreshing instance network info cache due to event network-changed-da5ed241-e9aa-44e8-ac5a-7dcb30431922. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:00:30 compute-0 nova_compute[248510]: 2025-12-13 09:00:30.183 248514 DEBUG oslo_concurrency.lockutils [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:00:30 compute-0 nova_compute[248510]: 2025-12-13 09:00:30.184 248514 DEBUG oslo_concurrency.lockutils [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:00:30 compute-0 nova_compute[248510]: 2025-12-13 09:00:30.184 248514 DEBUG nova.network.neutron [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Refreshing network info cache for port da5ed241-e9aa-44e8-ac5a-7dcb30431922 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:00:30 compute-0 nova_compute[248510]: 2025-12-13 09:00:30.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:30 compute-0 nova_compute[248510]: 2025-12-13 09:00:30.377 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2982: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Dec 13 09:00:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:00:31 compute-0 ceph-mon[76537]: pgmap v2982: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Dec 13 09:00:31 compute-0 nova_compute[248510]: 2025-12-13 09:00:31.701 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616416.70063, 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:00:31 compute-0 nova_compute[248510]: 2025-12-13 09:00:31.702 248514 INFO nova.compute.manager [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] VM Stopped (Lifecycle Event)
Dec 13 09:00:31 compute-0 nova_compute[248510]: 2025-12-13 09:00:31.766 248514 DEBUG nova.compute.manager [None req-0b211dad-2e2c-4fc9-99a1-11eb44b9d6f9 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:31 compute-0 nova_compute[248510]: 2025-12-13 09:00:31.819 248514 DEBUG nova.network.neutron [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updated VIF entry in instance network info cache for port da5ed241-e9aa-44e8-ac5a-7dcb30431922. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:00:31 compute-0 nova_compute[248510]: 2025-12-13 09:00:31.820 248514 DEBUG nova.network.neutron [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:31 compute-0 nova_compute[248510]: 2025-12-13 09:00:31.856 248514 DEBUG oslo_concurrency.lockutils [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:00:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2983: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:00:32 compute-0 nova_compute[248510]: 2025-12-13 09:00:32.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:32 compute-0 nova_compute[248510]: 2025-12-13 09:00:32.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:33 compute-0 ceph-mon[76537]: pgmap v2983: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:00:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2984: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:00:35 compute-0 ceph-mon[76537]: pgmap v2984: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:00:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:00:35 compute-0 podman[375213]: 2025-12-13 09:00:35.964437988 +0000 UTC m=+0.056163456 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec 13 09:00:35 compute-0 podman[375214]: 2025-12-13 09:00:35.985424361 +0000 UTC m=+0.076669077 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 09:00:35 compute-0 podman[375212]: 2025-12-13 09:00:35.987962934 +0000 UTC m=+0.083385203 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Dec 13 09:00:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2985: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Dec 13 09:00:36 compute-0 nova_compute[248510]: 2025-12-13 09:00:36.670 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:37 compute-0 nova_compute[248510]: 2025-12-13 09:00:37.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:37 compute-0 ceph-mon[76537]: pgmap v2985: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Dec 13 09:00:37 compute-0 nova_compute[248510]: 2025-12-13 09:00:37.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:37 compute-0 ovn_controller[148476]: 2025-12-13T09:00:37Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 09:00:37 compute-0 ovn_controller[148476]: 2025-12-13T09:00:37Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 09:00:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2986: 321 pgs: 321 active+clean; 97 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 87 op/s
Dec 13 09:00:38 compute-0 ceph-mon[76537]: pgmap v2986: 321 pgs: 321 active+clean; 97 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 87 op/s
Dec 13 09:00:39 compute-0 nova_compute[248510]: 2025-12-13 09:00:39.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:00:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:00:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:00:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:00:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:00:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:00:40 compute-0 sshd-session[375271]: Connection closed by authenticating user root 61.245.11.87 port 60584 [preauth]
Dec 13 09:00:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2987: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Dec 13 09:00:40 compute-0 nova_compute[248510]: 2025-12-13 09:00:40.804 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:40 compute-0 nova_compute[248510]: 2025-12-13 09:00:40.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 09:00:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:00:41 compute-0 ceph-mon[76537]: pgmap v2987: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Dec 13 09:00:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2988: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:00:42 compute-0 nova_compute[248510]: 2025-12-13 09:00:42.568 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:42 compute-0 nova_compute[248510]: 2025-12-13 09:00:42.791 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:43 compute-0 ceph-mon[76537]: pgmap v2988: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:00:43 compute-0 nova_compute[248510]: 2025-12-13 09:00:43.921 248514 INFO nova.compute.manager [None req-b3ce700e-229f-4457-a15f-806307a83b26 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Get console output
Dec 13 09:00:43 compute-0 nova_compute[248510]: 2025-12-13 09:00:43.927 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:00:44 compute-0 nova_compute[248510]: 2025-12-13 09:00:44.268 248514 DEBUG oslo_concurrency.lockutils [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:44 compute-0 nova_compute[248510]: 2025-12-13 09:00:44.268 248514 DEBUG oslo_concurrency.lockutils [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:44 compute-0 nova_compute[248510]: 2025-12-13 09:00:44.269 248514 INFO nova.compute.manager [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Rebooting instance
Dec 13 09:00:44 compute-0 nova_compute[248510]: 2025-12-13 09:00:44.284 248514 DEBUG oslo_concurrency.lockutils [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:00:44 compute-0 nova_compute[248510]: 2025-12-13 09:00:44.284 248514 DEBUG oslo_concurrency.lockutils [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:00:44 compute-0 nova_compute[248510]: 2025-12-13 09:00:44.284 248514 DEBUG nova.network.neutron [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:00:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2989: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:00:45 compute-0 ceph-mon[76537]: pgmap v2989: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:00:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:00:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2990: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:00:47 compute-0 ceph-mon[76537]: pgmap v2990: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:00:47 compute-0 nova_compute[248510]: 2025-12-13 09:00:47.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:47 compute-0 nova_compute[248510]: 2025-12-13 09:00:47.792 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:47 compute-0 nova_compute[248510]: 2025-12-13 09:00:47.796 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:47 compute-0 nova_compute[248510]: 2025-12-13 09:00:47.824 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:00:47 compute-0 nova_compute[248510]: 2025-12-13 09:00:47.824 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 09:00:47 compute-0 nova_compute[248510]: 2025-12-13 09:00:47.854 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 09:00:47 compute-0 nova_compute[248510]: 2025-12-13 09:00:47.863 248514 DEBUG nova.network.neutron [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:47 compute-0 nova_compute[248510]: 2025-12-13 09:00:47.910 248514 DEBUG oslo_concurrency.lockutils [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:00:47 compute-0 nova_compute[248510]: 2025-12-13 09:00:47.912 248514 DEBUG nova.compute.manager [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2991: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:00:48 compute-0 nova_compute[248510]: 2025-12-13 09:00:48.588 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:48 compute-0 nova_compute[248510]: 2025-12-13 09:00:48.589 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:48 compute-0 nova_compute[248510]: 2025-12-13 09:00:48.614 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:00:48 compute-0 nova_compute[248510]: 2025-12-13 09:00:48.712 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:48 compute-0 nova_compute[248510]: 2025-12-13 09:00:48.713 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:48 compute-0 nova_compute[248510]: 2025-12-13 09:00:48.724 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:00:48 compute-0 nova_compute[248510]: 2025-12-13 09:00:48.724 248514 INFO nova.compute.claims [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:00:48 compute-0 nova_compute[248510]: 2025-12-13 09:00:48.908 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:00:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 59K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1434 writes, 6976 keys, 1434 commit groups, 1.0 writes per commit group, ingest: 9.59 MB, 0.02 MB/s
                                           Interval WAL: 1434 writes, 1434 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     20.6      3.60              0.26        42    0.086       0      0       0.0       0.0
                                             L6      1/0    8.51 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7     79.6     67.0      5.21              1.07        41    0.127    259K    22K       0.0       0.0
                                            Sum      1/0    8.51 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7     47.1     48.0      8.81              1.33        83    0.106    259K    22K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1    113.5    110.4      0.67              0.26        14    0.048     57K   3578       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     79.6     67.0      5.21              1.07        41    0.127    259K    22K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     20.6      3.59              0.26        41    0.088       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.072, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.41 GB write, 0.08 MB/s write, 0.40 GB read, 0.08 MB/s read, 8.8 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 48.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000431 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3027,46.72 MB,15.3677%) FilterBlock(84,716.05 KB,0.230021%) IndexBlock(84,1.18 MB,0.387834%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 09:00:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:00:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1870465599' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:00:49 compute-0 ceph-mon[76537]: pgmap v2991: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.570 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.578 248514 DEBUG nova.compute.provider_tree [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.607 248514 DEBUG nova.scheduler.client.report [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.636 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.638 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.695 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.696 248514 DEBUG nova.network.neutron [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.719 248514 INFO nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.756 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.861 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.864 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.865 248514 INFO nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Creating image(s)
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.890 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.919 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.944 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:49 compute-0 nova_compute[248510]: 2025-12-13 09:00:49.948 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.027 248514 DEBUG nova.policy [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.034 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.035 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.036 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.036 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.061 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.065 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:50 compute-0 kernel: tapda5ed241-e9 (unregistering): left promiscuous mode
Dec 13 09:00:50 compute-0 NetworkManager[50376]: <info>  [1765616450.2750] device (tapda5ed241-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01267|binding|INFO|Releasing lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 from this chassis (sb_readonly=0)
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01268|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 down in Southbound
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01269|binding|INFO|Removing iface tapda5ed241-e9 ovn-installed in OVS
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.285 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.295 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.296 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f unbound from our chassis
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.298 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.299 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3849ecc8-3716-4903-be66-7ffaa78ee23f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.300 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f namespace which is not needed anymore
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.304 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:50 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Dec 13 09:00:50 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Dec 13 09:00:50 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007f.scope: Consumed 13.500s CPU time.
Dec 13 09:00:50 compute-0 systemd-machined[210538]: Machine qemu-154-instance-0000007f terminated.
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.371 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2992: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 145 KiB/s rd, 1.0 MiB/s wr, 45 op/s
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.453 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:00:50 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [NOTICE]   (375198) : haproxy version is 2.8.14-c23fe91
Dec 13 09:00:50 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [NOTICE]   (375198) : path to executable is /usr/sbin/haproxy
Dec 13 09:00:50 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [WARNING]  (375198) : Exiting Master process...
Dec 13 09:00:50 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [WARNING]  (375198) : Exiting Master process...
Dec 13 09:00:50 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [ALERT]    (375198) : Current worker (375200) exited with code 143 (Terminated)
Dec 13 09:00:50 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [WARNING]  (375198) : All workers exited. Exiting... (0)
Dec 13 09:00:50 compute-0 systemd[1]: libpod-47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35.scope: Deactivated successfully.
Dec 13 09:00:50 compute-0 podman[375429]: 2025-12-13 09:00:50.466168422 +0000 UTC m=+0.052592508 container died 47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 09:00:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35-userdata-shm.mount: Deactivated successfully.
Dec 13 09:00:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc161a8edde91d2349ff8deec297f008ca51bfac852b10dd03755608924ccb60-merged.mount: Deactivated successfully.
Dec 13 09:00:50 compute-0 podman[375429]: 2025-12-13 09:00:50.513492501 +0000 UTC m=+0.099916557 container cleanup 47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:00:50 compute-0 kernel: tapda5ed241-e9: entered promiscuous mode
Dec 13 09:00:50 compute-0 NetworkManager[50376]: <info>  [1765616450.5172] manager: (tapda5ed241-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/521)
Dec 13 09:00:50 compute-0 systemd-udevd[375394]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01270|binding|INFO|Claiming lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 for this chassis.
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01271|binding|INFO|da5ed241-e9aa-44e8-ac5a-7dcb30431922: Claiming fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 09:00:50 compute-0 kernel: tapda5ed241-e9 (unregistering): left promiscuous mode
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.524 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:50 compute-0 systemd[1]: libpod-conmon-47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35.scope: Deactivated successfully.
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.529 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01272|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 ovn-installed in OVS
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01273|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 up in Southbound
Dec 13 09:00:50 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1870465599' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01274|binding|INFO|Releasing lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 from this chassis (sb_readonly=1)
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01275|binding|INFO|Removing iface tapda5ed241-e9 ovn-installed in OVS
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01276|if_status|INFO|Dropped 6 log messages in last 156 seconds (most recently, 156 seconds ago) due to excessive rate
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01277|if_status|INFO|Not setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 down as sb is readonly
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01278|binding|INFO|Releasing lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 from this chassis (sb_readonly=0)
Dec 13 09:00:50 compute-0 ovn_controller[148476]: 2025-12-13T09:00:50Z|01279|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 down in Southbound
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.561 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.566 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.573 248514 DEBUG nova.objects.instance [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:00:50 compute-0 podman[375498]: 2025-12-13 09:00:50.587092822 +0000 UTC m=+0.042437969 container remove 47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.593 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e8181026-51bf-4280-91cb-9b7b7d4ed943]: (4, ('Sat Dec 13 09:00:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f (47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35)\n47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35\nSat Dec 13 09:00:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f (47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35)\n47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.595 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b985ebd-f4c6-43b9-b6c8-ba820e3b8584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.595 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.595 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Ensure instance console log exists: /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.596 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.596 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.596 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.596 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b944cc-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:50 compute-0 kernel: tapb9b944cc-b0: left promiscuous mode
Dec 13 09:00:50 compute-0 nova_compute[248510]: 2025-12-13 09:00:50.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0a21b157-bc7f-485e-b5b4-80e92adebaea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.683 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d544a2e-6820-40c8-9c21-474e5de0ad36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.684 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bc32d4-b9e9-477e-8ecf-3e8baebcc92c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.708 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f290aad-405d-480e-be32-74c281067655]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900086, 'reachable_time': 43880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375535, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:50 compute-0 systemd[1]: run-netns-ovnmeta\x2db9b944cc\x2db37e\x2d492b\x2dabd7\x2db6bfb9227f0f.mount: Deactivated successfully.
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.713 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.714 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dbdac92f-3b8d-44bc-bd01-240951c37b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.715 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f unbound from our chassis
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.716 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.716 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[813a2be8-8ea1-40b7-994b-4da1a3c9b58e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.717 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f unbound from our chassis
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.718 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:00:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.719 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[299d8218-cf7f-4760-847c-231fcc92ea67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.120 248514 INFO nova.virt.libvirt.driver [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance shutdown successfully.
Dec 13 09:00:51 compute-0 kernel: tapda5ed241-e9: entered promiscuous mode
Dec 13 09:00:51 compute-0 NetworkManager[50376]: <info>  [1765616451.1759] manager: (tapda5ed241-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/522)
Dec 13 09:00:51 compute-0 ovn_controller[148476]: 2025-12-13T09:00:51Z|01280|binding|INFO|Claiming lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 for this chassis.
Dec 13 09:00:51 compute-0 ovn_controller[148476]: 2025-12-13T09:00:51Z|01281|binding|INFO|da5ed241-e9aa-44e8-ac5a-7dcb30431922: Claiming fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.186 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.187 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f bound to our chassis
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.189 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f
Dec 13 09:00:51 compute-0 NetworkManager[50376]: <info>  [1765616451.1927] device (tapda5ed241-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.192 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:51 compute-0 NetworkManager[50376]: <info>  [1765616451.1938] device (tapda5ed241-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:00:51 compute-0 ovn_controller[148476]: 2025-12-13T09:00:51Z|01282|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 ovn-installed in OVS
Dec 13 09:00:51 compute-0 ovn_controller[148476]: 2025-12-13T09:00:51Z|01283|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 up in Southbound
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.196 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.199 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.205 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea946fc3-6612-4011-8757-08426861a6cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.206 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb9b944cc-b1 in ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.212 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb9b944cc-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.213 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6881cc27-ec6d-4bae-bb03-fd83c694588c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.214 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a749fe17-3429-4e39-b637-bb758461a1f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 systemd-machined[210538]: New machine qemu-155-instance-0000007f.
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.232 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[787f5bc5-b460-4426-a8f6-bfb3bbeb35ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 systemd[1]: Started Virtual Machine qemu-155-instance-0000007f.
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.250 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a984d784-5127-4f4d-ab64-f02c895a4dbd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.286 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8df436d4-ba8e-4726-9bf2-2555503a20a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.294 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db86190a-e8f1-4682-acac-7a747ef71c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 NetworkManager[50376]: <info>  [1765616451.2985] manager: (tapb9b944cc-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/523)
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.301 248514 DEBUG nova.compute.manager [req-88f18f91-9f99-4036-89d8-ee7abae2ab72 req-4df5f693-ce7c-4b0b-95d1-91544ffeab26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-unplugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.301 248514 DEBUG oslo_concurrency.lockutils [req-88f18f91-9f99-4036-89d8-ee7abae2ab72 req-4df5f693-ce7c-4b0b-95d1-91544ffeab26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.301 248514 DEBUG oslo_concurrency.lockutils [req-88f18f91-9f99-4036-89d8-ee7abae2ab72 req-4df5f693-ce7c-4b0b-95d1-91544ffeab26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.301 248514 DEBUG oslo_concurrency.lockutils [req-88f18f91-9f99-4036-89d8-ee7abae2ab72 req-4df5f693-ce7c-4b0b-95d1-91544ffeab26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.302 248514 DEBUG nova.compute.manager [req-88f18f91-9f99-4036-89d8-ee7abae2ab72 req-4df5f693-ce7c-4b0b-95d1-91544ffeab26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-unplugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.302 248514 WARNING nova.compute.manager [req-88f18f91-9f99-4036-89d8-ee7abae2ab72 req-4df5f693-ce7c-4b0b-95d1-91544ffeab26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-unplugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state reboot_started.
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.336 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e127793c-ca80-44cb-8ca0-2de3bad25ce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.339 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[443d1d14-8fc6-46d1-b6df-235082c27f23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 NetworkManager[50376]: <info>  [1765616451.3651] device (tapb9b944cc-b0): carrier: link connected
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.372 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[052fd6f5-905a-446a-9f29-3b4760ecb52a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.389 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a430c3-4d31-49d4-b4be-a4731eb71dc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9b944cc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:4f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 902858, 'reachable_time': 15798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375580, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.402 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f791bda-19a7-47cc-bb90-6c440620d6a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:4fac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 902858, 'tstamp': 902858}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375581, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.415 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3721b877-1351-468c-aba7-5ba266940620]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9b944cc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:4f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 902858, 'reachable_time': 15798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375582, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.452 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[853ba085-ae88-4b96-905a-ca8b4372f214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.535 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39284548-fb04-4f78-b4ce-9752360a43a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.537 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b944cc-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.537 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.538 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9b944cc-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:51 compute-0 NetworkManager[50376]: <info>  [1765616451.5408] manager: (tapb9b944cc-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Dec 13 09:00:51 compute-0 kernel: tapb9b944cc-b0: entered promiscuous mode
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.544 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9b944cc-b0, col_values=(('external_ids', {'iface-id': 'dacb29c7-c3ab-43a0-86c5-5add3f769729'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:51 compute-0 ovn_controller[148476]: 2025-12-13T09:00:51Z|01284|binding|INFO|Releasing lport dacb29c7-c3ab-43a0-86c5-5add3f769729 from this chassis (sb_readonly=0)
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.554 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.566 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.568 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b02732-e941-465f-a075-7fce84d66f8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.569 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-b9b944cc-b37e-492b-abd7-b6bfb9227f0f
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.pid.haproxy
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID b9b944cc-b37e-492b-abd7-b6bfb9227f0f
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:00:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.570 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'env', 'PROCESS_TAG=haproxy-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:00:51 compute-0 ceph-mon[76537]: pgmap v2992: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 145 KiB/s rd, 1.0 MiB/s wr, 45 op/s
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.952 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 9a653f54-8266-4f9c-ac5a-b4991534e9fb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.952 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616451.9514248, 9a653f54-8266-4f9c-ac5a-b4991534e9fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.953 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] VM Resumed (Lifecycle Event)
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.959 248514 INFO nova.virt.libvirt.driver [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance running successfully.
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.960 248514 INFO nova.virt.libvirt.driver [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance soft rebooted successfully.
Dec 13 09:00:51 compute-0 nova_compute[248510]: 2025-12-13 09:00:51.961 248514 DEBUG nova.compute.manager [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:51 compute-0 podman[375655]: 2025-12-13 09:00:51.973785826 +0000 UTC m=+0.050970209 container create 114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.007 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.014 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:00:52 compute-0 systemd[1]: Started libpod-conmon-114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b.scope.
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.024 248514 DEBUG nova.network.neutron [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Successfully updated port: 3ba83e70-35d2-4947-85b4-64f21eae817c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.035 248514 DEBUG oslo_concurrency.lockutils [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.037 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616451.952874, 9a653f54-8266-4f9c-ac5a-b4991534e9fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.037 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] VM Started (Lifecycle Event)
Dec 13 09:00:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.040 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:00:52 compute-0 podman[375655]: 2025-12-13 09:00:51.946898428 +0000 UTC m=+0.024082841 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.041 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.041 248514 DEBUG nova.network.neutron [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26999b8fd1c742d22d53c509972f804cd2c1a3b692246b65cd69c5f1f676657/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:52 compute-0 podman[375655]: 2025-12-13 09:00:52.05483851 +0000 UTC m=+0.132022903 container init 114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:00:52 compute-0 podman[375655]: 2025-12-13 09:00:52.059576796 +0000 UTC m=+0.136761189 container start 114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.074 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.078 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:00:52 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375671]: [NOTICE]   (375675) : New worker (375677) forked
Dec 13 09:00:52 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375671]: [NOTICE]   (375675) : Loading success.
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.126 248514 DEBUG nova.compute.manager [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-changed-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.127 248514 DEBUG nova.compute.manager [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Refreshing instance network info cache due to event network-changed-3ba83e70-35d2-4947-85b4-64f21eae817c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.127 248514 DEBUG oslo_concurrency.lockutils [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.380 248514 DEBUG nova.network.neutron [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:00:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2993: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 32 KiB/s wr, 3 op/s
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:52 compute-0 nova_compute[248510]: 2025-12-13 09:00:52.795 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:52 compute-0 ceph-mon[76537]: pgmap v2993: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 32 KiB/s wr, 3 op/s
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.237 248514 DEBUG nova.network.neutron [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Updating instance_info_cache with network_info: [{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.270 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.271 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Instance network_info: |[{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.271 248514 DEBUG oslo_concurrency.lockutils [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.272 248514 DEBUG nova.network.neutron [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Refreshing network info cache for port 3ba83e70-35d2-4947-85b4-64f21eae817c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.275 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Start _get_guest_xml network_info=[{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.279 248514 WARNING nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.290 248514 DEBUG nova.virt.libvirt.host [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.291 248514 DEBUG nova.virt.libvirt.host [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.296 248514 DEBUG nova.virt.libvirt.host [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.297 248514 DEBUG nova.virt.libvirt.host [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.298 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.298 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.299 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.299 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.300 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.300 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.301 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.301 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.302 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.302 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.303 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.303 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.307 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.392 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.392 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.393 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.393 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.393 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.394 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.394 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.394 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.395 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.395 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.395 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.395 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.396 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.396 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.396 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.397 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.397 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.397 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.397 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-unplugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.398 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.398 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.398 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.399 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-unplugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.399 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-unplugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.399 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.400 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.400 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.400 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.401 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.401 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.401 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.401 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.402 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.402 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.402 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.402 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.403 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.403 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.403 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.403 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.404 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.404 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.
Dec 13 09:00:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:00:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355443234' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.839 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.871 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:53 compute-0 nova_compute[248510]: 2025-12-13 09:00:53.876 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2355443234' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:00:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:00:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4196716338' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.403 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.406 248514 DEBUG nova.virt.libvirt.vif [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430432318',display_name='tempest-TestNetworkBasicOps-server-1430432318',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430432318',id=128,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLZwYzQivS6OmxYADpL/9VcE8wCVSuV+/vnYuB1qCFxjkNKqDVEv7izs9w56y2Ibc/1G1HwpaZexOYOvh1sTJGy09M2esErCHKSz7Mr00MXTFv2qZyMndAh7JN+dvFj7tg==',key_name='tempest-TestNetworkBasicOps-481585601',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-p1inmefq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:00:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=046adf00-a09d-4e5c-89d5-e63a2ee02fc1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.406 248514 DEBUG nova.network.os_vif_util [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.407 248514 DEBUG nova.network.os_vif_util [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.409 248514 DEBUG nova.objects.instance [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:00:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2994: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.436 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:00:54 compute-0 nova_compute[248510]:   <uuid>046adf00-a09d-4e5c-89d5-e63a2ee02fc1</uuid>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   <name>instance-00000080</name>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-1430432318</nova:name>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:00:53</nova:creationTime>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:00:54 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:00:54 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:00:54 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:00:54 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:00:54 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:00:54 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 09:00:54 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:00:54 compute-0 nova_compute[248510]:         <nova:port uuid="3ba83e70-35d2-4947-85b4-64f21eae817c">
Dec 13 09:00:54 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <system>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <entry name="serial">046adf00-a09d-4e5c-89d5-e63a2ee02fc1</entry>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <entry name="uuid">046adf00-a09d-4e5c-89d5-e63a2ee02fc1</entry>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     </system>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   <os>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   </os>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   <features>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   </features>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk">
Dec 13 09:00:54 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       </source>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:00:54 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk.config">
Dec 13 09:00:54 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       </source>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:00:54 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:1a:55:74"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <target dev="tap3ba83e70-35"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/console.log" append="off"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <video>
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     </video>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:00:54 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:00:54 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:00:54 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:00:54 compute-0 nova_compute[248510]: </domain>
Dec 13 09:00:54 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.438 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Preparing to wait for external event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.438 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.438 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.438 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.439 248514 DEBUG nova.virt.libvirt.vif [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430432318',display_name='tempest-TestNetworkBasicOps-server-1430432318',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430432318',id=128,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLZwYzQivS6OmxYADpL/9VcE8wCVSuV+/vnYuB1qCFxjkNKqDVEv7izs9w56y2Ibc/1G1HwpaZexOYOvh1sTJGy09M2esErCHKSz7Mr00MXTFv2qZyMndAh7JN+dvFj7tg==',key_name='tempest-TestNetworkBasicOps-481585601',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-p1inmefq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:00:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=046adf00-a09d-4e5c-89d5-e63a2ee02fc1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.439 248514 DEBUG nova.network.os_vif_util [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.440 248514 DEBUG nova.network.os_vif_util [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.440 248514 DEBUG os_vif [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.441 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.441 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.444 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.444 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ba83e70-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.445 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ba83e70-35, col_values=(('external_ids', {'iface-id': '3ba83e70-35d2-4947-85b4-64f21eae817c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:55:74', 'vm-uuid': '046adf00-a09d-4e5c-89d5-e63a2ee02fc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:54 compute-0 NetworkManager[50376]: <info>  [1765616454.4484] manager: (tap3ba83e70-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/525)
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.452 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.456 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.458 248514 INFO os_vif [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35')
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.526 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.527 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.527 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:1a:55:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.528 248514 INFO nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Using config drive
Dec 13 09:00:54 compute-0 nova_compute[248510]: 2025-12-13 09:00:54.554 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4196716338' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:00:54 compute-0 ceph-mon[76537]: pgmap v2994: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 13 09:00:55 compute-0 nova_compute[248510]: 2025-12-13 09:00:55.309 248514 INFO nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Creating config drive at /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/disk.config
Dec 13 09:00:55 compute-0 nova_compute[248510]: 2025-12-13 09:00:55.314 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplk1y0x9w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.438 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.439 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.439 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:55 compute-0 nova_compute[248510]: 2025-12-13 09:00:55.470 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplk1y0x9w" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:55 compute-0 nova_compute[248510]: 2025-12-13 09:00:55.502 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:00:55 compute-0 nova_compute[248510]: 2025-12-13 09:00:55.507 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/disk.config 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:00:55 compute-0 nova_compute[248510]: 2025-12-13 09:00:55.669 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/disk.config 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:00:55 compute-0 nova_compute[248510]: 2025-12-13 09:00:55.670 248514 INFO nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Deleting local config drive /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/disk.config because it was imported into RBD.
Dec 13 09:00:55 compute-0 kernel: tap3ba83e70-35: entered promiscuous mode
Dec 13 09:00:55 compute-0 NetworkManager[50376]: <info>  [1765616455.7363] manager: (tap3ba83e70-35): new Tun device (/org/freedesktop/NetworkManager/Devices/526)
Dec 13 09:00:55 compute-0 ovn_controller[148476]: 2025-12-13T09:00:55Z|01285|binding|INFO|Claiming lport 3ba83e70-35d2-4947-85b4-64f21eae817c for this chassis.
Dec 13 09:00:55 compute-0 ovn_controller[148476]: 2025-12-13T09:00:55Z|01286|binding|INFO|3ba83e70-35d2-4947-85b4-64f21eae817c: Claiming fa:16:3e:1a:55:74 10.100.0.6
Dec 13 09:00:55 compute-0 nova_compute[248510]: 2025-12-13 09:00:55.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.747 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:55:74 10.100.0.6'], port_security=['fa:16:3e:1a:55:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '046adf00-a09d-4e5c-89d5-e63a2ee02fc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea2504ae-fad7-498d-b82b-9d089e69588d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3ba83e70-35d2-4947-85b4-64f21eae817c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.748 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3ba83e70-35d2-4947-85b4-64f21eae817c in datapath 9da7b572-116c-40f0-9b1e-0183fa9d3f87 bound to our chassis
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.750 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9da7b572-116c-40f0-9b1e-0183fa9d3f87
Dec 13 09:00:55 compute-0 ovn_controller[148476]: 2025-12-13T09:00:55Z|01287|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c ovn-installed in OVS
Dec 13 09:00:55 compute-0 ovn_controller[148476]: 2025-12-13T09:00:55Z|01288|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c up in Southbound
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.761 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8834319-cffe-43e1-b65c-1b068e42a9bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.762 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9da7b572-11 in ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:00:55 compute-0 nova_compute[248510]: 2025-12-13 09:00:55.762 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.765 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9da7b572-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.765 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0f83ca-aee2-4954-b3e6-45d54857a4f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.767 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd25e07c-4f5c-4137-89dc-1fea3746f59d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:55 compute-0 systemd-udevd[375822]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:00:55 compute-0 systemd-machined[210538]: New machine qemu-156-instance-00000080.
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.786 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[08648dcb-6f66-451a-addc-e81b0558ec98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:55 compute-0 systemd[1]: Started Virtual Machine qemu-156-instance-00000080.
Dec 13 09:00:55 compute-0 NetworkManager[50376]: <info>  [1765616455.7916] device (tap3ba83e70-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:00:55 compute-0 NetworkManager[50376]: <info>  [1765616455.7925] device (tap3ba83e70-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.805 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f7b39f-67a5-44af-84c4-835c4ccf6bdb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.846 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7827c4a5-c124-47de-942e-d5dca4cf13e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.850 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52173a04-87a9-418f-8a06-b852d1dcc8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:55 compute-0 NetworkManager[50376]: <info>  [1765616455.8518] manager: (tap9da7b572-10): new Veth device (/org/freedesktop/NetworkManager/Devices/527)
Dec 13 09:00:55 compute-0 systemd-udevd[375827]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.889 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8164c346-9822-4b7c-bb55-9b315e2cc4c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.894 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d75e5f-99b6-42b8-9d63-1008c9397398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:55 compute-0 NetworkManager[50376]: <info>  [1765616455.9220] device (tap9da7b572-10): carrier: link connected
Dec 13 09:00:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.929 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[de0e13b7-2ca9-421b-be0d-02604da4f367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.954 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe3d5df-7161-491f-8358-f55cb3f120b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9da7b572-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:10:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903313, 'reachable_time': 32396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375855, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.975 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5eb7c2b-589b-44bb-ac06-76229e640d5b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:101b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903313, 'tstamp': 903313}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375856, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.995 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e630b34f-8adb-426e-be17-d17536e47f08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9da7b572-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:10:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903313, 'reachable_time': 32396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375857, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.037 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3477e7-32e1-41e2-a949-ff07d785be9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.109 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b545de9-9b49-4cfc-94f2-944e035b6ee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.111 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9da7b572-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.111 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.112 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9da7b572-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:56 compute-0 NetworkManager[50376]: <info>  [1765616456.1141] manager: (tap9da7b572-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Dec 13 09:00:56 compute-0 kernel: tap9da7b572-10: entered promiscuous mode
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.115 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9da7b572-10, col_values=(('external_ids', {'iface-id': '92adc7c4-81cb-40ce-bd1a-0498c06b06d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:00:56 compute-0 ovn_controller[148476]: 2025-12-13T09:00:56Z|01289|binding|INFO|Releasing lport 92adc7c4-81cb-40ce-bd1a-0498c06b06d5 from this chassis (sb_readonly=0)
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.132 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9da7b572-116c-40f0-9b1e-0183fa9d3f87.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9da7b572-116c-40f0-9b1e-0183fa9d3f87.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.132 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f164950-f2f0-4094-9bec-2b08741b8442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.134 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-9da7b572-116c-40f0-9b1e-0183fa9d3f87
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/9da7b572-116c-40f0-9b1e-0183fa9d3f87.pid.haproxy
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 9da7b572-116c-40f0-9b1e-0183fa9d3f87
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:00:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.134 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'env', 'PROCESS_TAG=haproxy-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9da7b572-116c-40f0-9b1e-0183fa9d3f87.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.326 248514 DEBUG nova.compute.manager [req-eb69779f-2765-48f4-b429-6aeed87212cc req-45fc4fcf-3ed3-4a63-a3fe-6971f6d8f99f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.334 248514 DEBUG oslo_concurrency.lockutils [req-eb69779f-2765-48f4-b429-6aeed87212cc req-45fc4fcf-3ed3-4a63-a3fe-6971f6d8f99f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.334 248514 DEBUG oslo_concurrency.lockutils [req-eb69779f-2765-48f4-b429-6aeed87212cc req-45fc4fcf-3ed3-4a63-a3fe-6971f6d8f99f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.335 248514 DEBUG oslo_concurrency.lockutils [req-eb69779f-2765-48f4-b429-6aeed87212cc req-45fc4fcf-3ed3-4a63-a3fe-6971f6d8f99f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.336 248514 DEBUG nova.compute.manager [req-eb69779f-2765-48f4-b429-6aeed87212cc req-45fc4fcf-3ed3-4a63-a3fe-6971f6d8f99f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Processing event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:00:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2995: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.515 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616456.514865, 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.517 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] VM Started (Lifecycle Event)
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.521 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.526 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.530 248514 INFO nova.virt.libvirt.driver [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Instance spawned successfully.
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.532 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:00:56 compute-0 podman[375927]: 2025-12-13 09:00:56.560522477 +0000 UTC m=+0.069508193 container create 69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.565 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.574 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.586 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.591 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.592 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.592 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.593 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.593 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.599 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.600 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616456.5152326, 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.600 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] VM Paused (Lifecycle Event)
Dec 13 09:00:56 compute-0 podman[375927]: 2025-12-13 09:00:56.521945882 +0000 UTC m=+0.030931618 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:00:56 compute-0 systemd[1]: Started libpod-conmon-69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56.scope.
Dec 13 09:00:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf7277745e8100040655f44041079f812c029e04aac3444da67e2a87f1405fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.643 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.647 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616456.525831, 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.648 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] VM Resumed (Lifecycle Event)
Dec 13 09:00:56 compute-0 podman[375927]: 2025-12-13 09:00:56.653525313 +0000 UTC m=+0.162511049 container init 69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 09:00:56 compute-0 podman[375927]: 2025-12-13 09:00:56.660872213 +0000 UTC m=+0.169857929 container start 69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.678 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.683 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.689 248514 INFO nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Took 6.83 seconds to spawn the instance on the hypervisor.
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.689 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:00:56 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [NOTICE]   (375947) : New worker (375949) forked
Dec 13 09:00:56 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [NOTICE]   (375947) : Loading success.
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.721 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.756 248514 INFO nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Took 8.07 seconds to build instance.
Dec 13 09:00:56 compute-0 nova_compute[248510]: 2025-12-13 09:00:56.773 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:57 compute-0 nova_compute[248510]: 2025-12-13 09:00:57.055 248514 DEBUG nova.network.neutron [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Updated VIF entry in instance network info cache for port 3ba83e70-35d2-4947-85b4-64f21eae817c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:00:57 compute-0 nova_compute[248510]: 2025-12-13 09:00:57.056 248514 DEBUG nova.network.neutron [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Updating instance_info_cache with network_info: [{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:00:57 compute-0 nova_compute[248510]: 2025-12-13 09:00:57.085 248514 DEBUG oslo_concurrency.lockutils [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:00:57 compute-0 ceph-mon[76537]: pgmap v2995: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 13 09:00:57 compute-0 nova_compute[248510]: 2025-12-13 09:00:57.798 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:00:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2996: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Dec 13 09:00:58 compute-0 nova_compute[248510]: 2025-12-13 09:00:58.626 248514 DEBUG nova.compute.manager [req-136d294c-fae8-485c-8445-ee8a197b49d7 req-5599166f-340d-4ff3-8600-602f5d62c3ae 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:00:58 compute-0 nova_compute[248510]: 2025-12-13 09:00:58.627 248514 DEBUG oslo_concurrency.lockutils [req-136d294c-fae8-485c-8445-ee8a197b49d7 req-5599166f-340d-4ff3-8600-602f5d62c3ae 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:00:58 compute-0 nova_compute[248510]: 2025-12-13 09:00:58.627 248514 DEBUG oslo_concurrency.lockutils [req-136d294c-fae8-485c-8445-ee8a197b49d7 req-5599166f-340d-4ff3-8600-602f5d62c3ae 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:00:58 compute-0 nova_compute[248510]: 2025-12-13 09:00:58.627 248514 DEBUG oslo_concurrency.lockutils [req-136d294c-fae8-485c-8445-ee8a197b49d7 req-5599166f-340d-4ff3-8600-602f5d62c3ae 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:00:58 compute-0 nova_compute[248510]: 2025-12-13 09:00:58.627 248514 DEBUG nova.compute.manager [req-136d294c-fae8-485c-8445-ee8a197b49d7 req-5599166f-340d-4ff3-8600-602f5d62c3ae 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] No waiting events found dispatching network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:00:58 compute-0 nova_compute[248510]: 2025-12-13 09:00:58.628 248514 WARNING nova.compute.manager [req-136d294c-fae8-485c-8445-ee8a197b49d7 req-5599166f-340d-4ff3-8600-602f5d62c3ae 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received unexpected event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c for instance with vm_state active and task_state None.
Dec 13 09:00:58 compute-0 ceph-mon[76537]: pgmap v2996: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Dec 13 09:00:59 compute-0 nova_compute[248510]: 2025-12-13 09:00:59.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2997: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Dec 13 09:01:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:01:01 compute-0 nova_compute[248510]: 2025-12-13 09:01:01.255 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:01:01 compute-0 nova_compute[248510]: 2025-12-13 09:01:01.403 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 9a653f54-8266-4f9c-ac5a-b4991534e9fb _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 09:01:01 compute-0 nova_compute[248510]: 2025-12-13 09:01:01.403 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 09:01:01 compute-0 nova_compute[248510]: 2025-12-13 09:01:01.404 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:01 compute-0 nova_compute[248510]: 2025-12-13 09:01:01.404 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:01 compute-0 nova_compute[248510]: 2025-12-13 09:01:01.405 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:01 compute-0 nova_compute[248510]: 2025-12-13 09:01:01.405 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:01 compute-0 nova_compute[248510]: 2025-12-13 09:01:01.463 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:01 compute-0 nova_compute[248510]: 2025-12-13 09:01:01.464 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:01 compute-0 ceph-mon[76537]: pgmap v2997: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Dec 13 09:01:01 compute-0 CROND[375959]: (root) CMD (run-parts /etc/cron.hourly)
Dec 13 09:01:01 compute-0 run-parts[375962]: (/etc/cron.hourly) starting 0anacron
Dec 13 09:01:01 compute-0 run-parts[375968]: (/etc/cron.hourly) finished 0anacron
Dec 13 09:01:01 compute-0 CROND[375958]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 13 09:01:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2998: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Dec 13 09:01:02 compute-0 nova_compute[248510]: 2025-12-13 09:01:02.799 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:03 compute-0 ceph-mon[76537]: pgmap v2998: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Dec 13 09:01:03 compute-0 ovn_controller[148476]: 2025-12-13T09:01:03Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 09:01:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2999: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Dec 13 09:01:04 compute-0 nova_compute[248510]: 2025-12-13 09:01:04.451 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:05 compute-0 ceph-mon[76537]: pgmap v2999: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Dec 13 09:01:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:01:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3000: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 22 KiB/s wr, 100 op/s
Dec 13 09:01:06 compute-0 nova_compute[248510]: 2025-12-13 09:01:06.508 248514 DEBUG nova.compute.manager [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-changed-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:06 compute-0 nova_compute[248510]: 2025-12-13 09:01:06.509 248514 DEBUG nova.compute.manager [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Refreshing instance network info cache due to event network-changed-3ba83e70-35d2-4947-85b4-64f21eae817c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:01:06 compute-0 nova_compute[248510]: 2025-12-13 09:01:06.509 248514 DEBUG oslo_concurrency.lockutils [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:01:06 compute-0 nova_compute[248510]: 2025-12-13 09:01:06.510 248514 DEBUG oslo_concurrency.lockutils [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:01:06 compute-0 nova_compute[248510]: 2025-12-13 09:01:06.510 248514 DEBUG nova.network.neutron [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Refreshing network info cache for port 3ba83e70-35d2-4947-85b4-64f21eae817c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:01:06 compute-0 nova_compute[248510]: 2025-12-13 09:01:06.922 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:01:06 compute-0 podman[375971]: 2025-12-13 09:01:06.983122946 +0000 UTC m=+0.067170245 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 09:01:06 compute-0 podman[375970]: 2025-12-13 09:01:06.99022836 +0000 UTC m=+0.071917842 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=multipathd)
Dec 13 09:01:07 compute-0 podman[375969]: 2025-12-13 09:01:07.020208184 +0000 UTC m=+0.106215841 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Dec 13 09:01:07 compute-0 ceph-mon[76537]: pgmap v3000: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 22 KiB/s wr, 100 op/s
Dec 13 09:01:07 compute-0 nova_compute[248510]: 2025-12-13 09:01:07.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:01:07 compute-0 nova_compute[248510]: 2025-12-13 09:01:07.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3001: 321 pgs: 321 active+clean; 173 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 475 KiB/s wr, 123 op/s
Dec 13 09:01:08 compute-0 nova_compute[248510]: 2025-12-13 09:01:08.714 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:08 compute-0 nova_compute[248510]: 2025-12-13 09:01:08.715 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:08 compute-0 nova_compute[248510]: 2025-12-13 09:01:08.716 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:08 compute-0 nova_compute[248510]: 2025-12-13 09:01:08.716 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:08 compute-0 nova_compute[248510]: 2025-12-13 09:01:08.716 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:08 compute-0 nova_compute[248510]: 2025-12-13 09:01:08.717 248514 INFO nova.compute.manager [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Terminating instance
Dec 13 09:01:08 compute-0 nova_compute[248510]: 2025-12-13 09:01:08.718 248514 DEBUG nova.compute.manager [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:01:08 compute-0 kernel: tap3ba83e70-35 (unregistering): left promiscuous mode
Dec 13 09:01:08 compute-0 NetworkManager[50376]: <info>  [1765616468.7633] device (tap3ba83e70-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:01:08 compute-0 ovn_controller[148476]: 2025-12-13T09:01:08Z|01290|binding|INFO|Releasing lport 3ba83e70-35d2-4947-85b4-64f21eae817c from this chassis (sb_readonly=0)
Dec 13 09:01:08 compute-0 nova_compute[248510]: 2025-12-13 09:01:08.774 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:08 compute-0 ovn_controller[148476]: 2025-12-13T09:01:08Z|01291|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c down in Southbound
Dec 13 09:01:08 compute-0 ovn_controller[148476]: 2025-12-13T09:01:08Z|01292|binding|INFO|Removing iface tap3ba83e70-35 ovn-installed in OVS
Dec 13 09:01:08 compute-0 nova_compute[248510]: 2025-12-13 09:01:08.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:08 compute-0 nova_compute[248510]: 2025-12-13 09:01:08.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:08 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d00000080.scope: Deactivated successfully.
Dec 13 09:01:08 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d00000080.scope: Consumed 12.562s CPU time.
Dec 13 09:01:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:08.823 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:55:74 10.100.0.6'], port_security=['fa:16:3e:1a:55:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '046adf00-a09d-4e5c-89d5-e63a2ee02fc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea2504ae-fad7-498d-b82b-9d089e69588d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3ba83e70-35d2-4947-85b4-64f21eae817c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:01:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:08.824 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3ba83e70-35d2-4947-85b4-64f21eae817c in datapath 9da7b572-116c-40f0-9b1e-0183fa9d3f87 unbound from our chassis
Dec 13 09:01:08 compute-0 systemd-machined[210538]: Machine qemu-156-instance-00000080 terminated.
Dec 13 09:01:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:08.826 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9da7b572-116c-40f0-9b1e-0183fa9d3f87, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:01:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:08.827 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7c378d9c-e1a1-49a7-b382-0a9bfd8aa918]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:08.828 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 namespace which is not needed anymore
Dec 13 09:01:08 compute-0 kernel: tap3ba83e70-35: entered promiscuous mode
Dec 13 09:01:08 compute-0 NetworkManager[50376]: <info>  [1765616468.9450] manager: (tap3ba83e70-35): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Dec 13 09:01:08 compute-0 systemd-udevd[376034]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:01:08 compute-0 ovn_controller[148476]: 2025-12-13T09:01:08Z|01293|binding|INFO|Claiming lport 3ba83e70-35d2-4947-85b4-64f21eae817c for this chassis.
Dec 13 09:01:08 compute-0 ovn_controller[148476]: 2025-12-13T09:01:08Z|01294|binding|INFO|3ba83e70-35d2-4947-85b4-64f21eae817c: Claiming fa:16:3e:1a:55:74 10.100.0.6
Dec 13 09:01:08 compute-0 nova_compute[248510]: 2025-12-13 09:01:08.992 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:08 compute-0 kernel: tap3ba83e70-35 (unregistering): left promiscuous mode
Dec 13 09:01:08 compute-0 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 09:01:09 compute-0 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 09:01:09 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [NOTICE]   (375947) : haproxy version is 2.8.14-c23fe91
Dec 13 09:01:09 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [NOTICE]   (375947) : path to executable is /usr/sbin/haproxy
Dec 13 09:01:09 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [WARNING]  (375947) : Exiting Master process...
Dec 13 09:01:09 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [WARNING]  (375947) : Exiting Master process...
Dec 13 09:01:09 compute-0 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 09:01:09 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [ALERT]    (375947) : Current worker (375949) exited with code 143 (Terminated)
Dec 13 09:01:09 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [WARNING]  (375947) : All workers exited. Exiting... (0)
Dec 13 09:01:09 compute-0 systemd[1]: libpod-69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56.scope: Deactivated successfully.
Dec 13 09:01:09 compute-0 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.020 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:55:74 10.100.0.6'], port_security=['fa:16:3e:1a:55:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '046adf00-a09d-4e5c-89d5-e63a2ee02fc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea2504ae-fad7-498d-b82b-9d089e69588d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3ba83e70-35d2-4947-85b4-64f21eae817c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.022 248514 INFO nova.virt.libvirt.driver [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Instance destroyed successfully.
Dec 13 09:01:09 compute-0 ovn_controller[148476]: 2025-12-13T09:01:09Z|01295|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c ovn-installed in OVS
Dec 13 09:01:09 compute-0 ovn_controller[148476]: 2025-12-13T09:01:09Z|01296|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c up in Southbound
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.022 248514 DEBUG nova.objects.instance [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:01:09 compute-0 ovn_controller[148476]: 2025-12-13T09:01:09Z|01297|binding|INFO|Releasing lport 3ba83e70-35d2-4947-85b4-64f21eae817c from this chassis (sb_readonly=1)
Dec 13 09:01:09 compute-0 ovn_controller[148476]: 2025-12-13T09:01:09Z|01298|if_status|INFO|Dropped 2 log messages in last 19 seconds (most recently, 19 seconds ago) due to excessive rate
Dec 13 09:01:09 compute-0 ovn_controller[148476]: 2025-12-13T09:01:09Z|01299|if_status|INFO|Not setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c down as sb is readonly
Dec 13 09:01:09 compute-0 podman[376055]: 2025-12-13 09:01:09.024877653 +0000 UTC m=+0.096360089 container died 69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:01:09 compute-0 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.024 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:09 compute-0 ovn_controller[148476]: 2025-12-13T09:01:09Z|01300|binding|INFO|Removing iface tap3ba83e70-35 ovn-installed in OVS
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.027 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:09 compute-0 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 09:01:09 compute-0 ovn_controller[148476]: 2025-12-13T09:01:09Z|01301|binding|INFO|Releasing lport 3ba83e70-35d2-4947-85b4-64f21eae817c from this chassis (sb_readonly=0)
Dec 13 09:01:09 compute-0 ovn_controller[148476]: 2025-12-13T09:01:09Z|01302|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c down in Southbound
Dec 13 09:01:09 compute-0 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 09:01:09 compute-0 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.041 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:55:74 10.100.0.6'], port_security=['fa:16:3e:1a:55:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '046adf00-a09d-4e5c-89d5-e63a2ee02fc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea2504ae-fad7-498d-b82b-9d089e69588d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3ba83e70-35d2-4947-85b4-64f21eae817c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.047 248514 DEBUG nova.virt.libvirt.vif [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430432318',display_name='tempest-TestNetworkBasicOps-server-1430432318',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430432318',id=128,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLZwYzQivS6OmxYADpL/9VcE8wCVSuV+/vnYuB1qCFxjkNKqDVEv7izs9w56y2Ibc/1G1HwpaZexOYOvh1sTJGy09M2esErCHKSz7Mr00MXTFv2qZyMndAh7JN+dvFj7tg==',key_name='tempest-TestNetworkBasicOps-481585601',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:00:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-p1inmefq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:00:56Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=046adf00-a09d-4e5c-89d5-e63a2ee02fc1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.048 248514 DEBUG nova.network.os_vif_util [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.048 248514 DEBUG nova.network.os_vif_util [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.049 248514 DEBUG os_vif [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.051 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.051 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ba83e70-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.053 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.054 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.056 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56-userdata-shm.mount: Deactivated successfully.
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.059 248514 INFO os_vif [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35')
Dec 13 09:01:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bf7277745e8100040655f44041079f812c029e04aac3444da67e2a87f1405fc-merged.mount: Deactivated successfully.
Dec 13 09:01:09 compute-0 podman[376055]: 2025-12-13 09:01:09.082472053 +0000 UTC m=+0.153954479 container cleanup 69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 09:01:09 compute-0 systemd[1]: libpod-conmon-69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56.scope: Deactivated successfully.
Dec 13 09:01:09 compute-0 podman[376121]: 2025-12-13 09:01:09.159711314 +0000 UTC m=+0.053355397 container remove 69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.169 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aea20d79-21fd-46dc-84d8-1f31f1deb9ea]: (4, ('Sat Dec 13 09:01:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 (69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56)\n69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56\nSat Dec 13 09:01:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 (69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56)\n69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.172 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ead3f9b3-9de7-476d-a9af-9b90f466977f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.173 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9da7b572-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:09 compute-0 kernel: tap9da7b572-10: left promiscuous mode
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.191 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.194 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[60937b4e-e1de-481e-b78d-c896bc38ae84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.206 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5b0eef-1d06-4f06-b6b8-33c86cad8430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.207 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57fee616-3634-4792-8548-d275f421a673]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.225 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c96d1f-cc65-4921-853e-fcd3628e6bfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903305, 'reachable_time': 30137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376136, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d9da7b572\x2d116c\x2d40f0\x2d9b1e\x2d0183fa9d3f87.mount: Deactivated successfully.
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.228 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.228 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1396ad81-7296-4dd9-a60f-bc5d97bd8131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.230 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3ba83e70-35d2-4947-85b4-64f21eae817c in datapath 9da7b572-116c-40f0-9b1e-0183fa9d3f87 unbound from our chassis
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.233 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9da7b572-116c-40f0-9b1e-0183fa9d3f87, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.234 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7688ae69-896a-4527-9f0c-09c84d9701ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.235 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3ba83e70-35d2-4947-85b4-64f21eae817c in datapath 9da7b572-116c-40f0-9b1e-0183fa9d3f87 unbound from our chassis
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.237 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9da7b572-116c-40f0-9b1e-0183fa9d3f87, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:01:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.238 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb8e2fc-d08d-4131-a4ee-a9f67f2a82a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.328 248514 INFO nova.virt.libvirt.driver [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Deleting instance files /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1_del
Dec 13 09:01:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:01:09
Dec 13 09:01:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:01:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:01:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'images']
Dec 13 09:01:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.330 248514 INFO nova.virt.libvirt.driver [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Deletion of /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1_del complete
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.428 248514 INFO nova.compute.manager [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Took 0.71 seconds to destroy the instance on the hypervisor.
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.429 248514 DEBUG oslo.service.loopingcall [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.429 248514 DEBUG nova.compute.manager [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.430 248514 DEBUG nova.network.neutron [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:01:09 compute-0 ceph-mon[76537]: pgmap v3001: 321 pgs: 321 active+clean; 173 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 475 KiB/s wr, 123 op/s
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.668 248514 DEBUG nova.compute.manager [req-b3a372a3-4847-470c-acae-4d7b5efa5a65 req-531b35a9-41fd-4275-bf4b-fb5d34aa0656 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-vif-unplugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.668 248514 DEBUG oslo_concurrency.lockutils [req-b3a372a3-4847-470c-acae-4d7b5efa5a65 req-531b35a9-41fd-4275-bf4b-fb5d34aa0656 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.669 248514 DEBUG oslo_concurrency.lockutils [req-b3a372a3-4847-470c-acae-4d7b5efa5a65 req-531b35a9-41fd-4275-bf4b-fb5d34aa0656 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.669 248514 DEBUG oslo_concurrency.lockutils [req-b3a372a3-4847-470c-acae-4d7b5efa5a65 req-531b35a9-41fd-4275-bf4b-fb5d34aa0656 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.670 248514 DEBUG nova.compute.manager [req-b3a372a3-4847-470c-acae-4d7b5efa5a65 req-531b35a9-41fd-4275-bf4b-fb5d34aa0656 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] No waiting events found dispatching network-vif-unplugged-3ba83e70-35d2-4947-85b4-64f21eae817c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:01:09 compute-0 nova_compute[248510]: 2025-12-13 09:01:09.670 248514 DEBUG nova.compute.manager [req-b3a372a3-4847-470c-acae-4d7b5efa5a65 req-531b35a9-41fd-4275-bf4b-fb5d34aa0656 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-vif-unplugged-3ba83e70-35d2-4947-85b4-64f21eae817c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3002: 321 pgs: 321 active+clean; 178 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 135 op/s
Dec 13 09:01:10 compute-0 nova_compute[248510]: 2025-12-13 09:01:10.726 248514 DEBUG nova.network.neutron [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Updated VIF entry in instance network info cache for port 3ba83e70-35d2-4947-85b4-64f21eae817c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:01:10 compute-0 nova_compute[248510]: 2025-12-13 09:01:10.727 248514 DEBUG nova.network.neutron [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Updating instance_info_cache with network_info: [{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:01:10 compute-0 nova_compute[248510]: 2025-12-13 09:01:10.830 248514 INFO nova.compute.manager [None req-47fe4a6f-56ae-4101-89a1-d63f3ea6256f a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Get console output
Dec 13 09:01:10 compute-0 nova_compute[248510]: 2025-12-13 09:01:10.836 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:01:10 compute-0 nova_compute[248510]: 2025-12-13 09:01:10.855 248514 DEBUG oslo_concurrency.lockutils [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:01:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:01:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:01:11 compute-0 sudo[376137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:01:11 compute-0 sudo[376137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:01:11 compute-0 sudo[376137]: pam_unix(sudo:session): session closed for user root
Dec 13 09:01:11 compute-0 sudo[376162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:01:11 compute-0 sudo[376162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:01:11 compute-0 ceph-mon[76537]: pgmap v3002: 321 pgs: 321 active+clean; 178 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 135 op/s
Dec 13 09:01:11 compute-0 nova_compute[248510]: 2025-12-13 09:01:11.817 248514 DEBUG nova.compute.manager [req-951f4fd5-888a-49a2-8a3c-5d6e36ee8557 req-74d8b7c4-f068-4ab8-b481-3450b9c9b2a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:11 compute-0 nova_compute[248510]: 2025-12-13 09:01:11.818 248514 DEBUG oslo_concurrency.lockutils [req-951f4fd5-888a-49a2-8a3c-5d6e36ee8557 req-74d8b7c4-f068-4ab8-b481-3450b9c9b2a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:11 compute-0 nova_compute[248510]: 2025-12-13 09:01:11.818 248514 DEBUG oslo_concurrency.lockutils [req-951f4fd5-888a-49a2-8a3c-5d6e36ee8557 req-74d8b7c4-f068-4ab8-b481-3450b9c9b2a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:11 compute-0 nova_compute[248510]: 2025-12-13 09:01:11.818 248514 DEBUG oslo_concurrency.lockutils [req-951f4fd5-888a-49a2-8a3c-5d6e36ee8557 req-74d8b7c4-f068-4ab8-b481-3450b9c9b2a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:11 compute-0 nova_compute[248510]: 2025-12-13 09:01:11.818 248514 DEBUG nova.compute.manager [req-951f4fd5-888a-49a2-8a3c-5d6e36ee8557 req-74d8b7c4-f068-4ab8-b481-3450b9c9b2a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] No waiting events found dispatching network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:01:11 compute-0 nova_compute[248510]: 2025-12-13 09:01:11.819 248514 WARNING nova.compute.manager [req-951f4fd5-888a-49a2-8a3c-5d6e36ee8557 req-74d8b7c4-f068-4ab8-b481-3450b9c9b2a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received unexpected event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c for instance with vm_state active and task_state deleting.
Dec 13 09:01:11 compute-0 sudo[376162]: pam_unix(sudo:session): session closed for user root
Dec 13 09:01:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:01:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:01:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:01:11 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:01:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:01:11 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:01:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:01:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:01:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:01:12 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:01:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:01:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:01:12 compute-0 sudo[376218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:01:12 compute-0 sudo[376218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:01:12 compute-0 sudo[376218]: pam_unix(sudo:session): session closed for user root
Dec 13 09:01:12 compute-0 sudo[376243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:01:12 compute-0 sudo[376243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:01:12 compute-0 podman[376280]: 2025-12-13 09:01:12.433854547 +0000 UTC m=+0.040216556 container create c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_austin, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 09:01:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3003: 321 pgs: 321 active+clean; 178 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 740 KiB/s rd, 1.3 MiB/s wr, 84 op/s
Dec 13 09:01:12 compute-0 systemd[1]: Started libpod-conmon-c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd.scope.
Dec 13 09:01:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:01:12 compute-0 podman[376280]: 2025-12-13 09:01:12.41600308 +0000 UTC m=+0.022365119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:01:12 compute-0 podman[376280]: 2025-12-13 09:01:12.529653702 +0000 UTC m=+0.136015731 container init c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:01:12 compute-0 podman[376280]: 2025-12-13 09:01:12.536279924 +0000 UTC m=+0.142641933 container start c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_austin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:01:12 compute-0 podman[376280]: 2025-12-13 09:01:12.540088957 +0000 UTC m=+0.146450966 container attach c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_austin, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 09:01:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:01:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:01:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:01:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:01:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:01:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:01:12 compute-0 adoring_austin[376296]: 167 167
Dec 13 09:01:12 compute-0 systemd[1]: libpod-c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd.scope: Deactivated successfully.
Dec 13 09:01:12 compute-0 conmon[376296]: conmon c7f576896819f0e7f697 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd.scope/container/memory.events
Dec 13 09:01:12 compute-0 podman[376280]: 2025-12-13 09:01:12.546928145 +0000 UTC m=+0.153290154 container died c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 09:01:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d4fd43bf4e84ba7f548500a0d9ee8615c3f37e663e7896d74ccc8645ba70521-merged.mount: Deactivated successfully.
Dec 13 09:01:12 compute-0 podman[376280]: 2025-12-13 09:01:12.598915107 +0000 UTC m=+0.205277126 container remove c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:01:12 compute-0 systemd[1]: libpod-conmon-c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd.scope: Deactivated successfully.
Dec 13 09:01:12 compute-0 podman[376319]: 2025-12-13 09:01:12.832901475 +0000 UTC m=+0.061195889 container create 309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 09:01:12 compute-0 nova_compute[248510]: 2025-12-13 09:01:12.843 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:12 compute-0 systemd[1]: Started libpod-conmon-309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59.scope.
Dec 13 09:01:12 compute-0 podman[376319]: 2025-12-13 09:01:12.80902804 +0000 UTC m=+0.037322444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:01:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fc850e513057184874c56dc59eaf2981abd2738022408d686f69dff97694a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fc850e513057184874c56dc59eaf2981abd2738022408d686f69dff97694a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fc850e513057184874c56dc59eaf2981abd2738022408d686f69dff97694a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fc850e513057184874c56dc59eaf2981abd2738022408d686f69dff97694a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fc850e513057184874c56dc59eaf2981abd2738022408d686f69dff97694a5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:12 compute-0 podman[376319]: 2025-12-13 09:01:12.939601937 +0000 UTC m=+0.167896381 container init 309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_panini, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 09:01:12 compute-0 podman[376319]: 2025-12-13 09:01:12.947019578 +0000 UTC m=+0.175313942 container start 309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 09:01:12 compute-0 podman[376319]: 2025-12-13 09:01:12.951298053 +0000 UTC m=+0.179592457 container attach 309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_panini, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.250 248514 DEBUG nova.network.neutron [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.276 248514 INFO nova.compute.manager [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Took 3.85 seconds to deallocate network for instance.
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.408 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.408 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:13 compute-0 hopeful_panini[376335]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:01:13 compute-0 hopeful_panini[376335]: --> All data devices are unavailable
Dec 13 09:01:13 compute-0 systemd[1]: libpod-309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59.scope: Deactivated successfully.
Dec 13 09:01:13 compute-0 podman[376319]: 2025-12-13 09:01:13.466572556 +0000 UTC m=+0.694866920 container died 309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_panini, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0fc850e513057184874c56dc59eaf2981abd2738022408d686f69dff97694a5-merged.mount: Deactivated successfully.
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.501 248514 DEBUG oslo_concurrency.processutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:13 compute-0 podman[376319]: 2025-12-13 09:01:13.520559247 +0000 UTC m=+0.748853611 container remove 309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_panini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 09:01:13 compute-0 systemd[1]: libpod-conmon-309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59.scope: Deactivated successfully.
Dec 13 09:01:13 compute-0 ceph-mon[76537]: pgmap v3003: 321 pgs: 321 active+clean; 178 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 740 KiB/s rd, 1.3 MiB/s wr, 84 op/s
Dec 13 09:01:13 compute-0 sudo[376243]: pam_unix(sudo:session): session closed for user root
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.610 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.612 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.612 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.613 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.613 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.616 248514 INFO nova.compute.manager [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Terminating instance
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.618 248514 DEBUG nova.compute.manager [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:01:13 compute-0 sudo[376368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:01:13 compute-0 sudo[376368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:01:13 compute-0 sudo[376368]: pam_unix(sudo:session): session closed for user root
Dec 13 09:01:13 compute-0 kernel: tapda5ed241-e9 (unregistering): left promiscuous mode
Dec 13 09:01:13 compute-0 NetworkManager[50376]: <info>  [1765616473.6807] device (tapda5ed241-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:01:13 compute-0 ovn_controller[148476]: 2025-12-13T09:01:13Z|01303|binding|INFO|Releasing lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 from this chassis (sb_readonly=0)
Dec 13 09:01:13 compute-0 ovn_controller[148476]: 2025-12-13T09:01:13Z|01304|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 down in Southbound
Dec 13 09:01:13 compute-0 ovn_controller[148476]: 2025-12-13T09:01:13Z|01305|binding|INFO|Removing iface tapda5ed241-e9 ovn-installed in OVS
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.693 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.695 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:13 compute-0 sudo[376412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:01:13 compute-0 sudo[376412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:01:13 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Dec 13 09:01:13 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007f.scope: Consumed 13.021s CPU time.
Dec 13 09:01:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.729 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '8', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:01:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.736 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f unbound from our chassis
Dec 13 09:01:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.739 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:01:13 compute-0 systemd-machined[210538]: Machine qemu-155-instance-0000007f terminated.
Dec 13 09:01:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.742 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[31c2cdf4-a51f-43c1-b0b6-1fc3f0e9cf26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.743 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f namespace which is not needed anymore
Dec 13 09:01:13 compute-0 kernel: tapda5ed241-e9: entered promiscuous mode
Dec 13 09:01:13 compute-0 NetworkManager[50376]: <info>  [1765616473.8445] manager: (tapda5ed241-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/530)
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:13 compute-0 ovn_controller[148476]: 2025-12-13T09:01:13Z|01306|binding|INFO|Claiming lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 for this chassis.
Dec 13 09:01:13 compute-0 ovn_controller[148476]: 2025-12-13T09:01:13Z|01307|binding|INFO|da5ed241-e9aa-44e8-ac5a-7dcb30431922: Claiming fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 09:01:13 compute-0 kernel: tapda5ed241-e9 (unregistering): left promiscuous mode
Dec 13 09:01:13 compute-0 ovn_controller[148476]: 2025-12-13T09:01:13Z|01308|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 ovn-installed in OVS
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.877 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.884 248514 INFO nova.virt.libvirt.driver [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance destroyed successfully.
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.885 248514 DEBUG nova.objects.instance [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid 9a653f54-8266-4f9c-ac5a-b4991534e9fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:01:13 compute-0 ovn_controller[148476]: 2025-12-13T09:01:13Z|01309|binding|INFO|Releasing lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 from this chassis (sb_readonly=0)
Dec 13 09:01:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.888 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '8', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:01:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.896 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '8', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.902 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.907 248514 DEBUG nova.virt.libvirt.vif [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:00:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1854403821',display_name='tempest-TestNetworkAdvancedServerOps-server-1854403821',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1854403821',id=127,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWMM/Lkrglzn6TUU8f1dhcCbz/Yw7nIlbXlIesplWBqBJXHooQUykZVdZMxvRqd3+5+420G2QZFN25AFW8hXICmnni45jcvyASiuhi4RAOuTlyQgzYOGb0LTvE5xWl7hA==',key_name='tempest-TestNetworkAdvancedServerOps-279390887',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:00:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-azzrjv4t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:00:52Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=9a653f54-8266-4f9c-ac5a-b4991534e9fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.908 248514 DEBUG nova.network.os_vif_util [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.909 248514 DEBUG nova.network.os_vif_util [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.910 248514 DEBUG os_vif [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.913 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.913 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda5ed241-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:13 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375671]: [NOTICE]   (375675) : haproxy version is 2.8.14-c23fe91
Dec 13 09:01:13 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375671]: [NOTICE]   (375675) : path to executable is /usr/sbin/haproxy
Dec 13 09:01:13 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375671]: [ALERT]    (375675) : Current worker (375677) exited with code 143 (Terminated)
Dec 13 09:01:13 compute-0 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375671]: [WARNING]  (375675) : All workers exited. Exiting... (0)
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:13 compute-0 systemd[1]: libpod-114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b.scope: Deactivated successfully.
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.920 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:01:13 compute-0 nova_compute[248510]: 2025-12-13 09:01:13.922 248514 INFO os_vif [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9')
Dec 13 09:01:13 compute-0 podman[376462]: 2025-12-13 09:01:13.925784195 +0000 UTC m=+0.083450273 container died 114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b-userdata-shm.mount: Deactivated successfully.
Dec 13 09:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a26999b8fd1c742d22d53c509972f804cd2c1a3b692246b65cd69c5f1f676657-merged.mount: Deactivated successfully.
Dec 13 09:01:13 compute-0 podman[376462]: 2025-12-13 09:01:13.9774698 +0000 UTC m=+0.135135878 container cleanup 114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 09:01:13 compute-0 systemd[1]: libpod-conmon-114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b.scope: Deactivated successfully.
Dec 13 09:01:14 compute-0 podman[376525]: 2025-12-13 09:01:14.042370959 +0000 UTC m=+0.042139992 container remove 114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.050 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0690e0a-5253-4070-84dc-6b139eae5e95]: (4, ('Sat Dec 13 09:01:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f (114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b)\n114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b\nSat Dec 13 09:01:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f (114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b)\n114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.052 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11eca81f-c9e2-424e-804b-296fb9a6a58c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.053 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b944cc-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:14 compute-0 kernel: tapb9b944cc-b0: left promiscuous mode
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.055 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.059 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[58fe5ea5-5d0f-4bff-a0b5-ab6eb2782d79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:14 compute-0 podman[376531]: 2025-12-13 09:01:14.06692294 +0000 UTC m=+0.054028683 container create 4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 09:01:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:01:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366073542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.076 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.086 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e9223cc7-3351-4e35-8c57-43647f582257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.088 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3c1152-71a0-45c2-aabf-47c7d4d682e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.101 248514 DEBUG oslo_concurrency.processutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:14 compute-0 systemd[1]: Started libpod-conmon-4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856.scope.
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.111 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[402ff549-3e82-48b4-95d9-568772a3bbc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 902849, 'reachable_time': 27256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376557, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.112 248514 DEBUG nova.compute.provider_tree [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:01:14 compute-0 systemd[1]: run-netns-ovnmeta\x2db9b944cc\x2db37e\x2d492b\x2dabd7\x2db6bfb9227f0f.mount: Deactivated successfully.
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.115 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.115 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8c589b-4ecf-473e-b191-041cb5a07adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.117 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f unbound from our chassis
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.118 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.119 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7b479a-44a7-4977-afc1-9a10fc17175a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.120 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f unbound from our chassis
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.121 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:01:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.121 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[827823fb-6dae-4845-8271-8d5f0be27a47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:14 compute-0 podman[376531]: 2025-12-13 09:01:14.044391298 +0000 UTC m=+0.031497061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:01:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.154 248514 DEBUG nova.scheduler.client.report [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:01:14 compute-0 podman[376531]: 2025-12-13 09:01:14.161047374 +0000 UTC m=+0.148153167 container init 4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Dec 13 09:01:14 compute-0 podman[376531]: 2025-12-13 09:01:14.170442404 +0000 UTC m=+0.157548147 container start 4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:01:14 compute-0 podman[376531]: 2025-12-13 09:01:14.17394697 +0000 UTC m=+0.161052723 container attach 4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 09:01:14 compute-0 eager_bhaskara[376559]: 167 167
Dec 13 09:01:14 compute-0 systemd[1]: libpod-4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856.scope: Deactivated successfully.
Dec 13 09:01:14 compute-0 conmon[376559]: conmon 4c70788d47719f324d82 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856.scope/container/memory.events
Dec 13 09:01:14 compute-0 podman[376531]: 2025-12-13 09:01:14.179362422 +0000 UTC m=+0.166468165 container died 4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.195 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:14 compute-0 podman[376531]: 2025-12-13 09:01:14.226920227 +0000 UTC m=+0.214025970 container remove 4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 09:01:14 compute-0 systemd[1]: libpod-conmon-4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856.scope: Deactivated successfully.
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.269 248514 INFO nova.virt.libvirt.driver [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Deleting instance files /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb_del
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.270 248514 INFO nova.virt.libvirt.driver [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Deletion of /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb_del complete
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.298 248514 INFO nova.scheduler.client.report [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 046adf00-a09d-4e5c-89d5-e63a2ee02fc1
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.412 248514 INFO nova.compute.manager [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Took 0.79 seconds to destroy the instance on the hypervisor.
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.413 248514 DEBUG oslo.service.loopingcall [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.413 248514 DEBUG nova.compute.manager [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.413 248514 DEBUG nova.network.neutron [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:01:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3004: 321 pgs: 321 active+clean; 96 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 790 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 13 09:01:14 compute-0 podman[376584]: 2025-12-13 09:01:14.477660824 +0000 UTC m=+0.059280232 container create 8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jepsen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:01:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0313522abdda27c9850ad1dd1483fa3c45131d585bf3500bfaf34b66cb1dbf6-merged.mount: Deactivated successfully.
Dec 13 09:01:14 compute-0 systemd[1]: Started libpod-conmon-8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684.scope.
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.532 248514 DEBUG nova.compute.manager [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-changed-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.533 248514 DEBUG nova.compute.manager [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Refreshing instance network info cache due to event network-changed-da5ed241-e9aa-44e8-ac5a-7dcb30431922. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.533 248514 DEBUG oslo_concurrency.lockutils [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.533 248514 DEBUG oslo_concurrency.lockutils [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.534 248514 DEBUG nova.network.neutron [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Refreshing network info cache for port da5ed241-e9aa-44e8-ac5a-7dcb30431922 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.537 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2366073542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:14 compute-0 ceph-mon[76537]: pgmap v3004: 321 pgs: 321 active+clean; 96 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 790 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 13 09:01:14 compute-0 podman[376584]: 2025-12-13 09:01:14.460575046 +0000 UTC m=+0.042194484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:01:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126e3df4047e6b3d87f08391573faa59cf8f40cf18c325c64a7ae6fb5227dc09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126e3df4047e6b3d87f08391573faa59cf8f40cf18c325c64a7ae6fb5227dc09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126e3df4047e6b3d87f08391573faa59cf8f40cf18c325c64a7ae6fb5227dc09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126e3df4047e6b3d87f08391573faa59cf8f40cf18c325c64a7ae6fb5227dc09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:14 compute-0 podman[376584]: 2025-12-13 09:01:14.578348369 +0000 UTC m=+0.159967827 container init 8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Dec 13 09:01:14 compute-0 podman[376584]: 2025-12-13 09:01:14.587593475 +0000 UTC m=+0.169212903 container start 8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 09:01:14 compute-0 podman[376584]: 2025-12-13 09:01:14.590854365 +0000 UTC m=+0.172473783 container attach 8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jepsen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.850 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.851 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:01:14 compute-0 nova_compute[248510]: 2025-12-13 09:01:14.851 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:01:14 compute-0 brave_jepsen[376600]: {
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:     "0": [
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:         {
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "devices": [
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "/dev/loop3"
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             ],
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_name": "ceph_lv0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_size": "21470642176",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "name": "ceph_lv0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "tags": {
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.cluster_name": "ceph",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.crush_device_class": "",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.encrypted": "0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.objectstore": "bluestore",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.osd_id": "0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.type": "block",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.vdo": "0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.with_tpm": "0"
Dec 13 09:01:14 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             },
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "type": "block",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "vg_name": "ceph_vg0"
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:         }
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:     ],
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:     "1": [
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:         {
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "devices": [
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "/dev/loop4"
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             ],
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_name": "ceph_lv1",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_size": "21470642176",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "name": "ceph_lv1",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "tags": {
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.cluster_name": "ceph",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.crush_device_class": "",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.encrypted": "0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.objectstore": "bluestore",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.osd_id": "1",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.type": "block",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.vdo": "0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.with_tpm": "0"
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             },
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "type": "block",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "vg_name": "ceph_vg1"
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:         }
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:     ],
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:     "2": [
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:         {
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "devices": [
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "/dev/loop5"
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             ],
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_name": "ceph_lv2",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_size": "21470642176",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "name": "ceph_lv2",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "tags": {
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.cluster_name": "ceph",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.crush_device_class": "",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.encrypted": "0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.objectstore": "bluestore",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.osd_id": "2",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.type": "block",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.vdo": "0",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:                 "ceph.with_tpm": "0"
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             },
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "type": "block",
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:             "vg_name": "ceph_vg2"
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:         }
Dec 13 09:01:14 compute-0 brave_jepsen[376600]:     ]
Dec 13 09:01:14 compute-0 brave_jepsen[376600]: }
Dec 13 09:01:14 compute-0 systemd[1]: libpod-8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684.scope: Deactivated successfully.
Dec 13 09:01:14 compute-0 podman[376610]: 2025-12-13 09:01:14.971290907 +0000 UTC m=+0.026673543 container died 8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 09:01:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-126e3df4047e6b3d87f08391573faa59cf8f40cf18c325c64a7ae6fb5227dc09-merged.mount: Deactivated successfully.
Dec 13 09:01:15 compute-0 podman[376610]: 2025-12-13 09:01:15.011407729 +0000 UTC m=+0.066790335 container remove 8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jepsen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:01:15 compute-0 systemd[1]: libpod-conmon-8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684.scope: Deactivated successfully.
Dec 13 09:01:15 compute-0 sudo[376412]: pam_unix(sudo:session): session closed for user root
Dec 13 09:01:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:01:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3773702943' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:01:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:01:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3773702943' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:01:15 compute-0 sudo[376623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:01:15 compute-0 sudo[376623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:01:15 compute-0 sudo[376623]: pam_unix(sudo:session): session closed for user root
Dec 13 09:01:15 compute-0 sudo[376648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:01:15 compute-0 sudo[376648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:01:15 compute-0 podman[376685]: 2025-12-13 09:01:15.50534943 +0000 UTC m=+0.046731585 container create 4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 09:01:15 compute-0 systemd[1]: Started libpod-conmon-4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae.scope.
Dec 13 09:01:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3773702943' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:01:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3773702943' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:01:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:01:15 compute-0 podman[376685]: 2025-12-13 09:01:15.488813885 +0000 UTC m=+0.030196060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:01:15 compute-0 podman[376685]: 2025-12-13 09:01:15.593050557 +0000 UTC m=+0.134432722 container init 4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:01:15 compute-0 podman[376685]: 2025-12-13 09:01:15.598410208 +0000 UTC m=+0.139792353 container start 4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:01:15 compute-0 podman[376685]: 2025-12-13 09:01:15.600854218 +0000 UTC m=+0.142236403 container attach 4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curie, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 09:01:15 compute-0 strange_curie[376702]: 167 167
Dec 13 09:01:15 compute-0 systemd[1]: libpod-4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae.scope: Deactivated successfully.
Dec 13 09:01:15 compute-0 podman[376685]: 2025-12-13 09:01:15.602888968 +0000 UTC m=+0.144271123 container died 4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curie, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 09:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-db1f216c1beb84723ed828a15830f4296f6c7cff922e654687f68828c14bb6fe-merged.mount: Deactivated successfully.
Dec 13 09:01:15 compute-0 podman[376685]: 2025-12-13 09:01:15.641699878 +0000 UTC m=+0.183082043 container remove 4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:01:15 compute-0 systemd[1]: libpod-conmon-4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae.scope: Deactivated successfully.
Dec 13 09:01:15 compute-0 nova_compute[248510]: 2025-12-13 09:01:15.780 248514 DEBUG nova.network.neutron [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:01:15 compute-0 nova_compute[248510]: 2025-12-13 09:01:15.868 248514 DEBUG nova.compute.manager [req-5ea56200-92c3-4a6f-8566-15a1fc0fd7fe req-9b98f308-c591-4812-8f21-42df3fbf2282 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-deleted-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:15 compute-0 nova_compute[248510]: 2025-12-13 09:01:15.868 248514 INFO nova.compute.manager [req-5ea56200-92c3-4a6f-8566-15a1fc0fd7fe req-9b98f308-c591-4812-8f21-42df3fbf2282 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Neutron deleted interface da5ed241-e9aa-44e8-ac5a-7dcb30431922; detaching it from the instance and deleting it from the info cache
Dec 13 09:01:15 compute-0 nova_compute[248510]: 2025-12-13 09:01:15.869 248514 DEBUG nova.network.neutron [req-5ea56200-92c3-4a6f-8566-15a1fc0fd7fe req-9b98f308-c591-4812-8f21-42df3fbf2282 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:01:15 compute-0 podman[376724]: 2025-12-13 09:01:15.871258687 +0000 UTC m=+0.050249541 container create ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wright, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:01:15 compute-0 nova_compute[248510]: 2025-12-13 09:01:15.888 248514 INFO nova.compute.manager [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Took 1.47 seconds to deallocate network for instance.
Dec 13 09:01:15 compute-0 nova_compute[248510]: 2025-12-13 09:01:15.901 248514 DEBUG nova.compute.manager [req-5ea56200-92c3-4a6f-8566-15a1fc0fd7fe req-9b98f308-c591-4812-8f21-42df3fbf2282 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Detach interface failed, port_id=da5ed241-e9aa-44e8-ac5a-7dcb30431922, reason: Instance 9a653f54-8266-4f9c-ac5a-b4991534e9fb could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:01:15 compute-0 systemd[1]: Started libpod-conmon-ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d.scope.
Dec 13 09:01:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:01:15 compute-0 podman[376724]: 2025-12-13 09:01:15.846723346 +0000 UTC m=+0.025714310 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:01:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981bd84b75a86856e8a79b4ba4946476093be4af5e948c36f32d3fab29df2fd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981bd84b75a86856e8a79b4ba4946476093be4af5e948c36f32d3fab29df2fd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981bd84b75a86856e8a79b4ba4946476093be4af5e948c36f32d3fab29df2fd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981bd84b75a86856e8a79b4ba4946476093be4af5e948c36f32d3fab29df2fd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:15 compute-0 podman[376724]: 2025-12-13 09:01:15.979757163 +0000 UTC m=+0.158748017 container init ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wright, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 09:01:15 compute-0 podman[376724]: 2025-12-13 09:01:15.988749573 +0000 UTC m=+0.167740427 container start ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wright, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:01:15 compute-0 podman[376724]: 2025-12-13 09:01:15.992360151 +0000 UTC m=+0.171351005 container attach ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 09:01:16 compute-0 nova_compute[248510]: 2025-12-13 09:01:16.033 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:16 compute-0 nova_compute[248510]: 2025-12-13 09:01:16.034 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:16 compute-0 nova_compute[248510]: 2025-12-13 09:01:16.079 248514 DEBUG oslo_concurrency.processutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3005: 321 pgs: 321 active+clean; 96 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Dec 13 09:01:16 compute-0 ceph-mon[76537]: pgmap v3005: 321 pgs: 321 active+clean; 96 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Dec 13 09:01:16 compute-0 lvm[376835]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:01:16 compute-0 lvm[376835]: VG ceph_vg0 finished
Dec 13 09:01:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:01:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1330621943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:16 compute-0 lvm[376837]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:01:16 compute-0 lvm[376837]: VG ceph_vg1 finished
Dec 13 09:01:16 compute-0 lvm[376840]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:01:16 compute-0 lvm[376840]: VG ceph_vg0 finished
Dec 13 09:01:16 compute-0 nova_compute[248510]: 2025-12-13 09:01:16.676 248514 DEBUG oslo_concurrency.processutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:16 compute-0 nova_compute[248510]: 2025-12-13 09:01:16.683 248514 DEBUG nova.compute.provider_tree [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:01:16 compute-0 lvm[376841]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:01:16 compute-0 lvm[376841]: VG ceph_vg2 finished
Dec 13 09:01:16 compute-0 nova_compute[248510]: 2025-12-13 09:01:16.704 248514 DEBUG nova.network.neutron [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updated VIF entry in instance network info cache for port da5ed241-e9aa-44e8-ac5a-7dcb30431922. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:01:16 compute-0 nova_compute[248510]: 2025-12-13 09:01:16.704 248514 DEBUG nova.network.neutron [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:01:16 compute-0 nova_compute[248510]: 2025-12-13 09:01:16.709 248514 DEBUG nova.scheduler.client.report [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:01:16 compute-0 nova_compute[248510]: 2025-12-13 09:01:16.777 248514 DEBUG oslo_concurrency.lockutils [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:01:16 compute-0 pensive_wright[376740]: {}
Dec 13 09:01:16 compute-0 systemd[1]: libpod-ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d.scope: Deactivated successfully.
Dec 13 09:01:16 compute-0 systemd[1]: libpod-ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d.scope: Consumed 1.295s CPU time.
Dec 13 09:01:16 compute-0 podman[376724]: 2025-12-13 09:01:16.805061675 +0000 UTC m=+0.984052549 container died ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wright, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 09:01:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-981bd84b75a86856e8a79b4ba4946476093be4af5e948c36f32d3fab29df2fd0-merged.mount: Deactivated successfully.
Dec 13 09:01:16 compute-0 podman[376724]: 2025-12-13 09:01:16.849470052 +0000 UTC m=+1.028460906 container remove ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:01:16 compute-0 systemd[1]: libpod-conmon-ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d.scope: Deactivated successfully.
Dec 13 09:01:16 compute-0 nova_compute[248510]: 2025-12-13 09:01:16.865 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:16 compute-0 sudo[376648]: pam_unix(sudo:session): session closed for user root
Dec 13 09:01:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:01:16 compute-0 nova_compute[248510]: 2025-12-13 09:01:16.904 248514 INFO nova.scheduler.client.report [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Deleted allocations for instance 9a653f54-8266-4f9c-ac5a-b4991534e9fb
Dec 13 09:01:16 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:01:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:01:16 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:01:16 compute-0 sudo[376858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:01:16 compute-0 sudo[376858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:01:16 compute-0 sudo[376858]: pam_unix(sudo:session): session closed for user root
Dec 13 09:01:16 compute-0 nova_compute[248510]: 2025-12-13 09:01:16.997 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1330621943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:01:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:01:17 compute-0 nova_compute[248510]: 2025-12-13 09:01:17.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3006: 321 pgs: 321 active+clean; 64 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Dec 13 09:01:18 compute-0 ceph-mon[76537]: pgmap v3006: 321 pgs: 321 active+clean; 64 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Dec 13 09:01:18 compute-0 nova_compute[248510]: 2025-12-13 09:01:18.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:19 compute-0 nova_compute[248510]: 2025-12-13 09:01:19.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:01:19 compute-0 nova_compute[248510]: 2025-12-13 09:01:19.816 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:19 compute-0 nova_compute[248510]: 2025-12-13 09:01:19.817 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:19 compute-0 nova_compute[248510]: 2025-12-13 09:01:19.817 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:19 compute-0 nova_compute[248510]: 2025-12-13 09:01:19.817 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:01:19 compute-0 nova_compute[248510]: 2025-12-13 09:01:19.817 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:01:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433851384' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:20 compute-0 nova_compute[248510]: 2025-12-13 09:01:20.398 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3433851384' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3007: 321 pgs: 321 active+clean; 41 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 1.6 MiB/s wr, 93 op/s
Dec 13 09:01:20 compute-0 nova_compute[248510]: 2025-12-13 09:01:20.565 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:01:20 compute-0 nova_compute[248510]: 2025-12-13 09:01:20.566 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3446MB free_disk=59.97294283658266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:01:20 compute-0 nova_compute[248510]: 2025-12-13 09:01:20.567 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:20 compute-0 nova_compute[248510]: 2025-12-13 09:01:20.567 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:20 compute-0 nova_compute[248510]: 2025-12-13 09:01:20.645 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:01:20 compute-0 nova_compute[248510]: 2025-12-13 09:01:20.646 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:01:20 compute-0 nova_compute[248510]: 2025-12-13 09:01:20.666 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:01:21 compute-0 nova_compute[248510]: 2025-12-13 09:01:21.052 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:21 compute-0 nova_compute[248510]: 2025-12-13 09:01:21.176 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:01:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2760382793' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:21 compute-0 nova_compute[248510]: 2025-12-13 09:01:21.231 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:21 compute-0 nova_compute[248510]: 2025-12-13 09:01:21.238 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:01:21 compute-0 nova_compute[248510]: 2025-12-13 09:01:21.257 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4227178868833227e-05 of space, bias 1.0, pg target 0.004268153660649968 quantized to 32 (current 32)
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669669102060614 of space, bias 1.0, pg target 0.2009007306181842 quantized to 32 (current 32)
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.725337214369618e-07 of space, bias 4.0, pg target 0.0006870404657243541 quantized to 16 (current 32)
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:01:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:01:21 compute-0 nova_compute[248510]: 2025-12-13 09:01:21.316 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:01:21 compute-0 nova_compute[248510]: 2025-12-13 09:01:21.317 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:21 compute-0 ceph-mon[76537]: pgmap v3007: 321 pgs: 321 active+clean; 41 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 1.6 MiB/s wr, 93 op/s
Dec 13 09:01:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2760382793' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3008: 321 pgs: 321 active+clean; 41 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 774 KiB/s wr, 58 op/s
Dec 13 09:01:22 compute-0 nova_compute[248510]: 2025-12-13 09:01:22.847 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:23 compute-0 nova_compute[248510]: 2025-12-13 09:01:23.317 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:01:23 compute-0 ceph-mon[76537]: pgmap v3008: 321 pgs: 321 active+clean; 41 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 774 KiB/s wr, 58 op/s
Dec 13 09:01:23 compute-0 nova_compute[248510]: 2025-12-13 09:01:23.920 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:24 compute-0 nova_compute[248510]: 2025-12-13 09:01:24.019 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616469.0181606, 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:01:24 compute-0 nova_compute[248510]: 2025-12-13 09:01:24.020 248514 INFO nova.compute.manager [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] VM Stopped (Lifecycle Event)
Dec 13 09:01:24 compute-0 nova_compute[248510]: 2025-12-13 09:01:24.066 248514 DEBUG nova.compute.manager [None req-d65f99fc-c130-4f1d-a1fd-4ab0c8dfbeb0 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:01:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3009: 321 pgs: 321 active+clean; 41 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 774 KiB/s wr, 58 op/s
Dec 13 09:01:24 compute-0 nova_compute[248510]: 2025-12-13 09:01:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:01:24 compute-0 nova_compute[248510]: 2025-12-13 09:01:24.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:01:25 compute-0 nova_compute[248510]: 2025-12-13 09:01:25.288 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:25 compute-0 nova_compute[248510]: 2025-12-13 09:01:25.288 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:25 compute-0 nova_compute[248510]: 2025-12-13 09:01:25.355 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:01:25 compute-0 ceph-mon[76537]: pgmap v3009: 321 pgs: 321 active+clean; 41 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 774 KiB/s wr, 58 op/s
Dec 13 09:01:25 compute-0 nova_compute[248510]: 2025-12-13 09:01:25.485 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:25 compute-0 nova_compute[248510]: 2025-12-13 09:01:25.486 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:25 compute-0 nova_compute[248510]: 2025-12-13 09:01:25.527 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:01:25 compute-0 nova_compute[248510]: 2025-12-13 09:01:25.528 248514 INFO nova.compute.claims [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:01:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:25.532 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:01:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:25.533 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:01:25 compute-0 nova_compute[248510]: 2025-12-13 09:01:25.569 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:25 compute-0 nova_compute[248510]: 2025-12-13 09:01:25.707 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:25 compute-0 nova_compute[248510]: 2025-12-13 09:01:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:01:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:01:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:01:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256721503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.268 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.277 248514 DEBUG nova.compute.provider_tree [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.303 248514 DEBUG nova.scheduler.client.report [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.328 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.329 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.402 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.402 248514 DEBUG nova.network.neutron [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.427 248514 INFO nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:01:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3010: 321 pgs: 321 active+clean; 41 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.460 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:01:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3256721503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.591 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.592 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.593 248514 INFO nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Creating image(s)
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.619 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.647 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.675 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.680 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.796 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.797 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.798 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.798 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.819 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:26 compute-0 nova_compute[248510]: 2025-12-13 09:01:26.823 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:27 compute-0 nova_compute[248510]: 2025-12-13 09:01:27.045 248514 DEBUG nova.policy [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:01:27 compute-0 nova_compute[248510]: 2025-12-13 09:01:27.139 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:27 compute-0 nova_compute[248510]: 2025-12-13 09:01:27.196 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:01:27 compute-0 nova_compute[248510]: 2025-12-13 09:01:27.267 248514 DEBUG nova.objects.instance [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 9519b20c-c79b-41e3-922c-362e2d3a7ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:01:27 compute-0 nova_compute[248510]: 2025-12-13 09:01:27.290 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:01:27 compute-0 nova_compute[248510]: 2025-12-13 09:01:27.291 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Ensure instance console log exists: /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:01:27 compute-0 nova_compute[248510]: 2025-12-13 09:01:27.291 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:27 compute-0 nova_compute[248510]: 2025-12-13 09:01:27.291 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:27 compute-0 nova_compute[248510]: 2025-12-13 09:01:27.292 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:27 compute-0 ceph-mon[76537]: pgmap v3010: 321 pgs: 321 active+clean; 41 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Dec 13 09:01:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:27.535 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:27 compute-0 nova_compute[248510]: 2025-12-13 09:01:27.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3011: 321 pgs: 321 active+clean; 54 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 455 KiB/s wr, 34 op/s
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.611 248514 DEBUG nova.network.neutron [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Successfully updated port: 3ba83e70-35d2-4947-85b4-64f21eae817c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.655 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-9519b20c-c79b-41e3-922c-362e2d3a7ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.655 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-9519b20c-c79b-41e3-922c-362e2d3a7ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.655 248514 DEBUG nova.network.neutron [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.762 248514 DEBUG nova.compute.manager [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received event network-changed-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.762 248514 DEBUG nova.compute.manager [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Refreshing instance network info cache due to event network-changed-3ba83e70-35d2-4947-85b4-64f21eae817c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.763 248514 DEBUG oslo_concurrency.lockutils [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9519b20c-c79b-41e3-922c-362e2d3a7ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.875 248514 DEBUG nova.network.neutron [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.878 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616473.8758311, 9a653f54-8266-4f9c-ac5a-b4991534e9fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.878 248514 INFO nova.compute.manager [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] VM Stopped (Lifecycle Event)
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.902 248514 DEBUG nova.compute.manager [None req-22e23101-cacf-45f8-bd46-235994f90eef - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:01:28 compute-0 nova_compute[248510]: 2025-12-13 09:01:28.975 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:29 compute-0 ceph-mon[76537]: pgmap v3011: 321 pgs: 321 active+clean; 54 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 455 KiB/s wr, 34 op/s
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.928 248514 DEBUG nova.network.neutron [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Updating instance_info_cache with network_info: [{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.952 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-9519b20c-c79b-41e3-922c-362e2d3a7ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.953 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Instance network_info: |[{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.953 248514 DEBUG oslo_concurrency.lockutils [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9519b20c-c79b-41e3-922c-362e2d3a7ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.954 248514 DEBUG nova.network.neutron [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Refreshing network info cache for port 3ba83e70-35d2-4947-85b4-64f21eae817c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.957 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Start _get_guest_xml network_info=[{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.961 248514 WARNING nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.966 248514 DEBUG nova.virt.libvirt.host [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.967 248514 DEBUG nova.virt.libvirt.host [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.971 248514 DEBUG nova.virt.libvirt.host [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.972 248514 DEBUG nova.virt.libvirt.host [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.972 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.972 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.973 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.973 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.973 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.974 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.974 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.974 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.974 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.975 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.975 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.975 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:01:29 compute-0 nova_compute[248510]: 2025-12-13 09:01:29.978 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3012: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Dec 13 09:01:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:01:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/663929690' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:01:30 compute-0 nova_compute[248510]: 2025-12-13 09:01:30.561 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:30 compute-0 nova_compute[248510]: 2025-12-13 09:01:30.587 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:30 compute-0 nova_compute[248510]: 2025-12-13 09:01:30.591 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:01:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:01:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/489710228' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.142 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.143 248514 DEBUG nova.virt.libvirt.vif [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-465153182',display_name='tempest-TestNetworkBasicOps-server-465153182',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-465153182',id=129,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIO+yWVJ497mtKMlg6/trFGMKEjgI30rNV/FBctamIJHtuqDBXodOJRCF+bmPrWOZ8OVFPBiR7SruaUfZabfP77iTSiPTfosnAjyuR+wA0QlVYwQySGN575QhOgAkbSV7g==',key_name='tempest-TestNetworkBasicOps-815903989',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mmqkoou9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:01:26Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=9519b20c-c79b-41e3-922c-362e2d3a7ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.143 248514 DEBUG nova.network.os_vif_util [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.144 248514 DEBUG nova.network.os_vif_util [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.145 248514 DEBUG nova.objects.instance [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 9519b20c-c79b-41e3-922c-362e2d3a7ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.162 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:01:31 compute-0 nova_compute[248510]:   <uuid>9519b20c-c79b-41e3-922c-362e2d3a7ded</uuid>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   <name>instance-00000081</name>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-465153182</nova:name>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:01:29</nova:creationTime>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:01:31 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:01:31 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:01:31 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:01:31 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:01:31 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:01:31 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 09:01:31 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:01:31 compute-0 nova_compute[248510]:         <nova:port uuid="3ba83e70-35d2-4947-85b4-64f21eae817c">
Dec 13 09:01:31 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <system>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <entry name="serial">9519b20c-c79b-41e3-922c-362e2d3a7ded</entry>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <entry name="uuid">9519b20c-c79b-41e3-922c-362e2d3a7ded</entry>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     </system>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   <os>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   </os>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   <features>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   </features>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9519b20c-c79b-41e3-922c-362e2d3a7ded_disk">
Dec 13 09:01:31 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       </source>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:01:31 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9519b20c-c79b-41e3-922c-362e2d3a7ded_disk.config">
Dec 13 09:01:31 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       </source>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:01:31 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:1a:55:74"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <target dev="tap3ba83e70-35"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/console.log" append="off"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <video>
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     </video>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:01:31 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:01:31 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:01:31 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:01:31 compute-0 nova_compute[248510]: </domain>
Dec 13 09:01:31 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.163 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Preparing to wait for external event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.163 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.163 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.164 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.164 248514 DEBUG nova.virt.libvirt.vif [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-465153182',display_name='tempest-TestNetworkBasicOps-server-465153182',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-465153182',id=129,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIO+yWVJ497mtKMlg6/trFGMKEjgI30rNV/FBctamIJHtuqDBXodOJRCF+bmPrWOZ8OVFPBiR7SruaUfZabfP77iTSiPTfosnAjyuR+wA0QlVYwQySGN575QhOgAkbSV7g==',key_name='tempest-TestNetworkBasicOps-815903989',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mmqkoou9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:01:26Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=9519b20c-c79b-41e3-922c-362e2d3a7ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.164 248514 DEBUG nova.network.os_vif_util [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.165 248514 DEBUG nova.network.os_vif_util [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.165 248514 DEBUG os_vif [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.166 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.167 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.170 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ba83e70-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.170 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ba83e70-35, col_values=(('external_ids', {'iface-id': '3ba83e70-35d2-4947-85b4-64f21eae817c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:55:74', 'vm-uuid': '9519b20c-c79b-41e3-922c-362e2d3a7ded'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:31 compute-0 NetworkManager[50376]: <info>  [1765616491.1726] manager: (tap3ba83e70-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/531)
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.171 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.176 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.179 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.180 248514 INFO os_vif [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35')
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.242 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.242 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.243 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:1a:55:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.244 248514 INFO nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Using config drive
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.275 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.426 248514 DEBUG nova.network.neutron [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Updated VIF entry in instance network info cache for port 3ba83e70-35d2-4947-85b4-64f21eae817c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.427 248514 DEBUG nova.network.neutron [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Updating instance_info_cache with network_info: [{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.447 248514 DEBUG oslo_concurrency.lockutils [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9519b20c-c79b-41e3-922c-362e2d3a7ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:01:31 compute-0 ceph-mon[76537]: pgmap v3012: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Dec 13 09:01:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/663929690' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:01:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/489710228' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.901 248514 INFO nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Creating config drive at /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/disk.config
Dec 13 09:01:31 compute-0 nova_compute[248510]: 2025-12-13 09:01:31.906 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9lon17q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.062 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9lon17q" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.088 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.092 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/disk.config 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.242 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/disk.config 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.243 248514 INFO nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Deleting local config drive /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/disk.config because it was imported into RBD.
Dec 13 09:01:32 compute-0 kernel: tap3ba83e70-35: entered promiscuous mode
Dec 13 09:01:32 compute-0 NetworkManager[50376]: <info>  [1765616492.3018] manager: (tap3ba83e70-35): new Tun device (/org/freedesktop/NetworkManager/Devices/532)
Dec 13 09:01:32 compute-0 ovn_controller[148476]: 2025-12-13T09:01:32Z|01310|binding|INFO|Claiming lport 3ba83e70-35d2-4947-85b4-64f21eae817c for this chassis.
Dec 13 09:01:32 compute-0 ovn_controller[148476]: 2025-12-13T09:01:32Z|01311|binding|INFO|3ba83e70-35d2-4947-85b4-64f21eae817c: Claiming fa:16:3e:1a:55:74 10.100.0.6
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.301 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.306 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.309 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.311 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 NetworkManager[50376]: <info>  [1765616492.3125] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/533)
Dec 13 09:01:32 compute-0 NetworkManager[50376]: <info>  [1765616492.3131] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Dec 13 09:01:32 compute-0 systemd-machined[210538]: New machine qemu-157-instance-00000081.
Dec 13 09:01:32 compute-0 systemd[1]: Started Virtual Machine qemu-157-instance-00000081.
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.385 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:55:74 10.100.0.6'], port_security=['fa:16:3e:1a:55:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9519b20c-c79b-41e3-922c-362e2d3a7ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '9', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea2504ae-fad7-498d-b82b-9d089e69588d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3ba83e70-35d2-4947-85b4-64f21eae817c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.386 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3ba83e70-35d2-4947-85b4-64f21eae817c in datapath 9da7b572-116c-40f0-9b1e-0183fa9d3f87 bound to our chassis
Dec 13 09:01:32 compute-0 systemd-udevd[377253]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.388 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9da7b572-116c-40f0-9b1e-0183fa9d3f87
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.401 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aff01373-83cf-48ef-b038-b38ceb3962b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.402 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9da7b572-11 in ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:01:32 compute-0 NetworkManager[50376]: <info>  [1765616492.4076] device (tap3ba83e70-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:01:32 compute-0 NetworkManager[50376]: <info>  [1765616492.4082] device (tap3ba83e70-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.408 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9da7b572-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.409 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8db9632-1d5d-41e1-8ac7-ae7115582101]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.410 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f5de73c5-3d15-43aa-93b5-d6cff210a698]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.423 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.427 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e02e6e-c75b-40fc-bcfd-537250d7c5dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ovn_controller[148476]: 2025-12-13T09:01:32Z|01312|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c ovn-installed in OVS
Dec 13 09:01:32 compute-0 ovn_controller[148476]: 2025-12-13T09:01:32Z|01313|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c up in Southbound
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.436 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.442 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11da7df9-f333-4352-982d-d7cf7f18b53d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3013: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.475 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f29cbf64-7347-4e44-b8d2-c36962e03df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.483 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71d69425-5ae8-48a5-97ec-803712571cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 NetworkManager[50376]: <info>  [1765616492.4844] manager: (tap9da7b572-10): new Veth device (/org/freedesktop/NetworkManager/Devices/535)
Dec 13 09:01:32 compute-0 systemd-udevd[377258]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.517 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[51e9f158-6f3b-440e-945c-e303512b3878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.522 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[31efe345-5b05-4d20-b571-969693caba83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 NetworkManager[50376]: <info>  [1765616492.5446] device (tap9da7b572-10): carrier: link connected
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.550 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[38f8a3de-f4cb-4cf0-8e1c-d14636109f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e691ec68-ae0a-4e26-a8c3-d7c7263a8986]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9da7b572-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:10:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906976, 'reachable_time': 29183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377288, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.590 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f6bbe9f7-29a1-4c0d-8487-dc57423a486d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:101b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 906976, 'tstamp': 906976}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377289, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.607 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c21dea-3c2a-4292-8694-f78ff7bcd9a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9da7b572-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:10:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 3, 'rx_bytes': 180, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 3, 'rx_bytes': 180, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906976, 'reachable_time': 29183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377290, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[27ab6ac2-e5d1-403a-a4c0-20488adcd586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.706 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d46ebb3f-6736-4a24-bf3d-84f1207186e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.708 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9da7b572-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.708 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.709 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9da7b572-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:32 compute-0 NetworkManager[50376]: <info>  [1765616492.7113] manager: (tap9da7b572-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/536)
Dec 13 09:01:32 compute-0 kernel: tap9da7b572-10: entered promiscuous mode
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.718 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9da7b572-10, col_values=(('external_ids', {'iface-id': '92adc7c4-81cb-40ce-bd1a-0498c06b06d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 ovn_controller[148476]: 2025-12-13T09:01:32Z|01314|binding|INFO|Releasing lport 92adc7c4-81cb-40ce-bd1a-0498c06b06d5 from this chassis (sb_readonly=0)
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.737 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.737 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9da7b572-116c-40f0-9b1e-0183fa9d3f87.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9da7b572-116c-40f0-9b1e-0183fa9d3f87.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.738 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[978f7513-c1d2-4374-9887-6fb968a9a4b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.739 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-9da7b572-116c-40f0-9b1e-0183fa9d3f87
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/9da7b572-116c-40f0-9b1e-0183fa9d3f87.pid.haproxy
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 9da7b572-116c-40f0-9b1e-0183fa9d3f87
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:01:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.740 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'env', 'PROCESS_TAG=haproxy-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9da7b572-116c-40f0-9b1e-0183fa9d3f87.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:01:32 compute-0 nova_compute[248510]: 2025-12-13 09:01:32.890 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:32 compute-0 sshd-session[377267]: Invalid user solana from 193.32.162.146 port 55034
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.041 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616493.0402653, 9519b20c-c79b-41e3-922c-362e2d3a7ded => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.041 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] VM Started (Lifecycle Event)
Dec 13 09:01:33 compute-0 sshd-session[377267]: Connection closed by invalid user solana 193.32.162.146 port 55034 [preauth]
Dec 13 09:01:33 compute-0 podman[377364]: 2025-12-13 09:01:33.109141557 +0000 UTC m=+0.050757173 container create 93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:01:33 compute-0 systemd[1]: Started libpod-conmon-93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc.scope.
Dec 13 09:01:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:01:33 compute-0 podman[377364]: 2025-12-13 09:01:33.082263229 +0000 UTC m=+0.023878865 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c098a60e057cfd60886eb2452e9a4fdba031c53e7157bf4406be4a772701ff1a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.199 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:01:33 compute-0 podman[377364]: 2025-12-13 09:01:33.204570373 +0000 UTC m=+0.146186019 container init 93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.206 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616493.0404358, 9519b20c-c79b-41e3-922c-362e2d3a7ded => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.207 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] VM Paused (Lifecycle Event)
Dec 13 09:01:33 compute-0 podman[377364]: 2025-12-13 09:01:33.21179042 +0000 UTC m=+0.153406026 container start 93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.236 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:01:33 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [NOTICE]   (377383) : New worker (377385) forked
Dec 13 09:01:33 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [NOTICE]   (377383) : Loading success.
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.241 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.281 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.463 248514 DEBUG nova.compute.manager [req-2a7487ce-8cf5-4a21-8452-30d42d6cabf2 req-08903223-594d-48d6-ad46-2c95f20bdab8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.463 248514 DEBUG oslo_concurrency.lockutils [req-2a7487ce-8cf5-4a21-8452-30d42d6cabf2 req-08903223-594d-48d6-ad46-2c95f20bdab8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.464 248514 DEBUG oslo_concurrency.lockutils [req-2a7487ce-8cf5-4a21-8452-30d42d6cabf2 req-08903223-594d-48d6-ad46-2c95f20bdab8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.464 248514 DEBUG oslo_concurrency.lockutils [req-2a7487ce-8cf5-4a21-8452-30d42d6cabf2 req-08903223-594d-48d6-ad46-2c95f20bdab8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.464 248514 DEBUG nova.compute.manager [req-2a7487ce-8cf5-4a21-8452-30d42d6cabf2 req-08903223-594d-48d6-ad46-2c95f20bdab8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Processing event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.465 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.469 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616493.4691448, 9519b20c-c79b-41e3-922c-362e2d3a7ded => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.469 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] VM Resumed (Lifecycle Event)
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.471 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.473 248514 INFO nova.virt.libvirt.driver [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Instance spawned successfully.
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.473 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.507 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.513 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.514 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.514 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.514 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.515 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.516 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:01:33 compute-0 ceph-mon[76537]: pgmap v3013: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.522 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.560 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.645 248514 INFO nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Took 7.05 seconds to spawn the instance on the hypervisor.
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.647 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.718 248514 INFO nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Took 8.27 seconds to build instance.
Dec 13 09:01:33 compute-0 nova_compute[248510]: 2025-12-13 09:01:33.748 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3014: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Dec 13 09:01:35 compute-0 ceph-mon[76537]: pgmap v3014: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Dec 13 09:01:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:01:35 compute-0 nova_compute[248510]: 2025-12-13 09:01:35.994 248514 DEBUG nova.compute.manager [req-f8725001-33ce-47a1-b6d5-dae214371c7c req-5ec08d62-f6d1-4362-9efe-798ddbd890a4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:35 compute-0 nova_compute[248510]: 2025-12-13 09:01:35.994 248514 DEBUG oslo_concurrency.lockutils [req-f8725001-33ce-47a1-b6d5-dae214371c7c req-5ec08d62-f6d1-4362-9efe-798ddbd890a4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:35 compute-0 nova_compute[248510]: 2025-12-13 09:01:35.995 248514 DEBUG oslo_concurrency.lockutils [req-f8725001-33ce-47a1-b6d5-dae214371c7c req-5ec08d62-f6d1-4362-9efe-798ddbd890a4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:35 compute-0 nova_compute[248510]: 2025-12-13 09:01:35.995 248514 DEBUG oslo_concurrency.lockutils [req-f8725001-33ce-47a1-b6d5-dae214371c7c req-5ec08d62-f6d1-4362-9efe-798ddbd890a4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:35 compute-0 nova_compute[248510]: 2025-12-13 09:01:35.995 248514 DEBUG nova.compute.manager [req-f8725001-33ce-47a1-b6d5-dae214371c7c req-5ec08d62-f6d1-4362-9efe-798ddbd890a4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] No waiting events found dispatching network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:01:35 compute-0 nova_compute[248510]: 2025-12-13 09:01:35.995 248514 WARNING nova.compute.manager [req-f8725001-33ce-47a1-b6d5-dae214371c7c req-5ec08d62-f6d1-4362-9efe-798ddbd890a4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received unexpected event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c for instance with vm_state active and task_state None.
Dec 13 09:01:36 compute-0 nova_compute[248510]: 2025-12-13 09:01:36.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3015: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Dec 13 09:01:36 compute-0 ceph-mon[76537]: pgmap v3015: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.307 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.309 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.310 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.311 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.312 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.314 248514 INFO nova.compute.manager [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Terminating instance
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.316 248514 DEBUG nova.compute.manager [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:01:37 compute-0 kernel: tap3ba83e70-35 (unregistering): left promiscuous mode
Dec 13 09:01:37 compute-0 NetworkManager[50376]: <info>  [1765616497.3723] device (tap3ba83e70-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:01:37 compute-0 ovn_controller[148476]: 2025-12-13T09:01:37Z|01315|binding|INFO|Releasing lport 3ba83e70-35d2-4947-85b4-64f21eae817c from this chassis (sb_readonly=0)
Dec 13 09:01:37 compute-0 ovn_controller[148476]: 2025-12-13T09:01:37Z|01316|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c down in Southbound
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.376 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:37 compute-0 ovn_controller[148476]: 2025-12-13T09:01:37Z|01317|binding|INFO|Removing iface tap3ba83e70-35 ovn-installed in OVS
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.387 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:55:74 10.100.0.6'], port_security=['fa:16:3e:1a:55:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9519b20c-c79b-41e3-922c-362e2d3a7ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '11', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea2504ae-fad7-498d-b82b-9d089e69588d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3ba83e70-35d2-4947-85b4-64f21eae817c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.392 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.392 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3ba83e70-35d2-4947-85b4-64f21eae817c in datapath 9da7b572-116c-40f0-9b1e-0183fa9d3f87 unbound from our chassis
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.395 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9da7b572-116c-40f0-9b1e-0183fa9d3f87, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.396 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78a7e7b1-46b1-493c-b162-ab6405892227]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.397 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 namespace which is not needed anymore
Dec 13 09:01:37 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d00000081.scope: Deactivated successfully.
Dec 13 09:01:37 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d00000081.scope: Consumed 4.607s CPU time.
Dec 13 09:01:37 compute-0 systemd-machined[210538]: Machine qemu-157-instance-00000081 terminated.
Dec 13 09:01:37 compute-0 podman[377399]: 2025-12-13 09:01:37.471944838 +0000 UTC m=+0.063426514 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:01:37 compute-0 podman[377398]: 2025-12-13 09:01:37.472523142 +0000 UTC m=+0.063793963 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 09:01:37 compute-0 podman[377396]: 2025-12-13 09:01:37.501133982 +0000 UTC m=+0.093166411 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 09:01:37 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [NOTICE]   (377383) : haproxy version is 2.8.14-c23fe91
Dec 13 09:01:37 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [NOTICE]   (377383) : path to executable is /usr/sbin/haproxy
Dec 13 09:01:37 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [WARNING]  (377383) : Exiting Master process...
Dec 13 09:01:37 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [ALERT]    (377383) : Current worker (377385) exited with code 143 (Terminated)
Dec 13 09:01:37 compute-0 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [WARNING]  (377383) : All workers exited. Exiting... (0)
Dec 13 09:01:37 compute-0 systemd[1]: libpod-93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc.scope: Deactivated successfully.
Dec 13 09:01:37 compute-0 podman[377475]: 2025-12-13 09:01:37.5472185 +0000 UTC m=+0.050684361 container died 93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.555 248514 INFO nova.virt.libvirt.driver [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Instance destroyed successfully.
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.556 248514 DEBUG nova.objects.instance [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 9519b20c-c79b-41e3-922c-362e2d3a7ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c098a60e057cfd60886eb2452e9a4fdba031c53e7157bf4406be4a772701ff1a-merged.mount: Deactivated successfully.
Dec 13 09:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc-userdata-shm.mount: Deactivated successfully.
Dec 13 09:01:37 compute-0 podman[377475]: 2025-12-13 09:01:37.607907996 +0000 UTC m=+0.111373847 container cleanup 93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Dec 13 09:01:37 compute-0 systemd[1]: libpod-conmon-93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc.scope: Deactivated successfully.
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.672 248514 DEBUG nova.virt.libvirt.vif [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-465153182',display_name='tempest-TestNetworkBasicOps-server-465153182',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-465153182',id=129,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIO+yWVJ497mtKMlg6/trFGMKEjgI30rNV/FBctamIJHtuqDBXodOJRCF+bmPrWOZ8OVFPBiR7SruaUfZabfP77iTSiPTfosnAjyuR+wA0QlVYwQySGN575QhOgAkbSV7g==',key_name='tempest-TestNetworkBasicOps-815903989',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mmqkoou9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:01:33Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=9519b20c-c79b-41e3-922c-362e2d3a7ded,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.673 248514 DEBUG nova.network.os_vif_util [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.674 248514 DEBUG nova.network.os_vif_util [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.674 248514 DEBUG os_vif [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.677 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ba83e70-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.678 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:37 compute-0 podman[377520]: 2025-12-13 09:01:37.681584079 +0000 UTC m=+0.049953104 container remove 93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.682 248514 INFO os_vif [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35')
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.687 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f198bf4c-4124-42a2-84ed-751d5120d5bc]: (4, ('Sat Dec 13 09:01:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 (93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc)\n93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc\nSat Dec 13 09:01:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 (93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc)\n93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.688 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4c81c8e0-f010-415c-b6d6-56f91ca05bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.689 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9da7b572-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:37 compute-0 kernel: tap9da7b572-10: left promiscuous mode
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.709 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7a977a84-7784-4f46-99d2-7b5e7e08c0f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.723 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[237f1c25-80e8-47c8-b539-5212f97aa272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.725 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d625cb-f445-4093-ae30-dfd04e17a2f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a882ffa-6560-441e-a446-05330bfa64b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906968, 'reachable_time': 44046, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377553, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.750 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:01:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.750 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cf5635-73fc-4298-b7d4-8018026ee5f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d9da7b572\x2d116c\x2d40f0\x2d9b1e\x2d0183fa9d3f87.mount: Deactivated successfully.
Dec 13 09:01:37 compute-0 nova_compute[248510]: 2025-12-13 09:01:37.892 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:38 compute-0 nova_compute[248510]: 2025-12-13 09:01:38.016 248514 INFO nova.virt.libvirt.driver [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Deleting instance files /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded_del
Dec 13 09:01:38 compute-0 nova_compute[248510]: 2025-12-13 09:01:38.017 248514 INFO nova.virt.libvirt.driver [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Deletion of /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded_del complete
Dec 13 09:01:38 compute-0 nova_compute[248510]: 2025-12-13 09:01:38.098 248514 INFO nova.compute.manager [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Took 0.78 seconds to destroy the instance on the hypervisor.
Dec 13 09:01:38 compute-0 nova_compute[248510]: 2025-12-13 09:01:38.099 248514 DEBUG oslo.service.loopingcall [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:01:38 compute-0 nova_compute[248510]: 2025-12-13 09:01:38.099 248514 DEBUG nova.compute.manager [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:01:38 compute-0 nova_compute[248510]: 2025-12-13 09:01:38.099 248514 DEBUG nova.network.neutron [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:01:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3016: 321 pgs: 321 active+clean; 74 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 788 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 13 09:01:39 compute-0 nova_compute[248510]: 2025-12-13 09:01:39.577 248514 DEBUG nova.compute.manager [req-98793bf3-6480-41e1-9cb7-d4ad8bec84da req-87f0a9a7-c488-4973-a645-342dd8a15520 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received event network-vif-unplugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:39 compute-0 nova_compute[248510]: 2025-12-13 09:01:39.577 248514 DEBUG oslo_concurrency.lockutils [req-98793bf3-6480-41e1-9cb7-d4ad8bec84da req-87f0a9a7-c488-4973-a645-342dd8a15520 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:39 compute-0 nova_compute[248510]: 2025-12-13 09:01:39.577 248514 DEBUG oslo_concurrency.lockutils [req-98793bf3-6480-41e1-9cb7-d4ad8bec84da req-87f0a9a7-c488-4973-a645-342dd8a15520 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:39 compute-0 nova_compute[248510]: 2025-12-13 09:01:39.578 248514 DEBUG oslo_concurrency.lockutils [req-98793bf3-6480-41e1-9cb7-d4ad8bec84da req-87f0a9a7-c488-4973-a645-342dd8a15520 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:39 compute-0 nova_compute[248510]: 2025-12-13 09:01:39.578 248514 DEBUG nova.compute.manager [req-98793bf3-6480-41e1-9cb7-d4ad8bec84da req-87f0a9a7-c488-4973-a645-342dd8a15520 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] No waiting events found dispatching network-vif-unplugged-3ba83e70-35d2-4947-85b4-64f21eae817c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:01:39 compute-0 nova_compute[248510]: 2025-12-13 09:01:39.578 248514 DEBUG nova.compute.manager [req-98793bf3-6480-41e1-9cb7-d4ad8bec84da req-87f0a9a7-c488-4973-a645-342dd8a15520 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received event network-vif-unplugged-3ba83e70-35d2-4947-85b4-64f21eae817c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:01:39 compute-0 ceph-mon[76537]: pgmap v3016: 321 pgs: 321 active+clean; 74 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 788 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 13 09:01:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:01:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:01:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:01:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:01:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:01:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:01:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3017: 321 pgs: 321 active+clean; 41 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 115 op/s
Dec 13 09:01:40 compute-0 ceph-mon[76537]: pgmap v3017: 321 pgs: 321 active+clean; 41 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 115 op/s
Dec 13 09:01:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.255 248514 DEBUG nova.network.neutron [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.282 248514 INFO nova.compute.manager [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Took 3.18 seconds to deallocate network for instance.
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.339 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.340 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.420 248514 DEBUG oslo_concurrency.processutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.500 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.501 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.532 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.612 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.678 248514 DEBUG nova.compute.manager [req-50607e0a-ee52-446f-aa14-0d97d23bbd0f req-9e768243-5e1f-4884-800e-cd6f961f54e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.679 248514 DEBUG oslo_concurrency.lockutils [req-50607e0a-ee52-446f-aa14-0d97d23bbd0f req-9e768243-5e1f-4884-800e-cd6f961f54e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.679 248514 DEBUG oslo_concurrency.lockutils [req-50607e0a-ee52-446f-aa14-0d97d23bbd0f req-9e768243-5e1f-4884-800e-cd6f961f54e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.679 248514 DEBUG oslo_concurrency.lockutils [req-50607e0a-ee52-446f-aa14-0d97d23bbd0f req-9e768243-5e1f-4884-800e-cd6f961f54e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.680 248514 DEBUG nova.compute.manager [req-50607e0a-ee52-446f-aa14-0d97d23bbd0f req-9e768243-5e1f-4884-800e-cd6f961f54e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] No waiting events found dispatching network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:01:41 compute-0 nova_compute[248510]: 2025-12-13 09:01:41.680 248514 WARNING nova.compute.manager [req-50607e0a-ee52-446f-aa14-0d97d23bbd0f req-9e768243-5e1f-4884-800e-cd6f961f54e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received unexpected event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c for instance with vm_state deleted and task_state None.
Dec 13 09:01:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:01:41 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4234585360' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.019 248514 DEBUG oslo_concurrency.processutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.026 248514 DEBUG nova.compute.provider_tree [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:01:42 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4234585360' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.066 248514 DEBUG nova.scheduler.client.report [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.097 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.100 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.105 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.105 248514 INFO nova.compute.claims [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.129 248514 INFO nova.scheduler.client.report [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 9519b20c-c79b-41e3-922c-362e2d3a7ded
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.228 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.247 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3018: 321 pgs: 321 active+clean; 41 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:01:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3217924076' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.945 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.952 248514 DEBUG nova.compute.provider_tree [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:01:42 compute-0 nova_compute[248510]: 2025-12-13 09:01:42.971 248514 DEBUG nova.scheduler.client.report [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.005 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.006 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:01:43 compute-0 ceph-mon[76537]: pgmap v3018: 321 pgs: 321 active+clean; 41 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec 13 09:01:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3217924076' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.057 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.057 248514 DEBUG nova.network.neutron [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.082 248514 INFO nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.111 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.210 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.211 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.212 248514 INFO nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Creating image(s)
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.234 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.261 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.283 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.287 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.377 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.378 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.379 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.379 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.399 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.403 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 18506534-de17-4e42-87c7-b2546619f4d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.440 248514 DEBUG nova.policy [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5fd410579fa429ba0f7f680590cd86a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '77d40177a4804671aa9c5da343bc2ed4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.695 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 18506534-de17-4e42-87c7-b2546619f4d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:43 compute-0 nova_compute[248510]: 2025-12-13 09:01:43.763 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] resizing rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:01:44 compute-0 nova_compute[248510]: 2025-12-13 09:01:44.114 248514 DEBUG nova.objects.instance [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'migration_context' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:01:44 compute-0 nova_compute[248510]: 2025-12-13 09:01:44.140 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:01:44 compute-0 nova_compute[248510]: 2025-12-13 09:01:44.140 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Ensure instance console log exists: /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:01:44 compute-0 nova_compute[248510]: 2025-12-13 09:01:44.141 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:44 compute-0 nova_compute[248510]: 2025-12-13 09:01:44.141 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:44 compute-0 nova_compute[248510]: 2025-12-13 09:01:44.141 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3019: 321 pgs: 321 active+clean; 76 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Dec 13 09:01:45 compute-0 nova_compute[248510]: 2025-12-13 09:01:45.100 248514 DEBUG nova.network.neutron [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Successfully created port: 698673ab-115a-49aa-b3d5-68392d28aa81 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:01:45 compute-0 ceph-mon[76537]: pgmap v3019: 321 pgs: 321 active+clean; 76 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Dec 13 09:01:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:01:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3020: 321 pgs: 321 active+clean; 76 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Dec 13 09:01:46 compute-0 nova_compute[248510]: 2025-12-13 09:01:46.900 248514 DEBUG nova.network.neutron [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Successfully updated port: 698673ab-115a-49aa-b3d5-68392d28aa81 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:01:46 compute-0 nova_compute[248510]: 2025-12-13 09:01:46.917 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:01:46 compute-0 nova_compute[248510]: 2025-12-13 09:01:46.918 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:01:46 compute-0 nova_compute[248510]: 2025-12-13 09:01:46.918 248514 DEBUG nova.network.neutron [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:01:47 compute-0 nova_compute[248510]: 2025-12-13 09:01:47.059 248514 DEBUG nova.compute.manager [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-changed-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:47 compute-0 nova_compute[248510]: 2025-12-13 09:01:47.059 248514 DEBUG nova.compute.manager [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Refreshing instance network info cache due to event network-changed-698673ab-115a-49aa-b3d5-68392d28aa81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:01:47 compute-0 nova_compute[248510]: 2025-12-13 09:01:47.059 248514 DEBUG oslo_concurrency.lockutils [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:01:47 compute-0 nova_compute[248510]: 2025-12-13 09:01:47.328 248514 DEBUG nova.network.neutron [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:01:47 compute-0 ceph-mon[76537]: pgmap v3020: 321 pgs: 321 active+clean; 76 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Dec 13 09:01:47 compute-0 nova_compute[248510]: 2025-12-13 09:01:47.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:47 compute-0 nova_compute[248510]: 2025-12-13 09:01:47.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3021: 321 pgs: 321 active+clean; 88 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 13 09:01:49 compute-0 ceph-mon[76537]: pgmap v3021: 321 pgs: 321 active+clean; 88 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 13 09:01:49 compute-0 nova_compute[248510]: 2025-12-13 09:01:49.982 248514 DEBUG nova.network.neutron [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updating instance_info_cache with network_info: [{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.017 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.018 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance network_info: |[{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.018 248514 DEBUG oslo_concurrency.lockutils [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.018 248514 DEBUG nova.network.neutron [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Refreshing network info cache for port 698673ab-115a-49aa-b3d5-68392d28aa81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.021 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Start _get_guest_xml network_info=[{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.025 248514 WARNING nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.029 248514 DEBUG nova.virt.libvirt.host [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.029 248514 DEBUG nova.virt.libvirt.host [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.032 248514 DEBUG nova.virt.libvirt.host [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.033 248514 DEBUG nova.virt.libvirt.host [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.033 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.033 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.034 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.034 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.034 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.034 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.034 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.034 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.035 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.035 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.035 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.035 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.038 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3022: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:01:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744851776' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.655 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.681 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.685 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:50 compute-0 nova_compute[248510]: 2025-12-13 09:01:50.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:01:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:01:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/999546636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.231 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.232 248514 DEBUG nova.virt.libvirt.vif [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1657427533',display_name='tempest-TestNetworkAdvancedServerOps-server-1657427533',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1657427533',id=130,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIu2Mio4iiKCPHCFWqnjQXxmeR9VuZ/sJMKRi7ThHNLpfALJX8ZDxWjNQlqcAjJCFa+dRlCd52p375xJuviq4QzkIt2R0aIT9v0cJCRdagDbQWGjbmiGhE29U66OFfUYVw==',key_name='tempest-TestNetworkAdvancedServerOps-2136416273',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a2gq0cdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:01:43Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=18506534-de17-4e42-87c7-b2546619f4d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.233 248514 DEBUG nova.network.os_vif_util [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.234 248514 DEBUG nova.network.os_vif_util [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.235 248514 DEBUG nova.objects.instance [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.256 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:01:51 compute-0 nova_compute[248510]:   <uuid>18506534-de17-4e42-87c7-b2546619f4d4</uuid>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   <name>instance-00000082</name>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1657427533</nova:name>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:01:50</nova:creationTime>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:01:51 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:01:51 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:01:51 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:01:51 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:01:51 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:01:51 compute-0 nova_compute[248510]:         <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 09:01:51 compute-0 nova_compute[248510]:         <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:01:51 compute-0 nova_compute[248510]:         <nova:port uuid="698673ab-115a-49aa-b3d5-68392d28aa81">
Dec 13 09:01:51 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <system>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <entry name="serial">18506534-de17-4e42-87c7-b2546619f4d4</entry>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <entry name="uuid">18506534-de17-4e42-87c7-b2546619f4d4</entry>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     </system>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   <os>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   </os>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   <features>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   </features>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/18506534-de17-4e42-87c7-b2546619f4d4_disk">
Dec 13 09:01:51 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       </source>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:01:51 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/18506534-de17-4e42-87c7-b2546619f4d4_disk.config">
Dec 13 09:01:51 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       </source>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:01:51 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:7b:34:14"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <target dev="tap698673ab-11"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/console.log" append="off"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <video>
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     </video>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:01:51 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:01:51 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:01:51 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:01:51 compute-0 nova_compute[248510]: </domain>
Dec 13 09:01:51 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.257 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Preparing to wait for external event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.258 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.259 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.259 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.260 248514 DEBUG nova.virt.libvirt.vif [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1657427533',display_name='tempest-TestNetworkAdvancedServerOps-server-1657427533',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1657427533',id=130,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIu2Mio4iiKCPHCFWqnjQXxmeR9VuZ/sJMKRi7ThHNLpfALJX8ZDxWjNQlqcAjJCFa+dRlCd52p375xJuviq4QzkIt2R0aIT9v0cJCRdagDbQWGjbmiGhE29U66OFfUYVw==',key_name='tempest-TestNetworkAdvancedServerOps-2136416273',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a2gq0cdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:01:43Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=18506534-de17-4e42-87c7-b2546619f4d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.260 248514 DEBUG nova.network.os_vif_util [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.261 248514 DEBUG nova.network.os_vif_util [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.261 248514 DEBUG os_vif [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.262 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.262 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.262 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.268 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap698673ab-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.268 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap698673ab-11, col_values=(('external_ids', {'iface-id': '698673ab-115a-49aa-b3d5-68392d28aa81', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:34:14', 'vm-uuid': '18506534-de17-4e42-87c7-b2546619f4d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:51 compute-0 NetworkManager[50376]: <info>  [1765616511.2710] manager: (tap698673ab-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.270 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.274 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.276 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.276 248514 INFO os_vif [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11')
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.428 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.429 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.429 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No VIF found with MAC fa:16:3e:7b:34:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.430 248514 INFO nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Using config drive
Dec 13 09:01:51 compute-0 nova_compute[248510]: 2025-12-13 09:01:51.450 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:51 compute-0 ceph-mon[76537]: pgmap v3022: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 13 09:01:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3744851776' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:01:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/999546636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:01:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3023: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:01:52 compute-0 nova_compute[248510]: 2025-12-13 09:01:52.554 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616497.5534518, 9519b20c-c79b-41e3-922c-362e2d3a7ded => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:01:52 compute-0 nova_compute[248510]: 2025-12-13 09:01:52.555 248514 INFO nova.compute.manager [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] VM Stopped (Lifecycle Event)
Dec 13 09:01:52 compute-0 nova_compute[248510]: 2025-12-13 09:01:52.592 248514 DEBUG nova.compute.manager [None req-8b54f665-2adc-43e9-a7a5-955266c3afbc - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:01:52 compute-0 nova_compute[248510]: 2025-12-13 09:01:52.684 248514 INFO nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Creating config drive at /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config
Dec 13 09:01:52 compute-0 nova_compute[248510]: 2025-12-13 09:01:52.690 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsrmu0jpl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:52 compute-0 nova_compute[248510]: 2025-12-13 09:01:52.846 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsrmu0jpl" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:52 compute-0 nova_compute[248510]: 2025-12-13 09:01:52.879 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:01:52 compute-0 nova_compute[248510]: 2025-12-13 09:01:52.885 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config 18506534-de17-4e42-87c7-b2546619f4d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:01:52 compute-0 nova_compute[248510]: 2025-12-13 09:01:52.942 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.056 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config 18506534-de17-4e42-87c7-b2546619f4d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.056 248514 INFO nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deleting local config drive /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config because it was imported into RBD.
Dec 13 09:01:53 compute-0 NetworkManager[50376]: <info>  [1765616513.0985] manager: (tap698673ab-11): new Tun device (/org/freedesktop/NetworkManager/Devices/538)
Dec 13 09:01:53 compute-0 kernel: tap698673ab-11: entered promiscuous mode
Dec 13 09:01:53 compute-0 ovn_controller[148476]: 2025-12-13T09:01:53Z|01318|binding|INFO|Claiming lport 698673ab-115a-49aa-b3d5-68392d28aa81 for this chassis.
Dec 13 09:01:53 compute-0 ovn_controller[148476]: 2025-12-13T09:01:53Z|01319|binding|INFO|698673ab-115a-49aa-b3d5-68392d28aa81: Claiming fa:16:3e:7b:34:14 10.100.0.5
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.101 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.103 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.106 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:53 compute-0 systemd-udevd[377901]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:01:53 compute-0 systemd-machined[210538]: New machine qemu-158-instance-00000082.
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.137 248514 DEBUG nova.network.neutron [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updated VIF entry in instance network info cache for port 698673ab-115a-49aa-b3d5-68392d28aa81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.138 248514 DEBUG nova.network.neutron [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updating instance_info_cache with network_info: [{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:01:53 compute-0 NetworkManager[50376]: <info>  [1765616513.1404] device (tap698673ab-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:01:53 compute-0 NetworkManager[50376]: <info>  [1765616513.1414] device (tap698673ab-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.144 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:34:14 10.100.0.5'], port_security=['fa:16:3e:7b:34:14 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18506534-de17-4e42-87c7-b2546619f4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '250e0b26-9d3f-404b-8b3e-f79706be2a3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db45c681-c79f-432b-bd2c-bbf3bd1f2153, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=698673ab-115a-49aa-b3d5-68392d28aa81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.145 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 698673ab-115a-49aa-b3d5-68392d28aa81 in datapath 44b4664e-9eef-4d04-bcdb-68869f16c46a bound to our chassis
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.146 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44b4664e-9eef-4d04-bcdb-68869f16c46a
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.156 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b01b2f-b9c1-492a-8b62-9864d956db96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 systemd[1]: Started Virtual Machine qemu-158-instance-00000082.
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.157 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap44b4664e-91 in ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.159 248514 DEBUG oslo_concurrency.lockutils [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.159 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap44b4664e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.159 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78ad4c61-cc8d-4f7d-a149-1de7fba44148]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.160 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[625f125b-2d9e-4003-bc84-cbdc3d0bb13b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.205 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[919b03a1-e737-44c5-9100-441f6b72f1d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 ovn_controller[148476]: 2025-12-13T09:01:53Z|01320|binding|INFO|Setting lport 698673ab-115a-49aa-b3d5-68392d28aa81 ovn-installed in OVS
Dec 13 09:01:53 compute-0 ovn_controller[148476]: 2025-12-13T09:01:53Z|01321|binding|INFO|Setting lport 698673ab-115a-49aa-b3d5-68392d28aa81 up in Southbound
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.208 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.218 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[000a6675-b77b-4805-95d9-1541b68f46f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.249 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a62021fb-1fd8-42cf-bc71-5957b0ff73ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 NetworkManager[50376]: <info>  [1765616513.2553] manager: (tap44b4664e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/539)
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.254 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2bdc0c-cac0-4292-b1c4-53523b1d3208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.286 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a3e3cc-80cf-41d7-ab6b-e69ade501f74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.290 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ac756119-7934-4620-a672-5804f0cbc3cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 NetworkManager[50376]: <info>  [1765616513.3172] device (tap44b4664e-90): carrier: link connected
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.321 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3b1d1cfd-e326-47ab-b39a-22af5b545440]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.337 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ec315d9d-9822-4c05-ae98-5691f89752d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44b4664e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:4b:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 909053, 'reachable_time': 24631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377935, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.356 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f27b5f5-d4fc-46c3-8c39-a5cd26e9ef06]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:4b6f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 909053, 'tstamp': 909053}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377936, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.378 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9878e9d7-a0b4-46a7-adf4-4dfbd4d0ff89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44b4664e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:4b:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 909053, 'reachable_time': 24631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377937, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.412 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[98457172-0019-4795-a0e7-51dfe3490f5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.474 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1492b18-fe58-49fc-a4fd-bcfd42cfbc98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.476 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44b4664e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.476 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.476 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44b4664e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:53 compute-0 NetworkManager[50376]: <info>  [1765616513.4796] manager: (tap44b4664e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/540)
Dec 13 09:01:53 compute-0 kernel: tap44b4664e-90: entered promiscuous mode
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.481 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.486 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44b4664e-90, col_values=(('external_ids', {'iface-id': '88fafa32-b4a3-42fd-afa1-df6ba2200ad6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.488 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:53 compute-0 ovn_controller[148476]: 2025-12-13T09:01:53Z|01322|binding|INFO|Releasing lport 88fafa32-b4a3-42fd-afa1-df6ba2200ad6 from this chassis (sb_readonly=0)
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.492 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/44b4664e-9eef-4d04-bcdb-68869f16c46a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/44b4664e-9eef-4d04-bcdb-68869f16c46a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.493 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[087744ae-0bca-4786-b685-d08f8afcf358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.494 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-44b4664e-9eef-4d04-bcdb-68869f16c46a
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/44b4664e-9eef-4d04-bcdb-68869f16c46a.pid.haproxy
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 44b4664e-9eef-4d04-bcdb-68869f16c46a
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:01:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.495 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'env', 'PROCESS_TAG=haproxy-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/44b4664e-9eef-4d04-bcdb-68869f16c46a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.502 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:53 compute-0 ceph-mon[76537]: pgmap v3023: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.764 248514 DEBUG nova.compute.manager [req-71419847-fe6c-4830-aa0f-c6cdf5e2ce8b req-7bfa0ccf-ecd6-4879-81d8-4c6bc131420f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.764 248514 DEBUG oslo_concurrency.lockutils [req-71419847-fe6c-4830-aa0f-c6cdf5e2ce8b req-7bfa0ccf-ecd6-4879-81d8-4c6bc131420f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.765 248514 DEBUG oslo_concurrency.lockutils [req-71419847-fe6c-4830-aa0f-c6cdf5e2ce8b req-7bfa0ccf-ecd6-4879-81d8-4c6bc131420f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.765 248514 DEBUG oslo_concurrency.lockutils [req-71419847-fe6c-4830-aa0f-c6cdf5e2ce8b req-7bfa0ccf-ecd6-4879-81d8-4c6bc131420f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:53 compute-0 nova_compute[248510]: 2025-12-13 09:01:53.765 248514 DEBUG nova.compute.manager [req-71419847-fe6c-4830-aa0f-c6cdf5e2ce8b req-7bfa0ccf-ecd6-4879-81d8-4c6bc131420f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Processing event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:01:53 compute-0 podman[377969]: 2025-12-13 09:01:53.895241193 +0000 UTC m=+0.049366370 container create 611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 09:01:53 compute-0 systemd[1]: Started libpod-conmon-611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e.scope.
Dec 13 09:01:53 compute-0 podman[377969]: 2025-12-13 09:01:53.870578909 +0000 UTC m=+0.024704106 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:01:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cf72505ed6c58f094c4c16689a309a1fb7eaf088d765885e56e37adbda2ee2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:01:53 compute-0 podman[377969]: 2025-12-13 09:01:53.999740681 +0000 UTC m=+0.153865888 container init 611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:01:54 compute-0 podman[377969]: 2025-12-13 09:01:54.009463079 +0000 UTC m=+0.163588256 container start 611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 09:01:54 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [NOTICE]   (377988) : New worker (377990) forked
Dec 13 09:01:54 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [NOTICE]   (377988) : Loading success.
Dec 13 09:01:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3024: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.490 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616514.4901824, 18506534-de17-4e42-87c7-b2546619f4d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.490 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] VM Started (Lifecycle Event)
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.493 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.496 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.500 248514 INFO nova.virt.libvirt.driver [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance spawned successfully.
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.501 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.513 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.517 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.527 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.527 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.527 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.528 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.528 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.528 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.537 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.538 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616514.4941294, 18506534-de17-4e42-87c7-b2546619f4d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.538 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] VM Paused (Lifecycle Event)
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.608 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.611 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616514.4966018, 18506534-de17-4e42-87c7-b2546619f4d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.611 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] VM Resumed (Lifecycle Event)
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.643 248514 INFO nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Took 11.43 seconds to spawn the instance on the hypervisor.
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.643 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.656 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.660 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.695 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.774 248514 INFO nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Took 13.18 seconds to build instance.
Dec 13 09:01:54 compute-0 nova_compute[248510]: 2025-12-13 09:01:54.796 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:55.439 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:55.439 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:01:55.440 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:55 compute-0 ceph-mon[76537]: pgmap v3024: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 09:01:55 compute-0 nova_compute[248510]: 2025-12-13 09:01:55.895 248514 DEBUG nova.compute.manager [req-e035bc5e-0669-453e-a444-2ecdc4e99db4 req-46426288-4c57-4676-a6d3-edba6d22508d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:01:55 compute-0 nova_compute[248510]: 2025-12-13 09:01:55.895 248514 DEBUG oslo_concurrency.lockutils [req-e035bc5e-0669-453e-a444-2ecdc4e99db4 req-46426288-4c57-4676-a6d3-edba6d22508d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:01:55 compute-0 nova_compute[248510]: 2025-12-13 09:01:55.895 248514 DEBUG oslo_concurrency.lockutils [req-e035bc5e-0669-453e-a444-2ecdc4e99db4 req-46426288-4c57-4676-a6d3-edba6d22508d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:01:55 compute-0 nova_compute[248510]: 2025-12-13 09:01:55.896 248514 DEBUG oslo_concurrency.lockutils [req-e035bc5e-0669-453e-a444-2ecdc4e99db4 req-46426288-4c57-4676-a6d3-edba6d22508d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:01:55 compute-0 nova_compute[248510]: 2025-12-13 09:01:55.896 248514 DEBUG nova.compute.manager [req-e035bc5e-0669-453e-a444-2ecdc4e99db4 req-46426288-4c57-4676-a6d3-edba6d22508d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:01:55 compute-0 nova_compute[248510]: 2025-12-13 09:01:55.896 248514 WARNING nova.compute.manager [req-e035bc5e-0669-453e-a444-2ecdc4e99db4 req-46426288-4c57-4676-a6d3-edba6d22508d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received unexpected event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with vm_state active and task_state None.
Dec 13 09:01:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:01:56 compute-0 nova_compute[248510]: 2025-12-13 09:01:56.271 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3025: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 468 KiB/s wr, 17 op/s
Dec 13 09:01:57 compute-0 ceph-mon[76537]: pgmap v3025: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 468 KiB/s wr, 17 op/s
Dec 13 09:01:57 compute-0 nova_compute[248510]: 2025-12-13 09:01:57.933 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:01:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3026: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 493 KiB/s rd, 468 KiB/s wr, 34 op/s
Dec 13 09:01:58 compute-0 ceph-mon[76537]: pgmap v3026: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 493 KiB/s rd, 468 KiB/s wr, 34 op/s
Dec 13 09:02:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3027: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 13 09:02:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:02:01 compute-0 nova_compute[248510]: 2025-12-13 09:02:01.275 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:01 compute-0 ceph-mon[76537]: pgmap v3027: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 13 09:02:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3028: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:02:02 compute-0 NetworkManager[50376]: <info>  [1765616522.4952] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Dec 13 09:02:02 compute-0 NetworkManager[50376]: <info>  [1765616522.4965] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Dec 13 09:02:02 compute-0 nova_compute[248510]: 2025-12-13 09:02:02.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:02 compute-0 ovn_controller[148476]: 2025-12-13T09:02:02Z|01323|binding|INFO|Releasing lport 88fafa32-b4a3-42fd-afa1-df6ba2200ad6 from this chassis (sb_readonly=0)
Dec 13 09:02:02 compute-0 nova_compute[248510]: 2025-12-13 09:02:02.554 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:02 compute-0 nova_compute[248510]: 2025-12-13 09:02:02.559 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:02 compute-0 nova_compute[248510]: 2025-12-13 09:02:02.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:03 compute-0 nova_compute[248510]: 2025-12-13 09:02:03.248 248514 DEBUG nova.compute.manager [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-changed-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:03 compute-0 nova_compute[248510]: 2025-12-13 09:02:03.248 248514 DEBUG nova.compute.manager [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Refreshing instance network info cache due to event network-changed-698673ab-115a-49aa-b3d5-68392d28aa81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:02:03 compute-0 nova_compute[248510]: 2025-12-13 09:02:03.249 248514 DEBUG oslo_concurrency.lockutils [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:02:03 compute-0 nova_compute[248510]: 2025-12-13 09:02:03.249 248514 DEBUG oslo_concurrency.lockutils [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:02:03 compute-0 nova_compute[248510]: 2025-12-13 09:02:03.249 248514 DEBUG nova.network.neutron [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Refreshing network info cache for port 698673ab-115a-49aa-b3d5-68392d28aa81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:02:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:02:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 40K writes, 159K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 40K writes, 14K syncs, 2.80 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5560 writes, 22K keys, 5560 commit groups, 1.0 writes per commit group, ingest: 26.78 MB, 0.04 MB/s
                                           Interval WAL: 5560 writes, 2114 syncs, 2.63 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:02:03 compute-0 ceph-mon[76537]: pgmap v3028: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:02:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3029: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:02:04 compute-0 ceph-mon[76537]: pgmap v3029: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:02:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:02:06 compute-0 nova_compute[248510]: 2025-12-13 09:02:06.233 248514 DEBUG nova.network.neutron [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updated VIF entry in instance network info cache for port 698673ab-115a-49aa-b3d5-68392d28aa81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:02:06 compute-0 nova_compute[248510]: 2025-12-13 09:02:06.234 248514 DEBUG nova.network.neutron [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updating instance_info_cache with network_info: [{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:02:06 compute-0 nova_compute[248510]: 2025-12-13 09:02:06.271 248514 DEBUG oslo_concurrency.lockutils [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:02:06 compute-0 nova_compute[248510]: 2025-12-13 09:02:06.278 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3030: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Dec 13 09:02:06 compute-0 nova_compute[248510]: 2025-12-13 09:02:06.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:07 compute-0 ceph-mon[76537]: pgmap v3030: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Dec 13 09:02:07 compute-0 nova_compute[248510]: 2025-12-13 09:02:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:02:07 compute-0 ovn_controller[148476]: 2025-12-13T09:02:07Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:34:14 10.100.0.5
Dec 13 09:02:07 compute-0 ovn_controller[148476]: 2025-12-13T09:02:07Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:34:14 10.100.0.5
Dec 13 09:02:07 compute-0 nova_compute[248510]: 2025-12-13 09:02:07.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:07 compute-0 podman[378044]: 2025-12-13 09:02:07.980637388 +0000 UTC m=+0.060140173 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:02:07 compute-0 podman[378043]: 2025-12-13 09:02:07.988645154 +0000 UTC m=+0.071485660 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 09:02:08 compute-0 podman[378042]: 2025-12-13 09:02:08.008463929 +0000 UTC m=+0.095115699 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 13 09:02:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3031: 321 pgs: 321 active+clean; 94 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 412 KiB/s wr, 85 op/s
Dec 13 09:02:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:02:09
Dec 13 09:02:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:02:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:02:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', 'images', 'default.rgw.control']
Dec 13 09:02:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:02:09 compute-0 ceph-mon[76537]: pgmap v3031: 321 pgs: 321 active+clean; 94 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 412 KiB/s wr, 85 op/s
Dec 13 09:02:09 compute-0 nova_compute[248510]: 2025-12-13 09:02:09.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3032: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Dec 13 09:02:10 compute-0 ceph-mon[76537]: pgmap v3032: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:02:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:02:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:02:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:02:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5401.8 total, 600.0 interval
                                           Cumulative writes: 41K writes, 159K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 41K writes, 15K syncs, 2.76 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5762 writes, 24K keys, 5762 commit groups, 1.0 writes per commit group, ingest: 26.51 MB, 0.04 MB/s
                                           Interval WAL: 5763 writes, 2205 syncs, 2.61 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:02:11 compute-0 nova_compute[248510]: 2025-12-13 09:02:11.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3033: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:02:12 compute-0 nova_compute[248510]: 2025-12-13 09:02:12.937 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:13 compute-0 nova_compute[248510]: 2025-12-13 09:02:13.292 248514 INFO nova.compute.manager [None req-5f074d7a-0962-4f5f-9962-26f1c8d53407 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Get console output
Dec 13 09:02:13 compute-0 nova_compute[248510]: 2025-12-13 09:02:13.299 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:02:13 compute-0 ceph-mon[76537]: pgmap v3033: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:02:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3034: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:02:14 compute-0 nova_compute[248510]: 2025-12-13 09:02:14.542 248514 INFO nova.compute.manager [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Rebuilding instance
Dec 13 09:02:14 compute-0 nova_compute[248510]: 2025-12-13 09:02:14.860 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:14 compute-0 nova_compute[248510]: 2025-12-13 09:02:14.882 248514 DEBUG nova.compute.manager [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:02:14 compute-0 nova_compute[248510]: 2025-12-13 09:02:14.953 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_requests' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:14 compute-0 nova_compute[248510]: 2025-12-13 09:02:14.970 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:14 compute-0 nova_compute[248510]: 2025-12-13 09:02:14.987 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:15 compute-0 nova_compute[248510]: 2025-12-13 09:02:15.002 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'migration_context' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:15 compute-0 nova_compute[248510]: 2025-12-13 09:02:15.018 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 09:02:15 compute-0 nova_compute[248510]: 2025-12-13 09:02:15.022 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 09:02:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:02:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945107926' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:02:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:02:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945107926' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:02:15 compute-0 ceph-mon[76537]: pgmap v3034: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:02:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1945107926' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:02:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1945107926' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:02:15 compute-0 nova_compute[248510]: 2025-12-13 09:02:15.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:02:15 compute-0 nova_compute[248510]: 2025-12-13 09:02:15.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:02:15 compute-0 nova_compute[248510]: 2025-12-13 09:02:15.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:02:15 compute-0 nova_compute[248510]: 2025-12-13 09:02:15.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:02:15 compute-0 nova_compute[248510]: 2025-12-13 09:02:15.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:02:15 compute-0 nova_compute[248510]: 2025-12-13 09:02:15.799 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 09:02:15 compute-0 nova_compute[248510]: 2025-12-13 09:02:15.799 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:02:16 compute-0 nova_compute[248510]: 2025-12-13 09:02:16.284 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3035: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:02:17 compute-0 sudo[378103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:02:17 compute-0 sudo[378103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:02:17 compute-0 sudo[378103]: pam_unix(sudo:session): session closed for user root
Dec 13 09:02:17 compute-0 sudo[378128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:02:17 compute-0 sudo[378128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:02:17 compute-0 kernel: tap698673ab-11 (unregistering): left promiscuous mode
Dec 13 09:02:17 compute-0 NetworkManager[50376]: <info>  [1765616537.2484] device (tap698673ab-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:02:17 compute-0 ovn_controller[148476]: 2025-12-13T09:02:17Z|01324|binding|INFO|Releasing lport 698673ab-115a-49aa-b3d5-68392d28aa81 from this chassis (sb_readonly=0)
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.258 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:17 compute-0 ovn_controller[148476]: 2025-12-13T09:02:17Z|01325|binding|INFO|Setting lport 698673ab-115a-49aa-b3d5-68392d28aa81 down in Southbound
Dec 13 09:02:17 compute-0 ovn_controller[148476]: 2025-12-13T09:02:17Z|01326|binding|INFO|Removing iface tap698673ab-11 ovn-installed in OVS
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.265 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.280 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:34:14 10.100.0.5'], port_security=['fa:16:3e:7b:34:14 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18506534-de17-4e42-87c7-b2546619f4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '250e0b26-9d3f-404b-8b3e-f79706be2a3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db45c681-c79f-432b-bd2c-bbf3bd1f2153, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=698673ab-115a-49aa-b3d5-68392d28aa81) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.282 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 698673ab-115a-49aa-b3d5-68392d28aa81 in datapath 44b4664e-9eef-4d04-bcdb-68869f16c46a unbound from our chassis
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.284 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44b4664e-9eef-4d04-bcdb-68869f16c46a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.285 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b81d144-6d3b-4019-8eb5-19287d187e09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.286 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a namespace which is not needed anymore
Dec 13 09:02:17 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000082.scope: Deactivated successfully.
Dec 13 09:02:17 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000082.scope: Consumed 13.720s CPU time.
Dec 13 09:02:17 compute-0 systemd-machined[210538]: Machine qemu-158-instance-00000082 terminated.
Dec 13 09:02:17 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [NOTICE]   (377988) : haproxy version is 2.8.14-c23fe91
Dec 13 09:02:17 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [NOTICE]   (377988) : path to executable is /usr/sbin/haproxy
Dec 13 09:02:17 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [WARNING]  (377988) : Exiting Master process...
Dec 13 09:02:17 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [ALERT]    (377988) : Current worker (377990) exited with code 143 (Terminated)
Dec 13 09:02:17 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [WARNING]  (377988) : All workers exited. Exiting... (0)
Dec 13 09:02:17 compute-0 systemd[1]: libpod-611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e.scope: Deactivated successfully.
Dec 13 09:02:17 compute-0 podman[378184]: 2025-12-13 09:02:17.451413899 +0000 UTC m=+0.052144647 container died 611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 09:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e-userdata-shm.mount: Deactivated successfully.
Dec 13 09:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-11cf72505ed6c58f094c4c16689a309a1fb7eaf088d765885e56e37adbda2ee2-merged.mount: Deactivated successfully.
Dec 13 09:02:17 compute-0 podman[378184]: 2025-12-13 09:02:17.507304437 +0000 UTC m=+0.108035195 container cleanup 611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 09:02:17 compute-0 systemd[1]: libpod-conmon-611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e.scope: Deactivated successfully.
Dec 13 09:02:17 compute-0 ceph-mon[76537]: pgmap v3035: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.566 248514 DEBUG nova.compute.manager [req-4bab1df3-a436-425a-8d13-7a51009d5d7b req-80318eda-4c44-44e5-a192-1242a3b0569c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-unplugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.568 248514 DEBUG oslo_concurrency.lockutils [req-4bab1df3-a436-425a-8d13-7a51009d5d7b req-80318eda-4c44-44e5-a192-1242a3b0569c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.568 248514 DEBUG oslo_concurrency.lockutils [req-4bab1df3-a436-425a-8d13-7a51009d5d7b req-80318eda-4c44-44e5-a192-1242a3b0569c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.569 248514 DEBUG oslo_concurrency.lockutils [req-4bab1df3-a436-425a-8d13-7a51009d5d7b req-80318eda-4c44-44e5-a192-1242a3b0569c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.569 248514 DEBUG nova.compute.manager [req-4bab1df3-a436-425a-8d13-7a51009d5d7b req-80318eda-4c44-44e5-a192-1242a3b0569c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-unplugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.569 248514 WARNING nova.compute.manager [req-4bab1df3-a436-425a-8d13-7a51009d5d7b req-80318eda-4c44-44e5-a192-1242a3b0569c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received unexpected event network-vif-unplugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with vm_state active and task_state rebuilding.
Dec 13 09:02:17 compute-0 podman[378229]: 2025-12-13 09:02:17.582289113 +0000 UTC m=+0.047380011 container remove 611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.589 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1221d02-ad2a-4a5b-92de-5d9cf5e46db3]: (4, ('Sat Dec 13 09:02:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a (611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e)\n611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e\nSat Dec 13 09:02:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a (611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e)\n611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.591 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55cc62d5-37eb-4097-9775-8b2f212fe373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.594 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44b4664e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:17 compute-0 kernel: tap44b4664e-90: left promiscuous mode
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.618 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c30e471-95fa-46f9-9228-98e5f3c88167]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.636 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46999000-f175-4ce6-be6c-27d73e4c14a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.637 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd5bbe1-df7b-4b1e-af8b-872f10ba9766]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.653 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1bc40f-e29b-4eb8-bf8c-3a29079aa546]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 909045, 'reachable_time': 19969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378251, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d44b4664e\x2d9eef\x2d4d04\x2dbcdb\x2d68869f16c46a.mount: Deactivated successfully.
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.656 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:02:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.656 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b89f0be9-88d8-4d4c-bbd9-067829b0bd6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:17 compute-0 sudo[378128]: pam_unix(sudo:session): session closed for user root
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.780 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.781 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:02:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:02:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:02:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:02:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.807 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:02:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:02:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:02:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:02:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:02:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:02:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:02:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:02:17 compute-0 sudo[378264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:02:17 compute-0 sudo[378264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:02:17 compute-0 sudo[378264]: pam_unix(sudo:session): session closed for user root
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.919 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.920 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.930 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.930 248514 INFO nova.compute.claims [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:02:17 compute-0 nova_compute[248510]: 2025-12-13 09:02:17.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:17 compute-0 sudo[378289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:02:17 compute-0 sudo[378289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.029 248514 DEBUG nova.scheduler.client.report [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.042 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance shutdown successfully after 3 seconds.
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.048 248514 INFO nova.virt.libvirt.driver [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance destroyed successfully.
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.053 248514 DEBUG nova.scheduler.client.report [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.054 248514 DEBUG nova.compute.provider_tree [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.062 248514 INFO nova.virt.libvirt.driver [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance destroyed successfully.
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.064 248514 DEBUG nova.virt.libvirt.vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1657427533',display_name='tempest-TestNetworkAdvancedServerOps-server-1657427533',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1657427533',id=130,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIu2Mio4iiKCPHCFWqnjQXxmeR9VuZ/sJMKRi7ThHNLpfALJX8ZDxWjNQlqcAjJCFa+dRlCd52p375xJuviq4QzkIt2R0aIT9v0cJCRdagDbQWGjbmiGhE29U66OFfUYVw==',key_name='tempest-TestNetworkAdvancedServerOps-2136416273',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:01:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a2gq0cdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:02:13Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=18506534-de17-4e42-87c7-b2546619f4d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.064 248514 DEBUG nova.network.os_vif_util [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.066 248514 DEBUG nova.network.os_vif_util [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.066 248514 DEBUG os_vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.069 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.070 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap698673ab-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.075 248514 DEBUG nova.scheduler.client.report [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.079 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.084 248514 INFO os_vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11')
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.111 248514 DEBUG nova.scheduler.client.report [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.185 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:18 compute-0 podman[378346]: 2025-12-13 09:02:18.293919461 +0000 UTC m=+0.057577890 container create 586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 09:02:18 compute-0 systemd[1]: Started libpod-conmon-586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5.scope.
Dec 13 09:02:18 compute-0 podman[378346]: 2025-12-13 09:02:18.269995976 +0000 UTC m=+0.033654535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:02:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:02:18 compute-0 podman[378346]: 2025-12-13 09:02:18.391349396 +0000 UTC m=+0.155007835 container init 586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 09:02:18 compute-0 podman[378346]: 2025-12-13 09:02:18.401423263 +0000 UTC m=+0.165081702 container start 586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:02:18 compute-0 podman[378346]: 2025-12-13 09:02:18.407994134 +0000 UTC m=+0.171652643 container attach 586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:02:18 compute-0 amazing_heisenberg[378364]: 167 167
Dec 13 09:02:18 compute-0 systemd[1]: libpod-586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5.scope: Deactivated successfully.
Dec 13 09:02:18 compute-0 conmon[378364]: conmon 586e9d0d7dba7a9e08eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5.scope/container/memory.events
Dec 13 09:02:18 compute-0 podman[378346]: 2025-12-13 09:02:18.41109998 +0000 UTC m=+0.174758399 container died 586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_heisenberg, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:02:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-012065122cb87c7c2551928616b1d933b0741faf2be759b62fa583448a7384ba-merged.mount: Deactivated successfully.
Dec 13 09:02:18 compute-0 podman[378346]: 2025-12-13 09:02:18.464061676 +0000 UTC m=+0.227720095 container remove 586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:02:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3036: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.465 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deleting instance files /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4_del
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.468 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deletion of /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4_del complete
Dec 13 09:02:18 compute-0 systemd[1]: libpod-conmon-586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5.scope: Deactivated successfully.
Dec 13 09:02:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:02:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:02:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:02:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:02:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:02:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:02:18 compute-0 podman[378404]: 2025-12-13 09:02:18.679684394 +0000 UTC m=+0.047492714 container create e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jackson, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:02:18 compute-0 systemd[1]: Started libpod-conmon-e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb.scope.
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.727 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.728 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Creating image(s)
Dec 13 09:02:18 compute-0 podman[378404]: 2025-12-13 09:02:18.660584206 +0000 UTC m=+0.028392516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:02:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a586242ce0c4995ef5c110155c88cde6ff40350a580c02f9da9de9b9b017067c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a586242ce0c4995ef5c110155c88cde6ff40350a580c02f9da9de9b9b017067c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a586242ce0c4995ef5c110155c88cde6ff40350a580c02f9da9de9b9b017067c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a586242ce0c4995ef5c110155c88cde6ff40350a580c02f9da9de9b9b017067c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a586242ce0c4995ef5c110155c88cde6ff40350a580c02f9da9de9b9b017067c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.765 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:18 compute-0 podman[378404]: 2025-12-13 09:02:18.780312247 +0000 UTC m=+0.148120587 container init e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jackson, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 09:02:18 compute-0 podman[378404]: 2025-12-13 09:02:18.789473771 +0000 UTC m=+0.157282091 container start e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jackson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:02:18 compute-0 podman[378404]: 2025-12-13 09:02:18.793847418 +0000 UTC m=+0.161655758 container attach e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jackson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:02:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:02:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729367253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.818 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.851 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.855 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.909 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.920 248514 DEBUG nova.compute.provider_tree [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.940 248514 DEBUG nova.scheduler.client.report [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.951 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.952 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.953 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.953 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.982 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:18 compute-0 nova_compute[248510]: 2025-12-13 09:02:18.989 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 18506534-de17-4e42-87c7-b2546619f4d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.049 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.050 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.164 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.165 248514 DEBUG nova.network.neutron [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.190 248514 INFO nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.212 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:02:19 compute-0 nifty_jackson[378422]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:02:19 compute-0 nifty_jackson[378422]: --> All data devices are unavailable
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.300 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 18506534-de17-4e42-87c7-b2546619f4d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:19 compute-0 systemd[1]: libpod-e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb.scope: Deactivated successfully.
Dec 13 09:02:19 compute-0 podman[378404]: 2025-12-13 09:02:19.32135327 +0000 UTC m=+0.689161610 container died e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a586242ce0c4995ef5c110155c88cde6ff40350a580c02f9da9de9b9b017067c-merged.mount: Deactivated successfully.
Dec 13 09:02:19 compute-0 podman[378404]: 2025-12-13 09:02:19.368747951 +0000 UTC m=+0.736556271 container remove e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jackson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 09:02:19 compute-0 systemd[1]: libpod-conmon-e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb.scope: Deactivated successfully.
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.411 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.413 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.414 248514 INFO nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Creating image(s)
Dec 13 09:02:19 compute-0 sudo[378289]: pam_unix(sudo:session): session closed for user root
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.443 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.471 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.492 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:19 compute-0 sudo[378599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.499 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:19 compute-0 sudo[378599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:02:19 compute-0 sudo[378599]: pam_unix(sudo:session): session closed for user root
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.554 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] resizing rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:02:19 compute-0 ceph-mon[76537]: pgmap v3036: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:02:19 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1729367253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:02:19 compute-0 sudo[378666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:02:19 compute-0 sudo[378666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.592 248514 DEBUG nova.policy [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.599 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.600 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.601 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.601 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.631 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.635 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:02:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.2 total, 600.0 interval
                                           Cumulative writes: 32K writes, 132K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s
                                           Cumulative WAL: 32K writes, 11K syncs, 2.85 writes per sync, written: 0.13 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4125 writes, 18K keys, 4125 commit groups, 1.0 writes per commit group, ingest: 20.64 MB, 0.03 MB/s
                                           Interval WAL: 4125 writes, 1571 syncs, 2.63 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.689 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updating instance_info_cache with network_info: [{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.696 248514 DEBUG nova.compute.manager [req-4b5c457e-7dbd-4cb8-9615-9a27db949cc6 req-293a71bc-c385-4d07-9828-56698e49a9e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.696 248514 DEBUG oslo_concurrency.lockutils [req-4b5c457e-7dbd-4cb8-9615-9a27db949cc6 req-293a71bc-c385-4d07-9828-56698e49a9e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.697 248514 DEBUG oslo_concurrency.lockutils [req-4b5c457e-7dbd-4cb8-9615-9a27db949cc6 req-293a71bc-c385-4d07-9828-56698e49a9e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.697 248514 DEBUG oslo_concurrency.lockutils [req-4b5c457e-7dbd-4cb8-9615-9a27db949cc6 req-293a71bc-c385-4d07-9828-56698e49a9e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.697 248514 DEBUG nova.compute.manager [req-4b5c457e-7dbd-4cb8-9615-9a27db949cc6 req-293a71bc-c385-4d07-9828-56698e49a9e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.697 248514 WARNING nova.compute.manager [req-4b5c457e-7dbd-4cb8-9615-9a27db949cc6 req-293a71bc-c385-4d07-9828-56698e49a9e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received unexpected event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with vm_state active and task_state rebuild_spawning.
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.740 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.741 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.741 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.751 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.752 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Ensure instance console log exists: /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.752 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.752 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.753 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.755 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Start _get_guest_xml network_info=[{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.762 248514 WARNING nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.772 248514 DEBUG nova.virt.libvirt.host [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.773 248514 DEBUG nova.virt.libvirt.host [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.776 248514 DEBUG nova.virt.libvirt.host [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.777 248514 DEBUG nova.virt.libvirt.host [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.777 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.778 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.778 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.778 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.778 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.779 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.779 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.779 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.779 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.779 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.779 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.780 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.780 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.813 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:19 compute-0 podman[378777]: 2025-12-13 09:02:19.904683069 +0000 UTC m=+0.071337157 container create e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 09:02:19 compute-0 nova_compute[248510]: 2025-12-13 09:02:19.934 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:19 compute-0 systemd[1]: Started libpod-conmon-e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c.scope.
Dec 13 09:02:19 compute-0 podman[378777]: 2025-12-13 09:02:19.863242535 +0000 UTC m=+0.029896653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:02:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:02:19 compute-0 podman[378777]: 2025-12-13 09:02:19.984334899 +0000 UTC m=+0.150989007 container init e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 09:02:19 compute-0 podman[378777]: 2025-12-13 09:02:19.990469569 +0000 UTC m=+0.157123647 container start e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 09:02:19 compute-0 podman[378777]: 2025-12-13 09:02:19.9941889 +0000 UTC m=+0.160843018 container attach e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 09:02:19 compute-0 charming_noether[378811]: 167 167
Dec 13 09:02:19 compute-0 systemd[1]: libpod-e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c.scope: Deactivated successfully.
Dec 13 09:02:19 compute-0 podman[378777]: 2025-12-13 09:02:19.995789259 +0000 UTC m=+0.162443347 container died e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.001 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:02:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc5cc6e2fe46080668caa39c8995b8f0b177f117aa86276df49da973a1a87f72-merged.mount: Deactivated successfully.
Dec 13 09:02:20 compute-0 podman[378777]: 2025-12-13 09:02:20.039387296 +0000 UTC m=+0.206041384 container remove e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:02:20 compute-0 systemd[1]: libpod-conmon-e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c.scope: Deactivated successfully.
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.098 248514 DEBUG nova.objects.instance [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid d1949b92-7b16-4a9d-b033-d0a6df7a9f2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.117 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.117 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Ensure instance console log exists: /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.118 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.118 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.118 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:20 compute-0 podman[378908]: 2025-12-13 09:02:20.204626411 +0000 UTC m=+0.044016548 container create e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:02:20 compute-0 systemd[1]: Started libpod-conmon-e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969.scope.
Dec 13 09:02:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2b211186ee4f79a5b89c14c3937a4657910f7e09b617fce813b3e22c952b2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2b211186ee4f79a5b89c14c3937a4657910f7e09b617fce813b3e22c952b2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2b211186ee4f79a5b89c14c3937a4657910f7e09b617fce813b3e22c952b2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2b211186ee4f79a5b89c14c3937a4657910f7e09b617fce813b3e22c952b2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:20 compute-0 podman[378908]: 2025-12-13 09:02:20.184031277 +0000 UTC m=+0.023421404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:02:20 compute-0 podman[378908]: 2025-12-13 09:02:20.298427017 +0000 UTC m=+0.137817154 container init e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:02:20 compute-0 podman[378908]: 2025-12-13 09:02:20.315238179 +0000 UTC m=+0.154628296 container start e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:02:20 compute-0 podman[378908]: 2025-12-13 09:02:20.318669542 +0000 UTC m=+0.158059669 container attach e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:02:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:02:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3118914386' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.377 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.409 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.415 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3037: 321 pgs: 321 active+clean; 60 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 13 09:02:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3118914386' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:02:20 compute-0 focused_khorana[378924]: {
Dec 13 09:02:20 compute-0 focused_khorana[378924]:     "0": [
Dec 13 09:02:20 compute-0 focused_khorana[378924]:         {
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "devices": [
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "/dev/loop3"
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             ],
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_name": "ceph_lv0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_size": "21470642176",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "name": "ceph_lv0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "tags": {
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.cluster_name": "ceph",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.crush_device_class": "",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.encrypted": "0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.objectstore": "bluestore",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.osd_id": "0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.type": "block",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.vdo": "0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.with_tpm": "0"
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             },
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "type": "block",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "vg_name": "ceph_vg0"
Dec 13 09:02:20 compute-0 focused_khorana[378924]:         }
Dec 13 09:02:20 compute-0 focused_khorana[378924]:     ],
Dec 13 09:02:20 compute-0 focused_khorana[378924]:     "1": [
Dec 13 09:02:20 compute-0 focused_khorana[378924]:         {
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "devices": [
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "/dev/loop4"
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             ],
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_name": "ceph_lv1",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_size": "21470642176",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "name": "ceph_lv1",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "tags": {
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.cluster_name": "ceph",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.crush_device_class": "",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.encrypted": "0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.objectstore": "bluestore",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.osd_id": "1",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.type": "block",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.vdo": "0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.with_tpm": "0"
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             },
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "type": "block",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "vg_name": "ceph_vg1"
Dec 13 09:02:20 compute-0 focused_khorana[378924]:         }
Dec 13 09:02:20 compute-0 focused_khorana[378924]:     ],
Dec 13 09:02:20 compute-0 focused_khorana[378924]:     "2": [
Dec 13 09:02:20 compute-0 focused_khorana[378924]:         {
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "devices": [
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "/dev/loop5"
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             ],
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_name": "ceph_lv2",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_size": "21470642176",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "name": "ceph_lv2",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "tags": {
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.cluster_name": "ceph",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.crush_device_class": "",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.encrypted": "0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.objectstore": "bluestore",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.osd_id": "2",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.type": "block",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.vdo": "0",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:                 "ceph.with_tpm": "0"
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             },
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "type": "block",
Dec 13 09:02:20 compute-0 focused_khorana[378924]:             "vg_name": "ceph_vg2"
Dec 13 09:02:20 compute-0 focused_khorana[378924]:         }
Dec 13 09:02:20 compute-0 focused_khorana[378924]:     ]
Dec 13 09:02:20 compute-0 focused_khorana[378924]: }
Dec 13 09:02:20 compute-0 systemd[1]: libpod-e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969.scope: Deactivated successfully.
Dec 13 09:02:20 compute-0 podman[378908]: 2025-12-13 09:02:20.609571113 +0000 UTC m=+0.448961220 container died e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 09:02:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c2b211186ee4f79a5b89c14c3937a4657910f7e09b617fce813b3e22c952b2d-merged.mount: Deactivated successfully.
Dec 13 09:02:20 compute-0 podman[378908]: 2025-12-13 09:02:20.665441701 +0000 UTC m=+0.504831818 container remove e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:02:20 compute-0 systemd[1]: libpod-conmon-e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969.scope: Deactivated successfully.
Dec 13 09:02:20 compute-0 sudo[378666]: pam_unix(sudo:session): session closed for user root
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.800 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.802 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:02:20 compute-0 nova_compute[248510]: 2025-12-13 09:02:20.802 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:20 compute-0 sudo[378985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:02:20 compute-0 sudo[378985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:02:20 compute-0 sudo[378985]: pam_unix(sudo:session): session closed for user root
Dec 13 09:02:20 compute-0 sudo[379010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:02:20 compute-0 sudo[379010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:02:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:02:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:02:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76311328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.030 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.033 248514 DEBUG nova.virt.libvirt.vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1657427533',display_name='tempest-TestNetworkAdvancedServerOps-server-1657427533',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1657427533',id=130,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIu2Mio4iiKCPHCFWqnjQXxmeR9VuZ/sJMKRi7ThHNLpfALJX8ZDxWjNQlqcAjJCFa+dRlCd52p375xJuviq4QzkIt2R0aIT9v0cJCRdagDbQWGjbmiGhE29U66OFfUYVw==',key_name='tempest-TestNetworkAdvancedServerOps-2136416273',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:01:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a2gq0cdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:02:18Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=18506534-de17-4e42-87c7-b2546619f4d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.034 248514 DEBUG nova.network.os_vif_util [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.035 248514 DEBUG nova.network.os_vif_util [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.041 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:02:21 compute-0 nova_compute[248510]:   <uuid>18506534-de17-4e42-87c7-b2546619f4d4</uuid>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   <name>instance-00000082</name>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1657427533</nova:name>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:02:19</nova:creationTime>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:02:21 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:02:21 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:02:21 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:02:21 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:02:21 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:02:21 compute-0 nova_compute[248510]:         <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 09:02:21 compute-0 nova_compute[248510]:         <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:02:21 compute-0 nova_compute[248510]:         <nova:port uuid="698673ab-115a-49aa-b3d5-68392d28aa81">
Dec 13 09:02:21 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <system>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <entry name="serial">18506534-de17-4e42-87c7-b2546619f4d4</entry>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <entry name="uuid">18506534-de17-4e42-87c7-b2546619f4d4</entry>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     </system>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   <os>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   </os>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   <features>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   </features>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/18506534-de17-4e42-87c7-b2546619f4d4_disk">
Dec 13 09:02:21 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       </source>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:02:21 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/18506534-de17-4e42-87c7-b2546619f4d4_disk.config">
Dec 13 09:02:21 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       </source>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:02:21 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:7b:34:14"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <target dev="tap698673ab-11"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/console.log" append="off"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <video>
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     </video>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:02:21 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:02:21 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:02:21 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:02:21 compute-0 nova_compute[248510]: </domain>
Dec 13 09:02:21 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.050 248514 DEBUG nova.virt.libvirt.vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1657427533',display_name='tempest-TestNetworkAdvancedServerOps-server-1657427533',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1657427533',id=130,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIu2Mio4iiKCPHCFWqnjQXxmeR9VuZ/sJMKRi7ThHNLpfALJX8ZDxWjNQlqcAjJCFa+dRlCd52p375xJuviq4QzkIt2R0aIT9v0cJCRdagDbQWGjbmiGhE29U66OFfUYVw==',key_name='tempest-TestNetworkAdvancedServerOps-2136416273',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:01:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a2gq0cdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:02:18Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=18506534-de17-4e42-87c7-b2546619f4d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.051 248514 DEBUG nova.network.os_vif_util [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.052 248514 DEBUG nova.network.os_vif_util [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.053 248514 DEBUG os_vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.054 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.055 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.056 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.060 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.060 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap698673ab-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.061 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap698673ab-11, col_values=(('external_ids', {'iface-id': '698673ab-115a-49aa-b3d5-68392d28aa81', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:34:14', 'vm-uuid': '18506534-de17-4e42-87c7-b2546619f4d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:21 compute-0 NetworkManager[50376]: <info>  [1765616541.0652] manager: (tap698673ab-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/543)
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.067 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.074 248514 INFO os_vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11')
Dec 13 09:02:21 compute-0 podman[379071]: 2025-12-13 09:02:21.167242814 +0000 UTC m=+0.050701572 container create 9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 09:02:21 compute-0 systemd[1]: Started libpod-conmon-9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f.scope.
Dec 13 09:02:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:02:21 compute-0 podman[379071]: 2025-12-13 09:02:21.243275155 +0000 UTC m=+0.126733923 container init 9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khayyam, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:02:21 compute-0 podman[379071]: 2025-12-13 09:02:21.150471033 +0000 UTC m=+0.033929811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:02:21 compute-0 podman[379071]: 2025-12-13 09:02:21.253032724 +0000 UTC m=+0.136491482 container start 9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khayyam, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:02:21 compute-0 podman[379071]: 2025-12-13 09:02:21.257444522 +0000 UTC m=+0.140903300 container attach 9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khayyam, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:02:21 compute-0 beautiful_khayyam[379088]: 167 167
Dec 13 09:02:21 compute-0 systemd[1]: libpod-9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f.scope: Deactivated successfully.
Dec 13 09:02:21 compute-0 podman[379071]: 2025-12-13 09:02:21.260186159 +0000 UTC m=+0.143644927 container died 9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khayyam, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0001823864093692832 of space, bias 1.0, pg target 0.05471592281078496 quantized to 32 (current 32)
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006698314011962477 of space, bias 1.0, pg target 0.2009494203588743 quantized to 32 (current 32)
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.725026712866588e-07 of space, bias 4.0, pg target 0.0006870032055439905 quantized to 16 (current 32)
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:02:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:02:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3c8777fdae26bddf4684151dbd60a9b58d05630434eed191be2ddb1437c4c10-merged.mount: Deactivated successfully.
Dec 13 09:02:21 compute-0 podman[379071]: 2025-12-13 09:02:21.303704604 +0000 UTC m=+0.187163362 container remove 9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khayyam, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:02:21 compute-0 systemd[1]: libpod-conmon-9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f.scope: Deactivated successfully.
Dec 13 09:02:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:02:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/479693080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.433 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:21 compute-0 podman[379113]: 2025-12-13 09:02:21.482557762 +0000 UTC m=+0.044838339 container create adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:02:21 compute-0 systemd[1]: Started libpod-conmon-adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527.scope.
Dec 13 09:02:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687ffa81366ff0ff052c299544c551f7aab40a8b8ad468601251c45494d6506/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687ffa81366ff0ff052c299544c551f7aab40a8b8ad468601251c45494d6506/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687ffa81366ff0ff052c299544c551f7aab40a8b8ad468601251c45494d6506/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687ffa81366ff0ff052c299544c551f7aab40a8b8ad468601251c45494d6506/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:21 compute-0 podman[379113]: 2025-12-13 09:02:21.464655634 +0000 UTC m=+0.026936231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:02:21 compute-0 podman[379113]: 2025-12-13 09:02:21.560970801 +0000 UTC m=+0.123251408 container init adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_blackwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:02:21 compute-0 ceph-mon[76537]: pgmap v3037: 321 pgs: 321 active+clean; 60 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 13 09:02:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/76311328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:02:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/479693080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:02:21 compute-0 podman[379113]: 2025-12-13 09:02:21.569215823 +0000 UTC m=+0.131496400 container start adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_blackwell, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:02:21 compute-0 podman[379113]: 2025-12-13 09:02:21.572916494 +0000 UTC m=+0.135197071 container attach adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.598 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.599 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.600 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No VIF found with MAC fa:16:3e:7b:34:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.600 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Using config drive
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.625 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.654 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.661 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.662 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.707 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'keypairs' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.837 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.839 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3512MB free_disk=59.97734020277858GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.839 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.840 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.912 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 18506534-de17-4e42-87c7-b2546619f4d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.912 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance d1949b92-7b16-4a9d-b033-d0a6df7a9f2f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.913 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.913 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:02:21 compute-0 nova_compute[248510]: 2025-12-13 09:02:21.971 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:22 compute-0 lvm[379246]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:02:22 compute-0 lvm[379247]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:02:22 compute-0 lvm[379246]: VG ceph_vg0 finished
Dec 13 09:02:22 compute-0 lvm[379247]: VG ceph_vg1 finished
Dec 13 09:02:22 compute-0 lvm[379249]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:02:22 compute-0 lvm[379249]: VG ceph_vg2 finished
Dec 13 09:02:22 compute-0 trusting_blackwell[379130]: {}
Dec 13 09:02:22 compute-0 systemd[1]: libpod-adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527.scope: Deactivated successfully.
Dec 13 09:02:22 compute-0 systemd[1]: libpod-adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527.scope: Consumed 1.303s CPU time.
Dec 13 09:02:22 compute-0 podman[379113]: 2025-12-13 09:02:22.350211759 +0000 UTC m=+0.912492346 container died adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_blackwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f687ffa81366ff0ff052c299544c551f7aab40a8b8ad468601251c45494d6506-merged.mount: Deactivated successfully.
Dec 13 09:02:22 compute-0 podman[379113]: 2025-12-13 09:02:22.404485958 +0000 UTC m=+0.966766525 container remove adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 09:02:22 compute-0 systemd[1]: libpod-conmon-adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527.scope: Deactivated successfully.
Dec 13 09:02:22 compute-0 sudo[379010]: pam_unix(sudo:session): session closed for user root
Dec 13 09:02:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:02:22 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:02:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:02:22 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:02:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3038: 321 pgs: 321 active+clean; 60 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 33 KiB/s wr, 21 op/s
Dec 13 09:02:22 compute-0 sudo[379264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:02:22 compute-0 sudo[379264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:02:22 compute-0 sudo[379264]: pam_unix(sudo:session): session closed for user root
Dec 13 09:02:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:02:22 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3919006410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:02:22 compute-0 nova_compute[248510]: 2025-12-13 09:02:22.604 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:22 compute-0 nova_compute[248510]: 2025-12-13 09:02:22.617 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:02:22 compute-0 nova_compute[248510]: 2025-12-13 09:02:22.652 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:02:22 compute-0 nova_compute[248510]: 2025-12-13 09:02:22.676 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:02:22 compute-0 nova_compute[248510]: 2025-12-13 09:02:22.676 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:22 compute-0 nova_compute[248510]: 2025-12-13 09:02:22.788 248514 DEBUG nova.network.neutron [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Successfully created port: bf1a0f02-8913-41ae-aa00-ab927d45e18a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:02:22 compute-0 nova_compute[248510]: 2025-12-13 09:02:22.847 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Creating config drive at /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config
Dec 13 09:02:22 compute-0 nova_compute[248510]: 2025-12-13 09:02:22.854 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwa2ds191 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:22 compute-0 nova_compute[248510]: 2025-12-13 09:02:22.994 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.018 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwa2ds191" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.051 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.055 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config 18506534-de17-4e42-87c7-b2546619f4d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.232 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config 18506534-de17-4e42-87c7-b2546619f4d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.234 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deleting local config drive /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config because it was imported into RBD.
Dec 13 09:02:23 compute-0 kernel: tap698673ab-11: entered promiscuous mode
Dec 13 09:02:23 compute-0 NetworkManager[50376]: <info>  [1765616543.3095] manager: (tap698673ab-11): new Tun device (/org/freedesktop/NetworkManager/Devices/544)
Dec 13 09:02:23 compute-0 systemd-udevd[379245]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:23 compute-0 ovn_controller[148476]: 2025-12-13T09:02:23Z|01327|binding|INFO|Claiming lport 698673ab-115a-49aa-b3d5-68392d28aa81 for this chassis.
Dec 13 09:02:23 compute-0 ovn_controller[148476]: 2025-12-13T09:02:23Z|01328|binding|INFO|698673ab-115a-49aa-b3d5-68392d28aa81: Claiming fa:16:3e:7b:34:14 10.100.0.5
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.320 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:34:14 10.100.0.5'], port_security=['fa:16:3e:7b:34:14 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18506534-de17-4e42-87c7-b2546619f4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '5', 'neutron:security_group_ids': '250e0b26-9d3f-404b-8b3e-f79706be2a3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db45c681-c79f-432b-bd2c-bbf3bd1f2153, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=698673ab-115a-49aa-b3d5-68392d28aa81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.322 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 698673ab-115a-49aa-b3d5-68392d28aa81 in datapath 44b4664e-9eef-4d04-bcdb-68869f16c46a bound to our chassis
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.323 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44b4664e-9eef-4d04-bcdb-68869f16c46a
Dec 13 09:02:23 compute-0 NetworkManager[50376]: <info>  [1765616543.3274] device (tap698673ab-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:02:23 compute-0 NetworkManager[50376]: <info>  [1765616543.3283] device (tap698673ab-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.340 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3c1dfd-22a5-4617-b700-9340895ece2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.341 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap44b4664e-91 in ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:02:23 compute-0 ovn_controller[148476]: 2025-12-13T09:02:23Z|01329|binding|INFO|Setting lport 698673ab-115a-49aa-b3d5-68392d28aa81 up in Southbound
Dec 13 09:02:23 compute-0 ovn_controller[148476]: 2025-12-13T09:02:23Z|01330|binding|INFO|Setting lport 698673ab-115a-49aa-b3d5-68392d28aa81 ovn-installed in OVS
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.344 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap44b4664e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.345 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[182b9115-571c-42e5-8602-1e8f4357e442]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.346 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62f85eeb-2b28-4ce9-825e-09e22ea43159]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.349 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.367 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[a89701b1-39db-40ba-8f5a-8689aabd30d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 systemd-machined[210538]: New machine qemu-159-instance-00000082.
Dec 13 09:02:23 compute-0 systemd[1]: Started Virtual Machine qemu-159-instance-00000082.
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.383 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d25093eb-50a4-4c58-b126-6a04f561845c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.426 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b8fb64c5-650a-4204-89d7-cc6cfd76bad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.434 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7da829ac-c552-47c3-929f-8d5551dbb884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 NetworkManager[50376]: <info>  [1765616543.4384] manager: (tap44b4664e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/545)
Dec 13 09:02:23 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:02:23 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:02:23 compute-0 ceph-mon[76537]: pgmap v3038: 321 pgs: 321 active+clean; 60 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 33 KiB/s wr, 21 op/s
Dec 13 09:02:23 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3919006410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:02:23 compute-0 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.477 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f1cec57c-bba6-4880-9fcb-74accca554be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.480 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[43c03123-d1d4-4391-b5ed-26151f4fedc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 NetworkManager[50376]: <info>  [1765616543.5156] device (tap44b4664e-90): carrier: link connected
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.526 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[833e3c82-628d-463c-b75d-5955a8f8cc90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.555 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c175b0-0223-40c2-921c-53e99ec6a5f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44b4664e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:4b:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 384], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912073, 'reachable_time': 38150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379374, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.576 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd9efa0-950f-4201-92b3-3d13a35d79fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:4b6f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 912073, 'tstamp': 912073}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379375, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.602 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[823f69d1-b017-46bc-89b7-e57d75ac9866]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44b4664e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:4b:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 384], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912073, 'reachable_time': 38150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379376, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.650 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f050553-fa02-4bfe-b248-b9bdada9aa02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.678 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.736 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6dc273-43c5-425f-9b6a-e6e66ee7d0e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.738 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44b4664e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.739 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.740 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44b4664e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:23 compute-0 NetworkManager[50376]: <info>  [1765616543.7439] manager: (tap44b4664e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Dec 13 09:02:23 compute-0 kernel: tap44b4664e-90: entered promiscuous mode
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.747 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.750 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44b4664e-90, col_values=(('external_ids', {'iface-id': '88fafa32-b4a3-42fd-afa1-df6ba2200ad6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:23 compute-0 ovn_controller[148476]: 2025-12-13T09:02:23Z|01331|binding|INFO|Releasing lport 88fafa32-b4a3-42fd-afa1-df6ba2200ad6 from this chassis (sb_readonly=0)
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.752 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.782 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.784 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/44b4664e-9eef-4d04-bcdb-68869f16c46a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/44b4664e-9eef-4d04-bcdb-68869f16c46a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.785 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[973cb458-f744-4d73-b9d8-34cd01b3d6f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.786 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-44b4664e-9eef-4d04-bcdb-68869f16c46a
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/44b4664e-9eef-4d04-bcdb-68869f16c46a.pid.haproxy
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 44b4664e-9eef-4d04-bcdb-68869f16c46a
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:02:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.787 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'env', 'PROCESS_TAG=haproxy-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/44b4664e-9eef-4d04-bcdb-68869f16c46a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.876 248514 DEBUG nova.compute.manager [req-d7adce1e-0f0a-45ec-a0e8-c4485dc84daa req-a02eab75-f4c2-449d-b392-a4e8ff5651d1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.885 248514 DEBUG oslo_concurrency.lockutils [req-d7adce1e-0f0a-45ec-a0e8-c4485dc84daa req-a02eab75-f4c2-449d-b392-a4e8ff5651d1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.890 248514 DEBUG oslo_concurrency.lockutils [req-d7adce1e-0f0a-45ec-a0e8-c4485dc84daa req-a02eab75-f4c2-449d-b392-a4e8ff5651d1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.891 248514 DEBUG oslo_concurrency.lockutils [req-d7adce1e-0f0a-45ec-a0e8-c4485dc84daa req-a02eab75-f4c2-449d-b392-a4e8ff5651d1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.891 248514 DEBUG nova.compute.manager [req-d7adce1e-0f0a-45ec-a0e8-c4485dc84daa req-a02eab75-f4c2-449d-b392-a4e8ff5651d1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:02:23 compute-0 nova_compute[248510]: 2025-12-13 09:02:23.891 248514 WARNING nova.compute.manager [req-d7adce1e-0f0a-45ec-a0e8-c4485dc84daa req-a02eab75-f4c2-449d-b392-a4e8ff5651d1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received unexpected event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with vm_state active and task_state rebuild_spawning.
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.133 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 18506534-de17-4e42-87c7-b2546619f4d4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.135 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616544.1326709, 18506534-de17-4e42-87c7-b2546619f4d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.136 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] VM Resumed (Lifecycle Event)
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.138 248514 DEBUG nova.compute.manager [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.139 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.143 248514 INFO nova.virt.libvirt.driver [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance spawned successfully.
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.144 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:02:24 compute-0 podman[379449]: 2025-12-13 09:02:24.212610577 +0000 UTC m=+0.052308412 container create 77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.220 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.229 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.230 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.230 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.231 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.231 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.232 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.237 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:02:24 compute-0 systemd[1]: Started libpod-conmon-77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066.scope.
Dec 13 09:02:24 compute-0 podman[379449]: 2025-12-13 09:02:24.182464459 +0000 UTC m=+0.022162314 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.282 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.283 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616544.1345015, 18506534-de17-4e42-87c7-b2546619f4d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.283 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] VM Started (Lifecycle Event)
Dec 13 09:02:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.307 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a218a85abcbb7ba749a23f029096ade87b462fcb9dae89324b66eda6c3873b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.314 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.320 248514 DEBUG nova.compute.manager [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:02:24 compute-0 podman[379449]: 2025-12-13 09:02:24.325044469 +0000 UTC m=+0.164742324 container init 77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 13 09:02:24 compute-0 podman[379449]: 2025-12-13 09:02:24.331094487 +0000 UTC m=+0.170792322 container start 77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:02:24 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [NOTICE]   (379468) : New worker (379470) forked
Dec 13 09:02:24 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [NOTICE]   (379468) : Loading success.
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.358 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.401 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.402 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.403 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.444 248514 DEBUG nova.network.neutron [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Successfully updated port: bf1a0f02-8913-41ae-aa00-ab927d45e18a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.462 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.463 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.463 248514 DEBUG nova.network.neutron [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:02:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3039: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.475 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.598 248514 DEBUG nova.network.neutron [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:02:24 compute-0 nova_compute[248510]: 2025-12-13 09:02:24.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:02:25 compute-0 ceph-mon[76537]: pgmap v3039: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Dec 13 09:02:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.006 248514 DEBUG nova.compute.manager [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.006 248514 DEBUG oslo_concurrency.lockutils [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.006 248514 DEBUG oslo_concurrency.lockutils [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.006 248514 DEBUG oslo_concurrency.lockutils [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.007 248514 DEBUG nova.compute.manager [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.007 248514 WARNING nova.compute.manager [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received unexpected event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with vm_state active and task_state None.
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.007 248514 DEBUG nova.compute.manager [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-changed-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.007 248514 DEBUG nova.compute.manager [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Refreshing instance network info cache due to event network-changed-bf1a0f02-8913-41ae-aa00-ab927d45e18a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.007 248514 DEBUG oslo_concurrency.lockutils [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.050 248514 DEBUG nova.network.neutron [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updating instance_info_cache with network_info: [{"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.087 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.087 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Instance network_info: |[{"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.088 248514 DEBUG oslo_concurrency.lockutils [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.088 248514 DEBUG nova.network.neutron [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Refreshing network info cache for port bf1a0f02-8913-41ae-aa00-ab927d45e18a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.091 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Start _get_guest_xml network_info=[{"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.095 248514 WARNING nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.100 248514 DEBUG nova.virt.libvirt.host [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.101 248514 DEBUG nova.virt.libvirt.host [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.104 248514 DEBUG nova.virt.libvirt.host [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.104 248514 DEBUG nova.virt.libvirt.host [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.105 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.105 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.105 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.105 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.106 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.106 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.106 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.106 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.107 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.107 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.107 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.107 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.110 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3040: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Dec 13 09:02:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:02:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2428243478' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.711 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:26 compute-0 ceph-mon[76537]: pgmap v3040: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Dec 13 09:02:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2428243478' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.734 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:26 compute-0 nova_compute[248510]: 2025-12-13 09:02:26.744 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:02:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3184419748' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.292 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.294 248514 DEBUG nova.virt.libvirt.vif [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:02:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1513004833',display_name='tempest-TestNetworkBasicOps-server-1513004833',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1513004833',id=131,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI8dVZ4XrPNWO5yi7otU+kwc72UZys9k+bPmo3tsYqKE1ENZ9dHjFLyBJxupI7JFtVdj7CesXySUu0b7xutQxOWMSQgMIWptsgkbrjrKxEGVeJHg8i+xoVF0hfGKec7bVQ==',key_name='tempest-TestNetworkBasicOps-61860158',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-y8sla6xu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:02:19Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=d1949b92-7b16-4a9d-b033-d0a6df7a9f2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.294 248514 DEBUG nova.network.os_vif_util [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.295 248514 DEBUG nova.network.os_vif_util [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.296 248514 DEBUG nova.objects.instance [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid d1949b92-7b16-4a9d-b033-d0a6df7a9f2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.316 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:02:27 compute-0 nova_compute[248510]:   <uuid>d1949b92-7b16-4a9d-b033-d0a6df7a9f2f</uuid>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   <name>instance-00000083</name>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-1513004833</nova:name>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:02:26</nova:creationTime>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:02:27 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:02:27 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:02:27 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:02:27 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:02:27 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:02:27 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 09:02:27 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:02:27 compute-0 nova_compute[248510]:         <nova:port uuid="bf1a0f02-8913-41ae-aa00-ab927d45e18a">
Dec 13 09:02:27 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <system>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <entry name="serial">d1949b92-7b16-4a9d-b033-d0a6df7a9f2f</entry>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <entry name="uuid">d1949b92-7b16-4a9d-b033-d0a6df7a9f2f</entry>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     </system>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   <os>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   </os>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   <features>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   </features>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk">
Dec 13 09:02:27 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       </source>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:02:27 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk.config">
Dec 13 09:02:27 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       </source>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:02:27 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:eb:41:b6"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <target dev="tapbf1a0f02-89"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/console.log" append="off"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <video>
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     </video>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:02:27 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:02:27 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:02:27 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:02:27 compute-0 nova_compute[248510]: </domain>
Dec 13 09:02:27 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.317 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Preparing to wait for external event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.317 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.318 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.318 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.319 248514 DEBUG nova.virt.libvirt.vif [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:02:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1513004833',display_name='tempest-TestNetworkBasicOps-server-1513004833',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1513004833',id=131,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI8dVZ4XrPNWO5yi7otU+kwc72UZys9k+bPmo3tsYqKE1ENZ9dHjFLyBJxupI7JFtVdj7CesXySUu0b7xutQxOWMSQgMIWptsgkbrjrKxEGVeJHg8i+xoVF0hfGKec7bVQ==',key_name='tempest-TestNetworkBasicOps-61860158',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-y8sla6xu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:02:19Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=d1949b92-7b16-4a9d-b033-d0a6df7a9f2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.319 248514 DEBUG nova.network.os_vif_util [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.319 248514 DEBUG nova.network.os_vif_util [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.320 248514 DEBUG os_vif [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.320 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.321 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.321 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.325 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.325 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf1a0f02-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.326 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf1a0f02-89, col_values=(('external_ids', {'iface-id': 'bf1a0f02-8913-41ae-aa00-ab927d45e18a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:41:b6', 'vm-uuid': 'd1949b92-7b16-4a9d-b033-d0a6df7a9f2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.328 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:27 compute-0 NetworkManager[50376]: <info>  [1765616547.3292] manager: (tapbf1a0f02-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/547)
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.336 248514 INFO os_vif [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89')
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.422 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.423 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.423 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:eb:41:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.424 248514 INFO nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Using config drive
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.451 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.651 248514 DEBUG nova.network.neutron [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updated VIF entry in instance network info cache for port bf1a0f02-8913-41ae-aa00-ab927d45e18a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.652 248514 DEBUG nova.network.neutron [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updating instance_info_cache with network_info: [{"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.675 248514 DEBUG oslo_concurrency.lockutils [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:02:27 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3184419748' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.773 248514 INFO nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Creating config drive at /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/disk.config
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.777 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkyw4vcfz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.820 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.930 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkyw4vcfz" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.962 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:02:27 compute-0 nova_compute[248510]: 2025-12-13 09:02:27.968 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/disk.config d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.016 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.123 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/disk.config d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.124 248514 INFO nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Deleting local config drive /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/disk.config because it was imported into RBD.
Dec 13 09:02:28 compute-0 kernel: tapbf1a0f02-89: entered promiscuous mode
Dec 13 09:02:28 compute-0 NetworkManager[50376]: <info>  [1765616548.1946] manager: (tapbf1a0f02-89): new Tun device (/org/freedesktop/NetworkManager/Devices/548)
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:28 compute-0 ovn_controller[148476]: 2025-12-13T09:02:28Z|01332|binding|INFO|Claiming lport bf1a0f02-8913-41ae-aa00-ab927d45e18a for this chassis.
Dec 13 09:02:28 compute-0 ovn_controller[148476]: 2025-12-13T09:02:28Z|01333|binding|INFO|bf1a0f02-8913-41ae-aa00-ab927d45e18a: Claiming fa:16:3e:eb:41:b6 10.100.0.7
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.208 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:41:b6 10.100.0.7'], port_security=['fa:16:3e:eb:41:b6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd1949b92-7b16-4a9d-b033-d0a6df7a9f2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e40673f-327e-4eb6-92f2-18c595696258', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2f4c3d2c-26ca-48de-bf5d-37154a8db222', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=873a69d0-ac9b-484c-a75a-e746eafe7ffd, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bf1a0f02-8913-41ae-aa00-ab927d45e18a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.210 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bf1a0f02-8913-41ae-aa00-ab927d45e18a in datapath 6e40673f-327e-4eb6-92f2-18c595696258 bound to our chassis
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.212 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e40673f-327e-4eb6-92f2-18c595696258
Dec 13 09:02:28 compute-0 ovn_controller[148476]: 2025-12-13T09:02:28Z|01334|binding|INFO|Setting lport bf1a0f02-8913-41ae-aa00-ab927d45e18a ovn-installed in OVS
Dec 13 09:02:28 compute-0 ovn_controller[148476]: 2025-12-13T09:02:28Z|01335|binding|INFO|Setting lport bf1a0f02-8913-41ae-aa00-ab927d45e18a up in Southbound
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.224 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.230 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a73f679e-2994-443a-a90e-7e86f377fbe1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.232 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e40673f-31 in ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:02:28 compute-0 systemd-udevd[379612]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.233 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e40673f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.233 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8f85aa-6990-4411-ab6e-05efc7d1679b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.234 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[08181d00-8571-436f-a56f-1f41364217c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 NetworkManager[50376]: <info>  [1765616548.2500] device (tapbf1a0f02-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:02:28 compute-0 NetworkManager[50376]: <info>  [1765616548.2515] device (tapbf1a0f02-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.254 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[922250e9-4039-4b28-b1c9-2a7a6294a143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 systemd-machined[210538]: New machine qemu-160-instance-00000083.
Dec 13 09:02:28 compute-0 systemd[1]: Started Virtual Machine qemu-160-instance-00000083.
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.273 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3139ec-2797-4d97-b97a-3819c32b903f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.323 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6cbc6eec-0e04-4584-9bb1-e953a36cc357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 NetworkManager[50376]: <info>  [1765616548.3334] manager: (tap6e40673f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/549)
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.332 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6225cea4-c17e-48de-88c5-9001463c081a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3041: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 3.6 MiB/s wr, 123 op/s
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.608 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aeea98ca-8a4f-452c-a5e4-df92f4e08331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.614 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d4119f8b-29f8-4a80-ac56-e6f4a7981a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 NetworkManager[50376]: <info>  [1765616548.6407] device (tap6e40673f-30): carrier: link connected
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.649 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[22f55905-ec93-4656-b765-e18cee849405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.666 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[081fc278-4cab-45d5-80e6-34c1daf0a795]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e40673f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:63:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912585, 'reachable_time': 34362, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379689, 'error': None, 'target': 'ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.672 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616548.6713371, d1949b92-7b16-4a9d-b033-d0a6df7a9f2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.672 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] VM Started (Lifecycle Event)
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.683 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6356280c-50bc-4187-b6a5-3f5ef4774100]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:6367'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 912585, 'tstamp': 912585}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379690, 'error': None, 'target': 'ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.699 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[975ed36d-b6f1-412f-afe7-6bb0b95f8ebf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e40673f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:63:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912585, 'reachable_time': 34362, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379691, 'error': None, 'target': 'ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.702 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.707 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616548.6746345, d1949b92-7b16-4a9d-b033-d0a6df7a9f2f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.707 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] VM Paused (Lifecycle Event)
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.733 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.738 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.741 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[529c3436-59e8-4a28-94c6-f2e3380cd857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.749 248514 DEBUG nova.compute.manager [req-4711cb82-af73-4bff-8df1-626e921e0436 req-a8ab2129-7d3b-49ff-975c-439de54528ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.750 248514 DEBUG oslo_concurrency.lockutils [req-4711cb82-af73-4bff-8df1-626e921e0436 req-a8ab2129-7d3b-49ff-975c-439de54528ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.750 248514 DEBUG oslo_concurrency.lockutils [req-4711cb82-af73-4bff-8df1-626e921e0436 req-a8ab2129-7d3b-49ff-975c-439de54528ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.750 248514 DEBUG oslo_concurrency.lockutils [req-4711cb82-af73-4bff-8df1-626e921e0436 req-a8ab2129-7d3b-49ff-975c-439de54528ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.751 248514 DEBUG nova.compute.manager [req-4711cb82-af73-4bff-8df1-626e921e0436 req-a8ab2129-7d3b-49ff-975c-439de54528ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Processing event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.751 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:02:28 compute-0 ceph-mon[76537]: pgmap v3041: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 3.6 MiB/s wr, 123 op/s
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.756 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.760 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.760 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616548.7544723, d1949b92-7b16-4a9d-b033-d0a6df7a9f2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.761 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] VM Resumed (Lifecycle Event)
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.765 248514 INFO nova.virt.libvirt.driver [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Instance spawned successfully.
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.765 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.783 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.790 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.795 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.795 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.796 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.796 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.796 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.797 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.827 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.832 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7acf1fbe-691d-4397-8299-73a65ac18968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.833 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e40673f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.833 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.834 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e40673f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.835 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:28 compute-0 NetworkManager[50376]: <info>  [1765616548.8362] manager: (tap6e40673f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/550)
Dec 13 09:02:28 compute-0 kernel: tap6e40673f-30: entered promiscuous mode
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.840 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e40673f-30, col_values=(('external_ids', {'iface-id': 'b5b50ff0-6ee9-471e-bc65-c9c2b390db7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.841 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:28 compute-0 ovn_controller[148476]: 2025-12-13T09:02:28Z|01336|binding|INFO|Releasing lport b5b50ff0-6ee9-471e-bc65-c9c2b390db7a from this chassis (sb_readonly=0)
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.860 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.861 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e40673f-327e-4eb6-92f2-18c595696258.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e40673f-327e-4eb6-92f2-18c595696258.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.863 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[659e7ba8-0232-4e49-8bcd-8bef60771756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.864 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-6e40673f-327e-4eb6-92f2-18c595696258
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/6e40673f-327e-4eb6-92f2-18c595696258.pid.haproxy
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 6e40673f-327e-4eb6-92f2-18c595696258
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:02:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.864 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258', 'env', 'PROCESS_TAG=haproxy-6e40673f-327e-4eb6-92f2-18c595696258', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e40673f-327e-4eb6-92f2-18c595696258.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.870 248514 INFO nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Took 9.46 seconds to spawn the instance on the hypervisor.
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.871 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.950 248514 INFO nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Took 11.06 seconds to build instance.
Dec 13 09:02:28 compute-0 nova_compute[248510]: 2025-12-13 09:02:28.981 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:29 compute-0 podman[379723]: 2025-12-13 09:02:29.267237491 +0000 UTC m=+0.046306065 container create dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:02:29 compute-0 systemd[1]: Started libpod-conmon-dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7.scope.
Dec 13 09:02:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b3b0cc85753c71ae0af5ca1116bf5693a0188d31c1be920354ca7251186e52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:02:29 compute-0 podman[379723]: 2025-12-13 09:02:29.241148592 +0000 UTC m=+0.020217166 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:02:29 compute-0 podman[379723]: 2025-12-13 09:02:29.343921648 +0000 UTC m=+0.122990222 container init dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:02:29 compute-0 podman[379723]: 2025-12-13 09:02:29.349816192 +0000 UTC m=+0.128884756 container start dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 09:02:29 compute-0 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [NOTICE]   (379742) : New worker (379744) forked
Dec 13 09:02:29 compute-0 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [NOTICE]   (379742) : Loading success.
Dec 13 09:02:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3042: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Dec 13 09:02:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:02:31 compute-0 nova_compute[248510]: 2025-12-13 09:02:31.232 248514 DEBUG nova.compute.manager [req-30630ad7-5b74-4c82-a71b-da21a5c712a6 req-55d75383-07bf-4fc5-8320-d0698861615a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:31 compute-0 nova_compute[248510]: 2025-12-13 09:02:31.233 248514 DEBUG oslo_concurrency.lockutils [req-30630ad7-5b74-4c82-a71b-da21a5c712a6 req-55d75383-07bf-4fc5-8320-d0698861615a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:31 compute-0 nova_compute[248510]: 2025-12-13 09:02:31.234 248514 DEBUG oslo_concurrency.lockutils [req-30630ad7-5b74-4c82-a71b-da21a5c712a6 req-55d75383-07bf-4fc5-8320-d0698861615a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:31 compute-0 nova_compute[248510]: 2025-12-13 09:02:31.234 248514 DEBUG oslo_concurrency.lockutils [req-30630ad7-5b74-4c82-a71b-da21a5c712a6 req-55d75383-07bf-4fc5-8320-d0698861615a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:31 compute-0 nova_compute[248510]: 2025-12-13 09:02:31.234 248514 DEBUG nova.compute.manager [req-30630ad7-5b74-4c82-a71b-da21a5c712a6 req-55d75383-07bf-4fc5-8320-d0698861615a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] No waiting events found dispatching network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:02:31 compute-0 nova_compute[248510]: 2025-12-13 09:02:31.234 248514 WARNING nova.compute.manager [req-30630ad7-5b74-4c82-a71b-da21a5c712a6 req-55d75383-07bf-4fc5-8320-d0698861615a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received unexpected event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a for instance with vm_state active and task_state None.
Dec 13 09:02:31 compute-0 ceph-mon[76537]: pgmap v3042: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Dec 13 09:02:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:31.722 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:02:31 compute-0 nova_compute[248510]: 2025-12-13 09:02:31.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:31.724 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:02:32 compute-0 nova_compute[248510]: 2025-12-13 09:02:32.328 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3043: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Dec 13 09:02:32 compute-0 nova_compute[248510]: 2025-12-13 09:02:32.861 248514 DEBUG nova.compute.manager [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-changed-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:32 compute-0 nova_compute[248510]: 2025-12-13 09:02:32.862 248514 DEBUG nova.compute.manager [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Refreshing instance network info cache due to event network-changed-bf1a0f02-8913-41ae-aa00-ab927d45e18a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:02:32 compute-0 nova_compute[248510]: 2025-12-13 09:02:32.863 248514 DEBUG oslo_concurrency.lockutils [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:02:32 compute-0 nova_compute[248510]: 2025-12-13 09:02:32.863 248514 DEBUG oslo_concurrency.lockutils [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:02:32 compute-0 nova_compute[248510]: 2025-12-13 09:02:32.863 248514 DEBUG nova.network.neutron [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Refreshing network info cache for port bf1a0f02-8913-41ae-aa00-ab927d45e18a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:02:32 compute-0 nova_compute[248510]: 2025-12-13 09:02:32.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:33 compute-0 ceph-mon[76537]: pgmap v3043: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Dec 13 09:02:34 compute-0 nova_compute[248510]: 2025-12-13 09:02:34.152 248514 DEBUG nova.network.neutron [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updated VIF entry in instance network info cache for port bf1a0f02-8913-41ae-aa00-ab927d45e18a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:02:34 compute-0 nova_compute[248510]: 2025-12-13 09:02:34.154 248514 DEBUG nova.network.neutron [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updating instance_info_cache with network_info: [{"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:02:34 compute-0 nova_compute[248510]: 2025-12-13 09:02:34.179 248514 DEBUG oslo_concurrency.lockutils [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:02:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3044: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Dec 13 09:02:35 compute-0 ceph-mon[76537]: pgmap v3044: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Dec 13 09:02:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:02:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3045: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 140 op/s
Dec 13 09:02:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:36.726 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:37 compute-0 ovn_controller[148476]: 2025-12-13T09:02:37Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:34:14 10.100.0.5
Dec 13 09:02:37 compute-0 ovn_controller[148476]: 2025-12-13T09:02:37Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:34:14 10.100.0.5
Dec 13 09:02:37 compute-0 nova_compute[248510]: 2025-12-13 09:02:37.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:37 compute-0 ceph-mon[76537]: pgmap v3045: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 140 op/s
Dec 13 09:02:38 compute-0 nova_compute[248510]: 2025-12-13 09:02:38.001 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3046: 321 pgs: 321 active+clean; 150 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 977 KiB/s wr, 171 op/s
Dec 13 09:02:38 compute-0 podman[379755]: 2025-12-13 09:02:38.970771451 +0000 UTC m=+0.052442595 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:02:39 compute-0 podman[379753]: 2025-12-13 09:02:39.003386059 +0000 UTC m=+0.093586401 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 09:02:39 compute-0 podman[379754]: 2025-12-13 09:02:39.003669296 +0000 UTC m=+0.091467800 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 09:02:39 compute-0 ceph-mon[76537]: pgmap v3046: 321 pgs: 321 active+clean; 150 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 977 KiB/s wr, 171 op/s
Dec 13 09:02:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:02:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:02:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:02:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:02:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:02:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:02:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3047: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 172 op/s
Dec 13 09:02:40 compute-0 ovn_controller[148476]: 2025-12-13T09:02:40Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:41:b6 10.100.0.7
Dec 13 09:02:40 compute-0 ovn_controller[148476]: 2025-12-13T09:02:40Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:41:b6 10.100.0.7
Dec 13 09:02:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:02:41 compute-0 ceph-mon[76537]: pgmap v3047: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 172 op/s
Dec 13 09:02:42 compute-0 nova_compute[248510]: 2025-12-13 09:02:42.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3048: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Dec 13 09:02:42 compute-0 ceph-mon[76537]: pgmap v3048: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Dec 13 09:02:43 compute-0 nova_compute[248510]: 2025-12-13 09:02:43.005 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:43 compute-0 nova_compute[248510]: 2025-12-13 09:02:43.830 248514 INFO nova.compute.manager [None req-1ad34f63-d60a-4656-a94c-7f6ce32210bd a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Get console output
Dec 13 09:02:43 compute-0 nova_compute[248510]: 2025-12-13 09:02:43.838 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:02:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3049: 321 pgs: 321 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.235 248514 DEBUG nova.compute.manager [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-changed-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.236 248514 DEBUG nova.compute.manager [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Refreshing instance network info cache due to event network-changed-698673ab-115a-49aa-b3d5-68392d28aa81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.236 248514 DEBUG oslo_concurrency.lockutils [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.237 248514 DEBUG oslo_concurrency.lockutils [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.237 248514 DEBUG nova.network.neutron [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Refreshing network info cache for port 698673ab-115a-49aa-b3d5-68392d28aa81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.324 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.324 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.325 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.325 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.325 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.327 248514 INFO nova.compute.manager [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Terminating instance
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.328 248514 DEBUG nova.compute.manager [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:02:45 compute-0 kernel: tap698673ab-11 (unregistering): left promiscuous mode
Dec 13 09:02:45 compute-0 NetworkManager[50376]: <info>  [1765616565.3849] device (tap698673ab-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:02:45 compute-0 ovn_controller[148476]: 2025-12-13T09:02:45Z|01337|binding|INFO|Releasing lport 698673ab-115a-49aa-b3d5-68392d28aa81 from this chassis (sb_readonly=0)
Dec 13 09:02:45 compute-0 ovn_controller[148476]: 2025-12-13T09:02:45Z|01338|binding|INFO|Setting lport 698673ab-115a-49aa-b3d5-68392d28aa81 down in Southbound
Dec 13 09:02:45 compute-0 ovn_controller[148476]: 2025-12-13T09:02:45Z|01339|binding|INFO|Removing iface tap698673ab-11 ovn-installed in OVS
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.397 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.399 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.410 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:34:14 10.100.0.5'], port_security=['fa:16:3e:7b:34:14 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18506534-de17-4e42-87c7-b2546619f4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '250e0b26-9d3f-404b-8b3e-f79706be2a3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db45c681-c79f-432b-bd2c-bbf3bd1f2153, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=698673ab-115a-49aa-b3d5-68392d28aa81) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.411 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 698673ab-115a-49aa-b3d5-68392d28aa81 in datapath 44b4664e-9eef-4d04-bcdb-68869f16c46a unbound from our chassis
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.414 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44b4664e-9eef-4d04-bcdb-68869f16c46a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.415 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78a7687b-89cb-497c-8860-a4a41f1d20ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.416 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a namespace which is not needed anymore
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:45 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000082.scope: Deactivated successfully.
Dec 13 09:02:45 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000082.scope: Consumed 13.317s CPU time.
Dec 13 09:02:45 compute-0 systemd-machined[210538]: Machine qemu-159-instance-00000082 terminated.
Dec 13 09:02:45 compute-0 kernel: tap698673ab-11: entered promiscuous mode
Dec 13 09:02:45 compute-0 systemd-udevd[379817]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:02:45 compute-0 NetworkManager[50376]: <info>  [1765616565.5588] manager: (tap698673ab-11): new Tun device (/org/freedesktop/NetworkManager/Devices/551)
Dec 13 09:02:45 compute-0 ovn_controller[148476]: 2025-12-13T09:02:45Z|01340|binding|INFO|Claiming lport 698673ab-115a-49aa-b3d5-68392d28aa81 for this chassis.
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:45 compute-0 ovn_controller[148476]: 2025-12-13T09:02:45Z|01341|binding|INFO|698673ab-115a-49aa-b3d5-68392d28aa81: Claiming fa:16:3e:7b:34:14 10.100.0.5
Dec 13 09:02:45 compute-0 kernel: tap698673ab-11 (unregistering): left promiscuous mode
Dec 13 09:02:45 compute-0 ceph-mon[76537]: pgmap v3049: 321 pgs: 321 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.572 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:34:14 10.100.0.5'], port_security=['fa:16:3e:7b:34:14 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18506534-de17-4e42-87c7-b2546619f4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '250e0b26-9d3f-404b-8b3e-f79706be2a3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db45c681-c79f-432b-bd2c-bbf3bd1f2153, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=698673ab-115a-49aa-b3d5-68392d28aa81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:45 compute-0 ovn_controller[148476]: 2025-12-13T09:02:45Z|01342|binding|INFO|Releasing lport 698673ab-115a-49aa-b3d5-68392d28aa81 from this chassis (sb_readonly=0)
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.601 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:34:14 10.100.0.5'], port_security=['fa:16:3e:7b:34:14 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18506534-de17-4e42-87c7-b2546619f4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '250e0b26-9d3f-404b-8b3e-f79706be2a3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db45c681-c79f-432b-bd2c-bbf3bd1f2153, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=698673ab-115a-49aa-b3d5-68392d28aa81) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.605 248514 INFO nova.virt.libvirt.driver [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance destroyed successfully.
Dec 13 09:02:45 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [NOTICE]   (379468) : haproxy version is 2.8.14-c23fe91
Dec 13 09:02:45 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [NOTICE]   (379468) : path to executable is /usr/sbin/haproxy
Dec 13 09:02:45 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [WARNING]  (379468) : Exiting Master process...
Dec 13 09:02:45 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [WARNING]  (379468) : Exiting Master process...
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.606 248514 DEBUG nova.objects.instance [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:45 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [ALERT]    (379468) : Current worker (379470) exited with code 143 (Terminated)
Dec 13 09:02:45 compute-0 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [WARNING]  (379468) : All workers exited. Exiting... (0)
Dec 13 09:02:45 compute-0 systemd[1]: libpod-77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066.scope: Deactivated successfully.
Dec 13 09:02:45 compute-0 podman[379837]: 2025-12-13 09:02:45.617712661 +0000 UTC m=+0.076706559 container died 77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.644 248514 DEBUG nova.virt.libvirt.vif [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1657427533',display_name='tempest-TestNetworkAdvancedServerOps-server-1657427533',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1657427533',id=130,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIu2Mio4iiKCPHCFWqnjQXxmeR9VuZ/sJMKRi7ThHNLpfALJX8ZDxWjNQlqcAjJCFa+dRlCd52p375xJuviq4QzkIt2R0aIT9v0cJCRdagDbQWGjbmiGhE29U66OFfUYVw==',key_name='tempest-TestNetworkAdvancedServerOps-2136416273',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:02:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a2gq0cdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:02:24Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=18506534-de17-4e42-87c7-b2546619f4d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.644 248514 DEBUG nova.network.os_vif_util [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.645 248514 DEBUG nova.network.os_vif_util [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.646 248514 DEBUG os_vif [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.648 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap698673ab-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066-userdata-shm.mount: Deactivated successfully.
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.655 248514 INFO os_vif [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11')
Dec 13 09:02:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-92a218a85abcbb7ba749a23f029096ade87b462fcb9dae89324b66eda6c3873b-merged.mount: Deactivated successfully.
Dec 13 09:02:45 compute-0 podman[379837]: 2025-12-13 09:02:45.673336462 +0000 UTC m=+0.132330330 container cleanup 77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:02:45 compute-0 systemd[1]: libpod-conmon-77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066.scope: Deactivated successfully.
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.743 248514 DEBUG nova.compute.manager [req-6830f357-8fd5-4414-81fa-e8f53a178c5d req-f8a64ad5-8e67-418d-a45e-486b19d185de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-unplugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.744 248514 DEBUG oslo_concurrency.lockutils [req-6830f357-8fd5-4414-81fa-e8f53a178c5d req-f8a64ad5-8e67-418d-a45e-486b19d185de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.744 248514 DEBUG oslo_concurrency.lockutils [req-6830f357-8fd5-4414-81fa-e8f53a178c5d req-f8a64ad5-8e67-418d-a45e-486b19d185de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.745 248514 DEBUG oslo_concurrency.lockutils [req-6830f357-8fd5-4414-81fa-e8f53a178c5d req-f8a64ad5-8e67-418d-a45e-486b19d185de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.745 248514 DEBUG nova.compute.manager [req-6830f357-8fd5-4414-81fa-e8f53a178c5d req-f8a64ad5-8e67-418d-a45e-486b19d185de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-unplugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.746 248514 DEBUG nova.compute.manager [req-6830f357-8fd5-4414-81fa-e8f53a178c5d req-f8a64ad5-8e67-418d-a45e-486b19d185de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-unplugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:02:45 compute-0 podman[379884]: 2025-12-13 09:02:45.758591219 +0000 UTC m=+0.054936916 container remove 77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.764 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a3aff1-6683-4ce3-8c83-f6f29075642e]: (4, ('Sat Dec 13 09:02:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a (77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066)\n77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066\nSat Dec 13 09:02:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a (77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066)\n77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.766 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[682be346-63a5-4a38-82ed-1ca6e24613ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.767 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44b4664e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:45 compute-0 kernel: tap44b4664e-90: left promiscuous mode
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.782 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.785 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd9ad41-74ec-41e9-a64f-f4cc716a3b60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.804 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[858c9360-4391-4ca3-90f8-08b28656737d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.805 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[38d77d4b-eaa9-45bd-bf59-bb5c7e836936]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.823 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d59b1551-dffc-4d83-9502-4e70359320d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912063, 'reachable_time': 41989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379902, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d44b4664e\x2d9eef\x2d4d04\x2dbcdb\x2d68869f16c46a.mount: Deactivated successfully.
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.827 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.827 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1d60d683-dfcf-40d5-82c1-4af2513d2554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.828 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 698673ab-115a-49aa-b3d5-68392d28aa81 in datapath 44b4664e-9eef-4d04-bcdb-68869f16c46a unbound from our chassis
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.830 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44b4664e-9eef-4d04-bcdb-68869f16c46a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.831 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[255ccab5-b27f-43fe-b122-360e006eea79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.832 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 698673ab-115a-49aa-b3d5-68392d28aa81 in datapath 44b4664e-9eef-4d04-bcdb-68869f16c46a unbound from our chassis
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.834 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44b4664e-9eef-4d04-bcdb-68869f16c46a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:02:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.834 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[36e06051-46c8-4eda-8737-a7606deb892c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.882 248514 INFO nova.compute.manager [None req-69833bba-d288-46a3-ac4a-6d6d362d2fb9 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Get console output
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.888 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:02:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.947 248514 INFO nova.virt.libvirt.driver [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deleting instance files /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4_del
Dec 13 09:02:45 compute-0 nova_compute[248510]: 2025-12-13 09:02:45.948 248514 INFO nova.virt.libvirt.driver [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deletion of /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4_del complete
Dec 13 09:02:46 compute-0 nova_compute[248510]: 2025-12-13 09:02:46.021 248514 INFO nova.compute.manager [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Took 0.69 seconds to destroy the instance on the hypervisor.
Dec 13 09:02:46 compute-0 nova_compute[248510]: 2025-12-13 09:02:46.022 248514 DEBUG oslo.service.loopingcall [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:02:46 compute-0 nova_compute[248510]: 2025-12-13 09:02:46.022 248514 DEBUG nova.compute.manager [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:02:46 compute-0 nova_compute[248510]: 2025-12-13 09:02:46.022 248514 DEBUG nova.network.neutron [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:02:46 compute-0 ovn_controller[148476]: 2025-12-13T09:02:46Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:41:b6 10.100.0.7
Dec 13 09:02:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3050: 321 pgs: 321 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 653 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Dec 13 09:02:46 compute-0 ceph-mon[76537]: pgmap v3050: 321 pgs: 321 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 653 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Dec 13 09:02:46 compute-0 nova_compute[248510]: 2025-12-13 09:02:46.766 248514 DEBUG nova.network.neutron [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updated VIF entry in instance network info cache for port 698673ab-115a-49aa-b3d5-68392d28aa81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:02:46 compute-0 nova_compute[248510]: 2025-12-13 09:02:46.767 248514 DEBUG nova.network.neutron [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updating instance_info_cache with network_info: [{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:02:46 compute-0 nova_compute[248510]: 2025-12-13 09:02:46.809 248514 DEBUG nova.network.neutron [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:02:46 compute-0 nova_compute[248510]: 2025-12-13 09:02:46.811 248514 DEBUG oslo_concurrency.lockutils [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:02:46 compute-0 nova_compute[248510]: 2025-12-13 09:02:46.849 248514 INFO nova.compute.manager [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Took 0.83 seconds to deallocate network for instance.
Dec 13 09:02:46 compute-0 nova_compute[248510]: 2025-12-13 09:02:46.906 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:46 compute-0 nova_compute[248510]: 2025-12-13 09:02:46.907 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.261 248514 DEBUG oslo_concurrency.processutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.334 248514 DEBUG nova.compute.manager [req-55e37121-68ae-467e-ad57-997e38d0d158 req-c667227d-80dd-48ed-bf06-c820fc3559e9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-deleted-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:02:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44448204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.767 248514 DEBUG oslo_concurrency.processutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.774 248514 DEBUG nova.compute.provider_tree [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.797 248514 DEBUG nova.scheduler.client.report [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:02:47 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/44448204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.828 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.858 248514 DEBUG nova.compute.manager [req-7e7016f2-a0ce-4f70-9712-de1f8960dca0 req-8c2a2cda-b55d-4002-aff7-0bd5410f0d99 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.859 248514 DEBUG oslo_concurrency.lockutils [req-7e7016f2-a0ce-4f70-9712-de1f8960dca0 req-8c2a2cda-b55d-4002-aff7-0bd5410f0d99 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.859 248514 DEBUG oslo_concurrency.lockutils [req-7e7016f2-a0ce-4f70-9712-de1f8960dca0 req-8c2a2cda-b55d-4002-aff7-0bd5410f0d99 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.860 248514 DEBUG oslo_concurrency.lockutils [req-7e7016f2-a0ce-4f70-9712-de1f8960dca0 req-8c2a2cda-b55d-4002-aff7-0bd5410f0d99 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.860 248514 DEBUG nova.compute.manager [req-7e7016f2-a0ce-4f70-9712-de1f8960dca0 req-8c2a2cda-b55d-4002-aff7-0bd5410f0d99 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.860 248514 WARNING nova.compute.manager [req-7e7016f2-a0ce-4f70-9712-de1f8960dca0 req-8c2a2cda-b55d-4002-aff7-0bd5410f0d99 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received unexpected event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with vm_state deleted and task_state None.
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.862 248514 INFO nova.scheduler.client.report [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Deleted allocations for instance 18506534-de17-4e42-87c7-b2546619f4d4
Dec 13 09:02:47 compute-0 nova_compute[248510]: 2025-12-13 09:02:47.945 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:48 compute-0 nova_compute[248510]: 2025-12-13 09:02:48.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3051: 321 pgs: 321 active+clean; 164 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 4.3 MiB/s wr, 142 op/s
Dec 13 09:02:48 compute-0 ceph-mon[76537]: pgmap v3051: 321 pgs: 321 active+clean; 164 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 4.3 MiB/s wr, 142 op/s
Dec 13 09:02:48 compute-0 ovn_controller[148476]: 2025-12-13T09:02:48Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:41:b6 10.100.0.7
Dec 13 09:02:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3052: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 503 KiB/s rd, 3.3 MiB/s wr, 126 op/s
Dec 13 09:02:50 compute-0 nova_compute[248510]: 2025-12-13 09:02:50.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:50 compute-0 ovn_controller[148476]: 2025-12-13T09:02:50Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:41:b6 10.100.0.7
Dec 13 09:02:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:02:51 compute-0 ceph-mon[76537]: pgmap v3052: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 503 KiB/s rd, 3.3 MiB/s wr, 126 op/s
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.558 248514 DEBUG nova.compute.manager [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-changed-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.558 248514 DEBUG nova.compute.manager [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Refreshing instance network info cache due to event network-changed-bf1a0f02-8913-41ae-aa00-ab927d45e18a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.559 248514 DEBUG oslo_concurrency.lockutils [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.559 248514 DEBUG oslo_concurrency.lockutils [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.560 248514 DEBUG nova.network.neutron [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Refreshing network info cache for port bf1a0f02-8913-41ae-aa00-ab927d45e18a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.664 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.665 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.666 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.666 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.667 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.669 248514 INFO nova.compute.manager [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Terminating instance
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.671 248514 DEBUG nova.compute.manager [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:02:51 compute-0 kernel: tapbf1a0f02-89 (unregistering): left promiscuous mode
Dec 13 09:02:51 compute-0 NetworkManager[50376]: <info>  [1765616571.7300] device (tapbf1a0f02-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:02:51 compute-0 ovn_controller[148476]: 2025-12-13T09:02:51Z|01343|binding|INFO|Releasing lport bf1a0f02-8913-41ae-aa00-ab927d45e18a from this chassis (sb_readonly=0)
Dec 13 09:02:51 compute-0 ovn_controller[148476]: 2025-12-13T09:02:51Z|01344|binding|INFO|Setting lport bf1a0f02-8913-41ae-aa00-ab927d45e18a down in Southbound
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.739 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:51 compute-0 ovn_controller[148476]: 2025-12-13T09:02:51Z|01345|binding|INFO|Removing iface tapbf1a0f02-89 ovn-installed in OVS
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:51.747 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:41:b6 10.100.0.7'], port_security=['fa:16:3e:eb:41:b6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd1949b92-7b16-4a9d-b033-d0a6df7a9f2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e40673f-327e-4eb6-92f2-18c595696258', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2f4c3d2c-26ca-48de-bf5d-37154a8db222', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=873a69d0-ac9b-484c-a75a-e746eafe7ffd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bf1a0f02-8913-41ae-aa00-ab927d45e18a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:02:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:51.749 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bf1a0f02-8913-41ae-aa00-ab927d45e18a in datapath 6e40673f-327e-4eb6-92f2-18c595696258 unbound from our chassis
Dec 13 09:02:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:51.750 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e40673f-327e-4eb6-92f2-18c595696258, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:02:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:51.751 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b8186250-0e5d-437e-8bf9-91cf3c1cc563]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:51.752 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258 namespace which is not needed anymore
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.762 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:51 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000083.scope: Deactivated successfully.
Dec 13 09:02:51 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000083.scope: Consumed 13.500s CPU time.
Dec 13 09:02:51 compute-0 systemd-machined[210538]: Machine qemu-160-instance-00000083 terminated.
Dec 13 09:02:51 compute-0 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [NOTICE]   (379742) : haproxy version is 2.8.14-c23fe91
Dec 13 09:02:51 compute-0 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [NOTICE]   (379742) : path to executable is /usr/sbin/haproxy
Dec 13 09:02:51 compute-0 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [WARNING]  (379742) : Exiting Master process...
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.915 248514 INFO nova.virt.libvirt.driver [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Instance destroyed successfully.
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.916 248514 DEBUG nova.objects.instance [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid d1949b92-7b16-4a9d-b033-d0a6df7a9f2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:02:51 compute-0 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [ALERT]    (379742) : Current worker (379744) exited with code 143 (Terminated)
Dec 13 09:02:51 compute-0 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [WARNING]  (379742) : All workers exited. Exiting... (0)
Dec 13 09:02:51 compute-0 systemd[1]: libpod-dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7.scope: Deactivated successfully.
Dec 13 09:02:51 compute-0 podman[379951]: 2025-12-13 09:02:51.925601193 +0000 UTC m=+0.054890155 container died dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.947 248514 DEBUG nova.virt.libvirt.vif [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:02:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1513004833',display_name='tempest-TestNetworkBasicOps-server-1513004833',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1513004833',id=131,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI8dVZ4XrPNWO5yi7otU+kwc72UZys9k+bPmo3tsYqKE1ENZ9dHjFLyBJxupI7JFtVdj7CesXySUu0b7xutQxOWMSQgMIWptsgkbrjrKxEGVeJHg8i+xoVF0hfGKec7bVQ==',key_name='tempest-TestNetworkBasicOps-61860158',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:02:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-y8sla6xu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:02:28Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=d1949b92-7b16-4a9d-b033-d0a6df7a9f2f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.947 248514 DEBUG nova.network.os_vif_util [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.948 248514 DEBUG nova.network.os_vif_util [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:02:51 compute-0 ovn_controller[148476]: 2025-12-13T09:02:51Z|01346|binding|INFO|Releasing lport b5b50ff0-6ee9-471e-bc65-c9c2b390db7a from this chassis (sb_readonly=0)
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.950 248514 DEBUG os_vif [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.952 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.952 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf1a0f02-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.955 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:02:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7-userdata-shm.mount: Deactivated successfully.
Dec 13 09:02:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4b3b0cc85753c71ae0af5ca1116bf5693a0188d31c1be920354ca7251186e52-merged.mount: Deactivated successfully.
Dec 13 09:02:51 compute-0 nova_compute[248510]: 2025-12-13 09:02:51.962 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:51 compute-0 podman[379951]: 2025-12-13 09:02:51.977853192 +0000 UTC m=+0.107142154 container cleanup dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 09:02:51 compute-0 systemd[1]: libpod-conmon-dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7.scope: Deactivated successfully.
Dec 13 09:02:52 compute-0 podman[379991]: 2025-12-13 09:02:52.040397653 +0000 UTC m=+0.041384644 container remove dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 09:02:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.042 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f950f6c5-e966-4685-8abc-b135ed7e3047]: (4, ('Sat Dec 13 09:02:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258 (dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7)\ndec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7\nSat Dec 13 09:02:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258 (dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7)\ndec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.043 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa71108-6dc7-4ba3-a222-de2e2b48dbd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.044 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e40673f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.045 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:52 compute-0 kernel: tap6e40673f-30: left promiscuous mode
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.094 248514 INFO os_vif [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89')
Dec 13 09:02:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1140aee1-7e13-489e-9629-54cb28687a69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.119 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.120 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86138030-b8de-4d48-b2dd-32bf6faf11b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.121 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab24dd2c-d18c-4cf2-9a92-0376ba302d6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.122 248514 DEBUG nova.compute.manager [req-6d2367da-3fdb-495b-966f-45dd8c87d1ad req-98682793-915d-43d4-8ffa-b1da7f5d7805 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-vif-unplugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.122 248514 DEBUG oslo_concurrency.lockutils [req-6d2367da-3fdb-495b-966f-45dd8c87d1ad req-98682793-915d-43d4-8ffa-b1da7f5d7805 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.123 248514 DEBUG oslo_concurrency.lockutils [req-6d2367da-3fdb-495b-966f-45dd8c87d1ad req-98682793-915d-43d4-8ffa-b1da7f5d7805 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.123 248514 DEBUG oslo_concurrency.lockutils [req-6d2367da-3fdb-495b-966f-45dd8c87d1ad req-98682793-915d-43d4-8ffa-b1da7f5d7805 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.123 248514 DEBUG nova.compute.manager [req-6d2367da-3fdb-495b-966f-45dd8c87d1ad req-98682793-915d-43d4-8ffa-b1da7f5d7805 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] No waiting events found dispatching network-vif-unplugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.123 248514 DEBUG nova.compute.manager [req-6d2367da-3fdb-495b-966f-45dd8c87d1ad req-98682793-915d-43d4-8ffa-b1da7f5d7805 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-vif-unplugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:02:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.140 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[13307e63-f449-41c4-8e30-4798c9c26cf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912553, 'reachable_time': 19694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380022, 'error': None, 'target': 'ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d6e40673f\x2d327e\x2d4eb6\x2d92f2\x2d18c595696258.mount: Deactivated successfully.
Dec 13 09:02:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.144 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:02:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.145 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[704d3917-daec-475c-816c-0bf33401e9ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.352 248514 INFO nova.virt.libvirt.driver [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Deleting instance files /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_del
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.353 248514 INFO nova.virt.libvirt.driver [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Deletion of /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_del complete
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.436 248514 INFO nova.compute.manager [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Took 0.76 seconds to destroy the instance on the hypervisor.
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.437 248514 DEBUG oslo.service.loopingcall [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.440 248514 DEBUG nova.compute.manager [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.440 248514 DEBUG nova.network.neutron [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:02:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3053: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Dec 13 09:02:52 compute-0 nova_compute[248510]: 2025-12-13 09:02:52.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:02:53 compute-0 nova_compute[248510]: 2025-12-13 09:02:53.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:53 compute-0 nova_compute[248510]: 2025-12-13 09:02:53.349 248514 DEBUG nova.network.neutron [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:02:53 compute-0 nova_compute[248510]: 2025-12-13 09:02:53.373 248514 INFO nova.compute.manager [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Took 0.93 seconds to deallocate network for instance.
Dec 13 09:02:53 compute-0 nova_compute[248510]: 2025-12-13 09:02:53.421 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:53 compute-0 nova_compute[248510]: 2025-12-13 09:02:53.421 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:53 compute-0 nova_compute[248510]: 2025-12-13 09:02:53.484 248514 DEBUG oslo_concurrency.processutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:02:53 compute-0 nova_compute[248510]: 2025-12-13 09:02:53.540 248514 DEBUG nova.network.neutron [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updated VIF entry in instance network info cache for port bf1a0f02-8913-41ae-aa00-ab927d45e18a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:02:53 compute-0 nova_compute[248510]: 2025-12-13 09:02:53.542 248514 DEBUG nova.network.neutron [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updating instance_info_cache with network_info: [{"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:02:53 compute-0 ceph-mon[76537]: pgmap v3053: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Dec 13 09:02:53 compute-0 nova_compute[248510]: 2025-12-13 09:02:53.564 248514 DEBUG oslo_concurrency.lockutils [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:02:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:02:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3556270652' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.084 248514 DEBUG oslo_concurrency.processutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.090 248514 DEBUG nova.compute.provider_tree [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.115 248514 DEBUG nova.scheduler.client.report [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.136 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.185 248514 INFO nova.scheduler.client.report [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance d1949b92-7b16-4a9d-b033-d0a6df7a9f2f
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.249 248514 DEBUG nova.compute.manager [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.249 248514 DEBUG oslo_concurrency.lockutils [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.249 248514 DEBUG oslo_concurrency.lockutils [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.250 248514 DEBUG oslo_concurrency.lockutils [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.250 248514 DEBUG nova.compute.manager [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] No waiting events found dispatching network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.250 248514 WARNING nova.compute.manager [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received unexpected event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a for instance with vm_state deleted and task_state None.
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.250 248514 DEBUG nova.compute.manager [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-vif-deleted-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.250 248514 INFO nova.compute.manager [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Neutron deleted interface bf1a0f02-8913-41ae-aa00-ab927d45e18a; detaching it from the instance and deleting it from the info cache
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.251 248514 DEBUG nova.network.neutron [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.278 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:54 compute-0 nova_compute[248510]: 2025-12-13 09:02:54.285 248514 DEBUG nova.compute.manager [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Detach interface failed, port_id=bf1a0f02-8913-41ae-aa00-ab927d45e18a, reason: Instance d1949b92-7b16-4a9d-b033-d0a6df7a9f2f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:02:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3054: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.2 MiB/s wr, 120 op/s
Dec 13 09:02:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3556270652' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:02:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:55.439 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:02:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:55.440 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:02:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:02:55.441 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:02:55 compute-0 ceph-mon[76537]: pgmap v3054: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.2 MiB/s wr, 120 op/s
Dec 13 09:02:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:02:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3055: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 16 KiB/s wr, 56 op/s
Dec 13 09:02:56 compute-0 nova_compute[248510]: 2025-12-13 09:02:56.954 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:57 compute-0 ceph-mon[76537]: pgmap v3055: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 16 KiB/s wr, 56 op/s
Dec 13 09:02:58 compute-0 nova_compute[248510]: 2025-12-13 09:02:58.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:02:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3056: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 16 KiB/s wr, 56 op/s
Dec 13 09:02:58 compute-0 ceph-mon[76537]: pgmap v3056: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 16 KiB/s wr, 56 op/s
Dec 13 09:03:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3057: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 42 op/s
Dec 13 09:03:00 compute-0 nova_compute[248510]: 2025-12-13 09:03:00.603 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616565.6018267, 18506534-de17-4e42-87c7-b2546619f4d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:00 compute-0 nova_compute[248510]: 2025-12-13 09:03:00.604 248514 INFO nova.compute.manager [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] VM Stopped (Lifecycle Event)
Dec 13 09:03:00 compute-0 nova_compute[248510]: 2025-12-13 09:03:00.872 248514 DEBUG nova.compute.manager [None req-526d59e2-b10d-4183-b193-ddc31f3dd8a3 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:03:01 compute-0 ceph-mon[76537]: pgmap v3057: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 42 op/s
Dec 13 09:03:01 compute-0 nova_compute[248510]: 2025-12-13 09:03:01.957 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3058: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:03:03 compute-0 nova_compute[248510]: 2025-12-13 09:03:03.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:03 compute-0 ceph-mon[76537]: pgmap v3058: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:03:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3059: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:03:05 compute-0 ceph-mon[76537]: pgmap v3059: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:03:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:03:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3060: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:03:06 compute-0 nova_compute[248510]: 2025-12-13 09:03:06.913 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616571.9108522, d1949b92-7b16-4a9d-b033-d0a6df7a9f2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:06 compute-0 nova_compute[248510]: 2025-12-13 09:03:06.913 248514 INFO nova.compute.manager [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] VM Stopped (Lifecycle Event)
Dec 13 09:03:06 compute-0 nova_compute[248510]: 2025-12-13 09:03:06.960 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:06 compute-0 nova_compute[248510]: 2025-12-13 09:03:06.969 248514 DEBUG nova.compute.manager [None req-4defc14b-2124-4295-82f8-d32ddb08db5f - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:07 compute-0 ceph-mon[76537]: pgmap v3060: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:03:08 compute-0 nova_compute[248510]: 2025-12-13 09:03:08.015 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3061: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:03:08 compute-0 nova_compute[248510]: 2025-12-13 09:03:08.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:03:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:03:09
Dec 13 09:03:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:03:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:03:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.rgw.root', 'default.rgw.log', 'vms', 'images']
Dec 13 09:03:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:03:09 compute-0 ceph-mon[76537]: pgmap v3061: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:03:09 compute-0 podman[380051]: 2025-12-13 09:03:09.991395162 +0000 UTC m=+0.066544428 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 13 09:03:10 compute-0 podman[380050]: 2025-12-13 09:03:10.019526407 +0000 UTC m=+0.099198947 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd)
Dec 13 09:03:10 compute-0 podman[380049]: 2025-12-13 09:03:10.025808534 +0000 UTC m=+0.104832367 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3062: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:03:10 compute-0 ceph-mon[76537]: pgmap v3062: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:03:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:03:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:03:11 compute-0 nova_compute[248510]: 2025-12-13 09:03:11.241 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:11 compute-0 nova_compute[248510]: 2025-12-13 09:03:11.242 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:11 compute-0 nova_compute[248510]: 2025-12-13 09:03:11.273 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:03:11 compute-0 nova_compute[248510]: 2025-12-13 09:03:11.389 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:11 compute-0 nova_compute[248510]: 2025-12-13 09:03:11.390 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:11 compute-0 nova_compute[248510]: 2025-12-13 09:03:11.399 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:03:11 compute-0 nova_compute[248510]: 2025-12-13 09:03:11.399 248514 INFO nova.compute.claims [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:03:11 compute-0 nova_compute[248510]: 2025-12-13 09:03:11.544 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:11 compute-0 nova_compute[248510]: 2025-12-13 09:03:11.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:03:11 compute-0 nova_compute[248510]: 2025-12-13 09:03:11.962 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:03:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1867376723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.135 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.142 248514 DEBUG nova.compute.provider_tree [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.164 248514 DEBUG nova.scheduler.client.report [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:03:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1867376723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.193 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.194 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.247 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.247 248514 DEBUG nova.network.neutron [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.277 248514 INFO nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.300 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.400 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.402 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.403 248514 INFO nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Creating image(s)
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.437 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.468 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3063: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.496 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.500 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.572 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.573 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.574 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.574 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.598 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.602 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.772 248514 DEBUG nova.policy [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5fd410579fa429ba0f7f680590cd86a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '77d40177a4804671aa9c5da343bc2ed4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.921 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:12 compute-0 nova_compute[248510]: 2025-12-13 09:03:12.978 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] resizing rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:03:13 compute-0 nova_compute[248510]: 2025-12-13 09:03:13.017 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:13 compute-0 nova_compute[248510]: 2025-12-13 09:03:13.064 248514 DEBUG nova.objects.instance [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'migration_context' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:13 compute-0 nova_compute[248510]: 2025-12-13 09:03:13.083 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:03:13 compute-0 nova_compute[248510]: 2025-12-13 09:03:13.083 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Ensure instance console log exists: /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:03:13 compute-0 nova_compute[248510]: 2025-12-13 09:03:13.084 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:13 compute-0 nova_compute[248510]: 2025-12-13 09:03:13.084 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:13 compute-0 nova_compute[248510]: 2025-12-13 09:03:13.084 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:13 compute-0 ceph-mon[76537]: pgmap v3063: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:03:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3064: 321 pgs: 321 active+clean; 76 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 960 KiB/s wr, 19 op/s
Dec 13 09:03:15 compute-0 nova_compute[248510]: 2025-12-13 09:03:15.070 248514 DEBUG nova.network.neutron [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Successfully created port: daddb809-b36f-4f29-bd15-459cfd21a812 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:03:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:03:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3032886555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:03:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:03:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3032886555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:03:15 compute-0 ceph-mon[76537]: pgmap v3064: 321 pgs: 321 active+clean; 76 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 960 KiB/s wr, 19 op/s
Dec 13 09:03:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3032886555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:03:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3032886555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:03:15 compute-0 nova_compute[248510]: 2025-12-13 09:03:15.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:03:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:03:16 compute-0 nova_compute[248510]: 2025-12-13 09:03:16.062 248514 DEBUG nova.network.neutron [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Successfully updated port: daddb809-b36f-4f29-bd15-459cfd21a812 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:03:16 compute-0 nova_compute[248510]: 2025-12-13 09:03:16.081 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:03:16 compute-0 nova_compute[248510]: 2025-12-13 09:03:16.082 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:03:16 compute-0 nova_compute[248510]: 2025-12-13 09:03:16.082 248514 DEBUG nova.network.neutron [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:03:16 compute-0 nova_compute[248510]: 2025-12-13 09:03:16.198 248514 DEBUG nova.compute.manager [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-changed-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:16 compute-0 nova_compute[248510]: 2025-12-13 09:03:16.199 248514 DEBUG nova.compute.manager [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Refreshing instance network info cache due to event network-changed-daddb809-b36f-4f29-bd15-459cfd21a812. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:03:16 compute-0 nova_compute[248510]: 2025-12-13 09:03:16.199 248514 DEBUG oslo_concurrency.lockutils [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:03:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3065: 321 pgs: 321 active+clean; 76 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 960 KiB/s wr, 19 op/s
Dec 13 09:03:16 compute-0 nova_compute[248510]: 2025-12-13 09:03:16.717 248514 DEBUG nova.network.neutron [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:03:16 compute-0 nova_compute[248510]: 2025-12-13 09:03:16.964 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:17 compute-0 ceph-mon[76537]: pgmap v3065: 321 pgs: 321 active+clean; 76 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 960 KiB/s wr, 19 op/s
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.654 248514 DEBUG nova.network.neutron [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updating instance_info_cache with network_info: [{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.690 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.691 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance network_info: |[{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.692 248514 DEBUG oslo_concurrency.lockutils [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.693 248514 DEBUG nova.network.neutron [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Refreshing network info cache for port daddb809-b36f-4f29-bd15-459cfd21a812 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.698 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Start _get_guest_xml network_info=[{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.706 248514 WARNING nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.713 248514 DEBUG nova.virt.libvirt.host [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.715 248514 DEBUG nova.virt.libvirt.host [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.727 248514 DEBUG nova.virt.libvirt.host [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.728 248514 DEBUG nova.virt.libvirt.host [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.729 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.730 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.731 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.731 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.731 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.732 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.732 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.733 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.733 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.734 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.734 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.735 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.740 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.804 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.806 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.841 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 09:03:17 compute-0 nova_compute[248510]: 2025-12-13 09:03:17.842 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.020 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:03:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3951604695' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.355 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.380 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.384 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3066: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:03:18 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3951604695' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:18 compute-0 ceph-mon[76537]: pgmap v3066: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:03:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:03:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591836453' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.911 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.914 248514 DEBUG nova.virt.libvirt.vif [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:03:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-306376587',display_name='tempest-TestNetworkAdvancedServerOps-server-306376587',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-306376587',id=132,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHKEKbLZZE+aW59knHMs4q4nSClPgzNp8F0QXUH6k11KPbTMvxhfQvtv8ScRqzRXYIiwBhktHJd/maU265oWRgbsF6YV/H8WAVywICvzTkBF4CZcbkyPahF1JxO8nrngQ==',key_name='tempest-TestNetworkAdvancedServerOps-1547833332',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-4215zp0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:03:12Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=f5f29271-ff94-4d88-bc99-1cfc3e1128a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.915 248514 DEBUG nova.network.os_vif_util [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.917 248514 DEBUG nova.network.os_vif_util [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.919 248514 DEBUG nova.objects.instance [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.947 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:03:18 compute-0 nova_compute[248510]:   <uuid>f5f29271-ff94-4d88-bc99-1cfc3e1128a0</uuid>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   <name>instance-00000084</name>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-306376587</nova:name>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:03:17</nova:creationTime>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:03:18 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:03:18 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:03:18 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:03:18 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:03:18 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:03:18 compute-0 nova_compute[248510]:         <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 09:03:18 compute-0 nova_compute[248510]:         <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:03:18 compute-0 nova_compute[248510]:         <nova:port uuid="daddb809-b36f-4f29-bd15-459cfd21a812">
Dec 13 09:03:18 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <system>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <entry name="serial">f5f29271-ff94-4d88-bc99-1cfc3e1128a0</entry>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <entry name="uuid">f5f29271-ff94-4d88-bc99-1cfc3e1128a0</entry>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     </system>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   <os>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   </os>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   <features>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   </features>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk">
Dec 13 09:03:18 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       </source>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:03:18 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config">
Dec 13 09:03:18 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       </source>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:03:18 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:b3:4e:45"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <target dev="tapdaddb809-b3"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/console.log" append="off"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <video>
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     </video>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:03:18 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:03:18 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:03:18 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:03:18 compute-0 nova_compute[248510]: </domain>
Dec 13 09:03:18 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.949 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Preparing to wait for external event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.950 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.950 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.951 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.951 248514 DEBUG nova.virt.libvirt.vif [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:03:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-306376587',display_name='tempest-TestNetworkAdvancedServerOps-server-306376587',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-306376587',id=132,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHKEKbLZZE+aW59knHMs4q4nSClPgzNp8F0QXUH6k11KPbTMvxhfQvtv8ScRqzRXYIiwBhktHJd/maU265oWRgbsF6YV/H8WAVywICvzTkBF4CZcbkyPahF1JxO8nrngQ==',key_name='tempest-TestNetworkAdvancedServerOps-1547833332',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-4215zp0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:03:12Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=f5f29271-ff94-4d88-bc99-1cfc3e1128a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.952 248514 DEBUG nova.network.os_vif_util [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.953 248514 DEBUG nova.network.os_vif_util [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.953 248514 DEBUG os_vif [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.954 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.954 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.955 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.958 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.959 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaddb809-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.959 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdaddb809-b3, col_values=(('external_ids', {'iface-id': 'daddb809-b36f-4f29-bd15-459cfd21a812', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:4e:45', 'vm-uuid': 'f5f29271-ff94-4d88-bc99-1cfc3e1128a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:18 compute-0 NetworkManager[50376]: <info>  [1765616598.9626] manager: (tapdaddb809-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/552)
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.969 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:18 compute-0 nova_compute[248510]: 2025-12-13 09:03:18.971 248514 INFO os_vif [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3')
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.040 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.040 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.040 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No VIF found with MAC fa:16:3e:b3:4e:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.041 248514 INFO nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Using config drive
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.064 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:19 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1591836453' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.630 248514 DEBUG nova.network.neutron [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updated VIF entry in instance network info cache for port daddb809-b36f-4f29-bd15-459cfd21a812. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.631 248514 DEBUG nova.network.neutron [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updating instance_info_cache with network_info: [{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.658 248514 DEBUG oslo_concurrency.lockutils [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.753 248514 INFO nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Creating config drive at /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/disk.config
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.764 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvov7r_9d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.928 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvov7r_9d" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.953 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:19 compute-0 nova_compute[248510]: 2025-12-13 09:03:19.959 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/disk.config f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.123 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/disk.config f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.125 248514 INFO nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Deleting local config drive /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/disk.config because it was imported into RBD.
Dec 13 09:03:20 compute-0 kernel: tapdaddb809-b3: entered promiscuous mode
Dec 13 09:03:20 compute-0 NetworkManager[50376]: <info>  [1765616600.1964] manager: (tapdaddb809-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/553)
Dec 13 09:03:20 compute-0 ovn_controller[148476]: 2025-12-13T09:03:20Z|01347|binding|INFO|Claiming lport daddb809-b36f-4f29-bd15-459cfd21a812 for this chassis.
Dec 13 09:03:20 compute-0 ovn_controller[148476]: 2025-12-13T09:03:20Z|01348|binding|INFO|daddb809-b36f-4f29-bd15-459cfd21a812: Claiming fa:16:3e:b3:4e:45 10.100.0.11
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.202 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.213 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:4e:45 10.100.0.11'], port_security=['fa:16:3e:b3:4e:45 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f5f29271-ff94-4d88-bc99-1cfc3e1128a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd30303-3917-438c-8b47-a12827c948d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '48e751cb-5eae-4702-97af-e0ce47d426b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c018ca30-1abb-42b0-8adb-342f8b53fd8e, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=daddb809-b36f-4f29-bd15-459cfd21a812) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.215 158419 INFO neutron.agent.ovn.metadata.agent [-] Port daddb809-b36f-4f29-bd15-459cfd21a812 in datapath cdd30303-3917-438c-8b47-a12827c948d6 bound to our chassis
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.217 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cdd30303-3917-438c-8b47-a12827c948d6
Dec 13 09:03:20 compute-0 systemd-machined[210538]: New machine qemu-161-instance-00000084.
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.236 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b13d932-e5ac-4808-b8b9-fc7c38e5beff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.237 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcdd30303-31 in ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.240 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcdd30303-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.240 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8029635d-0a97-4360-b7e3-a9b10525e04d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.241 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64198d5f-66da-43c7-8d7b-b6669f2a650d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 systemd[1]: Started Virtual Machine qemu-161-instance-00000084.
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.265 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[112b766c-77cd-4289-bcac-a303c1b66eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 ovn_controller[148476]: 2025-12-13T09:03:20Z|01349|binding|INFO|Setting lport daddb809-b36f-4f29-bd15-459cfd21a812 ovn-installed in OVS
Dec 13 09:03:20 compute-0 ovn_controller[148476]: 2025-12-13T09:03:20Z|01350|binding|INFO|Setting lport daddb809-b36f-4f29-bd15-459cfd21a812 up in Southbound
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.269 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:20 compute-0 systemd-udevd[380441]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.288 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0180eeb-f81b-425f-b172-4ab838166c6a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 NetworkManager[50376]: <info>  [1765616600.3071] device (tapdaddb809-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:03:20 compute-0 NetworkManager[50376]: <info>  [1765616600.3082] device (tapdaddb809-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.338 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aeeefb6b-e215-4468-9aec-1cb63e34f2e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.343 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c22aa4f-7f6d-4ba2-82df-ef7171b9873d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 systemd-udevd[380449]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:03:20 compute-0 NetworkManager[50376]: <info>  [1765616600.3461] manager: (tapcdd30303-30): new Veth device (/org/freedesktop/NetworkManager/Devices/554)
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.392 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7f72b109-f4d9-441a-9bc9-67de2a9e47cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.396 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5b8ee0-c730-4f0b-81fd-0db73afe0a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 NetworkManager[50376]: <info>  [1765616600.4293] device (tapcdd30303-30): carrier: link connected
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.441 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4497b5-2836-488c-86a7-a27540a0bb41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.468 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b86cd16-fc71-47c6-9e53-edb1d3620731]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd30303-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:ce:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 917764, 'reachable_time': 37290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380471, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bb287535-0f66-4a14-ab58-415bec5fd6e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:ce22'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 917764, 'tstamp': 917764}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380472, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3067: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.518 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d1326e74-2682-4350-80ae-d03286835fb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd30303-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:ce:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 917764, 'reachable_time': 37290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380473, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.549 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.550 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.572 248514 DEBUG nova.compute.manager [req-4bc692e7-1b02-4bf8-bce0-cfe81d8d5b6b req-144a415e-e2b3-4d0d-a66b-768f19d1c562 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.573 248514 DEBUG oslo_concurrency.lockutils [req-4bc692e7-1b02-4bf8-bce0-cfe81d8d5b6b req-144a415e-e2b3-4d0d-a66b-768f19d1c562 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.573 248514 DEBUG oslo_concurrency.lockutils [req-4bc692e7-1b02-4bf8-bce0-cfe81d8d5b6b req-144a415e-e2b3-4d0d-a66b-768f19d1c562 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.574 248514 DEBUG oslo_concurrency.lockutils [req-4bc692e7-1b02-4bf8-bce0-cfe81d8d5b6b req-144a415e-e2b3-4d0d-a66b-768f19d1c562 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.576 248514 DEBUG nova.compute.manager [req-4bc692e7-1b02-4bf8-bce0-cfe81d8d5b6b req-144a415e-e2b3-4d0d-a66b-768f19d1c562 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Processing event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.578 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.594 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[91f2e465-2b1c-4089-85cb-3fd5da4c9883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 ceph-mon[76537]: pgmap v3067: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.687 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.688 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.690 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c10e2bd8-7f44-45d0-84a4-9d889976bada]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.692 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd30303-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.692 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.692 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdd30303-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.697 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.698 248514 INFO nova.compute.claims [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:03:20 compute-0 kernel: tapcdd30303-30: entered promiscuous mode
Dec 13 09:03:20 compute-0 NetworkManager[50376]: <info>  [1765616600.7491] manager: (tapcdd30303-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.748 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.751 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcdd30303-30, col_values=(('external_ids', {'iface-id': '2b4706f8-0b78-43d8-a3df-6327712b725d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:20 compute-0 ovn_controller[148476]: 2025-12-13T09:03:20Z|01351|binding|INFO|Releasing lport 2b4706f8-0b78-43d8-a3df-6327712b725d from this chassis (sb_readonly=0)
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.773 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cdd30303-3917-438c-8b47-a12827c948d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cdd30303-3917-438c-8b47-a12827c948d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.774 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[22fe39ab-e0f5-4619-9750-1bb856493769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.775 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-cdd30303-3917-438c-8b47-a12827c948d6
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/cdd30303-3917-438c-8b47-a12827c948d6.pid.haproxy
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID cdd30303-3917-438c-8b47-a12827c948d6
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:03:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.775 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'env', 'PROCESS_TAG=haproxy-cdd30303-3917-438c-8b47-a12827c948d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cdd30303-3917-438c-8b47-a12827c948d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.842 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.905 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616600.9044743, f5f29271-ff94-4d88-bc99-1cfc3e1128a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.905 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] VM Started (Lifecycle Event)
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.908 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.913 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.920 248514 INFO nova.virt.libvirt.driver [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance spawned successfully.
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.921 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.923 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.931 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.950 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.951 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.951 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.952 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.952 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.953 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.966 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.967 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616600.907748, f5f29271-ff94-4d88-bc99-1cfc3e1128a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:20 compute-0 nova_compute[248510]: 2025-12-13 09:03:20.968 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] VM Paused (Lifecycle Event)
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.007 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.021 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616600.9123144, f5f29271-ff94-4d88-bc99-1cfc3e1128a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.022 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] VM Resumed (Lifecycle Event)
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.026 248514 INFO nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Took 8.63 seconds to spawn the instance on the hypervisor.
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.027 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.056 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.062 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.090 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.111 248514 INFO nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Took 9.77 seconds to build instance.
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.130 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:21 compute-0 podman[380566]: 2025-12-13 09:03:21.210694836 +0000 UTC m=+0.056927797 container create 0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 09:03:21 compute-0 systemd[1]: Started libpod-conmon-0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8.scope.
Dec 13 09:03:21 compute-0 podman[380566]: 2025-12-13 09:03:21.18453091 +0000 UTC m=+0.030763871 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036026279993213125 of space, bias 1.0, pg target 0.10807883997963938 quantized to 32 (current 32)
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669848540879215 of space, bias 1.0, pg target 0.2009545622637645 quantized to 32 (current 32)
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724716211363558e-07 of space, bias 4.0, pg target 0.0006869659453636269 quantized to 16 (current 32)
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:03:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:03:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:03:21 compute-0 sshd-session[380436]: Invalid user ethereum from 80.94.92.165 port 46368
Dec 13 09:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d734d895c82410d889b2d117e4e4de5869dfd023c73ed8e5614e3fc64a0d3650/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:21 compute-0 podman[380566]: 2025-12-13 09:03:21.312594389 +0000 UTC m=+0.158827340 container init 0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 09:03:21 compute-0 podman[380566]: 2025-12-13 09:03:21.320825265 +0000 UTC m=+0.167058216 container start 0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 09:03:21 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [NOTICE]   (380585) : New worker (380587) forked
Dec 13 09:03:21 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [NOTICE]   (380585) : Loading success.
Dec 13 09:03:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:03:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1434025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.445 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.454 248514 DEBUG nova.compute.provider_tree [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.475 248514 DEBUG nova.scheduler.client.report [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:03:21 compute-0 sshd-session[380436]: Connection closed by invalid user ethereum 80.94.92.165 port 46368 [preauth]
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.507 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.509 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.565 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.570 248514 DEBUG nova.network.neutron [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.595 248514 INFO nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.616 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:03:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1434025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.738 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.740 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.740 248514 INFO nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Creating image(s)
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.769 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.799 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.829 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.835 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.879 248514 DEBUG nova.policy [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.916 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.917 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.918 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.918 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.947 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:21 compute-0 nova_compute[248510]: 2025-12-13 09:03:21.953 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3068: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.498 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.614 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:03:22 compute-0 sudo[380710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:03:22 compute-0 sudo[380710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:22 compute-0 sudo[380710]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:22 compute-0 ceph-mon[76537]: pgmap v3068: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.729 248514 DEBUG nova.network.neutron [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Successfully created port: 90d7401e-210b-4cbe-b93d-853787434352 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:03:22 compute-0 sudo[380760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.743 248514 DEBUG nova.compute.manager [req-6a61fdbb-bdd9-4816-8ce1-1bed7fc7ad3a req-3e3c5dbe-4909-440e-bd81-4c41ef35ae73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.744 248514 DEBUG oslo_concurrency.lockutils [req-6a61fdbb-bdd9-4816-8ce1-1bed7fc7ad3a req-3e3c5dbe-4909-440e-bd81-4c41ef35ae73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.744 248514 DEBUG oslo_concurrency.lockutils [req-6a61fdbb-bdd9-4816-8ce1-1bed7fc7ad3a req-3e3c5dbe-4909-440e-bd81-4c41ef35ae73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:22 compute-0 sudo[380760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.745 248514 DEBUG oslo_concurrency.lockutils [req-6a61fdbb-bdd9-4816-8ce1-1bed7fc7ad3a req-3e3c5dbe-4909-440e-bd81-4c41ef35ae73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.745 248514 DEBUG nova.compute.manager [req-6a61fdbb-bdd9-4816-8ce1-1bed7fc7ad3a req-3e3c5dbe-4909-440e-bd81-4c41ef35ae73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.746 248514 WARNING nova.compute.manager [req-6a61fdbb-bdd9-4816-8ce1-1bed7fc7ad3a req-3e3c5dbe-4909-440e-bd81-4c41ef35ae73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state active and task_state None.
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.794 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.798 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.805 248514 DEBUG nova.objects.instance [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.837 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.837 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.838 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.838 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.838 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.883 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.884 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Ensure instance console log exists: /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.884 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.885 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:22 compute-0 nova_compute[248510]: 2025-12-13 09:03:22.885 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.057 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:23 compute-0 sudo[380760]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:03:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947359066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:03:23 compute-0 sudo[380864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:03:23 compute-0 sudo[380864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:23 compute-0 sudo[380864]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.468 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:23 compute-0 sudo[380891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Dec 13 09:03:23 compute-0 sudo[380891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.535 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.535 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.618 248514 DEBUG nova.network.neutron [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Successfully updated port: 90d7401e-210b-4cbe-b93d-853787434352 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.620 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:23 compute-0 NetworkManager[50376]: <info>  [1765616603.6212] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Dec 13 09:03:23 compute-0 NetworkManager[50376]: <info>  [1765616603.6223] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/557)
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.634 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.635 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.635 248514 DEBUG nova.network.neutron [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.705 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:23 compute-0 ovn_controller[148476]: 2025-12-13T09:03:23Z|01352|binding|INFO|Releasing lport 2b4706f8-0b78-43d8-a3df-6327712b725d from this chassis (sb_readonly=0)
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.731 248514 DEBUG nova.compute.manager [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-changed-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.731 248514 DEBUG nova.compute.manager [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing instance network info cache due to event network-changed-90d7401e-210b-4cbe-b93d-853787434352. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.731 248514 DEBUG oslo_concurrency.lockutils [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:03:23 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/947359066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.782 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.783 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3326MB free_disk=59.96666970383376GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.784 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.784 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:23 compute-0 sudo[380891]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.840 248514 DEBUG nova.network.neutron [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:03:23 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:03:23 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.871 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance f5f29271-ff94-4d88-bc99-1cfc3e1128a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.871 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.872 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.872 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:03:23 compute-0 sudo[380935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:03:23 compute-0 sudo[380935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:23 compute-0 sudo[380935]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.943 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:23 compute-0 nova_compute[248510]: 2025-12-13 09:03:23.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:24 compute-0 sudo[380960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- inventory --format=json-pretty --filter-for-batch
Dec 13 09:03:24 compute-0 sudo[380960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:24 compute-0 podman[381018]: 2025-12-13 09:03:24.358719087 +0000 UTC m=+0.078675132 container create 099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:03:24 compute-0 systemd[1]: Started libpod-conmon-099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18.scope.
Dec 13 09:03:24 compute-0 podman[381018]: 2025-12-13 09:03:24.325654199 +0000 UTC m=+0.045610274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:03:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:03:24 compute-0 podman[381018]: 2025-12-13 09:03:24.48295191 +0000 UTC m=+0.202907985 container init 099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 09:03:24 compute-0 podman[381018]: 2025-12-13 09:03:24.49453208 +0000 UTC m=+0.214488125 container start 099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 09:03:24 compute-0 podman[381018]: 2025-12-13 09:03:24.500140371 +0000 UTC m=+0.220096416 container attach 099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:03:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3069: 321 pgs: 321 active+clean; 134 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Dec 13 09:03:24 compute-0 nostalgic_moser[381033]: 167 167
Dec 13 09:03:24 compute-0 systemd[1]: libpod-099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18.scope: Deactivated successfully.
Dec 13 09:03:24 compute-0 podman[381018]: 2025-12-13 09:03:24.505049424 +0000 UTC m=+0.225005459 container died 099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:03:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba511b0ac0e4bd3bca49bf653e631c5739386e58244956da56dd2f791bfff990-merged.mount: Deactivated successfully.
Dec 13 09:03:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:03:24 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637367313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:03:24 compute-0 podman[381018]: 2025-12-13 09:03:24.573787126 +0000 UTC m=+0.293743171 container remove 099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 09:03:24 compute-0 nova_compute[248510]: 2025-12-13 09:03:24.577 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:24 compute-0 nova_compute[248510]: 2025-12-13 09:03:24.589 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:03:24 compute-0 systemd[1]: libpod-conmon-099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18.scope: Deactivated successfully.
Dec 13 09:03:24 compute-0 nova_compute[248510]: 2025-12-13 09:03:24.632 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:03:24 compute-0 nova_compute[248510]: 2025-12-13 09:03:24.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:03:24 compute-0 nova_compute[248510]: 2025-12-13 09:03:24.688 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:24 compute-0 podman[381062]: 2025-12-13 09:03:24.792852944 +0000 UTC m=+0.044224109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.013 248514 DEBUG nova.compute.manager [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-changed-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.013 248514 DEBUG nova.compute.manager [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Refreshing instance network info cache due to event network-changed-daddb809-b36f-4f29-bd15-459cfd21a812. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.013 248514 DEBUG oslo_concurrency.lockutils [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.014 248514 DEBUG oslo_concurrency.lockutils [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.014 248514 DEBUG nova.network.neutron [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Refreshing network info cache for port daddb809-b36f-4f29-bd15-459cfd21a812 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.158 248514 DEBUG nova.network.neutron [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.182 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.182 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Instance network_info: |[{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.184 248514 DEBUG oslo_concurrency.lockutils [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.184 248514 DEBUG nova.network.neutron [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing network info cache for port 90d7401e-210b-4cbe-b93d-853787434352 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.190 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Start _get_guest_xml network_info=[{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:03:25 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:25 compute-0 podman[381062]: 2025-12-13 09:03:25.312651498 +0000 UTC m=+0.564022583 container create 99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:03:25 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:25 compute-0 ceph-mon[76537]: pgmap v3069: 321 pgs: 321 active+clean; 134 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Dec 13 09:03:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2637367313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.319 248514 WARNING nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.337 248514 DEBUG nova.virt.libvirt.host [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.339 248514 DEBUG nova.virt.libvirt.host [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.346 248514 DEBUG nova.virt.libvirt.host [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.347 248514 DEBUG nova.virt.libvirt.host [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.348 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.348 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.349 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.349 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.350 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.350 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.350 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.351 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.351 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.352 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.352 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.352 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.358 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:25 compute-0 systemd[1]: Started libpod-conmon-99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b.scope.
Dec 13 09:03:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:03:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6c622031b2c89e7d273410c6446491c7c7bdac70252e68f377bcedaa6b81c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6c622031b2c89e7d273410c6446491c7c7bdac70252e68f377bcedaa6b81c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6c622031b2c89e7d273410c6446491c7c7bdac70252e68f377bcedaa6b81c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6c622031b2c89e7d273410c6446491c7c7bdac70252e68f377bcedaa6b81c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:25 compute-0 podman[381062]: 2025-12-13 09:03:25.456494122 +0000 UTC m=+0.707865207 container init 99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 09:03:25 compute-0 podman[381062]: 2025-12-13 09:03:25.473000945 +0000 UTC m=+0.724372030 container start 99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:03:25 compute-0 podman[381062]: 2025-12-13 09:03:25.47638401 +0000 UTC m=+0.727755115 container attach 99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_poitras, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:03:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:03:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:03:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3845884973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:25 compute-0 nova_compute[248510]: 2025-12-13 09:03:25.993 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.013 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.017 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]: [
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:     {
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:         "available": false,
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:         "being_replaced": false,
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:         "ceph_device_lvm": false,
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:         "lsm_data": {},
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:         "lvs": [],
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:         "path": "/dev/sr0",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:         "rejected_reasons": [
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "Insufficient space (<5GB)",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "Has a FileSystem"
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:         ],
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:         "sys_api": {
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "actuators": null,
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "device_nodes": [
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:                 "sr0"
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             ],
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "devname": "sr0",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "human_readable_size": "482.00 KB",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "id_bus": "ata",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "model": "QEMU DVD-ROM",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "nr_requests": "2",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "parent": "/dev/sr0",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "partitions": {},
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "path": "/dev/sr0",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "removable": "1",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "rev": "2.5+",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "ro": "0",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "rotational": "1",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "sas_address": "",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "sas_device_handle": "",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "scheduler_mode": "mq-deadline",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "sectors": 0,
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "sectorsize": "2048",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "size": 493568.0,
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "support_discard": "2048",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "type": "disk",
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:             "vendor": "QEMU"
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:         }
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]:     }
Dec 13 09:03:26 compute-0 compassionate_poitras[381079]: ]
Dec 13 09:03:26 compute-0 systemd[1]: libpod-99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b.scope: Deactivated successfully.
Dec 13 09:03:26 compute-0 podman[381884]: 2025-12-13 09:03:26.13698066 +0000 UTC m=+0.038086725 container died 99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:03:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-62d6c622031b2c89e7d273410c6446491c7c7bdac70252e68f377bcedaa6b81c-merged.mount: Deactivated successfully.
Dec 13 09:03:26 compute-0 podman[381884]: 2025-12-13 09:03:26.18209752 +0000 UTC m=+0.083203565 container remove 99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_poitras, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 09:03:26 compute-0 systemd[1]: libpod-conmon-99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b.scope: Deactivated successfully.
Dec 13 09:03:26 compute-0 sudo[380960]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:03:26 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:03:26 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:03:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:03:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:03:26 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:03:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:03:26 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:03:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:03:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:03:26 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:03:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:03:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:03:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3845884973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:03:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:03:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:03:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:03:26 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:03:26 compute-0 sudo[381918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:03:26 compute-0 sudo[381918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:26 compute-0 sudo[381918]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:26 compute-0 sudo[381943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:03:26 compute-0 sudo[381943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3070: 321 pgs: 321 active+clean; 134 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 105 op/s
Dec 13 09:03:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:03:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2289559413' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.582 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.585 248514 DEBUG nova.virt.libvirt.vif [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:03:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1016404015',display_name='tempest-TestNetworkBasicOps-server-1016404015',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1016404015',id=133,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5E+jKld2VdKWtObDqeKBGvKmvekTSiiA9sjTJdF7hEcONw3irfWTSpIiwVY9k/7NWwXxkQuubngpznyfOsRWuRq1jkSBu1LNMt0g8LKC1KQLWo836n2hzpQ9ilyrcugQ==',key_name='tempest-TestNetworkBasicOps-149670649',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-i1dfr0ue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:03:21Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=19f43277-8c8d-48f2-9ec7-2b0d36a06e27,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.585 248514 DEBUG nova.network.os_vif_util [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.587 248514 DEBUG nova.network.os_vif_util [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.589 248514 DEBUG nova.objects.instance [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.616 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:03:26 compute-0 nova_compute[248510]:   <uuid>19f43277-8c8d-48f2-9ec7-2b0d36a06e27</uuid>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   <name>instance-00000085</name>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-1016404015</nova:name>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:03:25</nova:creationTime>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:03:26 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:03:26 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:03:26 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:03:26 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:03:26 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:03:26 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 09:03:26 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:03:26 compute-0 nova_compute[248510]:         <nova:port uuid="90d7401e-210b-4cbe-b93d-853787434352">
Dec 13 09:03:26 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <system>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <entry name="serial">19f43277-8c8d-48f2-9ec7-2b0d36a06e27</entry>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <entry name="uuid">19f43277-8c8d-48f2-9ec7-2b0d36a06e27</entry>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     </system>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   <os>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   </os>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   <features>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   </features>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk">
Dec 13 09:03:26 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       </source>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:03:26 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk.config">
Dec 13 09:03:26 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       </source>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:03:26 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:69:b4:6d"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <target dev="tap90d7401e-21"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/console.log" append="off"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <video>
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     </video>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:03:26 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:03:26 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:03:26 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:03:26 compute-0 nova_compute[248510]: </domain>
Dec 13 09:03:26 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.617 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Preparing to wait for external event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.617 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.618 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.618 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.619 248514 DEBUG nova.virt.libvirt.vif [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:03:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1016404015',display_name='tempest-TestNetworkBasicOps-server-1016404015',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1016404015',id=133,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5E+jKld2VdKWtObDqeKBGvKmvekTSiiA9sjTJdF7hEcONw3irfWTSpIiwVY9k/7NWwXxkQuubngpznyfOsRWuRq1jkSBu1LNMt0g8LKC1KQLWo836n2hzpQ9ilyrcugQ==',key_name='tempest-TestNetworkBasicOps-149670649',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-i1dfr0ue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:03:21Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=19f43277-8c8d-48f2-9ec7-2b0d36a06e27,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.620 248514 DEBUG nova.network.os_vif_util [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.621 248514 DEBUG nova.network.os_vif_util [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.622 248514 DEBUG os_vif [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.623 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.624 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.625 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.631 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.631 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap90d7401e-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.632 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap90d7401e-21, col_values=(('external_ids', {'iface-id': '90d7401e-210b-4cbe-b93d-853787434352', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:b4:6d', 'vm-uuid': '19f43277-8c8d-48f2-9ec7-2b0d36a06e27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:26 compute-0 NetworkManager[50376]: <info>  [1765616606.6366] manager: (tap90d7401e-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/558)
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.653 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.656 248514 INFO os_vif [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21')
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.662 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.662 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.713 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.714 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.714 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:69:b4:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.715 248514 INFO nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Using config drive
Dec 13 09:03:26 compute-0 podman[381982]: 2025-12-13 09:03:26.732962182 +0000 UTC m=+0.066164399 container create e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.765 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:26 compute-0 systemd[1]: Started libpod-conmon-e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039.scope.
Dec 13 09:03:26 compute-0 podman[381982]: 2025-12-13 09:03:26.69736126 +0000 UTC m=+0.030563577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:03:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:03:26 compute-0 podman[381982]: 2025-12-13 09:03:26.831581423 +0000 UTC m=+0.164783650 container init e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:03:26 compute-0 podman[381982]: 2025-12-13 09:03:26.842014984 +0000 UTC m=+0.175217201 container start e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 09:03:26 compute-0 podman[381982]: 2025-12-13 09:03:26.845669096 +0000 UTC m=+0.178871323 container attach e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 09:03:26 compute-0 busy_thompson[382016]: 167 167
Dec 13 09:03:26 compute-0 systemd[1]: libpod-e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039.scope: Deactivated successfully.
Dec 13 09:03:26 compute-0 podman[381982]: 2025-12-13 09:03:26.848410154 +0000 UTC m=+0.181612371 container died e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 09:03:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d750da6e8095c084b0289c5babe8864245de2e7d98ba91d06411a002f436c877-merged.mount: Deactivated successfully.
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.882 248514 DEBUG nova.network.neutron [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updated VIF entry in instance network info cache for port daddb809-b36f-4f29-bd15-459cfd21a812. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.883 248514 DEBUG nova.network.neutron [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updating instance_info_cache with network_info: [{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:03:26 compute-0 podman[381982]: 2025-12-13 09:03:26.889049243 +0000 UTC m=+0.222251490 container remove e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 09:03:26 compute-0 nova_compute[248510]: 2025-12-13 09:03:26.908 248514 DEBUG oslo_concurrency.lockutils [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:03:26 compute-0 systemd[1]: libpod-conmon-e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039.scope: Deactivated successfully.
Dec 13 09:03:27 compute-0 nova_compute[248510]: 2025-12-13 09:03:27.029 248514 DEBUG nova.network.neutron [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updated VIF entry in instance network info cache for port 90d7401e-210b-4cbe-b93d-853787434352. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:03:27 compute-0 nova_compute[248510]: 2025-12-13 09:03:27.030 248514 DEBUG nova.network.neutron [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:03:27 compute-0 nova_compute[248510]: 2025-12-13 09:03:27.054 248514 DEBUG oslo_concurrency.lockutils [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:03:27 compute-0 podman[382039]: 2025-12-13 09:03:27.14400377 +0000 UTC m=+0.055000989 container create 6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:03:27 compute-0 nova_compute[248510]: 2025-12-13 09:03:27.175 248514 INFO nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Creating config drive at /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/disk.config
Dec 13 09:03:27 compute-0 systemd[1]: Started libpod-conmon-6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3.scope.
Dec 13 09:03:27 compute-0 nova_compute[248510]: 2025-12-13 09:03:27.186 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu0u_9rke execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:27 compute-0 podman[382039]: 2025-12-13 09:03:27.118926572 +0000 UTC m=+0.029923781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:03:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9a1ae178c29724a611fa06eb4a4e6aba666abadfd249f4605e32988aaeecab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9a1ae178c29724a611fa06eb4a4e6aba666abadfd249f4605e32988aaeecab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9a1ae178c29724a611fa06eb4a4e6aba666abadfd249f4605e32988aaeecab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9a1ae178c29724a611fa06eb4a4e6aba666abadfd249f4605e32988aaeecab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9a1ae178c29724a611fa06eb4a4e6aba666abadfd249f4605e32988aaeecab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:27 compute-0 podman[382039]: 2025-12-13 09:03:27.252670933 +0000 UTC m=+0.163668142 container init 6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 09:03:27 compute-0 podman[382039]: 2025-12-13 09:03:27.264547201 +0000 UTC m=+0.175544380 container start 6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 09:03:27 compute-0 podman[382039]: 2025-12-13 09:03:27.268234943 +0000 UTC m=+0.179232142 container attach 6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:03:27 compute-0 ceph-mon[76537]: pgmap v3070: 321 pgs: 321 active+clean; 134 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 105 op/s
Dec 13 09:03:27 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2289559413' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:27 compute-0 nova_compute[248510]: 2025-12-13 09:03:27.348 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu0u_9rke" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:27 compute-0 nova_compute[248510]: 2025-12-13 09:03:27.387 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:27 compute-0 nova_compute[248510]: 2025-12-13 09:03:27.392 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/disk.config 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:27 compute-0 nova_compute[248510]: 2025-12-13 09:03:27.561 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/disk.config 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:27 compute-0 nova_compute[248510]: 2025-12-13 09:03:27.562 248514 INFO nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Deleting local config drive /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/disk.config because it was imported into RBD.
Dec 13 09:03:27 compute-0 kernel: tap90d7401e-21: entered promiscuous mode
Dec 13 09:03:27 compute-0 NetworkManager[50376]: <info>  [1765616607.6145] manager: (tap90d7401e-21): new Tun device (/org/freedesktop/NetworkManager/Devices/559)
Dec 13 09:03:27 compute-0 nova_compute[248510]: 2025-12-13 09:03:27.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:27 compute-0 ovn_controller[148476]: 2025-12-13T09:03:27Z|01353|binding|INFO|Claiming lport 90d7401e-210b-4cbe-b93d-853787434352 for this chassis.
Dec 13 09:03:27 compute-0 ovn_controller[148476]: 2025-12-13T09:03:27Z|01354|binding|INFO|90d7401e-210b-4cbe-b93d-853787434352: Claiming fa:16:3e:69:b4:6d 10.100.0.14
Dec 13 09:03:27 compute-0 systemd-udevd[382117]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.668 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:b4:6d 10.100.0.14'], port_security=['fa:16:3e:69:b4:6d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '19f43277-8c8d-48f2-9ec7-2b0d36a06e27', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42e759fc-c2e4-4c38-92ea-ac51e44d350f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f130a7c-de13-4843-94c3-b6b246bfa796, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=90d7401e-210b-4cbe-b93d-853787434352) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.669 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 90d7401e-210b-4cbe-b93d-853787434352 in datapath c2c4278e-24ed-4454-ae7e-9f1ba7df516c bound to our chassis
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.671 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2c4278e-24ed-4454-ae7e-9f1ba7df516c
Dec 13 09:03:27 compute-0 ovn_controller[148476]: 2025-12-13T09:03:27Z|01355|binding|INFO|Setting lport 90d7401e-210b-4cbe-b93d-853787434352 ovn-installed in OVS
Dec 13 09:03:27 compute-0 ovn_controller[148476]: 2025-12-13T09:03:27Z|01356|binding|INFO|Setting lport 90d7401e-210b-4cbe-b93d-853787434352 up in Southbound
Dec 13 09:03:27 compute-0 nova_compute[248510]: 2025-12-13 09:03:27.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce65bf5-a28a-4d8c-98c1-1a5d094ac3d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.687 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc2c4278e-21 in ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:03:27 compute-0 NetworkManager[50376]: <info>  [1765616607.6900] device (tap90d7401e-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.689 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc2c4278e-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:03:27 compute-0 NetworkManager[50376]: <info>  [1765616607.6911] device (tap90d7401e-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.689 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[986fab41-13dc-4bde-98d9-25e78476554c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.690 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f47d972e-35df-44fe-b2a6-ccaafc06d3cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 systemd-machined[210538]: New machine qemu-162-instance-00000085.
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.702 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[993179a7-4b2a-4e5c-a779-54d67ec6536c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 systemd[1]: Started Virtual Machine qemu-162-instance-00000085.
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.721 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2162e4fd-c8db-4ab6-b5fc-b835e39b4ea2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.773 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[311278bb-05da-4d72-9cda-5fa8567a7c77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.782 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d09e80-271f-4a28-9fb8-ad5424b51966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 NetworkManager[50376]: <info>  [1765616607.7828] manager: (tapc2c4278e-20): new Veth device (/org/freedesktop/NetworkManager/Devices/560)
Dec 13 09:03:27 compute-0 great_chaplygin[382056]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:03:27 compute-0 great_chaplygin[382056]: --> All data devices are unavailable
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.851 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[280d359c-14ed-4ce3-a669-b667d7be478d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.855 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ed94591e-2878-4323-a6a0-aa361dcb9b51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 systemd[1]: libpod-6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3.scope: Deactivated successfully.
Dec 13 09:03:27 compute-0 conmon[382056]: conmon 6226b5a06f452c16c681 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3.scope/container/memory.events
Dec 13 09:03:27 compute-0 podman[382039]: 2025-12-13 09:03:27.859505487 +0000 UTC m=+0.770502686 container died 6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:03:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce9a1ae178c29724a611fa06eb4a4e6aba666abadfd249f4605e32988aaeecab-merged.mount: Deactivated successfully.
Dec 13 09:03:27 compute-0 NetworkManager[50376]: <info>  [1765616607.8995] device (tapc2c4278e-20): carrier: link connected
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.905 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c4a0e9-36ab-4ee5-abf6-c550b49645d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 podman[382039]: 2025-12-13 09:03:27.910925605 +0000 UTC m=+0.821922784 container remove 6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 09:03:27 compute-0 systemd[1]: libpod-conmon-6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3.scope: Deactivated successfully.
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.927 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55d208dc-df8f-4204-b1c6-729c5fea5787]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2c4278e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:a6:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 392], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918511, 'reachable_time': 16376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382171, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.942 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc2e4cf-89ea-4c5d-a4cc-9e767ff2bd9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:a650'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 918511, 'tstamp': 918511}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382172, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 sudo[381943]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.958 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6206ba35-d117-41d7-bc69-c08f76edc884]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2c4278e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:a6:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 392], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918511, 'reachable_time': 16376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382173, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.993 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0d48d4c0-d053-419a-97f3-3606362b9689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:28 compute-0 sudo[382174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:03:28 compute-0 sudo[382174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:28 compute-0 sudo[382174]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.059 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:28 compute-0 sudo[382203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:03:28 compute-0 sudo[382203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.081 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0045352b-7450-404f-b2a6-7a695ee65e73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.085 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2c4278e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.085 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.086 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2c4278e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:28 compute-0 kernel: tapc2c4278e-20: entered promiscuous mode
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:28 compute-0 NetworkManager[50376]: <info>  [1765616608.0898] manager: (tapc2c4278e-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/561)
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.091 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.095 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2c4278e-20, col_values=(('external_ids', {'iface-id': 'fb6127ed-4b20-49d3-8950-376b6e1d5999'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:28 compute-0 ovn_controller[148476]: 2025-12-13T09:03:28Z|01357|binding|INFO|Releasing lport fb6127ed-4b20-49d3-8950-376b6e1d5999 from this chassis (sb_readonly=0)
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.097 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.098 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.099 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c2c4278e-24ed-4454-ae7e-9f1ba7df516c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c2c4278e-24ed-4454-ae7e-9f1ba7df516c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.100 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[27bf95f2-812f-4f83-8617-551a1df58c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.101 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-c2c4278e-24ed-4454-ae7e-9f1ba7df516c
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/c2c4278e-24ed-4454-ae7e-9f1ba7df516c.pid.haproxy
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID c2c4278e-24ed-4454-ae7e-9f1ba7df516c
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:03:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.103 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'env', 'PROCESS_TAG=haproxy-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c2c4278e-24ed-4454-ae7e-9f1ba7df516c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.112 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:28 compute-0 podman[382243]: 2025-12-13 09:03:28.378736836 +0000 UTC m=+0.044524806 container create 7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:03:28 compute-0 systemd[1]: Started libpod-conmon-7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438.scope.
Dec 13 09:03:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:03:28 compute-0 podman[382243]: 2025-12-13 09:03:28.358815877 +0000 UTC m=+0.024603847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:03:28 compute-0 podman[382243]: 2025-12-13 09:03:28.464507765 +0000 UTC m=+0.130295755 container init 7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_grothendieck, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:03:28 compute-0 podman[382243]: 2025-12-13 09:03:28.480101116 +0000 UTC m=+0.145889086 container start 7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:03:28 compute-0 podman[382243]: 2025-12-13 09:03:28.486039365 +0000 UTC m=+0.151827335 container attach 7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 09:03:28 compute-0 reverent_grothendieck[382293]: 167 167
Dec 13 09:03:28 compute-0 systemd[1]: libpod-7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438.scope: Deactivated successfully.
Dec 13 09:03:28 compute-0 podman[382243]: 2025-12-13 09:03:28.488026074 +0000 UTC m=+0.153814044 container died 7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_grothendieck, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 09:03:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3071: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 108 op/s
Dec 13 09:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ca9977e371dd94af47d37ffd3dc583b9b073f830b05dabdce9b9c4700f89245-merged.mount: Deactivated successfully.
Dec 13 09:03:28 compute-0 podman[382243]: 2025-12-13 09:03:28.532677183 +0000 UTC m=+0.198465153 container remove 7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:03:28 compute-0 systemd[1]: libpod-conmon-7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438.scope: Deactivated successfully.
Dec 13 09:03:28 compute-0 podman[382323]: 2025-12-13 09:03:28.556169282 +0000 UTC m=+0.066823286 container create 097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.563 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616608.5626156, 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.568 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] VM Started (Lifecycle Event)
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.593 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.597 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616608.5627651, 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.597 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] VM Paused (Lifecycle Event)
Dec 13 09:03:28 compute-0 systemd[1]: Started libpod-conmon-097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd.scope.
Dec 13 09:03:28 compute-0 podman[382323]: 2025-12-13 09:03:28.518718703 +0000 UTC m=+0.029372737 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.620 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.634 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:03:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6230ed043b200e92eb147763a359dfae1b35945024fa55c5518b190e8cc8fc9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.655 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:03:28 compute-0 podman[382323]: 2025-12-13 09:03:28.665641564 +0000 UTC m=+0.176295618 container init 097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 09:03:28 compute-0 podman[382323]: 2025-12-13 09:03:28.670913427 +0000 UTC m=+0.181567461 container start 097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 09:03:28 compute-0 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [NOTICE]   (382364) : New worker (382377) forked
Dec 13 09:03:28 compute-0 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [NOTICE]   (382364) : Loading success.
Dec 13 09:03:28 compute-0 podman[382361]: 2025-12-13 09:03:28.745098965 +0000 UTC m=+0.070647351 container create f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:03:28 compute-0 nova_compute[248510]: 2025-12-13 09:03:28.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:03:28 compute-0 podman[382361]: 2025-12-13 09:03:28.70699477 +0000 UTC m=+0.032543196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:03:28 compute-0 systemd[1]: Started libpod-conmon-f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348.scope.
Dec 13 09:03:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b713d3ae5171aeb4c77dd90220f207f254f737261cfa9be648806096b5d7d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b713d3ae5171aeb4c77dd90220f207f254f737261cfa9be648806096b5d7d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b713d3ae5171aeb4c77dd90220f207f254f737261cfa9be648806096b5d7d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b713d3ae5171aeb4c77dd90220f207f254f737261cfa9be648806096b5d7d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:28 compute-0 podman[382361]: 2025-12-13 09:03:28.864148778 +0000 UTC m=+0.189697184 container init f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 09:03:28 compute-0 podman[382361]: 2025-12-13 09:03:28.874324493 +0000 UTC m=+0.199872919 container start f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:03:28 compute-0 podman[382361]: 2025-12-13 09:03:28.878708443 +0000 UTC m=+0.204256829 container attach f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 09:03:29 compute-0 jovial_franklin[382390]: {
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:     "0": [
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:         {
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "devices": [
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "/dev/loop3"
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             ],
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_name": "ceph_lv0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_size": "21470642176",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "name": "ceph_lv0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "tags": {
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.cluster_name": "ceph",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.crush_device_class": "",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.encrypted": "0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.objectstore": "bluestore",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.osd_id": "0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.type": "block",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.vdo": "0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.with_tpm": "0"
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             },
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "type": "block",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "vg_name": "ceph_vg0"
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:         }
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:     ],
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:     "1": [
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:         {
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "devices": [
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "/dev/loop4"
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             ],
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_name": "ceph_lv1",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_size": "21470642176",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "name": "ceph_lv1",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "tags": {
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.cluster_name": "ceph",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.crush_device_class": "",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.encrypted": "0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.objectstore": "bluestore",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.osd_id": "1",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.type": "block",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.vdo": "0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.with_tpm": "0"
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             },
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "type": "block",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "vg_name": "ceph_vg1"
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:         }
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:     ],
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:     "2": [
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:         {
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "devices": [
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "/dev/loop5"
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             ],
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_name": "ceph_lv2",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_size": "21470642176",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "name": "ceph_lv2",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "tags": {
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.cluster_name": "ceph",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.crush_device_class": "",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.encrypted": "0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.objectstore": "bluestore",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.osd_id": "2",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.type": "block",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.vdo": "0",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:                 "ceph.with_tpm": "0"
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             },
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "type": "block",
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:             "vg_name": "ceph_vg2"
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:         }
Dec 13 09:03:29 compute-0 jovial_franklin[382390]:     ]
Dec 13 09:03:29 compute-0 jovial_franklin[382390]: }
Dec 13 09:03:29 compute-0 systemd[1]: libpod-f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348.scope: Deactivated successfully.
Dec 13 09:03:29 compute-0 conmon[382390]: conmon f30cf2ea2209ea199fef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348.scope/container/memory.events
Dec 13 09:03:29 compute-0 podman[382361]: 2025-12-13 09:03:29.214872195 +0000 UTC m=+0.540420641 container died f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 09:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9b713d3ae5171aeb4c77dd90220f207f254f737261cfa9be648806096b5d7d7-merged.mount: Deactivated successfully.
Dec 13 09:03:29 compute-0 podman[382361]: 2025-12-13 09:03:29.262293773 +0000 UTC m=+0.587842149 container remove f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:03:29 compute-0 systemd[1]: libpod-conmon-f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348.scope: Deactivated successfully.
Dec 13 09:03:29 compute-0 sudo[382203]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:29 compute-0 sudo[382412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:03:29 compute-0 sudo[382412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:29 compute-0 sudo[382412]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:29 compute-0 sudo[382437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:03:29 compute-0 sudo[382437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:29 compute-0 ceph-mon[76537]: pgmap v3071: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 108 op/s
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.667 248514 DEBUG nova.compute.manager [req-8298e340-4471-4323-82b2-f212657de814 req-8bf8d25e-5259-4e57-bcbc-8e186f131645 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.668 248514 DEBUG oslo_concurrency.lockutils [req-8298e340-4471-4323-82b2-f212657de814 req-8bf8d25e-5259-4e57-bcbc-8e186f131645 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.668 248514 DEBUG oslo_concurrency.lockutils [req-8298e340-4471-4323-82b2-f212657de814 req-8bf8d25e-5259-4e57-bcbc-8e186f131645 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.668 248514 DEBUG oslo_concurrency.lockutils [req-8298e340-4471-4323-82b2-f212657de814 req-8bf8d25e-5259-4e57-bcbc-8e186f131645 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.668 248514 DEBUG nova.compute.manager [req-8298e340-4471-4323-82b2-f212657de814 req-8bf8d25e-5259-4e57-bcbc-8e186f131645 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Processing event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.669 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.674 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.678 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616609.6743977, 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.679 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] VM Resumed (Lifecycle Event)
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.692 248514 INFO nova.virt.libvirt.driver [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Instance spawned successfully.
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.693 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.704 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.714 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.720 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.720 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.721 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.722 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.722 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:29 compute-0 nova_compute[248510]: 2025-12-13 09:03:29.723 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:29 compute-0 podman[382474]: 2025-12-13 09:03:29.775552732 +0000 UTC m=+0.055052491 container create 6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_heyrovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:03:29 compute-0 systemd[1]: Started libpod-conmon-6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238.scope.
Dec 13 09:03:29 compute-0 podman[382474]: 2025-12-13 09:03:29.752795212 +0000 UTC m=+0.032295001 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:03:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:03:29 compute-0 podman[382474]: 2025-12-13 09:03:29.879999119 +0000 UTC m=+0.159498908 container init 6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:03:29 compute-0 podman[382474]: 2025-12-13 09:03:29.887748033 +0000 UTC m=+0.167247782 container start 6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_heyrovsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:03:29 compute-0 podman[382474]: 2025-12-13 09:03:29.892508102 +0000 UTC m=+0.172007861 container attach 6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_heyrovsky, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:03:29 compute-0 affectionate_heyrovsky[382490]: 167 167
Dec 13 09:03:29 compute-0 systemd[1]: libpod-6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238.scope: Deactivated successfully.
Dec 13 09:03:29 compute-0 conmon[382490]: conmon 6cacd9a1213363aa5b32 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238.scope/container/memory.events
Dec 13 09:03:29 compute-0 podman[382474]: 2025-12-13 09:03:29.899429065 +0000 UTC m=+0.178928824 container died 6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-68dc095295ad6d7fc239b1dd7d7d480e9bbd23158f5794200bbd9b16e2b26fc7-merged.mount: Deactivated successfully.
Dec 13 09:03:29 compute-0 podman[382474]: 2025-12-13 09:03:29.957517971 +0000 UTC m=+0.237017730 container remove 6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_heyrovsky, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:03:29 compute-0 systemd[1]: libpod-conmon-6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238.scope: Deactivated successfully.
Dec 13 09:03:30 compute-0 nova_compute[248510]: 2025-12-13 09:03:30.015 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:03:30 compute-0 nova_compute[248510]: 2025-12-13 09:03:30.061 248514 INFO nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Took 8.32 seconds to spawn the instance on the hypervisor.
Dec 13 09:03:30 compute-0 nova_compute[248510]: 2025-12-13 09:03:30.062 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:30 compute-0 podman[382512]: 2025-12-13 09:03:30.162843805 +0000 UTC m=+0.050333492 container create ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermat, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:03:30 compute-0 systemd[1]: Started libpod-conmon-ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e.scope.
Dec 13 09:03:30 compute-0 nova_compute[248510]: 2025-12-13 09:03:30.213 248514 INFO nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Took 9.55 seconds to build instance.
Dec 13 09:03:30 compute-0 podman[382512]: 2025-12-13 09:03:30.139443189 +0000 UTC m=+0.026932906 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:03:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0855d22b735a99016dde39240f2ff9ef98d062917575f69a8f5a6711f8a21ace/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0855d22b735a99016dde39240f2ff9ef98d062917575f69a8f5a6711f8a21ace/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0855d22b735a99016dde39240f2ff9ef98d062917575f69a8f5a6711f8a21ace/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0855d22b735a99016dde39240f2ff9ef98d062917575f69a8f5a6711f8a21ace/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:30 compute-0 podman[382512]: 2025-12-13 09:03:30.267118898 +0000 UTC m=+0.154608585 container init ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 09:03:30 compute-0 nova_compute[248510]: 2025-12-13 09:03:30.268 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:30 compute-0 podman[382512]: 2025-12-13 09:03:30.275665892 +0000 UTC m=+0.163155549 container start ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:03:30 compute-0 podman[382512]: 2025-12-13 09:03:30.279490508 +0000 UTC m=+0.166980225 container attach ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermat, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:03:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3072: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 13 09:03:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:03:31 compute-0 lvm[382608]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:03:31 compute-0 lvm[382608]: VG ceph_vg0 finished
Dec 13 09:03:31 compute-0 lvm[382609]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:03:31 compute-0 lvm[382609]: VG ceph_vg1 finished
Dec 13 09:03:31 compute-0 lvm[382610]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:03:31 compute-0 lvm[382610]: VG ceph_vg2 finished
Dec 13 09:03:31 compute-0 charming_fermat[382529]: {}
Dec 13 09:03:31 compute-0 systemd[1]: libpod-ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e.scope: Deactivated successfully.
Dec 13 09:03:31 compute-0 systemd[1]: libpod-ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e.scope: Consumed 1.543s CPU time.
Dec 13 09:03:31 compute-0 podman[382512]: 2025-12-13 09:03:31.26304364 +0000 UTC m=+1.150533337 container died ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermat, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0855d22b735a99016dde39240f2ff9ef98d062917575f69a8f5a6711f8a21ace-merged.mount: Deactivated successfully.
Dec 13 09:03:31 compute-0 podman[382512]: 2025-12-13 09:03:31.321036103 +0000 UTC m=+1.208525770 container remove ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermat, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 09:03:31 compute-0 systemd[1]: libpod-conmon-ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e.scope: Deactivated successfully.
Dec 13 09:03:31 compute-0 sudo[382437]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:03:31 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:03:31 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:31 compute-0 sudo[382624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:03:31 compute-0 sudo[382624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:03:31 compute-0 sudo[382624]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:31 compute-0 ceph-mon[76537]: pgmap v3072: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 13 09:03:31 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:31 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:03:31 compute-0 nova_compute[248510]: 2025-12-13 09:03:31.636 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:31 compute-0 nova_compute[248510]: 2025-12-13 09:03:31.769 248514 DEBUG nova.compute.manager [req-38381ace-2e78-4951-898f-533375b7f457 req-bafb7411-1f08-429b-887c-ee0d5174e0b8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:31 compute-0 nova_compute[248510]: 2025-12-13 09:03:31.771 248514 DEBUG oslo_concurrency.lockutils [req-38381ace-2e78-4951-898f-533375b7f457 req-bafb7411-1f08-429b-887c-ee0d5174e0b8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:31 compute-0 nova_compute[248510]: 2025-12-13 09:03:31.771 248514 DEBUG oslo_concurrency.lockutils [req-38381ace-2e78-4951-898f-533375b7f457 req-bafb7411-1f08-429b-887c-ee0d5174e0b8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:31 compute-0 nova_compute[248510]: 2025-12-13 09:03:31.772 248514 DEBUG oslo_concurrency.lockutils [req-38381ace-2e78-4951-898f-533375b7f457 req-bafb7411-1f08-429b-887c-ee0d5174e0b8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:31 compute-0 nova_compute[248510]: 2025-12-13 09:03:31.772 248514 DEBUG nova.compute.manager [req-38381ace-2e78-4951-898f-533375b7f457 req-bafb7411-1f08-429b-887c-ee0d5174e0b8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:03:31 compute-0 nova_compute[248510]: 2025-12-13 09:03:31.773 248514 WARNING nova.compute.manager [req-38381ace-2e78-4951-898f-533375b7f457 req-bafb7411-1f08-429b-887c-ee0d5174e0b8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received unexpected event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with vm_state active and task_state None.
Dec 13 09:03:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3073: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 13 09:03:32 compute-0 ceph-mon[76537]: pgmap v3073: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 13 09:03:33 compute-0 nova_compute[248510]: 2025-12-13 09:03:33.061 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:34 compute-0 nova_compute[248510]: 2025-12-13 09:03:34.277 248514 DEBUG nova.compute.manager [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-changed-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:34 compute-0 nova_compute[248510]: 2025-12-13 09:03:34.277 248514 DEBUG nova.compute.manager [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing instance network info cache due to event network-changed-90d7401e-210b-4cbe-b93d-853787434352. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:03:34 compute-0 nova_compute[248510]: 2025-12-13 09:03:34.277 248514 DEBUG oslo_concurrency.lockutils [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:03:34 compute-0 nova_compute[248510]: 2025-12-13 09:03:34.278 248514 DEBUG oslo_concurrency.lockutils [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:03:34 compute-0 nova_compute[248510]: 2025-12-13 09:03:34.278 248514 DEBUG nova.network.neutron [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing network info cache for port 90d7401e-210b-4cbe-b93d-853787434352 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:03:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3074: 321 pgs: 321 active+clean; 162 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 224 op/s
Dec 13 09:03:34 compute-0 ovn_controller[148476]: 2025-12-13T09:03:34Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:4e:45 10.100.0.11
Dec 13 09:03:34 compute-0 ovn_controller[148476]: 2025-12-13T09:03:34Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:4e:45 10.100.0.11
Dec 13 09:03:35 compute-0 ceph-mon[76537]: pgmap v3074: 321 pgs: 321 active+clean; 162 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 224 op/s
Dec 13 09:03:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:03:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3075: 321 pgs: 321 active+clean; 162 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Dec 13 09:03:36 compute-0 nova_compute[248510]: 2025-12-13 09:03:36.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:36 compute-0 ceph-mon[76537]: pgmap v3075: 321 pgs: 321 active+clean; 162 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Dec 13 09:03:36 compute-0 nova_compute[248510]: 2025-12-13 09:03:36.717 248514 DEBUG nova.network.neutron [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updated VIF entry in instance network info cache for port 90d7401e-210b-4cbe-b93d-853787434352. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:03:36 compute-0 nova_compute[248510]: 2025-12-13 09:03:36.718 248514 DEBUG nova.network.neutron [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:03:36 compute-0 nova_compute[248510]: 2025-12-13 09:03:36.750 248514 DEBUG oslo_concurrency.lockutils [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:03:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:37.188 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:03:37 compute-0 nova_compute[248510]: 2025-12-13 09:03:37.190 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:37.191 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:03:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:37.192 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:38 compute-0 nova_compute[248510]: 2025-12-13 09:03:38.063 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3076: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 13 09:03:39 compute-0 ceph-mon[76537]: pgmap v3076: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 13 09:03:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:03:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:03:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:03:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:03:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:03:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:03:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3077: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 13 09:03:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:03:41 compute-0 podman[382650]: 2025-12-13 09:03:41.01584819 +0000 UTC m=+0.095347820 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd)
Dec 13 09:03:41 compute-0 podman[382651]: 2025-12-13 09:03:41.027275456 +0000 UTC m=+0.097260828 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 13 09:03:41 compute-0 podman[382649]: 2025-12-13 09:03:41.100915931 +0000 UTC m=+0.176492373 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:03:41 compute-0 ceph-mon[76537]: pgmap v3077: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 13 09:03:41 compute-0 nova_compute[248510]: 2025-12-13 09:03:41.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3078: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Dec 13 09:03:42 compute-0 ovn_controller[148476]: 2025-12-13T09:03:42Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:b4:6d 10.100.0.14
Dec 13 09:03:42 compute-0 ovn_controller[148476]: 2025-12-13T09:03:42Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:b4:6d 10.100.0.14
Dec 13 09:03:42 compute-0 nova_compute[248510]: 2025-12-13 09:03:42.776 248514 INFO nova.compute.manager [None req-b721c43c-1d8d-4da5-ba7a-0e0d6c792c3b a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Get console output
Dec 13 09:03:42 compute-0 nova_compute[248510]: 2025-12-13 09:03:42.784 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:03:43 compute-0 nova_compute[248510]: 2025-12-13 09:03:43.057 248514 DEBUG oslo_concurrency.lockutils [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:43 compute-0 nova_compute[248510]: 2025-12-13 09:03:43.057 248514 DEBUG oslo_concurrency.lockutils [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:43 compute-0 nova_compute[248510]: 2025-12-13 09:03:43.058 248514 DEBUG nova.compute.manager [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:43 compute-0 nova_compute[248510]: 2025-12-13 09:03:43.062 248514 DEBUG nova.compute.manager [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 13 09:03:43 compute-0 nova_compute[248510]: 2025-12-13 09:03:43.063 248514 DEBUG nova.objects.instance [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'flavor' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:43 compute-0 nova_compute[248510]: 2025-12-13 09:03:43.066 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:43 compute-0 nova_compute[248510]: 2025-12-13 09:03:43.106 248514 DEBUG nova.virt.libvirt.driver [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 09:03:43 compute-0 ceph-mon[76537]: pgmap v3078: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Dec 13 09:03:44 compute-0 nova_compute[248510]: 2025-12-13 09:03:44.060 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:44 compute-0 nova_compute[248510]: 2025-12-13 09:03:44.061 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:44 compute-0 nova_compute[248510]: 2025-12-13 09:03:44.086 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:03:44 compute-0 nova_compute[248510]: 2025-12-13 09:03:44.170 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:44 compute-0 nova_compute[248510]: 2025-12-13 09:03:44.171 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:44 compute-0 nova_compute[248510]: 2025-12-13 09:03:44.180 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:03:44 compute-0 nova_compute[248510]: 2025-12-13 09:03:44.180 248514 INFO nova.compute.claims [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:03:44 compute-0 nova_compute[248510]: 2025-12-13 09:03:44.319 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3079: 321 pgs: 321 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 194 op/s
Dec 13 09:03:44 compute-0 ceph-mon[76537]: pgmap v3079: 321 pgs: 321 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 194 op/s
Dec 13 09:03:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:03:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3062633587' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:03:44 compute-0 nova_compute[248510]: 2025-12-13 09:03:44.963 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:44 compute-0 nova_compute[248510]: 2025-12-13 09:03:44.972 248514 DEBUG nova.compute.provider_tree [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.036 248514 DEBUG nova.scheduler.client.report [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.068 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.069 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.125 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.126 248514 DEBUG nova.network.neutron [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.158 248514 INFO nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.190 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.289 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.290 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.291 248514 INFO nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Creating image(s)
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.314 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.348 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.371 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.376 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.455 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.456 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.457 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.457 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.483 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.488 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c89b029c-146b-47ae-8961-0000d4e49e29_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:45 compute-0 kernel: tapdaddb809-b3 (unregistering): left promiscuous mode
Dec 13 09:03:45 compute-0 NetworkManager[50376]: <info>  [1765616625.5184] device (tapdaddb809-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:03:45 compute-0 ovn_controller[148476]: 2025-12-13T09:03:45Z|01358|binding|INFO|Releasing lport daddb809-b36f-4f29-bd15-459cfd21a812 from this chassis (sb_readonly=0)
Dec 13 09:03:45 compute-0 ovn_controller[148476]: 2025-12-13T09:03:45Z|01359|binding|INFO|Setting lport daddb809-b36f-4f29-bd15-459cfd21a812 down in Southbound
Dec 13 09:03:45 compute-0 ovn_controller[148476]: 2025-12-13T09:03:45Z|01360|binding|INFO|Removing iface tapdaddb809-b3 ovn-installed in OVS
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.534 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:45.537 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:4e:45 10.100.0.11'], port_security=['fa:16:3e:b3:4e:45 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f5f29271-ff94-4d88-bc99-1cfc3e1128a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd30303-3917-438c-8b47-a12827c948d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '48e751cb-5eae-4702-97af-e0ce47d426b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c018ca30-1abb-42b0-8adb-342f8b53fd8e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=daddb809-b36f-4f29-bd15-459cfd21a812) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:03:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:45.538 158419 INFO neutron.agent.ovn.metadata.agent [-] Port daddb809-b36f-4f29-bd15-459cfd21a812 in datapath cdd30303-3917-438c-8b47-a12827c948d6 unbound from our chassis
Dec 13 09:03:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:45.540 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cdd30303-3917-438c-8b47-a12827c948d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:03:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:45.542 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee0158f-7a08-49d2-b50c-1019f13a49a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:45.542 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 namespace which is not needed anymore
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.553 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:45 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000084.scope: Deactivated successfully.
Dec 13 09:03:45 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000084.scope: Consumed 13.428s CPU time.
Dec 13 09:03:45 compute-0 systemd-machined[210538]: Machine qemu-161-instance-00000084 terminated.
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.741 248514 DEBUG nova.policy [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.762 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:45 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [NOTICE]   (380585) : haproxy version is 2.8.14-c23fe91
Dec 13 09:03:45 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [NOTICE]   (380585) : path to executable is /usr/sbin/haproxy
Dec 13 09:03:45 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [WARNING]  (380585) : Exiting Master process...
Dec 13 09:03:45 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [WARNING]  (380585) : Exiting Master process...
Dec 13 09:03:45 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [ALERT]    (380585) : Current worker (380587) exited with code 143 (Terminated)
Dec 13 09:03:45 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [WARNING]  (380585) : All workers exited. Exiting... (0)
Dec 13 09:03:45 compute-0 systemd[1]: libpod-0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8.scope: Deactivated successfully.
Dec 13 09:03:45 compute-0 podman[382846]: 2025-12-13 09:03:45.900985773 +0000 UTC m=+0.266448536 container died 0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.941 248514 DEBUG nova.compute.manager [req-fb30527f-2c65-4ffe-a700-9f77e379c7fc req-519018d5-9395-4f4e-84e9-3f04fe131e5d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-unplugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.942 248514 DEBUG oslo_concurrency.lockutils [req-fb30527f-2c65-4ffe-a700-9f77e379c7fc req-519018d5-9395-4f4e-84e9-3f04fe131e5d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.942 248514 DEBUG oslo_concurrency.lockutils [req-fb30527f-2c65-4ffe-a700-9f77e379c7fc req-519018d5-9395-4f4e-84e9-3f04fe131e5d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.942 248514 DEBUG oslo_concurrency.lockutils [req-fb30527f-2c65-4ffe-a700-9f77e379c7fc req-519018d5-9395-4f4e-84e9-3f04fe131e5d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.943 248514 DEBUG nova.compute.manager [req-fb30527f-2c65-4ffe-a700-9f77e379c7fc req-519018d5-9395-4f4e-84e9-3f04fe131e5d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-unplugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:03:45 compute-0 nova_compute[248510]: 2025-12-13 09:03:45.943 248514 WARNING nova.compute.manager [req-fb30527f-2c65-4ffe-a700-9f77e379c7fc req-519018d5-9395-4f4e-84e9-3f04fe131e5d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-unplugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state active and task_state powering-off.
Dec 13 09:03:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:03:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3062633587' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:03:46 compute-0 nova_compute[248510]: 2025-12-13 09:03:46.142 248514 INFO nova.virt.libvirt.driver [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance shutdown successfully after 3 seconds.
Dec 13 09:03:46 compute-0 nova_compute[248510]: 2025-12-13 09:03:46.148 248514 INFO nova.virt.libvirt.driver [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance destroyed successfully.
Dec 13 09:03:46 compute-0 nova_compute[248510]: 2025-12-13 09:03:46.149 248514 DEBUG nova.objects.instance [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'numa_topology' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:46 compute-0 nova_compute[248510]: 2025-12-13 09:03:46.180 248514 DEBUG nova.compute.manager [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:46 compute-0 nova_compute[248510]: 2025-12-13 09:03:46.232 248514 DEBUG oslo_concurrency.lockutils [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8-userdata-shm.mount: Deactivated successfully.
Dec 13 09:03:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d734d895c82410d889b2d117e4e4de5869dfd023c73ed8e5614e3fc64a0d3650-merged.mount: Deactivated successfully.
Dec 13 09:03:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3080: 321 pgs: 321 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 395 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Dec 13 09:03:46 compute-0 nova_compute[248510]: 2025-12-13 09:03:46.646 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:46 compute-0 podman[382846]: 2025-12-13 09:03:46.680452733 +0000 UTC m=+1.045915546 container cleanup 0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:03:46 compute-0 systemd[1]: libpod-conmon-0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8.scope: Deactivated successfully.
Dec 13 09:03:46 compute-0 podman[382890]: 2025-12-13 09:03:46.853791886 +0000 UTC m=+0.132774918 container remove 0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:03:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.867 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[baf73bac-eb4a-4a1b-a817-5fc610c88eed]: (4, ('Sat Dec 13 09:03:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 (0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8)\n0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8\nSat Dec 13 09:03:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 (0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8)\n0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.870 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b54ce545-2b2c-41ff-8b39-c49a946c90d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:46 compute-0 nova_compute[248510]: 2025-12-13 09:03:46.870 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c89b029c-146b-47ae-8961-0000d4e49e29_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.871 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd30303-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:46 compute-0 kernel: tapcdd30303-30: left promiscuous mode
Dec 13 09:03:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.911 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[03b75d7d-0387-4b09-8d96-b3687ebdcfec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:46 compute-0 nova_compute[248510]: 2025-12-13 09:03:46.912 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.928 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0991a73-5988-431e-af80-c1cd54941b54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.930 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[830b537b-1c8d-4a40-8dec-b7e83685b765]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.955 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0abb3bb4-89c3-40b5-8343-8688e0b871b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 917754, 'reachable_time': 26809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382933, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:46 compute-0 systemd[1]: run-netns-ovnmeta\x2dcdd30303\x2d3917\x2d438c\x2d8b47\x2da12827c948d6.mount: Deactivated successfully.
Dec 13 09:03:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.961 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:03:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.961 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[45c7e238-edf2-4d72-8fba-4d6b07691e55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:46 compute-0 nova_compute[248510]: 2025-12-13 09:03:46.968 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:03:47 compute-0 nova_compute[248510]: 2025-12-13 09:03:47.069 248514 DEBUG nova.objects.instance [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid c89b029c-146b-47ae-8961-0000d4e49e29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:47 compute-0 ceph-mon[76537]: pgmap v3080: 321 pgs: 321 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 395 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Dec 13 09:03:47 compute-0 nova_compute[248510]: 2025-12-13 09:03:47.128 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:03:47 compute-0 nova_compute[248510]: 2025-12-13 09:03:47.129 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Ensure instance console log exists: /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:03:47 compute-0 nova_compute[248510]: 2025-12-13 09:03:47.130 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:47 compute-0 nova_compute[248510]: 2025-12-13 09:03:47.131 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:47 compute-0 nova_compute[248510]: 2025-12-13 09:03:47.131 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.157851) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616627157958, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2065, "num_deletes": 251, "total_data_size": 3547995, "memory_usage": 3601632, "flush_reason": "Manual Compaction"}
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616627199358, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3438958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59278, "largest_seqno": 61342, "table_properties": {"data_size": 3429635, "index_size": 5816, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19158, "raw_average_key_size": 20, "raw_value_size": 3411078, "raw_average_value_size": 3594, "num_data_blocks": 258, "num_entries": 949, "num_filter_entries": 949, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616412, "oldest_key_time": 1765616412, "file_creation_time": 1765616627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 41706 microseconds, and 11773 cpu microseconds.
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.199561) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3438958 bytes OK
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.199642) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.209324) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.209351) EVENT_LOG_v1 {"time_micros": 1765616627209343, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.209373) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3539313, prev total WAL file size 3539313, number of live WAL files 2.
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.211286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3358KB)], [140(8717KB)]
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616627211432, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 12365360, "oldest_snapshot_seqno": -1}
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8246 keys, 10588064 bytes, temperature: kUnknown
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616627309194, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 10588064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10534897, "index_size": 31422, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20677, "raw_key_size": 215100, "raw_average_key_size": 26, "raw_value_size": 10389645, "raw_average_value_size": 1259, "num_data_blocks": 1219, "num_entries": 8246, "num_filter_entries": 8246, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.309687) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 10588064 bytes
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.312261) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.2 rd, 108.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 8760, records dropped: 514 output_compression: NoCompression
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.312289) EVENT_LOG_v1 {"time_micros": 1765616627312276, "job": 86, "event": "compaction_finished", "compaction_time_micros": 97973, "compaction_time_cpu_micros": 48138, "output_level": 6, "num_output_files": 1, "total_output_size": 10588064, "num_input_records": 8760, "num_output_records": 8246, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616627313807, "job": 86, "event": "table_file_deletion", "file_number": 142}
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616627317106, "job": 86, "event": "table_file_deletion", "file_number": 140}
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.211153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.317243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.317250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.317255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.317259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:03:47 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.317264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:03:47 compute-0 nova_compute[248510]: 2025-12-13 09:03:47.661 248514 DEBUG nova.network.neutron [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Successfully created port: 8b1532d0-4269-4522-a1c5-961c9f9254dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:03:48 compute-0 nova_compute[248510]: 2025-12-13 09:03:48.042 248514 DEBUG nova.compute.manager [req-8dba17f3-1f72-480e-9a5f-ef02df26e9b0 req-85919c04-332c-40f4-a869-e5fcc72e66a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:48 compute-0 nova_compute[248510]: 2025-12-13 09:03:48.043 248514 DEBUG oslo_concurrency.lockutils [req-8dba17f3-1f72-480e-9a5f-ef02df26e9b0 req-85919c04-332c-40f4-a869-e5fcc72e66a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:48 compute-0 nova_compute[248510]: 2025-12-13 09:03:48.043 248514 DEBUG oslo_concurrency.lockutils [req-8dba17f3-1f72-480e-9a5f-ef02df26e9b0 req-85919c04-332c-40f4-a869-e5fcc72e66a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:48 compute-0 nova_compute[248510]: 2025-12-13 09:03:48.043 248514 DEBUG oslo_concurrency.lockutils [req-8dba17f3-1f72-480e-9a5f-ef02df26e9b0 req-85919c04-332c-40f4-a869-e5fcc72e66a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:48 compute-0 nova_compute[248510]: 2025-12-13 09:03:48.043 248514 DEBUG nova.compute.manager [req-8dba17f3-1f72-480e-9a5f-ef02df26e9b0 req-85919c04-332c-40f4-a869-e5fcc72e66a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:03:48 compute-0 nova_compute[248510]: 2025-12-13 09:03:48.044 248514 WARNING nova.compute.manager [req-8dba17f3-1f72-480e-9a5f-ef02df26e9b0 req-85919c04-332c-40f4-a869-e5fcc72e66a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state stopped and task_state None.
Dec 13 09:03:48 compute-0 nova_compute[248510]: 2025-12-13 09:03:48.067 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3081: 321 pgs: 321 active+clean; 228 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 3.4 MiB/s wr, 92 op/s
Dec 13 09:03:48 compute-0 nova_compute[248510]: 2025-12-13 09:03:48.995 248514 DEBUG nova.network.neutron [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Successfully updated port: 8b1532d0-4269-4522-a1c5-961c9f9254dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.014 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.015 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.015 248514 DEBUG nova.network.neutron [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.122 248514 DEBUG nova.compute.manager [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-changed-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.123 248514 DEBUG nova.compute.manager [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Refreshing instance network info cache due to event network-changed-8b1532d0-4269-4522-a1c5-961c9f9254dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.123 248514 DEBUG oslo_concurrency.lockutils [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.235 248514 INFO nova.compute.manager [None req-29f6a72c-096a-4bcd-8a23-5d761f619b4c a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Get console output
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.426 248514 DEBUG nova.objects.instance [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'flavor' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.450 248514 DEBUG oslo_concurrency.lockutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.450 248514 DEBUG oslo_concurrency.lockutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.450 248514 DEBUG nova.network.neutron [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.450 248514 DEBUG nova.objects.instance [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'info_cache' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:49 compute-0 nova_compute[248510]: 2025-12-13 09:03:49.534 248514 DEBUG nova.network.neutron [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:03:49 compute-0 ceph-mon[76537]: pgmap v3081: 321 pgs: 321 active+clean; 228 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 3.4 MiB/s wr, 92 op/s
Dec 13 09:03:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3082: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.903 248514 DEBUG nova.network.neutron [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updating instance_info_cache with network_info: [{"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.935 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.936 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Instance network_info: |[{"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.936 248514 DEBUG oslo_concurrency.lockutils [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.937 248514 DEBUG nova.network.neutron [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Refreshing network info cache for port 8b1532d0-4269-4522-a1c5-961c9f9254dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.942 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Start _get_guest_xml network_info=[{"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.948 248514 WARNING nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:03:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.970 248514 DEBUG nova.virt.libvirt.host [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.972 248514 DEBUG nova.virt.libvirt.host [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:03:50 compute-0 ceph-mon[76537]: pgmap v3082: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.976 248514 DEBUG nova.virt.libvirt.host [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.977 248514 DEBUG nova.virt.libvirt.host [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.977 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.977 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.978 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.978 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.978 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.979 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.979 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.979 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.979 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.980 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.980 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.980 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:03:50 compute-0 nova_compute[248510]: 2025-12-13 09:03:50.983 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:03:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/138571391' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.556 248514 DEBUG nova.network.neutron [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updating instance_info_cache with network_info: [{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.563 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.586 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.590 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.635 248514 DEBUG oslo_concurrency.lockutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.648 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.665 248514 INFO nova.virt.libvirt.driver [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance destroyed successfully.
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.666 248514 DEBUG nova.objects.instance [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'numa_topology' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.683 248514 DEBUG nova.objects.instance [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.697 248514 DEBUG nova.virt.libvirt.vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:03:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-306376587',display_name='tempest-TestNetworkAdvancedServerOps-server-306376587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-306376587',id=132,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHKEKbLZZE+aW59knHMs4q4nSClPgzNp8F0QXUH6k11KPbTMvxhfQvtv8ScRqzRXYIiwBhktHJd/maU265oWRgbsF6YV/H8WAVywICvzTkBF4CZcbkyPahF1JxO8nrngQ==',key_name='tempest-TestNetworkAdvancedServerOps-1547833332',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:03:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-4215zp0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:03:46Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=f5f29271-ff94-4d88-bc99-1cfc3e1128a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.697 248514 DEBUG nova.network.os_vif_util [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.699 248514 DEBUG nova.network.os_vif_util [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.699 248514 DEBUG os_vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.701 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.702 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaddb809-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.708 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.710 248514 INFO os_vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3')
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.719 248514 DEBUG nova.virt.libvirt.driver [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Start _get_guest_xml network_info=[{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.725 248514 WARNING nova.virt.libvirt.driver [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.728 248514 DEBUG nova.virt.libvirt.host [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.729 248514 DEBUG nova.virt.libvirt.host [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.731 248514 DEBUG nova.virt.libvirt.host [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.732 248514 DEBUG nova.virt.libvirt.host [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.732 248514 DEBUG nova.virt.libvirt.driver [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.733 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.733 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.734 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.734 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.734 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.734 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.735 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.735 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.735 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.736 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.736 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.736 248514 DEBUG nova.objects.instance [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:51 compute-0 nova_compute[248510]: 2025-12-13 09:03:51.762 248514 DEBUG oslo_concurrency.processutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/138571391' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:03:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3396109086' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.172 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.174 248514 DEBUG nova.virt.libvirt.vif [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:03:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1708858245',display_name='tempest-TestNetworkBasicOps-server-1708858245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1708858245',id=134,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL++NwsCfuXrqu4UC7qgUOg89a/J8JDHWy/KO/6t34DIcODsr1DE3qwPt8YEbfOTv/XMYKYNpUizYYeq5l6dOm5t1OYnPo2H2KGHxdPaguJpqsjNgEOg4dU6arkPu8gPzA==',key_name='tempest-TestNetworkBasicOps-1668505721',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-gfm2ze7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:03:45Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=c89b029c-146b-47ae-8961-0000d4e49e29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.174 248514 DEBUG nova.network.os_vif_util [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.175 248514 DEBUG nova.network.os_vif_util [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.177 248514 DEBUG nova.objects.instance [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid c89b029c-146b-47ae-8961-0000d4e49e29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.196 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <uuid>c89b029c-146b-47ae-8961-0000d4e49e29</uuid>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <name>instance-00000086</name>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-1708858245</nova:name>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:03:50</nova:creationTime>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:port uuid="8b1532d0-4269-4522-a1c5-961c9f9254dc">
Dec 13 09:03:52 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <system>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <entry name="serial">c89b029c-146b-47ae-8961-0000d4e49e29</entry>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <entry name="uuid">c89b029c-146b-47ae-8961-0000d4e49e29</entry>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </system>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <os>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </os>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <features>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </features>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c89b029c-146b-47ae-8961-0000d4e49e29_disk">
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </source>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c89b029c-146b-47ae-8961-0000d4e49e29_disk.config">
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </source>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:b8:b5:f3"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <target dev="tap8b1532d0-42"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/console.log" append="off"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <video>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </video>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:03:52 compute-0 nova_compute[248510]: </domain>
Dec 13 09:03:52 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.197 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Preparing to wait for external event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.197 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.197 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.198 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.198 248514 DEBUG nova.virt.libvirt.vif [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:03:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1708858245',display_name='tempest-TestNetworkBasicOps-server-1708858245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1708858245',id=134,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL++NwsCfuXrqu4UC7qgUOg89a/J8JDHWy/KO/6t34DIcODsr1DE3qwPt8YEbfOTv/XMYKYNpUizYYeq5l6dOm5t1OYnPo2H2KGHxdPaguJpqsjNgEOg4dU6arkPu8gPzA==',key_name='tempest-TestNetworkBasicOps-1668505721',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-gfm2ze7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:03:45Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=c89b029c-146b-47ae-8961-0000d4e49e29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.198 248514 DEBUG nova.network.os_vif_util [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.199 248514 DEBUG nova.network.os_vif_util [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.199 248514 DEBUG os_vif [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.200 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.201 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.201 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.203 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.203 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b1532d0-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.203 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b1532d0-42, col_values=(('external_ids', {'iface-id': '8b1532d0-4269-4522-a1c5-961c9f9254dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:b5:f3', 'vm-uuid': 'c89b029c-146b-47ae-8961-0000d4e49e29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:52 compute-0 NetworkManager[50376]: <info>  [1765616632.2073] manager: (tap8b1532d0-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/562)
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.210 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.213 248514 INFO os_vif [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42')
Dec 13 09:03:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:03:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/731843873' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.297 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.298 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.298 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:b8:b5:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.298 248514 INFO nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Using config drive
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.324 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.329 248514 DEBUG oslo_concurrency.processutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.370 248514 DEBUG oslo_concurrency.processutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3083: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.775 248514 DEBUG nova.network.neutron [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updated VIF entry in instance network info cache for port 8b1532d0-4269-4522-a1c5-961c9f9254dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.776 248514 DEBUG nova.network.neutron [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updating instance_info_cache with network_info: [{"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.794 248514 DEBUG oslo_concurrency.lockutils [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:03:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:03:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654709939' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.939 248514 DEBUG oslo_concurrency.processutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.941 248514 DEBUG nova.virt.libvirt.vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:03:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-306376587',display_name='tempest-TestNetworkAdvancedServerOps-server-306376587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-306376587',id=132,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHKEKbLZZE+aW59knHMs4q4nSClPgzNp8F0QXUH6k11KPbTMvxhfQvtv8ScRqzRXYIiwBhktHJd/maU265oWRgbsF6YV/H8WAVywICvzTkBF4CZcbkyPahF1JxO8nrngQ==',key_name='tempest-TestNetworkAdvancedServerOps-1547833332',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:03:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-4215zp0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:03:46Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=f5f29271-ff94-4d88-bc99-1cfc3e1128a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.942 248514 DEBUG nova.network.os_vif_util [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.943 248514 DEBUG nova.network.os_vif_util [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.944 248514 DEBUG nova.objects.instance [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.971 248514 DEBUG nova.virt.libvirt.driver [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <uuid>f5f29271-ff94-4d88-bc99-1cfc3e1128a0</uuid>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <name>instance-00000084</name>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-306376587</nova:name>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:03:51</nova:creationTime>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <nova:port uuid="daddb809-b36f-4f29-bd15-459cfd21a812">
Dec 13 09:03:52 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <system>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <entry name="serial">f5f29271-ff94-4d88-bc99-1cfc3e1128a0</entry>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <entry name="uuid">f5f29271-ff94-4d88-bc99-1cfc3e1128a0</entry>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </system>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <os>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </os>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <features>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </features>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk">
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </source>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config">
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </source>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:03:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:b3:4e:45"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <target dev="tapdaddb809-b3"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/console.log" append="off"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <video>
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </video>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <input type="keyboard" bus="usb"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:03:52 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:03:52 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:03:52 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:03:52 compute-0 nova_compute[248510]: </domain>
Dec 13 09:03:52 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.974 248514 DEBUG nova.virt.libvirt.driver [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.974 248514 DEBUG nova.virt.libvirt.driver [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.975 248514 DEBUG nova.virt.libvirt.vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:03:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-306376587',display_name='tempest-TestNetworkAdvancedServerOps-server-306376587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-306376587',id=132,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHKEKbLZZE+aW59knHMs4q4nSClPgzNp8F0QXUH6k11KPbTMvxhfQvtv8ScRqzRXYIiwBhktHJd/maU265oWRgbsF6YV/H8WAVywICvzTkBF4CZcbkyPahF1JxO8nrngQ==',key_name='tempest-TestNetworkAdvancedServerOps-1547833332',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:03:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-4215zp0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:03:46Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=f5f29271-ff94-4d88-bc99-1cfc3e1128a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.977 248514 DEBUG nova.network.os_vif_util [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.977 248514 DEBUG nova.network.os_vif_util [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.978 248514 DEBUG os_vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.981 248514 INFO nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Creating config drive at /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/disk.config
Dec 13 09:03:52 compute-0 nova_compute[248510]: 2025-12-13 09:03:52.986 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2iw8k2q3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3396109086' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/731843873' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:53 compute-0 ceph-mon[76537]: pgmap v3083: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Dec 13 09:03:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3654709939' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.030 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.032 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.033 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.037 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.037 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaddb809-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.038 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdaddb809-b3, col_values=(('external_ids', {'iface-id': 'daddb809-b36f-4f29-bd15-459cfd21a812', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:4e:45', 'vm-uuid': 'f5f29271-ff94-4d88-bc99-1cfc3e1128a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:53 compute-0 NetworkManager[50376]: <info>  [1765616633.0410] manager: (tapdaddb809-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.042 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.050 248514 INFO os_vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3')
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.070 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:53 compute-0 kernel: tapdaddb809-b3: entered promiscuous mode
Dec 13 09:03:53 compute-0 NetworkManager[50376]: <info>  [1765616633.1312] manager: (tapdaddb809-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/564)
Dec 13 09:03:53 compute-0 ovn_controller[148476]: 2025-12-13T09:03:53Z|01361|binding|INFO|Claiming lport daddb809-b36f-4f29-bd15-459cfd21a812 for this chassis.
Dec 13 09:03:53 compute-0 ovn_controller[148476]: 2025-12-13T09:03:53Z|01362|binding|INFO|daddb809-b36f-4f29-bd15-459cfd21a812: Claiming fa:16:3e:b3:4e:45 10.100.0.11
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.134 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.138 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2iw8k2q3" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.150 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:4e:45 10.100.0.11'], port_security=['fa:16:3e:b3:4e:45 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f5f29271-ff94-4d88-bc99-1cfc3e1128a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd30303-3917-438c-8b47-a12827c948d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '5', 'neutron:security_group_ids': '48e751cb-5eae-4702-97af-e0ce47d426b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c018ca30-1abb-42b0-8adb-342f8b53fd8e, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=daddb809-b36f-4f29-bd15-459cfd21a812) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.153 158419 INFO neutron.agent.ovn.metadata.agent [-] Port daddb809-b36f-4f29-bd15-459cfd21a812 in datapath cdd30303-3917-438c-8b47-a12827c948d6 bound to our chassis
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.156 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cdd30303-3917-438c-8b47-a12827c948d6
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.170 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:03:53 compute-0 systemd-udevd[383162]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.174 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/disk.config c89b029c-146b-47ae-8961-0000d4e49e29_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:03:53 compute-0 ovn_controller[148476]: 2025-12-13T09:03:53Z|01363|binding|INFO|Setting lport daddb809-b36f-4f29-bd15-459cfd21a812 ovn-installed in OVS
Dec 13 09:03:53 compute-0 ovn_controller[148476]: 2025-12-13T09:03:53Z|01364|binding|INFO|Setting lport daddb809-b36f-4f29-bd15-459cfd21a812 up in Southbound
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.176 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4a8ffd-93e6-4080-a700-e8c6d9779e56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.178 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcdd30303-31 in ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.186 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcdd30303-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.186 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7dbdec58-14c2-4f4b-b8f3-c9cc6c3dee47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.189 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb254ef-6365-4321-bf5d-93018d820fd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 NetworkManager[50376]: <info>  [1765616633.1938] device (tapdaddb809-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:03:53 compute-0 NetworkManager[50376]: <info>  [1765616633.1951] device (tapdaddb809-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:03:53 compute-0 systemd-machined[210538]: New machine qemu-163-instance-00000084.
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.206 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ea7b9c-9afe-4fee-b6cb-a54969e4b691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 systemd[1]: Started Virtual Machine qemu-163-instance-00000084.
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.226 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1dc515e-1f0d-4ea2-8f71-761da7e3f49b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.231 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.264 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b7dda8e9-e7d2-491d-b4d7-ef1c3d1ab32c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 NetworkManager[50376]: <info>  [1765616633.2727] manager: (tapcdd30303-30): new Veth device (/org/freedesktop/NetworkManager/Devices/565)
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.271 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[30224f6d-512d-46e3-97c1-d92d49430c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.319 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8e754267-217d-4099-b2df-26e07b60fe1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.323 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e998fe6a-ceab-4648-abc2-6c85e11ca88f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 NetworkManager[50376]: <info>  [1765616633.3564] device (tapcdd30303-30): carrier: link connected
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.365 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[22ecb394-054f-45c5-87ec-106b70f7a194]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.387 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[438c14d4-6ddb-4653-9190-c5070a139d92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd30303-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:ce:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 395], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 921057, 'reachable_time': 22161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383220, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.404 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f51fba81-ca8e-4aae-9ae9-c3fa9e5f0632]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:ce22'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 921057, 'tstamp': 921057}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383221, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.414 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/disk.config c89b029c-146b-47ae-8961-0000d4e49e29_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.415 248514 INFO nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Deleting local config drive /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/disk.config because it was imported into RBD.
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.424 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea8e706f-016b-4874-8d68-c17613ae9d20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd30303-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:ce:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 395], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 921057, 'reachable_time': 22161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383222, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.461 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8336f0-d16d-4e31-91d4-587f7a16222f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 kernel: tap8b1532d0-42: entered promiscuous mode
Dec 13 09:03:53 compute-0 NetworkManager[50376]: <info>  [1765616633.4715] manager: (tap8b1532d0-42): new Tun device (/org/freedesktop/NetworkManager/Devices/566)
Dec 13 09:03:53 compute-0 systemd-udevd[383202]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:03:53 compute-0 ovn_controller[148476]: 2025-12-13T09:03:53Z|01365|binding|INFO|Claiming lport 8b1532d0-4269-4522-a1c5-961c9f9254dc for this chassis.
Dec 13 09:03:53 compute-0 ovn_controller[148476]: 2025-12-13T09:03:53Z|01366|binding|INFO|8b1532d0-4269-4522-a1c5-961c9f9254dc: Claiming fa:16:3e:b8:b5:f3 10.100.0.9
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.483 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b5:f3 10.100.0.9'], port_security=['fa:16:3e:b8:b5:f3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c89b029c-146b-47ae-8961-0000d4e49e29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82626aa1-467a-40c3-a9a5-93921d70f234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f130a7c-de13-4843-94c3-b6b246bfa796, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=8b1532d0-4269-4522-a1c5-961c9f9254dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:03:53 compute-0 NetworkManager[50376]: <info>  [1765616633.4936] device (tap8b1532d0-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:03:53 compute-0 ovn_controller[148476]: 2025-12-13T09:03:53Z|01367|binding|INFO|Setting lport 8b1532d0-4269-4522-a1c5-961c9f9254dc ovn-installed in OVS
Dec 13 09:03:53 compute-0 ovn_controller[148476]: 2025-12-13T09:03:53Z|01368|binding|INFO|Setting lport 8b1532d0-4269-4522-a1c5-961c9f9254dc up in Southbound
Dec 13 09:03:53 compute-0 NetworkManager[50376]: <info>  [1765616633.4942] device (tap8b1532d0-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.495 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:53 compute-0 systemd-machined[210538]: New machine qemu-164-instance-00000086.
Dec 13 09:03:53 compute-0 systemd[1]: Started Virtual Machine qemu-164-instance-00000086.
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.580 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94f8a8a8-d708-42d1-8345-52a52f57a734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.583 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd30303-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.583 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.583 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdd30303-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:53 compute-0 NetworkManager[50376]: <info>  [1765616633.5867] manager: (tapcdd30303-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Dec 13 09:03:53 compute-0 kernel: tapcdd30303-30: entered promiscuous mode
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.595 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcdd30303-30, col_values=(('external_ids', {'iface-id': '2b4706f8-0b78-43d8-a3df-6327712b725d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:53 compute-0 ovn_controller[148476]: 2025-12-13T09:03:53Z|01369|binding|INFO|Releasing lport 2b4706f8-0b78-43d8-a3df-6327712b725d from this chassis (sb_readonly=0)
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.601 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cdd30303-3917-438c-8b47-a12827c948d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cdd30303-3917-438c-8b47-a12827c948d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.602 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7f4a34f6-6dc3-4b62-bc8e-0b268b15e5b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.604 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-cdd30303-3917-438c-8b47-a12827c948d6
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/cdd30303-3917-438c-8b47-a12827c948d6.pid.haproxy
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID cdd30303-3917-438c-8b47-a12827c948d6
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:03:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.604 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'env', 'PROCESS_TAG=haproxy-cdd30303-3917-438c-8b47-a12827c948d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cdd30303-3917-438c-8b47-a12827c948d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.612 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.817 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for f5f29271-ff94-4d88-bc99-1cfc3e1128a0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.818 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616633.8162923, f5f29271-ff94-4d88-bc99-1cfc3e1128a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.818 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] VM Resumed (Lifecycle Event)
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.821 248514 DEBUG nova.compute.manager [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.827 248514 INFO nova.virt.libvirt.driver [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance rebooted successfully.
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.827 248514 DEBUG nova.compute.manager [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.870 248514 DEBUG nova.compute.manager [req-a92df022-5b09-48c2-a156-14e9015b87be req-404bf8fc-6b5a-4f91-ae53-d6080fecc864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.870 248514 DEBUG oslo_concurrency.lockutils [req-a92df022-5b09-48c2-a156-14e9015b87be req-404bf8fc-6b5a-4f91-ae53-d6080fecc864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.870 248514 DEBUG oslo_concurrency.lockutils [req-a92df022-5b09-48c2-a156-14e9015b87be req-404bf8fc-6b5a-4f91-ae53-d6080fecc864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.870 248514 DEBUG oslo_concurrency.lockutils [req-a92df022-5b09-48c2-a156-14e9015b87be req-404bf8fc-6b5a-4f91-ae53-d6080fecc864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.871 248514 DEBUG nova.compute.manager [req-a92df022-5b09-48c2-a156-14e9015b87be req-404bf8fc-6b5a-4f91-ae53-d6080fecc864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.871 248514 WARNING nova.compute.manager [req-a92df022-5b09-48c2-a156-14e9015b87be req-404bf8fc-6b5a-4f91-ae53-d6080fecc864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state stopped and task_state powering-on.
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.872 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.880 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.919 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616633.8167195, f5f29271-ff94-4d88-bc99-1cfc3e1128a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.919 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] VM Started (Lifecycle Event)
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.942 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:53 compute-0 nova_compute[248510]: 2025-12-13 09:03:53.946 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:03:54 compute-0 podman[383333]: 2025-12-13 09:03:54.010095783 +0000 UTC m=+0.031372917 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:03:54 compute-0 nova_compute[248510]: 2025-12-13 09:03:54.125 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616634.1247852, c89b029c-146b-47ae-8961-0000d4e49e29 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:54 compute-0 nova_compute[248510]: 2025-12-13 09:03:54.126 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] VM Started (Lifecycle Event)
Dec 13 09:03:54 compute-0 nova_compute[248510]: 2025-12-13 09:03:54.156 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:54 compute-0 nova_compute[248510]: 2025-12-13 09:03:54.161 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616634.12489, c89b029c-146b-47ae-8961-0000d4e49e29 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:54 compute-0 nova_compute[248510]: 2025-12-13 09:03:54.162 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] VM Paused (Lifecycle Event)
Dec 13 09:03:54 compute-0 nova_compute[248510]: 2025-12-13 09:03:54.191 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:54 compute-0 nova_compute[248510]: 2025-12-13 09:03:54.196 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:03:54 compute-0 nova_compute[248510]: 2025-12-13 09:03:54.226 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:03:54 compute-0 podman[383333]: 2025-12-13 09:03:54.300846628 +0000 UTC m=+0.322123782 container create 231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 09:03:54 compute-0 systemd[1]: Started libpod-conmon-231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7.scope.
Dec 13 09:03:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4488aafd38a05ea281a308b9d734b5be686496064f1968b2b681c3b5260af583/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:03:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3084: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 3.9 MiB/s wr, 108 op/s
Dec 13 09:03:54 compute-0 podman[383333]: 2025-12-13 09:03:54.751741654 +0000 UTC m=+0.773018828 container init 231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 09:03:54 compute-0 podman[383333]: 2025-12-13 09:03:54.760099984 +0000 UTC m=+0.781377128 container start 231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 09:03:54 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [NOTICE]   (383375) : New worker (383377) forked
Dec 13 09:03:54 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [NOTICE]   (383375) : Loading success.
Dec 13 09:03:55 compute-0 ceph-mon[76537]: pgmap v3084: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 3.9 MiB/s wr, 108 op/s
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.222 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 8b1532d0-4269-4522-a1c5-961c9f9254dc in datapath c2c4278e-24ed-4454-ae7e-9f1ba7df516c unbound from our chassis
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.224 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2c4278e-24ed-4454-ae7e-9f1ba7df516c
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.248 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7da951c3-3a9e-4c9a-b1f8-8746ad58b5f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.285 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[02a9e977-3634-46a2-9444-66305c7da9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.290 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[21ce9214-644d-4f0a-b00f-bb51e0c2fe03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.322 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[76f92dc9-4026-4e73-915f-9885839600c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.340 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[95c48d7b-2d9f-429e-820b-769c44cddcb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2c4278e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:a6:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 392], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918511, 'reachable_time': 16376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383391, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.361 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[14308fc9-55b1-4bcf-8a08-af1158889432]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc2c4278e-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 918524, 'tstamp': 918524}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383392, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc2c4278e-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 918528, 'tstamp': 918528}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383392, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.363 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2c4278e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:55 compute-0 nova_compute[248510]: 2025-12-13 09:03:55.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.366 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2c4278e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.366 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.367 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2c4278e-20, col_values=(('external_ids', {'iface-id': 'fb6127ed-4b20-49d3-8950-376b6e1d5999'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.367 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.440 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.440 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.441 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.094 248514 DEBUG nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.095 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.095 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.096 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.096 248514 DEBUG nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.096 248514 WARNING nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state active and task_state None.
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.096 248514 DEBUG nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.097 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.097 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.097 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.098 248514 DEBUG nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Processing event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.098 248514 DEBUG nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.098 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.098 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.099 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.099 248514 DEBUG nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] No waiting events found dispatching network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.099 248514 WARNING nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received unexpected event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc for instance with vm_state building and task_state spawning.
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.100 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.107 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.107 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616636.1067662, c89b029c-146b-47ae-8961-0000d4e49e29 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.107 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] VM Resumed (Lifecycle Event)
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.113 248514 INFO nova.virt.libvirt.driver [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Instance spawned successfully.
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.113 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.131 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.138 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.142 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.143 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.143 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.144 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.144 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.145 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.161 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.260 248514 INFO nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Took 10.97 seconds to spawn the instance on the hypervisor.
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.260 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.351 248514 INFO nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Took 12.21 seconds to build instance.
Dec 13 09:03:56 compute-0 nova_compute[248510]: 2025-12-13 09:03:56.369 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:03:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3085: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Dec 13 09:03:56 compute-0 ceph-mon[76537]: pgmap v3085: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Dec 13 09:03:58 compute-0 nova_compute[248510]: 2025-12-13 09:03:58.041 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:58 compute-0 nova_compute[248510]: 2025-12-13 09:03:58.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:03:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3086: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Dec 13 09:03:59 compute-0 ceph-mon[76537]: pgmap v3086: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Dec 13 09:04:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3087: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 639 KiB/s wr, 164 op/s
Dec 13 09:04:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:04:01 compute-0 ceph-mon[76537]: pgmap v3087: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 639 KiB/s wr, 164 op/s
Dec 13 09:04:01 compute-0 nova_compute[248510]: 2025-12-13 09:04:01.459 248514 DEBUG nova.compute.manager [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-changed-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:01 compute-0 nova_compute[248510]: 2025-12-13 09:04:01.459 248514 DEBUG nova.compute.manager [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Refreshing instance network info cache due to event network-changed-8b1532d0-4269-4522-a1c5-961c9f9254dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:04:01 compute-0 nova_compute[248510]: 2025-12-13 09:04:01.460 248514 DEBUG oslo_concurrency.lockutils [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:04:01 compute-0 nova_compute[248510]: 2025-12-13 09:04:01.460 248514 DEBUG oslo_concurrency.lockutils [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:04:01 compute-0 nova_compute[248510]: 2025-12-13 09:04:01.460 248514 DEBUG nova.network.neutron [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Refreshing network info cache for port 8b1532d0-4269-4522-a1c5-961c9f9254dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:04:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3088: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 149 op/s
Dec 13 09:04:03 compute-0 ceph-mon[76537]: pgmap v3088: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 149 op/s
Dec 13 09:04:03 compute-0 nova_compute[248510]: 2025-12-13 09:04:03.043 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:03 compute-0 nova_compute[248510]: 2025-12-13 09:04:03.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:03 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Dec 13 09:04:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3089: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 25 KiB/s wr, 201 op/s
Dec 13 09:04:04 compute-0 nova_compute[248510]: 2025-12-13 09:04:04.993 248514 DEBUG nova.network.neutron [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updated VIF entry in instance network info cache for port 8b1532d0-4269-4522-a1c5-961c9f9254dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:04:04 compute-0 nova_compute[248510]: 2025-12-13 09:04:04.994 248514 DEBUG nova.network.neutron [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updating instance_info_cache with network_info: [{"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:04:05 compute-0 nova_compute[248510]: 2025-12-13 09:04:05.020 248514 DEBUG oslo_concurrency.lockutils [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:04:05 compute-0 ceph-mon[76537]: pgmap v3089: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 25 KiB/s wr, 201 op/s
Dec 13 09:04:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:04:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3090: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 14 KiB/s wr, 187 op/s
Dec 13 09:04:06 compute-0 ceph-mon[76537]: pgmap v3090: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 14 KiB/s wr, 187 op/s
Dec 13 09:04:06 compute-0 ovn_controller[148476]: 2025-12-13T09:04:06Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:4e:45 10.100.0.11
Dec 13 09:04:08 compute-0 nova_compute[248510]: 2025-12-13 09:04:08.046 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:08 compute-0 nova_compute[248510]: 2025-12-13 09:04:08.075 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3091: 321 pgs: 321 active+clean; 246 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 24 KiB/s wr, 202 op/s
Dec 13 09:04:08 compute-0 nova_compute[248510]: 2025-12-13 09:04:08.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:04:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:04:09
Dec 13 09:04:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:04:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:04:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'images', '.mgr', 'vms', 'volumes', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log']
Dec 13 09:04:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:04:09 compute-0 ceph-mon[76537]: pgmap v3091: 321 pgs: 321 active+clean; 246 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 24 KiB/s wr, 202 op/s
Dec 13 09:04:09 compute-0 ovn_controller[148476]: 2025-12-13T09:04:09Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:b5:f3 10.100.0.9
Dec 13 09:04:09 compute-0 ovn_controller[148476]: 2025-12-13T09:04:09Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:b5:f3 10.100.0.9
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3092: 321 pgs: 321 active+clean; 259 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1001 KiB/s wr, 183 op/s
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:04:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:04:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:04:11 compute-0 ceph-mon[76537]: pgmap v3092: 321 pgs: 321 active+clean; 259 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1001 KiB/s wr, 183 op/s
Dec 13 09:04:12 compute-0 podman[383395]: 2025-12-13 09:04:12.000413129 +0000 UTC m=+0.082256742 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:04:12 compute-0 podman[383396]: 2025-12-13 09:04:12.010936772 +0000 UTC m=+0.079444801 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:04:12 compute-0 podman[383394]: 2025-12-13 09:04:12.054920324 +0000 UTC m=+0.128353667 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 09:04:12 compute-0 nova_compute[248510]: 2025-12-13 09:04:12.175 248514 INFO nova.compute.manager [None req-597a6f41-f0ad-418c-8166-00ef38848157 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Get console output
Dec 13 09:04:12 compute-0 nova_compute[248510]: 2025-12-13 09:04:12.186 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:04:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3093: 321 pgs: 321 active+clean; 259 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 728 KiB/s rd, 1000 KiB/s wr, 119 op/s
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.408 248514 DEBUG nova.compute.manager [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-changed-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.409 248514 DEBUG nova.compute.manager [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Refreshing instance network info cache due to event network-changed-daddb809-b36f-4f29-bd15-459cfd21a812. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.409 248514 DEBUG oslo_concurrency.lockutils [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.410 248514 DEBUG oslo_concurrency.lockutils [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.410 248514 DEBUG nova.network.neutron [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Refreshing network info cache for port daddb809-b36f-4f29-bd15-459cfd21a812 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.499 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.499 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.499 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.500 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.500 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.501 248514 INFO nova.compute.manager [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Terminating instance
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.502 248514 DEBUG nova.compute.manager [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:04:13 compute-0 kernel: tapdaddb809-b3 (unregistering): left promiscuous mode
Dec 13 09:04:13 compute-0 NetworkManager[50376]: <info>  [1765616653.5601] device (tapdaddb809-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:04:13 compute-0 ovn_controller[148476]: 2025-12-13T09:04:13Z|01370|binding|INFO|Releasing lport daddb809-b36f-4f29-bd15-459cfd21a812 from this chassis (sb_readonly=0)
Dec 13 09:04:13 compute-0 ovn_controller[148476]: 2025-12-13T09:04:13Z|01371|binding|INFO|Setting lport daddb809-b36f-4f29-bd15-459cfd21a812 down in Southbound
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:13 compute-0 ovn_controller[148476]: 2025-12-13T09:04:13Z|01372|binding|INFO|Removing iface tapdaddb809-b3 ovn-installed in OVS
Dec 13 09:04:13 compute-0 ceph-mon[76537]: pgmap v3093: 321 pgs: 321 active+clean; 259 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 728 KiB/s rd, 1000 KiB/s wr, 119 op/s
Dec 13 09:04:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:13.604 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:4e:45 10.100.0.11'], port_security=['fa:16:3e:b3:4e:45 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f5f29271-ff94-4d88-bc99-1cfc3e1128a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd30303-3917-438c-8b47-a12827c948d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '48e751cb-5eae-4702-97af-e0ce47d426b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c018ca30-1abb-42b0-8adb-342f8b53fd8e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=daddb809-b36f-4f29-bd15-459cfd21a812) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:04:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:13.606 158419 INFO neutron.agent.ovn.metadata.agent [-] Port daddb809-b36f-4f29-bd15-459cfd21a812 in datapath cdd30303-3917-438c-8b47-a12827c948d6 unbound from our chassis
Dec 13 09:04:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:13.607 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cdd30303-3917-438c-8b47-a12827c948d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:04:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:13.608 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[82aca5ad-23f8-4ab7-8ce2-66acdbf46ded]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:13.609 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 namespace which is not needed anymore
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:13 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000084.scope: Deactivated successfully.
Dec 13 09:04:13 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000084.scope: Consumed 13.649s CPU time.
Dec 13 09:04:13 compute-0 systemd-machined[210538]: Machine qemu-163-instance-00000084 terminated.
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.742 248514 INFO nova.virt.libvirt.driver [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance destroyed successfully.
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.742 248514 DEBUG nova.objects.instance [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.761 248514 DEBUG nova.virt.libvirt.vif [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:03:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-306376587',display_name='tempest-TestNetworkAdvancedServerOps-server-306376587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-306376587',id=132,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHKEKbLZZE+aW59knHMs4q4nSClPgzNp8F0QXUH6k11KPbTMvxhfQvtv8ScRqzRXYIiwBhktHJd/maU265oWRgbsF6YV/H8WAVywICvzTkBF4CZcbkyPahF1JxO8nrngQ==',key_name='tempest-TestNetworkAdvancedServerOps-1547833332',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:03:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-4215zp0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:03:53Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=f5f29271-ff94-4d88-bc99-1cfc3e1128a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.762 248514 DEBUG nova.network.os_vif_util [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.763 248514 DEBUG nova.network.os_vif_util [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.763 248514 DEBUG os_vif [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.766 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.767 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaddb809-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:04:13 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [NOTICE]   (383375) : haproxy version is 2.8.14-c23fe91
Dec 13 09:04:13 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [NOTICE]   (383375) : path to executable is /usr/sbin/haproxy
Dec 13 09:04:13 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [WARNING]  (383375) : Exiting Master process...
Dec 13 09:04:13 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [WARNING]  (383375) : Exiting Master process...
Dec 13 09:04:13 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [ALERT]    (383375) : Current worker (383377) exited with code 143 (Terminated)
Dec 13 09:04:13 compute-0 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [WARNING]  (383375) : All workers exited. Exiting... (0)
Dec 13 09:04:13 compute-0 nova_compute[248510]: 2025-12-13 09:04:13.776 248514 INFO os_vif [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3')
Dec 13 09:04:13 compute-0 systemd[1]: libpod-231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7.scope: Deactivated successfully.
Dec 13 09:04:13 compute-0 podman[383477]: 2025-12-13 09:04:13.785859931 +0000 UTC m=+0.073197265 container died 231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:04:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7-userdata-shm.mount: Deactivated successfully.
Dec 13 09:04:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4488aafd38a05ea281a308b9d734b5be686496064f1968b2b681c3b5260af583-merged.mount: Deactivated successfully.
Dec 13 09:04:14 compute-0 podman[383477]: 2025-12-13 09:04:14.232707747 +0000 UTC m=+0.520045071 container cleanup 231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:04:14 compute-0 systemd[1]: libpod-conmon-231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7.scope: Deactivated successfully.
Dec 13 09:04:14 compute-0 podman[383533]: 2025-12-13 09:04:14.343755079 +0000 UTC m=+0.072214780 container remove 231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 09:04:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.350 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a538a8f1-a2f5-4dab-a68b-dc4a52339101]: (4, ('Sat Dec 13 09:04:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 (231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7)\n231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7\nSat Dec 13 09:04:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 (231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7)\n231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.352 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[400dcc4c-9b1a-4321-a96b-dce8f09f5602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.353 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd30303-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:14 compute-0 nova_compute[248510]: 2025-12-13 09:04:14.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:14 compute-0 kernel: tapcdd30303-30: left promiscuous mode
Dec 13 09:04:14 compute-0 nova_compute[248510]: 2025-12-13 09:04:14.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.373 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9148c643-51e5-4880-8bd9-74606d7a5159]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.391 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[03666ed8-a7f6-44b3-8902-58491e4822de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.393 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9dec077a-3f23-4b80-81e4-1550b19e04d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.409 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb286c48-945e-48f7-9197-eea3a5e03e82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 921047, 'reachable_time': 30164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383549, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.412 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:04:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.412 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[73ef057d-496d-464a-bfa2-572dc3081823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:14 compute-0 systemd[1]: run-netns-ovnmeta\x2dcdd30303\x2d3917\x2d438c\x2d8b47\x2da12827c948d6.mount: Deactivated successfully.
Dec 13 09:04:14 compute-0 nova_compute[248510]: 2025-12-13 09:04:14.463 248514 INFO nova.virt.libvirt.driver [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Deleting instance files /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0_del
Dec 13 09:04:14 compute-0 nova_compute[248510]: 2025-12-13 09:04:14.464 248514 INFO nova.virt.libvirt.driver [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Deletion of /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0_del complete
Dec 13 09:04:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3094: 321 pgs: 321 active+clean; 281 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 943 KiB/s rd, 2.2 MiB/s wr, 168 op/s
Dec 13 09:04:14 compute-0 nova_compute[248510]: 2025-12-13 09:04:14.528 248514 INFO nova.compute.manager [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Took 1.03 seconds to destroy the instance on the hypervisor.
Dec 13 09:04:14 compute-0 nova_compute[248510]: 2025-12-13 09:04:14.529 248514 DEBUG oslo.service.loopingcall [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:04:14 compute-0 nova_compute[248510]: 2025-12-13 09:04:14.529 248514 DEBUG nova.compute.manager [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:04:14 compute-0 nova_compute[248510]: 2025-12-13 09:04:14.529 248514 DEBUG nova.network.neutron [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:04:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:04:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3900729166' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:04:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:04:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3900729166' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.320 248514 DEBUG nova.network.neutron [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.343 248514 INFO nova.compute.manager [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Took 0.81 seconds to deallocate network for instance.
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.398 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.399 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.467 248514 DEBUG nova.network.neutron [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updated VIF entry in instance network info cache for port daddb809-b36f-4f29-bd15-459cfd21a812. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.468 248514 DEBUG nova.network.neutron [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updating instance_info_cache with network_info: [{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.506 248514 DEBUG nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-unplugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.507 248514 DEBUG oslo_concurrency.lockutils [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.507 248514 DEBUG oslo_concurrency.lockutils [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.507 248514 DEBUG oslo_concurrency.lockutils [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.507 248514 DEBUG nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-unplugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.508 248514 WARNING nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-unplugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state deleted and task_state None.
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.508 248514 DEBUG nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.508 248514 DEBUG oslo_concurrency.lockutils [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.508 248514 DEBUG oslo_concurrency.lockutils [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.508 248514 DEBUG oslo_concurrency.lockutils [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.509 248514 DEBUG nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.509 248514 WARNING nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state deleted and task_state None.
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.509 248514 DEBUG nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-deleted-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.511 248514 DEBUG oslo_concurrency.lockutils [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:04:15 compute-0 nova_compute[248510]: 2025-12-13 09:04:15.519 248514 DEBUG oslo_concurrency.processutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:04:15 compute-0 ceph-mon[76537]: pgmap v3094: 321 pgs: 321 active+clean; 281 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 943 KiB/s rd, 2.2 MiB/s wr, 168 op/s
Dec 13 09:04:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3900729166' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:04:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3900729166' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:04:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:04:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714183722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:04:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:04:16 compute-0 nova_compute[248510]: 2025-12-13 09:04:16.085 248514 DEBUG oslo_concurrency.processutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:04:16 compute-0 nova_compute[248510]: 2025-12-13 09:04:16.091 248514 DEBUG nova.compute.provider_tree [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:04:16 compute-0 nova_compute[248510]: 2025-12-13 09:04:16.118 248514 DEBUG nova.scheduler.client.report [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:04:16 compute-0 nova_compute[248510]: 2025-12-13 09:04:16.147 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:16 compute-0 nova_compute[248510]: 2025-12-13 09:04:16.171 248514 INFO nova.scheduler.client.report [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Deleted allocations for instance f5f29271-ff94-4d88-bc99-1cfc3e1128a0
Dec 13 09:04:16 compute-0 nova_compute[248510]: 2025-12-13 09:04:16.260 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3095: 321 pgs: 321 active+clean; 281 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 912 KiB/s rd, 2.2 MiB/s wr, 116 op/s
Dec 13 09:04:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3714183722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:04:16 compute-0 ceph-mon[76537]: pgmap v3095: 321 pgs: 321 active+clean; 281 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 912 KiB/s rd, 2.2 MiB/s wr, 116 op/s
Dec 13 09:04:16 compute-0 nova_compute[248510]: 2025-12-13 09:04:16.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:04:17 compute-0 nova_compute[248510]: 2025-12-13 09:04:17.565 248514 INFO nova.compute.manager [None req-8747b0a7-9805-4359-ad8d-4df7dff2a0cb 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Get console output
Dec 13 09:04:17 compute-0 nova_compute[248510]: 2025-12-13 09:04:17.572 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:04:17 compute-0 nova_compute[248510]: 2025-12-13 09:04:17.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:04:17 compute-0 nova_compute[248510]: 2025-12-13 09:04:17.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:04:17 compute-0 nova_compute[248510]: 2025-12-13 09:04:17.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:04:17 compute-0 nova_compute[248510]: 2025-12-13 09:04:17.833 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:17.834 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:04:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:17.835 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:04:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:17.836 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:18 compute-0 nova_compute[248510]: 2025-12-13 09:04:18.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3096: 321 pgs: 321 active+clean; 259 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 2.2 MiB/s wr, 133 op/s
Dec 13 09:04:18 compute-0 nova_compute[248510]: 2025-12-13 09:04:18.770 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:18 compute-0 nova_compute[248510]: 2025-12-13 09:04:18.830 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:04:18 compute-0 nova_compute[248510]: 2025-12-13 09:04:18.830 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:04:18 compute-0 nova_compute[248510]: 2025-12-13 09:04:18.830 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 09:04:18 compute-0 nova_compute[248510]: 2025-12-13 09:04:18.831 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:04:19 compute-0 ceph-mon[76537]: pgmap v3096: 321 pgs: 321 active+clean; 259 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 2.2 MiB/s wr, 133 op/s
Dec 13 09:04:19 compute-0 nova_compute[248510]: 2025-12-13 09:04:19.809 248514 DEBUG nova.compute.manager [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-changed-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:19 compute-0 nova_compute[248510]: 2025-12-13 09:04:19.810 248514 DEBUG nova.compute.manager [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing instance network info cache due to event network-changed-90d7401e-210b-4cbe-b93d-853787434352. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:04:19 compute-0 nova_compute[248510]: 2025-12-13 09:04:19.810 248514 DEBUG oslo_concurrency.lockutils [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:04:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3097: 321 pgs: 321 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 797 KiB/s rd, 2.2 MiB/s wr, 127 op/s
Dec 13 09:04:20 compute-0 nova_compute[248510]: 2025-12-13 09:04:20.969 248514 INFO nova.compute.manager [None req-de57d6f8-7376-4ff4-a1b2-cf0f391ac3b4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Get console output
Dec 13 09:04:20 compute-0 nova_compute[248510]: 2025-12-13 09:04:20.978 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:04:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015325918245449068 of space, bias 1.0, pg target 0.45977754736347204 quantized to 32 (current 32)
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006698288550839229 of space, bias 1.0, pg target 0.20094865652517685 quantized to 32 (current 32)
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:04:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:04:21 compute-0 ceph-mon[76537]: pgmap v3097: 321 pgs: 321 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 797 KiB/s rd, 2.2 MiB/s wr, 127 op/s
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.757 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.788 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.788 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.789 248514 DEBUG oslo_concurrency.lockutils [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.790 248514 DEBUG nova.network.neutron [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing network info cache for port 90d7401e-210b-4cbe-b93d-853787434352 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.919 248514 DEBUG nova.compute.manager [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-unplugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.920 248514 DEBUG oslo_concurrency.lockutils [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.921 248514 DEBUG oslo_concurrency.lockutils [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.921 248514 DEBUG oslo_concurrency.lockutils [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.922 248514 DEBUG nova.compute.manager [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-unplugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.922 248514 WARNING nova.compute.manager [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received unexpected event network-vif-unplugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with vm_state active and task_state None.
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.923 248514 DEBUG nova.compute.manager [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.924 248514 DEBUG oslo_concurrency.lockutils [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.924 248514 DEBUG oslo_concurrency.lockutils [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.925 248514 DEBUG oslo_concurrency.lockutils [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.925 248514 DEBUG nova.compute.manager [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:04:21 compute-0 nova_compute[248510]: 2025-12-13 09:04:21.926 248514 WARNING nova.compute.manager [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received unexpected event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with vm_state active and task_state None.
Dec 13 09:04:21 compute-0 ovn_controller[148476]: 2025-12-13T09:04:21Z|01373|binding|INFO|Releasing lport fb6127ed-4b20-49d3-8950-376b6e1d5999 from this chassis (sb_readonly=0)
Dec 13 09:04:22 compute-0 nova_compute[248510]: 2025-12-13 09:04:22.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3098: 321 pgs: 321 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 1.2 MiB/s wr, 74 op/s
Dec 13 09:04:22 compute-0 nova_compute[248510]: 2025-12-13 09:04:22.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:04:22 compute-0 nova_compute[248510]: 2025-12-13 09:04:22.853 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:22 compute-0 nova_compute[248510]: 2025-12-13 09:04:22.853 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:22 compute-0 nova_compute[248510]: 2025-12-13 09:04:22.853 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:22 compute-0 nova_compute[248510]: 2025-12-13 09:04:22.853 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:04:22 compute-0 nova_compute[248510]: 2025-12-13 09:04:22.854 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.084 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:04:23 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/826702923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.442 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.544 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.544 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.549 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.549 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:04:23 compute-0 ceph-mon[76537]: pgmap v3098: 321 pgs: 321 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 1.2 MiB/s wr, 74 op/s
Dec 13 09:04:23 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/826702923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.766 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.768 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3090MB free_disk=59.89634370058775GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.768 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.769 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.890 248514 INFO nova.compute.manager [None req-ca0ae6b8-ab90-47fe-97bb-bd4ff05d4505 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Get console output
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.893 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.894 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c89b029c-146b-47ae-8961-0000d4e49e29 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.894 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.894 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.904 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:04:23 compute-0 nova_compute[248510]: 2025-12-13 09:04:23.970 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:04:24 compute-0 nova_compute[248510]: 2025-12-13 09:04:24.491 248514 DEBUG nova.compute.manager [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-changed-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:24 compute-0 nova_compute[248510]: 2025-12-13 09:04:24.492 248514 DEBUG nova.compute.manager [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing instance network info cache due to event network-changed-90d7401e-210b-4cbe-b93d-853787434352. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:04:24 compute-0 nova_compute[248510]: 2025-12-13 09:04:24.493 248514 DEBUG oslo_concurrency.lockutils [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:04:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3099: 321 pgs: 321 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 1.2 MiB/s wr, 74 op/s
Dec 13 09:04:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:04:24 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4153118151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:04:24 compute-0 nova_compute[248510]: 2025-12-13 09:04:24.568 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:04:24 compute-0 nova_compute[248510]: 2025-12-13 09:04:24.576 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:04:24 compute-0 nova_compute[248510]: 2025-12-13 09:04:24.608 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:04:24 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4153118151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:04:24 compute-0 nova_compute[248510]: 2025-12-13 09:04:24.653 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:04:24 compute-0 nova_compute[248510]: 2025-12-13 09:04:24.653 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:25 compute-0 ceph-mon[76537]: pgmap v3099: 321 pgs: 321 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 1.2 MiB/s wr, 74 op/s
Dec 13 09:04:25 compute-0 nova_compute[248510]: 2025-12-13 09:04:25.620 248514 DEBUG nova.network.neutron [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updated VIF entry in instance network info cache for port 90d7401e-210b-4cbe-b93d-853787434352. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:04:25 compute-0 nova_compute[248510]: 2025-12-13 09:04:25.621 248514 DEBUG nova.network.neutron [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:04:25 compute-0 nova_compute[248510]: 2025-12-13 09:04:25.653 248514 DEBUG oslo_concurrency.lockutils [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:04:25 compute-0 nova_compute[248510]: 2025-12-13 09:04:25.654 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:04:25 compute-0 nova_compute[248510]: 2025-12-13 09:04:25.654 248514 DEBUG oslo_concurrency.lockutils [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:04:25 compute-0 nova_compute[248510]: 2025-12-13 09:04:25.654 248514 DEBUG nova.network.neutron [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing network info cache for port 90d7401e-210b-4cbe-b93d-853787434352 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:04:25 compute-0 nova_compute[248510]: 2025-12-13 09:04:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:04:25 compute-0 nova_compute[248510]: 2025-12-13 09:04:25.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.031 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.032 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.032 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.033 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.033 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.034 248514 INFO nova.compute.manager [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Terminating instance
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.035 248514 DEBUG nova.compute.manager [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:04:26 compute-0 kernel: tap8b1532d0-42 (unregistering): left promiscuous mode
Dec 13 09:04:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:04:26 compute-0 NetworkManager[50376]: <info>  [1765616666.0885] device (tap8b1532d0-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:04:26 compute-0 ovn_controller[148476]: 2025-12-13T09:04:26Z|01374|binding|INFO|Releasing lport 8b1532d0-4269-4522-a1c5-961c9f9254dc from this chassis (sb_readonly=0)
Dec 13 09:04:26 compute-0 ovn_controller[148476]: 2025-12-13T09:04:26Z|01375|binding|INFO|Setting lport 8b1532d0-4269-4522-a1c5-961c9f9254dc down in Southbound
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.097 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:26 compute-0 ovn_controller[148476]: 2025-12-13T09:04:26Z|01376|binding|INFO|Removing iface tap8b1532d0-42 ovn-installed in OVS
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.100 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.108 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b5:f3 10.100.0.9'], port_security=['fa:16:3e:b8:b5:f3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c89b029c-146b-47ae-8961-0000d4e49e29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82626aa1-467a-40c3-a9a5-93921d70f234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f130a7c-de13-4843-94c3-b6b246bfa796, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=8b1532d0-4269-4522-a1c5-961c9f9254dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.111 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 8b1532d0-4269-4522-a1c5-961c9f9254dc in datapath c2c4278e-24ed-4454-ae7e-9f1ba7df516c unbound from our chassis
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.113 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2c4278e-24ed-4454-ae7e-9f1ba7df516c
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.124 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.136 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[601e77f6-aa87-4dcf-b30f-9f4aa79f68bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:26 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000086.scope: Deactivated successfully.
Dec 13 09:04:26 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000086.scope: Consumed 14.875s CPU time.
Dec 13 09:04:26 compute-0 systemd-machined[210538]: Machine qemu-164-instance-00000086 terminated.
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.173 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0ccd9b-698a-4ec1-8436-069c71ce461c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.176 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5dfe2e-3de4-4331-9ec6-c473c3867cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.209 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[495a8c8c-ac76-4409-b344-78dc0001cca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.226 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[300b83c0-72d1-4741-8192-95a27b0a159b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2c4278e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:a6:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 392], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918511, 'reachable_time': 16376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383629, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.245 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe51239-e276-4b79-bef3-f47c2da5ce32]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc2c4278e-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 918524, 'tstamp': 918524}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383630, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc2c4278e-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 918528, 'tstamp': 918528}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383630, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.247 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2c4278e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.249 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.257 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.257 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2c4278e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.258 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.258 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2c4278e-20, col_values=(('external_ids', {'iface-id': 'fb6127ed-4b20-49d3-8950-376b6e1d5999'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.259 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.284 248514 INFO nova.virt.libvirt.driver [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Instance destroyed successfully.
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.285 248514 DEBUG nova.objects.instance [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid c89b029c-146b-47ae-8961-0000d4e49e29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.309 248514 DEBUG nova.virt.libvirt.vif [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:03:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1708858245',display_name='tempest-TestNetworkBasicOps-server-1708858245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1708858245',id=134,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL++NwsCfuXrqu4UC7qgUOg89a/J8JDHWy/KO/6t34DIcODsr1DE3qwPt8YEbfOTv/XMYKYNpUizYYeq5l6dOm5t1OYnPo2H2KGHxdPaguJpqsjNgEOg4dU6arkPu8gPzA==',key_name='tempest-TestNetworkBasicOps-1668505721',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:03:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-gfm2ze7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:03:56Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=c89b029c-146b-47ae-8961-0000d4e49e29,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.310 248514 DEBUG nova.network.os_vif_util [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.312 248514 DEBUG nova.network.os_vif_util [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.313 248514 DEBUG os_vif [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.316 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.317 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b1532d0-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.319 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.323 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.326 248514 INFO os_vif [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42')
Dec 13 09:04:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3100: 321 pgs: 321 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 18 KiB/s wr, 26 op/s
Dec 13 09:04:26 compute-0 ceph-mon[76537]: pgmap v3100: 321 pgs: 321 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 18 KiB/s wr, 26 op/s
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.667 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.668 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.669 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.669 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.669 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.670 248514 WARNING nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received unexpected event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with vm_state active and task_state None.
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.670 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-changed-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.671 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Refreshing instance network info cache due to event network-changed-8b1532d0-4269-4522-a1c5-961c9f9254dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.671 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.671 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.672 248514 DEBUG nova.network.neutron [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Refreshing network info cache for port 8b1532d0-4269-4522-a1c5-961c9f9254dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.680 248514 INFO nova.virt.libvirt.driver [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Deleting instance files /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29_del
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.681 248514 INFO nova.virt.libvirt.driver [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Deletion of /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29_del complete
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.733 248514 INFO nova.compute.manager [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Took 0.70 seconds to destroy the instance on the hypervisor.
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.733 248514 DEBUG oslo.service.loopingcall [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.734 248514 DEBUG nova.compute.manager [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:04:26 compute-0 nova_compute[248510]: 2025-12-13 09:04:26.734 248514 DEBUG nova.network.neutron [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.087 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.096 248514 DEBUG nova.network.neutron [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.127 248514 INFO nova.compute.manager [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Took 1.39 seconds to deallocate network for instance.
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.235 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.235 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.239 248514 DEBUG nova.compute.manager [req-fb20ba8d-e454-4c17-9138-1b84b39aed13 req-7b058af0-54ea-4be1-a921-66b2931f47d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-vif-deleted-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.301 248514 DEBUG oslo_concurrency.processutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:04:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3101: 321 pgs: 321 active+clean; 159 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 19 KiB/s wr, 35 op/s
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.740 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616653.7396152, f5f29271-ff94-4d88-bc99-1cfc3e1128a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.741 248514 INFO nova.compute.manager [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] VM Stopped (Lifecycle Event)
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.777 248514 DEBUG nova.compute.manager [req-16ce5582-c206-4880-9f9e-c56542779066 req-2cbe9e03-3d2e-4a23-b1e7-625f30cc0376 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.777 248514 DEBUG oslo_concurrency.lockutils [req-16ce5582-c206-4880-9f9e-c56542779066 req-2cbe9e03-3d2e-4a23-b1e7-625f30cc0376 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.777 248514 DEBUG oslo_concurrency.lockutils [req-16ce5582-c206-4880-9f9e-c56542779066 req-2cbe9e03-3d2e-4a23-b1e7-625f30cc0376 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.777 248514 DEBUG oslo_concurrency.lockutils [req-16ce5582-c206-4880-9f9e-c56542779066 req-2cbe9e03-3d2e-4a23-b1e7-625f30cc0376 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.778 248514 DEBUG nova.compute.manager [req-16ce5582-c206-4880-9f9e-c56542779066 req-2cbe9e03-3d2e-4a23-b1e7-625f30cc0376 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] No waiting events found dispatching network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.778 248514 WARNING nova.compute.manager [req-16ce5582-c206-4880-9f9e-c56542779066 req-2cbe9e03-3d2e-4a23-b1e7-625f30cc0376 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received unexpected event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc for instance with vm_state deleted and task_state None.
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.780 248514 DEBUG nova.compute.manager [None req-a07515fe-850a-494f-ac21-4edea3e7ed48 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:04:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:04:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2753694766' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.870 248514 DEBUG oslo_concurrency.processutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.878 248514 DEBUG nova.compute.provider_tree [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.899 248514 DEBUG nova.scheduler.client.report [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.953 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.962 248514 DEBUG nova.network.neutron [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updated VIF entry in instance network info cache for port 90d7401e-210b-4cbe-b93d-853787434352. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.963 248514 DEBUG nova.network.neutron [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.998 248514 DEBUG oslo_concurrency.lockutils [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.998 248514 DEBUG nova.compute.manager [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:28 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.999 248514 DEBUG oslo_concurrency.lockutils [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:28.999 248514 DEBUG oslo_concurrency.lockutils [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.000 248514 DEBUG oslo_concurrency.lockutils [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.000 248514 DEBUG nova.compute.manager [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.001 248514 WARNING nova.compute.manager [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received unexpected event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with vm_state active and task_state None.
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.030 248514 INFO nova.scheduler.client.report [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance c89b029c-146b-47ae-8961-0000d4e49e29
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.120 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.330 248514 DEBUG nova.network.neutron [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updated VIF entry in instance network info cache for port 8b1532d0-4269-4522-a1c5-961c9f9254dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.331 248514 DEBUG nova.network.neutron [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updating instance_info_cache with network_info: [{"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.359 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.360 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-vif-unplugged-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.360 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.361 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.361 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.362 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] No waiting events found dispatching network-vif-unplugged-8b1532d0-4269-4522-a1c5-961c9f9254dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:04:29 compute-0 nova_compute[248510]: 2025-12-13 09:04:29.362 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-vif-unplugged-8b1532d0-4269-4522-a1c5-961c9f9254dc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:04:29 compute-0 ceph-mon[76537]: pgmap v3101: 321 pgs: 321 active+clean; 159 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 19 KiB/s wr, 35 op/s
Dec 13 09:04:29 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2753694766' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.503 248514 DEBUG nova.compute.manager [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-changed-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.505 248514 DEBUG nova.compute.manager [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing instance network info cache due to event network-changed-90d7401e-210b-4cbe-b93d-853787434352. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.505 248514 DEBUG oslo_concurrency.lockutils [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.506 248514 DEBUG oslo_concurrency.lockutils [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.506 248514 DEBUG nova.network.neutron [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing network info cache for port 90d7401e-210b-4cbe-b93d-853787434352 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:04:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3102: 321 pgs: 321 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 17 KiB/s wr, 37 op/s
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.622 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.623 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.623 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.623 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.624 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.625 248514 INFO nova.compute.manager [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Terminating instance
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.627 248514 DEBUG nova.compute.manager [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:04:30 compute-0 kernel: tap90d7401e-21 (unregistering): left promiscuous mode
Dec 13 09:04:30 compute-0 NetworkManager[50376]: <info>  [1765616670.6726] device (tap90d7401e-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:04:30 compute-0 ovn_controller[148476]: 2025-12-13T09:04:30Z|01377|binding|INFO|Releasing lport 90d7401e-210b-4cbe-b93d-853787434352 from this chassis (sb_readonly=0)
Dec 13 09:04:30 compute-0 ovn_controller[148476]: 2025-12-13T09:04:30Z|01378|binding|INFO|Setting lport 90d7401e-210b-4cbe-b93d-853787434352 down in Southbound
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:30 compute-0 ovn_controller[148476]: 2025-12-13T09:04:30Z|01379|binding|INFO|Removing iface tap90d7401e-21 ovn-installed in OVS
Dec 13 09:04:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:30.690 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:b4:6d 10.100.0.14'], port_security=['fa:16:3e:69:b4:6d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '19f43277-8c8d-48f2-9ec7-2b0d36a06e27', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '8', 'neutron:security_group_ids': '42e759fc-c2e4-4c38-92ea-ac51e44d350f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f130a7c-de13-4843-94c3-b6b246bfa796, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=90d7401e-210b-4cbe-b93d-853787434352) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:04:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:30.691 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 90d7401e-210b-4cbe-b93d-853787434352 in datapath c2c4278e-24ed-4454-ae7e-9f1ba7df516c unbound from our chassis
Dec 13 09:04:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:30.693 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c2c4278e-24ed-4454-ae7e-9f1ba7df516c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:04:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:30.694 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68606cd1-a6be-421a-9a32-070f5bd3c1ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:30.694 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c namespace which is not needed anymore
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:30 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000085.scope: Deactivated successfully.
Dec 13 09:04:30 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000085.scope: Consumed 15.165s CPU time.
Dec 13 09:04:30 compute-0 systemd-machined[210538]: Machine qemu-162-instance-00000085 terminated.
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.854 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.872 248514 INFO nova.virt.libvirt.driver [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Instance destroyed successfully.
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.872 248514 DEBUG nova.objects.instance [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:04:30 compute-0 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [NOTICE]   (382364) : haproxy version is 2.8.14-c23fe91
Dec 13 09:04:30 compute-0 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [NOTICE]   (382364) : path to executable is /usr/sbin/haproxy
Dec 13 09:04:30 compute-0 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [WARNING]  (382364) : Exiting Master process...
Dec 13 09:04:30 compute-0 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [WARNING]  (382364) : Exiting Master process...
Dec 13 09:04:30 compute-0 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [ALERT]    (382364) : Current worker (382377) exited with code 143 (Terminated)
Dec 13 09:04:30 compute-0 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [WARNING]  (382364) : All workers exited. Exiting... (0)
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.895 248514 DEBUG nova.virt.libvirt.vif [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:03:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1016404015',display_name='tempest-TestNetworkBasicOps-server-1016404015',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1016404015',id=133,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5E+jKld2VdKWtObDqeKBGvKmvekTSiiA9sjTJdF7hEcONw3irfWTSpIiwVY9k/7NWwXxkQuubngpznyfOsRWuRq1jkSBu1LNMt0g8LKC1KQLWo836n2hzpQ9ilyrcugQ==',key_name='tempest-TestNetworkBasicOps-149670649',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:03:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-i1dfr0ue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:03:30Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=19f43277-8c8d-48f2-9ec7-2b0d36a06e27,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.895 248514 DEBUG nova.network.os_vif_util [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.896 248514 DEBUG nova.network.os_vif_util [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.896 248514 DEBUG os_vif [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:04:30 compute-0 systemd[1]: libpod-097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd.scope: Deactivated successfully.
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.898 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap90d7401e-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:30 compute-0 podman[383709]: 2025-12-13 09:04:30.901111141 +0000 UTC m=+0.072645811 container died 097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.902 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:04:30 compute-0 nova_compute[248510]: 2025-12-13 09:04:30.905 248514 INFO os_vif [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21')
Dec 13 09:04:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd-userdata-shm.mount: Deactivated successfully.
Dec 13 09:04:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6230ed043b200e92eb147763a359dfae1b35945024fa55c5518b190e8cc8fc9-merged.mount: Deactivated successfully.
Dec 13 09:04:30 compute-0 podman[383709]: 2025-12-13 09:04:30.946655823 +0000 UTC m=+0.118190493 container cleanup 097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:04:30 compute-0 systemd[1]: libpod-conmon-097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd.scope: Deactivated successfully.
Dec 13 09:04:31 compute-0 podman[383765]: 2025-12-13 09:04:31.020626316 +0000 UTC m=+0.049145703 container remove 097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 09:04:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.027 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb8a9f5-b63c-41d3-839b-5db99ed2e0f9]: (4, ('Sat Dec 13 09:04:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c (097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd)\n097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd\nSat Dec 13 09:04:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c (097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd)\n097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.030 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2026a5c6-a2c2-4a28-9f82-ddd56038a7e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.033 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2c4278e-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:31 compute-0 kernel: tapc2c4278e-20: left promiscuous mode
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.035 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.051 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cbbc86a5-b506-49cd-955b-aa075c6964ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.077 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0c25cb0d-4bd0-4c24-945b-7849fcec603f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.078 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[85a1157c-3926-4b39-ac68-3a983f6dfe18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.092 248514 DEBUG nova.compute.manager [req-df6708d7-d5b7-4fbd-9077-a2dfb8a2683a req-86da4124-26c3-4a93-b419-5a75584ba962 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-unplugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.093 248514 DEBUG oslo_concurrency.lockutils [req-df6708d7-d5b7-4fbd-9077-a2dfb8a2683a req-86da4124-26c3-4a93-b419-5a75584ba962 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.093 248514 DEBUG oslo_concurrency.lockutils [req-df6708d7-d5b7-4fbd-9077-a2dfb8a2683a req-86da4124-26c3-4a93-b419-5a75584ba962 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.093 248514 DEBUG oslo_concurrency.lockutils [req-df6708d7-d5b7-4fbd-9077-a2dfb8a2683a req-86da4124-26c3-4a93-b419-5a75584ba962 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.094 248514 DEBUG nova.compute.manager [req-df6708d7-d5b7-4fbd-9077-a2dfb8a2683a req-86da4124-26c3-4a93-b419-5a75584ba962 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-unplugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:04:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.093 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0dce131-1acd-450f-8571-6db91d4a6f41]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918498, 'reachable_time': 20111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383781, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.094 248514 DEBUG nova.compute.manager [req-df6708d7-d5b7-4fbd-9077-a2dfb8a2683a req-86da4124-26c3-4a93-b419-5a75584ba962 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-unplugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:04:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.096 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:04:31 compute-0 systemd[1]: run-netns-ovnmeta\x2dc2c4278e\x2d24ed\x2d4454\x2dae7e\x2d9f1ba7df516c.mount: Deactivated successfully.
Dec 13 09:04:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.096 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd57a8c-99f5-4d99-8b35-0ff05392a722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.220 248514 INFO nova.virt.libvirt.driver [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Deleting instance files /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27_del
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.221 248514 INFO nova.virt.libvirt.driver [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Deletion of /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27_del complete
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.293 248514 INFO nova.compute.manager [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Took 0.67 seconds to destroy the instance on the hypervisor.
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.294 248514 DEBUG oslo.service.loopingcall [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.296 248514 DEBUG nova.compute.manager [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:04:31 compute-0 nova_compute[248510]: 2025-12-13 09:04:31.296 248514 DEBUG nova.network.neutron [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:04:31 compute-0 sudo[383782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:04:31 compute-0 sudo[383782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:04:31 compute-0 sudo[383782]: pam_unix(sudo:session): session closed for user root
Dec 13 09:04:31 compute-0 ceph-mon[76537]: pgmap v3102: 321 pgs: 321 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 17 KiB/s wr, 37 op/s
Dec 13 09:04:31 compute-0 sudo[383807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:04:31 compute-0 sudo[383807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:04:32 compute-0 sudo[383807]: pam_unix(sudo:session): session closed for user root
Dec 13 09:04:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:04:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:04:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:04:32 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:04:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:04:32 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:04:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:04:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:04:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:04:32 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:04:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:04:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:04:32 compute-0 sudo[383861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:04:32 compute-0 sudo[383861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:04:32 compute-0 sudo[383861]: pam_unix(sudo:session): session closed for user root
Dec 13 09:04:32 compute-0 sudo[383886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:04:32 compute-0 sudo[383886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:04:32 compute-0 sshd-session[383683]: Connection closed by authenticating user root 61.245.11.87 port 58244 [preauth]
Dec 13 09:04:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3103: 321 pgs: 321 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Dec 13 09:04:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:04:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:04:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:04:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:04:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:04:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:04:32 compute-0 nova_compute[248510]: 2025-12-13 09:04:32.703 248514 DEBUG nova.network.neutron [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updated VIF entry in instance network info cache for port 90d7401e-210b-4cbe-b93d-853787434352. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:04:32 compute-0 nova_compute[248510]: 2025-12-13 09:04:32.703 248514 DEBUG nova.network.neutron [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:04:32 compute-0 podman[383922]: 2025-12-13 09:04:32.729267376 +0000 UTC m=+0.043557923 container create b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 09:04:32 compute-0 nova_compute[248510]: 2025-12-13 09:04:32.733 248514 DEBUG oslo_concurrency.lockutils [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:04:32 compute-0 systemd[1]: Started libpod-conmon-b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95.scope.
Dec 13 09:04:32 compute-0 nova_compute[248510]: 2025-12-13 09:04:32.790 248514 DEBUG nova.network.neutron [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:04:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:04:32 compute-0 nova_compute[248510]: 2025-12-13 09:04:32.806 248514 INFO nova.compute.manager [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Took 1.51 seconds to deallocate network for instance.
Dec 13 09:04:32 compute-0 podman[383922]: 2025-12-13 09:04:32.710237189 +0000 UTC m=+0.024527746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:04:32 compute-0 podman[383922]: 2025-12-13 09:04:32.814476941 +0000 UTC m=+0.128767518 container init b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 13 09:04:32 compute-0 podman[383922]: 2025-12-13 09:04:32.823828035 +0000 UTC m=+0.138118592 container start b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 09:04:32 compute-0 podman[383922]: 2025-12-13 09:04:32.827431405 +0000 UTC m=+0.141721952 container attach b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:04:32 compute-0 pedantic_wing[383938]: 167 167
Dec 13 09:04:32 compute-0 systemd[1]: libpod-b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95.scope: Deactivated successfully.
Dec 13 09:04:32 compute-0 podman[383922]: 2025-12-13 09:04:32.833394525 +0000 UTC m=+0.147685072 container died b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7be5c1772ccec0ce56345119b1f1e6a1efb309150b0da4c514c735297deb6461-merged.mount: Deactivated successfully.
Dec 13 09:04:32 compute-0 nova_compute[248510]: 2025-12-13 09:04:32.867 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:32 compute-0 nova_compute[248510]: 2025-12-13 09:04:32.868 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:32 compute-0 podman[383922]: 2025-12-13 09:04:32.882188587 +0000 UTC m=+0.196479134 container remove b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 09:04:32 compute-0 systemd[1]: libpod-conmon-b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95.scope: Deactivated successfully.
Dec 13 09:04:32 compute-0 nova_compute[248510]: 2025-12-13 09:04:32.930 248514 DEBUG oslo_concurrency.processutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:04:33 compute-0 podman[383962]: 2025-12-13 09:04:33.080142467 +0000 UTC m=+0.051471871 container create a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.090 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:33 compute-0 systemd[1]: Started libpod-conmon-a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e.scope.
Dec 13 09:04:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:04:33 compute-0 podman[383962]: 2025-12-13 09:04:33.058296039 +0000 UTC m=+0.029625473 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8facbec6b1c2035a4fdfb7237ffe8e0e572ac290034949d8388d3c922a73c74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8facbec6b1c2035a4fdfb7237ffe8e0e572ac290034949d8388d3c922a73c74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8facbec6b1c2035a4fdfb7237ffe8e0e572ac290034949d8388d3c922a73c74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8facbec6b1c2035a4fdfb7237ffe8e0e572ac290034949d8388d3c922a73c74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8facbec6b1c2035a4fdfb7237ffe8e0e572ac290034949d8388d3c922a73c74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.183 248514 DEBUG nova.compute.manager [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.184 248514 DEBUG oslo_concurrency.lockutils [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.184 248514 DEBUG oslo_concurrency.lockutils [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.185 248514 DEBUG oslo_concurrency.lockutils [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.185 248514 DEBUG nova.compute.manager [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.186 248514 WARNING nova.compute.manager [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received unexpected event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with vm_state deleted and task_state None.
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.186 248514 DEBUG nova.compute.manager [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-deleted-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:33 compute-0 podman[383962]: 2025-12-13 09:04:33.198176344 +0000 UTC m=+0.169505748 container init a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:04:33 compute-0 podman[383962]: 2025-12-13 09:04:33.205294362 +0000 UTC m=+0.176623746 container start a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 09:04:33 compute-0 podman[383962]: 2025-12-13 09:04:33.208650466 +0000 UTC m=+0.179979850 container attach a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bardeen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 09:04:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:04:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295483595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.487 248514 DEBUG oslo_concurrency.processutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.494 248514 DEBUG nova.compute.provider_tree [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.518 248514 DEBUG nova.scheduler.client.report [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.541 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.593 248514 INFO nova.scheduler.client.report [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 19f43277-8c8d-48f2-9ec7-2b0d36a06e27
Dec 13 09:04:33 compute-0 ceph-mon[76537]: pgmap v3103: 321 pgs: 321 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Dec 13 09:04:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3295483595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:04:33 compute-0 nova_compute[248510]: 2025-12-13 09:04:33.679 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:33 compute-0 pedantic_bardeen[383997]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:04:33 compute-0 pedantic_bardeen[383997]: --> All data devices are unavailable
Dec 13 09:04:33 compute-0 systemd[1]: libpod-a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e.scope: Deactivated successfully.
Dec 13 09:04:33 compute-0 conmon[383997]: conmon a49377d4ab703814b3e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e.scope/container/memory.events
Dec 13 09:04:33 compute-0 podman[383962]: 2025-12-13 09:04:33.792420763 +0000 UTC m=+0.763750177 container died a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bardeen, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 09:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8facbec6b1c2035a4fdfb7237ffe8e0e572ac290034949d8388d3c922a73c74-merged.mount: Deactivated successfully.
Dec 13 09:04:33 compute-0 podman[383962]: 2025-12-13 09:04:33.847478312 +0000 UTC m=+0.818807696 container remove a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bardeen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 09:04:33 compute-0 systemd[1]: libpod-conmon-a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e.scope: Deactivated successfully.
Dec 13 09:04:33 compute-0 sudo[383886]: pam_unix(sudo:session): session closed for user root
Dec 13 09:04:33 compute-0 sudo[384033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:04:33 compute-0 sudo[384033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:04:33 compute-0 sudo[384033]: pam_unix(sudo:session): session closed for user root
Dec 13 09:04:34 compute-0 sudo[384058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:04:34 compute-0 sudo[384058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:04:34 compute-0 podman[384095]: 2025-12-13 09:04:34.372618298 +0000 UTC m=+0.062120717 container create 1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:04:34 compute-0 systemd[1]: Started libpod-conmon-1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3.scope.
Dec 13 09:04:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:04:34 compute-0 podman[384095]: 2025-12-13 09:04:34.35512257 +0000 UTC m=+0.044624989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:04:34 compute-0 podman[384095]: 2025-12-13 09:04:34.449284989 +0000 UTC m=+0.138787438 container init 1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_darwin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:04:34 compute-0 podman[384095]: 2025-12-13 09:04:34.461456224 +0000 UTC m=+0.150958633 container start 1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 09:04:34 compute-0 condescending_darwin[384111]: 167 167
Dec 13 09:04:34 compute-0 podman[384095]: 2025-12-13 09:04:34.465947956 +0000 UTC m=+0.155450455 container attach 1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_darwin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 09:04:34 compute-0 systemd[1]: libpod-1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3.scope: Deactivated successfully.
Dec 13 09:04:34 compute-0 podman[384095]: 2025-12-13 09:04:34.467412613 +0000 UTC m=+0.156915032 container died 1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_darwin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cc7266df2d5c8dba92b9785e109499e5f16c73045492728919e810bda8c1e3d-merged.mount: Deactivated successfully.
Dec 13 09:04:34 compute-0 podman[384095]: 2025-12-13 09:04:34.515293043 +0000 UTC m=+0.204795442 container remove 1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_darwin, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:04:34 compute-0 systemd[1]: libpod-conmon-1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3.scope: Deactivated successfully.
Dec 13 09:04:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3104: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 8.1 KiB/s wr, 56 op/s
Dec 13 09:04:34 compute-0 podman[384135]: 2025-12-13 09:04:34.734775042 +0000 UTC m=+0.068984740 container create f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 09:04:34 compute-0 systemd[1]: Started libpod-conmon-f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05.scope.
Dec 13 09:04:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63677bedcbae4c2b6ef6c06c52f725dde0aff8d0dfcf606accdabe46a0ea19a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63677bedcbae4c2b6ef6c06c52f725dde0aff8d0dfcf606accdabe46a0ea19a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63677bedcbae4c2b6ef6c06c52f725dde0aff8d0dfcf606accdabe46a0ea19a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63677bedcbae4c2b6ef6c06c52f725dde0aff8d0dfcf606accdabe46a0ea19a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:34 compute-0 podman[384135]: 2025-12-13 09:04:34.707050877 +0000 UTC m=+0.041260635 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:04:34 compute-0 podman[384135]: 2025-12-13 09:04:34.818982352 +0000 UTC m=+0.153192080 container init f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_ganguly, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 09:04:34 compute-0 podman[384135]: 2025-12-13 09:04:34.833370822 +0000 UTC m=+0.167580520 container start f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_ganguly, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:04:34 compute-0 podman[384135]: 2025-12-13 09:04:34.838315436 +0000 UTC m=+0.172525184 container attach f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_ganguly, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]: {
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:     "0": [
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:         {
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "devices": [
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "/dev/loop3"
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             ],
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_name": "ceph_lv0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_size": "21470642176",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "name": "ceph_lv0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "tags": {
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.cluster_name": "ceph",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.crush_device_class": "",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.encrypted": "0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.objectstore": "bluestore",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.osd_id": "0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.type": "block",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.vdo": "0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.with_tpm": "0"
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             },
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "type": "block",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "vg_name": "ceph_vg0"
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:         }
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:     ],
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:     "1": [
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:         {
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "devices": [
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "/dev/loop4"
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             ],
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_name": "ceph_lv1",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_size": "21470642176",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "name": "ceph_lv1",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "tags": {
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.cluster_name": "ceph",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.crush_device_class": "",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.encrypted": "0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.objectstore": "bluestore",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.osd_id": "1",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.type": "block",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.vdo": "0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.with_tpm": "0"
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             },
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "type": "block",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "vg_name": "ceph_vg1"
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:         }
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:     ],
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:     "2": [
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:         {
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "devices": [
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "/dev/loop5"
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             ],
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_name": "ceph_lv2",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_size": "21470642176",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "name": "ceph_lv2",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "tags": {
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.cluster_name": "ceph",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.crush_device_class": "",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.encrypted": "0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.objectstore": "bluestore",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.osd_id": "2",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.type": "block",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.vdo": "0",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:                 "ceph.with_tpm": "0"
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             },
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "type": "block",
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:             "vg_name": "ceph_vg2"
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:         }
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]:     ]
Dec 13 09:04:35 compute-0 naughty_ganguly[384152]: }
Dec 13 09:04:35 compute-0 systemd[1]: libpod-f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05.scope: Deactivated successfully.
Dec 13 09:04:35 compute-0 podman[384135]: 2025-12-13 09:04:35.137139953 +0000 UTC m=+0.471349661 container died f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-63677bedcbae4c2b6ef6c06c52f725dde0aff8d0dfcf606accdabe46a0ea19a4-merged.mount: Deactivated successfully.
Dec 13 09:04:35 compute-0 podman[384135]: 2025-12-13 09:04:35.191566976 +0000 UTC m=+0.525776644 container remove f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_ganguly, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:04:35 compute-0 systemd[1]: libpod-conmon-f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05.scope: Deactivated successfully.
Dec 13 09:04:35 compute-0 sudo[384058]: pam_unix(sudo:session): session closed for user root
Dec 13 09:04:35 compute-0 sudo[384174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:04:35 compute-0 sudo[384174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:04:35 compute-0 sudo[384174]: pam_unix(sudo:session): session closed for user root
Dec 13 09:04:35 compute-0 sudo[384199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:04:35 compute-0 sudo[384199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:04:35 compute-0 ceph-mon[76537]: pgmap v3104: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 8.1 KiB/s wr, 56 op/s
Dec 13 09:04:35 compute-0 podman[384236]: 2025-12-13 09:04:35.762877531 +0000 UTC m=+0.054450346 container create e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 09:04:35 compute-0 systemd[1]: Started libpod-conmon-e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43.scope.
Dec 13 09:04:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:04:35 compute-0 podman[384236]: 2025-12-13 09:04:35.744145961 +0000 UTC m=+0.035718816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:04:35 compute-0 podman[384236]: 2025-12-13 09:04:35.853919792 +0000 UTC m=+0.145492667 container init e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:04:35 compute-0 podman[384236]: 2025-12-13 09:04:35.861300496 +0000 UTC m=+0.152873321 container start e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 09:04:35 compute-0 podman[384236]: 2025-12-13 09:04:35.864678041 +0000 UTC m=+0.156250876 container attach e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:04:35 compute-0 nostalgic_jennings[384252]: 167 167
Dec 13 09:04:35 compute-0 systemd[1]: libpod-e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43.scope: Deactivated successfully.
Dec 13 09:04:35 compute-0 podman[384236]: 2025-12-13 09:04:35.87064098 +0000 UTC m=+0.162213795 container died e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-17b7d72f077d8f001193fab4cad49204f1df912da607d97c738c6e8fd71b541d-merged.mount: Deactivated successfully.
Dec 13 09:04:35 compute-0 nova_compute[248510]: 2025-12-13 09:04:35.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:35 compute-0 podman[384236]: 2025-12-13 09:04:35.947062485 +0000 UTC m=+0.238635310 container remove e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:04:35 compute-0 systemd[1]: libpod-conmon-e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43.scope: Deactivated successfully.
Dec 13 09:04:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:04:36 compute-0 podman[384276]: 2025-12-13 09:04:36.155927088 +0000 UTC m=+0.062079086 container create aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:04:36 compute-0 systemd[1]: Started libpod-conmon-aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3.scope.
Dec 13 09:04:36 compute-0 podman[384276]: 2025-12-13 09:04:36.124384238 +0000 UTC m=+0.030536316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:04:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac3542f9026911fe042c7e8a179cf2dc0bba49f901356ec0348318d8fdebe70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac3542f9026911fe042c7e8a179cf2dc0bba49f901356ec0348318d8fdebe70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac3542f9026911fe042c7e8a179cf2dc0bba49f901356ec0348318d8fdebe70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac3542f9026911fe042c7e8a179cf2dc0bba49f901356ec0348318d8fdebe70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:36 compute-0 podman[384276]: 2025-12-13 09:04:36.26335331 +0000 UTC m=+0.169505298 container init aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:04:36 compute-0 podman[384276]: 2025-12-13 09:04:36.279590847 +0000 UTC m=+0.185742795 container start aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 09:04:36 compute-0 podman[384276]: 2025-12-13 09:04:36.283235518 +0000 UTC m=+0.189387556 container attach aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 09:04:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3105: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Dec 13 09:04:36 compute-0 ceph-mon[76537]: pgmap v3105: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Dec 13 09:04:37 compute-0 lvm[384368]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:04:37 compute-0 lvm[384368]: VG ceph_vg0 finished
Dec 13 09:04:37 compute-0 lvm[384371]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:04:37 compute-0 lvm[384371]: VG ceph_vg1 finished
Dec 13 09:04:37 compute-0 lvm[384373]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:04:37 compute-0 lvm[384373]: VG ceph_vg2 finished
Dec 13 09:04:37 compute-0 silly_carson[384292]: {}
Dec 13 09:04:37 compute-0 systemd[1]: libpod-aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3.scope: Deactivated successfully.
Dec 13 09:04:37 compute-0 systemd[1]: libpod-aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3.scope: Consumed 1.573s CPU time.
Dec 13 09:04:37 compute-0 podman[384276]: 2025-12-13 09:04:37.250317518 +0000 UTC m=+1.156469516 container died aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 09:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ac3542f9026911fe042c7e8a179cf2dc0bba49f901356ec0348318d8fdebe70-merged.mount: Deactivated successfully.
Dec 13 09:04:37 compute-0 podman[384276]: 2025-12-13 09:04:37.295804267 +0000 UTC m=+1.201956255 container remove aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:04:37 compute-0 systemd[1]: libpod-conmon-aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3.scope: Deactivated successfully.
Dec 13 09:04:37 compute-0 sudo[384199]: pam_unix(sudo:session): session closed for user root
Dec 13 09:04:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:04:37 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:04:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:04:37 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:04:37 compute-0 sudo[384388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:04:37 compute-0 sudo[384388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:04:37 compute-0 sudo[384388]: pam_unix(sudo:session): session closed for user root
Dec 13 09:04:38 compute-0 nova_compute[248510]: 2025-12-13 09:04:38.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:04:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:04:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3106: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Dec 13 09:04:39 compute-0 ceph-mon[76537]: pgmap v3106: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Dec 13 09:04:39 compute-0 nova_compute[248510]: 2025-12-13 09:04:39.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:39 compute-0 nova_compute[248510]: 2025-12-13 09:04:39.797 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:04:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:04:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:04:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:04:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:04:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:04:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3107: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.2 KiB/s wr, 46 op/s
Dec 13 09:04:40 compute-0 ceph-mon[76537]: pgmap v3107: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.2 KiB/s wr, 46 op/s
Dec 13 09:04:40 compute-0 nova_compute[248510]: 2025-12-13 09:04:40.933 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:04:41 compute-0 nova_compute[248510]: 2025-12-13 09:04:41.280 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616666.2778819, c89b029c-146b-47ae-8961-0000d4e49e29 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:04:41 compute-0 nova_compute[248510]: 2025-12-13 09:04:41.281 248514 INFO nova.compute.manager [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] VM Stopped (Lifecycle Event)
Dec 13 09:04:41 compute-0 nova_compute[248510]: 2025-12-13 09:04:41.316 248514 DEBUG nova.compute.manager [None req-66ad3494-31f1-46c9-9c37-a35870d4fb05 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:04:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3108: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 13 09:04:42 compute-0 sshd-session[384415]: Invalid user solana from 193.32.162.146 port 37864
Dec 13 09:04:43 compute-0 podman[384418]: 2025-12-13 09:04:43.010705231 +0000 UTC m=+0.080704173 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 09:04:43 compute-0 podman[384419]: 2025-12-13 09:04:43.026884196 +0000 UTC m=+0.103579306 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 09:04:43 compute-0 podman[384417]: 2025-12-13 09:04:43.039480562 +0000 UTC m=+0.116587752 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 09:04:43 compute-0 nova_compute[248510]: 2025-12-13 09:04:43.095 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:43 compute-0 sshd-session[384415]: Connection closed by invalid user solana 193.32.162.146 port 37864 [preauth]
Dec 13 09:04:43 compute-0 ceph-mon[76537]: pgmap v3108: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 13 09:04:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3109: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 30 op/s
Dec 13 09:04:45 compute-0 ceph-mon[76537]: pgmap v3109: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 30 op/s
Dec 13 09:04:45 compute-0 nova_compute[248510]: 2025-12-13 09:04:45.870 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616670.869777, 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:04:45 compute-0 nova_compute[248510]: 2025-12-13 09:04:45.870 248514 INFO nova.compute.manager [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] VM Stopped (Lifecycle Event)
Dec 13 09:04:45 compute-0 nova_compute[248510]: 2025-12-13 09:04:45.901 248514 DEBUG nova.compute.manager [None req-ff76660d-c0c7-477b-b79d-d1524f2f2f8c - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:04:45 compute-0 nova_compute[248510]: 2025-12-13 09:04:45.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:04:46 compute-0 nova_compute[248510]: 2025-12-13 09:04:46.505 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:46 compute-0 nova_compute[248510]: 2025-12-13 09:04:46.506 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:46 compute-0 nova_compute[248510]: 2025-12-13 09:04:46.525 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:04:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3110: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 09:04:46 compute-0 nova_compute[248510]: 2025-12-13 09:04:46.912 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:46 compute-0 nova_compute[248510]: 2025-12-13 09:04:46.913 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:46 compute-0 nova_compute[248510]: 2025-12-13 09:04:46.925 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:04:46 compute-0 nova_compute[248510]: 2025-12-13 09:04:46.926 248514 INFO nova.compute.claims [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:04:47 compute-0 nova_compute[248510]: 2025-12-13 09:04:47.072 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:04:47 compute-0 ceph-mon[76537]: pgmap v3110: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 09:04:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:04:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/402231214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:04:47 compute-0 nova_compute[248510]: 2025-12-13 09:04:47.681 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:04:47 compute-0 nova_compute[248510]: 2025-12-13 09:04:47.689 248514 DEBUG nova.compute.provider_tree [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:04:47 compute-0 nova_compute[248510]: 2025-12-13 09:04:47.724 248514 DEBUG nova.scheduler.client.report [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:04:47 compute-0 nova_compute[248510]: 2025-12-13 09:04:47.757 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:47 compute-0 nova_compute[248510]: 2025-12-13 09:04:47.758 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:04:47 compute-0 nova_compute[248510]: 2025-12-13 09:04:47.838 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:04:47 compute-0 nova_compute[248510]: 2025-12-13 09:04:47.839 248514 DEBUG nova.network.neutron [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:04:47 compute-0 nova_compute[248510]: 2025-12-13 09:04:47.893 248514 INFO nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:04:47 compute-0 nova_compute[248510]: 2025-12-13 09:04:47.912 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.045 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.048 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.049 248514 INFO nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Creating image(s)
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.088 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.126 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.164 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.168 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.217 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.271 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.272 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.273 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.274 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.309 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.314 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:04:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3111: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.570 248514 DEBUG nova.policy [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5fd410579fa429ba0f7f680590cd86a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '77d40177a4804671aa9c5da343bc2ed4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:04:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/402231214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.672 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.762 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] resizing rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.868 248514 DEBUG nova.objects.instance [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'migration_context' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.891 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.892 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Ensure instance console log exists: /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.892 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.893 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:48 compute-0 nova_compute[248510]: 2025-12-13 09:04:48.893 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:49 compute-0 ceph-mon[76537]: pgmap v3111: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 09:04:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3112: 321 pgs: 321 active+clean; 67 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 929 KiB/s wr, 30 op/s
Dec 13 09:04:50 compute-0 ceph-mon[76537]: pgmap v3112: 321 pgs: 321 active+clean; 67 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 929 KiB/s wr, 30 op/s
Dec 13 09:04:50 compute-0 nova_compute[248510]: 2025-12-13 09:04:50.749 248514 DEBUG nova.network.neutron [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Successfully created port: 21744784-e35a-46d5-ac8c-8f9783a0a387 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:04:50 compute-0 nova_compute[248510]: 2025-12-13 09:04:50.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:04:52 compute-0 nova_compute[248510]: 2025-12-13 09:04:52.288 248514 DEBUG nova.network.neutron [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Successfully updated port: 21744784-e35a-46d5-ac8c-8f9783a0a387 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:04:52 compute-0 nova_compute[248510]: 2025-12-13 09:04:52.318 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:04:52 compute-0 nova_compute[248510]: 2025-12-13 09:04:52.319 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:04:52 compute-0 nova_compute[248510]: 2025-12-13 09:04:52.319 248514 DEBUG nova.network.neutron [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:04:52 compute-0 nova_compute[248510]: 2025-12-13 09:04:52.447 248514 DEBUG nova.compute.manager [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-changed-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:52 compute-0 nova_compute[248510]: 2025-12-13 09:04:52.447 248514 DEBUG nova.compute.manager [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Refreshing instance network info cache due to event network-changed-21744784-e35a-46d5-ac8c-8f9783a0a387. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:04:52 compute-0 nova_compute[248510]: 2025-12-13 09:04:52.448 248514 DEBUG oslo_concurrency.lockutils [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:04:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3113: 321 pgs: 321 active+clean; 67 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 929 KiB/s wr, 30 op/s
Dec 13 09:04:52 compute-0 nova_compute[248510]: 2025-12-13 09:04:52.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:04:52 compute-0 nova_compute[248510]: 2025-12-13 09:04:52.928 248514 DEBUG nova.network.neutron [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:04:53 compute-0 nova_compute[248510]: 2025-12-13 09:04:53.101 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:53 compute-0 ceph-mon[76537]: pgmap v3113: 321 pgs: 321 active+clean; 67 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 929 KiB/s wr, 30 op/s
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.362 248514 DEBUG nova.network.neutron [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.399 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.400 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance network_info: |[{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.400 248514 DEBUG oslo_concurrency.lockutils [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.401 248514 DEBUG nova.network.neutron [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Refreshing network info cache for port 21744784-e35a-46d5-ac8c-8f9783a0a387 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.405 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Start _get_guest_xml network_info=[{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.412 248514 WARNING nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.417 248514 DEBUG nova.virt.libvirt.host [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.418 248514 DEBUG nova.virt.libvirt.host [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.429 248514 DEBUG nova.virt.libvirt.host [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.430 248514 DEBUG nova.virt.libvirt.host [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.430 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.431 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.432 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.432 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.433 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.433 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.433 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.434 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.434 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.435 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.435 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.436 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:04:54 compute-0 nova_compute[248510]: 2025-12-13 09:04:54.441 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:04:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3114: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Dec 13 09:04:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:04:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3894288681' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.042 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.077 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.083 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:04:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:55.441 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:55.442 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:55.442 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:55 compute-0 ceph-mon[76537]: pgmap v3114: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Dec 13 09:04:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3894288681' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:04:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:04:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2830231446' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.692 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.694 248514 DEBUG nova.virt.libvirt.vif [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:04:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1148057715',display_name='tempest-TestNetworkAdvancedServerOps-server-1148057715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1148057715',id=135,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEs/ai9hGvppAaJZucTj3XKRO2Hw6GseMJO/eDqjojdlq1TGbhXoM0CcjiSd48S7VRrLpZa2ASSqmIp8ZUA6nKNWf+HEZDknZ93ek9rtzJ3ZoLckx8X8JxIdR5xCKRsULQ==',key_name='tempest-TestNetworkAdvancedServerOps-326216115',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a5ac50ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:04:47Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=17c3814d-c11f-4032-a891-4cbdf3f7c065,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.695 248514 DEBUG nova.network.os_vif_util [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.696 248514 DEBUG nova.network.os_vif_util [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.698 248514 DEBUG nova.objects.instance [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.715 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:04:55 compute-0 nova_compute[248510]:   <uuid>17c3814d-c11f-4032-a891-4cbdf3f7c065</uuid>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   <name>instance-00000087</name>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1148057715</nova:name>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:04:54</nova:creationTime>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:04:55 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:04:55 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:04:55 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:04:55 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:04:55 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:04:55 compute-0 nova_compute[248510]:         <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 09:04:55 compute-0 nova_compute[248510]:         <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:04:55 compute-0 nova_compute[248510]:         <nova:port uuid="21744784-e35a-46d5-ac8c-8f9783a0a387">
Dec 13 09:04:55 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <system>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <entry name="serial">17c3814d-c11f-4032-a891-4cbdf3f7c065</entry>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <entry name="uuid">17c3814d-c11f-4032-a891-4cbdf3f7c065</entry>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     </system>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   <os>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   </os>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   <features>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   </features>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/17c3814d-c11f-4032-a891-4cbdf3f7c065_disk">
Dec 13 09:04:55 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       </source>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:04:55 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/17c3814d-c11f-4032-a891-4cbdf3f7c065_disk.config">
Dec 13 09:04:55 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       </source>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:04:55 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:0c:86:07"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <target dev="tap21744784-e3"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/console.log" append="off"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <video>
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     </video>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:04:55 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:04:55 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:04:55 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:04:55 compute-0 nova_compute[248510]: </domain>
Dec 13 09:04:55 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.717 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Preparing to wait for external event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.717 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.717 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.717 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.718 248514 DEBUG nova.virt.libvirt.vif [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:04:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1148057715',display_name='tempest-TestNetworkAdvancedServerOps-server-1148057715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1148057715',id=135,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEs/ai9hGvppAaJZucTj3XKRO2Hw6GseMJO/eDqjojdlq1TGbhXoM0CcjiSd48S7VRrLpZa2ASSqmIp8ZUA6nKNWf+HEZDknZ93ek9rtzJ3ZoLckx8X8JxIdR5xCKRsULQ==',key_name='tempest-TestNetworkAdvancedServerOps-326216115',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a5ac50ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:04:47Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=17c3814d-c11f-4032-a891-4cbdf3f7c065,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.719 248514 DEBUG nova.network.os_vif_util [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.719 248514 DEBUG nova.network.os_vif_util [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.720 248514 DEBUG os_vif [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.721 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.721 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.725 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21744784-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.726 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21744784-e3, col_values=(('external_ids', {'iface-id': '21744784-e35a-46d5-ac8c-8f9783a0a387', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:86:07', 'vm-uuid': '17c3814d-c11f-4032-a891-4cbdf3f7c065'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:55 compute-0 NetworkManager[50376]: <info>  [1765616695.7303] manager: (tap21744784-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.730 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.739 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.741 248514 INFO os_vif [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3')
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.813 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.814 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.814 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No VIF found with MAC fa:16:3e:0c:86:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.815 248514 INFO nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Using config drive
Dec 13 09:04:55 compute-0 nova_compute[248510]: 2025-12-13 09:04:55.843 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:04:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.287 248514 INFO nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Creating config drive at /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/disk.config
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.297 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnw6v8ok3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.458 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnw6v8ok3" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.497 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.504 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/disk.config 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:04:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3115: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Dec 13 09:04:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2830231446' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.711 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/disk.config 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.713 248514 INFO nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Deleting local config drive /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/disk.config because it was imported into RBD.
Dec 13 09:04:56 compute-0 kernel: tap21744784-e3: entered promiscuous mode
Dec 13 09:04:56 compute-0 NetworkManager[50376]: <info>  [1765616696.7966] manager: (tap21744784-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/569)
Dec 13 09:04:56 compute-0 ovn_controller[148476]: 2025-12-13T09:04:56Z|01380|binding|INFO|Claiming lport 21744784-e35a-46d5-ac8c-8f9783a0a387 for this chassis.
Dec 13 09:04:56 compute-0 ovn_controller[148476]: 2025-12-13T09:04:56Z|01381|binding|INFO|21744784-e35a-46d5-ac8c-8f9783a0a387: Claiming fa:16:3e:0c:86:07 10.100.0.13
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.797 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.804 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.807 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.819 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:86:07 10.100.0.13'], port_security=['fa:16:3e:0c:86:07 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17c3814d-c11f-4032-a891-4cbdf3f7c065', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '639365a4-e1d2-4459-8bf9-30c8c4273027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e18eae52-141b-40a9-9ed0-d52b1399a23b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=21744784-e35a-46d5-ac8c-8f9783a0a387) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:04:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.821 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 21744784-e35a-46d5-ac8c-8f9783a0a387 in datapath 084e5836-e0e4-4328-8e03-dfcdcd227a7c bound to our chassis
Dec 13 09:04:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.824 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 084e5836-e0e4-4328-8e03-dfcdcd227a7c
Dec 13 09:04:56 compute-0 systemd-udevd[384803]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:04:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.846 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6f0646-19e4-4f86-bbaf-aca2147c43d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.847 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap084e5836-e1 in ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:04:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.849 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap084e5836-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:04:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.849 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7591c125-5bbe-416a-921f-7c64c254ea6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.850 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7c9217-c725-433a-8fd1-efd8ed9c186d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:56 compute-0 systemd-machined[210538]: New machine qemu-165-instance-00000087.
Dec 13 09:04:56 compute-0 NetworkManager[50376]: <info>  [1765616696.8687] device (tap21744784-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:04:56 compute-0 NetworkManager[50376]: <info>  [1765616696.8694] device (tap21744784-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:04:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.870 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[48a853f6-5443-4153-a487-a7a87ab5e819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:56 compute-0 systemd[1]: Started Virtual Machine qemu-165-instance-00000087.
Dec 13 09:04:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.902 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[018cd9fd-1632-4381-b792-dd0282bf3aa2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:56 compute-0 ovn_controller[148476]: 2025-12-13T09:04:56Z|01382|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 ovn-installed in OVS
Dec 13 09:04:56 compute-0 ovn_controller[148476]: 2025-12-13T09:04:56Z|01383|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 up in Southbound
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.934 248514 DEBUG nova.network.neutron [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updated VIF entry in instance network info cache for port 21744784-e35a-46d5-ac8c-8f9783a0a387. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.934 248514 DEBUG nova.network.neutron [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:04:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.940 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7c5e26e2-58bf-44d3-9401-6ca6e90e37d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.946 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9b44cb4b-67f3-4c49-b06f-36db4f06d273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:56 compute-0 NetworkManager[50376]: <info>  [1765616696.9477] manager: (tap084e5836-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/570)
Dec 13 09:04:56 compute-0 nova_compute[248510]: 2025-12-13 09:04:56.956 248514 DEBUG oslo_concurrency.lockutils [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.004 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dbacf862-9c44-4547-a607-44d257768ca2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.008 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c44d74-09bd-491c-83b1-61e6611c9622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:57 compute-0 NetworkManager[50376]: <info>  [1765616697.0459] device (tap084e5836-e0): carrier: link connected
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.055 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0b8f2f-7ad0-41cf-94a8-46829da2edc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.081 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b56384b5-9253-4c36-aed3-a5c085135ae3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap084e5836-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:f7:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 927426, 'reachable_time': 39915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384836, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.107 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62f10731-53ea-46c2-8060-73b0ac62b704]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:f759'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 927426, 'tstamp': 927426}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384837, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.136 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f06f28e-b682-496a-b3f9-530f82af2ac9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap084e5836-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:f7:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 927426, 'reachable_time': 39915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384851, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.181 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[01d74e7b-7cc7-4fe8-a94b-25d84642390e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.259 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9bfa2a-9c52-461b-968a-018f763aff4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.262 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap084e5836-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.263 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:04:57 compute-0 nova_compute[248510]: 2025-12-13 09:04:57.264 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616697.2632244, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.264 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap084e5836-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:57 compute-0 nova_compute[248510]: 2025-12-13 09:04:57.265 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Started (Lifecycle Event)
Dec 13 09:04:57 compute-0 NetworkManager[50376]: <info>  [1765616697.2670] manager: (tap084e5836-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/571)
Dec 13 09:04:57 compute-0 kernel: tap084e5836-e0: entered promiscuous mode
Dec 13 09:04:57 compute-0 nova_compute[248510]: 2025-12-13 09:04:57.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.269 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap084e5836-e0, col_values=(('external_ids', {'iface-id': '2953b79f-9235-4cc1-ad54-d75b960374dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:04:57 compute-0 ovn_controller[148476]: 2025-12-13T09:04:57Z|01384|binding|INFO|Releasing lport 2953b79f-9235-4cc1-ad54-d75b960374dc from this chassis (sb_readonly=0)
Dec 13 09:04:57 compute-0 nova_compute[248510]: 2025-12-13 09:04:57.292 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.293 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/084e5836-e0e4-4328-8e03-dfcdcd227a7c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/084e5836-e0e4-4328-8e03-dfcdcd227a7c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.294 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[27128b2e-1b0b-408b-962e-1a1c1c5755f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.295 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-084e5836-e0e4-4328-8e03-dfcdcd227a7c
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/084e5836-e0e4-4328-8e03-dfcdcd227a7c.pid.haproxy
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 084e5836-e0e4-4328-8e03-dfcdcd227a7c
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:04:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.297 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'env', 'PROCESS_TAG=haproxy-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/084e5836-e0e4-4328-8e03-dfcdcd227a7c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:04:57 compute-0 nova_compute[248510]: 2025-12-13 09:04:57.298 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:04:57 compute-0 nova_compute[248510]: 2025-12-13 09:04:57.304 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616697.263509, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:04:57 compute-0 nova_compute[248510]: 2025-12-13 09:04:57.305 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Paused (Lifecycle Event)
Dec 13 09:04:57 compute-0 nova_compute[248510]: 2025-12-13 09:04:57.329 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:04:57 compute-0 nova_compute[248510]: 2025-12-13 09:04:57.335 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:04:57 compute-0 nova_compute[248510]: 2025-12-13 09:04:57.372 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:04:57 compute-0 ceph-mon[76537]: pgmap v3115: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Dec 13 09:04:57 compute-0 podman[384912]: 2025-12-13 09:04:57.795155248 +0000 UTC m=+0.087627896 container create cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 09:04:57 compute-0 systemd[1]: Started libpod-conmon-cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc.scope.
Dec 13 09:04:57 compute-0 podman[384912]: 2025-12-13 09:04:57.752425338 +0000 UTC m=+0.044898046 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:04:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab414d9614e7f964aa77c46b42174cdefbb9b6ee0ab6eaef22a54e3e3265f7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:04:57 compute-0 podman[384912]: 2025-12-13 09:04:57.880133237 +0000 UTC m=+0.172605905 container init cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 09:04:57 compute-0 podman[384912]: 2025-12-13 09:04:57.88622223 +0000 UTC m=+0.178694878 container start cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 09:04:57 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [NOTICE]   (384931) : New worker (384933) forked
Dec 13 09:04:57 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [NOTICE]   (384931) : Loading success.
Dec 13 09:04:58 compute-0 nova_compute[248510]: 2025-12-13 09:04:58.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:04:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3116: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Dec 13 09:04:58 compute-0 ceph-mon[76537]: pgmap v3116: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.234 248514 DEBUG nova.compute.manager [req-8ed20ec5-b41a-44ea-ac2c-034b560b2f68 req-a809e081-1bee-45d6-aefc-cfcdf38aa845 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.235 248514 DEBUG oslo_concurrency.lockutils [req-8ed20ec5-b41a-44ea-ac2c-034b560b2f68 req-a809e081-1bee-45d6-aefc-cfcdf38aa845 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.235 248514 DEBUG oslo_concurrency.lockutils [req-8ed20ec5-b41a-44ea-ac2c-034b560b2f68 req-a809e081-1bee-45d6-aefc-cfcdf38aa845 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.236 248514 DEBUG oslo_concurrency.lockutils [req-8ed20ec5-b41a-44ea-ac2c-034b560b2f68 req-a809e081-1bee-45d6-aefc-cfcdf38aa845 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.236 248514 DEBUG nova.compute.manager [req-8ed20ec5-b41a-44ea-ac2c-034b560b2f68 req-a809e081-1bee-45d6-aefc-cfcdf38aa845 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Processing event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.237 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.244 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616699.2442088, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.245 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Resumed (Lifecycle Event)
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.248 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.253 248514 INFO nova.virt.libvirt.driver [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance spawned successfully.
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.254 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.276 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.283 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.293 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.294 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.294 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.295 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.296 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.297 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.310 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.364 248514 INFO nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Took 11.32 seconds to spawn the instance on the hypervisor.
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.364 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.440 248514 INFO nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Took 12.58 seconds to build instance.
Dec 13 09:04:59 compute-0 nova_compute[248510]: 2025-12-13 09:04:59.465 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3117: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Dec 13 09:05:00 compute-0 nova_compute[248510]: 2025-12-13 09:05:00.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.102770) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616701102824, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 851, "num_deletes": 256, "total_data_size": 1183193, "memory_usage": 1205024, "flush_reason": "Manual Compaction"}
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616701112095, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 1162689, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61343, "largest_seqno": 62193, "table_properties": {"data_size": 1158358, "index_size": 1982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9371, "raw_average_key_size": 19, "raw_value_size": 1149747, "raw_average_value_size": 2346, "num_data_blocks": 88, "num_entries": 490, "num_filter_entries": 490, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616628, "oldest_key_time": 1765616628, "file_creation_time": 1765616701, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 9384 microseconds, and 3915 cpu microseconds.
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.112148) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 1162689 bytes OK
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.112170) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.114052) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.114084) EVENT_LOG_v1 {"time_micros": 1765616701114063, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.114104) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1178972, prev total WAL file size 1178972, number of live WAL files 2.
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.114539) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353131' seq:72057594037927935, type:22 .. '6C6F676D0032373633' seq:0, type:0; will stop at (end)
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(1135KB)], [143(10MB)]
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616701114598, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 11750753, "oldest_snapshot_seqno": -1}
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 8213 keys, 11635831 bytes, temperature: kUnknown
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616701193312, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 11635831, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11581317, "index_size": 32897, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 215351, "raw_average_key_size": 26, "raw_value_size": 11435016, "raw_average_value_size": 1392, "num_data_blocks": 1281, "num_entries": 8213, "num_filter_entries": 8213, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616701, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.193621) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 11635831 bytes
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.195482) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.1 rd, 147.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.1 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(20.1) write-amplify(10.0) OK, records in: 8736, records dropped: 523 output_compression: NoCompression
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.195499) EVENT_LOG_v1 {"time_micros": 1765616701195490, "job": 88, "event": "compaction_finished", "compaction_time_micros": 78807, "compaction_time_cpu_micros": 30971, "output_level": 6, "num_output_files": 1, "total_output_size": 11635831, "num_input_records": 8736, "num_output_records": 8213, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616701195821, "job": 88, "event": "table_file_deletion", "file_number": 145}
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616701198406, "job": 88, "event": "table_file_deletion", "file_number": 143}
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.114490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.198446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.198451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.198453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.198456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:05:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.198459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.224 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.225 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.251 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.346 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.347 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.355 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.356 248514 INFO nova.compute.claims [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.366 248514 DEBUG nova.compute.manager [req-d8f916ad-55cd-4e57-947d-45fed406f884 req-eaeb3e89-98fc-4b5c-b357-a2b9a4e49f3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.367 248514 DEBUG oslo_concurrency.lockutils [req-d8f916ad-55cd-4e57-947d-45fed406f884 req-eaeb3e89-98fc-4b5c-b357-a2b9a4e49f3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.367 248514 DEBUG oslo_concurrency.lockutils [req-d8f916ad-55cd-4e57-947d-45fed406f884 req-eaeb3e89-98fc-4b5c-b357-a2b9a4e49f3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.368 248514 DEBUG oslo_concurrency.lockutils [req-d8f916ad-55cd-4e57-947d-45fed406f884 req-eaeb3e89-98fc-4b5c-b357-a2b9a4e49f3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.368 248514 DEBUG nova.compute.manager [req-d8f916ad-55cd-4e57-947d-45fed406f884 req-eaeb3e89-98fc-4b5c-b357-a2b9a4e49f3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.369 248514 WARNING nova.compute.manager [req-d8f916ad-55cd-4e57-947d-45fed406f884 req-eaeb3e89-98fc-4b5c-b357-a2b9a4e49f3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state active and task_state None.
Dec 13 09:05:01 compute-0 nova_compute[248510]: 2025-12-13 09:05:01.506 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:05:01 compute-0 ceph-mon[76537]: pgmap v3117: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Dec 13 09:05:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:05:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1055904252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.082 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.091 248514 DEBUG nova.compute.provider_tree [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.116 248514 DEBUG nova.scheduler.client.report [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.149 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.150 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.208 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.209 248514 DEBUG nova.network.neutron [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.238 248514 INFO nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.267 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.396 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.398 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.399 248514 INFO nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Creating image(s)
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.437 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.475 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.504 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.508 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:05:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3118: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 898 KiB/s wr, 22 op/s
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.555 248514 DEBUG nova.policy [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.602 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.603 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.604 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.604 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1055904252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.634 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.642 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bb154aa9-7029-4193-8d00-38e8e552a382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:05:02 compute-0 nova_compute[248510]: 2025-12-13 09:05:02.948 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bb154aa9-7029-4193-8d00-38e8e552a382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:05:03 compute-0 nova_compute[248510]: 2025-12-13 09:05:03.034 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:05:03 compute-0 NetworkManager[50376]: <info>  [1765616703.1436] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Dec 13 09:05:03 compute-0 NetworkManager[50376]: <info>  [1765616703.1443] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Dec 13 09:05:03 compute-0 nova_compute[248510]: 2025-12-13 09:05:03.147 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:03 compute-0 nova_compute[248510]: 2025-12-13 09:05:03.165 248514 DEBUG nova.objects.instance [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid bb154aa9-7029-4193-8d00-38e8e552a382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:05:03 compute-0 nova_compute[248510]: 2025-12-13 09:05:03.184 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:05:03 compute-0 nova_compute[248510]: 2025-12-13 09:05:03.184 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Ensure instance console log exists: /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:05:03 compute-0 nova_compute[248510]: 2025-12-13 09:05:03.185 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:03 compute-0 nova_compute[248510]: 2025-12-13 09:05:03.186 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:03 compute-0 nova_compute[248510]: 2025-12-13 09:05:03.186 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:03 compute-0 ovn_controller[148476]: 2025-12-13T09:05:03Z|01385|binding|INFO|Releasing lport 2953b79f-9235-4cc1-ad54-d75b960374dc from this chassis (sb_readonly=0)
Dec 13 09:05:03 compute-0 nova_compute[248510]: 2025-12-13 09:05:03.254 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:03 compute-0 nova_compute[248510]: 2025-12-13 09:05:03.265 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:03 compute-0 nova_compute[248510]: 2025-12-13 09:05:03.450 248514 DEBUG nova.network.neutron [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Successfully created port: f16b53b1-eab6-4458-976e-5cb112c77ef8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:05:03 compute-0 ceph-mon[76537]: pgmap v3118: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 898 KiB/s wr, 22 op/s
Dec 13 09:05:04 compute-0 nova_compute[248510]: 2025-12-13 09:05:04.526 248514 DEBUG nova.compute.manager [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-changed-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:04 compute-0 nova_compute[248510]: 2025-12-13 09:05:04.527 248514 DEBUG nova.compute.manager [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Refreshing instance network info cache due to event network-changed-21744784-e35a-46d5-ac8c-8f9783a0a387. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:05:04 compute-0 nova_compute[248510]: 2025-12-13 09:05:04.527 248514 DEBUG oslo_concurrency.lockutils [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:05:04 compute-0 nova_compute[248510]: 2025-12-13 09:05:04.528 248514 DEBUG oslo_concurrency.lockutils [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:05:04 compute-0 nova_compute[248510]: 2025-12-13 09:05:04.528 248514 DEBUG nova.network.neutron [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Refreshing network info cache for port 21744784-e35a-46d5-ac8c-8f9783a0a387 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:05:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3119: 321 pgs: 321 active+clean; 125 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 96 op/s
Dec 13 09:05:04 compute-0 ceph-mon[76537]: pgmap v3119: 321 pgs: 321 active+clean; 125 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 96 op/s
Dec 13 09:05:05 compute-0 nova_compute[248510]: 2025-12-13 09:05:05.344 248514 DEBUG nova.network.neutron [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Successfully updated port: f16b53b1-eab6-4458-976e-5cb112c77ef8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:05:05 compute-0 nova_compute[248510]: 2025-12-13 09:05:05.365 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:05:05 compute-0 nova_compute[248510]: 2025-12-13 09:05:05.366 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:05:05 compute-0 nova_compute[248510]: 2025-12-13 09:05:05.366 248514 DEBUG nova.network.neutron [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:05:05 compute-0 nova_compute[248510]: 2025-12-13 09:05:05.466 248514 DEBUG nova.compute.manager [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-changed-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:05 compute-0 nova_compute[248510]: 2025-12-13 09:05:05.466 248514 DEBUG nova.compute.manager [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Refreshing instance network info cache due to event network-changed-f16b53b1-eab6-4458-976e-5cb112c77ef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:05:05 compute-0 nova_compute[248510]: 2025-12-13 09:05:05.467 248514 DEBUG oslo_concurrency.lockutils [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:05:05 compute-0 nova_compute[248510]: 2025-12-13 09:05:05.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:05 compute-0 nova_compute[248510]: 2025-12-13 09:05:05.859 248514 DEBUG nova.network.neutron [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:05:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:05:06 compute-0 nova_compute[248510]: 2025-12-13 09:05:06.277 248514 DEBUG nova.network.neutron [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updated VIF entry in instance network info cache for port 21744784-e35a-46d5-ac8c-8f9783a0a387. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:05:06 compute-0 nova_compute[248510]: 2025-12-13 09:05:06.277 248514 DEBUG nova.network.neutron [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:05:06 compute-0 nova_compute[248510]: 2025-12-13 09:05:06.302 248514 DEBUG oslo_concurrency.lockutils [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:05:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3120: 321 pgs: 321 active+clean; 125 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 84 op/s
Dec 13 09:05:06 compute-0 nova_compute[248510]: 2025-12-13 09:05:06.996 248514 DEBUG nova.network.neutron [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updating instance_info_cache with network_info: [{"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.023 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.024 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Instance network_info: |[{"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.024 248514 DEBUG oslo_concurrency.lockutils [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.024 248514 DEBUG nova.network.neutron [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Refreshing network info cache for port f16b53b1-eab6-4458-976e-5cb112c77ef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.027 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Start _get_guest_xml network_info=[{"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.033 248514 WARNING nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.038 248514 DEBUG nova.virt.libvirt.host [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.039 248514 DEBUG nova.virt.libvirt.host [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.047 248514 DEBUG nova.virt.libvirt.host [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.048 248514 DEBUG nova.virt.libvirt.host [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.048 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.049 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.049 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.049 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.050 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.050 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.050 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.050 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.051 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.051 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.051 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.052 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.054 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:05:07 compute-0 ceph-mon[76537]: pgmap v3120: 321 pgs: 321 active+clean; 125 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 84 op/s
Dec 13 09:05:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:05:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949943202' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.643 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.668 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:05:07 compute-0 nova_compute[248510]: 2025-12-13 09:05:07.673 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:05:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108130498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.295 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.297 248514 DEBUG nova.virt.libvirt.vif [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:04:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-908300967',display_name='tempest-TestNetworkBasicOps-server-908300967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-908300967',id=136,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFBmqAnv0oD+weSJxDWEwIjgt5HjSBoUiVoV5CSnaOj2JsKkOoULuMXZQ1EHyUAr0FnFFBRSfsXFNYjXLHRq9tDCS1GQZDyMWxvCAc8xSVkqfYg8qd+Wbjc8cJKV8CheTw==',key_name='tempest-TestNetworkBasicOps-403784450',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-8elo0a7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:05:02Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=bb154aa9-7029-4193-8d00-38e8e552a382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.298 248514 DEBUG nova.network.os_vif_util [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.299 248514 DEBUG nova.network.os_vif_util [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.300 248514 DEBUG nova.objects.instance [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid bb154aa9-7029-4193-8d00-38e8e552a382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.323 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:05:08 compute-0 nova_compute[248510]:   <uuid>bb154aa9-7029-4193-8d00-38e8e552a382</uuid>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   <name>instance-00000088</name>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <nova:name>tempest-TestNetworkBasicOps-server-908300967</nova:name>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:05:07</nova:creationTime>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:05:08 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:05:08 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:05:08 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:05:08 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:05:08 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:05:08 compute-0 nova_compute[248510]:         <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 09:05:08 compute-0 nova_compute[248510]:         <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:05:08 compute-0 nova_compute[248510]:         <nova:port uuid="f16b53b1-eab6-4458-976e-5cb112c77ef8">
Dec 13 09:05:08 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <system>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <entry name="serial">bb154aa9-7029-4193-8d00-38e8e552a382</entry>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <entry name="uuid">bb154aa9-7029-4193-8d00-38e8e552a382</entry>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     </system>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   <os>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   </os>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   <features>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   </features>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/bb154aa9-7029-4193-8d00-38e8e552a382_disk">
Dec 13 09:05:08 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       </source>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:05:08 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/bb154aa9-7029-4193-8d00-38e8e552a382_disk.config">
Dec 13 09:05:08 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       </source>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:05:08 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:68:ba:c6"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <target dev="tapf16b53b1-ea"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/console.log" append="off"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <video>
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     </video>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:05:08 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:05:08 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:05:08 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:05:08 compute-0 nova_compute[248510]: </domain>
Dec 13 09:05:08 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.328 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Preparing to wait for external event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.329 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.329 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.329 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.330 248514 DEBUG nova.virt.libvirt.vif [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:04:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-908300967',display_name='tempest-TestNetworkBasicOps-server-908300967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-908300967',id=136,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFBmqAnv0oD+weSJxDWEwIjgt5HjSBoUiVoV5CSnaOj2JsKkOoULuMXZQ1EHyUAr0FnFFBRSfsXFNYjXLHRq9tDCS1GQZDyMWxvCAc8xSVkqfYg8qd+Wbjc8cJKV8CheTw==',key_name='tempest-TestNetworkBasicOps-403784450',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-8elo0a7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:05:02Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=bb154aa9-7029-4193-8d00-38e8e552a382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.330 248514 DEBUG nova.network.os_vif_util [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.331 248514 DEBUG nova.network.os_vif_util [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.331 248514 DEBUG os_vif [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.332 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.332 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.335 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf16b53b1-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.336 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf16b53b1-ea, col_values=(('external_ids', {'iface-id': 'f16b53b1-eab6-4458-976e-5cb112c77ef8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:ba:c6', 'vm-uuid': 'bb154aa9-7029-4193-8d00-38e8e552a382'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:08 compute-0 NetworkManager[50376]: <info>  [1765616708.3387] manager: (tapf16b53b1-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.347 248514 INFO os_vif [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea')
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.352 248514 DEBUG nova.network.neutron [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updated VIF entry in instance network info cache for port f16b53b1-eab6-4458-976e-5cb112c77ef8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.352 248514 DEBUG nova.network.neutron [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updating instance_info_cache with network_info: [{"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.371 248514 DEBUG oslo_concurrency.lockutils [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.413 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.414 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.414 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:68:ba:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.414 248514 INFO nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Using config drive
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.436 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:05:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3121: 321 pgs: 321 active+clean; 134 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 09:05:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3949943202' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:05:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2108130498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.868 248514 INFO nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Creating config drive at /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/disk.config
Dec 13 09:05:08 compute-0 nova_compute[248510]: 2025-12-13 09:05:08.872 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9edbuxxs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.042 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9edbuxxs" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.072 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.076 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/disk.config bb154aa9-7029-4193-8d00-38e8e552a382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.251 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/disk.config bb154aa9-7029-4193-8d00-38e8e552a382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.252 248514 INFO nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Deleting local config drive /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/disk.config because it was imported into RBD.
Dec 13 09:05:09 compute-0 kernel: tapf16b53b1-ea: entered promiscuous mode
Dec 13 09:05:09 compute-0 NetworkManager[50376]: <info>  [1765616709.3240] manager: (tapf16b53b1-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/575)
Dec 13 09:05:09 compute-0 ovn_controller[148476]: 2025-12-13T09:05:09Z|01386|binding|INFO|Claiming lport f16b53b1-eab6-4458-976e-5cb112c77ef8 for this chassis.
Dec 13 09:05:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:05:09
Dec 13 09:05:09 compute-0 ovn_controller[148476]: 2025-12-13T09:05:09Z|01387|binding|INFO|f16b53b1-eab6-4458-976e-5cb112c77ef8: Claiming fa:16:3e:68:ba:c6 10.100.0.7
Dec 13 09:05:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:05:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:05:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'vms', 'volumes', '.rgw.root', 'default.rgw.control', 'default.rgw.meta']
Dec 13 09:05:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.370 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.382 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:ba:c6 10.100.0.7'], port_security=['fa:16:3e:68:ba:c6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bb154aa9-7029-4193-8d00-38e8e552a382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-065ccb44-e305-43f2-ab7a-728a65062da2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20aff6be-cf34-4af0-9288-26483b58fb2e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a94dc2d-ac12-4530-9899-bb1ddf6ed47b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f16b53b1-eab6-4458-976e-5cb112c77ef8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.385 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f16b53b1-eab6-4458-976e-5cb112c77ef8 in datapath 065ccb44-e305-43f2-ab7a-728a65062da2 bound to our chassis
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.389 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 065ccb44-e305-43f2-ab7a-728a65062da2
Dec 13 09:05:09 compute-0 ovn_controller[148476]: 2025-12-13T09:05:09Z|01388|binding|INFO|Setting lport f16b53b1-eab6-4458-976e-5cb112c77ef8 ovn-installed in OVS
Dec 13 09:05:09 compute-0 ovn_controller[148476]: 2025-12-13T09:05:09Z|01389|binding|INFO|Setting lport f16b53b1-eab6-4458-976e-5cb112c77ef8 up in Southbound
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.401 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bda51c0c-258b-4a70-8416-cc70667e9835]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.402 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap065ccb44-e1 in ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.404 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap065ccb44-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.405 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7274ef0c-128e-431c-98fc-b5106f2c537e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.406 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[40b5c442-2829-4f52-9934-a42bef337880]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 systemd-udevd[385269]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:05:09 compute-0 systemd-machined[210538]: New machine qemu-166-instance-00000088.
Dec 13 09:05:09 compute-0 NetworkManager[50376]: <info>  [1765616709.4257] device (tapf16b53b1-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:05:09 compute-0 systemd[1]: Started Virtual Machine qemu-166-instance-00000088.
Dec 13 09:05:09 compute-0 NetworkManager[50376]: <info>  [1765616709.4270] device (tapf16b53b1-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.426 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[586af8d0-dc48-4f1c-9f37-a6abaf262302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.444 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[56e5c614-7ff4-4947-9d83-61e0e8b44482]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.482 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d7bb55-9b60-46b8-9afa-4a31fb935705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.490 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2c20b6-7f98-4ee4-81be-bc76e1020e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 NetworkManager[50376]: <info>  [1765616709.4918] manager: (tap065ccb44-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/576)
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.538 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b06be451-9cdc-40f3-a42d-d7da7ce732b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.542 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[79d976b2-f9b3-4f77-aa07-d57bb37a5012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 NetworkManager[50376]: <info>  [1765616709.5689] device (tap065ccb44-e0): carrier: link connected
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.578 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[910fe7e5-f513-4a9b-8a75-301a8b36ffef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.597 248514 DEBUG nova.compute.manager [req-c3841a69-eebd-4876-94a2-66017fb0b764 req-4a036084-aa32-4887-b3a5-96d55ec88457 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.598 248514 DEBUG oslo_concurrency.lockutils [req-c3841a69-eebd-4876-94a2-66017fb0b764 req-4a036084-aa32-4887-b3a5-96d55ec88457 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.599 248514 DEBUG oslo_concurrency.lockutils [req-c3841a69-eebd-4876-94a2-66017fb0b764 req-4a036084-aa32-4887-b3a5-96d55ec88457 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.600 248514 DEBUG oslo_concurrency.lockutils [req-c3841a69-eebd-4876-94a2-66017fb0b764 req-4a036084-aa32-4887-b3a5-96d55ec88457 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.600 248514 DEBUG nova.compute.manager [req-c3841a69-eebd-4876-94a2-66017fb0b764 req-4a036084-aa32-4887-b3a5-96d55ec88457 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Processing event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.618 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94111080-b269-46e4-871b-b120ea8f95dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap065ccb44-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:8b:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 928678, 'reachable_time': 36100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385301, 'error': None, 'target': 'ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 ceph-mon[76537]: pgmap v3121: 321 pgs: 321 active+clean; 134 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.638 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8276bc8-75c6-45ff-aa4a-4be38ab441c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:8b71'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 928678, 'tstamp': 928678}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385302, 'error': None, 'target': 'ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.668 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e12b0a9a-0a70-482e-8802-308f5013c957]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap065ccb44-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:8b:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 928678, 'reachable_time': 36100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385303, 'error': None, 'target': 'ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.705 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e41fff1c-2ddc-495c-b498-17b18f621424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.765 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9fdd7ce-2465-45af-a9b3-857f3f39c308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.766 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap065ccb44-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.767 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.767 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap065ccb44-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:09 compute-0 NetworkManager[50376]: <info>  [1765616709.7700] manager: (tap065ccb44-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/577)
Dec 13 09:05:09 compute-0 kernel: tap065ccb44-e0: entered promiscuous mode
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.772 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap065ccb44-e0, col_values=(('external_ids', {'iface-id': '648ca59e-493f-4103-aa42-8d4449fe6f9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:09 compute-0 ovn_controller[148476]: 2025-12-13T09:05:09Z|01390|binding|INFO|Releasing lport 648ca59e-493f-4103-aa42-8d4449fe6f9d from this chassis (sb_readonly=0)
Dec 13 09:05:09 compute-0 nova_compute[248510]: 2025-12-13 09:05:09.797 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.797 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/065ccb44-e305-43f2-ab7a-728a65062da2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/065ccb44-e305-43f2-ab7a-728a65062da2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.799 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9eab321c-a833-445a-9912-1304179d8987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.800 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-065ccb44-e305-43f2-ab7a-728a65062da2
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/065ccb44-e305-43f2-ab7a-728a65062da2.pid.haproxy
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 065ccb44-e305-43f2-ab7a-728a65062da2
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:05:09 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.801 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2', 'env', 'PROCESS_TAG=haproxy-065ccb44-e305-43f2-ab7a-728a65062da2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/065ccb44-e305-43f2-ab7a-728a65062da2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:05:10 compute-0 podman[385335]: 2025-12-13 09:05:10.22863371 +0000 UTC m=+0.052350543 container create d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 09:05:10 compute-0 systemd[1]: Started libpod-conmon-d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec.scope.
Dec 13 09:05:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa95f491df7395f5ed1e1bdb4690670b794dc054243ab1a1438a6d32051c9c18/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:10 compute-0 podman[385335]: 2025-12-13 09:05:10.205639724 +0000 UTC m=+0.029356577 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:05:10 compute-0 podman[385335]: 2025-12-13 09:05:10.311742222 +0000 UTC m=+0.135459055 container init d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 09:05:10 compute-0 podman[385335]: 2025-12-13 09:05:10.32124482 +0000 UTC m=+0.144961653 container start d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:05:10 compute-0 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [NOTICE]   (385373) : New worker (385389) forked
Dec 13 09:05:10 compute-0 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [NOTICE]   (385373) : Loading success.
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.469 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.470 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616710.468951, bb154aa9-7029-4193-8d00-38e8e552a382 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.470 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] VM Started (Lifecycle Event)
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.473 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.476 248514 INFO nova.virt.libvirt.driver [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Instance spawned successfully.
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.476 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.502 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.515 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.520 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.521 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.521 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.522 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.522 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.522 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3122: 321 pgs: 321 active+clean; 134 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.555 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.556 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616710.4699361, bb154aa9-7029-4193-8d00-38e8e552a382 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.556 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] VM Paused (Lifecycle Event)
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.591 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.595 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616710.4724905, bb154aa9-7029-4193-8d00-38e8e552a382 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.595 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] VM Resumed (Lifecycle Event)
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.612 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.615 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.619 248514 INFO nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Took 8.22 seconds to spawn the instance on the hypervisor.
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.620 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:05:10 compute-0 ceph-mon[76537]: pgmap v3122: 321 pgs: 321 active+clean; 134 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.658 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.742 248514 INFO nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Took 9.43 seconds to build instance.
Dec 13 09:05:10 compute-0 nova_compute[248510]: 2025-12-13 09:05:10.774 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:05:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:05:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:05:11 compute-0 nova_compute[248510]: 2025-12-13 09:05:11.728 248514 DEBUG nova.compute.manager [req-229a6fa0-b1ba-44dc-a1ad-ff620a800696 req-2ae3a704-96ff-4f83-a51a-bb29f8ab88fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:11 compute-0 nova_compute[248510]: 2025-12-13 09:05:11.728 248514 DEBUG oslo_concurrency.lockutils [req-229a6fa0-b1ba-44dc-a1ad-ff620a800696 req-2ae3a704-96ff-4f83-a51a-bb29f8ab88fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:11 compute-0 nova_compute[248510]: 2025-12-13 09:05:11.729 248514 DEBUG oslo_concurrency.lockutils [req-229a6fa0-b1ba-44dc-a1ad-ff620a800696 req-2ae3a704-96ff-4f83-a51a-bb29f8ab88fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:11 compute-0 nova_compute[248510]: 2025-12-13 09:05:11.729 248514 DEBUG oslo_concurrency.lockutils [req-229a6fa0-b1ba-44dc-a1ad-ff620a800696 req-2ae3a704-96ff-4f83-a51a-bb29f8ab88fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:11 compute-0 nova_compute[248510]: 2025-12-13 09:05:11.729 248514 DEBUG nova.compute.manager [req-229a6fa0-b1ba-44dc-a1ad-ff620a800696 req-2ae3a704-96ff-4f83-a51a-bb29f8ab88fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] No waiting events found dispatching network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:11 compute-0 nova_compute[248510]: 2025-12-13 09:05:11.729 248514 WARNING nova.compute.manager [req-229a6fa0-b1ba-44dc-a1ad-ff620a800696 req-2ae3a704-96ff-4f83-a51a-bb29f8ab88fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received unexpected event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 for instance with vm_state active and task_state None.
Dec 13 09:05:11 compute-0 ovn_controller[148476]: 2025-12-13T09:05:11Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:86:07 10.100.0.13
Dec 13 09:05:11 compute-0 ovn_controller[148476]: 2025-12-13T09:05:11Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:86:07 10.100.0.13
Dec 13 09:05:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3123: 321 pgs: 321 active+clean; 134 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Dec 13 09:05:13 compute-0 nova_compute[248510]: 2025-12-13 09:05:13.178 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:13 compute-0 nova_compute[248510]: 2025-12-13 09:05:13.338 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:13 compute-0 ceph-mon[76537]: pgmap v3123: 321 pgs: 321 active+clean; 134 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Dec 13 09:05:13 compute-0 nova_compute[248510]: 2025-12-13 09:05:13.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:05:14 compute-0 podman[385408]: 2025-12-13 09:05:14.013194349 +0000 UTC m=+0.101676788 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:05:14 compute-0 podman[385409]: 2025-12-13 09:05:14.028734999 +0000 UTC m=+0.098889459 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:05:14 compute-0 podman[385415]: 2025-12-13 09:05:14.038859552 +0000 UTC m=+0.103970516 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 13 09:05:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3124: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Dec 13 09:05:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:05:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/402574808' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:05:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:05:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/402574808' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:05:15 compute-0 ceph-mon[76537]: pgmap v3124: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Dec 13 09:05:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/402574808' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:05:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/402574808' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:05:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:05:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3125: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 154 op/s
Dec 13 09:05:16 compute-0 ceph-mon[76537]: pgmap v3125: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 154 op/s
Dec 13 09:05:17 compute-0 nova_compute[248510]: 2025-12-13 09:05:17.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:05:17 compute-0 nova_compute[248510]: 2025-12-13 09:05:17.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:05:17 compute-0 nova_compute[248510]: 2025-12-13 09:05:17.798 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:05:17 compute-0 nova_compute[248510]: 2025-12-13 09:05:17.798 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:05:17 compute-0 nova_compute[248510]: 2025-12-13 09:05:17.877 248514 INFO nova.compute.manager [None req-c45334f2-af22-4605-b023-637047f21221 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Get console output
Dec 13 09:05:17 compute-0 nova_compute[248510]: 2025-12-13 09:05:17.885 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.050 248514 DEBUG nova.compute.manager [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-changed-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.051 248514 DEBUG nova.compute.manager [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Refreshing instance network info cache due to event network-changed-f16b53b1-eab6-4458-976e-5cb112c77ef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.051 248514 DEBUG oslo_concurrency.lockutils [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.051 248514 DEBUG oslo_concurrency.lockutils [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.051 248514 DEBUG nova.network.neutron [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Refreshing network info cache for port f16b53b1-eab6-4458-976e-5cb112c77ef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.185 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.262 248514 DEBUG nova.objects.instance [None req-9aa5701a-b47b-483e-b99e-78c50f8e4e21 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.298 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616718.2978554, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.298 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Paused (Lifecycle Event)
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.326 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.333 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:18 compute-0 nova_compute[248510]: 2025-12-13 09:05:18.360 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] During sync_power_state the instance has a pending task (suspending). Skip.
Dec 13 09:05:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3126: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 154 op/s
Dec 13 09:05:19 compute-0 ceph-mon[76537]: pgmap v3126: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 154 op/s
Dec 13 09:05:19 compute-0 kernel: tap21744784-e3 (unregistering): left promiscuous mode
Dec 13 09:05:19 compute-0 NetworkManager[50376]: <info>  [1765616719.1190] device (tap21744784-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01391|binding|INFO|Releasing lport 21744784-e35a-46d5-ac8c-8f9783a0a387 from this chassis (sb_readonly=0)
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01392|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 down in Southbound
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01393|binding|INFO|Removing iface tap21744784-e3 ovn-installed in OVS
Dec 13 09:05:19 compute-0 nova_compute[248510]: 2025-12-13 09:05:19.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.140 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:86:07 10.100.0.13'], port_security=['fa:16:3e:0c:86:07 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17c3814d-c11f-4032-a891-4cbdf3f7c065', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '639365a4-e1d2-4459-8bf9-30c8c4273027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e18eae52-141b-40a9-9ed0-d52b1399a23b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=21744784-e35a-46d5-ac8c-8f9783a0a387) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.142 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 21744784-e35a-46d5-ac8c-8f9783a0a387 in datapath 084e5836-e0e4-4328-8e03-dfcdcd227a7c unbound from our chassis
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.143 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 084e5836-e0e4-4328-8e03-dfcdcd227a7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.145 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[decbe2b1-ae78-4e66-8d5d-da426d532f44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.148 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c namespace which is not needed anymore
Dec 13 09:05:19 compute-0 nova_compute[248510]: 2025-12-13 09:05:19.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:19 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000087.scope: Deactivated successfully.
Dec 13 09:05:19 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000087.scope: Consumed 12.506s CPU time.
Dec 13 09:05:19 compute-0 systemd-machined[210538]: Machine qemu-165-instance-00000087 terminated.
Dec 13 09:05:19 compute-0 kernel: tap21744784-e3: entered promiscuous mode
Dec 13 09:05:19 compute-0 NetworkManager[50376]: <info>  [1765616719.3112] manager: (tap21744784-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/578)
Dec 13 09:05:19 compute-0 nova_compute[248510]: 2025-12-13 09:05:19.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01394|binding|INFO|Claiming lport 21744784-e35a-46d5-ac8c-8f9783a0a387 for this chassis.
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01395|binding|INFO|21744784-e35a-46d5-ac8c-8f9783a0a387: Claiming fa:16:3e:0c:86:07 10.100.0.13
Dec 13 09:05:19 compute-0 kernel: tap21744784-e3 (unregistering): left promiscuous mode
Dec 13 09:05:19 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [NOTICE]   (384931) : haproxy version is 2.8.14-c23fe91
Dec 13 09:05:19 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [NOTICE]   (384931) : path to executable is /usr/sbin/haproxy
Dec 13 09:05:19 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [WARNING]  (384931) : Exiting Master process...
Dec 13 09:05:19 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [WARNING]  (384931) : Exiting Master process...
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.336 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:86:07 10.100.0.13'], port_security=['fa:16:3e:0c:86:07 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17c3814d-c11f-4032-a891-4cbdf3f7c065', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '639365a4-e1d2-4459-8bf9-30c8c4273027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e18eae52-141b-40a9-9ed0-d52b1399a23b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=21744784-e35a-46d5-ac8c-8f9783a0a387) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:05:19 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [ALERT]    (384931) : Current worker (384933) exited with code 143 (Terminated)
Dec 13 09:05:19 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [WARNING]  (384931) : All workers exited. Exiting... (0)
Dec 13 09:05:19 compute-0 systemd[1]: libpod-cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc.scope: Deactivated successfully.
Dec 13 09:05:19 compute-0 conmon[384927]: conmon cf1b23cdfc4dbbf79f4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc.scope/container/memory.events
Dec 13 09:05:19 compute-0 nova_compute[248510]: 2025-12-13 09:05:19.350 248514 DEBUG nova.compute.manager [None req-9aa5701a-b47b-483e-b99e-78c50f8e4e21 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01396|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 ovn-installed in OVS
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01397|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 up in Southbound
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01398|binding|INFO|Releasing lport 21744784-e35a-46d5-ac8c-8f9783a0a387 from this chassis (sb_readonly=1)
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01399|if_status|INFO|Dropped 4 log messages in last 250 seconds (most recently, 245 seconds ago) due to excessive rate
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01400|if_status|INFO|Not setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 down as sb is readonly
Dec 13 09:05:19 compute-0 nova_compute[248510]: 2025-12-13 09:05:19.353 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01401|binding|INFO|Removing iface tap21744784-e3 ovn-installed in OVS
Dec 13 09:05:19 compute-0 nova_compute[248510]: 2025-12-13 09:05:19.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:19 compute-0 podman[385496]: 2025-12-13 09:05:19.357545759 +0000 UTC m=+0.083032622 container died cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01402|binding|INFO|Releasing lport 21744784-e35a-46d5-ac8c-8f9783a0a387 from this chassis (sb_readonly=0)
Dec 13 09:05:19 compute-0 ovn_controller[148476]: 2025-12-13T09:05:19Z|01403|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 down in Southbound
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.366 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:86:07 10.100.0.13'], port_security=['fa:16:3e:0c:86:07 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17c3814d-c11f-4032-a891-4cbdf3f7c065', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '639365a4-e1d2-4459-8bf9-30c8c4273027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e18eae52-141b-40a9-9ed0-d52b1399a23b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=21744784-e35a-46d5-ac8c-8f9783a0a387) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:05:19 compute-0 nova_compute[248510]: 2025-12-13 09:05:19.381 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc-userdata-shm.mount: Deactivated successfully.
Dec 13 09:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-bab414d9614e7f964aa77c46b42174cdefbb9b6ee0ab6eaef22a54e3e3265f7e-merged.mount: Deactivated successfully.
Dec 13 09:05:19 compute-0 podman[385496]: 2025-12-13 09:05:19.423988594 +0000 UTC m=+0.149475477 container cleanup cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 09:05:19 compute-0 systemd[1]: libpod-conmon-cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc.scope: Deactivated successfully.
Dec 13 09:05:19 compute-0 podman[385533]: 2025-12-13 09:05:19.529816885 +0000 UTC m=+0.058033305 container remove cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.539 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce93cf9-d24c-49c9-aaa9-1db5c65286ee]: (4, ('Sat Dec 13 09:05:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c (cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc)\ncf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc\nSat Dec 13 09:05:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c (cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc)\ncf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.543 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9521fb9d-8ad0-4749-bd5a-b36c0b282899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.546 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap084e5836-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:19 compute-0 nova_compute[248510]: 2025-12-13 09:05:19.548 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:19 compute-0 kernel: tap084e5836-e0: left promiscuous mode
Dec 13 09:05:19 compute-0 nova_compute[248510]: 2025-12-13 09:05:19.585 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.589 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6e667e-eddf-4679-9d20-42324d241d07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.612 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8562cef7-ce90-4b44-b92f-d09fda9174d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.614 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b2e41c-2594-4633-8b12-98f8d12e4503]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.644 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d92c1a-b035-4037-aff6-e40778a77df1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 927414, 'reachable_time': 17075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385552, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.648 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.648 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[edbb5dec-904e-4aea-a543-406de346a4cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.649 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 21744784-e35a-46d5-ac8c-8f9783a0a387 in datapath 084e5836-e0e4-4328-8e03-dfcdcd227a7c unbound from our chassis
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.651 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 084e5836-e0e4-4328-8e03-dfcdcd227a7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:05:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d084e5836\x2de0e4\x2d4328\x2d8e03\x2ddfcdcd227a7c.mount: Deactivated successfully.
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.654 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[292df783-09ad-46ec-9dd8-ed563b45d881]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.655 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 21744784-e35a-46d5-ac8c-8f9783a0a387 in datapath 084e5836-e0e4-4328-8e03-dfcdcd227a7c unbound from our chassis
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.658 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 084e5836-e0e4-4328-8e03-dfcdcd227a7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:05:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.659 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e7cdff-f545-4917-9fe8-e07959b5d203]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3127: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.802 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.802 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.803 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.803 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.803 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.804 248514 WARNING nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state None.
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.804 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.804 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.805 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.805 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.805 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.806 248514 WARNING nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state None.
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.806 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.806 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.807 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.807 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.807 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.808 248514 WARNING nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state None.
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.808 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.808 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.809 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.809 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.809 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.810 248514 WARNING nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state None.
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.810 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.810 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.811 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.811 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.812 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.812 248514 WARNING nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state None.
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.812 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.813 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.813 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.813 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.814 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.814 248514 WARNING nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state None.
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.936 248514 DEBUG nova.network.neutron [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updated VIF entry in instance network info cache for port f16b53b1-eab6-4458-976e-5cb112c77ef8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:05:20 compute-0 nova_compute[248510]: 2025-12-13 09:05:20.937 248514 DEBUG nova.network.neutron [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updating instance_info_cache with network_info: [{"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:05:21 compute-0 nova_compute[248510]: 2025-12-13 09:05:21.005 248514 DEBUG oslo_concurrency.lockutils [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:05:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:05:21 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011216364250275013 of space, bias 1.0, pg target 0.3364909275082504 quantized to 32 (current 32)
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006698180496316175 of space, bias 1.0, pg target 0.20094541488948525 quantized to 32 (current 32)
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:05:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:05:21 compute-0 ceph-mon[76537]: pgmap v3127: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Dec 13 09:05:22 compute-0 ovn_controller[148476]: 2025-12-13T09:05:22Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:ba:c6 10.100.0.7
Dec 13 09:05:22 compute-0 ovn_controller[148476]: 2025-12-13T09:05:22Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:ba:c6 10.100.0.7
Dec 13 09:05:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3128: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Dec 13 09:05:22 compute-0 ceph-mon[76537]: pgmap v3128: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Dec 13 09:05:22 compute-0 nova_compute[248510]: 2025-12-13 09:05:22.757 248514 INFO nova.compute.manager [None req-d7086825-e1fe-4a5f-afeb-cdd070d9e891 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Get console output
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.036 248514 INFO nova.compute.manager [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Resuming
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.038 248514 DEBUG nova.objects.instance [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'flavor' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.085 248514 DEBUG oslo_concurrency.lockutils [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.086 248514 DEBUG oslo_concurrency.lockutils [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.086 248514 DEBUG nova.network.neutron [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.184 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.342 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.984 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.985 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.985 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.986 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:05:23 compute-0 nova_compute[248510]: 2025-12-13 09:05:23.987 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:05:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3129: 321 pgs: 321 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Dec 13 09:05:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:05:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2738187052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:05:25 compute-0 ceph-mon[76537]: pgmap v3129: 321 pgs: 321 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Dec 13 09:05:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2738187052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.386 248514 DEBUG nova.network.neutron [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.394 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.414 248514 DEBUG oslo_concurrency.lockutils [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.423 248514 DEBUG nova.virt.libvirt.vif [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:04:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1148057715',display_name='tempest-TestNetworkAdvancedServerOps-server-1148057715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1148057715',id=135,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEs/ai9hGvppAaJZucTj3XKRO2Hw6GseMJO/eDqjojdlq1TGbhXoM0CcjiSd48S7VRrLpZa2ASSqmIp8ZUA6nKNWf+HEZDknZ93ek9rtzJ3ZoLckx8X8JxIdR5xCKRsULQ==',key_name='tempest-TestNetworkAdvancedServerOps-326216115',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:04:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a5ac50ap',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:05:19Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=17c3814d-c11f-4032-a891-4cbdf3f7c065,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.424 248514 DEBUG nova.network.os_vif_util [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.425 248514 DEBUG nova.network.os_vif_util [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.425 248514 DEBUG os_vif [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.426 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.427 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.431 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.431 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21744784-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.431 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21744784-e3, col_values=(('external_ids', {'iface-id': '21744784-e35a-46d5-ac8c-8f9783a0a387', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:86:07', 'vm-uuid': '17c3814d-c11f-4032-a891-4cbdf3f7c065'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.432 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.432 248514 INFO os_vif [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3')
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.455 248514 DEBUG nova.objects.instance [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.511 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.512 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:05:25 compute-0 kernel: tap21744784-e3: entered promiscuous mode
Dec 13 09:05:25 compute-0 NetworkManager[50376]: <info>  [1765616725.5433] manager: (tap21744784-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/579)
Dec 13 09:05:25 compute-0 ovn_controller[148476]: 2025-12-13T09:05:25Z|01404|binding|INFO|Claiming lport 21744784-e35a-46d5-ac8c-8f9783a0a387 for this chassis.
Dec 13 09:05:25 compute-0 ovn_controller[148476]: 2025-12-13T09:05:25Z|01405|binding|INFO|21744784-e35a-46d5-ac8c-8f9783a0a387: Claiming fa:16:3e:0c:86:07 10.100.0.13
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.581 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:25 compute-0 systemd-udevd[385589]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.590 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:86:07 10.100.0.13'], port_security=['fa:16:3e:0c:86:07 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17c3814d-c11f-4032-a891-4cbdf3f7c065', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '7', 'neutron:security_group_ids': '639365a4-e1d2-4459-8bf9-30c8c4273027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e18eae52-141b-40a9-9ed0-d52b1399a23b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=21744784-e35a-46d5-ac8c-8f9783a0a387) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.591 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 21744784-e35a-46d5-ac8c-8f9783a0a387 in datapath 084e5836-e0e4-4328-8e03-dfcdcd227a7c bound to our chassis
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.593 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 084e5836-e0e4-4328-8e03-dfcdcd227a7c
Dec 13 09:05:25 compute-0 ovn_controller[148476]: 2025-12-13T09:05:25Z|01406|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 ovn-installed in OVS
Dec 13 09:05:25 compute-0 ovn_controller[148476]: 2025-12-13T09:05:25Z|01407|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 up in Southbound
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.596 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:25 compute-0 NetworkManager[50376]: <info>  [1765616725.6054] device (tap21744784-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:05:25 compute-0 NetworkManager[50376]: <info>  [1765616725.6063] device (tap21744784-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2f48d4cd-3f18-4c6f-9654-e09accfabe28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.614 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap084e5836-e1 in ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.616 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap084e5836-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.616 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[51392abc-e0fd-4a70-95bc-5f565e5f4b27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.617 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0db26429-b28e-42d1-9d81-2557a80d9dae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 systemd-machined[210538]: New machine qemu-167-instance-00000087.
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.630 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0deef4d9-0a73-42ba-b081-bfd40f3f988a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 systemd[1]: Started Virtual Machine qemu-167-instance-00000087.
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.657 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f93b17f5-7460-4cc1-bb5a-6eed784c9700]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.707 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbc5921-a058-4c47-9361-ccffa41274dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 systemd-udevd[385594]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.715 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[07dda38f-998b-41ed-aabb-74c919da2805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 NetworkManager[50376]: <info>  [1765616725.7182] manager: (tap084e5836-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/580)
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.726 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.727 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.762 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b442f224-d338-49a3-8fa2-48eb4d4b9519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.765 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3a79a48d-1ff7-431c-a1a1-df32af41221a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 NetworkManager[50376]: <info>  [1765616725.8105] device (tap084e5836-e0): carrier: link connected
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.819 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dce8dda9-e698-4b8d-b8fc-a0495dad21a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.854 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bd47de09-9df8-4b67-86f6-19adb8f44377]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap084e5836-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:f7:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 406], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930302, 'reachable_time': 27305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385625, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.877 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6a12d3bb-e877-44be-9006-058e4fa00bb1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:f759'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 930302, 'tstamp': 930302}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385626, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.898 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1508edf2-610f-47d2-9591-7c46ebec4598]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap084e5836-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:f7:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 406], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930302, 'reachable_time': 27305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385627, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.945 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2c95a459-a143-4130-af43-1e0d1096484e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.962 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.963 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3283MB free_disk=59.920996208675206GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.963 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:25 compute-0 nova_compute[248510]: 2025-12-13 09:05:25.963 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.036 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcb7526-cf9c-4444-8d0e-57193d5f4ace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.038 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap084e5836-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.039 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.039 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap084e5836-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:26 compute-0 kernel: tap084e5836-e0: entered promiscuous mode
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.041 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:26 compute-0 NetworkManager[50376]: <info>  [1765616726.0425] manager: (tap084e5836-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/581)
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.043 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap084e5836-e0, col_values=(('external_ids', {'iface-id': '2953b79f-9235-4cc1-ad54-d75b960374dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.047 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:26 compute-0 ovn_controller[148476]: 2025-12-13T09:05:26Z|01408|binding|INFO|Releasing lport 2953b79f-9235-4cc1-ad54-d75b960374dc from this chassis (sb_readonly=0)
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.048 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/084e5836-e0e4-4328-8e03-dfcdcd227a7c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/084e5836-e0e4-4328-8e03-dfcdcd227a7c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.049 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9a8eee-14e8-49c8-ae5e-e0b45b6472e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.050 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-084e5836-e0e4-4328-8e03-dfcdcd227a7c
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/084e5836-e0e4-4328-8e03-dfcdcd227a7c.pid.haproxy
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 084e5836-e0e4-4328-8e03-dfcdcd227a7c
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:05:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.050 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'env', 'PROCESS_TAG=haproxy-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/084e5836-e0e4-4328-8e03-dfcdcd227a7c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.061 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.114 248514 DEBUG nova.compute.manager [req-8008d6b4-1628-41d6-bde9-8eb60d2a5fec req-95f0075e-6c0d-460e-bb6f-83f274ac04ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.115 248514 DEBUG oslo_concurrency.lockutils [req-8008d6b4-1628-41d6-bde9-8eb60d2a5fec req-95f0075e-6c0d-460e-bb6f-83f274ac04ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.115 248514 DEBUG oslo_concurrency.lockutils [req-8008d6b4-1628-41d6-bde9-8eb60d2a5fec req-95f0075e-6c0d-460e-bb6f-83f274ac04ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.116 248514 DEBUG oslo_concurrency.lockutils [req-8008d6b4-1628-41d6-bde9-8eb60d2a5fec req-95f0075e-6c0d-460e-bb6f-83f274ac04ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.116 248514 DEBUG nova.compute.manager [req-8008d6b4-1628-41d6-bde9-8eb60d2a5fec req-95f0075e-6c0d-460e-bb6f-83f274ac04ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.116 248514 WARNING nova.compute.manager [req-8008d6b4-1628-41d6-bde9-8eb60d2a5fec req-95f0075e-6c0d-460e-bb6f-83f274ac04ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state resuming.
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.181 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 17c3814d-c11f-4032-a891-4cbdf3f7c065 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.182 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance bb154aa9-7029-4193-8d00-38e8e552a382 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.182 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.182 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.283 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.378 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 17c3814d-c11f-4032-a891-4cbdf3f7c065 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.379 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616726.377207, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.379 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Started (Lifecycle Event)
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.403 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.422 248514 DEBUG nova.compute.manager [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.422 248514 DEBUG nova.objects.instance [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.426 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.444 248514 INFO nova.virt.libvirt.driver [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance running successfully.
Dec 13 09:05:26 compute-0 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.447 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] During sync_power_state the instance has a pending task (resuming). Skip.
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.447 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616726.382789, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.448 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Resumed (Lifecycle Event)
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.450 248514 DEBUG nova.virt.libvirt.guest [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.450 248514 DEBUG nova.compute.manager [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.496 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.500 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:05:26 compute-0 podman[385698]: 2025-12-13 09:05:26.518556903 +0000 UTC m=+0.071066352 container create 142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.527 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] During sync_power_state the instance has a pending task (resuming). Skip.
Dec 13 09:05:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3130: 321 pgs: 321 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 13 09:05:26 compute-0 podman[385698]: 2025-12-13 09:05:26.484883179 +0000 UTC m=+0.037392638 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:05:26 compute-0 systemd[1]: Started libpod-conmon-142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f.scope.
Dec 13 09:05:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f7d273eeceb0bf4fd1ef9b86958ede76e0fc0f84087e86c40344f9a7eed60d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:26 compute-0 podman[385698]: 2025-12-13 09:05:26.646196621 +0000 UTC m=+0.198706080 container init 142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:05:26 compute-0 podman[385698]: 2025-12-13 09:05:26.652263693 +0000 UTC m=+0.204773122 container start 142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 09:05:26 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [NOTICE]   (385736) : New worker (385738) forked
Dec 13 09:05:26 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [NOTICE]   (385736) : Loading success.
Dec 13 09:05:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:05:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2079382082' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.894 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.903 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.921 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.954 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:05:26 compute-0 nova_compute[248510]: 2025-12-13 09:05:26.954 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:27 compute-0 ceph-mon[76537]: pgmap v3130: 321 pgs: 321 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 13 09:05:27 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2079382082' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:05:28 compute-0 nova_compute[248510]: 2025-12-13 09:05:28.066 248514 INFO nova.compute.manager [None req-6562f15c-1dc1-494a-8016-dac0ef581a6b a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Get console output
Dec 13 09:05:28 compute-0 nova_compute[248510]: 2025-12-13 09:05:28.076 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:05:28 compute-0 nova_compute[248510]: 2025-12-13 09:05:28.187 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:28 compute-0 nova_compute[248510]: 2025-12-13 09:05:28.237 248514 DEBUG nova.compute.manager [req-11d08ff0-9bbc-4681-ae3c-235070faa142 req-fe5a983a-a708-4599-a752-3f284e91216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:28 compute-0 nova_compute[248510]: 2025-12-13 09:05:28.238 248514 DEBUG oslo_concurrency.lockutils [req-11d08ff0-9bbc-4681-ae3c-235070faa142 req-fe5a983a-a708-4599-a752-3f284e91216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:28 compute-0 nova_compute[248510]: 2025-12-13 09:05:28.238 248514 DEBUG oslo_concurrency.lockutils [req-11d08ff0-9bbc-4681-ae3c-235070faa142 req-fe5a983a-a708-4599-a752-3f284e91216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:28 compute-0 nova_compute[248510]: 2025-12-13 09:05:28.238 248514 DEBUG oslo_concurrency.lockutils [req-11d08ff0-9bbc-4681-ae3c-235070faa142 req-fe5a983a-a708-4599-a752-3f284e91216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:28 compute-0 nova_compute[248510]: 2025-12-13 09:05:28.238 248514 DEBUG nova.compute.manager [req-11d08ff0-9bbc-4681-ae3c-235070faa142 req-fe5a983a-a708-4599-a752-3f284e91216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:28 compute-0 nova_compute[248510]: 2025-12-13 09:05:28.238 248514 WARNING nova.compute.manager [req-11d08ff0-9bbc-4681-ae3c-235070faa142 req-fe5a983a-a708-4599-a752-3f284e91216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state active and task_state None.
Dec 13 09:05:28 compute-0 nova_compute[248510]: 2025-12-13 09:05:28.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3131: 321 pgs: 321 active+clean; 200 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 198 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:05:28 compute-0 nova_compute[248510]: 2025-12-13 09:05:28.930 248514 INFO nova.compute.manager [None req-3551c2a8-eb1c-411c-9534-f4415f34581d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Get console output
Dec 13 09:05:28 compute-0 nova_compute[248510]: 2025-12-13 09:05:28.936 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:29.540 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:05:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:29.543 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:05:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:29.545 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:29 compute-0 ceph-mon[76537]: pgmap v3131: 321 pgs: 321 active+clean; 200 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 198 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.887 248514 DEBUG nova.compute.manager [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-changed-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.888 248514 DEBUG nova.compute.manager [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Refreshing instance network info cache due to event network-changed-21744784-e35a-46d5-ac8c-8f9783a0a387. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.888 248514 DEBUG oslo_concurrency.lockutils [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.889 248514 DEBUG oslo_concurrency.lockutils [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.889 248514 DEBUG nova.network.neutron [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Refreshing network info cache for port 21744784-e35a-46d5-ac8c-8f9783a0a387 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.971 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.971 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.972 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.972 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.972 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.974 248514 INFO nova.compute.manager [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Terminating instance
Dec 13 09:05:29 compute-0 nova_compute[248510]: 2025-12-13 09:05:29.976 248514 DEBUG nova.compute.manager [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:05:30 compute-0 ovn_controller[148476]: 2025-12-13T09:05:30Z|01409|binding|INFO|Releasing lport 648ca59e-493f-4103-aa42-8d4449fe6f9d from this chassis (sb_readonly=0)
Dec 13 09:05:30 compute-0 ovn_controller[148476]: 2025-12-13T09:05:30Z|01410|binding|INFO|Releasing lport 2953b79f-9235-4cc1-ad54-d75b960374dc from this chassis (sb_readonly=0)
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.016 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:30 compute-0 kernel: tap21744784-e3 (unregistering): left promiscuous mode
Dec 13 09:05:30 compute-0 NetworkManager[50376]: <info>  [1765616730.0418] device (tap21744784-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.106 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:30 compute-0 ovn_controller[148476]: 2025-12-13T09:05:30Z|01411|binding|INFO|Releasing lport 648ca59e-493f-4103-aa42-8d4449fe6f9d from this chassis (sb_readonly=0)
Dec 13 09:05:30 compute-0 ovn_controller[148476]: 2025-12-13T09:05:30Z|01412|binding|INFO|Releasing lport 2953b79f-9235-4cc1-ad54-d75b960374dc from this chassis (sb_readonly=0)
Dec 13 09:05:30 compute-0 ovn_controller[148476]: 2025-12-13T09:05:30Z|01413|binding|INFO|Releasing lport 21744784-e35a-46d5-ac8c-8f9783a0a387 from this chassis (sb_readonly=0)
Dec 13 09:05:30 compute-0 ovn_controller[148476]: 2025-12-13T09:05:30Z|01414|binding|INFO|Removing iface tap21744784-e3 ovn-installed in OVS
Dec 13 09:05:30 compute-0 ovn_controller[148476]: 2025-12-13T09:05:30Z|01415|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 down in Southbound
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.111 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.117 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:86:07 10.100.0.13'], port_security=['fa:16:3e:0c:86:07 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17c3814d-c11f-4032-a891-4cbdf3f7c065', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '8', 'neutron:security_group_ids': '639365a4-e1d2-4459-8bf9-30c8c4273027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e18eae52-141b-40a9-9ed0-d52b1399a23b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=21744784-e35a-46d5-ac8c-8f9783a0a387) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.117 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.120 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 21744784-e35a-46d5-ac8c-8f9783a0a387 in datapath 084e5836-e0e4-4328-8e03-dfcdcd227a7c unbound from our chassis
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.123 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 084e5836-e0e4-4328-8e03-dfcdcd227a7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.124 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f7cd1086-f7da-4682-b301-791ccc286462]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.125 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c namespace which is not needed anymore
Dec 13 09:05:30 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Deactivated successfully.
Dec 13 09:05:30 compute-0 systemd-machined[210538]: Machine qemu-167-instance-00000087 terminated.
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.171 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.218 248514 INFO nova.virt.libvirt.driver [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance destroyed successfully.
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.218 248514 DEBUG nova.objects.instance [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.240 248514 DEBUG nova.virt.libvirt.vif [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:04:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1148057715',display_name='tempest-TestNetworkAdvancedServerOps-server-1148057715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1148057715',id=135,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEs/ai9hGvppAaJZucTj3XKRO2Hw6GseMJO/eDqjojdlq1TGbhXoM0CcjiSd48S7VRrLpZa2ASSqmIp8ZUA6nKNWf+HEZDknZ93ek9rtzJ3ZoLckx8X8JxIdR5xCKRsULQ==',key_name='tempest-TestNetworkAdvancedServerOps-326216115',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:04:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a5ac50ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:05:26Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=17c3814d-c11f-4032-a891-4cbdf3f7c065,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.240 248514 DEBUG nova.network.os_vif_util [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.241 248514 DEBUG nova.network.os_vif_util [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.241 248514 DEBUG os_vif [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.243 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.244 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21744784-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.245 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.247 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.253 248514 INFO os_vif [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3')
Dec 13 09:05:30 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [NOTICE]   (385736) : haproxy version is 2.8.14-c23fe91
Dec 13 09:05:30 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [NOTICE]   (385736) : path to executable is /usr/sbin/haproxy
Dec 13 09:05:30 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [WARNING]  (385736) : Exiting Master process...
Dec 13 09:05:30 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [WARNING]  (385736) : Exiting Master process...
Dec 13 09:05:30 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [ALERT]    (385736) : Current worker (385738) exited with code 143 (Terminated)
Dec 13 09:05:30 compute-0 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [WARNING]  (385736) : All workers exited. Exiting... (0)
Dec 13 09:05:30 compute-0 systemd[1]: libpod-142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f.scope: Deactivated successfully.
Dec 13 09:05:30 compute-0 conmon[385732]: conmon 142a55ec433536e54c1d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f.scope/container/memory.events
Dec 13 09:05:30 compute-0 podman[385784]: 2025-12-13 09:05:30.327998596 +0000 UTC m=+0.063062441 container died 142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 09:05:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f-userdata-shm.mount: Deactivated successfully.
Dec 13 09:05:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7f7d273eeceb0bf4fd1ef9b86958ede76e0fc0f84087e86c40344f9a7eed60d-merged.mount: Deactivated successfully.
Dec 13 09:05:30 compute-0 podman[385784]: 2025-12-13 09:05:30.377202019 +0000 UTC m=+0.112265834 container cleanup 142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 09:05:30 compute-0 systemd[1]: libpod-conmon-142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f.scope: Deactivated successfully.
Dec 13 09:05:30 compute-0 podman[385828]: 2025-12-13 09:05:30.473894431 +0000 UTC m=+0.066607420 container remove 142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.487 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b3cbaa-e2ae-4703-97ac-6d756b210205]: (4, ('Sat Dec 13 09:05:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c (142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f)\n142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f\nSat Dec 13 09:05:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c (142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f)\n142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.490 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86ce0825-5da9-4182-b484-872592d37b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.491 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap084e5836-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.493 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:30 compute-0 kernel: tap084e5836-e0: left promiscuous mode
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.514 248514 DEBUG nova.compute.manager [req-2b818b83-5e26-4cd8-afbb-01b9e692a656 req-47fe7c4a-0e53-4287-b095-42a036f52263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.514 248514 DEBUG oslo_concurrency.lockutils [req-2b818b83-5e26-4cd8-afbb-01b9e692a656 req-47fe7c4a-0e53-4287-b095-42a036f52263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.515 248514 DEBUG oslo_concurrency.lockutils [req-2b818b83-5e26-4cd8-afbb-01b9e692a656 req-47fe7c4a-0e53-4287-b095-42a036f52263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.515 248514 DEBUG oslo_concurrency.lockutils [req-2b818b83-5e26-4cd8-afbb-01b9e692a656 req-47fe7c4a-0e53-4287-b095-42a036f52263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.516 248514 DEBUG nova.compute.manager [req-2b818b83-5e26-4cd8-afbb-01b9e692a656 req-47fe7c4a-0e53-4287-b095-42a036f52263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.516 248514 DEBUG nova.compute.manager [req-2b818b83-5e26-4cd8-afbb-01b9e692a656 req-47fe7c4a-0e53-4287-b095-42a036f52263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.522 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.526 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7bffc04a-309e-41df-98cc-57cce2339fe2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.542 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[936d57b9-33a5-4f5f-97d4-925acf0fc0e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.544 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1a3904-02ad-4b1d-afa3-d87450ef8b8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3132: 321 pgs: 321 active+clean; 200 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 198 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.561 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[27d9deb1-c690-4a0b-9bcc-eafc3785dbf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930291, 'reachable_time': 25674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385844, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d084e5836\x2de0e4\x2d4328\x2d8e03\x2ddfcdcd227a7c.mount: Deactivated successfully.
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.564 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:05:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.564 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b0474edf-6966-4477-9388-b9bbffa40463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.585 248514 INFO nova.virt.libvirt.driver [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Deleting instance files /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065_del
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.586 248514 INFO nova.virt.libvirt.driver [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Deletion of /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065_del complete
Dec 13 09:05:30 compute-0 ceph-mon[76537]: pgmap v3132: 321 pgs: 321 active+clean; 200 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 198 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.717 248514 INFO nova.compute.manager [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Took 0.74 seconds to destroy the instance on the hypervisor.
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.718 248514 DEBUG oslo.service.loopingcall [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.719 248514 DEBUG nova.compute.manager [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.720 248514 DEBUG nova.network.neutron [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.955 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.956 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.956 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:05:30 compute-0 nova_compute[248510]: 2025-12-13 09:05:30.956 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:05:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:05:31 compute-0 nova_compute[248510]: 2025-12-13 09:05:31.337 248514 INFO nova.compute.manager [None req-02c63348-5dc6-4357-b15e-6e1a951214e9 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Get console output
Dec 13 09:05:31 compute-0 nova_compute[248510]: 2025-12-13 09:05:31.344 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:05:31 compute-0 nova_compute[248510]: 2025-12-13 09:05:31.804 248514 DEBUG nova.network.neutron [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:05:31 compute-0 nova_compute[248510]: 2025-12-13 09:05:31.828 248514 INFO nova.compute.manager [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Took 1.11 seconds to deallocate network for instance.
Dec 13 09:05:31 compute-0 nova_compute[248510]: 2025-12-13 09:05:31.890 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:31 compute-0 nova_compute[248510]: 2025-12-13 09:05:31.891 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:31.999 248514 DEBUG oslo_concurrency.processutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:05:32 compute-0 NetworkManager[50376]: <info>  [1765616732.0975] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Dec 13 09:05:32 compute-0 NetworkManager[50376]: <info>  [1765616732.0985] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/583)
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.251 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:32 compute-0 ovn_controller[148476]: 2025-12-13T09:05:32Z|01416|binding|INFO|Releasing lport 648ca59e-493f-4103-aa42-8d4449fe6f9d from this chassis (sb_readonly=0)
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.260 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.417 248514 DEBUG nova.network.neutron [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updated VIF entry in instance network info cache for port 21744784-e35a-46d5-ac8c-8f9783a0a387. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.418 248514 DEBUG nova.network.neutron [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.460 248514 DEBUG oslo_concurrency.lockutils [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:05:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:05:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4155451256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:05:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3133: 321 pgs: 321 active+clean; 200 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 198 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.581 248514 DEBUG oslo_concurrency.processutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.591 248514 DEBUG nova.compute.provider_tree [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:05:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4155451256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.619 248514 DEBUG nova.scheduler.client.report [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.631 248514 INFO nova.compute.manager [None req-2134f62a-d5d5-45b5-9211-4f404b91cee5 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Get console output
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.638 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.647 248514 DEBUG nova.compute.manager [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.648 248514 DEBUG oslo_concurrency.lockutils [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.648 248514 DEBUG oslo_concurrency.lockutils [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.649 248514 DEBUG oslo_concurrency.lockutils [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.649 248514 DEBUG nova.compute.manager [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.649 248514 WARNING nova.compute.manager [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state deleted and task_state None.
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.649 248514 DEBUG nova.compute.manager [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-deleted-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.650 248514 INFO nova.compute.manager [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Neutron deleted interface 21744784-e35a-46d5-ac8c-8f9783a0a387; detaching it from the instance and deleting it from the info cache
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.650 248514 DEBUG nova.network.neutron [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.654 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.681 248514 INFO nova.scheduler.client.report [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Deleted allocations for instance 17c3814d-c11f-4032-a891-4cbdf3f7c065
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.688 248514 DEBUG nova.compute.manager [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Detach interface failed, port_id=21744784-e35a-46d5-ac8c-8f9783a0a387, reason: Instance 17c3814d-c11f-4032-a891-4cbdf3f7c065 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:05:32 compute-0 nova_compute[248510]: 2025-12-13 09:05:32.775 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:33 compute-0 nova_compute[248510]: 2025-12-13 09:05:33.190 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:33 compute-0 ceph-mon[76537]: pgmap v3133: 321 pgs: 321 active+clean; 200 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 198 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.357 248514 DEBUG nova.compute.manager [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-changed-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.358 248514 DEBUG nova.compute.manager [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Refreshing instance network info cache due to event network-changed-f16b53b1-eab6-4458-976e-5cb112c77ef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.359 248514 DEBUG oslo_concurrency.lockutils [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.359 248514 DEBUG oslo_concurrency.lockutils [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.360 248514 DEBUG nova.network.neutron [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Refreshing network info cache for port f16b53b1-eab6-4458-976e-5cb112c77ef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.428 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.429 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.429 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.430 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.431 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.433 248514 INFO nova.compute.manager [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Terminating instance
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.435 248514 DEBUG nova.compute.manager [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:05:34 compute-0 kernel: tapf16b53b1-ea (unregistering): left promiscuous mode
Dec 13 09:05:34 compute-0 NetworkManager[50376]: <info>  [1765616734.5057] device (tapf16b53b1-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.514 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:34 compute-0 ovn_controller[148476]: 2025-12-13T09:05:34Z|01417|binding|INFO|Releasing lport f16b53b1-eab6-4458-976e-5cb112c77ef8 from this chassis (sb_readonly=0)
Dec 13 09:05:34 compute-0 ovn_controller[148476]: 2025-12-13T09:05:34Z|01418|binding|INFO|Setting lport f16b53b1-eab6-4458-976e-5cb112c77ef8 down in Southbound
Dec 13 09:05:34 compute-0 ovn_controller[148476]: 2025-12-13T09:05:34Z|01419|binding|INFO|Removing iface tapf16b53b1-ea ovn-installed in OVS
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.523 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:ba:c6 10.100.0.7'], port_security=['fa:16:3e:68:ba:c6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bb154aa9-7029-4193-8d00-38e8e552a382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-065ccb44-e305-43f2-ab7a-728a65062da2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20aff6be-cf34-4af0-9288-26483b58fb2e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a94dc2d-ac12-4530-9899-bb1ddf6ed47b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f16b53b1-eab6-4458-976e-5cb112c77ef8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.525 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f16b53b1-eab6-4458-976e-5cb112c77ef8 in datapath 065ccb44-e305-43f2-ab7a-728a65062da2 unbound from our chassis
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.528 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 065ccb44-e305-43f2-ab7a-728a65062da2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.529 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7537cc5a-1b1c-4b79-8b42-0f2eca4f7666]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.529 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2 namespace which is not needed anymore
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.538 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3134: 321 pgs: 321 active+clean; 121 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 217 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 13 09:05:34 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000088.scope: Deactivated successfully.
Dec 13 09:05:34 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000088.scope: Consumed 13.514s CPU time.
Dec 13 09:05:34 compute-0 systemd-machined[210538]: Machine qemu-166-instance-00000088 terminated.
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.667 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:34 compute-0 ceph-mon[76537]: pgmap v3134: 321 pgs: 321 active+clean; 121 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 217 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.686 248514 INFO nova.virt.libvirt.driver [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Instance destroyed successfully.
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.687 248514 DEBUG nova.objects.instance [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid bb154aa9-7029-4193-8d00-38e8e552a382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.711 248514 DEBUG nova.virt.libvirt.vif [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:04:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-908300967',display_name='tempest-TestNetworkBasicOps-server-908300967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-908300967',id=136,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFBmqAnv0oD+weSJxDWEwIjgt5HjSBoUiVoV5CSnaOj2JsKkOoULuMXZQ1EHyUAr0FnFFBRSfsXFNYjXLHRq9tDCS1GQZDyMWxvCAc8xSVkqfYg8qd+Wbjc8cJKV8CheTw==',key_name='tempest-TestNetworkBasicOps-403784450',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:05:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-8elo0a7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:05:10Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=bb154aa9-7029-4193-8d00-38e8e552a382,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.712 248514 DEBUG nova.network.os_vif_util [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.713 248514 DEBUG nova.network.os_vif_util [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.714 248514 DEBUG os_vif [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:34 compute-0 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [NOTICE]   (385373) : haproxy version is 2.8.14-c23fe91
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.718 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf16b53b1-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:34 compute-0 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [NOTICE]   (385373) : path to executable is /usr/sbin/haproxy
Dec 13 09:05:34 compute-0 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [WARNING]  (385373) : Exiting Master process...
Dec 13 09:05:34 compute-0 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [ALERT]    (385373) : Current worker (385389) exited with code 143 (Terminated)
Dec 13 09:05:34 compute-0 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [WARNING]  (385373) : All workers exited. Exiting... (0)
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:34 compute-0 systemd[1]: libpod-d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec.scope: Deactivated successfully.
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.729 248514 INFO os_vif [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea')
Dec 13 09:05:34 compute-0 podman[385892]: 2025-12-13 09:05:34.730811546 +0000 UTC m=+0.059166963 container died d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:05:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec-userdata-shm.mount: Deactivated successfully.
Dec 13 09:05:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa95f491df7395f5ed1e1bdb4690670b794dc054243ab1a1438a6d32051c9c18-merged.mount: Deactivated successfully.
Dec 13 09:05:34 compute-0 podman[385892]: 2025-12-13 09:05:34.773935676 +0000 UTC m=+0.102291073 container cleanup d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 09:05:34 compute-0 systemd[1]: libpod-conmon-d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec.scope: Deactivated successfully.
Dec 13 09:05:34 compute-0 podman[385948]: 2025-12-13 09:05:34.842228907 +0000 UTC m=+0.040401173 container remove d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.852 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[595a132d-9eae-462e-ac19-5a480f938b04]: (4, ('Sat Dec 13 09:05:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2 (d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec)\nd33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec\nSat Dec 13 09:05:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2 (d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec)\nd33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.854 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[76b6dbaa-025b-4261-bd92-10d773188767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.856 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap065ccb44-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.858 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:34 compute-0 kernel: tap065ccb44-e0: left promiscuous mode
Dec 13 09:05:34 compute-0 nova_compute[248510]: 2025-12-13 09:05:34.884 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.888 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0e282bbf-2159-4c68-9c0b-6d49c3196c86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.913 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e49c647a-36c4-4e69-a992-7531210f6ca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[80c3f9cc-c785-4db0-b763-8c1d845d6096]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.944 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[73498bb5-3342-495a-947f-eb6b9be01d0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 928669, 'reachable_time': 15688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385966, 'error': None, 'target': 'ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d065ccb44\x2de305\x2d43f2\x2dab7a\x2d728a65062da2.mount: Deactivated successfully.
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.949 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:05:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.950 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[758e6551-aaa3-485d-830a-db93cdd39361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:05:35 compute-0 nova_compute[248510]: 2025-12-13 09:05:35.029 248514 INFO nova.virt.libvirt.driver [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Deleting instance files /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382_del
Dec 13 09:05:35 compute-0 nova_compute[248510]: 2025-12-13 09:05:35.030 248514 INFO nova.virt.libvirt.driver [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Deletion of /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382_del complete
Dec 13 09:05:35 compute-0 nova_compute[248510]: 2025-12-13 09:05:35.111 248514 DEBUG nova.compute.manager [req-153bf626-b10e-408d-b1dd-0901bc332f40 req-5aeb937e-6cd2-4b09-a5a5-6cc9471b83be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-vif-unplugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:35 compute-0 nova_compute[248510]: 2025-12-13 09:05:35.112 248514 DEBUG oslo_concurrency.lockutils [req-153bf626-b10e-408d-b1dd-0901bc332f40 req-5aeb937e-6cd2-4b09-a5a5-6cc9471b83be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:35 compute-0 nova_compute[248510]: 2025-12-13 09:05:35.113 248514 DEBUG oslo_concurrency.lockutils [req-153bf626-b10e-408d-b1dd-0901bc332f40 req-5aeb937e-6cd2-4b09-a5a5-6cc9471b83be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:35 compute-0 nova_compute[248510]: 2025-12-13 09:05:35.113 248514 DEBUG oslo_concurrency.lockutils [req-153bf626-b10e-408d-b1dd-0901bc332f40 req-5aeb937e-6cd2-4b09-a5a5-6cc9471b83be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:35 compute-0 nova_compute[248510]: 2025-12-13 09:05:35.114 248514 DEBUG nova.compute.manager [req-153bf626-b10e-408d-b1dd-0901bc332f40 req-5aeb937e-6cd2-4b09-a5a5-6cc9471b83be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] No waiting events found dispatching network-vif-unplugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:35 compute-0 nova_compute[248510]: 2025-12-13 09:05:35.114 248514 DEBUG nova.compute.manager [req-153bf626-b10e-408d-b1dd-0901bc332f40 req-5aeb937e-6cd2-4b09-a5a5-6cc9471b83be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-vif-unplugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:05:35 compute-0 nova_compute[248510]: 2025-12-13 09:05:35.120 248514 INFO nova.compute.manager [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Took 0.69 seconds to destroy the instance on the hypervisor.
Dec 13 09:05:35 compute-0 nova_compute[248510]: 2025-12-13 09:05:35.121 248514 DEBUG oslo.service.loopingcall [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:05:35 compute-0 nova_compute[248510]: 2025-12-13 09:05:35.122 248514 DEBUG nova.compute.manager [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:05:35 compute-0 nova_compute[248510]: 2025-12-13 09:05:35.122 248514 DEBUG nova.network.neutron [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:05:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:05:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3135: 321 pgs: 321 active+clean; 121 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 16 KiB/s wr, 32 op/s
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.267 248514 DEBUG nova.compute.manager [req-e7a81b1e-9c45-4fff-8356-c5a88abf3bb7 req-f5db8f50-c845-499a-a1d4-2cd5834eeb37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.267 248514 DEBUG oslo_concurrency.lockutils [req-e7a81b1e-9c45-4fff-8356-c5a88abf3bb7 req-f5db8f50-c845-499a-a1d4-2cd5834eeb37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.268 248514 DEBUG oslo_concurrency.lockutils [req-e7a81b1e-9c45-4fff-8356-c5a88abf3bb7 req-f5db8f50-c845-499a-a1d4-2cd5834eeb37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.268 248514 DEBUG oslo_concurrency.lockutils [req-e7a81b1e-9c45-4fff-8356-c5a88abf3bb7 req-f5db8f50-c845-499a-a1d4-2cd5834eeb37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.268 248514 DEBUG nova.compute.manager [req-e7a81b1e-9c45-4fff-8356-c5a88abf3bb7 req-f5db8f50-c845-499a-a1d4-2cd5834eeb37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] No waiting events found dispatching network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.268 248514 WARNING nova.compute.manager [req-e7a81b1e-9c45-4fff-8356-c5a88abf3bb7 req-f5db8f50-c845-499a-a1d4-2cd5834eeb37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received unexpected event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 for instance with vm_state active and task_state deleting.
Dec 13 09:05:37 compute-0 sudo[385967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:05:37 compute-0 sudo[385967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:05:37 compute-0 sudo[385967]: pam_unix(sudo:session): session closed for user root
Dec 13 09:05:37 compute-0 ceph-mon[76537]: pgmap v3135: 321 pgs: 321 active+clean; 121 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 16 KiB/s wr, 32 op/s
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.623 248514 DEBUG nova.network.neutron [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updated VIF entry in instance network info cache for port f16b53b1-eab6-4458-976e-5cb112c77ef8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.623 248514 DEBUG nova.network.neutron [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updating instance_info_cache with network_info: [{"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:05:37 compute-0 sudo[385992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:05:37 compute-0 sudo[385992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.661 248514 DEBUG oslo_concurrency.lockutils [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.662 248514 DEBUG nova.network.neutron [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.695 248514 INFO nova.compute.manager [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Took 2.57 seconds to deallocate network for instance.
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.773 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.774 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:37 compute-0 nova_compute[248510]: 2025-12-13 09:05:37.860 248514 DEBUG oslo_concurrency.processutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:05:38 compute-0 nova_compute[248510]: 2025-12-13 09:05:38.193 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:38 compute-0 sudo[385992]: pam_unix(sudo:session): session closed for user root
Dec 13 09:05:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:05:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:05:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:05:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:05:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:05:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:05:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:05:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:05:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:05:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:05:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:05:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:05:38 compute-0 sudo[386068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:05:38 compute-0 sudo[386068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:05:38 compute-0 sudo[386068]: pam_unix(sudo:session): session closed for user root
Dec 13 09:05:38 compute-0 sudo[386093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:05:38 compute-0 sudo[386093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:05:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:05:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2157535386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:05:38 compute-0 nova_compute[248510]: 2025-12-13 09:05:38.481 248514 DEBUG oslo_concurrency.processutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:05:38 compute-0 nova_compute[248510]: 2025-12-13 09:05:38.488 248514 DEBUG nova.compute.provider_tree [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:05:38 compute-0 nova_compute[248510]: 2025-12-13 09:05:38.512 248514 DEBUG nova.scheduler.client.report [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:05:38 compute-0 nova_compute[248510]: 2025-12-13 09:05:38.554 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3136: 321 pgs: 321 active+clean; 96 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 17 KiB/s wr, 46 op/s
Dec 13 09:05:38 compute-0 nova_compute[248510]: 2025-12-13 09:05:38.594 248514 INFO nova.scheduler.client.report [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance bb154aa9-7029-4193-8d00-38e8e552a382
Dec 13 09:05:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:05:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:05:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:05:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:05:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:05:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:05:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2157535386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:05:38 compute-0 nova_compute[248510]: 2025-12-13 09:05:38.665 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:38 compute-0 podman[386133]: 2025-12-13 09:05:38.732626757 +0000 UTC m=+0.051080380 container create b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 13 09:05:38 compute-0 systemd[1]: Started libpod-conmon-b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c.scope.
Dec 13 09:05:38 compute-0 podman[386133]: 2025-12-13 09:05:38.714347729 +0000 UTC m=+0.032801382 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:05:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:05:38 compute-0 podman[386133]: 2025-12-13 09:05:38.828856898 +0000 UTC m=+0.147310561 container init b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:05:38 compute-0 podman[386133]: 2025-12-13 09:05:38.836157461 +0000 UTC m=+0.154611084 container start b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:05:38 compute-0 podman[386133]: 2025-12-13 09:05:38.839577597 +0000 UTC m=+0.158031250 container attach b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 09:05:38 compute-0 nervous_sanderson[386149]: 167 167
Dec 13 09:05:38 compute-0 systemd[1]: libpod-b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c.scope: Deactivated successfully.
Dec 13 09:05:38 compute-0 conmon[386149]: conmon b684faf8edd3bc3f9357 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c.scope/container/memory.events
Dec 13 09:05:38 compute-0 podman[386133]: 2025-12-13 09:05:38.845767742 +0000 UTC m=+0.164221385 container died b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 13 09:05:38 compute-0 nova_compute[248510]: 2025-12-13 09:05:38.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f18622e545ddf9e436cff2f3af6da1ab78b413af5e1fa0edab60b3a2a0893bc-merged.mount: Deactivated successfully.
Dec 13 09:05:38 compute-0 podman[386133]: 2025-12-13 09:05:38.90273942 +0000 UTC m=+0.221193053 container remove b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:05:38 compute-0 systemd[1]: libpod-conmon-b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c.scope: Deactivated successfully.
Dec 13 09:05:38 compute-0 nova_compute[248510]: 2025-12-13 09:05:38.994 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:39 compute-0 podman[386174]: 2025-12-13 09:05:39.101736905 +0000 UTC m=+0.053025389 container create e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kalam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 09:05:39 compute-0 systemd[1]: Started libpod-conmon-e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a.scope.
Dec 13 09:05:39 compute-0 podman[386174]: 2025-12-13 09:05:39.073358624 +0000 UTC m=+0.024647158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:05:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7ddc192f5fdb2b83494b4eec958e89301aa46bc88ecc9282ee43e6f5c358b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7ddc192f5fdb2b83494b4eec958e89301aa46bc88ecc9282ee43e6f5c358b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7ddc192f5fdb2b83494b4eec958e89301aa46bc88ecc9282ee43e6f5c358b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7ddc192f5fdb2b83494b4eec958e89301aa46bc88ecc9282ee43e6f5c358b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7ddc192f5fdb2b83494b4eec958e89301aa46bc88ecc9282ee43e6f5c358b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:39 compute-0 podman[386174]: 2025-12-13 09:05:39.215044584 +0000 UTC m=+0.166333098 container init e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kalam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Dec 13 09:05:39 compute-0 podman[386174]: 2025-12-13 09:05:39.23045468 +0000 UTC m=+0.181743154 container start e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 09:05:39 compute-0 podman[386174]: 2025-12-13 09:05:39.234905102 +0000 UTC m=+0.186193586 container attach e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kalam, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:05:39 compute-0 nova_compute[248510]: 2025-12-13 09:05:39.372 248514 DEBUG nova.compute.manager [req-e3195a83-23a1-4003-97e5-4ad4df0ed27f req-9e8fbb6f-78a2-4fc4-af7e-d439f8f9fdea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-vif-deleted-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:05:39 compute-0 ceph-mon[76537]: pgmap v3136: 321 pgs: 321 active+clean; 96 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 17 KiB/s wr, 46 op/s
Dec 13 09:05:39 compute-0 nova_compute[248510]: 2025-12-13 09:05:39.721 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:39 compute-0 pensive_kalam[386191]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:05:39 compute-0 pensive_kalam[386191]: --> All data devices are unavailable
Dec 13 09:05:39 compute-0 systemd[1]: libpod-e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a.scope: Deactivated successfully.
Dec 13 09:05:39 compute-0 podman[386174]: 2025-12-13 09:05:39.871609544 +0000 UTC m=+0.822898028 container died e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 09:05:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-39c7ddc192f5fdb2b83494b4eec958e89301aa46bc88ecc9282ee43e6f5c358b-merged.mount: Deactivated successfully.
Dec 13 09:05:39 compute-0 podman[386174]: 2025-12-13 09:05:39.919783561 +0000 UTC m=+0.871071995 container remove e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kalam, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:05:39 compute-0 systemd[1]: libpod-conmon-e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a.scope: Deactivated successfully.
Dec 13 09:05:39 compute-0 sudo[386093]: pam_unix(sudo:session): session closed for user root
Dec 13 09:05:40 compute-0 sudo[386223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:05:40 compute-0 sudo[386223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:05:40 compute-0 sudo[386223]: pam_unix(sudo:session): session closed for user root
Dec 13 09:05:40 compute-0 sudo[386248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:05:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:05:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:05:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:05:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:05:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:05:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:05:40 compute-0 sudo[386248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:05:40 compute-0 podman[386285]: 2025-12-13 09:05:40.429586374 +0000 UTC m=+0.048172878 container create 34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mirzakhani, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:05:40 compute-0 systemd[1]: Started libpod-conmon-34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528.scope.
Dec 13 09:05:40 compute-0 podman[386285]: 2025-12-13 09:05:40.405610703 +0000 UTC m=+0.024197227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:05:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:05:40 compute-0 podman[386285]: 2025-12-13 09:05:40.52041862 +0000 UTC m=+0.139005114 container init 34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mirzakhani, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:05:40 compute-0 podman[386285]: 2025-12-13 09:05:40.530726188 +0000 UTC m=+0.149312662 container start 34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 09:05:40 compute-0 podman[386285]: 2025-12-13 09:05:40.534681597 +0000 UTC m=+0.153268121 container attach 34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 09:05:40 compute-0 sleepy_mirzakhani[386301]: 167 167
Dec 13 09:05:40 compute-0 systemd[1]: libpod-34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528.scope: Deactivated successfully.
Dec 13 09:05:40 compute-0 podman[386285]: 2025-12-13 09:05:40.539445346 +0000 UTC m=+0.158031830 container died 34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mirzakhani, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 09:05:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3137: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Dec 13 09:05:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-219308bc69710a4998d304356995be37ac9bdf4f47761e2d3b61aaffdd59d358-merged.mount: Deactivated successfully.
Dec 13 09:05:40 compute-0 podman[386285]: 2025-12-13 09:05:40.582965537 +0000 UTC m=+0.201552011 container remove 34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 09:05:40 compute-0 systemd[1]: libpod-conmon-34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528.scope: Deactivated successfully.
Dec 13 09:05:40 compute-0 ceph-mon[76537]: pgmap v3137: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Dec 13 09:05:40 compute-0 nova_compute[248510]: 2025-12-13 09:05:40.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:05:40 compute-0 podman[386325]: 2025-12-13 09:05:40.813667687 +0000 UTC m=+0.061964613 container create 2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:05:40 compute-0 systemd[1]: Started libpod-conmon-2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1.scope.
Dec 13 09:05:40 compute-0 podman[386325]: 2025-12-13 09:05:40.788764603 +0000 UTC m=+0.037061529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:05:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f0c4c656bd9697c7ad013a10342b3999c90e80a3a7cfe75afffd3c420083c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f0c4c656bd9697c7ad013a10342b3999c90e80a3a7cfe75afffd3c420083c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f0c4c656bd9697c7ad013a10342b3999c90e80a3a7cfe75afffd3c420083c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f0c4c656bd9697c7ad013a10342b3999c90e80a3a7cfe75afffd3c420083c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:40 compute-0 podman[386325]: 2025-12-13 09:05:40.928409722 +0000 UTC m=+0.176706648 container init 2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:05:40 compute-0 podman[386325]: 2025-12-13 09:05:40.93673543 +0000 UTC m=+0.185032336 container start 2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:05:40 compute-0 podman[386325]: 2025-12-13 09:05:40.939729055 +0000 UTC m=+0.188025981 container attach 2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:05:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]: {
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:     "0": [
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:         {
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "devices": [
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "/dev/loop3"
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             ],
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_name": "ceph_lv0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_size": "21470642176",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "name": "ceph_lv0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "tags": {
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.cluster_name": "ceph",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.crush_device_class": "",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.encrypted": "0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.objectstore": "bluestore",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.osd_id": "0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.type": "block",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.vdo": "0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.with_tpm": "0"
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             },
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "type": "block",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "vg_name": "ceph_vg0"
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:         }
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:     ],
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:     "1": [
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:         {
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "devices": [
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "/dev/loop4"
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             ],
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_name": "ceph_lv1",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_size": "21470642176",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "name": "ceph_lv1",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "tags": {
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.cluster_name": "ceph",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.crush_device_class": "",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.encrypted": "0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.objectstore": "bluestore",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.osd_id": "1",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.type": "block",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.vdo": "0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.with_tpm": "0"
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             },
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "type": "block",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "vg_name": "ceph_vg1"
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:         }
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:     ],
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:     "2": [
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:         {
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "devices": [
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "/dev/loop5"
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             ],
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_name": "ceph_lv2",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_size": "21470642176",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "name": "ceph_lv2",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "tags": {
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.cluster_name": "ceph",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.crush_device_class": "",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.encrypted": "0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.objectstore": "bluestore",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.osd_id": "2",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.type": "block",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.vdo": "0",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:                 "ceph.with_tpm": "0"
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             },
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "type": "block",
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:             "vg_name": "ceph_vg2"
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:         }
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]:     ]
Dec 13 09:05:41 compute-0 vigilant_cartwright[386342]: }
Dec 13 09:05:41 compute-0 systemd[1]: libpod-2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1.scope: Deactivated successfully.
Dec 13 09:05:41 compute-0 podman[386325]: 2025-12-13 09:05:41.298266998 +0000 UTC m=+0.546563964 container died 2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 09:05:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2f0c4c656bd9697c7ad013a10342b3999c90e80a3a7cfe75afffd3c420083c5-merged.mount: Deactivated successfully.
Dec 13 09:05:41 compute-0 podman[386325]: 2025-12-13 09:05:41.362911668 +0000 UTC m=+0.611208614 container remove 2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 09:05:41 compute-0 systemd[1]: libpod-conmon-2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1.scope: Deactivated successfully.
Dec 13 09:05:41 compute-0 sudo[386248]: pam_unix(sudo:session): session closed for user root
Dec 13 09:05:41 compute-0 sudo[386363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:05:41 compute-0 sudo[386363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:05:41 compute-0 sudo[386363]: pam_unix(sudo:session): session closed for user root
Dec 13 09:05:41 compute-0 sudo[386388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:05:41 compute-0 sudo[386388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:05:41 compute-0 podman[386425]: 2025-12-13 09:05:41.937452963 +0000 UTC m=+0.049962333 container create 5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_galois, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 09:05:41 compute-0 systemd[1]: Started libpod-conmon-5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066.scope.
Dec 13 09:05:42 compute-0 podman[386425]: 2025-12-13 09:05:41.916945809 +0000 UTC m=+0.029455209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:05:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:05:42 compute-0 podman[386425]: 2025-12-13 09:05:42.049774076 +0000 UTC m=+0.162283546 container init 5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_galois, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:05:42 compute-0 podman[386425]: 2025-12-13 09:05:42.058494445 +0000 UTC m=+0.171003815 container start 5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_galois, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 09:05:42 compute-0 podman[386425]: 2025-12-13 09:05:42.061333866 +0000 UTC m=+0.173843236 container attach 5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_galois, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:05:42 compute-0 amazing_galois[386442]: 167 167
Dec 13 09:05:42 compute-0 systemd[1]: libpod-5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066.scope: Deactivated successfully.
Dec 13 09:05:42 compute-0 podman[386425]: 2025-12-13 09:05:42.066749691 +0000 UTC m=+0.179259061 container died 5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_galois, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:05:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-35105134f3f8e3628a339fda27c2b7ff28025c8563f35f65321e581e6b06d4f7-merged.mount: Deactivated successfully.
Dec 13 09:05:42 compute-0 podman[386425]: 2025-12-13 09:05:42.10820911 +0000 UTC m=+0.220718520 container remove 5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:05:42 compute-0 systemd[1]: libpod-conmon-5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066.scope: Deactivated successfully.
Dec 13 09:05:42 compute-0 podman[386466]: 2025-12-13 09:05:42.309389401 +0000 UTC m=+0.046840765 container create fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hypatia, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:05:42 compute-0 systemd[1]: Started libpod-conmon-fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4.scope.
Dec 13 09:05:42 compute-0 podman[386466]: 2025-12-13 09:05:42.290469857 +0000 UTC m=+0.027921241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:05:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505599508af53d7ebe2f8fc66a04835704930967290375167963e91e0be1c654/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505599508af53d7ebe2f8fc66a04835704930967290375167963e91e0be1c654/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505599508af53d7ebe2f8fc66a04835704930967290375167963e91e0be1c654/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505599508af53d7ebe2f8fc66a04835704930967290375167963e91e0be1c654/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:05:42 compute-0 podman[386466]: 2025-12-13 09:05:42.421032788 +0000 UTC m=+0.158484232 container init fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hypatia, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:05:42 compute-0 podman[386466]: 2025-12-13 09:05:42.432888785 +0000 UTC m=+0.170340179 container start fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:05:42 compute-0 podman[386466]: 2025-12-13 09:05:42.437307356 +0000 UTC m=+0.174758750 container attach fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hypatia, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:05:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3138: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.7 KiB/s wr, 55 op/s
Dec 13 09:05:43 compute-0 nova_compute[248510]: 2025-12-13 09:05:43.195 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:43 compute-0 lvm[386559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:05:43 compute-0 lvm[386559]: VG ceph_vg0 finished
Dec 13 09:05:43 compute-0 lvm[386562]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:05:43 compute-0 lvm[386562]: VG ceph_vg1 finished
Dec 13 09:05:43 compute-0 lvm[386563]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:05:43 compute-0 lvm[386563]: VG ceph_vg2 finished
Dec 13 09:05:43 compute-0 fervent_hypatia[386482]: {}
Dec 13 09:05:43 compute-0 systemd[1]: libpod-fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4.scope: Deactivated successfully.
Dec 13 09:05:43 compute-0 systemd[1]: libpod-fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4.scope: Consumed 1.518s CPU time.
Dec 13 09:05:43 compute-0 podman[386466]: 2025-12-13 09:05:43.320833242 +0000 UTC m=+1.058284616 container died fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-505599508af53d7ebe2f8fc66a04835704930967290375167963e91e0be1c654-merged.mount: Deactivated successfully.
Dec 13 09:05:43 compute-0 podman[386466]: 2025-12-13 09:05:43.377783499 +0000 UTC m=+1.115234883 container remove fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hypatia, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:05:43 compute-0 systemd[1]: libpod-conmon-fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4.scope: Deactivated successfully.
Dec 13 09:05:43 compute-0 sudo[386388]: pam_unix(sudo:session): session closed for user root
Dec 13 09:05:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:05:43 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:05:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:05:43 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:05:43 compute-0 sudo[386579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:05:43 compute-0 sudo[386579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:05:43 compute-0 sudo[386579]: pam_unix(sudo:session): session closed for user root
Dec 13 09:05:43 compute-0 ceph-mon[76537]: pgmap v3138: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.7 KiB/s wr, 55 op/s
Dec 13 09:05:43 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:05:43 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:05:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3139: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.7 KiB/s wr, 55 op/s
Dec 13 09:05:44 compute-0 ceph-mon[76537]: pgmap v3139: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.7 KiB/s wr, 55 op/s
Dec 13 09:05:44 compute-0 nova_compute[248510]: 2025-12-13 09:05:44.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:44 compute-0 podman[386606]: 2025-12-13 09:05:44.968987746 +0000 UTC m=+0.062635801 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 13 09:05:44 compute-0 podman[386605]: 2025-12-13 09:05:44.979587921 +0000 UTC m=+0.073101392 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:05:45 compute-0 podman[386604]: 2025-12-13 09:05:45.029175114 +0000 UTC m=+0.122483150 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 13 09:05:45 compute-0 nova_compute[248510]: 2025-12-13 09:05:45.216 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616730.214843, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:05:45 compute-0 nova_compute[248510]: 2025-12-13 09:05:45.216 248514 INFO nova.compute.manager [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Stopped (Lifecycle Event)
Dec 13 09:05:45 compute-0 nova_compute[248510]: 2025-12-13 09:05:45.251 248514 DEBUG nova.compute.manager [None req-67200722-9277-4b38-84ea-b836ae35dadc - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:05:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:05:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3140: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:05:46 compute-0 ceph-mon[76537]: pgmap v3140: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:05:48 compute-0 nova_compute[248510]: 2025-12-13 09:05:48.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3141: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:05:48 compute-0 ceph-mon[76537]: pgmap v3141: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:05:49 compute-0 nova_compute[248510]: 2025-12-13 09:05:49.033 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:05:49 compute-0 nova_compute[248510]: 2025-12-13 09:05:49.033 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 09:05:49 compute-0 nova_compute[248510]: 2025-12-13 09:05:49.686 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616734.6855795, bb154aa9-7029-4193-8d00-38e8e552a382 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:05:49 compute-0 nova_compute[248510]: 2025-12-13 09:05:49.687 248514 INFO nova.compute.manager [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] VM Stopped (Lifecycle Event)
Dec 13 09:05:49 compute-0 nova_compute[248510]: 2025-12-13 09:05:49.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:49 compute-0 nova_compute[248510]: 2025-12-13 09:05:49.932 248514 DEBUG nova.compute.manager [None req-ce1c4621-9888-4dd9-a6fa-8039dc53993d - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:05:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3142: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 341 B/s wr, 14 op/s
Dec 13 09:05:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:05:51 compute-0 ceph-mon[76537]: pgmap v3142: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 341 B/s wr, 14 op/s
Dec 13 09:05:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3143: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:05:52 compute-0 ceph-mon[76537]: pgmap v3143: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:05:53 compute-0 nova_compute[248510]: 2025-12-13 09:05:53.200 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3144: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:05:54 compute-0 ceph-mon[76537]: pgmap v3144: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:05:54 compute-0 nova_compute[248510]: 2025-12-13 09:05:54.727 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:55.442 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:05:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:55.443 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:05:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:05:55.443 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:05:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:05:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3145: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:05:56 compute-0 ceph-mon[76537]: pgmap v3145: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:05:58 compute-0 nova_compute[248510]: 2025-12-13 09:05:58.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:05:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3146: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:05:58 compute-0 ceph-mon[76537]: pgmap v3146: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:05:59 compute-0 nova_compute[248510]: 2025-12-13 09:05:59.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3147: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:00 compute-0 ceph-mon[76537]: pgmap v3147: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:06:01 compute-0 nova_compute[248510]: 2025-12-13 09:06:01.786 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:06:01 compute-0 nova_compute[248510]: 2025-12-13 09:06:01.787 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 09:06:01 compute-0 nova_compute[248510]: 2025-12-13 09:06:01.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 09:06:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3148: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:02 compute-0 ceph-mon[76537]: pgmap v3148: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:03 compute-0 nova_compute[248510]: 2025-12-13 09:06:03.217 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3149: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:04 compute-0 ceph-mon[76537]: pgmap v3149: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:04 compute-0 nova_compute[248510]: 2025-12-13 09:06:04.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:06:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3150: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:06 compute-0 ceph-mon[76537]: pgmap v3150: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:08 compute-0 nova_compute[248510]: 2025-12-13 09:06:08.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3151: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:08 compute-0 ceph-mon[76537]: pgmap v3151: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:06:09
Dec 13 09:06:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:06:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:06:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.mgr', '.rgw.root', 'images']
Dec 13 09:06:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:06:09 compute-0 nova_compute[248510]: 2025-12-13 09:06:09.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:09 compute-0 nova_compute[248510]: 2025-12-13 09:06:09.791 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3152: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:10 compute-0 ceph-mon[76537]: pgmap v3152: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:06:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:06:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:06:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:11.887 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:27:74 10.100.0.2 2001:db8::f816:3eff:fe17:2774'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe17:2774/64', 'neutron:device_id': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5b5848-41ef-422d-b206-10cd26f67092, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=966dba06-5969-4f23-922d-d3fb06b4a741) old=Port_Binding(mac=['fa:16:3e:17:27:74 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:06:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:11.889 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 966dba06-5969-4f23-922d-d3fb06b4a741 in datapath 4134c529-684a-4aee-a450-f026f71bff55 updated
Dec 13 09:06:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:11.892 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4134c529-684a-4aee-a450-f026f71bff55, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:06:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:11.893 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[36146ca6-e610-4b8f-8e2d-65e3a9c01c42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3153: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:12 compute-0 ceph-mon[76537]: pgmap v3153: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:13 compute-0 nova_compute[248510]: 2025-12-13 09:06:13.261 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3154: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:14 compute-0 ceph-mon[76537]: pgmap v3154: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:14 compute-0 nova_compute[248510]: 2025-12-13 09:06:14.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:06:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3398534661' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:06:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:06:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3398534661' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:06:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3398534661' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:06:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3398534661' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:06:15 compute-0 nova_compute[248510]: 2025-12-13 09:06:15.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:06:16 compute-0 podman[386670]: 2025-12-13 09:06:16.027901482 +0000 UTC m=+0.105202067 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 09:06:16 compute-0 podman[386671]: 2025-12-13 09:06:16.044372127 +0000 UTC m=+0.123060128 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 09:06:16 compute-0 podman[386669]: 2025-12-13 09:06:16.096174994 +0000 UTC m=+0.173564231 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 09:06:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:06:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3155: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:16 compute-0 ceph-mon[76537]: pgmap v3155: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:18 compute-0 nova_compute[248510]: 2025-12-13 09:06:18.295 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3156: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:18 compute-0 ceph-mon[76537]: pgmap v3156: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:18 compute-0 nova_compute[248510]: 2025-12-13 09:06:18.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:06:18 compute-0 nova_compute[248510]: 2025-12-13 09:06:18.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:06:18 compute-0 nova_compute[248510]: 2025-12-13 09:06:18.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:06:18 compute-0 nova_compute[248510]: 2025-12-13 09:06:18.795 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:06:18 compute-0 nova_compute[248510]: 2025-12-13 09:06:18.796 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:06:19 compute-0 nova_compute[248510]: 2025-12-13 09:06:19.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.036 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.037 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.061 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.141 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.142 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.153 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.154 248514 INFO nova.compute.claims [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.276 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:06:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3157: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:20 compute-0 ceph-mon[76537]: pgmap v3157: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:06:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373886190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.880 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.886 248514 DEBUG nova.compute.provider_tree [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.920 248514 DEBUG nova.scheduler.client.report [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.961 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:20 compute-0 nova_compute[248510]: 2025-12-13 09:06:20.962 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.019 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.019 248514 DEBUG nova.network.neutron [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.048 248514 INFO nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.074 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:06:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.177 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.179 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.180 248514 INFO nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Creating image(s)
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.216 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.252 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.286 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.292 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.355 248514 DEBUG nova.policy [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4444623071405115e-05 of space, bias 1.0, pg target 0.004333386921421534 quantized to 32 (current 32)
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006698148359410611 of space, bias 1.0, pg target 0.20094445078231832 quantized to 32 (current 32)
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:06:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.408 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.409 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.410 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.411 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.447 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.453 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e407f205-43a8-423e-a1cb-dc7f58ccced2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:06:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2373886190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.801 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e407f205-43a8-423e-a1cb-dc7f58ccced2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.875 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.972 248514 DEBUG nova.objects.instance [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid e407f205-43a8-423e-a1cb-dc7f58ccced2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.990 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.990 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Ensure instance console log exists: /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.991 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.991 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:21 compute-0 nova_compute[248510]: 2025-12-13 09:06:21.992 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:22 compute-0 nova_compute[248510]: 2025-12-13 09:06:22.453 248514 DEBUG nova.network.neutron [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Successfully created port: cccd2f42-71c6-4464-b04e-1c3e885a4378 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:06:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3158: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:22 compute-0 ceph-mon[76537]: pgmap v3158: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:06:23 compute-0 nova_compute[248510]: 2025-12-13 09:06:23.236 248514 DEBUG nova.network.neutron [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Successfully updated port: cccd2f42-71c6-4464-b04e-1c3e885a4378 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:06:23 compute-0 nova_compute[248510]: 2025-12-13 09:06:23.254 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:06:23 compute-0 nova_compute[248510]: 2025-12-13 09:06:23.255 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:06:23 compute-0 nova_compute[248510]: 2025-12-13 09:06:23.255 248514 DEBUG nova.network.neutron [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:06:23 compute-0 nova_compute[248510]: 2025-12-13 09:06:23.330 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:23 compute-0 nova_compute[248510]: 2025-12-13 09:06:23.365 248514 DEBUG nova.compute.manager [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-changed-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:06:23 compute-0 nova_compute[248510]: 2025-12-13 09:06:23.366 248514 DEBUG nova.compute.manager [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Refreshing instance network info cache due to event network-changed-cccd2f42-71c6-4464-b04e-1c3e885a4378. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:06:23 compute-0 nova_compute[248510]: 2025-12-13 09:06:23.366 248514 DEBUG oslo_concurrency.lockutils [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:06:23 compute-0 nova_compute[248510]: 2025-12-13 09:06:23.423 248514 DEBUG nova.network.neutron [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:06:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3159: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:06:24 compute-0 ceph-mon[76537]: pgmap v3159: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:06:24 compute-0 nova_compute[248510]: 2025-12-13 09:06:24.740 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:24 compute-0 nova_compute[248510]: 2025-12-13 09:06:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:06:24 compute-0 nova_compute[248510]: 2025-12-13 09:06:24.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:06:24 compute-0 nova_compute[248510]: 2025-12-13 09:06:24.806 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:24 compute-0 nova_compute[248510]: 2025-12-13 09:06:24.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:24 compute-0 nova_compute[248510]: 2025-12-13 09:06:24.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:24 compute-0 nova_compute[248510]: 2025-12-13 09:06:24.808 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:06:24 compute-0 nova_compute[248510]: 2025-12-13 09:06:24.808 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:06:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:06:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350107981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:06:25 compute-0 nova_compute[248510]: 2025-12-13 09:06:25.538 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.730s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:06:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3350107981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:06:25 compute-0 nova_compute[248510]: 2025-12-13 09:06:25.731 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:06:25 compute-0 nova_compute[248510]: 2025-12-13 09:06:25.733 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3433MB free_disk=59.96666217781603GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:06:25 compute-0 nova_compute[248510]: 2025-12-13 09:06:25.734 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:25 compute-0 nova_compute[248510]: 2025-12-13 09:06:25.734 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:25 compute-0 nova_compute[248510]: 2025-12-13 09:06:25.846 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance e407f205-43a8-423e-a1cb-dc7f58ccced2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:06:25 compute-0 nova_compute[248510]: 2025-12-13 09:06:25.847 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:06:25 compute-0 nova_compute[248510]: 2025-12-13 09:06:25.847 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:06:25 compute-0 nova_compute[248510]: 2025-12-13 09:06:25.914 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:06:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:06:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:06:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3341216508' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:06:26 compute-0 nova_compute[248510]: 2025-12-13 09:06:26.491 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:06:26 compute-0 nova_compute[248510]: 2025-12-13 09:06:26.499 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:06:26 compute-0 nova_compute[248510]: 2025-12-13 09:06:26.522 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:06:26 compute-0 nova_compute[248510]: 2025-12-13 09:06:26.553 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:06:26 compute-0 nova_compute[248510]: 2025-12-13 09:06:26.554 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3160: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:06:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3341216508' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:06:26 compute-0 ceph-mon[76537]: pgmap v3160: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.415 248514 DEBUG nova.network.neutron [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updating instance_info_cache with network_info: [{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.444 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.445 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Instance network_info: |[{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.446 248514 DEBUG oslo_concurrency.lockutils [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.447 248514 DEBUG nova.network.neutron [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Refreshing network info cache for port cccd2f42-71c6-4464-b04e-1c3e885a4378 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.454 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Start _get_guest_xml network_info=[{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.462 248514 WARNING nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.470 248514 DEBUG nova.virt.libvirt.host [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.471 248514 DEBUG nova.virt.libvirt.host [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.484 248514 DEBUG nova.virt.libvirt.host [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.486 248514 DEBUG nova.virt.libvirt.host [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.487 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.487 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.488 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.488 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.489 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.489 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.490 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.491 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.491 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.492 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.492 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.493 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:06:27 compute-0 nova_compute[248510]: 2025-12-13 09:06:27.500 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:06:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:06:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434418499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.050 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.070 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.073 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:06:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2434418499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3161: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:06:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:06:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741688218' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.645 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.647 248514 DEBUG nova.virt.libvirt.vif [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:06:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-906384999',display_name='tempest-TestGettingAddress-server-906384999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-906384999',id=137,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMkENvafv8/WsEwcyvUB7dPhR5LIszSoSQPrlQUsJiVP5nV5qd8d4ARbreQ6b5yyI9G9/iw/kPcl2vcitjw0oexy5ltUfx15pAU69dzZBbn6bv/Uzo+ktqtZuKuE1Ehyg==',key_name='tempest-TestGettingAddress-384614847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ozf85u1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:06:21Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=e407f205-43a8-423e-a1cb-dc7f58ccced2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.648 248514 DEBUG nova.network.os_vif_util [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.650 248514 DEBUG nova.network.os_vif_util [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.651 248514 DEBUG nova.objects.instance [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid e407f205-43a8-423e-a1cb-dc7f58ccced2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.674 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:06:28 compute-0 nova_compute[248510]:   <uuid>e407f205-43a8-423e-a1cb-dc7f58ccced2</uuid>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   <name>instance-00000089</name>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-906384999</nova:name>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:06:27</nova:creationTime>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:06:28 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:06:28 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:06:28 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:06:28 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:06:28 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:06:28 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:06:28 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:06:28 compute-0 nova_compute[248510]:         <nova:port uuid="cccd2f42-71c6-4464-b04e-1c3e885a4378">
Dec 13 09:06:28 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fec6:dfa0" ipVersion="6"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <system>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <entry name="serial">e407f205-43a8-423e-a1cb-dc7f58ccced2</entry>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <entry name="uuid">e407f205-43a8-423e-a1cb-dc7f58ccced2</entry>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     </system>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   <os>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   </os>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   <features>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   </features>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/e407f205-43a8-423e-a1cb-dc7f58ccced2_disk">
Dec 13 09:06:28 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       </source>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:06:28 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/e407f205-43a8-423e-a1cb-dc7f58ccced2_disk.config">
Dec 13 09:06:28 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       </source>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:06:28 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:c6:df:a0"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <target dev="tapcccd2f42-71"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/console.log" append="off"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <video>
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     </video>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:06:28 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:06:28 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:06:28 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:06:28 compute-0 nova_compute[248510]: </domain>
Dec 13 09:06:28 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.676 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Preparing to wait for external event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.677 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.677 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.678 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.679 248514 DEBUG nova.virt.libvirt.vif [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:06:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-906384999',display_name='tempest-TestGettingAddress-server-906384999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-906384999',id=137,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMkENvafv8/WsEwcyvUB7dPhR5LIszSoSQPrlQUsJiVP5nV5qd8d4ARbreQ6b5yyI9G9/iw/kPcl2vcitjw0oexy5ltUfx15pAU69dzZBbn6bv/Uzo+ktqtZuKuE1Ehyg==',key_name='tempest-TestGettingAddress-384614847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ozf85u1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:06:21Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=e407f205-43a8-423e-a1cb-dc7f58ccced2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.680 248514 DEBUG nova.network.os_vif_util [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.681 248514 DEBUG nova.network.os_vif_util [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.682 248514 DEBUG os_vif [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.683 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.684 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.688 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcccd2f42-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.689 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcccd2f42-71, col_values=(('external_ids', {'iface-id': 'cccd2f42-71c6-4464-b04e-1c3e885a4378', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:df:a0', 'vm-uuid': 'e407f205-43a8-423e-a1cb-dc7f58ccced2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.691 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:28 compute-0 NetworkManager[50376]: <info>  [1765616788.6925] manager: (tapcccd2f42-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/584)
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.700 248514 INFO os_vif [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71')
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.763 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.764 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.764 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:c6:df:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.765 248514 INFO nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Using config drive
Dec 13 09:06:28 compute-0 nova_compute[248510]: 2025-12-13 09:06:28.798 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:06:29 compute-0 ceph-mon[76537]: pgmap v3161: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:06:29 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3741688218' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.217 248514 INFO nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Creating config drive at /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/disk.config
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.226 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzeohr92u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.376 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzeohr92u" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.403 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.407 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/disk.config e407f205-43a8-423e-a1cb-dc7f58ccced2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.553 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.554 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.555 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.588 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/disk.config e407f205-43a8-423e-a1cb-dc7f58ccced2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.589 248514 INFO nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Deleting local config drive /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/disk.config because it was imported into RBD.
Dec 13 09:06:29 compute-0 kernel: tapcccd2f42-71: entered promiscuous mode
Dec 13 09:06:29 compute-0 NetworkManager[50376]: <info>  [1765616789.6688] manager: (tapcccd2f42-71): new Tun device (/org/freedesktop/NetworkManager/Devices/585)
Dec 13 09:06:29 compute-0 ovn_controller[148476]: 2025-12-13T09:06:29Z|01420|binding|INFO|Claiming lport cccd2f42-71c6-4464-b04e-1c3e885a4378 for this chassis.
Dec 13 09:06:29 compute-0 ovn_controller[148476]: 2025-12-13T09:06:29Z|01421|binding|INFO|cccd2f42-71c6-4464-b04e-1c3e885a4378: Claiming fa:16:3e:c6:df:a0 10.100.0.3 2001:db8::f816:3eff:fec6:dfa0
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.669 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.676 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.698 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:df:a0 10.100.0.3 2001:db8::f816:3eff:fec6:dfa0'], port_security=['fa:16:3e:c6:df:a0 10.100.0.3 2001:db8::f816:3eff:fec6:dfa0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fec6:dfa0/64', 'neutron:device_id': 'e407f205-43a8-423e-a1cb-dc7f58ccced2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f3b9c4f-25eb-4912-b05c-6ea015f84c28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5b5848-41ef-422d-b206-10cd26f67092, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=cccd2f42-71c6-4464-b04e-1c3e885a4378) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.701 158419 INFO neutron.agent.ovn.metadata.agent [-] Port cccd2f42-71c6-4464-b04e-1c3e885a4378 in datapath 4134c529-684a-4aee-a450-f026f71bff55 bound to our chassis
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.704 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4134c529-684a-4aee-a450-f026f71bff55
Dec 13 09:06:29 compute-0 systemd-machined[210538]: New machine qemu-168-instance-00000089.
Dec 13 09:06:29 compute-0 systemd-udevd[387097]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.719 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa4dfa25-da41-4c1a-abb1-322e7ba7c079]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.720 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4134c529-61 in ovnmeta-4134c529-684a-4aee-a450-f026f71bff55 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.721 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4134c529-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.721 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0fe0f7-b963-4a4c-b82f-880de2b1539b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.722 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc35db83-9d2e-4886-9131-18f80b818e14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:29 compute-0 NetworkManager[50376]: <info>  [1765616789.7327] device (tapcccd2f42-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:06:29 compute-0 NetworkManager[50376]: <info>  [1765616789.7350] device (tapcccd2f42-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.737 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[254fbc2d-51d3-4657-814f-93e5ad8dd6a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:29 compute-0 systemd[1]: Started Virtual Machine qemu-168-instance-00000089.
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.762 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[01bac608-9967-4310-af8a-9d7f30e91733]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.765 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:29 compute-0 ovn_controller[148476]: 2025-12-13T09:06:29Z|01422|binding|INFO|Setting lport cccd2f42-71c6-4464-b04e-1c3e885a4378 ovn-installed in OVS
Dec 13 09:06:29 compute-0 ovn_controller[148476]: 2025-12-13T09:06:29Z|01423|binding|INFO|Setting lport cccd2f42-71c6-4464-b04e-1c3e885a4378 up in Southbound
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:06:29 compute-0 nova_compute[248510]: 2025-12-13 09:06:29.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.802 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f55a1a14-70a3-4acf-bbd2-10d8a5e99bc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.806 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b96583d7-d478-480e-a1b4-2bc8c4041a9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:29 compute-0 systemd-udevd[387100]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:06:29 compute-0 NetworkManager[50376]: <info>  [1765616789.8079] manager: (tap4134c529-60): new Veth device (/org/freedesktop/NetworkManager/Devices/586)
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.845 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[30477490-2afa-4eb8-b8f3-efda5c436cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.849 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[71154260-fb0f-4d49-afc7-ccc723335808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:29 compute-0 NetworkManager[50376]: <info>  [1765616789.8797] device (tap4134c529-60): carrier: link connected
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.890 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0a7fca-f632-4248-9490-dd0f09cebb40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.912 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0f3201-1b96-43ce-b6c3-33f4dfebe371]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4134c529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:27:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936709, 'reachable_time': 43689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387129, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.938 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6e4d16-43c3-47f9-a5c5-5173e127e291]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:2774'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936709, 'tstamp': 936709}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387130, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.961 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[286017f1-b5ad-4975-9f06-25503c1e0f05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4134c529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:27:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936709, 'reachable_time': 43689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 387131, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.005 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c690e52b-0aa9-40d8-9760-1b39f2f8ec00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.078 248514 DEBUG nova.compute.manager [req-e4ac3301-2114-4ba4-b23d-853052e09c0a req-977721ca-07cc-4ac8-89e3-e0290985dd4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.078 248514 DEBUG oslo_concurrency.lockutils [req-e4ac3301-2114-4ba4-b23d-853052e09c0a req-977721ca-07cc-4ac8-89e3-e0290985dd4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.078 248514 DEBUG oslo_concurrency.lockutils [req-e4ac3301-2114-4ba4-b23d-853052e09c0a req-977721ca-07cc-4ac8-89e3-e0290985dd4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.079 248514 DEBUG oslo_concurrency.lockutils [req-e4ac3301-2114-4ba4-b23d-853052e09c0a req-977721ca-07cc-4ac8-89e3-e0290985dd4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.079 248514 DEBUG nova.compute.manager [req-e4ac3301-2114-4ba4-b23d-853052e09c0a req-977721ca-07cc-4ac8-89e3-e0290985dd4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Processing event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.082 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f5629999-1240-431c-abe8-4e7f2a689c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.084 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4134c529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.084 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.085 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4134c529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:06:30 compute-0 NetworkManager[50376]: <info>  [1765616790.0871] manager: (tap4134c529-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/587)
Dec 13 09:06:30 compute-0 kernel: tap4134c529-60: entered promiscuous mode
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.089 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4134c529-60, col_values=(('external_ids', {'iface-id': '966dba06-5969-4f23-922d-d3fb06b4a741'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:06:30 compute-0 ovn_controller[148476]: 2025-12-13T09:06:30Z|01424|binding|INFO|Releasing lport 966dba06-5969-4f23-922d-d3fb06b4a741 from this chassis (sb_readonly=0)
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.103 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.106 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.107 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4134c529-684a-4aee-a450-f026f71bff55.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4134c529-684a-4aee-a450-f026f71bff55.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.108 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f69d870-91e8-4453-9175-1ff9cd486fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.108 248514 DEBUG nova.network.neutron [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updated VIF entry in instance network info cache for port cccd2f42-71c6-4464-b04e-1c3e885a4378. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.109 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-4134c529-684a-4aee-a450-f026f71bff55
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/4134c529-684a-4aee-a450-f026f71bff55.pid.haproxy
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 4134c529-684a-4aee-a450-f026f71bff55
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.109 248514 DEBUG nova.network.neutron [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updating instance_info_cache with network_info: [{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:06:30 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.110 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'env', 'PROCESS_TAG=haproxy-4134c529-684a-4aee-a450-f026f71bff55', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4134c529-684a-4aee-a450-f026f71bff55.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.129 248514 DEBUG oslo_concurrency.lockutils [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.393 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616790.3923938, e407f205-43a8-423e-a1cb-dc7f58ccced2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.394 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] VM Started (Lifecycle Event)
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.397 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.402 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.407 248514 INFO nova.virt.libvirt.driver [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Instance spawned successfully.
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.407 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.419 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.424 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.442 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.443 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.444 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.444 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.445 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.446 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.478 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.479 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616790.3939173, e407f205-43a8-423e-a1cb-dc7f58ccced2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.479 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] VM Paused (Lifecycle Event)
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.518 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.523 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616790.4006698, e407f205-43a8-423e-a1cb-dc7f58ccced2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.524 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] VM Resumed (Lifecycle Event)
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.538 248514 INFO nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Took 9.36 seconds to spawn the instance on the hypervisor.
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.539 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.583 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.586 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:06:30 compute-0 podman[387204]: 2025-12-13 09:06:30.492403089 +0000 UTC m=+0.033992478 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:06:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3162: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.643 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.916 248514 INFO nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Took 10.80 seconds to build instance.
Dec 13 09:06:30 compute-0 nova_compute[248510]: 2025-12-13 09:06:30.952 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:06:31 compute-0 ceph-mon[76537]: pgmap v3162: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:06:31 compute-0 podman[387204]: 2025-12-13 09:06:31.34363423 +0000 UTC m=+0.885223599 container create b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:06:31 compute-0 systemd[1]: Started libpod-conmon-b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e.scope.
Dec 13 09:06:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6629d0d2946616c16ab6364a7afe8e36321cd71c229fca3a1701a5a9a8e1eeb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:31 compute-0 podman[387204]: 2025-12-13 09:06:31.502877931 +0000 UTC m=+1.044467320 container init b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 09:06:31 compute-0 podman[387204]: 2025-12-13 09:06:31.513925646 +0000 UTC m=+1.055514995 container start b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 09:06:31 compute-0 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [NOTICE]   (387224) : New worker (387226) forked
Dec 13 09:06:31 compute-0 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [NOTICE]   (387224) : Loading success.
Dec 13 09:06:32 compute-0 nova_compute[248510]: 2025-12-13 09:06:32.207 248514 DEBUG nova.compute.manager [req-e2acfe12-b42a-4e2b-a232-f9e6c564728e req-de834545-522c-48f7-816e-2de9ce214e87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:06:32 compute-0 nova_compute[248510]: 2025-12-13 09:06:32.208 248514 DEBUG oslo_concurrency.lockutils [req-e2acfe12-b42a-4e2b-a232-f9e6c564728e req-de834545-522c-48f7-816e-2de9ce214e87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:32 compute-0 nova_compute[248510]: 2025-12-13 09:06:32.208 248514 DEBUG oslo_concurrency.lockutils [req-e2acfe12-b42a-4e2b-a232-f9e6c564728e req-de834545-522c-48f7-816e-2de9ce214e87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:32 compute-0 nova_compute[248510]: 2025-12-13 09:06:32.209 248514 DEBUG oslo_concurrency.lockutils [req-e2acfe12-b42a-4e2b-a232-f9e6c564728e req-de834545-522c-48f7-816e-2de9ce214e87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:32 compute-0 nova_compute[248510]: 2025-12-13 09:06:32.209 248514 DEBUG nova.compute.manager [req-e2acfe12-b42a-4e2b-a232-f9e6c564728e req-de834545-522c-48f7-816e-2de9ce214e87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] No waiting events found dispatching network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:06:32 compute-0 nova_compute[248510]: 2025-12-13 09:06:32.210 248514 WARNING nova.compute.manager [req-e2acfe12-b42a-4e2b-a232-f9e6c564728e req-de834545-522c-48f7-816e-2de9ce214e87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received unexpected event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 for instance with vm_state active and task_state None.
Dec 13 09:06:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3163: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:06:32 compute-0 ceph-mon[76537]: pgmap v3163: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:06:33 compute-0 nova_compute[248510]: 2025-12-13 09:06:33.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:33 compute-0 nova_compute[248510]: 2025-12-13 09:06:33.691 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3164: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 09:06:34 compute-0 ceph-mon[76537]: pgmap v3164: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 09:06:35 compute-0 ovn_controller[148476]: 2025-12-13T09:06:35Z|01425|binding|INFO|Releasing lport 966dba06-5969-4f23-922d-d3fb06b4a741 from this chassis (sb_readonly=0)
Dec 13 09:06:35 compute-0 NetworkManager[50376]: <info>  [1765616795.4696] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/588)
Dec 13 09:06:35 compute-0 NetworkManager[50376]: <info>  [1765616795.4706] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/589)
Dec 13 09:06:35 compute-0 nova_compute[248510]: 2025-12-13 09:06:35.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:35 compute-0 nova_compute[248510]: 2025-12-13 09:06:35.529 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:35 compute-0 ovn_controller[148476]: 2025-12-13T09:06:35Z|01426|binding|INFO|Releasing lport 966dba06-5969-4f23-922d-d3fb06b4a741 from this chassis (sb_readonly=0)
Dec 13 09:06:35 compute-0 nova_compute[248510]: 2025-12-13 09:06:35.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:06:36 compute-0 nova_compute[248510]: 2025-12-13 09:06:36.595 248514 DEBUG nova.compute.manager [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-changed-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:06:36 compute-0 nova_compute[248510]: 2025-12-13 09:06:36.596 248514 DEBUG nova.compute.manager [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Refreshing instance network info cache due to event network-changed-cccd2f42-71c6-4464-b04e-1c3e885a4378. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:06:36 compute-0 nova_compute[248510]: 2025-12-13 09:06:36.596 248514 DEBUG oslo_concurrency.lockutils [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:06:36 compute-0 nova_compute[248510]: 2025-12-13 09:06:36.596 248514 DEBUG oslo_concurrency.lockutils [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:06:36 compute-0 nova_compute[248510]: 2025-12-13 09:06:36.596 248514 DEBUG nova.network.neutron [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Refreshing network info cache for port cccd2f42-71c6-4464-b04e-1c3e885a4378 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:06:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3165: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:06:36 compute-0 ceph-mon[76537]: pgmap v3165: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:06:38 compute-0 nova_compute[248510]: 2025-12-13 09:06:38.391 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3166: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:06:38 compute-0 nova_compute[248510]: 2025-12-13 09:06:38.599 248514 DEBUG nova.network.neutron [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updated VIF entry in instance network info cache for port cccd2f42-71c6-4464-b04e-1c3e885a4378. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:06:38 compute-0 nova_compute[248510]: 2025-12-13 09:06:38.600 248514 DEBUG nova.network.neutron [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updating instance_info_cache with network_info: [{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:06:38 compute-0 ceph-mon[76537]: pgmap v3166: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:06:38 compute-0 nova_compute[248510]: 2025-12-13 09:06:38.693 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:39 compute-0 nova_compute[248510]: 2025-12-13 09:06:39.036 248514 DEBUG oslo_concurrency.lockutils [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:06:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:06:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:06:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:06:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:06:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:06:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:06:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3167: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:06:40 compute-0 ceph-mon[76537]: pgmap v3167: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:06:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:06:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3168: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:06:42 compute-0 ceph-mon[76537]: pgmap v3168: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:06:42 compute-0 ovn_controller[148476]: 2025-12-13T09:06:42Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:df:a0 10.100.0.3
Dec 13 09:06:42 compute-0 ovn_controller[148476]: 2025-12-13T09:06:42Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:df:a0 10.100.0.3
Dec 13 09:06:43 compute-0 nova_compute[248510]: 2025-12-13 09:06:43.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:43 compute-0 sudo[387236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:06:43 compute-0 sudo[387236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:06:43 compute-0 sudo[387236]: pam_unix(sudo:session): session closed for user root
Dec 13 09:06:43 compute-0 nova_compute[248510]: 2025-12-13 09:06:43.696 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:43 compute-0 sudo[387261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 09:06:43 compute-0 sudo[387261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:06:44 compute-0 sudo[387261]: pam_unix(sudo:session): session closed for user root
Dec 13 09:06:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:06:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:06:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:06:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:06:44 compute-0 sudo[387307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:06:44 compute-0 sudo[387307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:06:44 compute-0 sudo[387307]: pam_unix(sudo:session): session closed for user root
Dec 13 09:06:44 compute-0 sudo[387332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:06:44 compute-0 sudo[387332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:06:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3169: 321 pgs: 321 active+clean; 120 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 13 09:06:44 compute-0 sudo[387332]: pam_unix(sudo:session): session closed for user root
Dec 13 09:06:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:06:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:06:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:06:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:06:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:06:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:06:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:06:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:06:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:06:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:06:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:06:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:06:45 compute-0 sshd-session[387357]: Invalid user firedancer from 80.94.92.165 port 49028
Dec 13 09:06:45 compute-0 sudo[387391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:06:45 compute-0 sudo[387391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:06:45 compute-0 sudo[387391]: pam_unix(sudo:session): session closed for user root
Dec 13 09:06:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:06:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:06:45 compute-0 ceph-mon[76537]: pgmap v3169: 321 pgs: 321 active+clean; 120 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 13 09:06:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:06:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:06:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:06:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:06:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:06:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:06:45 compute-0 sudo[387416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:06:45 compute-0 sudo[387416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:06:45 compute-0 sshd-session[387357]: Connection closed by invalid user firedancer 80.94.92.165 port 49028 [preauth]
Dec 13 09:06:45 compute-0 podman[387453]: 2025-12-13 09:06:45.47849927 +0000 UTC m=+0.069221738 container create 3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_turing, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:06:45 compute-0 systemd[1]: Started libpod-conmon-3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324.scope.
Dec 13 09:06:45 compute-0 podman[387453]: 2025-12-13 09:06:45.444482672 +0000 UTC m=+0.035205190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:06:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:06:45 compute-0 podman[387453]: 2025-12-13 09:06:45.585652556 +0000 UTC m=+0.176375074 container init 3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_turing, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 09:06:45 compute-0 podman[387453]: 2025-12-13 09:06:45.600696714 +0000 UTC m=+0.191419142 container start 3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 09:06:45 compute-0 podman[387453]: 2025-12-13 09:06:45.604952794 +0000 UTC m=+0.195675322 container attach 3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 09:06:45 compute-0 admiring_turing[387469]: 167 167
Dec 13 09:06:45 compute-0 systemd[1]: libpod-3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324.scope: Deactivated successfully.
Dec 13 09:06:45 compute-0 conmon[387469]: conmon 3f37a697286e682251ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324.scope/container/memory.events
Dec 13 09:06:45 compute-0 podman[387453]: 2025-12-13 09:06:45.612897319 +0000 UTC m=+0.203619777 container died 3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 09:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d75def86a616fdcd3f51e8f810ec8d56e3ba98a236fa91b198b5f5b6b564d53f-merged.mount: Deactivated successfully.
Dec 13 09:06:45 compute-0 podman[387453]: 2025-12-13 09:06:45.664439979 +0000 UTC m=+0.255162437 container remove 3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:06:45 compute-0 systemd[1]: libpod-conmon-3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324.scope: Deactivated successfully.
Dec 13 09:06:45 compute-0 podman[387492]: 2025-12-13 09:06:45.847247318 +0000 UTC m=+0.041895182 container create 1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chebyshev, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:06:45 compute-0 systemd[1]: Started libpod-conmon-1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f.scope.
Dec 13 09:06:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:06:45 compute-0 podman[387492]: 2025-12-13 09:06:45.829303715 +0000 UTC m=+0.023951609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bd0f794143dd3f23eeb5ab5f016b0ea1dcc8b5aa262454176daad63a51da4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bd0f794143dd3f23eeb5ab5f016b0ea1dcc8b5aa262454176daad63a51da4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bd0f794143dd3f23eeb5ab5f016b0ea1dcc8b5aa262454176daad63a51da4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bd0f794143dd3f23eeb5ab5f016b0ea1dcc8b5aa262454176daad63a51da4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bd0f794143dd3f23eeb5ab5f016b0ea1dcc8b5aa262454176daad63a51da4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:45 compute-0 podman[387492]: 2025-12-13 09:06:45.940021913 +0000 UTC m=+0.134669837 container init 1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chebyshev, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:06:45 compute-0 podman[387492]: 2025-12-13 09:06:45.946867189 +0000 UTC m=+0.141515093 container start 1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chebyshev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:06:45 compute-0 podman[387492]: 2025-12-13 09:06:45.950978585 +0000 UTC m=+0.145626459 container attach 1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 09:06:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:06:46 compute-0 romantic_chebyshev[387508]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:06:46 compute-0 romantic_chebyshev[387508]: --> All data devices are unavailable
Dec 13 09:06:46 compute-0 systemd[1]: libpod-1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f.scope: Deactivated successfully.
Dec 13 09:06:46 compute-0 podman[387492]: 2025-12-13 09:06:46.543620072 +0000 UTC m=+0.738268006 container died 1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:06:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-71bd0f794143dd3f23eeb5ab5f016b0ea1dcc8b5aa262454176daad63a51da4d-merged.mount: Deactivated successfully.
Dec 13 09:06:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3170: 321 pgs: 321 active+clean; 120 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 09:06:46 compute-0 podman[387492]: 2025-12-13 09:06:46.604889563 +0000 UTC m=+0.799537477 container remove 1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 09:06:46 compute-0 systemd[1]: libpod-conmon-1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f.scope: Deactivated successfully.
Dec 13 09:06:46 compute-0 sudo[387416]: pam_unix(sudo:session): session closed for user root
Dec 13 09:06:46 compute-0 podman[387540]: 2025-12-13 09:06:46.674001137 +0000 UTC m=+0.080668933 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 13 09:06:46 compute-0 ceph-mon[76537]: pgmap v3170: 321 pgs: 321 active+clean; 120 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 09:06:46 compute-0 podman[387538]: 2025-12-13 09:06:46.721029851 +0000 UTC m=+0.125670345 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible)
Dec 13 09:06:46 compute-0 podman[387529]: 2025-12-13 09:06:46.740885463 +0000 UTC m=+0.149066798 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 09:06:46 compute-0 sudo[387593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:06:46 compute-0 sudo[387593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:06:46 compute-0 sudo[387593]: pam_unix(sudo:session): session closed for user root
Dec 13 09:06:46 compute-0 sudo[387627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:06:46 compute-0 sudo[387627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:06:47 compute-0 podman[387664]: 2025-12-13 09:06:47.123279873 +0000 UTC m=+0.043248987 container create c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_curie, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 09:06:47 compute-0 systemd[1]: Started libpod-conmon-c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b.scope.
Dec 13 09:06:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:06:47 compute-0 podman[387664]: 2025-12-13 09:06:47.101365128 +0000 UTC m=+0.021334282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:06:47 compute-0 podman[387664]: 2025-12-13 09:06:47.220165064 +0000 UTC m=+0.140134208 container init c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:06:47 compute-0 podman[387664]: 2025-12-13 09:06:47.233345524 +0000 UTC m=+0.153314678 container start c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 09:06:47 compute-0 podman[387664]: 2025-12-13 09:06:47.237358638 +0000 UTC m=+0.157327772 container attach c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_curie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:06:47 compute-0 charming_curie[387680]: 167 167
Dec 13 09:06:47 compute-0 systemd[1]: libpod-c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b.scope: Deactivated successfully.
Dec 13 09:06:47 compute-0 conmon[387680]: conmon c8fe316ffef2c63c9292 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b.scope/container/memory.events
Dec 13 09:06:47 compute-0 podman[387664]: 2025-12-13 09:06:47.243277891 +0000 UTC m=+0.163247025 container died c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3313704034a8a0eb4a76198f395417933d7eb710bab3c2e5795deb01a06e0b69-merged.mount: Deactivated successfully.
Dec 13 09:06:47 compute-0 podman[387664]: 2025-12-13 09:06:47.297414588 +0000 UTC m=+0.217383732 container remove c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:06:47 compute-0 systemd[1]: libpod-conmon-c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b.scope: Deactivated successfully.
Dec 13 09:06:47 compute-0 podman[387704]: 2025-12-13 09:06:47.560884639 +0000 UTC m=+0.065687177 container create 4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 09:06:47 compute-0 systemd[1]: Started libpod-conmon-4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1.scope.
Dec 13 09:06:47 compute-0 podman[387704]: 2025-12-13 09:06:47.538359887 +0000 UTC m=+0.043162495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:06:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a5cffdd0e14db57361ee66b6f1db103fbd134fdaa1c6867f44cdd1e61e02d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a5cffdd0e14db57361ee66b6f1db103fbd134fdaa1c6867f44cdd1e61e02d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a5cffdd0e14db57361ee66b6f1db103fbd134fdaa1c6867f44cdd1e61e02d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a5cffdd0e14db57361ee66b6f1db103fbd134fdaa1c6867f44cdd1e61e02d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:47 compute-0 podman[387704]: 2025-12-13 09:06:47.669148273 +0000 UTC m=+0.173950841 container init 4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:06:47 compute-0 podman[387704]: 2025-12-13 09:06:47.687575039 +0000 UTC m=+0.192377607 container start 4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_moser, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:06:47 compute-0 podman[387704]: 2025-12-13 09:06:47.69186618 +0000 UTC m=+0.196668748 container attach 4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_moser, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 09:06:47 compute-0 sharp_moser[387720]: {
Dec 13 09:06:47 compute-0 sharp_moser[387720]:     "0": [
Dec 13 09:06:47 compute-0 sharp_moser[387720]:         {
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "devices": [
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "/dev/loop3"
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             ],
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_name": "ceph_lv0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_size": "21470642176",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "name": "ceph_lv0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "tags": {
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.cluster_name": "ceph",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.crush_device_class": "",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.encrypted": "0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.objectstore": "bluestore",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.osd_id": "0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.type": "block",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.vdo": "0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.with_tpm": "0"
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             },
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "type": "block",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "vg_name": "ceph_vg0"
Dec 13 09:06:47 compute-0 sharp_moser[387720]:         }
Dec 13 09:06:47 compute-0 sharp_moser[387720]:     ],
Dec 13 09:06:47 compute-0 sharp_moser[387720]:     "1": [
Dec 13 09:06:47 compute-0 sharp_moser[387720]:         {
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "devices": [
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "/dev/loop4"
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             ],
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_name": "ceph_lv1",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_size": "21470642176",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "name": "ceph_lv1",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "tags": {
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.cluster_name": "ceph",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.crush_device_class": "",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.encrypted": "0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.objectstore": "bluestore",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.osd_id": "1",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.type": "block",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.vdo": "0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.with_tpm": "0"
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             },
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "type": "block",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "vg_name": "ceph_vg1"
Dec 13 09:06:47 compute-0 sharp_moser[387720]:         }
Dec 13 09:06:47 compute-0 sharp_moser[387720]:     ],
Dec 13 09:06:47 compute-0 sharp_moser[387720]:     "2": [
Dec 13 09:06:47 compute-0 sharp_moser[387720]:         {
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "devices": [
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "/dev/loop5"
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             ],
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_name": "ceph_lv2",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_size": "21470642176",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "name": "ceph_lv2",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "tags": {
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.cluster_name": "ceph",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.crush_device_class": "",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.encrypted": "0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.objectstore": "bluestore",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.osd_id": "2",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.type": "block",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.vdo": "0",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:                 "ceph.with_tpm": "0"
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             },
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "type": "block",
Dec 13 09:06:47 compute-0 sharp_moser[387720]:             "vg_name": "ceph_vg2"
Dec 13 09:06:47 compute-0 sharp_moser[387720]:         }
Dec 13 09:06:47 compute-0 sharp_moser[387720]:     ]
Dec 13 09:06:47 compute-0 sharp_moser[387720]: }
Dec 13 09:06:48 compute-0 systemd[1]: libpod-4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1.scope: Deactivated successfully.
Dec 13 09:06:48 compute-0 podman[387729]: 2025-12-13 09:06:48.080546742 +0000 UTC m=+0.040211259 container died 4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_moser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d85a5cffdd0e14db57361ee66b6f1db103fbd134fdaa1c6867f44cdd1e61e02d-merged.mount: Deactivated successfully.
Dec 13 09:06:48 compute-0 podman[387729]: 2025-12-13 09:06:48.131312502 +0000 UTC m=+0.090976979 container remove 4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Dec 13 09:06:48 compute-0 systemd[1]: libpod-conmon-4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1.scope: Deactivated successfully.
Dec 13 09:06:48 compute-0 sudo[387627]: pam_unix(sudo:session): session closed for user root
Dec 13 09:06:48 compute-0 sudo[387743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:06:48 compute-0 sudo[387743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:06:48 compute-0 sudo[387743]: pam_unix(sudo:session): session closed for user root
Dec 13 09:06:48 compute-0 sudo[387768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:06:48 compute-0 sudo[387768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:06:48 compute-0 nova_compute[248510]: 2025-12-13 09:06:48.395 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3171: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:06:48 compute-0 ceph-mon[76537]: pgmap v3171: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:06:48 compute-0 nova_compute[248510]: 2025-12-13 09:06:48.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:48 compute-0 podman[387806]: 2025-12-13 09:06:48.700818802 +0000 UTC m=+0.051836649 container create 2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_napier, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 09:06:48 compute-0 systemd[1]: Started libpod-conmon-2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103.scope.
Dec 13 09:06:48 compute-0 podman[387806]: 2025-12-13 09:06:48.674131513 +0000 UTC m=+0.025149410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:06:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:06:48 compute-0 podman[387806]: 2025-12-13 09:06:48.807378813 +0000 UTC m=+0.158396690 container init 2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_napier, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:06:48 compute-0 podman[387806]: 2025-12-13 09:06:48.820747348 +0000 UTC m=+0.171765215 container start 2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 09:06:48 compute-0 podman[387806]: 2025-12-13 09:06:48.825144211 +0000 UTC m=+0.176162078 container attach 2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 09:06:48 compute-0 kind_napier[387823]: 167 167
Dec 13 09:06:48 compute-0 systemd[1]: libpod-2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103.scope: Deactivated successfully.
Dec 13 09:06:48 compute-0 podman[387806]: 2025-12-13 09:06:48.82818315 +0000 UTC m=+0.179201037 container died 2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-17f06fc154fb8a1e4dadf4278479a35067ecb219b4b18a980dcbe50d57a66cda-merged.mount: Deactivated successfully.
Dec 13 09:06:48 compute-0 podman[387806]: 2025-12-13 09:06:48.877091952 +0000 UTC m=+0.228109819 container remove 2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_napier, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:06:48 compute-0 systemd[1]: libpod-conmon-2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103.scope: Deactivated successfully.
Dec 13 09:06:49 compute-0 podman[387847]: 2025-12-13 09:06:49.103536227 +0000 UTC m=+0.049614392 container create 51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_blackburn, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 09:06:49 compute-0 systemd[1]: Started libpod-conmon-51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89.scope.
Dec 13 09:06:49 compute-0 podman[387847]: 2025-12-13 09:06:49.083267024 +0000 UTC m=+0.029345209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:06:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75112c473006ccd3e7768a968dc207ea3b7d026b9edf04e34c05b161de791f97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75112c473006ccd3e7768a968dc207ea3b7d026b9edf04e34c05b161de791f97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75112c473006ccd3e7768a968dc207ea3b7d026b9edf04e34c05b161de791f97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75112c473006ccd3e7768a968dc207ea3b7d026b9edf04e34c05b161de791f97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:06:49 compute-0 podman[387847]: 2025-12-13 09:06:49.208166238 +0000 UTC m=+0.154244393 container init 51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:06:49 compute-0 podman[387847]: 2025-12-13 09:06:49.219898331 +0000 UTC m=+0.165976496 container start 51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_blackburn, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:06:49 compute-0 podman[387847]: 2025-12-13 09:06:49.223717569 +0000 UTC m=+0.169795754 container attach 51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 09:06:49 compute-0 lvm[387943]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:06:49 compute-0 lvm[387943]: VG ceph_vg1 finished
Dec 13 09:06:49 compute-0 lvm[387942]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:06:49 compute-0 lvm[387942]: VG ceph_vg0 finished
Dec 13 09:06:49 compute-0 lvm[387945]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:06:49 compute-0 lvm[387945]: VG ceph_vg2 finished
Dec 13 09:06:50 compute-0 agitated_blackburn[387864]: {}
Dec 13 09:06:50 compute-0 systemd[1]: libpod-51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89.scope: Deactivated successfully.
Dec 13 09:06:50 compute-0 podman[387847]: 2025-12-13 09:06:50.087451452 +0000 UTC m=+1.033529617 container died 51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 09:06:50 compute-0 systemd[1]: libpod-51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89.scope: Consumed 1.429s CPU time.
Dec 13 09:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-75112c473006ccd3e7768a968dc207ea3b7d026b9edf04e34c05b161de791f97-merged.mount: Deactivated successfully.
Dec 13 09:06:50 compute-0 podman[387847]: 2025-12-13 09:06:50.141344493 +0000 UTC m=+1.087422668 container remove 51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_blackburn, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 13 09:06:50 compute-0 systemd[1]: libpod-conmon-51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89.scope: Deactivated successfully.
Dec 13 09:06:50 compute-0 sudo[387768]: pam_unix(sudo:session): session closed for user root
Dec 13 09:06:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:06:50 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:06:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:06:50 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:06:50 compute-0 sudo[387960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:06:50 compute-0 sudo[387960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:06:50 compute-0 sudo[387960]: pam_unix(sudo:session): session closed for user root
Dec 13 09:06:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3172: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:06:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:06:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:06:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:06:51 compute-0 ceph-mon[76537]: pgmap v3172: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:06:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3173: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:06:52 compute-0 ceph-mon[76537]: pgmap v3173: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:06:53 compute-0 nova_compute[248510]: 2025-12-13 09:06:53.397 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:53 compute-0 nova_compute[248510]: 2025-12-13 09:06:53.700 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:53 compute-0 nova_compute[248510]: 2025-12-13 09:06:53.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:06:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3174: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:06:54 compute-0 ceph-mon[76537]: pgmap v3174: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:06:55 compute-0 nova_compute[248510]: 2025-12-13 09:06:55.072 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:55 compute-0 nova_compute[248510]: 2025-12-13 09:06:55.073 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:55 compute-0 nova_compute[248510]: 2025-12-13 09:06:55.109 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:06:55 compute-0 nova_compute[248510]: 2025-12-13 09:06:55.225 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:55 compute-0 nova_compute[248510]: 2025-12-13 09:06:55.226 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:55 compute-0 nova_compute[248510]: 2025-12-13 09:06:55.236 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:06:55 compute-0 nova_compute[248510]: 2025-12-13 09:06:55.237 248514 INFO nova.compute.claims [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:06:55 compute-0 nova_compute[248510]: 2025-12-13 09:06:55.380 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:06:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:55.443 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:55.444 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:55.445 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:06:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2022079524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:06:55 compute-0 nova_compute[248510]: 2025-12-13 09:06:55.939 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:06:55 compute-0 nova_compute[248510]: 2025-12-13 09:06:55.949 248514 DEBUG nova.compute.provider_tree [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:06:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2022079524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:06:55 compute-0 nova_compute[248510]: 2025-12-13 09:06:55.982 248514 DEBUG nova.scheduler.client.report [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.012 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.013 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.080 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.081 248514 DEBUG nova.network.neutron [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.117 248514 INFO nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:06:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.152 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.265 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.267 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.268 248514 INFO nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Creating image(s)
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.298 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.327 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.350 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.358 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.401 248514 DEBUG nova.policy [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.437 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.438 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.439 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.439 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.465 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.469 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:06:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3175: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s rd, 14 KiB/s wr, 1 op/s
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.786 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.886 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:06:56 compute-0 ceph-mon[76537]: pgmap v3175: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s rd, 14 KiB/s wr, 1 op/s
Dec 13 09:06:56 compute-0 nova_compute[248510]: 2025-12-13 09:06:56.990 248514 DEBUG nova.objects.instance [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 9fc8d93d-978b-4964-b6ca-86b96580e92f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:06:57 compute-0 nova_compute[248510]: 2025-12-13 09:06:57.007 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:06:57 compute-0 nova_compute[248510]: 2025-12-13 09:06:57.008 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Ensure instance console log exists: /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:06:57 compute-0 nova_compute[248510]: 2025-12-13 09:06:57.008 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:06:57 compute-0 nova_compute[248510]: 2025-12-13 09:06:57.008 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:06:57 compute-0 nova_compute[248510]: 2025-12-13 09:06:57.009 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:06:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:57.694 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:06:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:57.695 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:06:57 compute-0 nova_compute[248510]: 2025-12-13 09:06:57.695 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:57 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:06:57.696 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:06:57 compute-0 nova_compute[248510]: 2025-12-13 09:06:57.851 248514 DEBUG nova.network.neutron [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Successfully created port: 1e0dc858-5ac1-402f-aa44-2ae372f17d6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:06:58 compute-0 nova_compute[248510]: 2025-12-13 09:06:58.398 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3176: 321 pgs: 321 active+clean; 135 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 562 KiB/s wr, 13 op/s
Dec 13 09:06:58 compute-0 ceph-mon[76537]: pgmap v3176: 321 pgs: 321 active+clean; 135 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 562 KiB/s wr, 13 op/s
Dec 13 09:06:58 compute-0 nova_compute[248510]: 2025-12-13 09:06:58.702 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:06:58 compute-0 nova_compute[248510]: 2025-12-13 09:06:58.758 248514 DEBUG nova.network.neutron [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Successfully updated port: 1e0dc858-5ac1-402f-aa44-2ae372f17d6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:06:58 compute-0 nova_compute[248510]: 2025-12-13 09:06:58.779 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:06:58 compute-0 nova_compute[248510]: 2025-12-13 09:06:58.780 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:06:58 compute-0 nova_compute[248510]: 2025-12-13 09:06:58.780 248514 DEBUG nova.network.neutron [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:06:58 compute-0 nova_compute[248510]: 2025-12-13 09:06:58.877 248514 DEBUG nova.compute.manager [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-changed-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:06:58 compute-0 nova_compute[248510]: 2025-12-13 09:06:58.878 248514 DEBUG nova.compute.manager [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Refreshing instance network info cache due to event network-changed-1e0dc858-5ac1-402f-aa44-2ae372f17d6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:06:58 compute-0 nova_compute[248510]: 2025-12-13 09:06:58.878 248514 DEBUG oslo_concurrency.lockutils [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:06:58 compute-0 nova_compute[248510]: 2025-12-13 09:06:58.971 248514 DEBUG nova.network.neutron [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:07:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3177: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:07:00 compute-0 ceph-mon[76537]: pgmap v3177: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.094 248514 DEBUG nova.network.neutron [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updating instance_info_cache with network_info: [{"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.118 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.118 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Instance network_info: |[{"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.119 248514 DEBUG oslo_concurrency.lockutils [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.119 248514 DEBUG nova.network.neutron [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Refreshing network info cache for port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.123 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Start _get_guest_xml network_info=[{"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:07:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.128 248514 WARNING nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.136 248514 DEBUG nova.virt.libvirt.host [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.137 248514 DEBUG nova.virt.libvirt.host [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.147 248514 DEBUG nova.virt.libvirt.host [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.147 248514 DEBUG nova.virt.libvirt.host [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.148 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.148 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.149 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.149 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.149 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.149 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.149 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.150 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.150 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.150 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.151 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.151 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.154 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:07:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:07:01 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351611217' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.754 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.789 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:07:01 compute-0 nova_compute[248510]: 2025-12-13 09:07:01.795 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:07:01 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2351611217' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:07:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:07:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2136825159' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.404 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.406 248514 DEBUG nova.virt.libvirt.vif [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:06:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-487738396',display_name='tempest-TestGettingAddress-server-487738396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-487738396',id=138,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMkENvafv8/WsEwcyvUB7dPhR5LIszSoSQPrlQUsJiVP5nV5qd8d4ARbreQ6b5yyI9G9/iw/kPcl2vcitjw0oexy5ltUfx15pAU69dzZBbn6bv/Uzo+ktqtZuKuE1Ehyg==',key_name='tempest-TestGettingAddress-384614847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-4nz1f07d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:06:56Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=9fc8d93d-978b-4964-b6ca-86b96580e92f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.407 248514 DEBUG nova.network.os_vif_util [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.408 248514 DEBUG nova.network.os_vif_util [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.409 248514 DEBUG nova.objects.instance [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fc8d93d-978b-4964-b6ca-86b96580e92f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.439 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:07:02 compute-0 nova_compute[248510]:   <uuid>9fc8d93d-978b-4964-b6ca-86b96580e92f</uuid>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   <name>instance-0000008a</name>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-487738396</nova:name>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:07:01</nova:creationTime>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:07:02 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:07:02 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:07:02 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:07:02 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:07:02 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:07:02 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:07:02 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:07:02 compute-0 nova_compute[248510]:         <nova:port uuid="1e0dc858-5ac1-402f-aa44-2ae372f17d6f">
Dec 13 09:07:02 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feda:8c61" ipVersion="6"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <system>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <entry name="serial">9fc8d93d-978b-4964-b6ca-86b96580e92f</entry>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <entry name="uuid">9fc8d93d-978b-4964-b6ca-86b96580e92f</entry>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     </system>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   <os>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   </os>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   <features>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   </features>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9fc8d93d-978b-4964-b6ca-86b96580e92f_disk">
Dec 13 09:07:02 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       </source>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:07:02 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/9fc8d93d-978b-4964-b6ca-86b96580e92f_disk.config">
Dec 13 09:07:02 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       </source>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:07:02 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:da:8c:61"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <target dev="tap1e0dc858-5a"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/console.log" append="off"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <video>
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     </video>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:07:02 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:07:02 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:07:02 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:07:02 compute-0 nova_compute[248510]: </domain>
Dec 13 09:07:02 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.441 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Preparing to wait for external event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.442 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.443 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.443 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.445 248514 DEBUG nova.virt.libvirt.vif [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:06:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-487738396',display_name='tempest-TestGettingAddress-server-487738396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-487738396',id=138,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMkENvafv8/WsEwcyvUB7dPhR5LIszSoSQPrlQUsJiVP5nV5qd8d4ARbreQ6b5yyI9G9/iw/kPcl2vcitjw0oexy5ltUfx15pAU69dzZBbn6bv/Uzo+ktqtZuKuE1Ehyg==',key_name='tempest-TestGettingAddress-384614847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-4nz1f07d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:06:56Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=9fc8d93d-978b-4964-b6ca-86b96580e92f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.445 248514 DEBUG nova.network.os_vif_util [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.447 248514 DEBUG nova.network.os_vif_util [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.447 248514 DEBUG os_vif [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.449 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.450 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.454 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.455 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e0dc858-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.456 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e0dc858-5a, col_values=(('external_ids', {'iface-id': '1e0dc858-5ac1-402f-aa44-2ae372f17d6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:8c:61', 'vm-uuid': '9fc8d93d-978b-4964-b6ca-86b96580e92f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:07:02 compute-0 NetworkManager[50376]: <info>  [1765616822.4599] manager: (tap1e0dc858-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/590)
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.458 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.470 248514 INFO os_vif [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a')
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.554 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.555 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.555 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:da:8c:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.556 248514 INFO nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Using config drive
Dec 13 09:07:02 compute-0 nova_compute[248510]: 2025-12-13 09:07:02.589 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:07:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3178: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:07:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2136825159' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:07:02 compute-0 ceph-mon[76537]: pgmap v3178: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:07:03 compute-0 nova_compute[248510]: 2025-12-13 09:07:03.203 248514 INFO nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Creating config drive at /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/disk.config
Dec 13 09:07:03 compute-0 nova_compute[248510]: 2025-12-13 09:07:03.213 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxytrjt5m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:07:03 compute-0 nova_compute[248510]: 2025-12-13 09:07:03.394 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxytrjt5m" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:07:03 compute-0 nova_compute[248510]: 2025-12-13 09:07:03.440 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:07:03 compute-0 nova_compute[248510]: 2025-12-13 09:07:03.446 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/disk.config 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:07:03 compute-0 nova_compute[248510]: 2025-12-13 09:07:03.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:03 compute-0 nova_compute[248510]: 2025-12-13 09:07:03.648 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/disk.config 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:07:03 compute-0 nova_compute[248510]: 2025-12-13 09:07:03.649 248514 INFO nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Deleting local config drive /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/disk.config because it was imported into RBD.
Dec 13 09:07:03 compute-0 kernel: tap1e0dc858-5a: entered promiscuous mode
Dec 13 09:07:03 compute-0 NetworkManager[50376]: <info>  [1765616823.7324] manager: (tap1e0dc858-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/591)
Dec 13 09:07:03 compute-0 ovn_controller[148476]: 2025-12-13T09:07:03Z|01427|binding|INFO|Claiming lport 1e0dc858-5ac1-402f-aa44-2ae372f17d6f for this chassis.
Dec 13 09:07:03 compute-0 ovn_controller[148476]: 2025-12-13T09:07:03Z|01428|binding|INFO|1e0dc858-5ac1-402f-aa44-2ae372f17d6f: Claiming fa:16:3e:da:8c:61 10.100.0.4 2001:db8::f816:3eff:feda:8c61
Dec 13 09:07:03 compute-0 nova_compute[248510]: 2025-12-13 09:07:03.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.744 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:8c:61 10.100.0.4 2001:db8::f816:3eff:feda:8c61'], port_security=['fa:16:3e:da:8c:61 10.100.0.4 2001:db8::f816:3eff:feda:8c61'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:feda:8c61/64', 'neutron:device_id': '9fc8d93d-978b-4964-b6ca-86b96580e92f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f3b9c4f-25eb-4912-b05c-6ea015f84c28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5b5848-41ef-422d-b206-10cd26f67092, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1e0dc858-5ac1-402f-aa44-2ae372f17d6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.747 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f in datapath 4134c529-684a-4aee-a450-f026f71bff55 bound to our chassis
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.749 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4134c529-684a-4aee-a450-f026f71bff55
Dec 13 09:07:03 compute-0 ovn_controller[148476]: 2025-12-13T09:07:03Z|01429|binding|INFO|Setting lport 1e0dc858-5ac1-402f-aa44-2ae372f17d6f up in Southbound
Dec 13 09:07:03 compute-0 ovn_controller[148476]: 2025-12-13T09:07:03Z|01430|binding|INFO|Setting lport 1e0dc858-5ac1-402f-aa44-2ae372f17d6f ovn-installed in OVS
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.770 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa043859-8e53-47a2-9291-08c9b9d1ebee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:03 compute-0 nova_compute[248510]: 2025-12-13 09:07:03.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:03 compute-0 nova_compute[248510]: 2025-12-13 09:07:03.778 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:03 compute-0 systemd-machined[210538]: New machine qemu-169-instance-0000008a.
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.805 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[38d1d25a-58bc-4a57-940f-7d43fe6765f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:03 compute-0 systemd[1]: Started Virtual Machine qemu-169-instance-0000008a.
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.809 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[661e2b41-de22-41c1-b9ad-c91dd9fd23d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:03 compute-0 systemd-udevd[388313]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.842 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f96c1f-7fd4-4df7-96c8-6736c3aff105]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:03 compute-0 NetworkManager[50376]: <info>  [1765616823.8467] device (tap1e0dc858-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:07:03 compute-0 NetworkManager[50376]: <info>  [1765616823.8486] device (tap1e0dc858-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.875 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da233a2c-616f-4873-a1a3-7933d1d4bc49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4134c529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:27:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 6, 'rx_bytes': 1586, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 6, 'rx_bytes': 1586, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936709, 'reachable_time': 43689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388319, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.905 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f39ee43b-25f3-4af2-846f-54f8690ae583]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4134c529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936725, 'tstamp': 936725}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388323, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4134c529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936729, 'tstamp': 936729}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388323, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.907 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4134c529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:07:03 compute-0 nova_compute[248510]: 2025-12-13 09:07:03.909 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.910 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4134c529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.911 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.911 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4134c529-60, col_values=(('external_ids', {'iface-id': '966dba06-5969-4f23-922d-d3fb06b4a741'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:07:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.911 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.174 248514 DEBUG nova.compute.manager [req-dc07fd6f-fca9-4225-86cb-d0f3e559cdfb req-34ef2e50-3f9f-433a-8ac5-cdf7382e68d9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.174 248514 DEBUG oslo_concurrency.lockutils [req-dc07fd6f-fca9-4225-86cb-d0f3e559cdfb req-34ef2e50-3f9f-433a-8ac5-cdf7382e68d9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.174 248514 DEBUG oslo_concurrency.lockutils [req-dc07fd6f-fca9-4225-86cb-d0f3e559cdfb req-34ef2e50-3f9f-433a-8ac5-cdf7382e68d9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.175 248514 DEBUG oslo_concurrency.lockutils [req-dc07fd6f-fca9-4225-86cb-d0f3e559cdfb req-34ef2e50-3f9f-433a-8ac5-cdf7382e68d9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.175 248514 DEBUG nova.compute.manager [req-dc07fd6f-fca9-4225-86cb-d0f3e559cdfb req-34ef2e50-3f9f-433a-8ac5-cdf7382e68d9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Processing event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.255 248514 DEBUG nova.network.neutron [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updated VIF entry in instance network info cache for port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.256 248514 DEBUG nova.network.neutron [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updating instance_info_cache with network_info: [{"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.276 248514 DEBUG oslo_concurrency.lockutils [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.371 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616824.3709996, 9fc8d93d-978b-4964-b6ca-86b96580e92f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.372 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] VM Started (Lifecycle Event)
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.376 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.381 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.387 248514 INFO nova.virt.libvirt.driver [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Instance spawned successfully.
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.388 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.395 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.400 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.417 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.418 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.419 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.420 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.421 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.422 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.428 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.429 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616824.3712752, 9fc8d93d-978b-4964-b6ca-86b96580e92f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.430 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] VM Paused (Lifecycle Event)
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.459 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.463 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616824.3798156, 9fc8d93d-978b-4964-b6ca-86b96580e92f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.464 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] VM Resumed (Lifecycle Event)
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.485 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.495 248514 INFO nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Took 8.23 seconds to spawn the instance on the hypervisor.
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.495 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.496 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.533 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.584 248514 INFO nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Took 9.39 seconds to build instance.
Dec 13 09:07:04 compute-0 nova_compute[248510]: 2025-12-13 09:07:04.604 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3179: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 13 09:07:04 compute-0 ceph-mon[76537]: pgmap v3179: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 13 09:07:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:07:06 compute-0 nova_compute[248510]: 2025-12-13 09:07:06.281 248514 DEBUG nova.compute.manager [req-217868cd-7125-42c1-aadf-9fce1ab91a93 req-e72f9509-51c8-44e1-9707-4b8536edf298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:07:06 compute-0 nova_compute[248510]: 2025-12-13 09:07:06.282 248514 DEBUG oslo_concurrency.lockutils [req-217868cd-7125-42c1-aadf-9fce1ab91a93 req-e72f9509-51c8-44e1-9707-4b8536edf298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:06 compute-0 nova_compute[248510]: 2025-12-13 09:07:06.282 248514 DEBUG oslo_concurrency.lockutils [req-217868cd-7125-42c1-aadf-9fce1ab91a93 req-e72f9509-51c8-44e1-9707-4b8536edf298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:06 compute-0 nova_compute[248510]: 2025-12-13 09:07:06.283 248514 DEBUG oslo_concurrency.lockutils [req-217868cd-7125-42c1-aadf-9fce1ab91a93 req-e72f9509-51c8-44e1-9707-4b8536edf298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:06 compute-0 nova_compute[248510]: 2025-12-13 09:07:06.283 248514 DEBUG nova.compute.manager [req-217868cd-7125-42c1-aadf-9fce1ab91a93 req-e72f9509-51c8-44e1-9707-4b8536edf298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] No waiting events found dispatching network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:07:06 compute-0 nova_compute[248510]: 2025-12-13 09:07:06.284 248514 WARNING nova.compute.manager [req-217868cd-7125-42c1-aadf-9fce1ab91a93 req-e72f9509-51c8-44e1-9707-4b8536edf298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received unexpected event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f for instance with vm_state active and task_state None.
Dec 13 09:07:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3180: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 13 09:07:06 compute-0 ceph-mon[76537]: pgmap v3180: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 13 09:07:07 compute-0 nova_compute[248510]: 2025-12-13 09:07:07.460 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:08 compute-0 nova_compute[248510]: 2025-12-13 09:07:08.403 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3181: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 595 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Dec 13 09:07:08 compute-0 ceph-mon[76537]: pgmap v3181: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 595 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Dec 13 09:07:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:07:09
Dec 13 09:07:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:07:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:07:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'backups', 'images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control']
Dec 13 09:07:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:07:10 compute-0 nova_compute[248510]: 2025-12-13 09:07:10.253 248514 DEBUG nova.compute.manager [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-changed-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:07:10 compute-0 nova_compute[248510]: 2025-12-13 09:07:10.254 248514 DEBUG nova.compute.manager [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Refreshing instance network info cache due to event network-changed-1e0dc858-5ac1-402f-aa44-2ae372f17d6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:07:10 compute-0 nova_compute[248510]: 2025-12-13 09:07:10.254 248514 DEBUG oslo_concurrency.lockutils [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:07:10 compute-0 nova_compute[248510]: 2025-12-13 09:07:10.255 248514 DEBUG oslo_concurrency.lockutils [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:07:10 compute-0 nova_compute[248510]: 2025-12-13 09:07:10.255 248514 DEBUG nova.network.neutron [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Refreshing network info cache for port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3182: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 89 op/s
Dec 13 09:07:10 compute-0 ceph-mon[76537]: pgmap v3182: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 89 op/s
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:07:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:07:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:07:11 compute-0 nova_compute[248510]: 2025-12-13 09:07:11.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:07:12 compute-0 nova_compute[248510]: 2025-12-13 09:07:12.000 248514 DEBUG nova.network.neutron [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updated VIF entry in instance network info cache for port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:07:12 compute-0 nova_compute[248510]: 2025-12-13 09:07:12.001 248514 DEBUG nova.network.neutron [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updating instance_info_cache with network_info: [{"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:07:12 compute-0 nova_compute[248510]: 2025-12-13 09:07:12.309 248514 DEBUG oslo_concurrency.lockutils [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:07:12 compute-0 nova_compute[248510]: 2025-12-13 09:07:12.462 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3183: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:07:12 compute-0 ceph-mon[76537]: pgmap v3183: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:07:13 compute-0 nova_compute[248510]: 2025-12-13 09:07:13.406 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3184: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 81 op/s
Dec 13 09:07:14 compute-0 ceph-mon[76537]: pgmap v3184: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 81 op/s
Dec 13 09:07:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:07:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1819600709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:07:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:07:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1819600709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:07:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1819600709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:07:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1819600709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:07:15 compute-0 nova_compute[248510]: 2025-12-13 09:07:15.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:07:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:07:16 compute-0 ovn_controller[148476]: 2025-12-13T09:07:16Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:8c:61 10.100.0.4
Dec 13 09:07:16 compute-0 ovn_controller[148476]: 2025-12-13T09:07:16Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:8c:61 10.100.0.4
Dec 13 09:07:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3185: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 77 op/s
Dec 13 09:07:16 compute-0 ceph-mon[76537]: pgmap v3185: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 77 op/s
Dec 13 09:07:17 compute-0 podman[388370]: 2025-12-13 09:07:17.027356565 +0000 UTC m=+0.095511316 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 13 09:07:17 compute-0 podman[388369]: 2025-12-13 09:07:17.030024484 +0000 UTC m=+0.109175599 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Dec 13 09:07:17 compute-0 podman[388368]: 2025-12-13 09:07:17.083437432 +0000 UTC m=+0.161315244 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:07:17 compute-0 nova_compute[248510]: 2025-12-13 09:07:17.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:18 compute-0 nova_compute[248510]: 2025-12-13 09:07:18.409 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3186: 321 pgs: 321 active+clean; 177 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 99 op/s
Dec 13 09:07:18 compute-0 ceph-mon[76537]: pgmap v3186: 321 pgs: 321 active+clean; 177 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 99 op/s
Dec 13 09:07:19 compute-0 nova_compute[248510]: 2025-12-13 09:07:19.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:07:19 compute-0 nova_compute[248510]: 2025-12-13 09:07:19.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:07:19 compute-0 nova_compute[248510]: 2025-12-13 09:07:19.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:07:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3187: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Dec 13 09:07:20 compute-0 ceph-mon[76537]: pgmap v3187: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Dec 13 09:07:20 compute-0 nova_compute[248510]: 2025-12-13 09:07:20.987 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:07:20 compute-0 nova_compute[248510]: 2025-12-13 09:07:20.988 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:07:20 compute-0 nova_compute[248510]: 2025-12-13 09:07:20.988 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 09:07:20 compute-0 nova_compute[248510]: 2025-12-13 09:07:20.988 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid e407f205-43a8-423e-a1cb-dc7f58ccced2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:07:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015304081295743975 of space, bias 1.0, pg target 0.45912243887231924 quantized to 32 (current 32)
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006698104578698684 of space, bias 1.0, pg target 0.2009431373609605 quantized to 32 (current 32)
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:07:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:07:22 compute-0 nova_compute[248510]: 2025-12-13 09:07:22.466 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3188: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 13 09:07:22 compute-0 ceph-mon[76537]: pgmap v3188: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 13 09:07:23 compute-0 nova_compute[248510]: 2025-12-13 09:07:23.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:23 compute-0 nova_compute[248510]: 2025-12-13 09:07:23.995 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updating instance_info_cache with network_info: [{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:07:24 compute-0 nova_compute[248510]: 2025-12-13 09:07:24.022 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:07:24 compute-0 nova_compute[248510]: 2025-12-13 09:07:24.023 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 09:07:24 compute-0 nova_compute[248510]: 2025-12-13 09:07:24.024 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:07:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3189: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 09:07:24 compute-0 ceph-mon[76537]: pgmap v3189: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 09:07:24 compute-0 nova_compute[248510]: 2025-12-13 09:07:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:07:25 compute-0 nova_compute[248510]: 2025-12-13 09:07:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:07:25 compute-0 nova_compute[248510]: 2025-12-13 09:07:25.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:25 compute-0 nova_compute[248510]: 2025-12-13 09:07:25.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:25 compute-0 nova_compute[248510]: 2025-12-13 09:07:25.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:25 compute-0 nova_compute[248510]: 2025-12-13 09:07:25.804 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:07:25 compute-0 nova_compute[248510]: 2025-12-13 09:07:25.805 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:07:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:07:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:07:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201304418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.470 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:07:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1201304418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.565 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.566 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.573 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.573 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:07:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3190: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.795 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.798 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3075MB free_disk=59.89634881168604GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.799 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.884 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance e407f205-43a8-423e-a1cb-dc7f58ccced2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.884 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9fc8d93d-978b-4964-b6ca-86b96580e92f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.884 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.885 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.902 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.923 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.923 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.938 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 09:07:26 compute-0 nova_compute[248510]: 2025-12-13 09:07:26.962 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 09:07:27 compute-0 nova_compute[248510]: 2025-12-13 09:07:27.054 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:07:27 compute-0 nova_compute[248510]: 2025-12-13 09:07:27.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:27 compute-0 ceph-mon[76537]: pgmap v3190: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.522894) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616847522937, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1457, "num_deletes": 251, "total_data_size": 2370995, "memory_usage": 2406000, "flush_reason": "Manual Compaction"}
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616847537246, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 2306066, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62194, "largest_seqno": 63650, "table_properties": {"data_size": 2299336, "index_size": 3864, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14237, "raw_average_key_size": 19, "raw_value_size": 2285799, "raw_average_value_size": 3201, "num_data_blocks": 172, "num_entries": 714, "num_filter_entries": 714, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616701, "oldest_key_time": 1765616701, "file_creation_time": 1765616847, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 14504 microseconds, and 6816 cpu microseconds.
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.537392) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 2306066 bytes OK
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.537450) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.538899) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.538917) EVENT_LOG_v1 {"time_micros": 1765616847538912, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.538940) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2364577, prev total WAL file size 2364577, number of live WAL files 2.
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.540004) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(2252KB)], [146(11MB)]
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616847540099, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13941897, "oldest_snapshot_seqno": -1}
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8413 keys, 12098620 bytes, temperature: kUnknown
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616847634527, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 12098620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12042400, "index_size": 34091, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21061, "raw_key_size": 220161, "raw_average_key_size": 26, "raw_value_size": 11892291, "raw_average_value_size": 1413, "num_data_blocks": 1326, "num_entries": 8413, "num_filter_entries": 8413, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616847, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.634870) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 12098620 bytes
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.638348) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.4 rd, 127.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 11.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(11.3) write-amplify(5.2) OK, records in: 8927, records dropped: 514 output_compression: NoCompression
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.638411) EVENT_LOG_v1 {"time_micros": 1765616847638386, "job": 90, "event": "compaction_finished", "compaction_time_micros": 94581, "compaction_time_cpu_micros": 37108, "output_level": 6, "num_output_files": 1, "total_output_size": 12098620, "num_input_records": 8927, "num_output_records": 8413, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616847639502, "job": 90, "event": "table_file_deletion", "file_number": 148}
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616847643120, "job": 90, "event": "table_file_deletion", "file_number": 146}
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.539947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.643235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.643240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.643242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.643245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:07:27 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.643247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:07:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:07:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763754987' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:07:27 compute-0 nova_compute[248510]: 2025-12-13 09:07:27.737 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:07:27 compute-0 nova_compute[248510]: 2025-12-13 09:07:27.746 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.037 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.066 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.067 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.116 248514 DEBUG nova.compute.manager [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-changed-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.117 248514 DEBUG nova.compute.manager [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Refreshing instance network info cache due to event network-changed-1e0dc858-5ac1-402f-aa44-2ae372f17d6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.118 248514 DEBUG oslo_concurrency.lockutils [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.118 248514 DEBUG oslo_concurrency.lockutils [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.119 248514 DEBUG nova.network.neutron [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Refreshing network info cache for port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.245 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.246 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.246 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.247 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.247 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.250 248514 INFO nova.compute.manager [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Terminating instance
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.253 248514 DEBUG nova.compute.manager [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:07:28 compute-0 kernel: tap1e0dc858-5a (unregistering): left promiscuous mode
Dec 13 09:07:28 compute-0 NetworkManager[50376]: <info>  [1765616848.3028] device (tap1e0dc858-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:07:28 compute-0 ovn_controller[148476]: 2025-12-13T09:07:28Z|01431|binding|INFO|Releasing lport 1e0dc858-5ac1-402f-aa44-2ae372f17d6f from this chassis (sb_readonly=0)
Dec 13 09:07:28 compute-0 ovn_controller[148476]: 2025-12-13T09:07:28Z|01432|binding|INFO|Setting lport 1e0dc858-5ac1-402f-aa44-2ae372f17d6f down in Southbound
Dec 13 09:07:28 compute-0 ovn_controller[148476]: 2025-12-13T09:07:28Z|01433|binding|INFO|Removing iface tap1e0dc858-5a ovn-installed in OVS
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.324 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:8c:61 10.100.0.4 2001:db8::f816:3eff:feda:8c61'], port_security=['fa:16:3e:da:8c:61 10.100.0.4 2001:db8::f816:3eff:feda:8c61'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:feda:8c61/64', 'neutron:device_id': '9fc8d93d-978b-4964-b6ca-86b96580e92f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f3b9c4f-25eb-4912-b05c-6ea015f84c28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5b5848-41ef-422d-b206-10cd26f67092, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1e0dc858-5ac1-402f-aa44-2ae372f17d6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.326 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f in datapath 4134c529-684a-4aee-a450-f026f71bff55 unbound from our chassis
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.328 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4134c529-684a-4aee-a450-f026f71bff55
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.343 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.357 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3041bbc1-4d9a-4df4-811f-482af257c65d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:28 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Dec 13 09:07:28 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d0000008a.scope: Consumed 12.846s CPU time.
Dec 13 09:07:28 compute-0 systemd-machined[210538]: Machine qemu-169-instance-0000008a terminated.
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.405 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3442a46a-23e3-4499-9b67-3fbef7643c34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.410 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2597d2ee-1db2-4298-b9d8-57f2548d2c55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.457 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb5b160-9a74-4740-a63a-0cc797923538]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.515 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.517 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7f1ead-070c-4038-ae4f-16ad172fe75a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4134c529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:27:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 8, 'rx_bytes': 2640, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 8, 'rx_bytes': 2640, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936709, 'reachable_time': 43689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388487, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.520 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2763754987' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.529 248514 INFO nova.virt.libvirt.driver [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Instance destroyed successfully.
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.529 248514 DEBUG nova.objects.instance [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 9fc8d93d-978b-4964-b6ca-86b96580e92f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.537 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f08aa9d-f939-4f04-a26f-33a097937005]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4134c529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936725, 'tstamp': 936725}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388494, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4134c529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936729, 'tstamp': 936729}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388494, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.539 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4134c529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.546 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4134c529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:07:28 compute-0 nova_compute[248510]: 2025-12-13 09:07:28.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.546 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.547 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4134c529-60, col_values=(('external_ids', {'iface-id': '966dba06-5969-4f23-922d-d3fb06b4a741'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:07:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.547 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:07:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3191: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Dec 13 09:07:28 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.000 248514 DEBUG nova.virt.libvirt.vif [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:06:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-487738396',display_name='tempest-TestGettingAddress-server-487738396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-487738396',id=138,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMkENvafv8/WsEwcyvUB7dPhR5LIszSoSQPrlQUsJiVP5nV5qd8d4ARbreQ6b5yyI9G9/iw/kPcl2vcitjw0oexy5ltUfx15pAU69dzZBbn6bv/Uzo+ktqtZuKuE1Ehyg==',key_name='tempest-TestGettingAddress-384614847',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:07:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-4nz1f07d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:07:04Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=9fc8d93d-978b-4964-b6ca-86b96580e92f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.001 248514 DEBUG nova.network.os_vif_util [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.003 248514 DEBUG nova.network.os_vif_util [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.004 248514 DEBUG os_vif [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.009 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e0dc858-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.015 248514 DEBUG nova.compute.manager [req-6fc5847b-f9a0-4201-8210-a3d8737fa374 req-c5e5c027-3d2c-4e7b-b0c7-32d54f2c29ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-vif-unplugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.016 248514 DEBUG oslo_concurrency.lockutils [req-6fc5847b-f9a0-4201-8210-a3d8737fa374 req-c5e5c027-3d2c-4e7b-b0c7-32d54f2c29ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.016 248514 DEBUG oslo_concurrency.lockutils [req-6fc5847b-f9a0-4201-8210-a3d8737fa374 req-c5e5c027-3d2c-4e7b-b0c7-32d54f2c29ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.017 248514 DEBUG oslo_concurrency.lockutils [req-6fc5847b-f9a0-4201-8210-a3d8737fa374 req-c5e5c027-3d2c-4e7b-b0c7-32d54f2c29ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.017 248514 DEBUG nova.compute.manager [req-6fc5847b-f9a0-4201-8210-a3d8737fa374 req-c5e5c027-3d2c-4e7b-b0c7-32d54f2c29ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] No waiting events found dispatching network-vif-unplugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.017 248514 DEBUG nova.compute.manager [req-6fc5847b-f9a0-4201-8210-a3d8737fa374 req-c5e5c027-3d2c-4e7b-b0c7-32d54f2c29ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-vif-unplugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.019 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.021 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.024 248514 INFO os_vif [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a')
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.364 248514 INFO nova.virt.libvirt.driver [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Deleting instance files /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f_del
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.365 248514 INFO nova.virt.libvirt.driver [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Deletion of /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f_del complete
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.424 248514 INFO nova.compute.manager [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Took 1.17 seconds to destroy the instance on the hypervisor.
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.425 248514 DEBUG oslo.service.loopingcall [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.425 248514 DEBUG nova.compute.manager [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:07:29 compute-0 nova_compute[248510]: 2025-12-13 09:07:29.425 248514 DEBUG nova.network.neutron [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:07:29 compute-0 ceph-mon[76537]: pgmap v3191: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Dec 13 09:07:30 compute-0 nova_compute[248510]: 2025-12-13 09:07:30.069 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:07:30 compute-0 nova_compute[248510]: 2025-12-13 09:07:30.070 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:07:30 compute-0 nova_compute[248510]: 2025-12-13 09:07:30.232 248514 DEBUG nova.network.neutron [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updated VIF entry in instance network info cache for port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:07:30 compute-0 nova_compute[248510]: 2025-12-13 09:07:30.233 248514 DEBUG nova.network.neutron [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updating instance_info_cache with network_info: [{"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:07:30 compute-0 nova_compute[248510]: 2025-12-13 09:07:30.260 248514 DEBUG oslo_concurrency.lockutils [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:07:30 compute-0 nova_compute[248510]: 2025-12-13 09:07:30.348 248514 DEBUG nova.network.neutron [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:07:30 compute-0 nova_compute[248510]: 2025-12-13 09:07:30.366 248514 INFO nova.compute.manager [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Took 0.94 seconds to deallocate network for instance.
Dec 13 09:07:30 compute-0 nova_compute[248510]: 2025-12-13 09:07:30.431 248514 DEBUG nova.compute.manager [req-83d3e230-36be-447d-b144-ff07d5e400f0 req-aaabe7b0-9e75-4480-9c28-1b5a504e0018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-vif-deleted-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:07:30 compute-0 nova_compute[248510]: 2025-12-13 09:07:30.434 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:30 compute-0 nova_compute[248510]: 2025-12-13 09:07:30.435 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:30 compute-0 nova_compute[248510]: 2025-12-13 09:07:30.503 248514 DEBUG oslo_concurrency.processutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:07:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3192: 321 pgs: 321 active+clean; 166 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 1.1 MiB/s wr, 44 op/s
Dec 13 09:07:30 compute-0 nova_compute[248510]: 2025-12-13 09:07:30.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:07:30 compute-0 ceph-mon[76537]: pgmap v3192: 321 pgs: 321 active+clean; 166 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 1.1 MiB/s wr, 44 op/s
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.122 248514 DEBUG nova.compute.manager [req-060cf8b9-9b20-4829-b1e0-aba2928634dd req-624fc934-ef6f-44a6-ae32-e3238f71d8f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.123 248514 DEBUG oslo_concurrency.lockutils [req-060cf8b9-9b20-4829-b1e0-aba2928634dd req-624fc934-ef6f-44a6-ae32-e3238f71d8f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.123 248514 DEBUG oslo_concurrency.lockutils [req-060cf8b9-9b20-4829-b1e0-aba2928634dd req-624fc934-ef6f-44a6-ae32-e3238f71d8f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.123 248514 DEBUG oslo_concurrency.lockutils [req-060cf8b9-9b20-4829-b1e0-aba2928634dd req-624fc934-ef6f-44a6-ae32-e3238f71d8f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.124 248514 DEBUG nova.compute.manager [req-060cf8b9-9b20-4829-b1e0-aba2928634dd req-624fc934-ef6f-44a6-ae32-e3238f71d8f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] No waiting events found dispatching network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.124 248514 WARNING nova.compute.manager [req-060cf8b9-9b20-4829-b1e0-aba2928634dd req-624fc934-ef6f-44a6-ae32-e3238f71d8f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received unexpected event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f for instance with vm_state deleted and task_state None.
Dec 13 09:07:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:07:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:07:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796815517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.248 248514 DEBUG oslo_concurrency.processutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.259 248514 DEBUG nova.compute.provider_tree [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.279 248514 DEBUG nova.scheduler.client.report [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.304 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.340 248514 INFO nova.scheduler.client.report [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 9fc8d93d-978b-4964-b6ca-86b96580e92f
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.409 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:31 compute-0 nova_compute[248510]: 2025-12-13 09:07:31.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:07:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3796815517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.597 248514 DEBUG nova.compute.manager [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-changed-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.597 248514 DEBUG nova.compute.manager [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Refreshing instance network info cache due to event network-changed-cccd2f42-71c6-4464-b04e-1c3e885a4378. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.598 248514 DEBUG oslo_concurrency.lockutils [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.598 248514 DEBUG oslo_concurrency.lockutils [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.599 248514 DEBUG nova.network.neutron [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Refreshing network info cache for port cccd2f42-71c6-4464-b04e-1c3e885a4378 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:07:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3193: 321 pgs: 321 active+clean; 166 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 24 KiB/s wr, 12 op/s
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.676 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.677 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.679 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.680 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.680 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.682 248514 INFO nova.compute.manager [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Terminating instance
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.685 248514 DEBUG nova.compute.manager [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:07:32 compute-0 kernel: tapcccd2f42-71 (unregistering): left promiscuous mode
Dec 13 09:07:32 compute-0 NetworkManager[50376]: <info>  [1765616852.7512] device (tapcccd2f42-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:07:32 compute-0 ovn_controller[148476]: 2025-12-13T09:07:32Z|01434|binding|INFO|Releasing lport cccd2f42-71c6-4464-b04e-1c3e885a4378 from this chassis (sb_readonly=0)
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:32 compute-0 ovn_controller[148476]: 2025-12-13T09:07:32Z|01435|binding|INFO|Setting lport cccd2f42-71c6-4464-b04e-1c3e885a4378 down in Southbound
Dec 13 09:07:32 compute-0 ovn_controller[148476]: 2025-12-13T09:07:32Z|01436|binding|INFO|Removing iface tapcccd2f42-71 ovn-installed in OVS
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.791 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:32 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000089.scope: Deactivated successfully.
Dec 13 09:07:32 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000089.scope: Consumed 15.478s CPU time.
Dec 13 09:07:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:32.843 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:df:a0 10.100.0.3 2001:db8::f816:3eff:fec6:dfa0'], port_security=['fa:16:3e:c6:df:a0 10.100.0.3 2001:db8::f816:3eff:fec6:dfa0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fec6:dfa0/64', 'neutron:device_id': 'e407f205-43a8-423e-a1cb-dc7f58ccced2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f3b9c4f-25eb-4912-b05c-6ea015f84c28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5b5848-41ef-422d-b206-10cd26f67092, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=cccd2f42-71c6-4464-b04e-1c3e885a4378) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:07:32 compute-0 systemd-machined[210538]: Machine qemu-168-instance-00000089 terminated.
Dec 13 09:07:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:32.846 158419 INFO neutron.agent.ovn.metadata.agent [-] Port cccd2f42-71c6-4464-b04e-1c3e885a4378 in datapath 4134c529-684a-4aee-a450-f026f71bff55 unbound from our chassis
Dec 13 09:07:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:32.851 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4134c529-684a-4aee-a450-f026f71bff55, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:07:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:32.852 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff56cce-b35f-435a-a92c-5e0afc79e2e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:32.853 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4134c529-684a-4aee-a450-f026f71bff55 namespace which is not needed anymore
Dec 13 09:07:32 compute-0 ceph-mon[76537]: pgmap v3193: 321 pgs: 321 active+clean; 166 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 24 KiB/s wr, 12 op/s
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.940 248514 INFO nova.virt.libvirt.driver [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Instance destroyed successfully.
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.941 248514 DEBUG nova.objects.instance [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid e407f205-43a8-423e-a1cb-dc7f58ccced2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.964 248514 DEBUG nova.virt.libvirt.vif [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:06:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-906384999',display_name='tempest-TestGettingAddress-server-906384999',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-906384999',id=137,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMkENvafv8/WsEwcyvUB7dPhR5LIszSoSQPrlQUsJiVP5nV5qd8d4ARbreQ6b5yyI9G9/iw/kPcl2vcitjw0oexy5ltUfx15pAU69dzZBbn6bv/Uzo+ktqtZuKuE1Ehyg==',key_name='tempest-TestGettingAddress-384614847',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:06:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ozf85u1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:06:30Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=e407f205-43a8-423e-a1cb-dc7f58ccced2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.965 248514 DEBUG nova.network.os_vif_util [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.967 248514 DEBUG nova.network.os_vif_util [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.968 248514 DEBUG os_vif [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.972 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.973 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcccd2f42-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.976 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.978 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:07:32 compute-0 nova_compute[248510]: 2025-12-13 09:07:32.981 248514 INFO os_vif [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71')
Dec 13 09:07:33 compute-0 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [NOTICE]   (387224) : haproxy version is 2.8.14-c23fe91
Dec 13 09:07:33 compute-0 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [NOTICE]   (387224) : path to executable is /usr/sbin/haproxy
Dec 13 09:07:33 compute-0 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [WARNING]  (387224) : Exiting Master process...
Dec 13 09:07:33 compute-0 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [WARNING]  (387224) : Exiting Master process...
Dec 13 09:07:33 compute-0 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [ALERT]    (387224) : Current worker (387226) exited with code 143 (Terminated)
Dec 13 09:07:33 compute-0 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [WARNING]  (387224) : All workers exited. Exiting... (0)
Dec 13 09:07:33 compute-0 systemd[1]: libpod-b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e.scope: Deactivated successfully.
Dec 13 09:07:33 compute-0 podman[388577]: 2025-12-13 09:07:33.073466331 +0000 UTC m=+0.080048977 container died b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:07:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e-userdata-shm.mount: Deactivated successfully.
Dec 13 09:07:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6629d0d2946616c16ab6364a7afe8e36321cd71c229fca3a1701a5a9a8e1eeb-merged.mount: Deactivated successfully.
Dec 13 09:07:33 compute-0 podman[388577]: 2025-12-13 09:07:33.140522362 +0000 UTC m=+0.147105038 container cleanup b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 09:07:33 compute-0 systemd[1]: libpod-conmon-b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e.scope: Deactivated successfully.
Dec 13 09:07:33 compute-0 podman[388628]: 2025-12-13 09:07:33.230658609 +0000 UTC m=+0.053073601 container remove b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:07:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.241 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[afc661d5-19eb-4dea-88f9-9189ebdfa26f]: (4, ('Sat Dec 13 09:07:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55 (b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e)\nb0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e\nSat Dec 13 09:07:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55 (b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e)\nb0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.243 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9080573-07cd-42ce-946c-f1e576798697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.245 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4134c529-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:07:33 compute-0 nova_compute[248510]: 2025-12-13 09:07:33.247 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:33 compute-0 kernel: tap4134c529-60: left promiscuous mode
Dec 13 09:07:33 compute-0 nova_compute[248510]: 2025-12-13 09:07:33.275 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.278 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[826255ba-3a07-41ba-b88b-dc368ea3ba4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.295 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6df9ad91-3b8f-4e92-bd07-525f3312a539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:33 compute-0 nova_compute[248510]: 2025-12-13 09:07:33.297 248514 INFO nova.virt.libvirt.driver [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Deleting instance files /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2_del
Dec 13 09:07:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.297 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[899e4006-8dd9-4088-b630-9ced67ebf61b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:33 compute-0 nova_compute[248510]: 2025-12-13 09:07:33.297 248514 INFO nova.virt.libvirt.driver [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Deletion of /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2_del complete
Dec 13 09:07:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.316 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b26158-7f2f-4d85-8744-5b7aee6e3513]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936700, 'reachable_time': 28385, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388643, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.321 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4134c529-684a-4aee-a450-f026f71bff55 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:07:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d4134c529\x2d684a\x2d4aee\x2da450\x2df026f71bff55.mount: Deactivated successfully.
Dec 13 09:07:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.322 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d755843d-05b3-4856-9f38-99847323e4a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:07:33 compute-0 nova_compute[248510]: 2025-12-13 09:07:33.472 248514 INFO nova.compute.manager [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Took 0.79 seconds to destroy the instance on the hypervisor.
Dec 13 09:07:33 compute-0 nova_compute[248510]: 2025-12-13 09:07:33.473 248514 DEBUG oslo.service.loopingcall [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:07:33 compute-0 nova_compute[248510]: 2025-12-13 09:07:33.473 248514 DEBUG nova.compute.manager [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:07:33 compute-0 nova_compute[248510]: 2025-12-13 09:07:33.473 248514 DEBUG nova.network.neutron [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:07:33 compute-0 nova_compute[248510]: 2025-12-13 09:07:33.496 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3194: 321 pgs: 321 active+clean; 69 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 28 KiB/s wr, 37 op/s
Dec 13 09:07:34 compute-0 ceph-mon[76537]: pgmap v3194: 321 pgs: 321 active+clean; 69 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 28 KiB/s wr, 37 op/s
Dec 13 09:07:34 compute-0 nova_compute[248510]: 2025-12-13 09:07:34.834 248514 DEBUG nova.compute.manager [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-vif-unplugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:07:34 compute-0 nova_compute[248510]: 2025-12-13 09:07:34.835 248514 DEBUG oslo_concurrency.lockutils [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:34 compute-0 nova_compute[248510]: 2025-12-13 09:07:34.835 248514 DEBUG oslo_concurrency.lockutils [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:34 compute-0 nova_compute[248510]: 2025-12-13 09:07:34.835 248514 DEBUG oslo_concurrency.lockutils [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:34 compute-0 nova_compute[248510]: 2025-12-13 09:07:34.836 248514 DEBUG nova.compute.manager [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] No waiting events found dispatching network-vif-unplugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:07:34 compute-0 nova_compute[248510]: 2025-12-13 09:07:34.836 248514 DEBUG nova.compute.manager [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-vif-unplugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:07:34 compute-0 nova_compute[248510]: 2025-12-13 09:07:34.836 248514 DEBUG nova.compute.manager [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:07:34 compute-0 nova_compute[248510]: 2025-12-13 09:07:34.837 248514 DEBUG oslo_concurrency.lockutils [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:34 compute-0 nova_compute[248510]: 2025-12-13 09:07:34.837 248514 DEBUG oslo_concurrency.lockutils [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:34 compute-0 nova_compute[248510]: 2025-12-13 09:07:34.838 248514 DEBUG oslo_concurrency.lockutils [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:34 compute-0 nova_compute[248510]: 2025-12-13 09:07:34.838 248514 DEBUG nova.compute.manager [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] No waiting events found dispatching network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:07:34 compute-0 nova_compute[248510]: 2025-12-13 09:07:34.838 248514 WARNING nova.compute.manager [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received unexpected event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 for instance with vm_state active and task_state deleting.
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.108 248514 DEBUG nova.network.neutron [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.134 248514 INFO nova.compute.manager [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Took 1.66 seconds to deallocate network for instance.
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.240 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.240 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.287 248514 DEBUG oslo_concurrency.processutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.378 248514 DEBUG nova.compute.manager [req-98b56e73-beb3-4a37-9ccf-e91fe2bb215a req-ce866b64-0eee-4411-9314-f2f57f46165f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-vif-deleted-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.688 248514 DEBUG nova.network.neutron [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updated VIF entry in instance network info cache for port cccd2f42-71c6-4464-b04e-1c3e885a4378. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.689 248514 DEBUG nova.network.neutron [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updating instance_info_cache with network_info: [{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.711 248514 DEBUG oslo_concurrency.lockutils [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:07:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:07:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344881787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.902 248514 DEBUG oslo_concurrency.processutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.908 248514 DEBUG nova.compute.provider_tree [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.925 248514 DEBUG nova.scheduler.client.report [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:07:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2344881787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.947 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:35 compute-0 nova_compute[248510]: 2025-12-13 09:07:35.974 248514 INFO nova.scheduler.client.report [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance e407f205-43a8-423e-a1cb-dc7f58ccced2
Dec 13 09:07:36 compute-0 nova_compute[248510]: 2025-12-13 09:07:36.058 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.141519) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616856141559, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 332, "num_deletes": 250, "total_data_size": 137266, "memory_usage": 143264, "flush_reason": "Manual Compaction"}
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616856144845, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 134585, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63651, "largest_seqno": 63982, "table_properties": {"data_size": 132513, "index_size": 235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5797, "raw_average_key_size": 20, "raw_value_size": 128431, "raw_average_value_size": 447, "num_data_blocks": 11, "num_entries": 287, "num_filter_entries": 287, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616848, "oldest_key_time": 1765616848, "file_creation_time": 1765616856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 3376 microseconds, and 1593 cpu microseconds.
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.144890) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 134585 bytes OK
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.144915) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.146362) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.146380) EVENT_LOG_v1 {"time_micros": 1765616856146374, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.146402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 134980, prev total WAL file size 134980, number of live WAL files 2.
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.146767) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353131' seq:72057594037927935, type:22 .. '6D6772737461740032373632' seq:0, type:0; will stop at (end)
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(131KB)], [149(11MB)]
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616856146810, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12233205, "oldest_snapshot_seqno": -1}
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8193 keys, 8785127 bytes, temperature: kUnknown
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616856209168, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 8785127, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8735161, "index_size": 28349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 215773, "raw_average_key_size": 26, "raw_value_size": 8593707, "raw_average_value_size": 1048, "num_data_blocks": 1086, "num_entries": 8193, "num_filter_entries": 8193, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.209557) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 8785127 bytes
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.211017) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.7 rd, 140.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.5 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(156.2) write-amplify(65.3) OK, records in: 8700, records dropped: 507 output_compression: NoCompression
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.211037) EVENT_LOG_v1 {"time_micros": 1765616856211027, "job": 92, "event": "compaction_finished", "compaction_time_micros": 62495, "compaction_time_cpu_micros": 31445, "output_level": 6, "num_output_files": 1, "total_output_size": 8785127, "num_input_records": 8700, "num_output_records": 8193, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616856211230, "job": 92, "event": "table_file_deletion", "file_number": 151}
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616856213818, "job": 92, "event": "table_file_deletion", "file_number": 149}
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.146709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.213934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.213944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.213947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.213949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:07:36 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.213952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:07:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3195: 321 pgs: 321 active+clean; 69 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 17 KiB/s wr, 37 op/s
Dec 13 09:07:37 compute-0 ceph-mon[76537]: pgmap v3195: 321 pgs: 321 active+clean; 69 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 17 KiB/s wr, 37 op/s
Dec 13 09:07:37 compute-0 nova_compute[248510]: 2025-12-13 09:07:37.977 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:38 compute-0 nova_compute[248510]: 2025-12-13 09:07:38.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3196: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 18 KiB/s wr, 58 op/s
Dec 13 09:07:38 compute-0 ceph-mon[76537]: pgmap v3196: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 18 KiB/s wr, 58 op/s
Dec 13 09:07:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:07:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:07:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:07:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:07:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:07:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:07:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3197: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 17 KiB/s wr, 57 op/s
Dec 13 09:07:40 compute-0 ceph-mon[76537]: pgmap v3197: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 17 KiB/s wr, 57 op/s
Dec 13 09:07:41 compute-0 nova_compute[248510]: 2025-12-13 09:07:41.106 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:07:41 compute-0 nova_compute[248510]: 2025-12-13 09:07:41.178 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3198: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.7 KiB/s wr, 45 op/s
Dec 13 09:07:42 compute-0 ceph-mon[76537]: pgmap v3198: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.7 KiB/s wr, 45 op/s
Dec 13 09:07:42 compute-0 nova_compute[248510]: 2025-12-13 09:07:42.979 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:43 compute-0 nova_compute[248510]: 2025-12-13 09:07:43.527 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616848.5262506, 9fc8d93d-978b-4964-b6ca-86b96580e92f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:07:43 compute-0 nova_compute[248510]: 2025-12-13 09:07:43.528 248514 INFO nova.compute.manager [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] VM Stopped (Lifecycle Event)
Dec 13 09:07:43 compute-0 nova_compute[248510]: 2025-12-13 09:07:43.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:43 compute-0 nova_compute[248510]: 2025-12-13 09:07:43.557 248514 DEBUG nova.compute.manager [None req-4560b96e-d278-4200-85af-097f7dcaaa57 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:07:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3199: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.7 KiB/s wr, 45 op/s
Dec 13 09:07:44 compute-0 ceph-mon[76537]: pgmap v3199: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.7 KiB/s wr, 45 op/s
Dec 13 09:07:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:07:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3200: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 852 B/s wr, 20 op/s
Dec 13 09:07:46 compute-0 ceph-mon[76537]: pgmap v3200: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 852 B/s wr, 20 op/s
Dec 13 09:07:47 compute-0 nova_compute[248510]: 2025-12-13 09:07:47.936 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616852.9349854, e407f205-43a8-423e-a1cb-dc7f58ccced2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:07:47 compute-0 nova_compute[248510]: 2025-12-13 09:07:47.937 248514 INFO nova.compute.manager [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] VM Stopped (Lifecycle Event)
Dec 13 09:07:47 compute-0 nova_compute[248510]: 2025-12-13 09:07:47.968 248514 DEBUG nova.compute.manager [None req-d6406821-3854-4722-b152-e94a830fef12 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:07:47 compute-0 nova_compute[248510]: 2025-12-13 09:07:47.981 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:47 compute-0 podman[388669]: 2025-12-13 09:07:47.986865853 +0000 UTC m=+0.064061454 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:07:48 compute-0 podman[388668]: 2025-12-13 09:07:48.007917327 +0000 UTC m=+0.086356650 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:07:48 compute-0 podman[388667]: 2025-12-13 09:07:48.022449782 +0000 UTC m=+0.109722433 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:07:48 compute-0 nova_compute[248510]: 2025-12-13 09:07:48.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3201: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 852 B/s wr, 20 op/s
Dec 13 09:07:48 compute-0 ceph-mon[76537]: pgmap v3201: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 852 B/s wr, 20 op/s
Dec 13 09:07:50 compute-0 sudo[388731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:07:50 compute-0 sudo[388731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:07:50 compute-0 sudo[388731]: pam_unix(sudo:session): session closed for user root
Dec 13 09:07:50 compute-0 sudo[388756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:07:50 compute-0 sudo[388756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:07:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3202: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:07:50 compute-0 ceph-mon[76537]: pgmap v3202: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:07:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:07:51 compute-0 sudo[388756]: pam_unix(sudo:session): session closed for user root
Dec 13 09:07:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 09:07:51 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 09:07:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:07:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:07:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:07:51 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:07:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:07:51 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:07:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:07:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:07:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:07:51 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:07:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:07:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:07:51 compute-0 sudo[388812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:07:51 compute-0 sudo[388812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:07:51 compute-0 sudo[388812]: pam_unix(sudo:session): session closed for user root
Dec 13 09:07:51 compute-0 sudo[388837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:07:51 compute-0 sudo[388837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:07:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 09:07:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:07:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:07:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:07:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:07:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:07:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:07:51 compute-0 podman[388874]: 2025-12-13 09:07:51.72465311 +0000 UTC m=+0.073462407 container create 298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 09:07:51 compute-0 systemd[1]: Started libpod-conmon-298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b.scope.
Dec 13 09:07:51 compute-0 podman[388874]: 2025-12-13 09:07:51.690420056 +0000 UTC m=+0.039229403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:07:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:07:51 compute-0 podman[388874]: 2025-12-13 09:07:51.843024885 +0000 UTC m=+0.191834212 container init 298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 09:07:51 compute-0 podman[388874]: 2025-12-13 09:07:51.855468357 +0000 UTC m=+0.204277604 container start 298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shaw, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 09:07:51 compute-0 podman[388874]: 2025-12-13 09:07:51.85946406 +0000 UTC m=+0.208273377 container attach 298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shaw, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 09:07:51 compute-0 elated_shaw[388891]: 167 167
Dec 13 09:07:51 compute-0 systemd[1]: libpod-298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b.scope: Deactivated successfully.
Dec 13 09:07:51 compute-0 podman[388874]: 2025-12-13 09:07:51.865394423 +0000 UTC m=+0.214203670 container died 298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shaw, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 09:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b4300bd6ecbd447897c20e829212ba39085bc670b5c11204b816f6745600c9-merged.mount: Deactivated successfully.
Dec 13 09:07:51 compute-0 podman[388874]: 2025-12-13 09:07:51.916900992 +0000 UTC m=+0.265710229 container remove 298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shaw, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:07:51 compute-0 systemd[1]: libpod-conmon-298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b.scope: Deactivated successfully.
Dec 13 09:07:52 compute-0 podman[388914]: 2025-12-13 09:07:52.152937795 +0000 UTC m=+0.074011862 container create 2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mirzakhani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 09:07:52 compute-0 systemd[1]: Started libpod-conmon-2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5.scope.
Dec 13 09:07:52 compute-0 podman[388914]: 2025-12-13 09:07:52.120685692 +0000 UTC m=+0.041759809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:07:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b36233687ba3bb1d77736cc88271be4c833182dca35a519751682e1d414bb87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b36233687ba3bb1d77736cc88271be4c833182dca35a519751682e1d414bb87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b36233687ba3bb1d77736cc88271be4c833182dca35a519751682e1d414bb87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b36233687ba3bb1d77736cc88271be4c833182dca35a519751682e1d414bb87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b36233687ba3bb1d77736cc88271be4c833182dca35a519751682e1d414bb87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:52 compute-0 podman[388914]: 2025-12-13 09:07:52.27557713 +0000 UTC m=+0.196651177 container init 2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 09:07:52 compute-0 podman[388914]: 2025-12-13 09:07:52.289683694 +0000 UTC m=+0.210757771 container start 2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mirzakhani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:07:52 compute-0 podman[388914]: 2025-12-13 09:07:52.29532516 +0000 UTC m=+0.216399187 container attach 2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mirzakhani, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:07:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3203: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:07:52 compute-0 ceph-mon[76537]: pgmap v3203: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:07:52 compute-0 gifted_mirzakhani[388930]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:07:52 compute-0 gifted_mirzakhani[388930]: --> All data devices are unavailable
Dec 13 09:07:52 compute-0 systemd[1]: libpod-2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5.scope: Deactivated successfully.
Dec 13 09:07:52 compute-0 podman[388950]: 2025-12-13 09:07:52.95239863 +0000 UTC m=+0.024600386 container died 2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 09:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b36233687ba3bb1d77736cc88271be4c833182dca35a519751682e1d414bb87-merged.mount: Deactivated successfully.
Dec 13 09:07:52 compute-0 nova_compute[248510]: 2025-12-13 09:07:52.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:53 compute-0 podman[388950]: 2025-12-13 09:07:53.003042407 +0000 UTC m=+0.075244143 container remove 2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 13 09:07:53 compute-0 systemd[1]: libpod-conmon-2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5.scope: Deactivated successfully.
Dec 13 09:07:53 compute-0 sudo[388837]: pam_unix(sudo:session): session closed for user root
Dec 13 09:07:53 compute-0 sudo[388966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:07:53 compute-0 sudo[388966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:07:53 compute-0 sudo[388966]: pam_unix(sudo:session): session closed for user root
Dec 13 09:07:53 compute-0 sudo[388991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:07:53 compute-0 sudo[388991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:07:53 compute-0 nova_compute[248510]: 2025-12-13 09:07:53.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:53 compute-0 podman[389027]: 2025-12-13 09:07:53.56800759 +0000 UTC m=+0.081493215 container create 2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banzai, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 09:07:53 compute-0 podman[389027]: 2025-12-13 09:07:53.513863802 +0000 UTC m=+0.027349517 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:07:53 compute-0 systemd[1]: Started libpod-conmon-2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197.scope.
Dec 13 09:07:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:07:53 compute-0 podman[389027]: 2025-12-13 09:07:53.649498963 +0000 UTC m=+0.162984628 container init 2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banzai, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 09:07:53 compute-0 podman[389027]: 2025-12-13 09:07:53.657607172 +0000 UTC m=+0.171092797 container start 2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banzai, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:07:53 compute-0 podman[389027]: 2025-12-13 09:07:53.661409331 +0000 UTC m=+0.174894996 container attach 2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banzai, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:07:53 compute-0 stupefied_banzai[389041]: 167 167
Dec 13 09:07:53 compute-0 systemd[1]: libpod-2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197.scope: Deactivated successfully.
Dec 13 09:07:53 compute-0 conmon[389041]: conmon 2f52703391785696d364 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197.scope/container/memory.events
Dec 13 09:07:53 compute-0 podman[389027]: 2025-12-13 09:07:53.664651284 +0000 UTC m=+0.178136959 container died 2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 09:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f359c19036d19c4ebe312800fb98e526208674dd3c91999878eba204c000b099-merged.mount: Deactivated successfully.
Dec 13 09:07:53 compute-0 podman[389027]: 2025-12-13 09:07:53.703500527 +0000 UTC m=+0.216986152 container remove 2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 09:07:53 compute-0 systemd[1]: libpod-conmon-2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197.scope: Deactivated successfully.
Dec 13 09:07:53 compute-0 podman[389064]: 2025-12-13 09:07:53.903763906 +0000 UTC m=+0.047062556 container create b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_galileo, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:07:53 compute-0 systemd[1]: Started libpod-conmon-b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e.scope.
Dec 13 09:07:53 compute-0 podman[389064]: 2025-12-13 09:07:53.886695856 +0000 UTC m=+0.029994526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:07:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472f33ce33c21099c69f0d97f235b8d44c7e4d2080da443c940619ac0eec4250/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472f33ce33c21099c69f0d97f235b8d44c7e4d2080da443c940619ac0eec4250/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472f33ce33c21099c69f0d97f235b8d44c7e4d2080da443c940619ac0eec4250/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472f33ce33c21099c69f0d97f235b8d44c7e4d2080da443c940619ac0eec4250/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:54 compute-0 podman[389064]: 2025-12-13 09:07:54.00772301 +0000 UTC m=+0.151021740 container init b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_galileo, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:07:54 compute-0 podman[389064]: 2025-12-13 09:07:54.022714067 +0000 UTC m=+0.166012717 container start b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 09:07:54 compute-0 podman[389064]: 2025-12-13 09:07:54.026249318 +0000 UTC m=+0.169548068 container attach b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:07:54 compute-0 sshd-session[389066]: Invalid user sol from 193.32.162.146 port 48900
Dec 13 09:07:54 compute-0 recursing_galileo[389080]: {
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:     "0": [
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:         {
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "devices": [
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "/dev/loop3"
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             ],
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_name": "ceph_lv0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_size": "21470642176",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "name": "ceph_lv0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "tags": {
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.cluster_name": "ceph",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.crush_device_class": "",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.encrypted": "0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.objectstore": "bluestore",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.osd_id": "0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.type": "block",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.vdo": "0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.with_tpm": "0"
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             },
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "type": "block",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "vg_name": "ceph_vg0"
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:         }
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:     ],
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:     "1": [
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:         {
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "devices": [
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "/dev/loop4"
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             ],
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_name": "ceph_lv1",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_size": "21470642176",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "name": "ceph_lv1",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "tags": {
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.cluster_name": "ceph",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.crush_device_class": "",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.encrypted": "0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.objectstore": "bluestore",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.osd_id": "1",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.type": "block",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.vdo": "0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.with_tpm": "0"
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             },
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "type": "block",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "vg_name": "ceph_vg1"
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:         }
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:     ],
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:     "2": [
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:         {
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "devices": [
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "/dev/loop5"
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             ],
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_name": "ceph_lv2",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_size": "21470642176",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "name": "ceph_lv2",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "tags": {
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.cluster_name": "ceph",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.crush_device_class": "",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.encrypted": "0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.objectstore": "bluestore",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.osd_id": "2",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.type": "block",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.vdo": "0",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:                 "ceph.with_tpm": "0"
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             },
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "type": "block",
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:             "vg_name": "ceph_vg2"
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:         }
Dec 13 09:07:54 compute-0 recursing_galileo[389080]:     ]
Dec 13 09:07:54 compute-0 recursing_galileo[389080]: }
Dec 13 09:07:54 compute-0 systemd[1]: libpod-b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e.scope: Deactivated successfully.
Dec 13 09:07:54 compute-0 podman[389064]: 2025-12-13 09:07:54.39981225 +0000 UTC m=+0.543110950 container died b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_galileo, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:07:54 compute-0 sshd-session[389066]: Connection closed by invalid user sol 193.32.162.146 port 48900 [preauth]
Dec 13 09:07:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3204: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-472f33ce33c21099c69f0d97f235b8d44c7e4d2080da443c940619ac0eec4250-merged.mount: Deactivated successfully.
Dec 13 09:07:54 compute-0 ceph-mon[76537]: pgmap v3204: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:07:55 compute-0 podman[389064]: 2025-12-13 09:07:55.013937331 +0000 UTC m=+1.157236011 container remove b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 09:07:55 compute-0 sudo[388991]: pam_unix(sudo:session): session closed for user root
Dec 13 09:07:55 compute-0 systemd[1]: libpod-conmon-b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e.scope: Deactivated successfully.
Dec 13 09:07:55 compute-0 sudo[389103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:07:55 compute-0 sudo[389103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:07:55 compute-0 sudo[389103]: pam_unix(sudo:session): session closed for user root
Dec 13 09:07:55 compute-0 sudo[389128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:07:55 compute-0 sudo[389128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:07:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:55.444 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:07:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:55.445 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:07:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:55.446 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:07:55 compute-0 podman[389166]: 2025-12-13 09:07:55.537924516 +0000 UTC m=+0.031862334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:07:55 compute-0 podman[389166]: 2025-12-13 09:07:55.635667159 +0000 UTC m=+0.129604937 container create 753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 09:07:55 compute-0 systemd[1]: Started libpod-conmon-753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc.scope.
Dec 13 09:07:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:07:55 compute-0 podman[389166]: 2025-12-13 09:07:55.840815284 +0000 UTC m=+0.334753092 container init 753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:07:55 compute-0 podman[389166]: 2025-12-13 09:07:55.849225971 +0000 UTC m=+0.343163779 container start 753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:07:55 compute-0 podman[389166]: 2025-12-13 09:07:55.853333217 +0000 UTC m=+0.347271025 container attach 753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 09:07:55 compute-0 nifty_keldysh[389181]: 167 167
Dec 13 09:07:55 compute-0 systemd[1]: libpod-753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc.scope: Deactivated successfully.
Dec 13 09:07:55 compute-0 podman[389166]: 2025-12-13 09:07:55.858258744 +0000 UTC m=+0.352196622 container died 753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 09:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-884cfa3c3ec925f5b51a76f27625f6709d0c5522feb549e0807b2af8bed045fe-merged.mount: Deactivated successfully.
Dec 13 09:07:55 compute-0 podman[389166]: 2025-12-13 09:07:55.903661266 +0000 UTC m=+0.397599074 container remove 753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:07:55 compute-0 systemd[1]: libpod-conmon-753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc.scope: Deactivated successfully.
Dec 13 09:07:56 compute-0 podman[389206]: 2025-12-13 09:07:56.109872439 +0000 UTC m=+0.050916385 container create 2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:07:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:07:56 compute-0 systemd[1]: Started libpod-conmon-2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547.scope.
Dec 13 09:07:56 compute-0 podman[389206]: 2025-12-13 09:07:56.087893181 +0000 UTC m=+0.028937127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:07:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492c6c9bf90c4251496a30accd14eab32b64dea9332748ca282f93c6e41ee999/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492c6c9bf90c4251496a30accd14eab32b64dea9332748ca282f93c6e41ee999/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492c6c9bf90c4251496a30accd14eab32b64dea9332748ca282f93c6e41ee999/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492c6c9bf90c4251496a30accd14eab32b64dea9332748ca282f93c6e41ee999/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:07:56 compute-0 podman[389206]: 2025-12-13 09:07:56.22500028 +0000 UTC m=+0.166044316 container init 2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:07:56 compute-0 podman[389206]: 2025-12-13 09:07:56.237040511 +0000 UTC m=+0.178084427 container start 2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_darwin, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:07:56 compute-0 podman[389206]: 2025-12-13 09:07:56.240693335 +0000 UTC m=+0.181737301 container attach 2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_darwin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:07:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3205: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:07:56 compute-0 ceph-mon[76537]: pgmap v3205: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:07:57 compute-0 lvm[389300]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:07:57 compute-0 lvm[389300]: VG ceph_vg0 finished
Dec 13 09:07:57 compute-0 lvm[389301]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:07:57 compute-0 lvm[389301]: VG ceph_vg1 finished
Dec 13 09:07:57 compute-0 lvm[389303]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:07:57 compute-0 lvm[389303]: VG ceph_vg2 finished
Dec 13 09:07:57 compute-0 amazing_darwin[389222]: {}
Dec 13 09:07:57 compute-0 systemd[1]: libpod-2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547.scope: Deactivated successfully.
Dec 13 09:07:57 compute-0 systemd[1]: libpod-2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547.scope: Consumed 1.595s CPU time.
Dec 13 09:07:57 compute-0 podman[389206]: 2025-12-13 09:07:57.194127735 +0000 UTC m=+1.135171661 container died 2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_darwin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-492c6c9bf90c4251496a30accd14eab32b64dea9332748ca282f93c6e41ee999-merged.mount: Deactivated successfully.
Dec 13 09:07:57 compute-0 podman[389206]: 2025-12-13 09:07:57.238832629 +0000 UTC m=+1.179876555 container remove 2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_darwin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 09:07:57 compute-0 systemd[1]: libpod-conmon-2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547.scope: Deactivated successfully.
Dec 13 09:07:57 compute-0 sudo[389128]: pam_unix(sudo:session): session closed for user root
Dec 13 09:07:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:07:57 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:07:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:07:57 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:07:57 compute-0 sudo[389319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:07:57 compute-0 sudo[389319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:07:57 compute-0 sudo[389319]: pam_unix(sudo:session): session closed for user root
Dec 13 09:07:57 compute-0 nova_compute[248510]: 2025-12-13 09:07:57.988 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:58 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:07:58 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:07:58 compute-0 nova_compute[248510]: 2025-12-13 09:07:58.548 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3206: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:07:59 compute-0 ceph-mon[76537]: pgmap v3206: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:07:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:59.718 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:07:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:59.719 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:07:59 compute-0 nova_compute[248510]: 2025-12-13 09:07:59.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:07:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:07:59.721 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3207: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:00 compute-0 ceph-mon[76537]: pgmap v3207: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:08:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3208: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:02 compute-0 ceph-mon[76537]: pgmap v3208: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:03 compute-0 nova_compute[248510]: 2025-12-13 09:08:03.023 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:03 compute-0 nova_compute[248510]: 2025-12-13 09:08:03.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3209: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:04 compute-0 ceph-mon[76537]: pgmap v3209: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:08:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3210: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:06 compute-0 ceph-mon[76537]: pgmap v3210: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:07 compute-0 nova_compute[248510]: 2025-12-13 09:08:07.534 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:07 compute-0 nova_compute[248510]: 2025-12-13 09:08:07.535 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:07 compute-0 nova_compute[248510]: 2025-12-13 09:08:07.556 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:08:07 compute-0 nova_compute[248510]: 2025-12-13 09:08:07.705 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:07 compute-0 nova_compute[248510]: 2025-12-13 09:08:07.706 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:07 compute-0 nova_compute[248510]: 2025-12-13 09:08:07.721 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:08:07 compute-0 nova_compute[248510]: 2025-12-13 09:08:07.722 248514 INFO nova.compute.claims [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:08:07 compute-0 nova_compute[248510]: 2025-12-13 09:08:07.949 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:08:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3638317404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.555 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.577 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.586 248514 DEBUG nova.compute.provider_tree [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:08:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3638317404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:08:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3211: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.667 248514 DEBUG nova.scheduler.client.report [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.701 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.703 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.762 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.763 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.795 248514 INFO nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.824 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.932 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.934 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.934 248514 INFO nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Creating image(s)
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.961 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:08:08 compute-0 nova_compute[248510]: 2025-12-13 09:08:08.992 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:08:09 compute-0 nova_compute[248510]: 2025-12-13 09:08:09.023 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:08:09 compute-0 nova_compute[248510]: 2025-12-13 09:08:09.028 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:08:09 compute-0 nova_compute[248510]: 2025-12-13 09:08:09.135 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:08:09 compute-0 nova_compute[248510]: 2025-12-13 09:08:09.136 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:09 compute-0 nova_compute[248510]: 2025-12-13 09:08:09.136 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:09 compute-0 nova_compute[248510]: 2025-12-13 09:08:09.137 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:09 compute-0 nova_compute[248510]: 2025-12-13 09:08:09.164 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:08:09 compute-0 nova_compute[248510]: 2025-12-13 09:08:09.167 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 54c767d1-14e1-4a29-be59-440d4e412c4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:08:09 compute-0 nova_compute[248510]: 2025-12-13 09:08:09.231 248514 DEBUG nova.policy [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:08:09 compute-0 sshd-session[389344]: Connection closed by authenticating user root 61.245.11.87 port 46794 [preauth]
Dec 13 09:08:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:08:09
Dec 13 09:08:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:08:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:08:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'volumes', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.meta']
Dec 13 09:08:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:08:10 compute-0 ceph-mon[76537]: pgmap v3211: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:08:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:08:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:08:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:08:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:08:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:08:10 compute-0 nova_compute[248510]: 2025-12-13 09:08:10.185 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 54c767d1-14e1-4a29-be59-440d4e412c4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:08:10 compute-0 nova_compute[248510]: 2025-12-13 09:08:10.226 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Successfully created port: 820e6a4a-a776-40a1-a7c0-d34cd3d2543c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:08:10 compute-0 nova_compute[248510]: 2025-12-13 09:08:10.278 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:08:10 compute-0 nova_compute[248510]: 2025-12-13 09:08:10.382 248514 DEBUG nova.objects.instance [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 54c767d1-14e1-4a29-be59-440d4e412c4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:08:10 compute-0 nova_compute[248510]: 2025-12-13 09:08:10.406 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:08:10 compute-0 nova_compute[248510]: 2025-12-13 09:08:10.407 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Ensure instance console log exists: /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:08:10 compute-0 nova_compute[248510]: 2025-12-13 09:08:10.407 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:10 compute-0 nova_compute[248510]: 2025-12-13 09:08:10.407 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:10 compute-0 nova_compute[248510]: 2025-12-13 09:08:10.408 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3212: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:10 compute-0 nova_compute[248510]: 2025-12-13 09:08:10.821 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Successfully created port: 06da1719-402b-4c36-b4b8-dcc95ed8b65c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:08:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:08:10 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:08:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:08:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:08:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:08:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:08:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:08:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:08:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:08:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:08:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:08:11 compute-0 ceph-mon[76537]: pgmap v3212: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:11 compute-0 nova_compute[248510]: 2025-12-13 09:08:11.709 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Successfully updated port: 820e6a4a-a776-40a1-a7c0-d34cd3d2543c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:08:11 compute-0 nova_compute[248510]: 2025-12-13 09:08:11.990 248514 DEBUG nova.compute.manager [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-changed-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:08:11 compute-0 nova_compute[248510]: 2025-12-13 09:08:11.990 248514 DEBUG nova.compute.manager [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing instance network info cache due to event network-changed-820e6a4a-a776-40a1-a7c0-d34cd3d2543c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:08:11 compute-0 nova_compute[248510]: 2025-12-13 09:08:11.991 248514 DEBUG oslo_concurrency.lockutils [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:08:11 compute-0 nova_compute[248510]: 2025-12-13 09:08:11.991 248514 DEBUG oslo_concurrency.lockutils [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:08:11 compute-0 nova_compute[248510]: 2025-12-13 09:08:11.991 248514 DEBUG nova.network.neutron [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing network info cache for port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:08:12 compute-0 nova_compute[248510]: 2025-12-13 09:08:12.184 248514 DEBUG nova.network.neutron [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:08:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3213: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:12 compute-0 nova_compute[248510]: 2025-12-13 09:08:12.664 248514 DEBUG nova.network.neutron [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:08:12 compute-0 ceph-mon[76537]: pgmap v3213: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:08:13 compute-0 nova_compute[248510]: 2025-12-13 09:08:13.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:13 compute-0 nova_compute[248510]: 2025-12-13 09:08:13.178 248514 DEBUG oslo_concurrency.lockutils [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:08:13 compute-0 nova_compute[248510]: 2025-12-13 09:08:13.556 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:13 compute-0 nova_compute[248510]: 2025-12-13 09:08:13.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:08:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3214: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:08:14 compute-0 ceph-mon[76537]: pgmap v3214: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:08:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:08:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2884898755' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:08:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:08:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2884898755' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:08:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2884898755' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:08:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2884898755' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:08:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:08:16 compute-0 nova_compute[248510]: 2025-12-13 09:08:16.283 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Successfully updated port: 06da1719-402b-4c36-b4b8-dcc95ed8b65c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:08:16 compute-0 nova_compute[248510]: 2025-12-13 09:08:16.298 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:08:16 compute-0 nova_compute[248510]: 2025-12-13 09:08:16.299 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:08:16 compute-0 nova_compute[248510]: 2025-12-13 09:08:16.299 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:08:16 compute-0 nova_compute[248510]: 2025-12-13 09:08:16.382 248514 DEBUG nova.compute.manager [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-changed-06da1719-402b-4c36-b4b8-dcc95ed8b65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:08:16 compute-0 nova_compute[248510]: 2025-12-13 09:08:16.383 248514 DEBUG nova.compute.manager [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing instance network info cache due to event network-changed-06da1719-402b-4c36-b4b8-dcc95ed8b65c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:08:16 compute-0 nova_compute[248510]: 2025-12-13 09:08:16.383 248514 DEBUG oslo_concurrency.lockutils [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:08:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3215: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:08:16 compute-0 nova_compute[248510]: 2025-12-13 09:08:16.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:08:16 compute-0 ceph-mon[76537]: pgmap v3215: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:08:17 compute-0 nova_compute[248510]: 2025-12-13 09:08:17.272 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:08:18 compute-0 nova_compute[248510]: 2025-12-13 09:08:18.074 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:18 compute-0 nova_compute[248510]: 2025-12-13 09:08:18.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3216: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:08:18 compute-0 ceph-mon[76537]: pgmap v3216: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:08:18 compute-0 podman[389538]: 2025-12-13 09:08:18.993910876 +0000 UTC m=+0.071940518 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 09:08:19 compute-0 podman[389537]: 2025-12-13 09:08:19.02816623 +0000 UTC m=+0.114722602 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 09:08:19 compute-0 podman[389536]: 2025-12-13 09:08:19.033999681 +0000 UTC m=+0.117459723 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.327 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.355 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.356 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Instance network_info: |[{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.356 248514 DEBUG oslo_concurrency.lockutils [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.357 248514 DEBUG nova.network.neutron [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing network info cache for port 06da1719-402b-4c36-b4b8-dcc95ed8b65c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.361 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Start _get_guest_xml network_info=[{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.367 248514 WARNING nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.375 248514 DEBUG nova.virt.libvirt.host [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.376 248514 DEBUG nova.virt.libvirt.host [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.382 248514 DEBUG nova.virt.libvirt.host [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.383 248514 DEBUG nova.virt.libvirt.host [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.383 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.384 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.384 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.385 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.385 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.385 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.385 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.386 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.386 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.386 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.387 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.387 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.390 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:08:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3217: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:08:20 compute-0 ceph-mon[76537]: pgmap v3217: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:08:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:08:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203756744' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.952 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.975 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:08:20 compute-0 nova_compute[248510]: 2025-12-13 09:08:20.979 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:08:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000360499293401914 of space, bias 1.0, pg target 0.1081497880205742 quantized to 32 (current 32)
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697557475050345 of space, bias 1.0, pg target 0.20092672425151034 quantized to 32 (current 32)
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:08:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:08:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:08:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964944478' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.549 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.551 248514 DEBUG nova.virt.libvirt.vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-698361226',display_name='tempest-TestGettingAddress-server-698361226',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-698361226',id=139,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-xmk45p0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:08Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=54c767d1-14e1-4a29-be59-440d4e412c4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.551 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.552 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.553 248514 DEBUG nova.virt.libvirt.vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-698361226',display_name='tempest-TestGettingAddress-server-698361226',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-698361226',id=139,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-xmk45p0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:08Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=54c767d1-14e1-4a29-be59-440d4e412c4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.554 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.554 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.555 248514 DEBUG nova.objects.instance [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 54c767d1-14e1-4a29-be59-440d4e412c4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.580 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:08:21 compute-0 nova_compute[248510]:   <uuid>54c767d1-14e1-4a29-be59-440d4e412c4e</uuid>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   <name>instance-0000008b</name>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-698361226</nova:name>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:08:20</nova:creationTime>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <nova:port uuid="820e6a4a-a776-40a1-a7c0-d34cd3d2543c">
Dec 13 09:08:21 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <nova:port uuid="06da1719-402b-4c36-b4b8-dcc95ed8b65c">
Dec 13 09:08:21 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fef0:d903" ipVersion="6"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <system>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <entry name="serial">54c767d1-14e1-4a29-be59-440d4e412c4e</entry>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <entry name="uuid">54c767d1-14e1-4a29-be59-440d4e412c4e</entry>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     </system>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   <os>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   </os>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   <features>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   </features>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/54c767d1-14e1-4a29-be59-440d4e412c4e_disk">
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       </source>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/54c767d1-14e1-4a29-be59-440d4e412c4e_disk.config">
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       </source>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:08:21 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:bc:71:ee"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <target dev="tap820e6a4a-a7"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:f0:d9:03"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <target dev="tap06da1719-40"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/console.log" append="off"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <video>
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     </video>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:08:21 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:08:21 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:08:21 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:08:21 compute-0 nova_compute[248510]: </domain>
Dec 13 09:08:21 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.582 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Preparing to wait for external event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.582 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.583 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.583 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.583 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Preparing to wait for external event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.583 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.584 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.584 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.585 248514 DEBUG nova.virt.libvirt.vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-698361226',display_name='tempest-TestGettingAddress-server-698361226',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-698361226',id=139,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-xmk45p0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:08Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=54c767d1-14e1-4a29-be59-440d4e412c4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.585 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.586 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.586 248514 DEBUG os_vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.587 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.588 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.588 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.592 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.593 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap820e6a4a-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.593 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap820e6a4a-a7, col_values=(('external_ids', {'iface-id': '820e6a4a-a776-40a1-a7c0-d34cd3d2543c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:71:ee', 'vm-uuid': '54c767d1-14e1-4a29-be59-440d4e412c4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.595 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:21 compute-0 NetworkManager[50376]: <info>  [1765616901.5966] manager: (tap820e6a4a-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.604 248514 INFO os_vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7')
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.605 248514 DEBUG nova.virt.libvirt.vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-698361226',display_name='tempest-TestGettingAddress-server-698361226',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-698361226',id=139,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-xmk45p0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:08Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=54c767d1-14e1-4a29-be59-440d4e412c4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.605 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.606 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.606 248514 DEBUG os_vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.607 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.607 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.607 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.610 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.610 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06da1719-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.611 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06da1719-40, col_values=(('external_ids', {'iface-id': '06da1719-402b-4c36-b4b8-dcc95ed8b65c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:d9:03', 'vm-uuid': '54c767d1-14e1-4a29-be59-440d4e412c4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.612 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:21 compute-0 NetworkManager[50376]: <info>  [1765616901.6134] manager: (tap06da1719-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.620 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.621 248514 INFO os_vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40')
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:08:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2203756744' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:08:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1964944478' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.859 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.860 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.883 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.884 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.885 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:bc:71:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.885 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:f0:d9:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.886 248514 INFO nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Using config drive
Dec 13 09:08:21 compute-0 nova_compute[248510]: 2025-12-13 09:08:21.919 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:08:22 compute-0 nova_compute[248510]: 2025-12-13 09:08:22.503 248514 INFO nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Creating config drive at /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/disk.config
Dec 13 09:08:22 compute-0 nova_compute[248510]: 2025-12-13 09:08:22.510 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphul1qh03 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:08:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3218: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:08:22 compute-0 nova_compute[248510]: 2025-12-13 09:08:22.671 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphul1qh03" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:08:22 compute-0 nova_compute[248510]: 2025-12-13 09:08:22.712 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:08:22 compute-0 nova_compute[248510]: 2025-12-13 09:08:22.718 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/disk.config 54c767d1-14e1-4a29-be59-440d4e412c4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:08:22 compute-0 nova_compute[248510]: 2025-12-13 09:08:22.774 248514 DEBUG nova.network.neutron [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updated VIF entry in instance network info cache for port 06da1719-402b-4c36-b4b8-dcc95ed8b65c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:08:22 compute-0 nova_compute[248510]: 2025-12-13 09:08:22.776 248514 DEBUG nova.network.neutron [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:08:22 compute-0 ceph-mon[76537]: pgmap v3218: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:08:22 compute-0 nova_compute[248510]: 2025-12-13 09:08:22.967 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/disk.config 54c767d1-14e1-4a29-be59-440d4e412c4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:08:22 compute-0 nova_compute[248510]: 2025-12-13 09:08:22.968 248514 INFO nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Deleting local config drive /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/disk.config because it was imported into RBD.
Dec 13 09:08:23 compute-0 NetworkManager[50376]: <info>  [1765616903.0302] manager: (tap820e6a4a-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/594)
Dec 13 09:08:23 compute-0 kernel: tap820e6a4a-a7: entered promiscuous mode
Dec 13 09:08:23 compute-0 ovn_controller[148476]: 2025-12-13T09:08:23Z|01437|binding|INFO|Claiming lport 820e6a4a-a776-40a1-a7c0-d34cd3d2543c for this chassis.
Dec 13 09:08:23 compute-0 ovn_controller[148476]: 2025-12-13T09:08:23Z|01438|binding|INFO|820e6a4a-a776-40a1-a7c0-d34cd3d2543c: Claiming fa:16:3e:bc:71:ee 10.100.0.12
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.034 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:23 compute-0 NetworkManager[50376]: <info>  [1765616903.0468] manager: (tap06da1719-40): new Tun device (/org/freedesktop/NetworkManager/Devices/595)
Dec 13 09:08:23 compute-0 systemd-udevd[389743]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:08:23 compute-0 systemd-udevd[389742]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:08:23 compute-0 NetworkManager[50376]: <info>  [1765616903.0775] device (tap820e6a4a-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:08:23 compute-0 NetworkManager[50376]: <info>  [1765616903.0791] device (tap820e6a4a-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:08:23 compute-0 systemd-machined[210538]: New machine qemu-170-instance-0000008b.
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.114 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:71:ee 10.100.0.12'], port_security=['fa:16:3e:bc:71:ee 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '54c767d1-14e1-4a29-be59-440d4e412c4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=642d5a3d-2cd7-49a8-8481-4caf7441de52, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=820e6a4a-a776-40a1-a7c0-d34cd3d2543c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.115 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c in datapath 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 bound to our chassis
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.117 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.125 248514 DEBUG oslo_concurrency.lockutils [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.135 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4930ec-6a22-4c72-b123-d47f6a415893]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.136 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap31e0a1ad-e1 in ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.138 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap31e0a1ad-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.138 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff198ce-9db2-479b-b8f5-a3f05f93a78f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.140 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4f94e9d9-7008-4595-a69d-326ff693786b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 kernel: tap06da1719-40: entered promiscuous mode
Dec 13 09:08:23 compute-0 NetworkManager[50376]: <info>  [1765616903.1493] device (tap06da1719-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.151 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:23 compute-0 NetworkManager[50376]: <info>  [1765616903.1531] device (tap06da1719-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:08:23 compute-0 systemd[1]: Started Virtual Machine qemu-170-instance-0000008b.
Dec 13 09:08:23 compute-0 ovn_controller[148476]: 2025-12-13T09:08:23Z|01439|binding|INFO|Claiming lport 06da1719-402b-4c36-b4b8-dcc95ed8b65c for this chassis.
Dec 13 09:08:23 compute-0 ovn_controller[148476]: 2025-12-13T09:08:23Z|01440|binding|INFO|06da1719-402b-4c36-b4b8-dcc95ed8b65c: Claiming fa:16:3e:f0:d9:03 2001:db8::f816:3eff:fef0:d903
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.156 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[727c448e-fd95-4e13-9f80-b5febf9f35f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.160 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:d9:03 2001:db8::f816:3eff:fef0:d903'], port_security=['fa:16:3e:f0:d9:03 2001:db8::f816:3eff:fef0:d903'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:d903/64', 'neutron:device_id': '54c767d1-14e1-4a29-be59-440d4e412c4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d30eddc7-04d5-44b8-966c-6e93f6525e9a, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=06da1719-402b-4c36-b4b8-dcc95ed8b65c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:08:23 compute-0 ovn_controller[148476]: 2025-12-13T09:08:23Z|01441|binding|INFO|Setting lport 820e6a4a-a776-40a1-a7c0-d34cd3d2543c ovn-installed in OVS
Dec 13 09:08:23 compute-0 ovn_controller[148476]: 2025-12-13T09:08:23Z|01442|binding|INFO|Setting lport 820e6a4a-a776-40a1-a7c0-d34cd3d2543c up in Southbound
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.162 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:23 compute-0 ovn_controller[148476]: 2025-12-13T09:08:23Z|01443|binding|INFO|Setting lport 06da1719-402b-4c36-b4b8-dcc95ed8b65c ovn-installed in OVS
Dec 13 09:08:23 compute-0 ovn_controller[148476]: 2025-12-13T09:08:23Z|01444|binding|INFO|Setting lport 06da1719-402b-4c36-b4b8-dcc95ed8b65c up in Southbound
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.174 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd869dfa-93dc-4b7a-a273-e1355f8b6401]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.222 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[84c8c15f-bf7a-4af5-9cdc-bc44f3f4eda0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 NetworkManager[50376]: <info>  [1765616903.2317] manager: (tap31e0a1ad-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/596)
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.232 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[932d5809-ced8-473a-9da8-2d7df3500b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.274 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f62f88bd-5666-413e-b126-5240fc697560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.278 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2835c75c-c582-49e1-bb6d-08e8215fa557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 NetworkManager[50376]: <info>  [1765616903.3033] device (tap31e0a1ad-e0): carrier: link connected
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.312 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d3152b-83a4-4809-93d0-9a840e8238c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.336 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[632e8994-a3c8-4f22-a71e-edba565a0d40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31e0a1ad-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:46:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948051, 'reachable_time': 26277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389779, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.358 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[47938127-0d77-4e88-9646-c3541ff9e81b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:46da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948051, 'tstamp': 948051}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389780, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.387 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ba026b76-350e-4678-ad29-72b3ca3bf597]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31e0a1ad-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:46:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948051, 'reachable_time': 26277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389781, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.433 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[766eee82-14b1-4c77-b5e6-2ce7b14c4360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.489 248514 DEBUG nova.compute.manager [req-a9712ea8-5707-4f57-910c-e8eccff0af5e req-6968c384-bd55-4e8b-80ad-076d769a8db9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.489 248514 DEBUG oslo_concurrency.lockutils [req-a9712ea8-5707-4f57-910c-e8eccff0af5e req-6968c384-bd55-4e8b-80ad-076d769a8db9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.489 248514 DEBUG oslo_concurrency.lockutils [req-a9712ea8-5707-4f57-910c-e8eccff0af5e req-6968c384-bd55-4e8b-80ad-076d769a8db9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.490 248514 DEBUG oslo_concurrency.lockutils [req-a9712ea8-5707-4f57-910c-e8eccff0af5e req-6968c384-bd55-4e8b-80ad-076d769a8db9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.490 248514 DEBUG nova.compute.manager [req-a9712ea8-5707-4f57-910c-e8eccff0af5e req-6968c384-bd55-4e8b-80ad-076d769a8db9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Processing event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.498 248514 DEBUG nova.compute.manager [req-96be31fd-e68d-456a-b6f4-ec2b7daf7442 req-3c79cdec-c42f-4734-b7eb-dcad34fc8637 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.498 248514 DEBUG oslo_concurrency.lockutils [req-96be31fd-e68d-456a-b6f4-ec2b7daf7442 req-3c79cdec-c42f-4734-b7eb-dcad34fc8637 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.498 248514 DEBUG oslo_concurrency.lockutils [req-96be31fd-e68d-456a-b6f4-ec2b7daf7442 req-3c79cdec-c42f-4734-b7eb-dcad34fc8637 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.498 248514 DEBUG oslo_concurrency.lockutils [req-96be31fd-e68d-456a-b6f4-ec2b7daf7442 req-3c79cdec-c42f-4734-b7eb-dcad34fc8637 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.498 248514 DEBUG nova.compute.manager [req-96be31fd-e68d-456a-b6f4-ec2b7daf7442 req-3c79cdec-c42f-4734-b7eb-dcad34fc8637 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Processing event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.521 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d0baaa94-16c2-44e5-baea-438b33a619cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.523 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31e0a1ad-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.523 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.523 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31e0a1ad-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:23 compute-0 NetworkManager[50376]: <info>  [1765616903.5262] manager: (tap31e0a1ad-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/597)
Dec 13 09:08:23 compute-0 kernel: tap31e0a1ad-e0: entered promiscuous mode
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.532 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31e0a1ad-e0, col_values=(('external_ids', {'iface-id': 'bbfdb82a-c1cb-4d3d-b44d-9475a3177d38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:23 compute-0 ovn_controller[148476]: 2025-12-13T09:08:23Z|01445|binding|INFO|Releasing lport bbfdb82a-c1cb-4d3d-b44d-9475a3177d38 from this chassis (sb_readonly=0)
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.536 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.537 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb751b8-5007-483a-8adf-1a58210362d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.537 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5.pid.haproxy
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:08:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.538 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'env', 'PROCESS_TAG=haproxy-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:23 compute-0 podman[389849]: 2025-12-13 09:08:23.925888495 +0000 UTC m=+0.053117002 container create 63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 09:08:23 compute-0 systemd[1]: Started libpod-conmon-63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922.scope.
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.988 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.990 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616903.9879081, 54c767d1-14e1-4a29-be59-440d4e412c4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.990 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] VM Started (Lifecycle Event)
Dec 13 09:08:23 compute-0 podman[389849]: 2025-12-13 09:08:23.899413862 +0000 UTC m=+0.026642389 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:08:23 compute-0 nova_compute[248510]: 2025-12-13 09:08:23.999 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.004 248514 INFO nova.virt.libvirt.driver [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Instance spawned successfully.
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.004 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:08:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d43e0b8694a8e8a523da23bc5772976ab296f6f8c8d08138f08d92ba5a318180/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.021 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.027 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.030 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.030 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.031 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.031 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.031 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.032 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:08:24 compute-0 podman[389849]: 2025-12-13 09:08:24.038244236 +0000 UTC m=+0.165472763 container init 63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:08:24 compute-0 podman[389849]: 2025-12-13 09:08:24.0446151 +0000 UTC m=+0.171843607 container start 63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.067 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.067 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616903.9891014, 54c767d1-14e1-4a29-be59-440d4e412c4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.068 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] VM Paused (Lifecycle Event)
Dec 13 09:08:24 compute-0 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [NOTICE]   (389874) : New worker (389876) forked
Dec 13 09:08:24 compute-0 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [NOTICE]   (389874) : Loading success.
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.102 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 06da1719-402b-4c36-b4b8-dcc95ed8b65c in datapath 9cd5db3b-c54c-46c9-b59b-e477b2784420 unbound from our chassis
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.104 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cd5db3b-c54c-46c9-b59b-e477b2784420
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.108 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.111 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616903.992605, 54c767d1-14e1-4a29-be59-440d4e412c4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.111 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] VM Resumed (Lifecycle Event)
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.114 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68003e99-5bcd-4153-a4d7-a6e32b57f3d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.115 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9cd5db3b-c1 in ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.118 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9cd5db3b-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.118 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[69b68b38-0a72-4df3-8470-da6be77a5ef6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.120 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9e45ac6b-83e1-4909-a168-1dd5bcee5a55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.124 248514 INFO nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Took 15.19 seconds to spawn the instance on the hypervisor.
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.125 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.130 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.134 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0719bcb0-c164-4a72-a7bc-7b0f9118c1c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.143 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.149 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[34b4e4f5-cafb-4cdf-8812-24680111283b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.177 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.180 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0575ff-339f-4bba-84e2-79c60b7a6b90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.187 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[48aed80b-5759-44e2-9386-645259423abf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 NetworkManager[50376]: <info>  [1765616904.1888] manager: (tap9cd5db3b-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/598)
Dec 13 09:08:24 compute-0 systemd-udevd[389765]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.227 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae02a4f-3c23-46be-8ce1-6b3aa4530676]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.228 248514 INFO nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Took 16.62 seconds to build instance.
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.230 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc41e16-fa84-4588-b4aa-a2091a8ae99f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 NetworkManager[50376]: <info>  [1765616904.2606] device (tap9cd5db3b-c0): carrier: link connected
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.269 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ea735268-9aa0-4f2d-9564-a42e1a4d1e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.299 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[89aae542-77a5-455b-8c60-1c898db2977c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cd5db3b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:ad:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948147, 'reachable_time': 31724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389895, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.317 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ccd03d-e61a-4ac6-bf9d-f595c75cc0c6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe90:ade0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948147, 'tstamp': 948147}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389896, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.344 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57b81455-8ab0-4181-8d46-fb284f282901]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cd5db3b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:ad:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948147, 'reachable_time': 31724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389897, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.382 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f260c5-c504-4d62-bc5d-35467ca01b4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.389 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.422 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[539cce2b-4b98-4102-83f4-91fea9c5e6e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.424 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cd5db3b-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.425 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.426 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cd5db3b-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:24 compute-0 kernel: tap9cd5db3b-c0: entered promiscuous mode
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:24 compute-0 NetworkManager[50376]: <info>  [1765616904.4297] manager: (tap9cd5db3b-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.430 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.462 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cd5db3b-c0, col_values=(('external_ids', {'iface-id': '26b2c676-4744-4ec7-b362-11535b0ba025'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.465 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.468 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9cd5db3b-c54c-46c9-b59b-e477b2784420.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9cd5db3b-c54c-46c9-b59b-e477b2784420.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.469 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f42713-2c2e-47b2-b66d-3c8c74030eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.470 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-9cd5db3b-c54c-46c9-b59b-e477b2784420
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/9cd5db3b-c54c-46c9-b59b-e477b2784420.pid.haproxy
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 9cd5db3b-c54c-46c9-b59b-e477b2784420
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:08:24 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.471 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'env', 'PROCESS_TAG=haproxy-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9cd5db3b-c54c-46c9-b59b-e477b2784420.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:08:24 compute-0 ovn_controller[148476]: 2025-12-13T09:08:24Z|01446|binding|INFO|Releasing lport 26b2c676-4744-4ec7-b362-11535b0ba025 from this chassis (sb_readonly=0)
Dec 13 09:08:24 compute-0 nova_compute[248510]: 2025-12-13 09:08:24.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3219: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 09:08:24 compute-0 ceph-mon[76537]: pgmap v3219: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 09:08:24 compute-0 podman[389927]: 2025-12-13 09:08:24.967669705 +0000 UTC m=+0.071454245 container create a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:08:25 compute-0 systemd[1]: Started libpod-conmon-a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b.scope.
Dec 13 09:08:25 compute-0 podman[389927]: 2025-12-13 09:08:24.941155581 +0000 UTC m=+0.044940141 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:08:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053d728be1d4b2966613ad5805ec267c950bc3159538ec562a41c9a868f9efff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:08:25 compute-0 podman[389927]: 2025-12-13 09:08:25.060264935 +0000 UTC m=+0.164049475 container init a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:08:25 compute-0 podman[389927]: 2025-12-13 09:08:25.07243281 +0000 UTC m=+0.176217320 container start a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:08:25 compute-0 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [NOTICE]   (389946) : New worker (389948) forked
Dec 13 09:08:25 compute-0 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [NOTICE]   (389946) : Loading success.
Dec 13 09:08:25 compute-0 nova_compute[248510]: 2025-12-13 09:08:25.839 248514 DEBUG nova.compute.manager [req-3c4709d6-4a5c-4035-b4aa-e896c6096d6f req-a37a78ff-15dc-4775-93eb-0c72eadd043b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:08:25 compute-0 nova_compute[248510]: 2025-12-13 09:08:25.839 248514 DEBUG oslo_concurrency.lockutils [req-3c4709d6-4a5c-4035-b4aa-e896c6096d6f req-a37a78ff-15dc-4775-93eb-0c72eadd043b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:25 compute-0 nova_compute[248510]: 2025-12-13 09:08:25.839 248514 DEBUG oslo_concurrency.lockutils [req-3c4709d6-4a5c-4035-b4aa-e896c6096d6f req-a37a78ff-15dc-4775-93eb-0c72eadd043b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:25 compute-0 nova_compute[248510]: 2025-12-13 09:08:25.840 248514 DEBUG oslo_concurrency.lockutils [req-3c4709d6-4a5c-4035-b4aa-e896c6096d6f req-a37a78ff-15dc-4775-93eb-0c72eadd043b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:25 compute-0 nova_compute[248510]: 2025-12-13 09:08:25.840 248514 DEBUG nova.compute.manager [req-3c4709d6-4a5c-4035-b4aa-e896c6096d6f req-a37a78ff-15dc-4775-93eb-0c72eadd043b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] No waiting events found dispatching network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:08:25 compute-0 nova_compute[248510]: 2025-12-13 09:08:25.841 248514 WARNING nova.compute.manager [req-3c4709d6-4a5c-4035-b4aa-e896c6096d6f req-a37a78ff-15dc-4775-93eb-0c72eadd043b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received unexpected event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c for instance with vm_state active and task_state None.
Dec 13 09:08:25 compute-0 nova_compute[248510]: 2025-12-13 09:08:25.964 248514 DEBUG nova.compute.manager [req-135481f3-62f8-4df3-a20f-8d54b54bd6a7 req-c66c85ec-a4a0-459f-a81b-07095d53e34b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:08:25 compute-0 nova_compute[248510]: 2025-12-13 09:08:25.964 248514 DEBUG oslo_concurrency.lockutils [req-135481f3-62f8-4df3-a20f-8d54b54bd6a7 req-c66c85ec-a4a0-459f-a81b-07095d53e34b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:25 compute-0 nova_compute[248510]: 2025-12-13 09:08:25.965 248514 DEBUG oslo_concurrency.lockutils [req-135481f3-62f8-4df3-a20f-8d54b54bd6a7 req-c66c85ec-a4a0-459f-a81b-07095d53e34b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:25 compute-0 nova_compute[248510]: 2025-12-13 09:08:25.965 248514 DEBUG oslo_concurrency.lockutils [req-135481f3-62f8-4df3-a20f-8d54b54bd6a7 req-c66c85ec-a4a0-459f-a81b-07095d53e34b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:25 compute-0 nova_compute[248510]: 2025-12-13 09:08:25.965 248514 DEBUG nova.compute.manager [req-135481f3-62f8-4df3-a20f-8d54b54bd6a7 req-c66c85ec-a4a0-459f-a81b-07095d53e34b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] No waiting events found dispatching network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:08:25 compute-0 nova_compute[248510]: 2025-12-13 09:08:25.965 248514 WARNING nova.compute.manager [req-135481f3-62f8-4df3-a20f-8d54b54bd6a7 req-c66c85ec-a4a0-459f-a81b-07095d53e34b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received unexpected event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c for instance with vm_state active and task_state None.
Dec 13 09:08:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:08:26 compute-0 nova_compute[248510]: 2025-12-13 09:08:26.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3220: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 12 KiB/s wr, 5 op/s
Dec 13 09:08:26 compute-0 ceph-mon[76537]: pgmap v3220: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 12 KiB/s wr, 5 op/s
Dec 13 09:08:26 compute-0 nova_compute[248510]: 2025-12-13 09:08:26.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:08:26 compute-0 nova_compute[248510]: 2025-12-13 09:08:26.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:08:26 compute-0 nova_compute[248510]: 2025-12-13 09:08:26.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:26 compute-0 nova_compute[248510]: 2025-12-13 09:08:26.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:26 compute-0 nova_compute[248510]: 2025-12-13 09:08:26.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:26 compute-0 nova_compute[248510]: 2025-12-13 09:08:26.810 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:08:26 compute-0 nova_compute[248510]: 2025-12-13 09:08:26.811 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:08:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:08:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2721834458' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:08:27 compute-0 nova_compute[248510]: 2025-12-13 09:08:27.362 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:08:27 compute-0 nova_compute[248510]: 2025-12-13 09:08:27.447 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:08:27 compute-0 nova_compute[248510]: 2025-12-13 09:08:27.447 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:08:27 compute-0 nova_compute[248510]: 2025-12-13 09:08:27.666 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:08:27 compute-0 nova_compute[248510]: 2025-12-13 09:08:27.668 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3293MB free_disk=59.966510431841016GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:08:27 compute-0 nova_compute[248510]: 2025-12-13 09:08:27.668 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:27 compute-0 nova_compute[248510]: 2025-12-13 09:08:27.668 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:27 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2721834458' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:08:27 compute-0 nova_compute[248510]: 2025-12-13 09:08:27.858 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 54c767d1-14e1-4a29-be59-440d4e412c4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:08:27 compute-0 nova_compute[248510]: 2025-12-13 09:08:27.859 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:08:27 compute-0 nova_compute[248510]: 2025-12-13 09:08:27.859 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:08:27 compute-0 nova_compute[248510]: 2025-12-13 09:08:27.922 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:08:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:08:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2326658025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:08:28 compute-0 nova_compute[248510]: 2025-12-13 09:08:28.477 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:08:28 compute-0 nova_compute[248510]: 2025-12-13 09:08:28.483 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:08:28 compute-0 nova_compute[248510]: 2025-12-13 09:08:28.531 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:08:28 compute-0 nova_compute[248510]: 2025-12-13 09:08:28.561 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3221: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 12 KiB/s wr, 33 op/s
Dec 13 09:08:28 compute-0 nova_compute[248510]: 2025-12-13 09:08:28.679 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:08:28 compute-0 nova_compute[248510]: 2025-12-13 09:08:28.679 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2326658025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:08:28 compute-0 ceph-mon[76537]: pgmap v3221: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 12 KiB/s wr, 33 op/s
Dec 13 09:08:29 compute-0 NetworkManager[50376]: <info>  [1765616909.6511] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/600)
Dec 13 09:08:29 compute-0 NetworkManager[50376]: <info>  [1765616909.6520] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/601)
Dec 13 09:08:29 compute-0 nova_compute[248510]: 2025-12-13 09:08:29.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:29 compute-0 ovn_controller[148476]: 2025-12-13T09:08:29Z|01447|binding|INFO|Releasing lport bbfdb82a-c1cb-4d3d-b44d-9475a3177d38 from this chassis (sb_readonly=0)
Dec 13 09:08:29 compute-0 ovn_controller[148476]: 2025-12-13T09:08:29Z|01448|binding|INFO|Releasing lport 26b2c676-4744-4ec7-b362-11535b0ba025 from this chassis (sb_readonly=0)
Dec 13 09:08:29 compute-0 nova_compute[248510]: 2025-12-13 09:08:29.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:29 compute-0 ovn_controller[148476]: 2025-12-13T09:08:29Z|01449|binding|INFO|Releasing lport bbfdb82a-c1cb-4d3d-b44d-9475a3177d38 from this chassis (sb_readonly=0)
Dec 13 09:08:29 compute-0 ovn_controller[148476]: 2025-12-13T09:08:29Z|01450|binding|INFO|Releasing lport 26b2c676-4744-4ec7-b362-11535b0ba025 from this chassis (sb_readonly=0)
Dec 13 09:08:29 compute-0 nova_compute[248510]: 2025-12-13 09:08:29.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:29 compute-0 nova_compute[248510]: 2025-12-13 09:08:29.952 248514 DEBUG nova.compute.manager [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-changed-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:08:29 compute-0 nova_compute[248510]: 2025-12-13 09:08:29.953 248514 DEBUG nova.compute.manager [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing instance network info cache due to event network-changed-820e6a4a-a776-40a1-a7c0-d34cd3d2543c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:08:29 compute-0 nova_compute[248510]: 2025-12-13 09:08:29.954 248514 DEBUG oslo_concurrency.lockutils [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:08:29 compute-0 nova_compute[248510]: 2025-12-13 09:08:29.954 248514 DEBUG oslo_concurrency.lockutils [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:08:29 compute-0 nova_compute[248510]: 2025-12-13 09:08:29.954 248514 DEBUG nova.network.neutron [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing network info cache for port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:08:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3222: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:08:30 compute-0 nova_compute[248510]: 2025-12-13 09:08:30.679 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:08:30 compute-0 nova_compute[248510]: 2025-12-13 09:08:30.680 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:08:30 compute-0 ceph-mon[76537]: pgmap v3222: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:08:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:08:31 compute-0 nova_compute[248510]: 2025-12-13 09:08:31.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:31 compute-0 nova_compute[248510]: 2025-12-13 09:08:31.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:08:32 compute-0 nova_compute[248510]: 2025-12-13 09:08:32.484 248514 DEBUG nova.network.neutron [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updated VIF entry in instance network info cache for port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:08:32 compute-0 nova_compute[248510]: 2025-12-13 09:08:32.485 248514 DEBUG nova.network.neutron [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:08:32 compute-0 nova_compute[248510]: 2025-12-13 09:08:32.516 248514 DEBUG oslo_concurrency.lockutils [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:08:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3223: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:08:32 compute-0 ceph-mon[76537]: pgmap v3223: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:08:33 compute-0 nova_compute[248510]: 2025-12-13 09:08:33.566 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:33 compute-0 nova_compute[248510]: 2025-12-13 09:08:33.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:08:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3224: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:08:34 compute-0 ceph-mon[76537]: pgmap v3224: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:08:36 compute-0 ovn_controller[148476]: 2025-12-13T09:08:36Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:71:ee 10.100.0.12
Dec 13 09:08:36 compute-0 ovn_controller[148476]: 2025-12-13T09:08:36Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:71:ee 10.100.0.12
Dec 13 09:08:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:08:36 compute-0 nova_compute[248510]: 2025-12-13 09:08:36.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3225: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 68 op/s
Dec 13 09:08:36 compute-0 ceph-mon[76537]: pgmap v3225: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 68 op/s
Dec 13 09:08:38 compute-0 nova_compute[248510]: 2025-12-13 09:08:38.569 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3226: 321 pgs: 321 active+clean; 94 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 372 KiB/s wr, 83 op/s
Dec 13 09:08:38 compute-0 ceph-mon[76537]: pgmap v3226: 321 pgs: 321 active+clean; 94 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 372 KiB/s wr, 83 op/s
Dec 13 09:08:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:08:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:08:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:08:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:08:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:08:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:08:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3227: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 103 op/s
Dec 13 09:08:40 compute-0 ceph-mon[76537]: pgmap v3227: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 103 op/s
Dec 13 09:08:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:08:41 compute-0 nova_compute[248510]: 2025-12-13 09:08:41.625 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3228: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:08:42 compute-0 ceph-mon[76537]: pgmap v3228: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:08:43 compute-0 nova_compute[248510]: 2025-12-13 09:08:43.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3229: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:08:44 compute-0 ceph-mon[76537]: pgmap v3229: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:08:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:08:46 compute-0 nova_compute[248510]: 2025-12-13 09:08:46.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3230: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:08:46 compute-0 ceph-mon[76537]: pgmap v3230: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:08:48 compute-0 nova_compute[248510]: 2025-12-13 09:08:48.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3231: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:08:48 compute-0 ceph-mon[76537]: pgmap v3231: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:08:49 compute-0 podman[390004]: 2025-12-13 09:08:49.996614336 +0000 UTC m=+0.083840685 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 09:08:50 compute-0 podman[390005]: 2025-12-13 09:08:50.003711749 +0000 UTC m=+0.072221075 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:08:50 compute-0 podman[390003]: 2025-12-13 09:08:50.031973199 +0000 UTC m=+0.119104266 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 09:08:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3232: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Dec 13 09:08:50 compute-0 ceph-mon[76537]: pgmap v3232: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Dec 13 09:08:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:08:51 compute-0 nova_compute[248510]: 2025-12-13 09:08:51.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3233: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Dec 13 09:08:52 compute-0 ceph-mon[76537]: pgmap v3233: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Dec 13 09:08:52 compute-0 nova_compute[248510]: 2025-12-13 09:08:52.774 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:52 compute-0 nova_compute[248510]: 2025-12-13 09:08:52.775 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:52 compute-0 nova_compute[248510]: 2025-12-13 09:08:52.869 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:08:53 compute-0 nova_compute[248510]: 2025-12-13 09:08:53.215 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:53 compute-0 nova_compute[248510]: 2025-12-13 09:08:53.215 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:53 compute-0 nova_compute[248510]: 2025-12-13 09:08:53.224 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:08:53 compute-0 nova_compute[248510]: 2025-12-13 09:08:53.225 248514 INFO nova.compute.claims [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:08:53 compute-0 nova_compute[248510]: 2025-12-13 09:08:53.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:53 compute-0 nova_compute[248510]: 2025-12-13 09:08:53.689 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:08:53 compute-0 nova_compute[248510]: 2025-12-13 09:08:53.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:08:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:08:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190108441' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:08:54 compute-0 nova_compute[248510]: 2025-12-13 09:08:54.221 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:08:54 compute-0 nova_compute[248510]: 2025-12-13 09:08:54.231 248514 DEBUG nova.compute.provider_tree [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:08:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3190108441' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:08:54 compute-0 nova_compute[248510]: 2025-12-13 09:08:54.518 248514 DEBUG nova.scheduler.client.report [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:08:54 compute-0 nova_compute[248510]: 2025-12-13 09:08:54.589 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:54 compute-0 nova_compute[248510]: 2025-12-13 09:08:54.591 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:08:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3234: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Dec 13 09:08:54 compute-0 nova_compute[248510]: 2025-12-13 09:08:54.815 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:08:54 compute-0 nova_compute[248510]: 2025-12-13 09:08:54.815 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:08:55 compute-0 nova_compute[248510]: 2025-12-13 09:08:55.224 248514 INFO nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:08:55 compute-0 nova_compute[248510]: 2025-12-13 09:08:55.262 248514 DEBUG nova.policy [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:08:55 compute-0 nova_compute[248510]: 2025-12-13 09:08:55.438 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:08:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:55.446 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:55.447 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:08:55.449 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:55 compute-0 nova_compute[248510]: 2025-12-13 09:08:55.972 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:08:55 compute-0 nova_compute[248510]: 2025-12-13 09:08:55.974 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:08:55 compute-0 nova_compute[248510]: 2025-12-13 09:08:55.975 248514 INFO nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Creating image(s)
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.008 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.051 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.079 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.084 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:08:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.176 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.177 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.178 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.178 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.205 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.210 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bdbb2140-812f-4913-a83e-7dadd1968cde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:08:56 compute-0 ceph-mon[76537]: pgmap v3234: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.559 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bdbb2140-812f-4913-a83e-7dadd1968cde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.645 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:08:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3235: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.746 248514 DEBUG nova.objects.instance [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid bdbb2140-812f-4913-a83e-7dadd1968cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.985 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.985 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Ensure instance console log exists: /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.986 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.986 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:08:56 compute-0 nova_compute[248510]: 2025-12-13 09:08:56.987 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:08:57 compute-0 sudo[390257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:08:57 compute-0 sudo[390257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:08:57 compute-0 sudo[390257]: pam_unix(sudo:session): session closed for user root
Dec 13 09:08:57 compute-0 sudo[390282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 09:08:57 compute-0 sudo[390282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:08:57 compute-0 nova_compute[248510]: 2025-12-13 09:08:57.717 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Successfully created port: c509d630-f81b-402a-bae1-eae9e8fca8ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:08:58 compute-0 podman[390350]: 2025-12-13 09:08:58.118752069 +0000 UTC m=+0.094572812 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:08:58 compute-0 podman[390350]: 2025-12-13 09:08:58.224476428 +0000 UTC m=+0.200297171 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 09:08:58 compute-0 ceph-mon[76537]: pgmap v3235: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Dec 13 09:08:58 compute-0 nova_compute[248510]: 2025-12-13 09:08:58.580 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:08:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3236: 321 pgs: 321 active+clean; 140 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 727 KiB/s wr, 13 op/s
Dec 13 09:08:59 compute-0 nova_compute[248510]: 2025-12-13 09:08:59.101 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Successfully created port: 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:08:59 compute-0 sudo[390282]: pam_unix(sudo:session): session closed for user root
Dec 13 09:08:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:08:59 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:08:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:08:59 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:08:59 compute-0 sudo[390537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:08:59 compute-0 sudo[390537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:08:59 compute-0 sudo[390537]: pam_unix(sudo:session): session closed for user root
Dec 13 09:08:59 compute-0 sudo[390562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:08:59 compute-0 sudo[390562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:09:00 compute-0 sudo[390562]: pam_unix(sudo:session): session closed for user root
Dec 13 09:09:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:00.156 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:09:00 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:00.158 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:09:00 compute-0 nova_compute[248510]: 2025-12-13 09:09:00.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:09:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:09:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:09:00 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:09:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:09:00 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:09:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:09:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:09:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:09:00 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:09:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:09:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:09:00 compute-0 nova_compute[248510]: 2025-12-13 09:09:00.238 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Successfully updated port: c509d630-f81b-402a-bae1-eae9e8fca8ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:09:00 compute-0 sudo[390618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:09:00 compute-0 sudo[390618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:09:00 compute-0 sudo[390618]: pam_unix(sudo:session): session closed for user root
Dec 13 09:09:00 compute-0 ceph-mon[76537]: pgmap v3236: 321 pgs: 321 active+clean; 140 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 727 KiB/s wr, 13 op/s
Dec 13 09:09:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:09:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:09:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:09:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:09:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:09:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:09:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:09:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:09:00 compute-0 sudo[390643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:09:00 compute-0 sudo[390643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:09:00 compute-0 nova_compute[248510]: 2025-12-13 09:09:00.357 248514 DEBUG nova.compute.manager [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-changed-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:00 compute-0 nova_compute[248510]: 2025-12-13 09:09:00.358 248514 DEBUG nova.compute.manager [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing instance network info cache due to event network-changed-c509d630-f81b-402a-bae1-eae9e8fca8ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:09:00 compute-0 nova_compute[248510]: 2025-12-13 09:09:00.358 248514 DEBUG oslo_concurrency.lockutils [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:09:00 compute-0 nova_compute[248510]: 2025-12-13 09:09:00.358 248514 DEBUG oslo_concurrency.lockutils [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:09:00 compute-0 nova_compute[248510]: 2025-12-13 09:09:00.359 248514 DEBUG nova.network.neutron [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing network info cache for port c509d630-f81b-402a-bae1-eae9e8fca8ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:09:00 compute-0 podman[390681]: 2025-12-13 09:09:00.623842938 +0000 UTC m=+0.045853364 container create 5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:09:00 compute-0 systemd[1]: Started libpod-conmon-5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429.scope.
Dec 13 09:09:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3237: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:09:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:09:00 compute-0 podman[390681]: 2025-12-13 09:09:00.604629612 +0000 UTC m=+0.026640068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:09:00 compute-0 podman[390681]: 2025-12-13 09:09:00.715444243 +0000 UTC m=+0.137454689 container init 5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:09:00 compute-0 podman[390681]: 2025-12-13 09:09:00.725719488 +0000 UTC m=+0.147729924 container start 5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:09:00 compute-0 podman[390681]: 2025-12-13 09:09:00.72927943 +0000 UTC m=+0.151289886 container attach 5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:09:00 compute-0 laughing_johnson[390697]: 167 167
Dec 13 09:09:00 compute-0 systemd[1]: libpod-5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429.scope: Deactivated successfully.
Dec 13 09:09:00 compute-0 podman[390681]: 2025-12-13 09:09:00.734411362 +0000 UTC m=+0.156421798 container died 5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-016b61c2bb8803811178b21a8f60a111a8c55574ef3242aa8049e68d845e1214-merged.mount: Deactivated successfully.
Dec 13 09:09:00 compute-0 podman[390681]: 2025-12-13 09:09:00.777935166 +0000 UTC m=+0.199945592 container remove 5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:09:00 compute-0 systemd[1]: libpod-conmon-5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429.scope: Deactivated successfully.
Dec 13 09:09:00 compute-0 podman[390721]: 2025-12-13 09:09:00.974715725 +0000 UTC m=+0.051951032 container create b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 09:09:01 compute-0 systemd[1]: Started libpod-conmon-b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc.scope.
Dec 13 09:09:01 compute-0 podman[390721]: 2025-12-13 09:09:00.950339366 +0000 UTC m=+0.027574693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:09:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ba5376a4d804508007d3db31beac434e560514e2bc55aecb9ded101201a33a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ba5376a4d804508007d3db31beac434e560514e2bc55aecb9ded101201a33a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ba5376a4d804508007d3db31beac434e560514e2bc55aecb9ded101201a33a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ba5376a4d804508007d3db31beac434e560514e2bc55aecb9ded101201a33a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ba5376a4d804508007d3db31beac434e560514e2bc55aecb9ded101201a33a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:01 compute-0 podman[390721]: 2025-12-13 09:09:01.079214892 +0000 UTC m=+0.156450289 container init b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lalande, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:09:01 compute-0 podman[390721]: 2025-12-13 09:09:01.091191291 +0000 UTC m=+0.168426598 container start b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lalande, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:09:01 compute-0 podman[390721]: 2025-12-13 09:09:01.109462653 +0000 UTC m=+0.186698030 container attach b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lalande, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:09:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:09:01 compute-0 nova_compute[248510]: 2025-12-13 09:09:01.389 248514 DEBUG nova.network.neutron [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:09:01 compute-0 sweet_lalande[390737]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:09:01 compute-0 sweet_lalande[390737]: --> All data devices are unavailable
Dec 13 09:09:01 compute-0 nova_compute[248510]: 2025-12-13 09:09:01.702 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:01 compute-0 systemd[1]: libpod-b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc.scope: Deactivated successfully.
Dec 13 09:09:01 compute-0 podman[390721]: 2025-12-13 09:09:01.721686415 +0000 UTC m=+0.798921782 container died b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 09:09:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8ba5376a4d804508007d3db31beac434e560514e2bc55aecb9ded101201a33a-merged.mount: Deactivated successfully.
Dec 13 09:09:01 compute-0 podman[390721]: 2025-12-13 09:09:01.788049088 +0000 UTC m=+0.865284435 container remove b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:09:01 compute-0 systemd[1]: libpod-conmon-b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc.scope: Deactivated successfully.
Dec 13 09:09:01 compute-0 sudo[390643]: pam_unix(sudo:session): session closed for user root
Dec 13 09:09:01 compute-0 sudo[390769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:09:01 compute-0 sudo[390769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:09:01 compute-0 sudo[390769]: pam_unix(sudo:session): session closed for user root
Dec 13 09:09:01 compute-0 sudo[390794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:09:01 compute-0 sudo[390794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:09:02 compute-0 podman[390831]: 2025-12-13 09:09:02.280889219 +0000 UTC m=+0.050164865 container create d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:09:02 compute-0 ceph-mon[76537]: pgmap v3237: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:09:02 compute-0 systemd[1]: Started libpod-conmon-d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45.scope.
Dec 13 09:09:02 compute-0 podman[390831]: 2025-12-13 09:09:02.259038905 +0000 UTC m=+0.028314541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:09:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:09:02 compute-0 podman[390831]: 2025-12-13 09:09:02.392557002 +0000 UTC m=+0.161832698 container init d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 09:09:02 compute-0 podman[390831]: 2025-12-13 09:09:02.404929321 +0000 UTC m=+0.174204937 container start d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:09:02 compute-0 podman[390831]: 2025-12-13 09:09:02.409194891 +0000 UTC m=+0.178470607 container attach d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 09:09:02 compute-0 sweet_leavitt[390847]: 167 167
Dec 13 09:09:02 compute-0 systemd[1]: libpod-d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45.scope: Deactivated successfully.
Dec 13 09:09:02 compute-0 podman[390831]: 2025-12-13 09:09:02.413300257 +0000 UTC m=+0.182575873 container died d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:09:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-33f25ebf3ff7e87d2f5ad41b6d34921e45cbf00fef98ed10c808e486ca75a7aa-merged.mount: Deactivated successfully.
Dec 13 09:09:02 compute-0 podman[390831]: 2025-12-13 09:09:02.459990662 +0000 UTC m=+0.229266308 container remove d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leavitt, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 09:09:02 compute-0 systemd[1]: libpod-conmon-d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45.scope: Deactivated successfully.
Dec 13 09:09:02 compute-0 nova_compute[248510]: 2025-12-13 09:09:02.667 248514 DEBUG nova.network.neutron [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:09:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3238: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:09:02 compute-0 podman[390870]: 2025-12-13 09:09:02.755629012 +0000 UTC m=+0.067482583 container create 141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 09:09:02 compute-0 nova_compute[248510]: 2025-12-13 09:09:02.785 248514 DEBUG oslo_concurrency.lockutils [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:09:02 compute-0 systemd[1]: Started libpod-conmon-141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d.scope.
Dec 13 09:09:02 compute-0 podman[390870]: 2025-12-13 09:09:02.734599839 +0000 UTC m=+0.046453440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:09:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877a3524b7f9e4e14faa74ab94757888d5e4a55e256d523c652c2afdc92fdec0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877a3524b7f9e4e14faa74ab94757888d5e4a55e256d523c652c2afdc92fdec0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877a3524b7f9e4e14faa74ab94757888d5e4a55e256d523c652c2afdc92fdec0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877a3524b7f9e4e14faa74ab94757888d5e4a55e256d523c652c2afdc92fdec0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:02 compute-0 podman[390870]: 2025-12-13 09:09:02.87255709 +0000 UTC m=+0.184410691 container init 141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cannon, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:09:02 compute-0 podman[390870]: 2025-12-13 09:09:02.890692308 +0000 UTC m=+0.202545879 container start 141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:09:02 compute-0 podman[390870]: 2025-12-13 09:09:02.893905641 +0000 UTC m=+0.205759212 container attach 141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 13 09:09:03 compute-0 zealous_cannon[390886]: {
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:     "0": [
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:         {
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "devices": [
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "/dev/loop3"
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             ],
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_name": "ceph_lv0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_size": "21470642176",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "name": "ceph_lv0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "tags": {
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.cluster_name": "ceph",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.crush_device_class": "",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.encrypted": "0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.objectstore": "bluestore",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.osd_id": "0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.type": "block",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.vdo": "0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.with_tpm": "0"
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             },
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "type": "block",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "vg_name": "ceph_vg0"
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:         }
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:     ],
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:     "1": [
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:         {
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "devices": [
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "/dev/loop4"
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             ],
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_name": "ceph_lv1",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_size": "21470642176",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "name": "ceph_lv1",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "tags": {
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.cluster_name": "ceph",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.crush_device_class": "",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.encrypted": "0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.objectstore": "bluestore",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.osd_id": "1",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.type": "block",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.vdo": "0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.with_tpm": "0"
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             },
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "type": "block",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "vg_name": "ceph_vg1"
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:         }
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:     ],
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:     "2": [
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:         {
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "devices": [
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "/dev/loop5"
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             ],
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_name": "ceph_lv2",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_size": "21470642176",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "name": "ceph_lv2",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "tags": {
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.cluster_name": "ceph",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.crush_device_class": "",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.encrypted": "0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.objectstore": "bluestore",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.osd_id": "2",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.type": "block",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.vdo": "0",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:                 "ceph.with_tpm": "0"
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             },
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "type": "block",
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:             "vg_name": "ceph_vg2"
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:         }
Dec 13 09:09:03 compute-0 zealous_cannon[390886]:     ]
Dec 13 09:09:03 compute-0 zealous_cannon[390886]: }
Dec 13 09:09:03 compute-0 nova_compute[248510]: 2025-12-13 09:09:03.242 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Successfully updated port: 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:09:03 compute-0 systemd[1]: libpod-141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d.scope: Deactivated successfully.
Dec 13 09:09:03 compute-0 podman[390870]: 2025-12-13 09:09:03.252282482 +0000 UTC m=+0.564136133 container died 141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Dec 13 09:09:03 compute-0 nova_compute[248510]: 2025-12-13 09:09:03.297 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:09:03 compute-0 nova_compute[248510]: 2025-12-13 09:09:03.297 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:09:03 compute-0 nova_compute[248510]: 2025-12-13 09:09:03.298 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:09:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-877a3524b7f9e4e14faa74ab94757888d5e4a55e256d523c652c2afdc92fdec0-merged.mount: Deactivated successfully.
Dec 13 09:09:03 compute-0 nova_compute[248510]: 2025-12-13 09:09:03.442 248514 DEBUG nova.compute.manager [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-changed-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:03 compute-0 nova_compute[248510]: 2025-12-13 09:09:03.442 248514 DEBUG nova.compute.manager [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing instance network info cache due to event network-changed-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:09:03 compute-0 nova_compute[248510]: 2025-12-13 09:09:03.443 248514 DEBUG oslo_concurrency.lockutils [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:09:03 compute-0 nova_compute[248510]: 2025-12-13 09:09:03.514 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:09:03 compute-0 podman[390870]: 2025-12-13 09:09:03.521273954 +0000 UTC m=+0.833127555 container remove 141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cannon, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Dec 13 09:09:03 compute-0 systemd[1]: libpod-conmon-141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d.scope: Deactivated successfully.
Dec 13 09:09:03 compute-0 sudo[390794]: pam_unix(sudo:session): session closed for user root
Dec 13 09:09:03 compute-0 nova_compute[248510]: 2025-12-13 09:09:03.582 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:03 compute-0 sudo[390907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:09:03 compute-0 sudo[390907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:09:03 compute-0 sudo[390907]: pam_unix(sudo:session): session closed for user root
Dec 13 09:09:03 compute-0 sudo[390932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:09:03 compute-0 sudo[390932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:09:04 compute-0 podman[390969]: 2025-12-13 09:09:04.054263562 +0000 UTC m=+0.044654484 container create cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:09:04 compute-0 systemd[1]: Started libpod-conmon-cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198.scope.
Dec 13 09:09:04 compute-0 podman[390969]: 2025-12-13 09:09:04.032624563 +0000 UTC m=+0.023015505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:09:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:09:04 compute-0 podman[390969]: 2025-12-13 09:09:04.144979193 +0000 UTC m=+0.135370095 container init cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 09:09:04 compute-0 podman[390969]: 2025-12-13 09:09:04.151765718 +0000 UTC m=+0.142156620 container start cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 09:09:04 compute-0 podman[390969]: 2025-12-13 09:09:04.155386442 +0000 UTC m=+0.145777374 container attach cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 09:09:04 compute-0 bold_allen[390986]: 167 167
Dec 13 09:09:04 compute-0 systemd[1]: libpod-cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198.scope: Deactivated successfully.
Dec 13 09:09:04 compute-0 podman[390969]: 2025-12-13 09:09:04.160722669 +0000 UTC m=+0.151113571 container died cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Dec 13 09:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aa0b990a37fe48990895f550196c947e2356cbd8a87d4b9ae6d4805cb29b9bc-merged.mount: Deactivated successfully.
Dec 13 09:09:04 compute-0 podman[390969]: 2025-12-13 09:09:04.198111865 +0000 UTC m=+0.188502787 container remove cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:09:04 compute-0 systemd[1]: libpod-conmon-cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198.scope: Deactivated successfully.
Dec 13 09:09:04 compute-0 podman[391009]: 2025-12-13 09:09:04.386550798 +0000 UTC m=+0.030327843 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:09:04 compute-0 ceph-mon[76537]: pgmap v3238: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:09:04 compute-0 podman[391009]: 2025-12-13 09:09:04.665246932 +0000 UTC m=+0.309023967 container create d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nobel, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:09:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3239: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:09:04 compute-0 systemd[1]: Started libpod-conmon-d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e.scope.
Dec 13 09:09:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2070e95509077b2e8e39f8d13a9d819a3861a26f847a3ec18da37927c010d190/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2070e95509077b2e8e39f8d13a9d819a3861a26f847a3ec18da37927c010d190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2070e95509077b2e8e39f8d13a9d819a3861a26f847a3ec18da37927c010d190/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2070e95509077b2e8e39f8d13a9d819a3861a26f847a3ec18da37927c010d190/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:09:05 compute-0 podman[391009]: 2025-12-13 09:09:05.058532143 +0000 UTC m=+0.702309218 container init d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:09:05 compute-0 podman[391009]: 2025-12-13 09:09:05.069669081 +0000 UTC m=+0.713446116 container start d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:09:05 compute-0 podman[391009]: 2025-12-13 09:09:05.327694171 +0000 UTC m=+0.971471186 container attach d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:09:05 compute-0 lvm[391104]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:09:05 compute-0 lvm[391104]: VG ceph_vg0 finished
Dec 13 09:09:05 compute-0 lvm[391105]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:09:05 compute-0 lvm[391105]: VG ceph_vg1 finished
Dec 13 09:09:05 compute-0 lvm[391107]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:09:05 compute-0 lvm[391107]: VG ceph_vg2 finished
Dec 13 09:09:05 compute-0 adoring_nobel[391026]: {}
Dec 13 09:09:06 compute-0 systemd[1]: libpod-d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e.scope: Deactivated successfully.
Dec 13 09:09:06 compute-0 systemd[1]: libpod-d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e.scope: Consumed 1.485s CPU time.
Dec 13 09:09:06 compute-0 podman[391009]: 2025-12-13 09:09:06.010456665 +0000 UTC m=+1.654233730 container died d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nobel, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:09:06 compute-0 ceph-mon[76537]: pgmap v3239: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:09:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2070e95509077b2e8e39f8d13a9d819a3861a26f847a3ec18da37927c010d190-merged.mount: Deactivated successfully.
Dec 13 09:09:06 compute-0 podman[391009]: 2025-12-13 09:09:06.188988512 +0000 UTC m=+1.832765497 container remove d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 09:09:06 compute-0 systemd[1]: libpod-conmon-d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e.scope: Deactivated successfully.
Dec 13 09:09:06 compute-0 sudo[390932]: pam_unix(sudo:session): session closed for user root
Dec 13 09:09:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:09:06 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:09:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:09:06 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:09:06 compute-0 sudo[391121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:09:06 compute-0 sudo[391121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:09:06 compute-0 sudo[391121]: pam_unix(sudo:session): session closed for user root
Dec 13 09:09:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3240: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:09:06 compute-0 nova_compute[248510]: 2025-12-13 09:09:06.705 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:07 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:09:07 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.405 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updating instance_info_cache with network_info: [{"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.470 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.471 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Instance network_info: |[{"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.471 248514 DEBUG oslo_concurrency.lockutils [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.471 248514 DEBUG nova.network.neutron [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing network info cache for port 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.476 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Start _get_guest_xml network_info=[{"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.483 248514 WARNING nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.495 248514 DEBUG nova.virt.libvirt.host [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.496 248514 DEBUG nova.virt.libvirt.host [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.500 248514 DEBUG nova.virt.libvirt.host [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.501 248514 DEBUG nova.virt.libvirt.host [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.501 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.501 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.502 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.502 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.502 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.502 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.503 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.503 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.503 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.503 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.504 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.504 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:09:07 compute-0 nova_compute[248510]: 2025-12-13 09:09:07.507 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:09:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:09:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2325214439' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:09:08 compute-0 nova_compute[248510]: 2025-12-13 09:09:08.125 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:09:08 compute-0 nova_compute[248510]: 2025-12-13 09:09:08.159 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:09:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:08.161 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:08 compute-0 nova_compute[248510]: 2025-12-13 09:09:08.164 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:09:08 compute-0 ceph-mon[76537]: pgmap v3240: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:09:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2325214439' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:09:08 compute-0 nova_compute[248510]: 2025-12-13 09:09:08.585 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3241: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:09:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:09:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392839541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:09:08 compute-0 nova_compute[248510]: 2025-12-13 09:09:08.769 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:09:08 compute-0 nova_compute[248510]: 2025-12-13 09:09:08.772 248514 DEBUG nova.virt.libvirt.vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1296664967',display_name='tempest-TestGettingAddress-server-1296664967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1296664967',id=140,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-76ioq0wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:55Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=bdbb2140-812f-4913-a83e-7dadd1968cde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:09:08 compute-0 nova_compute[248510]: 2025-12-13 09:09:08.773 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:09:08 compute-0 nova_compute[248510]: 2025-12-13 09:09:08.775 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:09:08 compute-0 nova_compute[248510]: 2025-12-13 09:09:08.776 248514 DEBUG nova.virt.libvirt.vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1296664967',display_name='tempest-TestGettingAddress-server-1296664967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1296664967',id=140,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-76ioq0wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:55Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=bdbb2140-812f-4913-a83e-7dadd1968cde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:09:08 compute-0 nova_compute[248510]: 2025-12-13 09:09:08.776 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:09:08 compute-0 nova_compute[248510]: 2025-12-13 09:09:08.778 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:09:08 compute-0 nova_compute[248510]: 2025-12-13 09:09:08.780 248514 DEBUG nova.objects.instance [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid bdbb2140-812f-4913-a83e-7dadd1968cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.034 248514 DEBUG nova.network.neutron [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updated VIF entry in instance network info cache for port 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.035 248514 DEBUG nova.network.neutron [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updating instance_info_cache with network_info: [{"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:09:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:09:09
Dec 13 09:09:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:09:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:09:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'images', 'backups']
Dec 13 09:09:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.531 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:09:09 compute-0 nova_compute[248510]:   <uuid>bdbb2140-812f-4913-a83e-7dadd1968cde</uuid>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   <name>instance-0000008c</name>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-1296664967</nova:name>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:09:07</nova:creationTime>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <nova:port uuid="c509d630-f81b-402a-bae1-eae9e8fca8ca">
Dec 13 09:09:09 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <nova:port uuid="0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2">
Dec 13 09:09:09 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe95:c79d" ipVersion="6"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <system>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <entry name="serial">bdbb2140-812f-4913-a83e-7dadd1968cde</entry>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <entry name="uuid">bdbb2140-812f-4913-a83e-7dadd1968cde</entry>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     </system>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   <os>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   </os>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   <features>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   </features>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/bdbb2140-812f-4913-a83e-7dadd1968cde_disk">
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       </source>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/bdbb2140-812f-4913-a83e-7dadd1968cde_disk.config">
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       </source>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:09:09 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:6d:6c:8b"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <target dev="tapc509d630-f8"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:95:c7:9d"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <target dev="tap0a76fd5a-5a"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/console.log" append="off"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <video>
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     </video>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:09:09 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:09:09 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:09:09 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:09:09 compute-0 nova_compute[248510]: </domain>
Dec 13 09:09:09 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.532 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Preparing to wait for external event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.533 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.533 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.534 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.534 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Preparing to wait for external event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.534 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.535 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.535 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.536 248514 DEBUG nova.virt.libvirt.vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1296664967',display_name='tempest-TestGettingAddress-server-1296664967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1296664967',id=140,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-76ioq0wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:55Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=bdbb2140-812f-4913-a83e-7dadd1968cde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.537 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.538 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.538 248514 DEBUG os_vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.540 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.541 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.546 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc509d630-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.547 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc509d630-f8, col_values=(('external_ids', {'iface-id': 'c509d630-f81b-402a-bae1-eae9e8fca8ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:6c:8b', 'vm-uuid': 'bdbb2140-812f-4913-a83e-7dadd1968cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:09 compute-0 NetworkManager[50376]: <info>  [1765616949.5988] manager: (tapc509d630-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.599 248514 DEBUG oslo_concurrency.lockutils [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.608 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.610 248514 INFO os_vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8')
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.611 248514 DEBUG nova.virt.libvirt.vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1296664967',display_name='tempest-TestGettingAddress-server-1296664967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1296664967',id=140,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-76ioq0wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:55Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=bdbb2140-812f-4913-a83e-7dadd1968cde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.612 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.613 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.613 248514 DEBUG os_vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.614 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.615 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.618 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a76fd5a-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.619 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0a76fd5a-5a, col_values=(('external_ids', {'iface-id': '0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:c7:9d', 'vm-uuid': 'bdbb2140-812f-4913-a83e-7dadd1968cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.620 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:09 compute-0 NetworkManager[50376]: <info>  [1765616949.6220] manager: (tap0a76fd5a-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/603)
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:09 compute-0 nova_compute[248510]: 2025-12-13 09:09:09.631 248514 INFO os_vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a')
Dec 13 09:09:10 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/392839541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:09:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:09:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:09:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:09:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:09:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:09:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:09:10 compute-0 nova_compute[248510]: 2025-12-13 09:09:10.272 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:09:10 compute-0 nova_compute[248510]: 2025-12-13 09:09:10.272 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:09:10 compute-0 nova_compute[248510]: 2025-12-13 09:09:10.272 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:6d:6c:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:09:10 compute-0 nova_compute[248510]: 2025-12-13 09:09:10.272 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:95:c7:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:09:10 compute-0 nova_compute[248510]: 2025-12-13 09:09:10.273 248514 INFO nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Using config drive
Dec 13 09:09:10 compute-0 nova_compute[248510]: 2025-12-13 09:09:10.299 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:09:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3242: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Dec 13 09:09:10 compute-0 nova_compute[248510]: 2025-12-13 09:09:10.994 248514 INFO nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Creating config drive at /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/disk.config
Dec 13 09:09:11 compute-0 nova_compute[248510]: 2025-12-13 09:09:11.000 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd6dvpwjl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:09:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:09:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:09:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:09:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:09:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:09:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:09:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:09:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:09:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:09:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:09:11 compute-0 ceph-mon[76537]: pgmap v3241: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:09:11 compute-0 nova_compute[248510]: 2025-12-13 09:09:11.158 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd6dvpwjl" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:09:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:09:11 compute-0 nova_compute[248510]: 2025-12-13 09:09:11.194 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:09:11 compute-0 nova_compute[248510]: 2025-12-13 09:09:11.199 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/disk.config bdbb2140-812f-4913-a83e-7dadd1968cde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:09:11 compute-0 nova_compute[248510]: 2025-12-13 09:09:11.354 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/disk.config bdbb2140-812f-4913-a83e-7dadd1968cde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:09:11 compute-0 nova_compute[248510]: 2025-12-13 09:09:11.356 248514 INFO nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Deleting local config drive /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/disk.config because it was imported into RBD.
Dec 13 09:09:11 compute-0 NetworkManager[50376]: <info>  [1765616951.4339] manager: (tapc509d630-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/604)
Dec 13 09:09:11 compute-0 kernel: tapc509d630-f8: entered promiscuous mode
Dec 13 09:09:11 compute-0 ovn_controller[148476]: 2025-12-13T09:09:11Z|01451|binding|INFO|Claiming lport c509d630-f81b-402a-bae1-eae9e8fca8ca for this chassis.
Dec 13 09:09:11 compute-0 ovn_controller[148476]: 2025-12-13T09:09:11Z|01452|binding|INFO|c509d630-f81b-402a-bae1-eae9e8fca8ca: Claiming fa:16:3e:6d:6c:8b 10.100.0.13
Dec 13 09:09:11 compute-0 nova_compute[248510]: 2025-12-13 09:09:11.440 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:11 compute-0 NetworkManager[50376]: <info>  [1765616951.4674] manager: (tap0a76fd5a-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/605)
Dec 13 09:09:11 compute-0 kernel: tap0a76fd5a-5a: entered promiscuous mode
Dec 13 09:09:11 compute-0 ovn_controller[148476]: 2025-12-13T09:09:11Z|01453|binding|INFO|Setting lport c509d630-f81b-402a-bae1-eae9e8fca8ca ovn-installed in OVS
Dec 13 09:09:11 compute-0 nova_compute[248510]: 2025-12-13 09:09:11.477 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:11 compute-0 systemd-udevd[391285]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:09:11 compute-0 systemd-udevd[391284]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:09:11 compute-0 ovn_controller[148476]: 2025-12-13T09:09:11Z|01454|if_status|INFO|Not updating pb chassis for 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 now as sb is readonly
Dec 13 09:09:11 compute-0 NetworkManager[50376]: <info>  [1765616951.4955] device (tap0a76fd5a-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:09:11 compute-0 NetworkManager[50376]: <info>  [1765616951.4963] device (tap0a76fd5a-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:09:11 compute-0 NetworkManager[50376]: <info>  [1765616951.4977] device (tapc509d630-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:09:11 compute-0 NetworkManager[50376]: <info>  [1765616951.4983] device (tapc509d630-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:09:11 compute-0 nova_compute[248510]: 2025-12-13 09:09:11.506 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:11 compute-0 systemd-machined[210538]: New machine qemu-171-instance-0000008c.
Dec 13 09:09:11 compute-0 systemd[1]: Started Virtual Machine qemu-171-instance-0000008c.
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.770 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:6c:8b 10.100.0.13'], port_security=['fa:16:3e:6d:6c:8b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bdbb2140-812f-4913-a83e-7dadd1968cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=642d5a3d-2cd7-49a8-8481-4caf7441de52, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c509d630-f81b-402a-bae1-eae9e8fca8ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.772 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c509d630-f81b-402a-bae1-eae9e8fca8ca in datapath 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 bound to our chassis
Dec 13 09:09:11 compute-0 ovn_controller[148476]: 2025-12-13T09:09:11Z|01455|binding|INFO|Claiming lport 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 for this chassis.
Dec 13 09:09:11 compute-0 ovn_controller[148476]: 2025-12-13T09:09:11Z|01456|binding|INFO|0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2: Claiming fa:16:3e:95:c7:9d 2001:db8::f816:3eff:fe95:c79d
Dec 13 09:09:11 compute-0 ovn_controller[148476]: 2025-12-13T09:09:11Z|01457|binding|INFO|Setting lport c509d630-f81b-402a-bae1-eae9e8fca8ca up in Southbound
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.773 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5
Dec 13 09:09:11 compute-0 ovn_controller[148476]: 2025-12-13T09:09:11Z|01458|binding|INFO|Setting lport 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 ovn-installed in OVS
Dec 13 09:09:11 compute-0 nova_compute[248510]: 2025-12-13 09:09:11.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.785 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:c7:9d 2001:db8::f816:3eff:fe95:c79d'], port_security=['fa:16:3e:95:c7:9d 2001:db8::f816:3eff:fe95:c79d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe95:c79d/64', 'neutron:device_id': 'bdbb2140-812f-4913-a83e-7dadd1968cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d30eddc7-04d5-44b8-966c-6e93f6525e9a, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:09:11 compute-0 ovn_controller[148476]: 2025-12-13T09:09:11Z|01459|binding|INFO|Setting lport 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 up in Southbound
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.791 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef61a464-206d-40a2-a8f4-0b37a301a3e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.840 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c5353a-f69e-4365-acf1-5867621f4e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.844 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7ffdf96b-baef-4cb7-8c10-d202770168b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.879 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bc61cdb6-b24d-4912-aa43-4b424dd553f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.904 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd349a2f-97e3-4740-9eb5-2773263c1931]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31e0a1ad-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:46:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948051, 'reachable_time': 26277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391302, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.923 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[01ce2b50-4d41-4a47-8a64-82e94f75f4f5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap31e0a1ad-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948068, 'tstamp': 948068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391303, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap31e0a1ad-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948073, 'tstamp': 948073}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391303, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.926 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31e0a1ad-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:11 compute-0 nova_compute[248510]: 2025-12-13 09:09:11.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:11 compute-0 nova_compute[248510]: 2025-12-13 09:09:11.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.929 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31e0a1ad-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.930 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.930 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31e0a1ad-e0, col_values=(('external_ids', {'iface-id': 'bbfdb82a-c1cb-4d3d-b44d-9475a3177d38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.931 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.933 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 in datapath 9cd5db3b-c54c-46c9-b59b-e477b2784420 unbound from our chassis
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.935 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cd5db3b-c54c-46c9-b59b-e477b2784420
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.951 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b681e49e-1e0b-43d2-b8cc-d58e2f662a96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.992 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e6538b6e-8b0f-4d6b-b2d4-2abc2b0d5db1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.996 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[607831b4-611a-4d66-acf0-a3fdbc3dab80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.033 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d51eaa1d-25fe-4556-9755-627470b79760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:12 compute-0 ceph-mon[76537]: pgmap v3242: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Dec 13 09:09:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.058 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e4b04283-288f-42a1-9ffd-6ff33e15a928]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cd5db3b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:ad:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948147, 'reachable_time': 31724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391351, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.075 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[06889cb0-a69f-47fe-baf1-b295fa54fd4a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9cd5db3b-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948163, 'tstamp': 948163}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391353, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.076 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cd5db3b-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.079 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.079 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cd5db3b-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.080 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:09:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.080 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cd5db3b-c0, col_values=(('external_ids', {'iface-id': '26b2c676-4744-4ec7-b362-11535b0ba025'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.080 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.095 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616952.0948906, bdbb2140-812f-4913-a83e-7dadd1968cde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.095 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] VM Started (Lifecycle Event)
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.152 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.157 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616952.0951285, bdbb2140-812f-4913-a83e-7dadd1968cde => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.157 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] VM Paused (Lifecycle Event)
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.210 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.214 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.235 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:09:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3243: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.841 248514 DEBUG nova.compute.manager [req-91783b90-3f58-412d-959e-09c0be3aa3ff req-36601082-e1f7-43eb-868d-c5faea67216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.842 248514 DEBUG oslo_concurrency.lockutils [req-91783b90-3f58-412d-959e-09c0be3aa3ff req-36601082-e1f7-43eb-868d-c5faea67216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.842 248514 DEBUG oslo_concurrency.lockutils [req-91783b90-3f58-412d-959e-09c0be3aa3ff req-36601082-e1f7-43eb-868d-c5faea67216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.842 248514 DEBUG oslo_concurrency.lockutils [req-91783b90-3f58-412d-959e-09c0be3aa3ff req-36601082-e1f7-43eb-868d-c5faea67216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:12 compute-0 nova_compute[248510]: 2025-12-13 09:09:12.842 248514 DEBUG nova.compute.manager [req-91783b90-3f58-412d-959e-09c0be3aa3ff req-36601082-e1f7-43eb-868d-c5faea67216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Processing event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:09:13 compute-0 nova_compute[248510]: 2025-12-13 09:09:13.587 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:13 compute-0 nova_compute[248510]: 2025-12-13 09:09:13.764 248514 DEBUG nova.compute.manager [req-87953135-a4e8-4ce8-bcf3-3ba205d334a7 req-6cb394a5-40e4-4ee6-a349-c6289bd5be1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:13 compute-0 nova_compute[248510]: 2025-12-13 09:09:13.764 248514 DEBUG oslo_concurrency.lockutils [req-87953135-a4e8-4ce8-bcf3-3ba205d334a7 req-6cb394a5-40e4-4ee6-a349-c6289bd5be1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:13 compute-0 nova_compute[248510]: 2025-12-13 09:09:13.765 248514 DEBUG oslo_concurrency.lockutils [req-87953135-a4e8-4ce8-bcf3-3ba205d334a7 req-6cb394a5-40e4-4ee6-a349-c6289bd5be1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:13 compute-0 nova_compute[248510]: 2025-12-13 09:09:13.765 248514 DEBUG oslo_concurrency.lockutils [req-87953135-a4e8-4ce8-bcf3-3ba205d334a7 req-6cb394a5-40e4-4ee6-a349-c6289bd5be1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:13 compute-0 nova_compute[248510]: 2025-12-13 09:09:13.765 248514 DEBUG nova.compute.manager [req-87953135-a4e8-4ce8-bcf3-3ba205d334a7 req-6cb394a5-40e4-4ee6-a349-c6289bd5be1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Processing event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:09:13 compute-0 nova_compute[248510]: 2025-12-13 09:09:13.766 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:09:13 compute-0 nova_compute[248510]: 2025-12-13 09:09:13.768 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616953.768806, bdbb2140-812f-4913-a83e-7dadd1968cde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:09:13 compute-0 nova_compute[248510]: 2025-12-13 09:09:13.769 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] VM Resumed (Lifecycle Event)
Dec 13 09:09:13 compute-0 nova_compute[248510]: 2025-12-13 09:09:13.771 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:09:13 compute-0 nova_compute[248510]: 2025-12-13 09:09:13.774 248514 INFO nova.virt.libvirt.driver [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Instance spawned successfully.
Dec 13 09:09:13 compute-0 nova_compute[248510]: 2025-12-13 09:09:13.775 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:09:14 compute-0 ceph-mon[76537]: pgmap v3243: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.052 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.059 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.064 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.064 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.065 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.065 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.066 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.066 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.433 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.477 248514 INFO nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Took 18.50 seconds to spawn the instance on the hypervisor.
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.478 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3244: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.740 248514 INFO nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Took 21.57 seconds to build instance.
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:09:14 compute-0 nova_compute[248510]: 2025-12-13 09:09:14.846 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:15 compute-0 nova_compute[248510]: 2025-12-13 09:09:15.051 248514 DEBUG nova.compute.manager [req-6111481b-bc75-471d-bff5-9fe15a326606 req-da3472c6-a61e-4668-9664-5dbd035a7adc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:15 compute-0 nova_compute[248510]: 2025-12-13 09:09:15.051 248514 DEBUG oslo_concurrency.lockutils [req-6111481b-bc75-471d-bff5-9fe15a326606 req-da3472c6-a61e-4668-9664-5dbd035a7adc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:15 compute-0 nova_compute[248510]: 2025-12-13 09:09:15.052 248514 DEBUG oslo_concurrency.lockutils [req-6111481b-bc75-471d-bff5-9fe15a326606 req-da3472c6-a61e-4668-9664-5dbd035a7adc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:15 compute-0 nova_compute[248510]: 2025-12-13 09:09:15.052 248514 DEBUG oslo_concurrency.lockutils [req-6111481b-bc75-471d-bff5-9fe15a326606 req-da3472c6-a61e-4668-9664-5dbd035a7adc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:15 compute-0 nova_compute[248510]: 2025-12-13 09:09:15.052 248514 DEBUG nova.compute.manager [req-6111481b-bc75-471d-bff5-9fe15a326606 req-da3472c6-a61e-4668-9664-5dbd035a7adc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] No waiting events found dispatching network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:09:15 compute-0 nova_compute[248510]: 2025-12-13 09:09:15.053 248514 WARNING nova.compute.manager [req-6111481b-bc75-471d-bff5-9fe15a326606 req-da3472c6-a61e-4668-9664-5dbd035a7adc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received unexpected event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 for instance with vm_state active and task_state None.
Dec 13 09:09:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:09:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4249586351' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:09:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:09:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4249586351' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:09:16 compute-0 nova_compute[248510]: 2025-12-13 09:09:16.051 248514 DEBUG nova.compute.manager [req-10fd1db4-f79c-4cb7-bd1f-7d163d97e134 req-0f0e426e-15b6-4922-9d55-38370bd45f76 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:16 compute-0 nova_compute[248510]: 2025-12-13 09:09:16.052 248514 DEBUG oslo_concurrency.lockutils [req-10fd1db4-f79c-4cb7-bd1f-7d163d97e134 req-0f0e426e-15b6-4922-9d55-38370bd45f76 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:16 compute-0 nova_compute[248510]: 2025-12-13 09:09:16.052 248514 DEBUG oslo_concurrency.lockutils [req-10fd1db4-f79c-4cb7-bd1f-7d163d97e134 req-0f0e426e-15b6-4922-9d55-38370bd45f76 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:16 compute-0 nova_compute[248510]: 2025-12-13 09:09:16.053 248514 DEBUG oslo_concurrency.lockutils [req-10fd1db4-f79c-4cb7-bd1f-7d163d97e134 req-0f0e426e-15b6-4922-9d55-38370bd45f76 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:16 compute-0 nova_compute[248510]: 2025-12-13 09:09:16.053 248514 DEBUG nova.compute.manager [req-10fd1db4-f79c-4cb7-bd1f-7d163d97e134 req-0f0e426e-15b6-4922-9d55-38370bd45f76 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] No waiting events found dispatching network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:09:16 compute-0 nova_compute[248510]: 2025-12-13 09:09:16.053 248514 WARNING nova.compute.manager [req-10fd1db4-f79c-4cb7-bd1f-7d163d97e134 req-0f0e426e-15b6-4922-9d55-38370bd45f76 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received unexpected event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca for instance with vm_state active and task_state None.
Dec 13 09:09:16 compute-0 ceph-mon[76537]: pgmap v3244: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Dec 13 09:09:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/4249586351' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:09:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/4249586351' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:09:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:09:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3245: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Dec 13 09:09:16 compute-0 nova_compute[248510]: 2025-12-13 09:09:16.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:09:18 compute-0 ceph-mon[76537]: pgmap v3245: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Dec 13 09:09:18 compute-0 nova_compute[248510]: 2025-12-13 09:09:18.589 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3246: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 728 KiB/s rd, 15 KiB/s wr, 34 op/s
Dec 13 09:09:19 compute-0 nova_compute[248510]: 2025-12-13 09:09:19.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:20 compute-0 ceph-mon[76537]: pgmap v3246: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 728 KiB/s rd, 15 KiB/s wr, 34 op/s
Dec 13 09:09:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3247: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Dec 13 09:09:20 compute-0 podman[391357]: 2025-12-13 09:09:20.994058163 +0000 UTC m=+0.064686371 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 09:09:21 compute-0 podman[391356]: 2025-12-13 09:09:21.013506975 +0000 UTC m=+0.084889392 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd)
Dec 13 09:09:21 compute-0 podman[391355]: 2025-12-13 09:09:21.036317164 +0000 UTC m=+0.117418832 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 09:09:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011222335815181286 of space, bias 1.0, pg target 0.33667007445543856 quantized to 32 (current 32)
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697484041444878 of space, bias 1.0, pg target 0.20092452124334634 quantized to 32 (current 32)
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:09:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:09:22 compute-0 ceph-mon[76537]: pgmap v3247: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Dec 13 09:09:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3248: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:09:22 compute-0 nova_compute[248510]: 2025-12-13 09:09:22.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:09:23 compute-0 nova_compute[248510]: 2025-12-13 09:09:23.590 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:23 compute-0 nova_compute[248510]: 2025-12-13 09:09:23.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:09:23 compute-0 nova_compute[248510]: 2025-12-13 09:09:23.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:09:23 compute-0 nova_compute[248510]: 2025-12-13 09:09:23.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:09:24 compute-0 ceph-mon[76537]: pgmap v3248: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:09:24 compute-0 nova_compute[248510]: 2025-12-13 09:09:24.655 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3249: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 09:09:25 compute-0 ovn_controller[148476]: 2025-12-13T09:09:25Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:6c:8b 10.100.0.13
Dec 13 09:09:25 compute-0 ovn_controller[148476]: 2025-12-13T09:09:25Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:6c:8b 10.100.0.13
Dec 13 09:09:26 compute-0 ceph-mon[76537]: pgmap v3249: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 09:09:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:09:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3250: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 64 op/s
Dec 13 09:09:27 compute-0 nova_compute[248510]: 2025-12-13 09:09:27.354 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:09:27 compute-0 nova_compute[248510]: 2025-12-13 09:09:27.354 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:09:27 compute-0 nova_compute[248510]: 2025-12-13 09:09:27.355 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 09:09:27 compute-0 nova_compute[248510]: 2025-12-13 09:09:27.355 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54c767d1-14e1-4a29-be59-440d4e412c4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:09:28 compute-0 ceph-mon[76537]: pgmap v3250: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 64 op/s
Dec 13 09:09:28 compute-0 nova_compute[248510]: 2025-12-13 09:09:28.593 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3251: 321 pgs: 321 active+clean; 175 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 528 KiB/s wr, 85 op/s
Dec 13 09:09:29 compute-0 nova_compute[248510]: 2025-12-13 09:09:29.709 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:30 compute-0 ceph-mon[76537]: pgmap v3251: 321 pgs: 321 active+clean; 175 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 528 KiB/s wr, 85 op/s
Dec 13 09:09:30 compute-0 nova_compute[248510]: 2025-12-13 09:09:30.499 248514 DEBUG nova.compute.manager [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-changed-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:30 compute-0 nova_compute[248510]: 2025-12-13 09:09:30.500 248514 DEBUG nova.compute.manager [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing instance network info cache due to event network-changed-c509d630-f81b-402a-bae1-eae9e8fca8ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:09:30 compute-0 nova_compute[248510]: 2025-12-13 09:09:30.500 248514 DEBUG oslo_concurrency.lockutils [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:09:30 compute-0 nova_compute[248510]: 2025-12-13 09:09:30.501 248514 DEBUG oslo_concurrency.lockutils [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:09:30 compute-0 nova_compute[248510]: 2025-12-13 09:09:30.501 248514 DEBUG nova.network.neutron [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing network info cache for port c509d630-f81b-402a-bae1-eae9e8fca8ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:09:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3252: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Dec 13 09:09:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:09:32 compute-0 ceph-mon[76537]: pgmap v3252: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.136 248514 DEBUG nova.network.neutron [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updated VIF entry in instance network info cache for port c509d630-f81b-402a-bae1-eae9e8fca8ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.136 248514 DEBUG nova.network.neutron [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updating instance_info_cache with network_info: [{"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.290 248514 DEBUG oslo_concurrency.lockutils [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.684 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:09:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3253: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.731 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.731 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.732 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.732 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.732 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.732 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.795 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.796 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.796 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.796 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:09:32 compute-0 nova_compute[248510]: 2025-12-13 09:09:32.796 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:09:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:09:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412450346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.396 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.589 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.589 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.595 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.596 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.596 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.856 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.857 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3113MB free_disk=59.89646621514112GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.857 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.857 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.993 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 54c767d1-14e1-4a29-be59-440d4e412c4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.993 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance bdbb2140-812f-4913-a83e-7dadd1968cde actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.993 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:09:33 compute-0 nova_compute[248510]: 2025-12-13 09:09:33.994 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.076 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:09:34 compute-0 ceph-mon[76537]: pgmap v3253: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 09:09:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1412450346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.375 248514 DEBUG nova.compute.manager [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-changed-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.376 248514 DEBUG nova.compute.manager [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing instance network info cache due to event network-changed-c509d630-f81b-402a-bae1-eae9e8fca8ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.376 248514 DEBUG oslo_concurrency.lockutils [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.376 248514 DEBUG oslo_concurrency.lockutils [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.376 248514 DEBUG nova.network.neutron [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing network info cache for port c509d630-f81b-402a-bae1-eae9e8fca8ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.463 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.463 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.464 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.464 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.464 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.466 248514 INFO nova.compute.manager [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Terminating instance
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.467 248514 DEBUG nova.compute.manager [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:09:34 compute-0 kernel: tapc509d630-f8 (unregistering): left promiscuous mode
Dec 13 09:09:34 compute-0 NetworkManager[50376]: <info>  [1765616974.5277] device (tapc509d630-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:09:34 compute-0 ovn_controller[148476]: 2025-12-13T09:09:34Z|01460|binding|INFO|Releasing lport c509d630-f81b-402a-bae1-eae9e8fca8ca from this chassis (sb_readonly=0)
Dec 13 09:09:34 compute-0 ovn_controller[148476]: 2025-12-13T09:09:34Z|01461|binding|INFO|Setting lport c509d630-f81b-402a-bae1-eae9e8fca8ca down in Southbound
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.565 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 ovn_controller[148476]: 2025-12-13T09:09:34Z|01462|binding|INFO|Removing iface tapc509d630-f8 ovn-installed in OVS
Dec 13 09:09:34 compute-0 kernel: tap0a76fd5a-5a (unregistering): left promiscuous mode
Dec 13 09:09:34 compute-0 NetworkManager[50376]: <info>  [1765616974.5836] device (tap0a76fd5a-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 ovn_controller[148476]: 2025-12-13T09:09:34Z|01463|binding|INFO|Releasing lport 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 from this chassis (sb_readonly=1)
Dec 13 09:09:34 compute-0 ovn_controller[148476]: 2025-12-13T09:09:34Z|01464|binding|INFO|Removing iface tap0a76fd5a-5a ovn-installed in OVS
Dec 13 09:09:34 compute-0 ovn_controller[148476]: 2025-12-13T09:09:34Z|01465|if_status|INFO|Dropped 2 log messages in last 255 seconds (most recently, 255 seconds ago) due to excessive rate
Dec 13 09:09:34 compute-0 ovn_controller[148476]: 2025-12-13T09:09:34Z|01466|if_status|INFO|Not setting lport 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 down as sb is readonly
Dec 13 09:09:34 compute-0 ovn_controller[148476]: 2025-12-13T09:09:34Z|01467|binding|INFO|Setting lport 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 down in Southbound
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.605 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:6c:8b 10.100.0.13'], port_security=['fa:16:3e:6d:6c:8b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bdbb2140-812f-4913-a83e-7dadd1968cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=642d5a3d-2cd7-49a8-8481-4caf7441de52, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c509d630-f81b-402a-bae1-eae9e8fca8ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.609 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c509d630-f81b-402a-bae1-eae9e8fca8ca in datapath 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 unbound from our chassis
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.618 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.638 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b2178f4d-60ab-452f-84d7-8115943a1870]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:34 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Dec 13 09:09:34 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008c.scope: Consumed 13.552s CPU time.
Dec 13 09:09:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:09:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/795097538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:09:34 compute-0 systemd-machined[210538]: Machine qemu-171-instance-0000008c terminated.
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.677 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.685 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[955d6f82-f6c5-4a9b-9df6-1733b3f73b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.688 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:09:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3254: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.691 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a8558d6b-d0ab-4f23-93b0-bb8258dce154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:34 compute-0 NetworkManager[50376]: <info>  [1765616974.7022] manager: (tap0a76fd5a-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/606)
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.711 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.717 248514 INFO nova.virt.libvirt.driver [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Instance destroyed successfully.
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.717 248514 DEBUG nova.objects.instance [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid bdbb2140-812f-4913-a83e-7dadd1968cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.727 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3b31adad-4edd-48fc-b47c-be3e05aab511]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.743 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc8859f1-0dce-45a4-a745-40515034f7af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31e0a1ad-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:46:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948051, 'reachable_time': 26277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391502, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.768 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[36c71d3c-a8a6-4978-9a9e-4a127c1e0c48]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap31e0a1ad-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948068, 'tstamp': 948068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391503, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap31e0a1ad-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948073, 'tstamp': 948073}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391503, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.771 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31e0a1ad-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.784 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31e0a1ad-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.784 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.784 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31e0a1ad-e0, col_values=(('external_ids', {'iface-id': 'bbfdb82a-c1cb-4d3d-b44d-9475a3177d38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.785 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.873 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:c7:9d 2001:db8::f816:3eff:fe95:c79d'], port_security=['fa:16:3e:95:c7:9d 2001:db8::f816:3eff:fe95:c79d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe95:c79d/64', 'neutron:device_id': 'bdbb2140-812f-4913-a83e-7dadd1968cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d30eddc7-04d5-44b8-966c-6e93f6525e9a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.874 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 in datapath 9cd5db3b-c54c-46c9-b59b-e477b2784420 unbound from our chassis
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.875 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cd5db3b-c54c-46c9-b59b-e477b2784420
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.890 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.894 248514 DEBUG nova.virt.libvirt.vif [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:08:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1296664967',display_name='tempest-TestGettingAddress-server-1296664967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1296664967',id=140,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-76ioq0wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:09:14Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=bdbb2140-812f-4913-a83e-7dadd1968cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.895 248514 DEBUG nova.network.os_vif_util [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.895 248514 DEBUG nova.network.os_vif_util [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.896 248514 DEBUG os_vif [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.897 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.898 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc509d630-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.898 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39f435b7-2b27-4a01-8f00-2ae8c2dfa4ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.902 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.909 248514 INFO os_vif [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8')
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.910 248514 DEBUG nova.virt.libvirt.vif [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:08:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1296664967',display_name='tempest-TestGettingAddress-server-1296664967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1296664967',id=140,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-76ioq0wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:09:14Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=bdbb2140-812f-4913-a83e-7dadd1968cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.910 248514 DEBUG nova.network.os_vif_util [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.911 248514 DEBUG nova.network.os_vif_util [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.911 248514 DEBUG os_vif [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.912 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.913 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a76fd5a-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.918 248514 INFO os_vif [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a')
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.941 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.941 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.950 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[399f0dbb-4b29-4f3f-8583-cd83b5c3a710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.954 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f82ffa15-dc09-41d0-a842-ba3ee4531d42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.980 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:09:34 compute-0 nova_compute[248510]: 2025-12-13 09:09:34.981 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:09:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.997 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b7fd9ba3-2792-49ed-9334-b94532f6925d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.019 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd72d488-fb70-4eaf-991a-6d5e4ed104b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cd5db3b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:ad:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948147, 'reachable_time': 31724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391530, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.044 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f47e808-2622-4176-902a-998f32098fc4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9cd5db3b-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948163, 'tstamp': 948163}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391531, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cd5db3b-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.049 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.050 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cd5db3b-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.050 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:09:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.051 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cd5db3b-c0, col_values=(('external_ids', {'iface-id': '26b2c676-4744-4ec7-b362-11535b0ba025'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.051 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:09:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/795097538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.214 248514 INFO nova.virt.libvirt.driver [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Deleting instance files /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde_del
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.215 248514 INFO nova.virt.libvirt.driver [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Deletion of /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde_del complete
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.250 248514 DEBUG nova.compute.manager [req-d4933263-15d7-44fb-8bb0-8038cdf02f97 req-2246fb0f-bd14-4f48-9636-4384418537da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-unplugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.250 248514 DEBUG oslo_concurrency.lockutils [req-d4933263-15d7-44fb-8bb0-8038cdf02f97 req-2246fb0f-bd14-4f48-9636-4384418537da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.251 248514 DEBUG oslo_concurrency.lockutils [req-d4933263-15d7-44fb-8bb0-8038cdf02f97 req-2246fb0f-bd14-4f48-9636-4384418537da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.251 248514 DEBUG oslo_concurrency.lockutils [req-d4933263-15d7-44fb-8bb0-8038cdf02f97 req-2246fb0f-bd14-4f48-9636-4384418537da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.251 248514 DEBUG nova.compute.manager [req-d4933263-15d7-44fb-8bb0-8038cdf02f97 req-2246fb0f-bd14-4f48-9636-4384418537da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] No waiting events found dispatching network-vif-unplugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.251 248514 DEBUG nova.compute.manager [req-d4933263-15d7-44fb-8bb0-8038cdf02f97 req-2246fb0f-bd14-4f48-9636-4384418537da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-unplugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.359 248514 INFO nova.compute.manager [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Took 0.89 seconds to destroy the instance on the hypervisor.
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.359 248514 DEBUG oslo.service.loopingcall [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.360 248514 DEBUG nova.compute.manager [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:09:35 compute-0 nova_compute[248510]: 2025-12-13 09:09:35.360 248514 DEBUG nova.network.neutron [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:09:36 compute-0 ceph-mon[76537]: pgmap v3254: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 09:09:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.484 248514 DEBUG nova.compute.manager [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-unplugged-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.484 248514 DEBUG oslo_concurrency.lockutils [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.484 248514 DEBUG oslo_concurrency.lockutils [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.485 248514 DEBUG oslo_concurrency.lockutils [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.485 248514 DEBUG nova.compute.manager [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] No waiting events found dispatching network-vif-unplugged-c509d630-f81b-402a-bae1-eae9e8fca8ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.485 248514 DEBUG nova.compute.manager [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-unplugged-c509d630-f81b-402a-bae1-eae9e8fca8ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.485 248514 DEBUG nova.compute.manager [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.485 248514 DEBUG oslo_concurrency.lockutils [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.486 248514 DEBUG oslo_concurrency.lockutils [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.486 248514 DEBUG oslo_concurrency.lockutils [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.486 248514 DEBUG nova.compute.manager [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] No waiting events found dispatching network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.486 248514 WARNING nova.compute.manager [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received unexpected event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca for instance with vm_state active and task_state deleting.
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.656 248514 DEBUG nova.network.neutron [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updated VIF entry in instance network info cache for port c509d630-f81b-402a-bae1-eae9e8fca8ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.656 248514 DEBUG nova.network.neutron [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updating instance_info_cache with network_info: [{"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:09:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3255: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.690 248514 DEBUG oslo_concurrency.lockutils [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.918 248514 DEBUG nova.network.neutron [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.936 248514 INFO nova.compute.manager [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Took 1.58 seconds to deallocate network for instance.
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.982 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:36 compute-0 nova_compute[248510]: 2025-12-13 09:09:36.982 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:37 compute-0 nova_compute[248510]: 2025-12-13 09:09:37.061 248514 DEBUG oslo_concurrency.processutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:09:37 compute-0 nova_compute[248510]: 2025-12-13 09:09:37.378 248514 DEBUG nova.compute.manager [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:37 compute-0 nova_compute[248510]: 2025-12-13 09:09:37.379 248514 DEBUG oslo_concurrency.lockutils [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:37 compute-0 nova_compute[248510]: 2025-12-13 09:09:37.379 248514 DEBUG oslo_concurrency.lockutils [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:37 compute-0 nova_compute[248510]: 2025-12-13 09:09:37.380 248514 DEBUG oslo_concurrency.lockutils [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:37 compute-0 nova_compute[248510]: 2025-12-13 09:09:37.381 248514 DEBUG nova.compute.manager [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] No waiting events found dispatching network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:09:37 compute-0 nova_compute[248510]: 2025-12-13 09:09:37.381 248514 WARNING nova.compute.manager [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received unexpected event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 for instance with vm_state deleted and task_state None.
Dec 13 09:09:37 compute-0 nova_compute[248510]: 2025-12-13 09:09:37.382 248514 DEBUG nova.compute.manager [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-deleted-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:37 compute-0 nova_compute[248510]: 2025-12-13 09:09:37.382 248514 DEBUG nova.compute.manager [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-deleted-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:09:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860343970' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:09:37 compute-0 nova_compute[248510]: 2025-12-13 09:09:37.692 248514 DEBUG oslo_concurrency.processutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:09:37 compute-0 nova_compute[248510]: 2025-12-13 09:09:37.697 248514 DEBUG nova.compute.provider_tree [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:09:38 compute-0 nova_compute[248510]: 2025-12-13 09:09:38.106 248514 DEBUG nova.scheduler.client.report [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:09:38 compute-0 nova_compute[248510]: 2025-12-13 09:09:38.131 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:38 compute-0 ceph-mon[76537]: pgmap v3255: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 09:09:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2860343970' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:09:38 compute-0 nova_compute[248510]: 2025-12-13 09:09:38.166 248514 INFO nova.scheduler.client.report [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance bdbb2140-812f-4913-a83e-7dadd1968cde
Dec 13 09:09:38 compute-0 nova_compute[248510]: 2025-12-13 09:09:38.274 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:38 compute-0 nova_compute[248510]: 2025-12-13 09:09:38.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3256: 321 pgs: 321 active+clean; 172 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Dec 13 09:09:39 compute-0 nova_compute[248510]: 2025-12-13 09:09:39.969 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:09:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:09:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:09:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:09:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:09:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:09:40 compute-0 ceph-mon[76537]: pgmap v3256: 321 pgs: 321 active+clean; 172 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Dec 13 09:09:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3257: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 1.6 MiB/s wr, 75 op/s
Dec 13 09:09:40 compute-0 nova_compute[248510]: 2025-12-13 09:09:40.887 248514 DEBUG nova.compute.manager [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-changed-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:40 compute-0 nova_compute[248510]: 2025-12-13 09:09:40.887 248514 DEBUG nova.compute.manager [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing instance network info cache due to event network-changed-820e6a4a-a776-40a1-a7c0-d34cd3d2543c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:09:40 compute-0 nova_compute[248510]: 2025-12-13 09:09:40.888 248514 DEBUG oslo_concurrency.lockutils [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:09:40 compute-0 nova_compute[248510]: 2025-12-13 09:09:40.888 248514 DEBUG oslo_concurrency.lockutils [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:09:40 compute-0 nova_compute[248510]: 2025-12-13 09:09:40.888 248514 DEBUG nova.network.neutron [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing network info cache for port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.013 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.014 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.015 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.015 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.016 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.018 248514 INFO nova.compute.manager [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Terminating instance
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.020 248514 DEBUG nova.compute.manager [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:09:41 compute-0 kernel: tap820e6a4a-a7 (unregistering): left promiscuous mode
Dec 13 09:09:41 compute-0 NetworkManager[50376]: <info>  [1765616981.0879] device (tap820e6a4a-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:09:41 compute-0 ovn_controller[148476]: 2025-12-13T09:09:41Z|01468|binding|INFO|Releasing lport 820e6a4a-a776-40a1-a7c0-d34cd3d2543c from this chassis (sb_readonly=0)
Dec 13 09:09:41 compute-0 ovn_controller[148476]: 2025-12-13T09:09:41Z|01469|binding|INFO|Setting lport 820e6a4a-a776-40a1-a7c0-d34cd3d2543c down in Southbound
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.111 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 ovn_controller[148476]: 2025-12-13T09:09:41Z|01470|binding|INFO|Removing iface tap820e6a4a-a7 ovn-installed in OVS
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.114 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.125 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:71:ee 10.100.0.12'], port_security=['fa:16:3e:bc:71:ee 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '54c767d1-14e1-4a29-be59-440d4e412c4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=642d5a3d-2cd7-49a8-8481-4caf7441de52, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=820e6a4a-a776-40a1-a7c0-d34cd3d2543c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.126 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c in datapath 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 unbound from our chassis
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.129 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.130 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4534f8-14f5-4caa-9213-9155bc913bae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.131 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 namespace which is not needed anymore
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.133 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 kernel: tap06da1719-40 (unregistering): left promiscuous mode
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 NetworkManager[50376]: <info>  [1765616981.1467] device (tap06da1719-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:09:41 compute-0 ovn_controller[148476]: 2025-12-13T09:09:41Z|01471|binding|INFO|Releasing lport 06da1719-402b-4c36-b4b8-dcc95ed8b65c from this chassis (sb_readonly=0)
Dec 13 09:09:41 compute-0 ovn_controller[148476]: 2025-12-13T09:09:41Z|01472|binding|INFO|Setting lport 06da1719-402b-4c36-b4b8-dcc95ed8b65c down in Southbound
Dec 13 09:09:41 compute-0 ovn_controller[148476]: 2025-12-13T09:09:41Z|01473|binding|INFO|Removing iface tap06da1719-40 ovn-installed in OVS
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.170 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:d9:03 2001:db8::f816:3eff:fef0:d903'], port_security=['fa:16:3e:f0:d9:03 2001:db8::f816:3eff:fef0:d903'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:d903/64', 'neutron:device_id': '54c767d1-14e1-4a29-be59-440d4e412c4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d30eddc7-04d5-44b8-966c-6e93f6525e9a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=06da1719-402b-4c36-b4b8-dcc95ed8b65c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.188 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Dec 13 09:09:41 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008b.scope: Consumed 16.793s CPU time.
Dec 13 09:09:41 compute-0 systemd-machined[210538]: Machine qemu-170-instance-0000008b terminated.
Dec 13 09:09:41 compute-0 NetworkManager[50376]: <info>  [1765616981.2482] manager: (tap820e6a4a-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Dec 13 09:09:41 compute-0 NetworkManager[50376]: <info>  [1765616981.2608] manager: (tap06da1719-40): new Tun device (/org/freedesktop/NetworkManager/Devices/608)
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.279 248514 INFO nova.virt.libvirt.driver [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Instance destroyed successfully.
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.280 248514 DEBUG nova.objects.instance [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 54c767d1-14e1-4a29-be59-440d4e412c4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.309 248514 DEBUG nova.virt.libvirt.vif [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:08:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-698361226',display_name='tempest-TestGettingAddress-server-698361226',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-698361226',id=139,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:08:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-xmk45p0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:08:24Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=54c767d1-14e1-4a29-be59-440d4e412c4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.310 248514 DEBUG nova.network.os_vif_util [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.311 248514 DEBUG nova.network.os_vif_util [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.312 248514 DEBUG os_vif [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.314 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap820e6a4a-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.318 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:09:41 compute-0 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [NOTICE]   (389874) : haproxy version is 2.8.14-c23fe91
Dec 13 09:09:41 compute-0 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [NOTICE]   (389874) : path to executable is /usr/sbin/haproxy
Dec 13 09:09:41 compute-0 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [WARNING]  (389874) : Exiting Master process...
Dec 13 09:09:41 compute-0 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [WARNING]  (389874) : Exiting Master process...
Dec 13 09:09:41 compute-0 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [ALERT]    (389874) : Current worker (389876) exited with code 143 (Terminated)
Dec 13 09:09:41 compute-0 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [WARNING]  (389874) : All workers exited. Exiting... (0)
Dec 13 09:09:41 compute-0 systemd[1]: libpod-63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922.scope: Deactivated successfully.
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.323 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.326 248514 INFO os_vif [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7')
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.327 248514 DEBUG nova.virt.libvirt.vif [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:08:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-698361226',display_name='tempest-TestGettingAddress-server-698361226',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-698361226',id=139,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:08:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-xmk45p0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:08:24Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=54c767d1-14e1-4a29-be59-440d4e412c4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.328 248514 DEBUG nova.network.os_vif_util [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.328 248514 DEBUG nova.network.os_vif_util [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.329 248514 DEBUG os_vif [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:09:41 compute-0 podman[391587]: 2025-12-13 09:09:41.330154572 +0000 UTC m=+0.057880635 container died 63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.334 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06da1719-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.343 248514 INFO os_vif [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40')
Dec 13 09:09:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922-userdata-shm.mount: Deactivated successfully.
Dec 13 09:09:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d43e0b8694a8e8a523da23bc5772976ab296f6f8c8d08138f08d92ba5a318180-merged.mount: Deactivated successfully.
Dec 13 09:09:41 compute-0 podman[391587]: 2025-12-13 09:09:41.375373339 +0000 UTC m=+0.103099312 container cleanup 63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:09:41 compute-0 systemd[1]: libpod-conmon-63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922.scope: Deactivated successfully.
Dec 13 09:09:41 compute-0 podman[391648]: 2025-12-13 09:09:41.459418279 +0000 UTC m=+0.056896520 container remove 63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.466 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1b72e7d3-d3ad-4e38-804c-170c0e1846b2]: (4, ('Sat Dec 13 09:09:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 (63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922)\n63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922\nSat Dec 13 09:09:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 (63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922)\n63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.468 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc00df96-e34c-4872-bf87-af6ea6544260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.470 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31e0a1ad-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 kernel: tap31e0a1ad-e0: left promiscuous mode
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.486 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f536fc3a-6c17-4f92-8c5f-d7d0a8f2ec97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.507 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0144054-6d63-45ff-aa87-f832fec4818e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.508 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79dac72c-a41c-4782-8537-fd0a46858cfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.536 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[15644d06-5430-4bf3-93b9-6102cd86c9b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948042, 'reachable_time': 32120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391666, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d31e0a1ad\x2dee13\x2d4de2\x2dbfd5\x2d7dfcf6ce6ab5.mount: Deactivated successfully.
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.545 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.546 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c6e3c0-8ea5-45bd-8546-67f9cfa84111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.547 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 06da1719-402b-4c36-b4b8-dcc95ed8b65c in datapath 9cd5db3b-c54c-46c9-b59b-e477b2784420 unbound from our chassis
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.550 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9cd5db3b-c54c-46c9-b59b-e477b2784420, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.551 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0edf5d-bcc7-46ea-a35a-5063f05da192]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.553 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420 namespace which is not needed anymore
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.667 248514 INFO nova.virt.libvirt.driver [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Deleting instance files /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e_del
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.668 248514 INFO nova.virt.libvirt.driver [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Deletion of /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e_del complete
Dec 13 09:09:41 compute-0 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [NOTICE]   (389946) : haproxy version is 2.8.14-c23fe91
Dec 13 09:09:41 compute-0 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [NOTICE]   (389946) : path to executable is /usr/sbin/haproxy
Dec 13 09:09:41 compute-0 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [WARNING]  (389946) : Exiting Master process...
Dec 13 09:09:41 compute-0 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [ALERT]    (389946) : Current worker (389948) exited with code 143 (Terminated)
Dec 13 09:09:41 compute-0 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [WARNING]  (389946) : All workers exited. Exiting... (0)
Dec 13 09:09:41 compute-0 systemd[1]: libpod-a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b.scope: Deactivated successfully.
Dec 13 09:09:41 compute-0 podman[391683]: 2025-12-13 09:09:41.710674744 +0000 UTC m=+0.054972310 container died a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 09:09:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b-userdata-shm.mount: Deactivated successfully.
Dec 13 09:09:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-053d728be1d4b2966613ad5805ec267c950bc3159538ec562a41c9a868f9efff-merged.mount: Deactivated successfully.
Dec 13 09:09:41 compute-0 podman[391683]: 2025-12-13 09:09:41.758625912 +0000 UTC m=+0.102923518 container cleanup a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:09:41 compute-0 systemd[1]: libpod-conmon-a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b.scope: Deactivated successfully.
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.799 248514 DEBUG nova.compute.manager [req-2047f10f-12b2-4ac8-936c-d9698b315eac req-0bacb4f2-e6c8-4d16-9bec-591f6d67f679 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-unplugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.799 248514 DEBUG oslo_concurrency.lockutils [req-2047f10f-12b2-4ac8-936c-d9698b315eac req-0bacb4f2-e6c8-4d16-9bec-591f6d67f679 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.800 248514 DEBUG oslo_concurrency.lockutils [req-2047f10f-12b2-4ac8-936c-d9698b315eac req-0bacb4f2-e6c8-4d16-9bec-591f6d67f679 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.801 248514 DEBUG oslo_concurrency.lockutils [req-2047f10f-12b2-4ac8-936c-d9698b315eac req-0bacb4f2-e6c8-4d16-9bec-591f6d67f679 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.801 248514 DEBUG nova.compute.manager [req-2047f10f-12b2-4ac8-936c-d9698b315eac req-0bacb4f2-e6c8-4d16-9bec-591f6d67f679 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] No waiting events found dispatching network-vif-unplugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.802 248514 DEBUG nova.compute.manager [req-2047f10f-12b2-4ac8-936c-d9698b315eac req-0bacb4f2-e6c8-4d16-9bec-591f6d67f679 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-unplugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:09:41 compute-0 podman[391712]: 2025-12-13 09:09:41.839114098 +0000 UTC m=+0.056190410 container remove a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.845 248514 INFO nova.compute.manager [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Took 0.82 seconds to destroy the instance on the hypervisor.
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.846 248514 DEBUG oslo.service.loopingcall [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.846 248514 DEBUG nova.compute.manager [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.847 248514 DEBUG nova.network.neutron [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.849 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a376c5-574e-41b0-8c58-bd678ac90030]: (4, ('Sat Dec 13 09:09:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420 (a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b)\na55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b\nSat Dec 13 09:09:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420 (a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b)\na55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.851 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc3cab2-2717-4124-be28-d2445b659fd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.853 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cd5db3b-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:09:41 compute-0 kernel: tap9cd5db3b-c0: left promiscuous mode
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 nova_compute[248510]: 2025-12-13 09:09:41.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.874 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9d2fbf-fb6b-4523-a219-7de9abde7d36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.886 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[15d6957a-c08d-4fd0-b314-1ae39213f76b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.888 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbf5399-8d8e-4001-95ed-959cb3106c9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.912 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57b83c01-9fda-4a27-8955-ba71eb9f94bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948139, 'reachable_time': 28482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391725, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.914 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:09:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.915 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[55152dfc-00bf-4c72-ab15-61b87fb0e592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:09:42 compute-0 ceph-mon[76537]: pgmap v3257: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 1.6 MiB/s wr, 75 op/s
Dec 13 09:09:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d9cd5db3b\x2dc54c\x2d46c9\x2db59b\x2de477b2784420.mount: Deactivated successfully.
Dec 13 09:09:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3258: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 18 KiB/s wr, 30 op/s
Dec 13 09:09:42 compute-0 nova_compute[248510]: 2025-12-13 09:09:42.869 248514 DEBUG nova.network.neutron [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updated VIF entry in instance network info cache for port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:09:42 compute-0 nova_compute[248510]: 2025-12-13 09:09:42.870 248514 DEBUG nova.network.neutron [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.292 248514 DEBUG nova.compute.manager [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-unplugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.293 248514 DEBUG oslo_concurrency.lockutils [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.294 248514 DEBUG oslo_concurrency.lockutils [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.295 248514 DEBUG oslo_concurrency.lockutils [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.295 248514 DEBUG nova.compute.manager [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] No waiting events found dispatching network-vif-unplugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.296 248514 DEBUG nova.compute.manager [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-unplugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.296 248514 DEBUG nova.compute.manager [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.297 248514 DEBUG oslo_concurrency.lockutils [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.297 248514 DEBUG oslo_concurrency.lockutils [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.297 248514 DEBUG oslo_concurrency.lockutils [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.298 248514 DEBUG nova.compute.manager [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] No waiting events found dispatching network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.298 248514 WARNING nova.compute.manager [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received unexpected event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c for instance with vm_state active and task_state deleting.
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.304 248514 DEBUG oslo_concurrency.lockutils [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.752 248514 DEBUG nova.network.neutron [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.769 248514 INFO nova.compute.manager [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Took 1.92 seconds to deallocate network for instance.
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.825 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.826 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.893 248514 DEBUG oslo_concurrency.processutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.966 248514 DEBUG nova.compute.manager [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.967 248514 DEBUG oslo_concurrency.lockutils [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.968 248514 DEBUG oslo_concurrency.lockutils [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.969 248514 DEBUG oslo_concurrency.lockutils [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.969 248514 DEBUG nova.compute.manager [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] No waiting events found dispatching network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.970 248514 WARNING nova.compute.manager [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received unexpected event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c for instance with vm_state deleted and task_state None.
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.970 248514 DEBUG nova.compute.manager [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-deleted-06da1719-402b-4c36-b4b8-dcc95ed8b65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:43 compute-0 nova_compute[248510]: 2025-12-13 09:09:43.970 248514 DEBUG nova.compute.manager [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-deleted-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:09:44 compute-0 ceph-mon[76537]: pgmap v3258: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 18 KiB/s wr, 30 op/s
Dec 13 09:09:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:09:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882559847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:09:44 compute-0 nova_compute[248510]: 2025-12-13 09:09:44.473 248514 DEBUG oslo_concurrency.processutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:09:44 compute-0 nova_compute[248510]: 2025-12-13 09:09:44.481 248514 DEBUG nova.compute.provider_tree [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:09:44 compute-0 nova_compute[248510]: 2025-12-13 09:09:44.499 248514 DEBUG nova.scheduler.client.report [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:09:44 compute-0 nova_compute[248510]: 2025-12-13 09:09:44.518 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:44 compute-0 nova_compute[248510]: 2025-12-13 09:09:44.550 248514 INFO nova.scheduler.client.report [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 54c767d1-14e1-4a29-be59-440d4e412c4e
Dec 13 09:09:44 compute-0 nova_compute[248510]: 2025-12-13 09:09:44.636 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3259: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 19 KiB/s wr, 58 op/s
Dec 13 09:09:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1882559847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:09:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:09:46 compute-0 ceph-mon[76537]: pgmap v3259: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 19 KiB/s wr, 58 op/s
Dec 13 09:09:46 compute-0 nova_compute[248510]: 2025-12-13 09:09:46.338 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3260: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 55 op/s
Dec 13 09:09:48 compute-0 ceph-mon[76537]: pgmap v3260: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 55 op/s
Dec 13 09:09:48 compute-0 nova_compute[248510]: 2025-12-13 09:09:48.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3261: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 55 op/s
Dec 13 09:09:49 compute-0 nova_compute[248510]: 2025-12-13 09:09:49.715 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616974.7133727, bdbb2140-812f-4913-a83e-7dadd1968cde => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:09:49 compute-0 nova_compute[248510]: 2025-12-13 09:09:49.716 248514 INFO nova.compute.manager [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] VM Stopped (Lifecycle Event)
Dec 13 09:09:49 compute-0 nova_compute[248510]: 2025-12-13 09:09:49.784 248514 DEBUG nova.compute.manager [None req-f8c90c71-2344-4fa7-a4d5-b8972c14a79c - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:09:50 compute-0 ceph-mon[76537]: pgmap v3261: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 55 op/s
Dec 13 09:09:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3262: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Dec 13 09:09:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:09:51 compute-0 nova_compute[248510]: 2025-12-13 09:09:51.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:52 compute-0 podman[391749]: 2025-12-13 09:09:52.012875752 +0000 UTC m=+0.090089566 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 09:09:52 compute-0 podman[391750]: 2025-12-13 09:09:52.021222667 +0000 UTC m=+0.096453170 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 09:09:52 compute-0 podman[391748]: 2025-12-13 09:09:52.049300242 +0000 UTC m=+0.130037287 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Dec 13 09:09:52 compute-0 ceph-mon[76537]: pgmap v3262: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Dec 13 09:09:52 compute-0 nova_compute[248510]: 2025-12-13 09:09:52.602 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:52 compute-0 nova_compute[248510]: 2025-12-13 09:09:52.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3263: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:09:53 compute-0 nova_compute[248510]: 2025-12-13 09:09:53.605 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:54 compute-0 ceph-mon[76537]: pgmap v3263: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:09:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3264: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:09:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:55.447 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:09:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:55.448 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:09:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:09:55.448 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:09:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:09:56 compute-0 nova_compute[248510]: 2025-12-13 09:09:56.277 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616981.2758005, 54c767d1-14e1-4a29-be59-440d4e412c4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:09:56 compute-0 nova_compute[248510]: 2025-12-13 09:09:56.277 248514 INFO nova.compute.manager [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] VM Stopped (Lifecycle Event)
Dec 13 09:09:56 compute-0 nova_compute[248510]: 2025-12-13 09:09:56.343 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:56 compute-0 ceph-mon[76537]: pgmap v3264: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:09:56 compute-0 nova_compute[248510]: 2025-12-13 09:09:56.456 248514 DEBUG nova.compute.manager [None req-3596b85f-378c-4ecb-9081-ae3e1f4cf9e4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:09:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3265: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:09:58 compute-0 ceph-mon[76537]: pgmap v3265: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:09:58 compute-0 nova_compute[248510]: 2025-12-13 09:09:58.607 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:09:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3266: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:00 compute-0 ceph-mon[76537]: pgmap v3266: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3267: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:10:01 compute-0 nova_compute[248510]: 2025-12-13 09:10:01.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:01 compute-0 nova_compute[248510]: 2025-12-13 09:10:01.522 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:01.521 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:10:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:01.523 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:10:02 compute-0 ceph-mon[76537]: pgmap v3267: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3268: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:02 compute-0 sshd-session[391811]: Invalid user cardano from 80.94.92.165 port 51664
Dec 13 09:10:02 compute-0 sshd-session[391811]: Connection closed by invalid user cardano 80.94.92.165 port 51664 [preauth]
Dec 13 09:10:03 compute-0 nova_compute[248510]: 2025-12-13 09:10:03.609 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:04 compute-0 ceph-mon[76537]: pgmap v3268: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3269: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:10:06 compute-0 nova_compute[248510]: 2025-12-13 09:10:06.349 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:06 compute-0 sudo[391813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:10:06 compute-0 sudo[391813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:10:06 compute-0 sudo[391813]: pam_unix(sudo:session): session closed for user root
Dec 13 09:10:06 compute-0 sudo[391838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:10:06 compute-0 sudo[391838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:10:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:06.525 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:06 compute-0 ceph-mon[76537]: pgmap v3269: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3270: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:07 compute-0 sudo[391838]: pam_unix(sudo:session): session closed for user root
Dec 13 09:10:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:10:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:10:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:10:07 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:10:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:10:07 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:10:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:10:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:10:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:10:07 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:10:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:10:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:10:07 compute-0 sudo[391895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:10:07 compute-0 sudo[391895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:10:07 compute-0 sudo[391895]: pam_unix(sudo:session): session closed for user root
Dec 13 09:10:07 compute-0 sudo[391920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:10:07 compute-0 sudo[391920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:10:07 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:10:07 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:10:07 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:10:07 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:10:07 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:10:07 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:10:07 compute-0 podman[391958]: 2025-12-13 09:10:07.608228596 +0000 UTC m=+0.049595801 container create 7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wright, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 09:10:07 compute-0 systemd[1]: Started libpod-conmon-7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86.scope.
Dec 13 09:10:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:10:07 compute-0 podman[391958]: 2025-12-13 09:10:07.590201941 +0000 UTC m=+0.031569166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:10:07 compute-0 podman[391958]: 2025-12-13 09:10:07.692179123 +0000 UTC m=+0.133546348 container init 7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 09:10:07 compute-0 podman[391958]: 2025-12-13 09:10:07.702993882 +0000 UTC m=+0.144361087 container start 7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:10:07 compute-0 podman[391958]: 2025-12-13 09:10:07.705808674 +0000 UTC m=+0.147175899 container attach 7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wright, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 09:10:07 compute-0 interesting_wright[391974]: 167 167
Dec 13 09:10:07 compute-0 systemd[1]: libpod-7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86.scope: Deactivated successfully.
Dec 13 09:10:07 compute-0 conmon[391974]: conmon 7687d44c78c21b074d12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86.scope/container/memory.events
Dec 13 09:10:07 compute-0 podman[391979]: 2025-12-13 09:10:07.766430869 +0000 UTC m=+0.036892433 container died 7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:10:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2438b38e79e1082e02ae20d52d114520b29e4bbc42570d42e97d4747c217cd78-merged.mount: Deactivated successfully.
Dec 13 09:10:07 compute-0 podman[391979]: 2025-12-13 09:10:07.80792685 +0000 UTC m=+0.078388414 container remove 7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wright, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:10:07 compute-0 systemd[1]: libpod-conmon-7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86.scope: Deactivated successfully.
Dec 13 09:10:08 compute-0 podman[392001]: 2025-12-13 09:10:08.076646136 +0000 UTC m=+0.083735232 container create a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meninsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 09:10:08 compute-0 podman[392001]: 2025-12-13 09:10:08.036066379 +0000 UTC m=+0.043155515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:10:08 compute-0 systemd[1]: Started libpod-conmon-a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c.scope.
Dec 13 09:10:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7cc45ee8d5936514c48863f325469925b39fa06cafeb2c5250a121aa6b376/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7cc45ee8d5936514c48863f325469925b39fa06cafeb2c5250a121aa6b376/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7cc45ee8d5936514c48863f325469925b39fa06cafeb2c5250a121aa6b376/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7cc45ee8d5936514c48863f325469925b39fa06cafeb2c5250a121aa6b376/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7cc45ee8d5936514c48863f325469925b39fa06cafeb2c5250a121aa6b376/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:08 compute-0 podman[392001]: 2025-12-13 09:10:08.205106102 +0000 UTC m=+0.212195248 container init a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meninsky, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:10:08 compute-0 podman[392001]: 2025-12-13 09:10:08.220881259 +0000 UTC m=+0.227970355 container start a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meninsky, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:10:08 compute-0 podman[392001]: 2025-12-13 09:10:08.22594939 +0000 UTC m=+0.233038526 container attach a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:10:08 compute-0 ceph-mon[76537]: pgmap v3270: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:08 compute-0 nova_compute[248510]: 2025-12-13 09:10:08.612 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3271: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:08 compute-0 heuristic_meninsky[392018]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:10:08 compute-0 heuristic_meninsky[392018]: --> All data devices are unavailable
Dec 13 09:10:08 compute-0 systemd[1]: libpod-a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c.scope: Deactivated successfully.
Dec 13 09:10:08 compute-0 podman[392001]: 2025-12-13 09:10:08.820989099 +0000 UTC m=+0.828078165 container died a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 09:10:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8a7cc45ee8d5936514c48863f325469925b39fa06cafeb2c5250a121aa6b376-merged.mount: Deactivated successfully.
Dec 13 09:10:08 compute-0 podman[392001]: 2025-12-13 09:10:08.886164941 +0000 UTC m=+0.893254007 container remove a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meninsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:10:08 compute-0 systemd[1]: libpod-conmon-a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c.scope: Deactivated successfully.
Dec 13 09:10:08 compute-0 sudo[391920]: pam_unix(sudo:session): session closed for user root
Dec 13 09:10:09 compute-0 sudo[392050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:10:09 compute-0 sudo[392050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:10:09 compute-0 sudo[392050]: pam_unix(sudo:session): session closed for user root
Dec 13 09:10:09 compute-0 sudo[392075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:10:09 compute-0 sudo[392075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:10:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:10:09
Dec 13 09:10:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:10:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:10:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'volumes', '.mgr', 'backups', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data']
Dec 13 09:10:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:10:09 compute-0 podman[392112]: 2025-12-13 09:10:09.442577183 +0000 UTC m=+0.057554976 container create 45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_merkle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:10:09 compute-0 systemd[1]: Started libpod-conmon-45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af.scope.
Dec 13 09:10:09 compute-0 podman[392112]: 2025-12-13 09:10:09.413138153 +0000 UTC m=+0.028115966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:10:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:10:09 compute-0 podman[392112]: 2025-12-13 09:10:09.525669988 +0000 UTC m=+0.140647821 container init 45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_merkle, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:10:09 compute-0 podman[392112]: 2025-12-13 09:10:09.536032746 +0000 UTC m=+0.151010519 container start 45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_merkle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 09:10:09 compute-0 podman[392112]: 2025-12-13 09:10:09.540264135 +0000 UTC m=+0.155242008 container attach 45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:10:09 compute-0 bold_merkle[392128]: 167 167
Dec 13 09:10:09 compute-0 podman[392112]: 2025-12-13 09:10:09.543819346 +0000 UTC m=+0.158797119 container died 45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_merkle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 09:10:09 compute-0 systemd[1]: libpod-45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af.scope: Deactivated successfully.
Dec 13 09:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e076ae78eb0801c32f47c73d81c1090f129e28b89fa1b6c2dc5e91e66bcacd23-merged.mount: Deactivated successfully.
Dec 13 09:10:09 compute-0 podman[392112]: 2025-12-13 09:10:09.578504822 +0000 UTC m=+0.193482595 container remove 45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Dec 13 09:10:09 compute-0 systemd[1]: libpod-conmon-45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af.scope: Deactivated successfully.
Dec 13 09:10:09 compute-0 podman[392151]: 2025-12-13 09:10:09.784392406 +0000 UTC m=+0.066021435 container create eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_hamilton, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:10:09 compute-0 systemd[1]: Started libpod-conmon-eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e.scope.
Dec 13 09:10:09 compute-0 podman[392151]: 2025-12-13 09:10:09.754859744 +0000 UTC m=+0.036488863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:10:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c809a2494321630fa3627dc74d887061763fde57f6c9d53093528075d0263076/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c809a2494321630fa3627dc74d887061763fde57f6c9d53093528075d0263076/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c809a2494321630fa3627dc74d887061763fde57f6c9d53093528075d0263076/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c809a2494321630fa3627dc74d887061763fde57f6c9d53093528075d0263076/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:09 compute-0 podman[392151]: 2025-12-13 09:10:09.893898503 +0000 UTC m=+0.175527562 container init eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 09:10:09 compute-0 podman[392151]: 2025-12-13 09:10:09.902282829 +0000 UTC m=+0.183911858 container start eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_hamilton, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:10:09 compute-0 podman[392151]: 2025-12-13 09:10:09.905995965 +0000 UTC m=+0.187625004 container attach eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_hamilton, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:10:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:10:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:10:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:10:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:10:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:10:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]: {
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:     "0": [
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:         {
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "devices": [
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "/dev/loop3"
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             ],
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_name": "ceph_lv0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_size": "21470642176",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "name": "ceph_lv0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "tags": {
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.cluster_name": "ceph",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.crush_device_class": "",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.encrypted": "0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.objectstore": "bluestore",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.osd_id": "0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.type": "block",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.vdo": "0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.with_tpm": "0"
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             },
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "type": "block",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "vg_name": "ceph_vg0"
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:         }
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:     ],
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:     "1": [
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:         {
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "devices": [
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "/dev/loop4"
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             ],
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_name": "ceph_lv1",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_size": "21470642176",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "name": "ceph_lv1",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "tags": {
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.cluster_name": "ceph",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.crush_device_class": "",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.encrypted": "0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.objectstore": "bluestore",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.osd_id": "1",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.type": "block",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.vdo": "0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.with_tpm": "0"
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             },
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "type": "block",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "vg_name": "ceph_vg1"
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:         }
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:     ],
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:     "2": [
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:         {
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "devices": [
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "/dev/loop5"
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             ],
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_name": "ceph_lv2",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_size": "21470642176",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "name": "ceph_lv2",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "tags": {
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.cluster_name": "ceph",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.crush_device_class": "",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.encrypted": "0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.objectstore": "bluestore",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.osd_id": "2",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.type": "block",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.vdo": "0",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:                 "ceph.with_tpm": "0"
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             },
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "type": "block",
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:             "vg_name": "ceph_vg2"
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:         }
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]:     ]
Dec 13 09:10:10 compute-0 sleepy_hamilton[392168]: }
Dec 13 09:10:10 compute-0 systemd[1]: libpod-eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e.scope: Deactivated successfully.
Dec 13 09:10:10 compute-0 podman[392151]: 2025-12-13 09:10:10.230527962 +0000 UTC m=+0.512157061 container died eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_hamilton, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:10:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c809a2494321630fa3627dc74d887061763fde57f6c9d53093528075d0263076-merged.mount: Deactivated successfully.
Dec 13 09:10:10 compute-0 ceph-mon[76537]: pgmap v3271: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:10 compute-0 podman[392151]: 2025-12-13 09:10:10.603698773 +0000 UTC m=+0.885327802 container remove eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 09:10:10 compute-0 sudo[392075]: pam_unix(sudo:session): session closed for user root
Dec 13 09:10:10 compute-0 systemd[1]: libpod-conmon-eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e.scope: Deactivated successfully.
Dec 13 09:10:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3272: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:10 compute-0 sudo[392189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:10:10 compute-0 sudo[392189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:10:10 compute-0 sudo[392189]: pam_unix(sudo:session): session closed for user root
Dec 13 09:10:10 compute-0 sudo[392214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:10:10 compute-0 sudo[392214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:10:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:10:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:10:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:10:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:10:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:10:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:10:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:10:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:10:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:10:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:10:11 compute-0 podman[392251]: 2025-12-13 09:10:11.13862333 +0000 UTC m=+0.053678107 container create 8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 09:10:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:10:11 compute-0 systemd[1]: Started libpod-conmon-8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43.scope.
Dec 13 09:10:11 compute-0 podman[392251]: 2025-12-13 09:10:11.116573581 +0000 UTC m=+0.031628348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:10:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:10:11 compute-0 podman[392251]: 2025-12-13 09:10:11.231940739 +0000 UTC m=+0.146995506 container init 8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:10:11 compute-0 podman[392251]: 2025-12-13 09:10:11.238866107 +0000 UTC m=+0.153920844 container start 8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:10:11 compute-0 podman[392251]: 2025-12-13 09:10:11.242432879 +0000 UTC m=+0.157487636 container attach 8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_perlman, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:10:11 compute-0 modest_perlman[392267]: 167 167
Dec 13 09:10:11 compute-0 systemd[1]: libpod-8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43.scope: Deactivated successfully.
Dec 13 09:10:11 compute-0 podman[392251]: 2025-12-13 09:10:11.248194758 +0000 UTC m=+0.163249505 container died 8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_perlman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a3e667c1813d05700829b1d0ad80b3983779356ceb2c73e7f3ebf80627c9f4b-merged.mount: Deactivated successfully.
Dec 13 09:10:11 compute-0 podman[392251]: 2025-12-13 09:10:11.288866658 +0000 UTC m=+0.203921425 container remove 8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec 13 09:10:11 compute-0 systemd[1]: libpod-conmon-8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43.scope: Deactivated successfully.
Dec 13 09:10:11 compute-0 nova_compute[248510]: 2025-12-13 09:10:11.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:11 compute-0 podman[392290]: 2025-12-13 09:10:11.476279775 +0000 UTC m=+0.052407453 container create 27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 09:10:11 compute-0 systemd[1]: Started libpod-conmon-27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9.scope.
Dec 13 09:10:11 compute-0 podman[392290]: 2025-12-13 09:10:11.453106147 +0000 UTC m=+0.029233915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:10:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f540b0c40ffdf8c15dddb81c27466e98da397545dd7baba565a3f1316e153d10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f540b0c40ffdf8c15dddb81c27466e98da397545dd7baba565a3f1316e153d10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f540b0c40ffdf8c15dddb81c27466e98da397545dd7baba565a3f1316e153d10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f540b0c40ffdf8c15dddb81c27466e98da397545dd7baba565a3f1316e153d10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:11 compute-0 podman[392290]: 2025-12-13 09:10:11.575569838 +0000 UTC m=+0.151697516 container init 27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sanderson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:10:11 compute-0 podman[392290]: 2025-12-13 09:10:11.585198467 +0000 UTC m=+0.161326135 container start 27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 09:10:11 compute-0 podman[392290]: 2025-12-13 09:10:11.587754933 +0000 UTC m=+0.163882611 container attach 27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:10:12 compute-0 lvm[392386]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:10:12 compute-0 lvm[392386]: VG ceph_vg1 finished
Dec 13 09:10:12 compute-0 lvm[392385]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:10:12 compute-0 lvm[392385]: VG ceph_vg0 finished
Dec 13 09:10:12 compute-0 lvm[392388]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:10:12 compute-0 lvm[392388]: VG ceph_vg2 finished
Dec 13 09:10:12 compute-0 competent_sanderson[392307]: {}
Dec 13 09:10:12 compute-0 systemd[1]: libpod-27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9.scope: Deactivated successfully.
Dec 13 09:10:12 compute-0 systemd[1]: libpod-27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9.scope: Consumed 1.497s CPU time.
Dec 13 09:10:12 compute-0 podman[392290]: 2025-12-13 09:10:12.468514387 +0000 UTC m=+1.044642135 container died 27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f540b0c40ffdf8c15dddb81c27466e98da397545dd7baba565a3f1316e153d10-merged.mount: Deactivated successfully.
Dec 13 09:10:12 compute-0 podman[392290]: 2025-12-13 09:10:12.524177753 +0000 UTC m=+1.100305461 container remove 27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 09:10:12 compute-0 systemd[1]: libpod-conmon-27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9.scope: Deactivated successfully.
Dec 13 09:10:12 compute-0 sudo[392214]: pam_unix(sudo:session): session closed for user root
Dec 13 09:10:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:10:12 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:10:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:10:12 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:10:12 compute-0 ceph-mon[76537]: pgmap v3272: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:10:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:10:12 compute-0 sudo[392404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:10:12 compute-0 sudo[392404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:10:12 compute-0 sudo[392404]: pam_unix(sudo:session): session closed for user root
Dec 13 09:10:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3273: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:13 compute-0 nova_compute[248510]: 2025-12-13 09:10:13.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:14 compute-0 ceph-mon[76537]: pgmap v3273: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3274: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:10:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4121122571' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:10:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:10:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4121122571' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:10:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/4121122571' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:10:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/4121122571' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:10:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:10:16 compute-0 nova_compute[248510]: 2025-12-13 09:10:16.356 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:16 compute-0 ceph-mon[76537]: pgmap v3274: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3275: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:16 compute-0 nova_compute[248510]: 2025-12-13 09:10:16.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:10:16 compute-0 nova_compute[248510]: 2025-12-13 09:10:16.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:10:18 compute-0 nova_compute[248510]: 2025-12-13 09:10:18.616 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:18 compute-0 ceph-mon[76537]: pgmap v3275: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3276: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:20 compute-0 ceph-mon[76537]: pgmap v3276: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3277: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:10:21 compute-0 nova_compute[248510]: 2025-12-13 09:10:21.358 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4593073840003745e-05 of space, bias 1.0, pg target 0.004377922152001124 quantized to 32 (current 32)
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697401448045072 of space, bias 1.0, pg target 0.20092204344135217 quantized to 32 (current 32)
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:10:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:10:21 compute-0 ceph-mon[76537]: pgmap v3277: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3278: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:22 compute-0 podman[392432]: 2025-12-13 09:10:22.992184196 +0000 UTC m=+0.069759432 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 09:10:23 compute-0 podman[392431]: 2025-12-13 09:10:23.006705511 +0000 UTC m=+0.082976173 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 09:10:23 compute-0 podman[392430]: 2025-12-13 09:10:23.038629245 +0000 UTC m=+0.114903637 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:10:23 compute-0 nova_compute[248510]: 2025-12-13 09:10:23.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:23 compute-0 ceph-mon[76537]: pgmap v3278: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3279: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:24 compute-0 nova_compute[248510]: 2025-12-13 09:10:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:10:24 compute-0 nova_compute[248510]: 2025-12-13 09:10:24.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:10:24 compute-0 nova_compute[248510]: 2025-12-13 09:10:24.910 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:10:24 compute-0 nova_compute[248510]: 2025-12-13 09:10:24.910 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:10:26 compute-0 ceph-mon[76537]: pgmap v3279: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:10:26 compute-0 nova_compute[248510]: 2025-12-13 09:10:26.362 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3280: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:26.854 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:a3:20 2001:db8:0:1:f816:3eff:fe49:a320 2001:db8::f816:3eff:fe49:a320'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe49:a320/64 2001:db8::f816:3eff:fe49:a320/64', 'neutron:device_id': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28a1c52d-6dd3-4202-a082-66c542a5a384, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c9cb1492-8274-4506-acb8-65660baeb5cf) old=Port_Binding(mac=['fa:16:3e:49:a3:20 2001:db8::f816:3eff:fe49:a320'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe49:a320/64', 'neutron:device_id': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:10:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:26.855 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c9cb1492-8274-4506-acb8-65660baeb5cf in datapath 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c updated
Dec 13 09:10:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:26.857 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:10:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:26.859 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e44f8100-422d-404f-b49d-cedcd23f8810]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:28 compute-0 ceph-mon[76537]: pgmap v3280: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:28 compute-0 nova_compute[248510]: 2025-12-13 09:10:28.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3281: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:28 compute-0 nova_compute[248510]: 2025-12-13 09:10:28.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:10:28 compute-0 nova_compute[248510]: 2025-12-13 09:10:28.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:10:28 compute-0 nova_compute[248510]: 2025-12-13 09:10:28.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:28 compute-0 nova_compute[248510]: 2025-12-13 09:10:28.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:28 compute-0 nova_compute[248510]: 2025-12-13 09:10:28.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:28 compute-0 nova_compute[248510]: 2025-12-13 09:10:28.810 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:10:28 compute-0 nova_compute[248510]: 2025-12-13 09:10:28.811 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:10:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:10:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/835442701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:10:29 compute-0 nova_compute[248510]: 2025-12-13 09:10:29.427 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:10:29 compute-0 nova_compute[248510]: 2025-12-13 09:10:29.671 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:10:29 compute-0 nova_compute[248510]: 2025-12-13 09:10:29.673 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3499MB free_disk=59.987405836582184GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:10:29 compute-0 nova_compute[248510]: 2025-12-13 09:10:29.674 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:29 compute-0 nova_compute[248510]: 2025-12-13 09:10:29.674 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:29 compute-0 nova_compute[248510]: 2025-12-13 09:10:29.756 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:10:29 compute-0 nova_compute[248510]: 2025-12-13 09:10:29.757 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:10:29 compute-0 nova_compute[248510]: 2025-12-13 09:10:29.776 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:10:30 compute-0 ceph-mon[76537]: pgmap v3281: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:30 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/835442701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:10:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:10:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/360535993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:10:30 compute-0 nova_compute[248510]: 2025-12-13 09:10:30.329 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:10:30 compute-0 nova_compute[248510]: 2025-12-13 09:10:30.336 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:10:30 compute-0 nova_compute[248510]: 2025-12-13 09:10:30.356 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:10:30 compute-0 nova_compute[248510]: 2025-12-13 09:10:30.383 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:10:30 compute-0 nova_compute[248510]: 2025-12-13 09:10:30.383 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3282: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/360535993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:10:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:10:31 compute-0 nova_compute[248510]: 2025-12-13 09:10:31.364 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:31 compute-0 nova_compute[248510]: 2025-12-13 09:10:31.384 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:10:31 compute-0 nova_compute[248510]: 2025-12-13 09:10:31.384 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:10:32 compute-0 ceph-mon[76537]: pgmap v3282: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3283: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:32 compute-0 nova_compute[248510]: 2025-12-13 09:10:32.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:10:33 compute-0 nova_compute[248510]: 2025-12-13 09:10:33.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:34 compute-0 ceph-mon[76537]: pgmap v3283: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3284: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:35 compute-0 nova_compute[248510]: 2025-12-13 09:10:35.255 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:35 compute-0 nova_compute[248510]: 2025-12-13 09:10:35.256 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:35 compute-0 nova_compute[248510]: 2025-12-13 09:10:35.283 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:10:35 compute-0 nova_compute[248510]: 2025-12-13 09:10:35.391 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:35 compute-0 nova_compute[248510]: 2025-12-13 09:10:35.392 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:35 compute-0 nova_compute[248510]: 2025-12-13 09:10:35.399 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:10:35 compute-0 nova_compute[248510]: 2025-12-13 09:10:35.399 248514 INFO nova.compute.claims [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:10:35 compute-0 nova_compute[248510]: 2025-12-13 09:10:35.514 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:10:35 compute-0 nova_compute[248510]: 2025-12-13 09:10:35.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:10:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:10:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015392230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.062 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.068 248514 DEBUG nova.compute.provider_tree [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.084 248514 DEBUG nova.scheduler.client.report [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:10:36 compute-0 ceph-mon[76537]: pgmap v3284: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1015392230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.105 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.106 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.155 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.156 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:10:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.182 248514 INFO nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.208 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.308 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.310 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.310 248514 INFO nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Creating image(s)
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.334 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.358 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.383 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.387 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.431 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.473 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.474 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.474 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.475 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.497 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.501 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:10:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3285: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.741 248514 DEBUG nova.policy [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.807 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:10:36 compute-0 nova_compute[248510]: 2025-12-13 09:10:36.890 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:10:37 compute-0 nova_compute[248510]: 2025-12-13 09:10:37.007 248514 DEBUG nova.objects.instance [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:10:37 compute-0 nova_compute[248510]: 2025-12-13 09:10:37.029 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:10:37 compute-0 nova_compute[248510]: 2025-12-13 09:10:37.030 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Ensure instance console log exists: /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:10:37 compute-0 nova_compute[248510]: 2025-12-13 09:10:37.031 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:37 compute-0 nova_compute[248510]: 2025-12-13 09:10:37.031 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:37 compute-0 nova_compute[248510]: 2025-12-13 09:10:37.032 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:37 compute-0 nova_compute[248510]: 2025-12-13 09:10:37.747 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Successfully created port: b6bde1c8-9710-4155-9619-7040cbbca806 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:10:38 compute-0 ceph-mon[76537]: pgmap v3285: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:38 compute-0 nova_compute[248510]: 2025-12-13 09:10:38.295 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Successfully created port: 445f61a4-1352-4145-8b2b-f9f87f5435a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:10:38 compute-0 nova_compute[248510]: 2025-12-13 09:10:38.669 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3286: 321 pgs: 321 active+clean; 53 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 670 KiB/s wr, 11 op/s
Dec 13 09:10:39 compute-0 nova_compute[248510]: 2025-12-13 09:10:39.065 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Successfully updated port: b6bde1c8-9710-4155-9619-7040cbbca806 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:10:39 compute-0 nova_compute[248510]: 2025-12-13 09:10:39.501 248514 DEBUG nova.compute.manager [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-changed-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:10:39 compute-0 nova_compute[248510]: 2025-12-13 09:10:39.502 248514 DEBUG nova.compute.manager [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing instance network info cache due to event network-changed-b6bde1c8-9710-4155-9619-7040cbbca806. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:10:39 compute-0 nova_compute[248510]: 2025-12-13 09:10:39.502 248514 DEBUG oslo_concurrency.lockutils [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:10:39 compute-0 nova_compute[248510]: 2025-12-13 09:10:39.502 248514 DEBUG oslo_concurrency.lockutils [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:10:39 compute-0 nova_compute[248510]: 2025-12-13 09:10:39.503 248514 DEBUG nova.network.neutron [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing network info cache for port b6bde1c8-9710-4155-9619-7040cbbca806 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:10:39 compute-0 nova_compute[248510]: 2025-12-13 09:10:39.691 248514 DEBUG nova.network.neutron [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:10:40 compute-0 ceph-mon[76537]: pgmap v3286: 321 pgs: 321 active+clean; 53 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 670 KiB/s wr, 11 op/s
Dec 13 09:10:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:10:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:10:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:10:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:10:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:10:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:10:40 compute-0 nova_compute[248510]: 2025-12-13 09:10:40.707 248514 DEBUG nova.network.neutron [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:10:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3287: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:10:40 compute-0 nova_compute[248510]: 2025-12-13 09:10:40.770 248514 DEBUG oslo_concurrency.lockutils [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:10:41 compute-0 nova_compute[248510]: 2025-12-13 09:10:41.008 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Successfully updated port: 445f61a4-1352-4145-8b2b-f9f87f5435a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:10:41 compute-0 nova_compute[248510]: 2025-12-13 09:10:41.025 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:10:41 compute-0 nova_compute[248510]: 2025-12-13 09:10:41.026 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:10:41 compute-0 nova_compute[248510]: 2025-12-13 09:10:41.026 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:10:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:10:41 compute-0 nova_compute[248510]: 2025-12-13 09:10:41.406 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:10:41 compute-0 nova_compute[248510]: 2025-12-13 09:10:41.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:41 compute-0 nova_compute[248510]: 2025-12-13 09:10:41.576 248514 DEBUG nova.compute.manager [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-changed-445f61a4-1352-4145-8b2b-f9f87f5435a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:10:41 compute-0 nova_compute[248510]: 2025-12-13 09:10:41.577 248514 DEBUG nova.compute.manager [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing instance network info cache due to event network-changed-445f61a4-1352-4145-8b2b-f9f87f5435a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:10:41 compute-0 nova_compute[248510]: 2025-12-13 09:10:41.577 248514 DEBUG oslo_concurrency.lockutils [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:10:42 compute-0 ceph-mon[76537]: pgmap v3287: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:10:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3288: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:10:43 compute-0 nova_compute[248510]: 2025-12-13 09:10:43.711 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:44 compute-0 ceph-mon[76537]: pgmap v3288: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:10:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3289: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:10:44 compute-0 nova_compute[248510]: 2025-12-13 09:10:44.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:10:45 compute-0 ovn_controller[148476]: 2025-12-13T09:10:45Z|01474|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 13 09:10:46 compute-0 ceph-mon[76537]: pgmap v3289: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:10:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:10:46 compute-0 nova_compute[248510]: 2025-12-13 09:10:46.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3290: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.551 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.579 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.580 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Instance network_info: |[{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.582 248514 DEBUG oslo_concurrency.lockutils [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.582 248514 DEBUG nova.network.neutron [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing network info cache for port 445f61a4-1352-4145-8b2b-f9f87f5435a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.587 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Start _get_guest_xml network_info=[{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.592 248514 WARNING nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.602 248514 DEBUG nova.virt.libvirt.host [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.603 248514 DEBUG nova.virt.libvirt.host [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.612 248514 DEBUG nova.virt.libvirt.host [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.612 248514 DEBUG nova.virt.libvirt.host [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.613 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.613 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.613 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.614 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.614 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.614 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.614 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.614 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.615 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.615 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.615 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.615 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:10:47 compute-0 nova_compute[248510]: 2025-12-13 09:10:47.618 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:10:48 compute-0 ceph-mon[76537]: pgmap v3290: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:10:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:10:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3339093545' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.269 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.302 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.307 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3291: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:10:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:10:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3375848692' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.842 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.844 248514 DEBUG nova.virt.libvirt.vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-230241554',display_name='tempest-TestGettingAddress-server-230241554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-230241554',id=141,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-60o8t9sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:10:36Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.844 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.845 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.846 248514 DEBUG nova.virt.libvirt.vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-230241554',display_name='tempest-TestGettingAddress-server-230241554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-230241554',id=141,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-60o8t9sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:10:36Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.846 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.847 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.848 248514 DEBUG nova.objects.instance [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.876 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:10:48 compute-0 nova_compute[248510]:   <uuid>8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8</uuid>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   <name>instance-0000008d</name>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-230241554</nova:name>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:10:47</nova:creationTime>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <nova:port uuid="b6bde1c8-9710-4155-9619-7040cbbca806">
Dec 13 09:10:48 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <nova:port uuid="445f61a4-1352-4145-8b2b-f9f87f5435a7">
Dec 13 09:10:48 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe66:7d02" ipVersion="6"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe66:7d02" ipVersion="6"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <system>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <entry name="serial">8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8</entry>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <entry name="uuid">8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8</entry>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     </system>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   <os>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   </os>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   <features>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   </features>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk">
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       </source>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk.config">
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       </source>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:10:48 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:54:9b:72"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <target dev="tapb6bde1c8-97"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:66:7d:02"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <target dev="tap445f61a4-13"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/console.log" append="off"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <video>
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     </video>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:10:48 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:10:48 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:10:48 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:10:48 compute-0 nova_compute[248510]: </domain>
Dec 13 09:10:48 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.877 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Preparing to wait for external event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.878 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.879 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.879 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.880 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Preparing to wait for external event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.880 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.880 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.880 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.881 248514 DEBUG nova.virt.libvirt.vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-230241554',display_name='tempest-TestGettingAddress-server-230241554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-230241554',id=141,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-60o8t9sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:10:36Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.881 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.882 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.883 248514 DEBUG os_vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.883 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.884 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.884 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.889 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6bde1c8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.889 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6bde1c8-97, col_values=(('external_ids', {'iface-id': 'b6bde1c8-9710-4155-9619-7040cbbca806', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:9b:72', 'vm-uuid': '8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.892 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:10:48 compute-0 NetworkManager[50376]: <info>  [1765617048.8943] manager: (tapb6bde1c8-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.905 248514 INFO os_vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97')
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.906 248514 DEBUG nova.virt.libvirt.vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-230241554',display_name='tempest-TestGettingAddress-server-230241554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-230241554',id=141,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-60o8t9sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:10:36Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.906 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.907 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.907 248514 DEBUG os_vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.908 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.908 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.911 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.911 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap445f61a4-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.912 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap445f61a4-13, col_values=(('external_ids', {'iface-id': '445f61a4-1352-4145-8b2b-f9f87f5435a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:7d:02', 'vm-uuid': '8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:48 compute-0 NetworkManager[50376]: <info>  [1765617048.9152] manager: (tap445f61a4-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.926 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.928 248514 INFO os_vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13')
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.995 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.996 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.996 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:54:9b:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.996 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:66:7d:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:10:48 compute-0 nova_compute[248510]: 2025-12-13 09:10:48.997 248514 INFO nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Using config drive
Dec 13 09:10:49 compute-0 nova_compute[248510]: 2025-12-13 09:10:49.021 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:10:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:10:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 65K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1386 writes, 5988 keys, 1386 commit groups, 1.0 writes per commit group, ingest: 9.31 MB, 0.02 MB/s
                                           Interval WAL: 1385 writes, 1385 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     22.1      3.67              0.28        46    0.080       0      0       0.0       0.0
                                             L6      1/0    8.38 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.8     83.4     70.3      5.54              1.22        45    0.123    294K    24K       0.0       0.0
                                            Sum      1/0    8.38 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.8     50.2     51.1      9.21              1.50        91    0.101    294K    24K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.1    119.1    118.7      0.40              0.17         8    0.050     35K   2058       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     83.4     70.3      5.54              1.22        45    0.123    294K    24K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     22.1      3.66              0.28        45    0.081       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.079, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.46 GB write, 0.08 MB/s write, 0.45 GB read, 0.08 MB/s read, 9.2 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 53.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000892 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3328,51.44 MB,16.9221%) FilterBlock(92,819.05 KB,0.263109%) IndexBlock(92,1.33 MB,0.438223%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 09:10:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3339093545' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:10:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3375848692' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:10:49 compute-0 nova_compute[248510]: 2025-12-13 09:10:49.763 248514 INFO nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Creating config drive at /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/disk.config
Dec 13 09:10:49 compute-0 nova_compute[248510]: 2025-12-13 09:10:49.769 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpka2_pk1_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:10:49 compute-0 nova_compute[248510]: 2025-12-13 09:10:49.928 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpka2_pk1_" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:10:49 compute-0 nova_compute[248510]: 2025-12-13 09:10:49.969 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:10:49 compute-0 nova_compute[248510]: 2025-12-13 09:10:49.975 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/disk.config 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:10:50 compute-0 nova_compute[248510]: 2025-12-13 09:10:50.313 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/disk.config 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:10:50 compute-0 nova_compute[248510]: 2025-12-13 09:10:50.315 248514 INFO nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Deleting local config drive /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/disk.config because it was imported into RBD.
Dec 13 09:10:50 compute-0 kernel: tapb6bde1c8-97: entered promiscuous mode
Dec 13 09:10:50 compute-0 NetworkManager[50376]: <info>  [1765617050.4036] manager: (tapb6bde1c8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/611)
Dec 13 09:10:50 compute-0 ovn_controller[148476]: 2025-12-13T09:10:50Z|01475|binding|INFO|Claiming lport b6bde1c8-9710-4155-9619-7040cbbca806 for this chassis.
Dec 13 09:10:50 compute-0 ovn_controller[148476]: 2025-12-13T09:10:50Z|01476|binding|INFO|b6bde1c8-9710-4155-9619-7040cbbca806: Claiming fa:16:3e:54:9b:72 10.100.0.6
Dec 13 09:10:50 compute-0 nova_compute[248510]: 2025-12-13 09:10:50.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:50 compute-0 NetworkManager[50376]: <info>  [1765617050.4227] manager: (tap445f61a4-13): new Tun device (/org/freedesktop/NetworkManager/Devices/612)
Dec 13 09:10:50 compute-0 kernel: tap445f61a4-13: entered promiscuous mode
Dec 13 09:10:50 compute-0 nova_compute[248510]: 2025-12-13 09:10:50.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:50 compute-0 nova_compute[248510]: 2025-12-13 09:10:50.429 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:50 compute-0 ovn_controller[148476]: 2025-12-13T09:10:50Z|01477|if_status|INFO|Dropped 5 log messages in last 99 seconds (most recently, 99 seconds ago) due to excessive rate
Dec 13 09:10:50 compute-0 ovn_controller[148476]: 2025-12-13T09:10:50Z|01478|if_status|INFO|Not updating pb chassis for 445f61a4-1352-4145-8b2b-f9f87f5435a7 now as sb is readonly
Dec 13 09:10:50 compute-0 systemd-udevd[392868]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:10:50 compute-0 systemd-udevd[392869]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:10:50 compute-0 NetworkManager[50376]: <info>  [1765617050.4626] device (tapb6bde1c8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:10:50 compute-0 NetworkManager[50376]: <info>  [1765617050.4640] device (tap445f61a4-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:10:50 compute-0 NetworkManager[50376]: <info>  [1765617050.4661] device (tapb6bde1c8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:10:50 compute-0 NetworkManager[50376]: <info>  [1765617050.4673] device (tap445f61a4-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:10:50 compute-0 systemd-machined[210538]: New machine qemu-172-instance-0000008d.
Dec 13 09:10:50 compute-0 ceph-mon[76537]: pgmap v3291: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:10:50 compute-0 systemd[1]: Started Virtual Machine qemu-172-instance-0000008d.
Dec 13 09:10:50 compute-0 nova_compute[248510]: 2025-12-13 09:10:50.506 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:50 compute-0 nova_compute[248510]: 2025-12-13 09:10:50.515 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:50 compute-0 ovn_controller[148476]: 2025-12-13T09:10:50Z|01479|binding|INFO|Setting lport b6bde1c8-9710-4155-9619-7040cbbca806 ovn-installed in OVS
Dec 13 09:10:50 compute-0 nova_compute[248510]: 2025-12-13 09:10:50.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:50 compute-0 ovn_controller[148476]: 2025-12-13T09:10:50Z|01480|binding|INFO|Claiming lport 445f61a4-1352-4145-8b2b-f9f87f5435a7 for this chassis.
Dec 13 09:10:50 compute-0 ovn_controller[148476]: 2025-12-13T09:10:50Z|01481|binding|INFO|445f61a4-1352-4145-8b2b-f9f87f5435a7: Claiming fa:16:3e:66:7d:02 2001:db8:0:1:f816:3eff:fe66:7d02 2001:db8::f816:3eff:fe66:7d02
Dec 13 09:10:50 compute-0 ovn_controller[148476]: 2025-12-13T09:10:50Z|01482|binding|INFO|Setting lport b6bde1c8-9710-4155-9619-7040cbbca806 up in Southbound
Dec 13 09:10:50 compute-0 ovn_controller[148476]: 2025-12-13T09:10:50Z|01483|binding|INFO|Setting lport 445f61a4-1352-4145-8b2b-f9f87f5435a7 ovn-installed in OVS
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.638 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:9b:72 10.100.0.6'], port_security=['fa:16:3e:54:9b:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44250a44-17ff-41cb-ae95-e575bff91a4a, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b6bde1c8-9710-4155-9619-7040cbbca806) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.640 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b6bde1c8-9710-4155-9619-7040cbbca806 in datapath a7065d5a-edce-4470-a56d-ab529d56aa3c bound to our chassis
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.641 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7065d5a-edce-4470-a56d-ab529d56aa3c
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.655 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[df6dcce1-d257-4ad6-bccf-dd2280df8137]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.656 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa7065d5a-e1 in ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.659 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa7065d5a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.659 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[76c398bf-f4a6-432a-b9a0-5285f4b35b01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.660 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf630294-e100-44a1-aad3-0f401a4c46d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.676 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd61417-d16e-470f-baf3-9f2f0ba4b86d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 nova_compute[248510]: 2025-12-13 09:10:50.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.698 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bff5b301-2004-4da7-9741-c715fe09fefd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.704 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:7d:02 2001:db8:0:1:f816:3eff:fe66:7d02 2001:db8::f816:3eff:fe66:7d02'], port_security=['fa:16:3e:66:7d:02 2001:db8:0:1:f816:3eff:fe66:7d02 2001:db8::f816:3eff:fe66:7d02'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe66:7d02/64 2001:db8::f816:3eff:fe66:7d02/64', 'neutron:device_id': '8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28a1c52d-6dd3-4202-a082-66c542a5a384, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=445f61a4-1352-4145-8b2b-f9f87f5435a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:10:50 compute-0 ovn_controller[148476]: 2025-12-13T09:10:50Z|01484|binding|INFO|Setting lport 445f61a4-1352-4145-8b2b-f9f87f5435a7 up in Southbound
Dec 13 09:10:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3292: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 15 op/s
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.735 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1e10e5a4-1c98-47fc-8363-b0e3ce0066d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 systemd-udevd[392875]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.741 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1358229-dc93-4300-b076-74c4892b1216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 NetworkManager[50376]: <info>  [1765617050.7426] manager: (tapa7065d5a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/613)
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.783 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d34b82bc-3a62-49c8-bf2d-d4f81653db02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.787 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[84644085-cbc4-4cd3-9a2c-8163058010ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 NetworkManager[50376]: <info>  [1765617050.8146] device (tapa7065d5a-e0): carrier: link connected
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.821 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5e501c-e191-4ef5-b1dc-575392643a31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.839 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3b01ad4f-85e2-4780-930f-a8376952c9ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7065d5a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:e7:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962803, 'reachable_time': 25861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392923, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.859 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7de223-2ec9-4e06-b177-dab75ef1ecf3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:e795'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962803, 'tstamp': 962803}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392924, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.878 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[871d6a3f-f19f-4cc6-ac17-b7d415d8a793]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7065d5a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:e7:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962803, 'reachable_time': 25861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 392925, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.915 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aabc3f77-f366-49d1-a727-ad5bb80b33cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.994 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f29471fa-d291-43ab-8c31-8836e8ef95a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7065d5a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:10:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7065d5a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:50 compute-0 NetworkManager[50376]: <info>  [1765617050.9987] manager: (tapa7065d5a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/614)
Dec 13 09:10:50 compute-0 kernel: tapa7065d5a-e0: entered promiscuous mode
Dec 13 09:10:50 compute-0 nova_compute[248510]: 2025-12-13 09:10:50.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:51.002 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7065d5a-e0, col_values=(('external_ids', {'iface-id': 'b83346dd-17c7-4f89-a540-efae6916310e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:51 compute-0 ovn_controller[148476]: 2025-12-13T09:10:51Z|01485|binding|INFO|Releasing lport b83346dd-17c7-4f89-a540-efae6916310e from this chassis (sb_readonly=0)
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.003 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:51.005 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a7065d5a-edce-4470-a56d-ab529d56aa3c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a7065d5a-edce-4470-a56d-ab529d56aa3c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:51.006 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d99794-4f71-4fc1-9895-cc2622eff509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:51.007 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-a7065d5a-edce-4470-a56d-ab529d56aa3c
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/a7065d5a-edce-4470-a56d-ab529d56aa3c.pid.haproxy
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID a7065d5a-edce-4470-a56d-ab529d56aa3c
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:10:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:51.007 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'env', 'PROCESS_TAG=haproxy-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a7065d5a-edce-4470-a56d-ab529d56aa3c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.014 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.044 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617051.0440233, 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.045 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] VM Started (Lifecycle Event)
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.076 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.080 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617051.0441818, 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.080 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] VM Paused (Lifecycle Event)
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.102 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.105 248514 DEBUG nova.compute.manager [req-b93d3ded-e1db-4351-a084-6faf3e800698 req-3a4c03d2-c5ca-4fae-a58c-60a6a20c5fb7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.105 248514 DEBUG oslo_concurrency.lockutils [req-b93d3ded-e1db-4351-a084-6faf3e800698 req-3a4c03d2-c5ca-4fae-a58c-60a6a20c5fb7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.105 248514 DEBUG oslo_concurrency.lockutils [req-b93d3ded-e1db-4351-a084-6faf3e800698 req-3a4c03d2-c5ca-4fae-a58c-60a6a20c5fb7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.106 248514 DEBUG oslo_concurrency.lockutils [req-b93d3ded-e1db-4351-a084-6faf3e800698 req-3a4c03d2-c5ca-4fae-a58c-60a6a20c5fb7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.106 248514 DEBUG nova.compute.manager [req-b93d3ded-e1db-4351-a084-6faf3e800698 req-3a4c03d2-c5ca-4fae-a58c-60a6a20c5fb7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Processing event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.108 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.128 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:10:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.429 248514 DEBUG nova.compute.manager [req-9588d7a0-7d7c-4584-b701-1b7074f6841c req-2be158ea-0a8e-4bbf-8fe0-ba730bd06298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.429 248514 DEBUG oslo_concurrency.lockutils [req-9588d7a0-7d7c-4584-b701-1b7074f6841c req-2be158ea-0a8e-4bbf-8fe0-ba730bd06298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.429 248514 DEBUG oslo_concurrency.lockutils [req-9588d7a0-7d7c-4584-b701-1b7074f6841c req-2be158ea-0a8e-4bbf-8fe0-ba730bd06298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.430 248514 DEBUG oslo_concurrency.lockutils [req-9588d7a0-7d7c-4584-b701-1b7074f6841c req-2be158ea-0a8e-4bbf-8fe0-ba730bd06298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.430 248514 DEBUG nova.compute.manager [req-9588d7a0-7d7c-4584-b701-1b7074f6841c req-2be158ea-0a8e-4bbf-8fe0-ba730bd06298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Processing event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.430 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.434 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617051.43434, 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.434 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] VM Resumed (Lifecycle Event)
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.436 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.440 248514 INFO nova.virt.libvirt.driver [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Instance spawned successfully.
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.440 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.464 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.470 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.473 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.474 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.474 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.474 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.475 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.475 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:10:51 compute-0 podman[392982]: 2025-12-13 09:10:51.391557753 +0000 UTC m=+0.022623525 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.531 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.547 248514 INFO nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Took 15.24 seconds to spawn the instance on the hypervisor.
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.548 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.663 248514 INFO nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Took 16.31 seconds to build instance.
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.681 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:51 compute-0 podman[392982]: 2025-12-13 09:10:51.741793543 +0000 UTC m=+0.372859305 container create b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.858 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:10:51 compute-0 nova_compute[248510]: 2025-12-13 09:10:51.858 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 09:10:51 compute-0 systemd[1]: Started libpod-conmon-b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c.scope.
Dec 13 09:10:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83ba7e95364800c67b1ef1b047ddcfb67b0d58972ad0e7ad048a84a30676ad2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:52 compute-0 nova_compute[248510]: 2025-12-13 09:10:52.026 248514 DEBUG nova.network.neutron [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updated VIF entry in instance network info cache for port 445f61a4-1352-4145-8b2b-f9f87f5435a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:10:52 compute-0 nova_compute[248510]: 2025-12-13 09:10:52.027 248514 DEBUG nova.network.neutron [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:10:52 compute-0 nova_compute[248510]: 2025-12-13 09:10:52.047 248514 DEBUG oslo_concurrency.lockutils [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:10:52 compute-0 podman[392982]: 2025-12-13 09:10:52.071983236 +0000 UTC m=+0.703049048 container init b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 09:10:52 compute-0 podman[392982]: 2025-12-13 09:10:52.085626038 +0000 UTC m=+0.716691800 container start b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 09:10:52 compute-0 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [NOTICE]   (393002) : New worker (393004) forked
Dec 13 09:10:52 compute-0 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [NOTICE]   (393002) : Loading success.
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.293 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 445f61a4-1352-4145-8b2b-f9f87f5435a7 in datapath 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c unbound from our chassis
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.294 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.310 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe64b9c5-2be8-4317-a216-a4e864fc3bf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.310 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7661cc7b-f1 in ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.312 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7661cc7b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.312 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75b4b4dd-6f76-4971-89a4-0d5b803c5413]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.313 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3a279b-0de1-46db-a943-93632e7c6d93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.324 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a6a2c6-8204-4985-b0f4-f5e68ad174d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.346 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1795afd-e461-42b9-b488-d0cdf5c0ebd1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.391 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[59e65133-3b22-45a1-9c92-01121697a177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 NetworkManager[50376]: <info>  [1765617052.4009] manager: (tap7661cc7b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/615)
Dec 13 09:10:52 compute-0 systemd-udevd[392900]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.400 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7c600024-3c20-4a51-8fc7-6a4cfc51fcd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.437 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2aaa0b5d-9cf9-4883-adaf-c97621ca3593]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.443 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[989cd4b9-bb68-451c-9e72-8219c8d5f974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 NetworkManager[50376]: <info>  [1765617052.4696] device (tap7661cc7b-f0): carrier: link connected
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.476 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b21abdfe-0906-4ec5-b91a-786626e045b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.501 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[63cacdcd-3ade-481b-b20d-de725ff2f7ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7661cc7b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:a3:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962968, 'reachable_time': 41750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393023, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.518 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[342bbd66-753c-4306-a6c3-526bbb04b82a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:a320'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962968, 'tstamp': 962968}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393024, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.537 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f23984c7-57ab-48fd-9f8e-26378d88c3a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7661cc7b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:a3:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962968, 'reachable_time': 41750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393025, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.572 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e1961034-838d-4a15-9fe5-a479d77475bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.611 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[26077b22-25dd-473e-b384-723f730ca9bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.614 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7661cc7b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.614 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.615 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7661cc7b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:52 compute-0 nova_compute[248510]: 2025-12-13 09:10:52.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:52 compute-0 NetworkManager[50376]: <info>  [1765617052.6183] manager: (tap7661cc7b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/616)
Dec 13 09:10:52 compute-0 kernel: tap7661cc7b-f0: entered promiscuous mode
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.624 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7661cc7b-f0, col_values=(('external_ids', {'iface-id': 'c9cb1492-8274-4506-acb8-65660baeb5cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:10:52 compute-0 ovn_controller[148476]: 2025-12-13T09:10:52Z|01486|binding|INFO|Releasing lport c9cb1492-8274-4506-acb8-65660baeb5cf from this chassis (sb_readonly=0)
Dec 13 09:10:52 compute-0 nova_compute[248510]: 2025-12-13 09:10:52.625 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.627 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7661cc7b-fcf7-41b6-b117-ca8fede7ad4c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7661cc7b-fcf7-41b6-b117-ca8fede7ad4c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7223f012-76ad-4f84-9531-1326b2e6e4ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.629 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/7661cc7b-fcf7-41b6-b117-ca8fede7ad4c.pid.haproxy
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:10:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.630 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'env', 'PROCESS_TAG=haproxy-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7661cc7b-fcf7-41b6-b117-ca8fede7ad4c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:10:52 compute-0 ceph-mon[76537]: pgmap v3292: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 15 op/s
Dec 13 09:10:52 compute-0 nova_compute[248510]: 2025-12-13 09:10:52.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3293: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:53 compute-0 podman[393056]: 2025-12-13 09:10:53.020723255 +0000 UTC m=+0.066410806 container create bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:10:53 compute-0 podman[393056]: 2025-12-13 09:10:52.978882164 +0000 UTC m=+0.024569776 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:10:53 compute-0 systemd[1]: Started libpod-conmon-bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411.scope.
Dec 13 09:10:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f32a15c927b95fe7b352ffa0d269ecc44713de0e4d769f1bd0a4e926820f3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:10:53 compute-0 podman[393056]: 2025-12-13 09:10:53.189895182 +0000 UTC m=+0.235582723 container init bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 09:10:53 compute-0 podman[393056]: 2025-12-13 09:10:53.197285182 +0000 UTC m=+0.242972693 container start bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:10:53 compute-0 podman[393069]: 2025-12-13 09:10:53.210914514 +0000 UTC m=+0.139230655 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, tcib_managed=true)
Dec 13 09:10:53 compute-0 podman[393070]: 2025-12-13 09:10:53.211592351 +0000 UTC m=+0.145029534 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 13 09:10:53 compute-0 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [NOTICE]   (393125) : New worker (393132) forked
Dec 13 09:10:53 compute-0 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [NOTICE]   (393125) : Loading success.
Dec 13 09:10:53 compute-0 podman[393083]: 2025-12-13 09:10:53.24835502 +0000 UTC m=+0.153039131 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.567 248514 DEBUG nova.compute.manager [req-55dc2bca-d490-461b-95b9-9b63a1b2243b req-277b6e81-5634-4426-97ae-4e94000dff35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.567 248514 DEBUG oslo_concurrency.lockutils [req-55dc2bca-d490-461b-95b9-9b63a1b2243b req-277b6e81-5634-4426-97ae-4e94000dff35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.568 248514 DEBUG oslo_concurrency.lockutils [req-55dc2bca-d490-461b-95b9-9b63a1b2243b req-277b6e81-5634-4426-97ae-4e94000dff35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.568 248514 DEBUG oslo_concurrency.lockutils [req-55dc2bca-d490-461b-95b9-9b63a1b2243b req-277b6e81-5634-4426-97ae-4e94000dff35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.569 248514 DEBUG nova.compute.manager [req-55dc2bca-d490-461b-95b9-9b63a1b2243b req-277b6e81-5634-4426-97ae-4e94000dff35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] No waiting events found dispatching network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.569 248514 WARNING nova.compute.manager [req-55dc2bca-d490-461b-95b9-9b63a1b2243b req-277b6e81-5634-4426-97ae-4e94000dff35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received unexpected event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 for instance with vm_state active and task_state None.
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.573 248514 DEBUG nova.compute.manager [req-076931d4-b9f4-4b7c-81af-d288a058f633 req-88c3be8c-4e7f-48c4-94a0-ba7e64c3dcf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.573 248514 DEBUG oslo_concurrency.lockutils [req-076931d4-b9f4-4b7c-81af-d288a058f633 req-88c3be8c-4e7f-48c4-94a0-ba7e64c3dcf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.574 248514 DEBUG oslo_concurrency.lockutils [req-076931d4-b9f4-4b7c-81af-d288a058f633 req-88c3be8c-4e7f-48c4-94a0-ba7e64c3dcf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.574 248514 DEBUG oslo_concurrency.lockutils [req-076931d4-b9f4-4b7c-81af-d288a058f633 req-88c3be8c-4e7f-48c4-94a0-ba7e64c3dcf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.575 248514 DEBUG nova.compute.manager [req-076931d4-b9f4-4b7c-81af-d288a058f633 req-88c3be8c-4e7f-48c4-94a0-ba7e64c3dcf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] No waiting events found dispatching network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.575 248514 WARNING nova.compute.manager [req-076931d4-b9f4-4b7c-81af-d288a058f633 req-88c3be8c-4e7f-48c4-94a0-ba7e64c3dcf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received unexpected event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 for instance with vm_state active and task_state None.
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:53 compute-0 nova_compute[248510]: 2025-12-13 09:10:53.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:54 compute-0 ceph-mon[76537]: pgmap v3293: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:10:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3294: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:10:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:55.448 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:10:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:55.449 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:10:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:10:55.449 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:10:55 compute-0 ceph-mon[76537]: pgmap v3294: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:10:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:10:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3295: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:10:56 compute-0 NetworkManager[50376]: <info>  [1765617056.7431] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/617)
Dec 13 09:10:56 compute-0 nova_compute[248510]: 2025-12-13 09:10:56.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:56 compute-0 NetworkManager[50376]: <info>  [1765617056.7449] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Dec 13 09:10:56 compute-0 ovn_controller[148476]: 2025-12-13T09:10:56Z|01487|binding|INFO|Releasing lport c9cb1492-8274-4506-acb8-65660baeb5cf from this chassis (sb_readonly=0)
Dec 13 09:10:56 compute-0 ovn_controller[148476]: 2025-12-13T09:10:56Z|01488|binding|INFO|Releasing lport b83346dd-17c7-4f89-a540-efae6916310e from this chassis (sb_readonly=0)
Dec 13 09:10:56 compute-0 ovn_controller[148476]: 2025-12-13T09:10:56Z|01489|binding|INFO|Releasing lport c9cb1492-8274-4506-acb8-65660baeb5cf from this chassis (sb_readonly=0)
Dec 13 09:10:56 compute-0 ovn_controller[148476]: 2025-12-13T09:10:56Z|01490|binding|INFO|Releasing lport b83346dd-17c7-4f89-a540-efae6916310e from this chassis (sb_readonly=0)
Dec 13 09:10:56 compute-0 nova_compute[248510]: 2025-12-13 09:10:56.802 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:56 compute-0 nova_compute[248510]: 2025-12-13 09:10:56.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:57 compute-0 nova_compute[248510]: 2025-12-13 09:10:57.708 248514 DEBUG nova.compute.manager [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-changed-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:10:57 compute-0 nova_compute[248510]: 2025-12-13 09:10:57.709 248514 DEBUG nova.compute.manager [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing instance network info cache due to event network-changed-b6bde1c8-9710-4155-9619-7040cbbca806. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:10:57 compute-0 nova_compute[248510]: 2025-12-13 09:10:57.709 248514 DEBUG oslo_concurrency.lockutils [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:10:57 compute-0 nova_compute[248510]: 2025-12-13 09:10:57.710 248514 DEBUG oslo_concurrency.lockutils [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:10:57 compute-0 nova_compute[248510]: 2025-12-13 09:10:57.710 248514 DEBUG nova.network.neutron [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing network info cache for port b6bde1c8-9710-4155-9619-7040cbbca806 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:10:57 compute-0 nova_compute[248510]: 2025-12-13 09:10:57.782 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:10:57 compute-0 ceph-mon[76537]: pgmap v3295: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:10:58 compute-0 nova_compute[248510]: 2025-12-13 09:10:58.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3296: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:10:58 compute-0 nova_compute[248510]: 2025-12-13 09:10:58.923 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:10:59 compute-0 nova_compute[248510]: 2025-12-13 09:10:59.310 248514 DEBUG nova.network.neutron [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updated VIF entry in instance network info cache for port b6bde1c8-9710-4155-9619-7040cbbca806. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:10:59 compute-0 nova_compute[248510]: 2025-12-13 09:10:59.310 248514 DEBUG nova.network.neutron [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:10:59 compute-0 nova_compute[248510]: 2025-12-13 09:10:59.345 248514 DEBUG oslo_concurrency.lockutils [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:11:00 compute-0 ceph-mon[76537]: pgmap v3296: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:11:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3297: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:11:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:11:02 compute-0 ceph-mon[76537]: pgmap v3297: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:11:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3298: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:11:03 compute-0 nova_compute[248510]: 2025-12-13 09:11:03.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:03 compute-0 nova_compute[248510]: 2025-12-13 09:11:03.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:04 compute-0 ceph-mon[76537]: pgmap v3298: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:11:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3299: 321 pgs: 321 active+clean; 109 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Dec 13 09:11:04 compute-0 ovn_controller[148476]: 2025-12-13T09:11:04Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:9b:72 10.100.0.6
Dec 13 09:11:04 compute-0 ovn_controller[148476]: 2025-12-13T09:11:04Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:9b:72 10.100.0.6
Dec 13 09:11:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:11:06 compute-0 ceph-mon[76537]: pgmap v3299: 321 pgs: 321 active+clean; 109 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Dec 13 09:11:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3300: 321 pgs: 321 active+clean; 109 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 191 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Dec 13 09:11:08 compute-0 ceph-mon[76537]: pgmap v3300: 321 pgs: 321 active+clean; 109 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 191 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.634393) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617068634449, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1963, "num_deletes": 251, "total_data_size": 3407520, "memory_usage": 3455552, "flush_reason": "Manual Compaction"}
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Dec 13 09:11:08 compute-0 nova_compute[248510]: 2025-12-13 09:11:08.722 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3301: 321 pgs: 321 active+clean; 116 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617068779878, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 3312728, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63983, "largest_seqno": 65945, "table_properties": {"data_size": 3303776, "index_size": 5573, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18032, "raw_average_key_size": 20, "raw_value_size": 3286051, "raw_average_value_size": 3655, "num_data_blocks": 247, "num_entries": 899, "num_filter_entries": 899, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616857, "oldest_key_time": 1765616857, "file_creation_time": 1765617068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 145775 microseconds, and 12559 cpu microseconds.
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.780116) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 3312728 bytes OK
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.780230) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.783700) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.783714) EVENT_LOG_v1 {"time_micros": 1765617068783709, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.783731) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 3399264, prev total WAL file size 3399264, number of live WAL files 2.
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.784835) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(3235KB)], [152(8579KB)]
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617068784955, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 12097855, "oldest_snapshot_seqno": -1}
Dec 13 09:11:08 compute-0 nova_compute[248510]: 2025-12-13 09:11:08.929 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8578 keys, 10304293 bytes, temperature: kUnknown
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617068976568, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 10304293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10250280, "index_size": 31436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 224378, "raw_average_key_size": 26, "raw_value_size": 10100606, "raw_average_value_size": 1177, "num_data_blocks": 1213, "num_entries": 8578, "num_filter_entries": 8578, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:11:08 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:11:09 compute-0 sshd-session[393148]: Invalid user solv from 193.32.162.146 port 59964
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.976862) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10304293 bytes
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.023157) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.1 rd, 53.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 9092, records dropped: 514 output_compression: NoCompression
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.023204) EVENT_LOG_v1 {"time_micros": 1765617069023186, "job": 94, "event": "compaction_finished", "compaction_time_micros": 191704, "compaction_time_cpu_micros": 47728, "output_level": 6, "num_output_files": 1, "total_output_size": 10304293, "num_input_records": 9092, "num_output_records": 8578, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617069024828, "job": 94, "event": "table_file_deletion", "file_number": 154}
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617069027978, "job": 94, "event": "table_file_deletion", "file_number": 152}
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.784690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.028108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.028117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.028121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.028124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:11:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.028127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:11:09 compute-0 sshd-session[393148]: Connection closed by invalid user solv 193.32.162.146 port 59964 [preauth]
Dec 13 09:11:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:11:09
Dec 13 09:11:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:11:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:11:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'images', '.mgr', '.rgw.root']
Dec 13 09:11:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:11:09 compute-0 nova_compute[248510]: 2025-12-13 09:11:09.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:11:09 compute-0 nova_compute[248510]: 2025-12-13 09:11:09.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 09:11:09 compute-0 ceph-mon[76537]: pgmap v3301: 321 pgs: 321 active+clean; 116 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Dec 13 09:11:09 compute-0 nova_compute[248510]: 2025-12-13 09:11:09.981 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 09:11:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:11:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:11:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:11:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:11:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:11:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:11:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3302: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 09:11:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:11:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:11:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:11:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:11:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:11:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:11:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:11:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:11:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:11:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:11:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:11:11 compute-0 ceph-mon[76537]: pgmap v3302: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 09:11:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3303: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 09:11:12 compute-0 sudo[393150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:11:12 compute-0 sudo[393150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:11:12 compute-0 sudo[393150]: pam_unix(sudo:session): session closed for user root
Dec 13 09:11:12 compute-0 sudo[393175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:11:12 compute-0 sudo[393175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:11:13 compute-0 sudo[393175]: pam_unix(sudo:session): session closed for user root
Dec 13 09:11:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:11:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:11:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:11:13 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:11:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:11:13 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:11:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:11:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:11:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:11:13 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:11:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:11:13 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:11:13 compute-0 sudo[393233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:11:13 compute-0 sudo[393233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:11:13 compute-0 sudo[393233]: pam_unix(sudo:session): session closed for user root
Dec 13 09:11:13 compute-0 nova_compute[248510]: 2025-12-13 09:11:13.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:13 compute-0 sudo[393258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:11:13 compute-0 sudo[393258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:11:13 compute-0 nova_compute[248510]: 2025-12-13 09:11:13.931 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:13 compute-0 ceph-mon[76537]: pgmap v3303: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 09:11:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:11:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:11:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:11:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:11:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:11:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:11:14 compute-0 podman[393295]: 2025-12-13 09:11:14.08346692 +0000 UTC m=+0.050670939 container create b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_khorana, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:11:14 compute-0 systemd[1]: Started libpod-conmon-b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2.scope.
Dec 13 09:11:14 compute-0 podman[393295]: 2025-12-13 09:11:14.053045424 +0000 UTC m=+0.020249514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:11:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:11:14 compute-0 podman[393295]: 2025-12-13 09:11:14.174796687 +0000 UTC m=+0.142000716 container init b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_khorana, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:11:14 compute-0 podman[393295]: 2025-12-13 09:11:14.185574585 +0000 UTC m=+0.152778574 container start b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_khorana, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:11:14 compute-0 podman[393295]: 2025-12-13 09:11:14.189321412 +0000 UTC m=+0.156525421 container attach b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_khorana, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:11:14 compute-0 sad_khorana[393311]: 167 167
Dec 13 09:11:14 compute-0 systemd[1]: libpod-b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2.scope: Deactivated successfully.
Dec 13 09:11:14 compute-0 podman[393295]: 2025-12-13 09:11:14.19582432 +0000 UTC m=+0.163028359 container died b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-29b378f1bd454c92c416782d7ffd6d03df425f5b41178497c1d620d5571bf43c-merged.mount: Deactivated successfully.
Dec 13 09:11:14 compute-0 podman[393295]: 2025-12-13 09:11:14.252665767 +0000 UTC m=+0.219869756 container remove b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_khorana, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 09:11:14 compute-0 systemd[1]: libpod-conmon-b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2.scope: Deactivated successfully.
Dec 13 09:11:14 compute-0 podman[393334]: 2025-12-13 09:11:14.496807389 +0000 UTC m=+0.049939590 container create 62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 09:11:14 compute-0 systemd[1]: Started libpod-conmon-62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17.scope.
Dec 13 09:11:14 compute-0 podman[393334]: 2025-12-13 09:11:14.475502239 +0000 UTC m=+0.028634480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:11:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47a22f3e5d42138916a8158f9bea72e084f44bb8b9dae01822bd36b2c2fd336/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47a22f3e5d42138916a8158f9bea72e084f44bb8b9dae01822bd36b2c2fd336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47a22f3e5d42138916a8158f9bea72e084f44bb8b9dae01822bd36b2c2fd336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47a22f3e5d42138916a8158f9bea72e084f44bb8b9dae01822bd36b2c2fd336/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47a22f3e5d42138916a8158f9bea72e084f44bb8b9dae01822bd36b2c2fd336/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:14 compute-0 podman[393334]: 2025-12-13 09:11:14.727832892 +0000 UTC m=+0.280965123 container init 62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:11:14 compute-0 podman[393334]: 2025-12-13 09:11:14.739708928 +0000 UTC m=+0.292841129 container start 62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:11:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3304: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:11:14 compute-0 podman[393334]: 2025-12-13 09:11:14.781918988 +0000 UTC m=+0.335051229 container attach 62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:11:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:11:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4044870584' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:11:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:11:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4044870584' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:11:15 compute-0 musing_feynman[393351]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:11:15 compute-0 musing_feynman[393351]: --> All data devices are unavailable
Dec 13 09:11:15 compute-0 systemd[1]: libpod-62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17.scope: Deactivated successfully.
Dec 13 09:11:15 compute-0 podman[393334]: 2025-12-13 09:11:15.354937887 +0000 UTC m=+0.908070128 container died 62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 09:11:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d47a22f3e5d42138916a8158f9bea72e084f44bb8b9dae01822bd36b2c2fd336-merged.mount: Deactivated successfully.
Dec 13 09:11:15 compute-0 podman[393334]: 2025-12-13 09:11:15.417149573 +0000 UTC m=+0.970281764 container remove 62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:11:15 compute-0 systemd[1]: libpod-conmon-62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17.scope: Deactivated successfully.
Dec 13 09:11:15 compute-0 sudo[393258]: pam_unix(sudo:session): session closed for user root
Dec 13 09:11:15 compute-0 sudo[393382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:11:15 compute-0 sudo[393382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:11:15 compute-0 sudo[393382]: pam_unix(sudo:session): session closed for user root
Dec 13 09:11:15 compute-0 sudo[393407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:11:15 compute-0 sudo[393407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:11:15 compute-0 podman[393444]: 2025-12-13 09:11:15.922820055 +0000 UTC m=+0.040444775 container create 24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:11:15 compute-0 systemd[1]: Started libpod-conmon-24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f.scope.
Dec 13 09:11:15 compute-0 ceph-mon[76537]: pgmap v3304: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:11:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/4044870584' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:11:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/4044870584' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:11:15 compute-0 podman[393444]: 2025-12-13 09:11:15.904626336 +0000 UTC m=+0.022251086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:11:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:11:16 compute-0 podman[393444]: 2025-12-13 09:11:16.023059093 +0000 UTC m=+0.140683793 container init 24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mirzakhani, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:11:16 compute-0 podman[393444]: 2025-12-13 09:11:16.02875229 +0000 UTC m=+0.146376990 container start 24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mirzakhani, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:11:16 compute-0 thirsty_mirzakhani[393460]: 167 167
Dec 13 09:11:16 compute-0 podman[393444]: 2025-12-13 09:11:16.032057725 +0000 UTC m=+0.149682425 container attach 24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mirzakhani, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:11:16 compute-0 systemd[1]: libpod-24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f.scope: Deactivated successfully.
Dec 13 09:11:16 compute-0 podman[393444]: 2025-12-13 09:11:16.032723652 +0000 UTC m=+0.150348352 container died 24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a6c0f478a41092c61225cf1b999d299a400f775480c20392903f124cefa74f-merged.mount: Deactivated successfully.
Dec 13 09:11:16 compute-0 podman[393444]: 2025-12-13 09:11:16.081650335 +0000 UTC m=+0.199275035 container remove 24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:11:16 compute-0 systemd[1]: libpod-conmon-24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f.scope: Deactivated successfully.
Dec 13 09:11:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:11:16 compute-0 podman[393483]: 2025-12-13 09:11:16.255625406 +0000 UTC m=+0.048124144 container create 53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 09:11:16 compute-0 systemd[1]: Started libpod-conmon-53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa.scope.
Dec 13 09:11:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:11:16 compute-0 podman[393483]: 2025-12-13 09:11:16.235932747 +0000 UTC m=+0.028431505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78482cc95833915a4f3a2d32d37a39c9b9c0e9e22ded98de72d9b13ca8d20a17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78482cc95833915a4f3a2d32d37a39c9b9c0e9e22ded98de72d9b13ca8d20a17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78482cc95833915a4f3a2d32d37a39c9b9c0e9e22ded98de72d9b13ca8d20a17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78482cc95833915a4f3a2d32d37a39c9b9c0e9e22ded98de72d9b13ca8d20a17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:16 compute-0 podman[393483]: 2025-12-13 09:11:16.351646044 +0000 UTC m=+0.144144812 container init 53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 09:11:16 compute-0 podman[393483]: 2025-12-13 09:11:16.358228274 +0000 UTC m=+0.150727032 container start 53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:11:16 compute-0 podman[393483]: 2025-12-13 09:11:16.361495368 +0000 UTC m=+0.153994086 container attach 53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 09:11:16 compute-0 tender_driscoll[393502]: {
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:     "0": [
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:         {
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "devices": [
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "/dev/loop3"
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             ],
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_name": "ceph_lv0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_size": "21470642176",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "name": "ceph_lv0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "tags": {
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.cluster_name": "ceph",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.crush_device_class": "",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.encrypted": "0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.objectstore": "bluestore",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.osd_id": "0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.type": "block",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.vdo": "0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.with_tpm": "0"
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             },
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "type": "block",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "vg_name": "ceph_vg0"
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:         }
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:     ],
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:     "1": [
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:         {
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "devices": [
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "/dev/loop4"
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             ],
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_name": "ceph_lv1",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_size": "21470642176",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "name": "ceph_lv1",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "tags": {
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.cluster_name": "ceph",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.crush_device_class": "",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.encrypted": "0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.objectstore": "bluestore",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.osd_id": "1",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.type": "block",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.vdo": "0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.with_tpm": "0"
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             },
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "type": "block",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "vg_name": "ceph_vg1"
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:         }
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:     ],
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:     "2": [
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:         {
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "devices": [
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "/dev/loop5"
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             ],
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_name": "ceph_lv2",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_size": "21470642176",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "name": "ceph_lv2",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "tags": {
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.cluster_name": "ceph",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.crush_device_class": "",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.encrypted": "0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.objectstore": "bluestore",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.osd_id": "2",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.type": "block",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.vdo": "0",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:                 "ceph.with_tpm": "0"
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             },
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "type": "block",
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:             "vg_name": "ceph_vg2"
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:         }
Dec 13 09:11:16 compute-0 tender_driscoll[393502]:     ]
Dec 13 09:11:16 compute-0 tender_driscoll[393502]: }
Dec 13 09:11:16 compute-0 systemd[1]: libpod-53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa.scope: Deactivated successfully.
Dec 13 09:11:16 compute-0 podman[393483]: 2025-12-13 09:11:16.729444726 +0000 UTC m=+0.521943474 container died 53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:11:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3305: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 107 KiB/s wr, 19 op/s
Dec 13 09:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-78482cc95833915a4f3a2d32d37a39c9b9c0e9e22ded98de72d9b13ca8d20a17-merged.mount: Deactivated successfully.
Dec 13 09:11:16 compute-0 podman[393483]: 2025-12-13 09:11:16.788951002 +0000 UTC m=+0.581449730 container remove 53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:11:16 compute-0 systemd[1]: libpod-conmon-53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa.scope: Deactivated successfully.
Dec 13 09:11:16 compute-0 sudo[393407]: pam_unix(sudo:session): session closed for user root
Dec 13 09:11:16 compute-0 sudo[393523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:11:16 compute-0 sudo[393523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:11:16 compute-0 sudo[393523]: pam_unix(sudo:session): session closed for user root
Dec 13 09:11:16 compute-0 nova_compute[248510]: 2025-12-13 09:11:16.975 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:11:17 compute-0 sudo[393548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:11:17 compute-0 sudo[393548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:11:17 compute-0 podman[393585]: 2025-12-13 09:11:17.281722601 +0000 UTC m=+0.042103848 container create d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_satoshi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 09:11:17 compute-0 systemd[1]: Started libpod-conmon-d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced.scope.
Dec 13 09:11:17 compute-0 podman[393585]: 2025-12-13 09:11:17.261779996 +0000 UTC m=+0.022161283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:11:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:11:17 compute-0 podman[393585]: 2025-12-13 09:11:17.388563989 +0000 UTC m=+0.148945316 container init d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_satoshi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 09:11:17 compute-0 podman[393585]: 2025-12-13 09:11:17.395960099 +0000 UTC m=+0.156341336 container start d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:11:17 compute-0 podman[393585]: 2025-12-13 09:11:17.39945819 +0000 UTC m=+0.159839457 container attach d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_satoshi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:11:17 compute-0 laughing_satoshi[393602]: 167 167
Dec 13 09:11:17 compute-0 systemd[1]: libpod-d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced.scope: Deactivated successfully.
Dec 13 09:11:17 compute-0 podman[393585]: 2025-12-13 09:11:17.401619956 +0000 UTC m=+0.162001233 container died d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_satoshi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 09:11:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d11c7ed76ba4296c320b7cc36c6fa16d2ad2049192036b833b55b09f55a640cb-merged.mount: Deactivated successfully.
Dec 13 09:11:17 compute-0 podman[393585]: 2025-12-13 09:11:17.456635216 +0000 UTC m=+0.217016493 container remove d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_satoshi, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 09:11:17 compute-0 systemd[1]: libpod-conmon-d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced.scope: Deactivated successfully.
Dec 13 09:11:17 compute-0 podman[393625]: 2025-12-13 09:11:17.7179434 +0000 UTC m=+0.073032316 container create bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 09:11:17 compute-0 systemd[1]: Started libpod-conmon-bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1.scope.
Dec 13 09:11:17 compute-0 podman[393625]: 2025-12-13 09:11:17.690732498 +0000 UTC m=+0.045821484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:11:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5229feeec5708b75d0794c4ea51426f325d50f30a7c799be574a188fa58d230/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5229feeec5708b75d0794c4ea51426f325d50f30a7c799be574a188fa58d230/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5229feeec5708b75d0794c4ea51426f325d50f30a7c799be574a188fa58d230/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5229feeec5708b75d0794c4ea51426f325d50f30a7c799be574a188fa58d230/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:11:17 compute-0 podman[393625]: 2025-12-13 09:11:17.81982839 +0000 UTC m=+0.174917316 container init bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 09:11:17 compute-0 podman[393625]: 2025-12-13 09:11:17.834400616 +0000 UTC m=+0.189489552 container start bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_yonath, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:11:17 compute-0 podman[393625]: 2025-12-13 09:11:17.839932809 +0000 UTC m=+0.195021745 container attach bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_yonath, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:11:17 compute-0 ceph-mon[76537]: pgmap v3305: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 107 KiB/s wr, 19 op/s
Dec 13 09:11:18 compute-0 lvm[393720]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:11:18 compute-0 lvm[393720]: VG ceph_vg0 finished
Dec 13 09:11:18 compute-0 lvm[393721]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:11:18 compute-0 lvm[393721]: VG ceph_vg1 finished
Dec 13 09:11:18 compute-0 lvm[393723]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:11:18 compute-0 lvm[393723]: VG ceph_vg2 finished
Dec 13 09:11:18 compute-0 lvm[393724]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:11:18 compute-0 lvm[393724]: VG ceph_vg0 finished
Dec 13 09:11:18 compute-0 nice_yonath[393642]: {}
Dec 13 09:11:18 compute-0 systemd[1]: libpod-bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1.scope: Deactivated successfully.
Dec 13 09:11:18 compute-0 systemd[1]: libpod-bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1.scope: Consumed 1.391s CPU time.
Dec 13 09:11:18 compute-0 podman[393625]: 2025-12-13 09:11:18.743928722 +0000 UTC m=+1.099017668 container died bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:11:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3306: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 107 KiB/s wr, 19 op/s
Dec 13 09:11:18 compute-0 nova_compute[248510]: 2025-12-13 09:11:18.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:18 compute-0 nova_compute[248510]: 2025-12-13 09:11:18.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5229feeec5708b75d0794c4ea51426f325d50f30a7c799be574a188fa58d230-merged.mount: Deactivated successfully.
Dec 13 09:11:18 compute-0 podman[393625]: 2025-12-13 09:11:18.804030603 +0000 UTC m=+1.159119519 container remove bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 09:11:18 compute-0 systemd[1]: libpod-conmon-bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1.scope: Deactivated successfully.
Dec 13 09:11:18 compute-0 sudo[393548]: pam_unix(sudo:session): session closed for user root
Dec 13 09:11:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:11:18 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:11:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:11:18 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:11:18 compute-0 nova_compute[248510]: 2025-12-13 09:11:18.933 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:18 compute-0 sudo[393739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:11:18 compute-0 sudo[393739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:11:18 compute-0 sudo[393739]: pam_unix(sudo:session): session closed for user root
Dec 13 09:11:19 compute-0 ceph-mon[76537]: pgmap v3306: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 107 KiB/s wr, 19 op/s
Dec 13 09:11:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:11:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:11:19 compute-0 nova_compute[248510]: 2025-12-13 09:11:19.940 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:19 compute-0 nova_compute[248510]: 2025-12-13 09:11:19.941 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:19 compute-0 nova_compute[248510]: 2025-12-13 09:11:19.965 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:11:20 compute-0 nova_compute[248510]: 2025-12-13 09:11:20.094 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:20 compute-0 nova_compute[248510]: 2025-12-13 09:11:20.095 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:20 compute-0 nova_compute[248510]: 2025-12-13 09:11:20.102 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:11:20 compute-0 nova_compute[248510]: 2025-12-13 09:11:20.103 248514 INFO nova.compute.claims [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:11:20 compute-0 nova_compute[248510]: 2025-12-13 09:11:20.311 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:11:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3307: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 100 KiB/s wr, 8 op/s
Dec 13 09:11:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:11:20 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2212060461' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:11:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2212060461' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:11:20 compute-0 nova_compute[248510]: 2025-12-13 09:11:20.881 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:11:20 compute-0 nova_compute[248510]: 2025-12-13 09:11:20.888 248514 DEBUG nova.compute.provider_tree [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:11:20 compute-0 nova_compute[248510]: 2025-12-13 09:11:20.908 248514 DEBUG nova.scheduler.client.report [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:11:20 compute-0 nova_compute[248510]: 2025-12-13 09:11:20.941 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:20 compute-0 nova_compute[248510]: 2025-12-13 09:11:20.942 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.003 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.004 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.147 248514 INFO nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.183 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:11:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.301 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.303 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.303 248514 INFO nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Creating image(s)
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.343 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.381 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.417 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.422 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007738589215823556 of space, bias 1.0, pg target 0.23215767647470667 quantized to 32 (current 32)
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697360151345169 of space, bias 1.0, pg target 0.20092080454035507 quantized to 32 (current 32)
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:11:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.488 248514 DEBUG nova.policy [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.534 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.536 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.537 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.537 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.567 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:11:21 compute-0 nova_compute[248510]: 2025-12-13 09:11:21.576 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:11:21 compute-0 ceph-mon[76537]: pgmap v3307: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 100 KiB/s wr, 8 op/s
Dec 13 09:11:22 compute-0 nova_compute[248510]: 2025-12-13 09:11:22.023 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:11:22 compute-0 nova_compute[248510]: 2025-12-13 09:11:22.096 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:11:22 compute-0 nova_compute[248510]: 2025-12-13 09:11:22.177 248514 DEBUG nova.objects.instance [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d2b3fb1-4256-4dd2-bbba-188212ddd10e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:11:22 compute-0 nova_compute[248510]: 2025-12-13 09:11:22.203 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:11:22 compute-0 nova_compute[248510]: 2025-12-13 09:11:22.203 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Ensure instance console log exists: /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:11:22 compute-0 nova_compute[248510]: 2025-12-13 09:11:22.204 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:22 compute-0 nova_compute[248510]: 2025-12-13 09:11:22.204 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:22 compute-0 nova_compute[248510]: 2025-12-13 09:11:22.205 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3308: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Dec 13 09:11:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:22.839 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:11:22 compute-0 nova_compute[248510]: 2025-12-13 09:11:22.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:22 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:22.840 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:11:23 compute-0 nova_compute[248510]: 2025-12-13 09:11:23.070 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Successfully created port: 10f9c496-a049-40c4-a29b-fe4279219d91 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:11:23 compute-0 nova_compute[248510]: 2025-12-13 09:11:23.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:23 compute-0 nova_compute[248510]: 2025-12-13 09:11:23.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:24 compute-0 podman[393954]: 2025-12-13 09:11:24.001346601 +0000 UTC m=+0.075353195 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 09:11:24 compute-0 podman[393953]: 2025-12-13 09:11:24.001447494 +0000 UTC m=+0.083588338 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 09:11:24 compute-0 podman[393952]: 2025-12-13 09:11:24.02145232 +0000 UTC m=+0.103874382 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 13 09:11:24 compute-0 ceph-mon[76537]: pgmap v3308: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.225 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.276 248514 WARNING nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.276 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.276 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 0d2b3fb1-4256-4dd2-bbba-188212ddd10e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.277 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.278 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.278 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.309 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3309: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.826 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.826 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.826 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.856 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 09:11:24 compute-0 nova_compute[248510]: 2025-12-13 09:11:24.961 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Successfully created port: 47c08f82-5e0f-4d58-bc8a-5c049f6846fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:11:25 compute-0 nova_compute[248510]: 2025-12-13 09:11:25.095 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:11:25 compute-0 nova_compute[248510]: 2025-12-13 09:11:25.096 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:11:25 compute-0 nova_compute[248510]: 2025-12-13 09:11:25.096 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 09:11:25 compute-0 nova_compute[248510]: 2025-12-13 09:11:25.096 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:11:26 compute-0 nova_compute[248510]: 2025-12-13 09:11:26.153 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Successfully updated port: 10f9c496-a049-40c4-a29b-fe4279219d91 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:11:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:11:26 compute-0 nova_compute[248510]: 2025-12-13 09:11:26.342 248514 DEBUG nova.compute.manager [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-changed-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:11:26 compute-0 nova_compute[248510]: 2025-12-13 09:11:26.342 248514 DEBUG nova.compute.manager [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing instance network info cache due to event network-changed-10f9c496-a049-40c4-a29b-fe4279219d91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:11:26 compute-0 nova_compute[248510]: 2025-12-13 09:11:26.343 248514 DEBUG oslo_concurrency.lockutils [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:11:26 compute-0 nova_compute[248510]: 2025-12-13 09:11:26.343 248514 DEBUG oslo_concurrency.lockutils [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:11:26 compute-0 nova_compute[248510]: 2025-12-13 09:11:26.343 248514 DEBUG nova.network.neutron [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing network info cache for port 10f9c496-a049-40c4-a29b-fe4279219d91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:11:26 compute-0 ceph-mon[76537]: pgmap v3309: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:11:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3310: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:11:26 compute-0 nova_compute[248510]: 2025-12-13 09:11:26.768 248514 DEBUG nova.network.neutron [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:11:27 compute-0 nova_compute[248510]: 2025-12-13 09:11:27.162 248514 DEBUG nova.network.neutron [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:11:27 compute-0 nova_compute[248510]: 2025-12-13 09:11:27.194 248514 DEBUG oslo_concurrency.lockutils [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:11:27 compute-0 nova_compute[248510]: 2025-12-13 09:11:27.226 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Successfully updated port: 47c08f82-5e0f-4d58-bc8a-5c049f6846fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:11:27 compute-0 nova_compute[248510]: 2025-12-13 09:11:27.247 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:11:27 compute-0 nova_compute[248510]: 2025-12-13 09:11:27.247 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:11:27 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 09:11:27 compute-0 nova_compute[248510]: 2025-12-13 09:11:27.248 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:11:27 compute-0 nova_compute[248510]: 2025-12-13 09:11:27.493 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.257 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.295 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.296 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.300 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.457 248514 DEBUG nova.compute.manager [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-changed-47c08f82-5e0f-4d58-bc8a-5c049f6846fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.458 248514 DEBUG nova.compute.manager [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing instance network info cache due to event network-changed-47c08f82-5e0f-4d58-bc8a-5c049f6846fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.459 248514 DEBUG oslo_concurrency.lockutils [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:11:28 compute-0 ceph-mon[76537]: pgmap v3310: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:11:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3311: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.765 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.831 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.832 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.833 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.833 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.834 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:11:28 compute-0 nova_compute[248510]: 2025-12-13 09:11:28.937 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:11:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637127471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.393 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.484 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.485 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.711 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.712 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3299MB free_disk=59.921106703579426GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.712 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.712 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.807 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.829 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.829 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 0d2b3fb1-4256-4dd2-bbba-188212ddd10e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.830 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.830 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:11:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:29.843 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.862 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.863 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Instance network_info: |[{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.864 248514 DEBUG oslo_concurrency.lockutils [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.864 248514 DEBUG nova.network.neutron [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing network info cache for port 47c08f82-5e0f-4d58-bc8a-5c049f6846fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.871 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Start _get_guest_xml network_info=[{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.884 248514 WARNING nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.890 248514 DEBUG nova.virt.libvirt.host [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.891 248514 DEBUG nova.virt.libvirt.host [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.904 248514 DEBUG nova.virt.libvirt.host [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.905 248514 DEBUG nova.virt.libvirt.host [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.906 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.906 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.907 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.908 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.908 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.908 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.909 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.909 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.910 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.910 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.911 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.911 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.916 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:11:29 compute-0 ceph-mon[76537]: pgmap v3311: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:11:29 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2637127471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:11:29 compute-0 nova_compute[248510]: 2025-12-13 09:11:29.973 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:11:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:11:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/516835256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:11:30 compute-0 nova_compute[248510]: 2025-12-13 09:11:30.597 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:11:30 compute-0 nova_compute[248510]: 2025-12-13 09:11:30.603 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:11:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:11:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1536569844' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:11:30 compute-0 nova_compute[248510]: 2025-12-13 09:11:30.633 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.717s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:11:30 compute-0 nova_compute[248510]: 2025-12-13 09:11:30.658 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:11:30 compute-0 nova_compute[248510]: 2025-12-13 09:11:30.663 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:11:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3312: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:11:30 compute-0 nova_compute[248510]: 2025-12-13 09:11:30.812 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:11:30 compute-0 nova_compute[248510]: 2025-12-13 09:11:30.843 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:11:30 compute-0 nova_compute[248510]: 2025-12-13 09:11:30.843 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:11:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:11:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2965586814' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:11:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/516835256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:11:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1536569844' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.242 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.245 248514 DEBUG nova.virt.libvirt.vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1965828306',display_name='tempest-TestGettingAddress-server-1965828306',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1965828306',id=142,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-8ma5x5dk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:11:21Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=0d2b3fb1-4256-4dd2-bbba-188212ddd10e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.245 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.247 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.249 248514 DEBUG nova.virt.libvirt.vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1965828306',display_name='tempest-TestGettingAddress-server-1965828306',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1965828306',id=142,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-8ma5x5dk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:11:21Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=0d2b3fb1-4256-4dd2-bbba-188212ddd10e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.249 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.250 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.252 248514 DEBUG nova.objects.instance [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d2b3fb1-4256-4dd2-bbba-188212ddd10e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.330 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:11:31 compute-0 nova_compute[248510]:   <uuid>0d2b3fb1-4256-4dd2-bbba-188212ddd10e</uuid>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   <name>instance-0000008e</name>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-1965828306</nova:name>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:11:29</nova:creationTime>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <nova:port uuid="10f9c496-a049-40c4-a29b-fe4279219d91">
Dec 13 09:11:31 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <nova:port uuid="47c08f82-5e0f-4d58-bc8a-5c049f6846fd">
Dec 13 09:11:31 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe39:7e4a" ipVersion="6"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe39:7e4a" ipVersion="6"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <system>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <entry name="serial">0d2b3fb1-4256-4dd2-bbba-188212ddd10e</entry>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <entry name="uuid">0d2b3fb1-4256-4dd2-bbba-188212ddd10e</entry>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     </system>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   <os>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   </os>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   <features>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   </features>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk">
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       </source>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk.config">
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       </source>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:11:31 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:5f:00:ab"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <target dev="tap10f9c496-a0"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:39:7e:4a"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <target dev="tap47c08f82-5e"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/console.log" append="off"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <video>
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     </video>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:11:31 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:11:31 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:11:31 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:11:31 compute-0 nova_compute[248510]: </domain>
Dec 13 09:11:31 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.332 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Preparing to wait for external event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.332 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.333 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.333 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.334 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Preparing to wait for external event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.334 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.335 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.335 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.337 248514 DEBUG nova.virt.libvirt.vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1965828306',display_name='tempest-TestGettingAddress-server-1965828306',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1965828306',id=142,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-8ma5x5dk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:11:21Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=0d2b3fb1-4256-4dd2-bbba-188212ddd10e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.337 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.338 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.340 248514 DEBUG os_vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.343 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.343 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.354 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.355 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10f9c496-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.356 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10f9c496-a0, col_values=(('external_ids', {'iface-id': '10f9c496-a049-40c4-a29b-fe4279219d91', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:00:ab', 'vm-uuid': '0d2b3fb1-4256-4dd2-bbba-188212ddd10e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.359 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:31 compute-0 NetworkManager[50376]: <info>  [1765617091.3606] manager: (tap10f9c496-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/619)
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.362 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.369 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.370 248514 INFO os_vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0')
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.372 248514 DEBUG nova.virt.libvirt.vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1965828306',display_name='tempest-TestGettingAddress-server-1965828306',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1965828306',id=142,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-8ma5x5dk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:11:21Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=0d2b3fb1-4256-4dd2-bbba-188212ddd10e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.373 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.374 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.375 248514 DEBUG os_vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.376 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.376 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.377 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.380 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.380 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47c08f82-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.381 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47c08f82-5e, col_values=(('external_ids', {'iface-id': '47c08f82-5e0f-4d58-bc8a-5c049f6846fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:7e:4a', 'vm-uuid': '0d2b3fb1-4256-4dd2-bbba-188212ddd10e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.382 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:31 compute-0 NetworkManager[50376]: <info>  [1765617091.3835] manager: (tap47c08f82-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.385 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.394 248514 INFO os_vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e')
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.517 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.518 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.518 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:5f:00:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.519 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:39:7e:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.520 248514 INFO nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Using config drive
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.549 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.846 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.846 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:11:31 compute-0 nova_compute[248510]: 2025-12-13 09:11:31.847 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:11:32 compute-0 ceph-mon[76537]: pgmap v3312: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:11:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2965586814' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:11:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3313: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:11:32 compute-0 nova_compute[248510]: 2025-12-13 09:11:32.942 248514 INFO nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Creating config drive at /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/disk.config
Dec 13 09:11:32 compute-0 nova_compute[248510]: 2025-12-13 09:11:32.951 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0qcoyc6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.009 248514 DEBUG nova.network.neutron [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updated VIF entry in instance network info cache for port 47c08f82-5e0f-4d58-bc8a-5c049f6846fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.011 248514 DEBUG nova.network.neutron [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.036 248514 DEBUG oslo_concurrency.lockutils [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.118 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0qcoyc6" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.160 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.165 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/disk.config 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.577 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/disk.config 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.578 248514 INFO nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Deleting local config drive /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/disk.config because it was imported into RBD.
Dec 13 09:11:33 compute-0 NetworkManager[50376]: <info>  [1765617093.6539] manager: (tap10f9c496-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Dec 13 09:11:33 compute-0 kernel: tap10f9c496-a0: entered promiscuous mode
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:33 compute-0 ovn_controller[148476]: 2025-12-13T09:11:33Z|01491|binding|INFO|Claiming lport 10f9c496-a049-40c4-a29b-fe4279219d91 for this chassis.
Dec 13 09:11:33 compute-0 ovn_controller[148476]: 2025-12-13T09:11:33Z|01492|binding|INFO|10f9c496-a049-40c4-a29b-fe4279219d91: Claiming fa:16:3e:5f:00:ab 10.100.0.11
Dec 13 09:11:33 compute-0 ovn_controller[148476]: 2025-12-13T09:11:33Z|01493|binding|INFO|Setting lport 10f9c496-a049-40c4-a29b-fe4279219d91 ovn-installed in OVS
Dec 13 09:11:33 compute-0 kernel: tap47c08f82-5e: entered promiscuous mode
Dec 13 09:11:33 compute-0 NetworkManager[50376]: <info>  [1765617093.6846] manager: (tap47c08f82-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/622)
Dec 13 09:11:33 compute-0 ovn_controller[148476]: 2025-12-13T09:11:33Z|01494|if_status|INFO|Dropped 12 log messages in last 43 seconds (most recently, 43 seconds ago) due to excessive rate
Dec 13 09:11:33 compute-0 ovn_controller[148476]: 2025-12-13T09:11:33Z|01495|if_status|INFO|Not updating pb chassis for 47c08f82-5e0f-4d58-bc8a-5c049f6846fd now as sb is readonly
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.685 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:33 compute-0 systemd-udevd[394199]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:11:33 compute-0 systemd-udevd[394201]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:11:33 compute-0 NetworkManager[50376]: <info>  [1765617093.7007] device (tap10f9c496-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:11:33 compute-0 NetworkManager[50376]: <info>  [1765617093.7016] device (tap10f9c496-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:11:33 compute-0 NetworkManager[50376]: <info>  [1765617093.7021] device (tap47c08f82-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:11:33 compute-0 NetworkManager[50376]: <info>  [1765617093.7028] device (tap47c08f82-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.705 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:33 compute-0 systemd-machined[210538]: New machine qemu-173-instance-0000008e.
Dec 13 09:11:33 compute-0 systemd[1]: Started Virtual Machine qemu-173-instance-0000008e.
Dec 13 09:11:33 compute-0 nova_compute[248510]: 2025-12-13 09:11:33.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.073 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:00:ab 10.100.0.11'], port_security=['fa:16:3e:5f:00:ab 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0d2b3fb1-4256-4dd2-bbba-188212ddd10e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44250a44-17ff-41cb-ae95-e575bff91a4a, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=10f9c496-a049-40c4-a29b-fe4279219d91) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.074 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 10f9c496-a049-40c4-a29b-fe4279219d91 in datapath a7065d5a-edce-4470-a56d-ab529d56aa3c bound to our chassis
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.075 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7065d5a-edce-4470-a56d-ab529d56aa3c
Dec 13 09:11:34 compute-0 ovn_controller[148476]: 2025-12-13T09:11:34Z|01496|binding|INFO|Claiming lport 47c08f82-5e0f-4d58-bc8a-5c049f6846fd for this chassis.
Dec 13 09:11:34 compute-0 ovn_controller[148476]: 2025-12-13T09:11:34Z|01497|binding|INFO|47c08f82-5e0f-4d58-bc8a-5c049f6846fd: Claiming fa:16:3e:39:7e:4a 2001:db8:0:1:f816:3eff:fe39:7e4a 2001:db8::f816:3eff:fe39:7e4a
Dec 13 09:11:34 compute-0 ovn_controller[148476]: 2025-12-13T09:11:34Z|01498|binding|INFO|Setting lport 10f9c496-a049-40c4-a29b-fe4279219d91 up in Southbound
Dec 13 09:11:34 compute-0 ovn_controller[148476]: 2025-12-13T09:11:34Z|01499|binding|INFO|Setting lport 47c08f82-5e0f-4d58-bc8a-5c049f6846fd ovn-installed in OVS
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.101 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8b51fa-ef14-4484-91e9-6bf41c49f773]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.147 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2def0345-5218-45ff-a93d-69710f8263fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.151 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8836a3-a881-49bc-ba7f-ce5905407fe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.192 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a957350f-fee7-4b7c-b0b4-da414781b471]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.219 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8ebf96-033e-48c2-9e2a-75620c7101a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7065d5a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:e7:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962803, 'reachable_time': 25861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394218, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.242 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64cfdff7-142b-46a1-a396-c8ea5b3544d5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa7065d5a-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962816, 'tstamp': 962816}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394219, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa7065d5a-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962820, 'tstamp': 962820}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394219, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.245 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7065d5a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.249 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7065d5a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.249 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.249 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7065d5a-e0, col_values=(('external_ids', {'iface-id': 'b83346dd-17c7-4f89-a540-efae6916310e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.250 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.297 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7e:4a 2001:db8:0:1:f816:3eff:fe39:7e4a 2001:db8::f816:3eff:fe39:7e4a'], port_security=['fa:16:3e:39:7e:4a 2001:db8:0:1:f816:3eff:fe39:7e4a 2001:db8::f816:3eff:fe39:7e4a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe39:7e4a/64 2001:db8::f816:3eff:fe39:7e4a/64', 'neutron:device_id': '0d2b3fb1-4256-4dd2-bbba-188212ddd10e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28a1c52d-6dd3-4202-a082-66c542a5a384, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=47c08f82-5e0f-4d58-bc8a-5c049f6846fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.298 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 47c08f82-5e0f-4d58-bc8a-5c049f6846fd in datapath 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c unbound from our chassis
Dec 13 09:11:34 compute-0 ovn_controller[148476]: 2025-12-13T09:11:34Z|01500|binding|INFO|Setting lport 47c08f82-5e0f-4d58-bc8a-5c049f6846fd up in Southbound
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.300 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.329 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[26a62669-c33a-49b6-8149-870a9e82c21e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:11:34 compute-0 ceph-mon[76537]: pgmap v3313: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.369 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a763b7c9-acd4-4bfa-9d95-6add1ada7fb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.372 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b9295965-6624-447e-b305-cc6c15c4d40f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.411 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[733930f5-2226-425e-8eae-0b6e46cd8337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.438 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c9d47b-7929-4cec-9854-589e798c104a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7661cc7b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:a3:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962968, 'reachable_time': 41750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394225, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.461 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe47072-3aa4-42e0-91b4-a2db75967565]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7661cc7b-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962982, 'tstamp': 962982}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394226, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.463 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7661cc7b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.466 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.466 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7661cc7b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.466 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.466 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7661cc7b-f0, col_values=(('external_ids', {'iface-id': 'c9cb1492-8274-4506-acb8-65660baeb5cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:11:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.467 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:11:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3314: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.924 248514 DEBUG nova.compute.manager [req-8b02b0c4-6861-49c2-b6e4-989118e05dfa req-fe9d80bc-8c63-4be1-9f85-5d8f4eee19b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.925 248514 DEBUG oslo_concurrency.lockutils [req-8b02b0c4-6861-49c2-b6e4-989118e05dfa req-fe9d80bc-8c63-4be1-9f85-5d8f4eee19b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.925 248514 DEBUG oslo_concurrency.lockutils [req-8b02b0c4-6861-49c2-b6e4-989118e05dfa req-fe9d80bc-8c63-4be1-9f85-5d8f4eee19b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.926 248514 DEBUG oslo_concurrency.lockutils [req-8b02b0c4-6861-49c2-b6e4-989118e05dfa req-fe9d80bc-8c63-4be1-9f85-5d8f4eee19b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.926 248514 DEBUG nova.compute.manager [req-8b02b0c4-6861-49c2-b6e4-989118e05dfa req-fe9d80bc-8c63-4be1-9f85-5d8f4eee19b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Processing event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.969 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617094.9688623, 0d2b3fb1-4256-4dd2-bbba-188212ddd10e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:11:34 compute-0 nova_compute[248510]: 2025-12-13 09:11:34.970 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] VM Started (Lifecycle Event)
Dec 13 09:11:35 compute-0 nova_compute[248510]: 2025-12-13 09:11:35.272 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:11:35 compute-0 nova_compute[248510]: 2025-12-13 09:11:35.277 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617094.969139, 0d2b3fb1-4256-4dd2-bbba-188212ddd10e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:11:35 compute-0 nova_compute[248510]: 2025-12-13 09:11:35.278 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] VM Paused (Lifecycle Event)
Dec 13 09:11:35 compute-0 nova_compute[248510]: 2025-12-13 09:11:35.304 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:11:35 compute-0 nova_compute[248510]: 2025-12-13 09:11:35.308 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:11:35 compute-0 nova_compute[248510]: 2025-12-13 09:11:35.353 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:11:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:11:36 compute-0 ceph-mon[76537]: pgmap v3314: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 13 09:11:36 compute-0 nova_compute[248510]: 2025-12-13 09:11:36.384 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3315: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 511 B/s wr, 4 op/s
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.008 248514 DEBUG nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.008 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.009 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.009 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.009 248514 DEBUG nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] No event matching network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 in dict_keys([('network-vif-plugged', '47c08f82-5e0f-4d58-bc8a-5c049f6846fd')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.009 248514 WARNING nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received unexpected event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 for instance with vm_state building and task_state spawning.
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.009 248514 DEBUG nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.009 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.010 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.010 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.010 248514 DEBUG nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Processing event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.010 248514 DEBUG nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.010 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.010 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.011 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.011 248514 DEBUG nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] No waiting events found dispatching network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.011 248514 WARNING nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received unexpected event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd for instance with vm_state building and task_state spawning.
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.011 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.015 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617097.0153348, 0d2b3fb1-4256-4dd2-bbba-188212ddd10e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.015 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] VM Resumed (Lifecycle Event)
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.017 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.019 248514 INFO nova.virt.libvirt.driver [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Instance spawned successfully.
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.019 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.043 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.048 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.051 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.051 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.051 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.052 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.052 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.052 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.196 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.207 248514 INFO nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Took 15.91 seconds to spawn the instance on the hypervisor.
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.208 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.279 248514 INFO nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Took 17.21 seconds to build instance.
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.302 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.302 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 13.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.303 248514 INFO nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.303 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:37 compute-0 nova_compute[248510]: 2025-12-13 09:11:37.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:11:38 compute-0 ceph-mon[76537]: pgmap v3315: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 511 B/s wr, 4 op/s
Dec 13 09:11:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3316: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 616 KiB/s rd, 14 KiB/s wr, 27 op/s
Dec 13 09:11:38 compute-0 nova_compute[248510]: 2025-12-13 09:11:38.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:11:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:11:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:11:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:11:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:11:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:11:40 compute-0 ceph-mon[76537]: pgmap v3316: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 616 KiB/s rd, 14 KiB/s wr, 27 op/s
Dec 13 09:11:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3317: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 15 KiB/s wr, 67 op/s
Dec 13 09:11:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:11:41 compute-0 nova_compute[248510]: 2025-12-13 09:11:41.387 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:41 compute-0 nova_compute[248510]: 2025-12-13 09:11:41.863 248514 DEBUG nova.compute.manager [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-changed-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:11:41 compute-0 nova_compute[248510]: 2025-12-13 09:11:41.864 248514 DEBUG nova.compute.manager [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing instance network info cache due to event network-changed-10f9c496-a049-40c4-a29b-fe4279219d91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:11:41 compute-0 nova_compute[248510]: 2025-12-13 09:11:41.865 248514 DEBUG oslo_concurrency.lockutils [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:11:41 compute-0 nova_compute[248510]: 2025-12-13 09:11:41.866 248514 DEBUG oslo_concurrency.lockutils [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:11:41 compute-0 nova_compute[248510]: 2025-12-13 09:11:41.866 248514 DEBUG nova.network.neutron [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing network info cache for port 10f9c496-a049-40c4-a29b-fe4279219d91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:11:42 compute-0 ceph-mon[76537]: pgmap v3317: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 15 KiB/s wr, 67 op/s
Dec 13 09:11:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3318: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 15 KiB/s wr, 67 op/s
Dec 13 09:11:43 compute-0 nova_compute[248510]: 2025-12-13 09:11:43.226 248514 DEBUG nova.network.neutron [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updated VIF entry in instance network info cache for port 10f9c496-a049-40c4-a29b-fe4279219d91. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:11:43 compute-0 nova_compute[248510]: 2025-12-13 09:11:43.227 248514 DEBUG nova.network.neutron [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:11:43 compute-0 nova_compute[248510]: 2025-12-13 09:11:43.261 248514 DEBUG oslo_concurrency.lockutils [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:11:43 compute-0 nova_compute[248510]: 2025-12-13 09:11:43.773 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:44 compute-0 ceph-mon[76537]: pgmap v3318: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 15 KiB/s wr, 67 op/s
Dec 13 09:11:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3319: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:11:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:11:46 compute-0 nova_compute[248510]: 2025-12-13 09:11:46.390 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:46 compute-0 ceph-mon[76537]: pgmap v3319: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:11:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3320: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 69 op/s
Dec 13 09:11:47 compute-0 ceph-mon[76537]: pgmap v3320: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 69 op/s
Dec 13 09:11:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3321: 321 pgs: 321 active+clean; 171 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 573 KiB/s wr, 76 op/s
Dec 13 09:11:48 compute-0 nova_compute[248510]: 2025-12-13 09:11:48.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:50 compute-0 ceph-mon[76537]: pgmap v3321: 321 pgs: 321 active+clean; 171 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 573 KiB/s wr, 76 op/s
Dec 13 09:11:50 compute-0 ovn_controller[148476]: 2025-12-13T09:11:50Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:00:ab 10.100.0.11
Dec 13 09:11:50 compute-0 ovn_controller[148476]: 2025-12-13T09:11:50Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:00:ab 10.100.0.11
Dec 13 09:11:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3322: 321 pgs: 321 active+clean; 184 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.7 MiB/s wr, 67 op/s
Dec 13 09:11:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:11:51 compute-0 nova_compute[248510]: 2025-12-13 09:11:51.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3323: 321 pgs: 321 active+clean; 184 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Dec 13 09:11:52 compute-0 ceph-mon[76537]: pgmap v3322: 321 pgs: 321 active+clean; 184 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.7 MiB/s wr, 67 op/s
Dec 13 09:11:53 compute-0 nova_compute[248510]: 2025-12-13 09:11:53.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:53 compute-0 ceph-mon[76537]: pgmap v3323: 321 pgs: 321 active+clean; 184 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Dec 13 09:11:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3324: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 517 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Dec 13 09:11:55 compute-0 podman[394273]: 2025-12-13 09:11:55.009094163 +0000 UTC m=+0.075071549 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:11:55 compute-0 podman[394274]: 2025-12-13 09:11:55.021209936 +0000 UTC m=+0.091839232 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:11:55 compute-0 podman[394272]: 2025-12-13 09:11:55.065018636 +0000 UTC m=+0.133154588 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 09:11:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:55.449 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:11:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:55.450 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:11:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:11:55.451 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:11:55 compute-0 ceph-mon[76537]: pgmap v3324: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 517 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Dec 13 09:11:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:11:56 compute-0 nova_compute[248510]: 2025-12-13 09:11:56.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:11:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3325: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:11:58 compute-0 ceph-mon[76537]: pgmap v3325: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:11:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3326: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:11:58 compute-0 nova_compute[248510]: 2025-12-13 09:11:58.791 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:00 compute-0 ceph-mon[76537]: pgmap v3326: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:12:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3327: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 1.6 MiB/s wr, 57 op/s
Dec 13 09:12:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:12:01 compute-0 nova_compute[248510]: 2025-12-13 09:12:01.398 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:02 compute-0 ceph-mon[76537]: pgmap v3327: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 1.6 MiB/s wr, 57 op/s
Dec 13 09:12:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3328: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 506 KiB/s wr, 43 op/s
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.186 248514 DEBUG nova.compute.manager [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-changed-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.187 248514 DEBUG nova.compute.manager [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing instance network info cache due to event network-changed-10f9c496-a049-40c4-a29b-fe4279219d91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.187 248514 DEBUG oslo_concurrency.lockutils [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.188 248514 DEBUG oslo_concurrency.lockutils [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.188 248514 DEBUG nova.network.neutron [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing network info cache for port 10f9c496-a049-40c4-a29b-fe4279219d91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.285 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.285 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.286 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.286 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.286 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.287 248514 INFO nova.compute.manager [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Terminating instance
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.289 248514 DEBUG nova.compute.manager [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:12:03 compute-0 kernel: tap10f9c496-a0 (unregistering): left promiscuous mode
Dec 13 09:12:03 compute-0 NetworkManager[50376]: <info>  [1765617123.3423] device (tap10f9c496-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:12:03 compute-0 ovn_controller[148476]: 2025-12-13T09:12:03Z|01501|binding|INFO|Releasing lport 10f9c496-a049-40c4-a29b-fe4279219d91 from this chassis (sb_readonly=0)
Dec 13 09:12:03 compute-0 ovn_controller[148476]: 2025-12-13T09:12:03Z|01502|binding|INFO|Setting lport 10f9c496-a049-40c4-a29b-fe4279219d91 down in Southbound
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.351 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 ovn_controller[148476]: 2025-12-13T09:12:03Z|01503|binding|INFO|Removing iface tap10f9c496-a0 ovn-installed in OVS
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.359 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:00:ab 10.100.0.11'], port_security=['fa:16:3e:5f:00:ab 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0d2b3fb1-4256-4dd2-bbba-188212ddd10e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44250a44-17ff-41cb-ae95-e575bff91a4a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=10f9c496-a049-40c4-a29b-fe4279219d91) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.361 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 10f9c496-a049-40c4-a29b-fe4279219d91 in datapath a7065d5a-edce-4470-a56d-ab529d56aa3c unbound from our chassis
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.362 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7065d5a-edce-4470-a56d-ab529d56aa3c
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.377 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[36be4744-650b-40da-855e-0bb564f7ca08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:03 compute-0 kernel: tap47c08f82-5e (unregistering): left promiscuous mode
Dec 13 09:12:03 compute-0 NetworkManager[50376]: <info>  [1765617123.3828] device (tap47c08f82-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:12:03 compute-0 ovn_controller[148476]: 2025-12-13T09:12:03Z|01504|binding|INFO|Releasing lport 47c08f82-5e0f-4d58-bc8a-5c049f6846fd from this chassis (sb_readonly=0)
Dec 13 09:12:03 compute-0 ovn_controller[148476]: 2025-12-13T09:12:03Z|01505|binding|INFO|Setting lport 47c08f82-5e0f-4d58-bc8a-5c049f6846fd down in Southbound
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.392 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 ovn_controller[148476]: 2025-12-13T09:12:03Z|01506|binding|INFO|Removing iface tap47c08f82-5e ovn-installed in OVS
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.401 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7e:4a 2001:db8:0:1:f816:3eff:fe39:7e4a 2001:db8::f816:3eff:fe39:7e4a'], port_security=['fa:16:3e:39:7e:4a 2001:db8:0:1:f816:3eff:fe39:7e4a 2001:db8::f816:3eff:fe39:7e4a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe39:7e4a/64 2001:db8::f816:3eff:fe39:7e4a/64', 'neutron:device_id': '0d2b3fb1-4256-4dd2-bbba-188212ddd10e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28a1c52d-6dd3-4202-a082-66c542a5a384, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=47c08f82-5e0f-4d58-bc8a-5c049f6846fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.406 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.416 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e35cf1cb-5cb8-49d9-a778-1ae25111c078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.420 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[65d25e56-6a46-4573-b723-a21e2e388557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.451 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2122d3-43a5-4d92-8c79-04c2030555ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:03 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Dec 13 09:12:03 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008e.scope: Consumed 14.705s CPU time.
Dec 13 09:12:03 compute-0 systemd-machined[210538]: Machine qemu-173-instance-0000008e terminated.
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.472 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b688fef9-a902-49fa-8126-bad169ab3dc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7065d5a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:e7:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962803, 'reachable_time': 25861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394351, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.491 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab087fe7-a339-4add-9096-a7bf07d1768c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa7065d5a-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962816, 'tstamp': 962816}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394352, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa7065d5a-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962820, 'tstamp': 962820}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394352, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.493 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7065d5a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.495 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.502 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.503 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7065d5a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.503 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.504 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7065d5a-e0, col_values=(('external_ids', {'iface-id': 'b83346dd-17c7-4f89-a540-efae6916310e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.504 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.505 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 47c08f82-5e0f-4d58-bc8a-5c049f6846fd in datapath 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c unbound from our chassis
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.506 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.525 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[80b8940b-0aed-4fcf-b5b2-cdc16672d769]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:03 compute-0 NetworkManager[50376]: <info>  [1765617123.5295] manager: (tap47c08f82-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/623)
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.543 248514 INFO nova.virt.libvirt.driver [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Instance destroyed successfully.
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.544 248514 DEBUG nova.objects.instance [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 0d2b3fb1-4256-4dd2-bbba-188212ddd10e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.563 248514 DEBUG nova.virt.libvirt.vif [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1965828306',display_name='tempest-TestGettingAddress-server-1965828306',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1965828306',id=142,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:11:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-8ma5x5dk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:11:37Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=0d2b3fb1-4256-4dd2-bbba-188212ddd10e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.564 248514 DEBUG nova.network.os_vif_util [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.565 248514 DEBUG nova.network.os_vif_util [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.566 248514 DEBUG os_vif [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.568 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.569 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10f9c496-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.572 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.577 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0d835e00-4e68-43a1-9ec1-bb34b96e99d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.578 248514 INFO os_vif [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0')
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.580 248514 DEBUG nova.virt.libvirt.vif [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1965828306',display_name='tempest-TestGettingAddress-server-1965828306',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1965828306',id=142,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:11:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-8ma5x5dk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:11:37Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=0d2b3fb1-4256-4dd2-bbba-188212ddd10e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.580 248514 DEBUG nova.network.os_vif_util [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.580 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[41d22265-e327-4a68-a1f7-3d4254bc41d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.581 248514 DEBUG nova.network.os_vif_util [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.581 248514 DEBUG os_vif [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.583 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.583 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47c08f82-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.585 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.588 248514 INFO os_vif [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e')
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.618 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2620b1-55c8-4406-9aa9-eaf9d9d1f2b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.630 248514 DEBUG nova.compute.manager [req-afc8e004-1390-4842-a708-7b72d836224f req-5f0930f2-04aa-4057-a1d6-be82341cdb81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-unplugged-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.631 248514 DEBUG oslo_concurrency.lockutils [req-afc8e004-1390-4842-a708-7b72d836224f req-5f0930f2-04aa-4057-a1d6-be82341cdb81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.631 248514 DEBUG oslo_concurrency.lockutils [req-afc8e004-1390-4842-a708-7b72d836224f req-5f0930f2-04aa-4057-a1d6-be82341cdb81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.632 248514 DEBUG oslo_concurrency.lockutils [req-afc8e004-1390-4842-a708-7b72d836224f req-5f0930f2-04aa-4057-a1d6-be82341cdb81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.632 248514 DEBUG nova.compute.manager [req-afc8e004-1390-4842-a708-7b72d836224f req-5f0930f2-04aa-4057-a1d6-be82341cdb81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] No waiting events found dispatching network-vif-unplugged-10f9c496-a049-40c4-a29b-fe4279219d91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.633 248514 DEBUG nova.compute.manager [req-afc8e004-1390-4842-a708-7b72d836224f req-5f0930f2-04aa-4057-a1d6-be82341cdb81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-unplugged-10f9c496-a049-40c4-a29b-fe4279219d91 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.641 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cda5bc02-c815-4921-81a0-8739de99d87f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7661cc7b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:a3:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962968, 'reachable_time': 41750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394402, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.663 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[56ed7812-f866-4fd5-baea-caedfae55edb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7661cc7b-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962982, 'tstamp': 962982}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394403, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.665 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7661cc7b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.668 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7661cc7b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.668 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.669 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7661cc7b-f0, col_values=(('external_ids', {'iface-id': 'c9cb1492-8274-4506-acb8-65660baeb5cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:03 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.669 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:12:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:12:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 43K writes, 172K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 43K writes, 15K syncs, 2.77 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3460 writes, 13K keys, 3460 commit groups, 1.0 writes per commit group, ingest: 16.23 MB, 0.03 MB/s
                                           Interval WAL: 3459 writes, 1381 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 09:12:03 compute-0 nova_compute[248510]: 2025-12-13 09:12:03.794 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:04 compute-0 ceph-mon[76537]: pgmap v3328: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 506 KiB/s wr, 43 op/s
Dec 13 09:12:04 compute-0 nova_compute[248510]: 2025-12-13 09:12:04.667 248514 DEBUG nova.network.neutron [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updated VIF entry in instance network info cache for port 10f9c496-a049-40c4-a29b-fe4279219d91. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:12:04 compute-0 nova_compute[248510]: 2025-12-13 09:12:04.668 248514 DEBUG nova.network.neutron [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:12:04 compute-0 nova_compute[248510]: 2025-12-13 09:12:04.707 248514 DEBUG oslo_concurrency.lockutils [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:12:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3329: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 515 KiB/s wr, 46 op/s
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.706 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.707 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.707 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.708 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.709 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] No waiting events found dispatching network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.709 248514 WARNING nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received unexpected event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 for instance with vm_state active and task_state deleting.
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.710 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-unplugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.710 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.711 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.711 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.714 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] No waiting events found dispatching network-vif-unplugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.714 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-unplugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.714 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.714 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.715 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.715 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.715 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] No waiting events found dispatching network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.715 248514 WARNING nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received unexpected event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd for instance with vm_state active and task_state deleting.
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.798 248514 INFO nova.virt.libvirt.driver [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Deleting instance files /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e_del
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.799 248514 INFO nova.virt.libvirt.driver [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Deletion of /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e_del complete
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.867 248514 INFO nova.compute.manager [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Took 2.58 seconds to destroy the instance on the hypervisor.
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.868 248514 DEBUG oslo.service.loopingcall [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.868 248514 DEBUG nova.compute.manager [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:12:05 compute-0 nova_compute[248510]: 2025-12-13 09:12:05.869 248514 DEBUG nova.network.neutron [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:12:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:12:06 compute-0 ceph-mon[76537]: pgmap v3329: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 515 KiB/s wr, 46 op/s
Dec 13 09:12:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3330: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 22 KiB/s wr, 3 op/s
Dec 13 09:12:06 compute-0 nova_compute[248510]: 2025-12-13 09:12:06.854 248514 DEBUG nova.compute.manager [req-018b02b7-441b-44ac-96f3-ccf3eeea436a req-c0663206-9d64-4617-9ea7-75b393359dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-deleted-47c08f82-5e0f-4d58-bc8a-5c049f6846fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:06 compute-0 nova_compute[248510]: 2025-12-13 09:12:06.855 248514 INFO nova.compute.manager [req-018b02b7-441b-44ac-96f3-ccf3eeea436a req-c0663206-9d64-4617-9ea7-75b393359dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Neutron deleted interface 47c08f82-5e0f-4d58-bc8a-5c049f6846fd; detaching it from the instance and deleting it from the info cache
Dec 13 09:12:06 compute-0 nova_compute[248510]: 2025-12-13 09:12:06.856 248514 DEBUG nova.network.neutron [req-018b02b7-441b-44ac-96f3-ccf3eeea436a req-c0663206-9d64-4617-9ea7-75b393359dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:12:06 compute-0 nova_compute[248510]: 2025-12-13 09:12:06.888 248514 DEBUG nova.compute.manager [req-018b02b7-441b-44ac-96f3-ccf3eeea436a req-c0663206-9d64-4617-9ea7-75b393359dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Detach interface failed, port_id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd, reason: Instance 0d2b3fb1-4256-4dd2-bbba-188212ddd10e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:12:07 compute-0 nova_compute[248510]: 2025-12-13 09:12:07.171 248514 DEBUG nova.network.neutron [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:12:07 compute-0 nova_compute[248510]: 2025-12-13 09:12:07.195 248514 INFO nova.compute.manager [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Took 1.33 seconds to deallocate network for instance.
Dec 13 09:12:07 compute-0 nova_compute[248510]: 2025-12-13 09:12:07.237 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:07 compute-0 nova_compute[248510]: 2025-12-13 09:12:07.238 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:07 compute-0 nova_compute[248510]: 2025-12-13 09:12:07.323 248514 DEBUG oslo_concurrency.processutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:12:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:12:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3118093410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:12:07 compute-0 nova_compute[248510]: 2025-12-13 09:12:07.910 248514 DEBUG oslo_concurrency.processutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:12:07 compute-0 nova_compute[248510]: 2025-12-13 09:12:07.921 248514 DEBUG nova.compute.provider_tree [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:12:07 compute-0 nova_compute[248510]: 2025-12-13 09:12:07.943 248514 DEBUG nova.scheduler.client.report [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:12:07 compute-0 nova_compute[248510]: 2025-12-13 09:12:07.971 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:08 compute-0 nova_compute[248510]: 2025-12-13 09:12:07.999 248514 INFO nova.scheduler.client.report [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 0d2b3fb1-4256-4dd2-bbba-188212ddd10e
Dec 13 09:12:08 compute-0 nova_compute[248510]: 2025-12-13 09:12:08.073 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:08 compute-0 ceph-mon[76537]: pgmap v3330: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 22 KiB/s wr, 3 op/s
Dec 13 09:12:08 compute-0 nova_compute[248510]: 2025-12-13 09:12:08.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3331: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 23 KiB/s wr, 14 op/s
Dec 13 09:12:08 compute-0 nova_compute[248510]: 2025-12-13 09:12:08.795 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:08 compute-0 nova_compute[248510]: 2025-12-13 09:12:08.987 248514 DEBUG nova.compute.manager [req-914c688f-f70c-4a3d-bc37-67a9e7759f03 req-d68eccbe-098b-4b5d-a785-6363c8c0e86c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-deleted-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:09 compute-0 nova_compute[248510]: 2025-12-13 09:12:09.377 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:09 compute-0 nova_compute[248510]: 2025-12-13 09:12:09.378 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:09 compute-0 nova_compute[248510]: 2025-12-13 09:12:09.378 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:09 compute-0 nova_compute[248510]: 2025-12-13 09:12:09.378 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:09 compute-0 nova_compute[248510]: 2025-12-13 09:12:09.379 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:09 compute-0 nova_compute[248510]: 2025-12-13 09:12:09.380 248514 INFO nova.compute.manager [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Terminating instance
Dec 13 09:12:09 compute-0 nova_compute[248510]: 2025-12-13 09:12:09.381 248514 DEBUG nova.compute.manager [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:12:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:12:09
Dec 13 09:12:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:12:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:12:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'vms']
Dec 13 09:12:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:12:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3118093410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:12:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:12:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:12:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:12:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:12:10 compute-0 kernel: tapb6bde1c8-97 (unregistering): left promiscuous mode
Dec 13 09:12:10 compute-0 NetworkManager[50376]: <info>  [1765617130.1580] device (tapb6bde1c8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:12:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:12:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:12:10 compute-0 ovn_controller[148476]: 2025-12-13T09:12:10Z|01507|binding|INFO|Releasing lport b6bde1c8-9710-4155-9619-7040cbbca806 from this chassis (sb_readonly=0)
Dec 13 09:12:10 compute-0 ovn_controller[148476]: 2025-12-13T09:12:10Z|01508|binding|INFO|Setting lport b6bde1c8-9710-4155-9619-7040cbbca806 down in Southbound
Dec 13 09:12:10 compute-0 ovn_controller[148476]: 2025-12-13T09:12:10Z|01509|binding|INFO|Removing iface tapb6bde1c8-97 ovn-installed in OVS
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.225 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 kernel: tap445f61a4-13 (unregistering): left promiscuous mode
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 NetworkManager[50376]: <info>  [1765617130.2675] device (tap445f61a4-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 ovn_controller[148476]: 2025-12-13T09:12:10Z|01510|binding|INFO|Releasing lport 445f61a4-1352-4145-8b2b-f9f87f5435a7 from this chassis (sb_readonly=1)
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.289 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 ovn_controller[148476]: 2025-12-13T09:12:10Z|01511|binding|INFO|Removing iface tap445f61a4-13 ovn-installed in OVS
Dec 13 09:12:10 compute-0 ovn_controller[148476]: 2025-12-13T09:12:10Z|01512|if_status|INFO|Dropped 1 log messages in last 156 seconds (most recently, 156 seconds ago) due to excessive rate
Dec 13 09:12:10 compute-0 ovn_controller[148476]: 2025-12-13T09:12:10Z|01513|if_status|INFO|Not setting lport 445f61a4-1352-4145-8b2b-f9f87f5435a7 down as sb is readonly
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.291 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Dec 13 09:12:10 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008d.scope: Consumed 16.184s CPU time.
Dec 13 09:12:10 compute-0 systemd-machined[210538]: Machine qemu-172-instance-0000008d terminated.
Dec 13 09:12:10 compute-0 NetworkManager[50376]: <info>  [1765617130.4089] manager: (tapb6bde1c8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/624)
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.414 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 NetworkManager[50376]: <info>  [1765617130.4223] manager: (tap445f61a4-13): new Tun device (/org/freedesktop/NetworkManager/Devices/625)
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.427 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.441 248514 INFO nova.virt.libvirt.driver [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Instance destroyed successfully.
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.442 248514 DEBUG nova.objects.instance [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:12:10 compute-0 ovn_controller[148476]: 2025-12-13T09:12:10Z|01514|binding|INFO|Setting lport 445f61a4-1352-4145-8b2b-f9f87f5435a7 down in Southbound
Dec 13 09:12:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:10.605 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:9b:72 10.100.0.6'], port_security=['fa:16:3e:54:9b:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44250a44-17ff-41cb-ae95-e575bff91a4a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b6bde1c8-9710-4155-9619-7040cbbca806) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:12:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:10.608 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b6bde1c8-9710-4155-9619-7040cbbca806 in datapath a7065d5a-edce-4470-a56d-ab529d56aa3c unbound from our chassis
Dec 13 09:12:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:10.611 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7065d5a-edce-4470-a56d-ab529d56aa3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:12:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:10.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[92415575-0112-4249-b3de-5a8f417ed2f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:10.614 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c namespace which is not needed anymore
Dec 13 09:12:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3332: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 11 KiB/s wr, 29 op/s
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.886 248514 DEBUG nova.virt.libvirt.vif [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-230241554',display_name='tempest-TestGettingAddress-server-230241554',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-230241554',id=141,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:10:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-60o8t9sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:10:51Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.888 248514 DEBUG nova.network.os_vif_util [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.890 248514 DEBUG nova.network.os_vif_util [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.892 248514 DEBUG os_vif [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.895 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6bde1c8-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.900 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.911 248514 INFO os_vif [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97')
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.912 248514 DEBUG nova.virt.libvirt.vif [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-230241554',display_name='tempest-TestGettingAddress-server-230241554',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-230241554',id=141,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:10:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-60o8t9sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:10:51Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.913 248514 DEBUG nova.network.os_vif_util [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.914 248514 DEBUG nova.network.os_vif_util [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.914 248514 DEBUG os_vif [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.917 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap445f61a4-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:10 compute-0 nova_compute[248510]: 2025-12-13 09:12:10.924 248514 INFO os_vif [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13')
Dec 13 09:12:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:12:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:11.025 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:7d:02 2001:db8:0:1:f816:3eff:fe66:7d02 2001:db8::f816:3eff:fe66:7d02'], port_security=['fa:16:3e:66:7d:02 2001:db8:0:1:f816:3eff:fe66:7d02 2001:db8::f816:3eff:fe66:7d02'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe66:7d02/64 2001:db8::f816:3eff:fe66:7d02/64', 'neutron:device_id': '8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28a1c52d-6dd3-4202-a082-66c542a5a384, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=445f61a4-1352-4145-8b2b-f9f87f5435a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:12:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:12:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:12:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:12:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:12:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:12:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:12:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:12:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:12:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:12:11 compute-0 ceph-mon[76537]: pgmap v3331: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 23 KiB/s wr, 14 op/s
Dec 13 09:12:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:12:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6001.8 total, 600.0 interval
                                           Cumulative writes: 45K writes, 175K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.74 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3990 writes, 15K keys, 3990 commit groups, 1.0 writes per commit group, ingest: 19.11 MB, 0.03 MB/s
                                           Interval WAL: 3990 writes, 1573 syncs, 2.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.7 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.5 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b5a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b5a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.363       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.363       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.363       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b5a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 09:12:11 compute-0 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [NOTICE]   (393002) : haproxy version is 2.8.14-c23fe91
Dec 13 09:12:11 compute-0 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [NOTICE]   (393002) : path to executable is /usr/sbin/haproxy
Dec 13 09:12:11 compute-0 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [WARNING]  (393002) : Exiting Master process...
Dec 13 09:12:11 compute-0 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [ALERT]    (393002) : Current worker (393004) exited with code 143 (Terminated)
Dec 13 09:12:11 compute-0 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [WARNING]  (393002) : All workers exited. Exiting... (0)
Dec 13 09:12:11 compute-0 systemd[1]: libpod-b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c.scope: Deactivated successfully.
Dec 13 09:12:11 compute-0 podman[394473]: 2025-12-13 09:12:11.213764942 +0000 UTC m=+0.445173237 container died b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 09:12:11 compute-0 nova_compute[248510]: 2025-12-13 09:12:11.572 248514 DEBUG nova.compute.manager [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-changed-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:11 compute-0 nova_compute[248510]: 2025-12-13 09:12:11.573 248514 DEBUG nova.compute.manager [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing instance network info cache due to event network-changed-b6bde1c8-9710-4155-9619-7040cbbca806. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:12:11 compute-0 nova_compute[248510]: 2025-12-13 09:12:11.573 248514 DEBUG oslo_concurrency.lockutils [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:12:11 compute-0 nova_compute[248510]: 2025-12-13 09:12:11.574 248514 DEBUG oslo_concurrency.lockutils [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:12:11 compute-0 nova_compute[248510]: 2025-12-13 09:12:11.574 248514 DEBUG nova.network.neutron [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing network info cache for port b6bde1c8-9710-4155-9619-7040cbbca806 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:12:11 compute-0 nova_compute[248510]: 2025-12-13 09:12:11.757 248514 DEBUG nova.compute.manager [req-89060cce-d2c8-4641-805f-da428bd0e0ce req-818f211f-c36b-45f9-b244-c3aacbe29b17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-unplugged-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:11 compute-0 nova_compute[248510]: 2025-12-13 09:12:11.758 248514 DEBUG oslo_concurrency.lockutils [req-89060cce-d2c8-4641-805f-da428bd0e0ce req-818f211f-c36b-45f9-b244-c3aacbe29b17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:11 compute-0 nova_compute[248510]: 2025-12-13 09:12:11.758 248514 DEBUG oslo_concurrency.lockutils [req-89060cce-d2c8-4641-805f-da428bd0e0ce req-818f211f-c36b-45f9-b244-c3aacbe29b17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:11 compute-0 nova_compute[248510]: 2025-12-13 09:12:11.759 248514 DEBUG oslo_concurrency.lockutils [req-89060cce-d2c8-4641-805f-da428bd0e0ce req-818f211f-c36b-45f9-b244-c3aacbe29b17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:11 compute-0 nova_compute[248510]: 2025-12-13 09:12:11.759 248514 DEBUG nova.compute.manager [req-89060cce-d2c8-4641-805f-da428bd0e0ce req-818f211f-c36b-45f9-b244-c3aacbe29b17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] No waiting events found dispatching network-vif-unplugged-b6bde1c8-9710-4155-9619-7040cbbca806 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:12:11 compute-0 nova_compute[248510]: 2025-12-13 09:12:11.760 248514 DEBUG nova.compute.manager [req-89060cce-d2c8-4641-805f-da428bd0e0ce req-818f211f-c36b-45f9-b244-c3aacbe29b17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-unplugged-b6bde1c8-9710-4155-9619-7040cbbca806 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:12:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:12:12 compute-0 ceph-mon[76537]: pgmap v3332: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 11 KiB/s wr, 29 op/s
Dec 13 09:12:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3333: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 11 KiB/s wr, 29 op/s
Dec 13 09:12:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d83ba7e95364800c67b1ef1b047ddcfb67b0d58972ad0e7ad048a84a30676ad2-merged.mount: Deactivated successfully.
Dec 13 09:12:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c-userdata-shm.mount: Deactivated successfully.
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.766 248514 DEBUG nova.compute.manager [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-unplugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.767 248514 DEBUG oslo_concurrency.lockutils [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.767 248514 DEBUG oslo_concurrency.lockutils [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.768 248514 DEBUG oslo_concurrency.lockutils [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.768 248514 DEBUG nova.compute.manager [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] No waiting events found dispatching network-vif-unplugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.768 248514 DEBUG nova.compute.manager [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-unplugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.769 248514 DEBUG nova.compute.manager [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.769 248514 DEBUG oslo_concurrency.lockutils [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.769 248514 DEBUG oslo_concurrency.lockutils [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.770 248514 DEBUG oslo_concurrency.lockutils [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.770 248514 DEBUG nova.compute.manager [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] No waiting events found dispatching network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.770 248514 WARNING nova.compute.manager [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received unexpected event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 for instance with vm_state active and task_state deleting.
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.798 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:13 compute-0 podman[394473]: 2025-12-13 09:12:13.938665012 +0000 UTC m=+3.170073307 container cleanup b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.945 248514 DEBUG nova.compute.manager [req-85e66e3f-a92d-41af-b5b2-4bfad7560ead req-258db07f-ba80-45ff-87ea-1d60eabee468 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.945 248514 DEBUG oslo_concurrency.lockutils [req-85e66e3f-a92d-41af-b5b2-4bfad7560ead req-258db07f-ba80-45ff-87ea-1d60eabee468 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.946 248514 DEBUG oslo_concurrency.lockutils [req-85e66e3f-a92d-41af-b5b2-4bfad7560ead req-258db07f-ba80-45ff-87ea-1d60eabee468 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.946 248514 DEBUG oslo_concurrency.lockutils [req-85e66e3f-a92d-41af-b5b2-4bfad7560ead req-258db07f-ba80-45ff-87ea-1d60eabee468 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.946 248514 DEBUG nova.compute.manager [req-85e66e3f-a92d-41af-b5b2-4bfad7560ead req-258db07f-ba80-45ff-87ea-1d60eabee468 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] No waiting events found dispatching network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:12:13 compute-0 nova_compute[248510]: 2025-12-13 09:12:13.947 248514 WARNING nova.compute.manager [req-85e66e3f-a92d-41af-b5b2-4bfad7560ead req-258db07f-ba80-45ff-87ea-1d60eabee468 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received unexpected event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 for instance with vm_state active and task_state deleting.
Dec 13 09:12:13 compute-0 systemd[1]: libpod-conmon-b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c.scope: Deactivated successfully.
Dec 13 09:12:14 compute-0 nova_compute[248510]: 2025-12-13 09:12:14.135 248514 DEBUG nova.network.neutron [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updated VIF entry in instance network info cache for port b6bde1c8-9710-4155-9619-7040cbbca806. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:12:14 compute-0 nova_compute[248510]: 2025-12-13 09:12:14.136 248514 DEBUG nova.network.neutron [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:12:14 compute-0 ceph-mon[76537]: pgmap v3333: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 11 KiB/s wr, 29 op/s
Dec 13 09:12:14 compute-0 nova_compute[248510]: 2025-12-13 09:12:14.169 248514 DEBUG oslo_concurrency.lockutils [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:12:14 compute-0 podman[394523]: 2025-12-13 09:12:14.686410886 +0000 UTC m=+0.706827279 container remove b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.697 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f46d537b-c24c-49c7-8cf2-34814f9d24fd]: (4, ('Sat Dec 13 09:12:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c (b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c)\nb9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c\nSat Dec 13 09:12:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c (b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c)\nb9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.700 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[281b6c24-15bb-426b-9a5b-2813df9d2c44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.701 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7065d5a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:14 compute-0 nova_compute[248510]: 2025-12-13 09:12:14.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:14 compute-0 kernel: tapa7065d5a-e0: left promiscuous mode
Dec 13 09:12:14 compute-0 nova_compute[248510]: 2025-12-13 09:12:14.724 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.728 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[386760d0-107e-4a49-abf9-9a2db7279103]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.746 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d38b87-5aca-4694-81b5-e7704fca8c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.748 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[de19b97d-cfad-4cf7-8ac1-c5c054bcfa66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.766 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[982eb3d1-93f1-410d-998a-1f99f5900451]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962794, 'reachable_time': 15662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394537, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.770 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.771 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae606ff-e1bc-4064-9c70-23d44ca667b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.772 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 445f61a4-1352-4145-8b2b-f9f87f5435a7 in datapath 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c unbound from our chassis
Dec 13 09:12:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3334: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 11 KiB/s wr, 35 op/s
Dec 13 09:12:14 compute-0 systemd[1]: run-netns-ovnmeta\x2da7065d5a\x2dedce\x2d4470\x2da56d\x2dab529d56aa3c.mount: Deactivated successfully.
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.775 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.776 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcce4e7-3bcb-48af-ade7-6bf5bd8f65d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:14 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.777 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c namespace which is not needed anymore
Dec 13 09:12:14 compute-0 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [NOTICE]   (393125) : haproxy version is 2.8.14-c23fe91
Dec 13 09:12:14 compute-0 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [NOTICE]   (393125) : path to executable is /usr/sbin/haproxy
Dec 13 09:12:14 compute-0 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [WARNING]  (393125) : Exiting Master process...
Dec 13 09:12:14 compute-0 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [ALERT]    (393125) : Current worker (393132) exited with code 143 (Terminated)
Dec 13 09:12:14 compute-0 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [WARNING]  (393125) : All workers exited. Exiting... (0)
Dec 13 09:12:14 compute-0 systemd[1]: libpod-bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411.scope: Deactivated successfully.
Dec 13 09:12:14 compute-0 podman[394559]: 2025-12-13 09:12:14.969136897 +0000 UTC m=+0.086741377 container died bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 09:12:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:12:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3293831454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:12:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:12:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3293831454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:12:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3293831454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:12:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3293831454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:12:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411-userdata-shm.mount: Deactivated successfully.
Dec 13 09:12:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-20f32a15c927b95fe7b352ffa0d269ecc44713de0e4d769f1bd0a4e926820f3a-merged.mount: Deactivated successfully.
Dec 13 09:12:15 compute-0 podman[394559]: 2025-12-13 09:12:15.240729169 +0000 UTC m=+0.358333639 container cleanup bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 09:12:15 compute-0 systemd[1]: libpod-conmon-bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411.scope: Deactivated successfully.
Dec 13 09:12:15 compute-0 podman[394590]: 2025-12-13 09:12:15.534868286 +0000 UTC m=+0.258481364 container remove bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:12:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.547 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e8c45e-7096-4840-8caf-781f4fabbadb]: (4, ('Sat Dec 13 09:12:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c (bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411)\nbba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411\nSat Dec 13 09:12:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c (bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411)\nbba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.549 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1396e19-d6c6-4dd7-820a-42af51d3d6a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.550 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7661cc7b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:15 compute-0 nova_compute[248510]: 2025-12-13 09:12:15.553 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:15 compute-0 kernel: tap7661cc7b-f0: left promiscuous mode
Dec 13 09:12:15 compute-0 nova_compute[248510]: 2025-12-13 09:12:15.569 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8fe364-169d-41ed-be93-bc739554430d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.588 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ad30806a-68af-4b39-bddb-b3343bcbd2b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.590 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[290defd4-93bb-4ac4-96e9-fc0582fc887c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.612 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[839da859-7d21-4f93-9ed6-58b9ebba202c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962959, 'reachable_time': 36125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394605, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.615 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:12:15 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.616 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[78aee92c-d891-4fdf-bac4-7848dfaa20b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d7661cc7b\x2dfcf7\x2d41b6\x2db117\x2dca8fede7ad4c.mount: Deactivated successfully.
Dec 13 09:12:15 compute-0 nova_compute[248510]: 2025-12-13 09:12:15.840 248514 INFO nova.virt.libvirt.driver [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Deleting instance files /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_del
Dec 13 09:12:15 compute-0 nova_compute[248510]: 2025-12-13 09:12:15.841 248514 INFO nova.virt.libvirt.driver [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Deletion of /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_del complete
Dec 13 09:12:15 compute-0 nova_compute[248510]: 2025-12-13 09:12:15.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:16 compute-0 ceph-mon[76537]: pgmap v3334: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 11 KiB/s wr, 35 op/s
Dec 13 09:12:16 compute-0 nova_compute[248510]: 2025-12-13 09:12:16.758 248514 INFO nova.compute.manager [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Took 7.38 seconds to destroy the instance on the hypervisor.
Dec 13 09:12:16 compute-0 nova_compute[248510]: 2025-12-13 09:12:16.759 248514 DEBUG oslo.service.loopingcall [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:12:16 compute-0 nova_compute[248510]: 2025-12-13 09:12:16.759 248514 DEBUG nova.compute.manager [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:12:16 compute-0 nova_compute[248510]: 2025-12-13 09:12:16.759 248514 DEBUG nova.network.neutron [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:12:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3335: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 33 op/s
Dec 13 09:12:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.513914) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617137514013, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 808, "num_deletes": 257, "total_data_size": 1119077, "memory_usage": 1145024, "flush_reason": "Manual Compaction"}
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617137536169, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 1099957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65946, "largest_seqno": 66753, "table_properties": {"data_size": 1095810, "index_size": 1862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9006, "raw_average_key_size": 19, "raw_value_size": 1087530, "raw_average_value_size": 2299, "num_data_blocks": 83, "num_entries": 473, "num_filter_entries": 473, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617069, "oldest_key_time": 1765617069, "file_creation_time": 1765617137, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 22298 microseconds, and 7455 cpu microseconds.
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.536219) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 1099957 bytes OK
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.536244) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.538294) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.538316) EVENT_LOG_v1 {"time_micros": 1765617137538309, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.538336) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1115012, prev total WAL file size 1115012, number of live WAL files 2.
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.538997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373632' seq:72057594037927935, type:22 .. '6C6F676D0033303135' seq:0, type:0; will stop at (end)
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(1074KB)], [155(10062KB)]
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617137539129, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 11404250, "oldest_snapshot_seqno": -1}
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8526 keys, 11283682 bytes, temperature: kUnknown
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617137768365, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 11283682, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11228494, "index_size": 32739, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 224230, "raw_average_key_size": 26, "raw_value_size": 11078212, "raw_average_value_size": 1299, "num_data_blocks": 1267, "num_entries": 8526, "num_filter_entries": 8526, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617137, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.768875) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 11283682 bytes
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.790478) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 49.7 rd, 49.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.8 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(20.6) write-amplify(10.3) OK, records in: 9051, records dropped: 525 output_compression: NoCompression
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.790525) EVENT_LOG_v1 {"time_micros": 1765617137790506, "job": 96, "event": "compaction_finished", "compaction_time_micros": 229469, "compaction_time_cpu_micros": 48788, "output_level": 6, "num_output_files": 1, "total_output_size": 11283682, "num_input_records": 9051, "num_output_records": 8526, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617137791538, "job": 96, "event": "table_file_deletion", "file_number": 157}
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617137793703, "job": 96, "event": "table_file_deletion", "file_number": 155}
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.538837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.793928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.793938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.793942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.793945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:12:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.793948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:12:17 compute-0 nova_compute[248510]: 2025-12-13 09:12:17.904 248514 DEBUG nova.compute.manager [req-86dbae2b-e019-4a96-bbdb-430335b44f4d req-b57e1029-202d-4c04-94f2-0fedf823fb3f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-deleted-445f61a4-1352-4145-8b2b-f9f87f5435a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:17 compute-0 nova_compute[248510]: 2025-12-13 09:12:17.905 248514 INFO nova.compute.manager [req-86dbae2b-e019-4a96-bbdb-430335b44f4d req-b57e1029-202d-4c04-94f2-0fedf823fb3f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Neutron deleted interface 445f61a4-1352-4145-8b2b-f9f87f5435a7; detaching it from the instance and deleting it from the info cache
Dec 13 09:12:17 compute-0 nova_compute[248510]: 2025-12-13 09:12:17.905 248514 DEBUG nova.network.neutron [req-86dbae2b-e019-4a96-bbdb-430335b44f4d req-b57e1029-202d-4c04-94f2-0fedf823fb3f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:12:17 compute-0 nova_compute[248510]: 2025-12-13 09:12:17.950 248514 DEBUG nova.compute.manager [req-86dbae2b-e019-4a96-bbdb-430335b44f4d req-b57e1029-202d-4c04-94f2-0fedf823fb3f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Detach interface failed, port_id=445f61a4-1352-4145-8b2b-f9f87f5435a7, reason: Instance 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:12:18 compute-0 ceph-mon[76537]: pgmap v3335: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 33 op/s
Dec 13 09:12:18 compute-0 nova_compute[248510]: 2025-12-13 09:12:18.541 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617123.539736, 0d2b3fb1-4256-4dd2-bbba-188212ddd10e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:12:18 compute-0 nova_compute[248510]: 2025-12-13 09:12:18.542 248514 INFO nova.compute.manager [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] VM Stopped (Lifecycle Event)
Dec 13 09:12:18 compute-0 nova_compute[248510]: 2025-12-13 09:12:18.565 248514 DEBUG nova.compute.manager [None req-83d0274f-295e-4818-ada7-f2a9d3d040bd - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:12:18 compute-0 nova_compute[248510]: 2025-12-13 09:12:18.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:12:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3336: 321 pgs: 321 active+clean; 94 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.3 KiB/s wr, 38 op/s
Dec 13 09:12:18 compute-0 nova_compute[248510]: 2025-12-13 09:12:18.800 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:19 compute-0 sudo[394607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:12:19 compute-0 sudo[394607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:12:19 compute-0 sudo[394607]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:19 compute-0 nova_compute[248510]: 2025-12-13 09:12:19.069 248514 DEBUG nova.network.neutron [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:12:19 compute-0 nova_compute[248510]: 2025-12-13 09:12:19.097 248514 INFO nova.compute.manager [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Took 2.34 seconds to deallocate network for instance.
Dec 13 09:12:19 compute-0 sudo[394632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:12:19 compute-0 sudo[394632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:12:19 compute-0 nova_compute[248510]: 2025-12-13 09:12:19.153 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:19 compute-0 nova_compute[248510]: 2025-12-13 09:12:19.154 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:19 compute-0 nova_compute[248510]: 2025-12-13 09:12:19.237 248514 DEBUG oslo_concurrency.processutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:12:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:12:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.2 total, 600.0 interval
                                           Cumulative writes: 35K writes, 143K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.83 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2470 writes, 10K keys, 2470 commit groups, 1.0 writes per commit group, ingest: 13.34 MB, 0.02 MB/s
                                           Interval WAL: 2470 writes, 940 syncs, 2.63 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.55 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.035       0      0       0.0       0.0
                                            Sum      1/0    1.55 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.035       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.035       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f23a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f23a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f23a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 09:12:19 compute-0 sudo[394632]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:12:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:12:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:12:19 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:12:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:12:19 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:12:19 compute-0 nova_compute[248510]: 2025-12-13 09:12:19.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:12:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:12:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:12:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:12:19 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:12:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:12:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:12:19 compute-0 sudo[394707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:12:19 compute-0 sudo[394707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:12:19 compute-0 sudo[394707]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:12:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210104534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:12:19 compute-0 nova_compute[248510]: 2025-12-13 09:12:19.897 248514 DEBUG oslo_concurrency.processutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:12:19 compute-0 nova_compute[248510]: 2025-12-13 09:12:19.905 248514 DEBUG nova.compute.provider_tree [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:12:19 compute-0 nova_compute[248510]: 2025-12-13 09:12:19.930 248514 DEBUG nova.scheduler.client.report [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:12:19 compute-0 sudo[394732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:12:19 compute-0 nova_compute[248510]: 2025-12-13 09:12:19.957 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:19 compute-0 sudo[394732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:12:19 compute-0 nova_compute[248510]: 2025-12-13 09:12:19.990 248514 INFO nova.scheduler.client.report [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8
Dec 13 09:12:19 compute-0 nova_compute[248510]: 2025-12-13 09:12:19.998 248514 DEBUG nova.compute.manager [req-2bd4862b-6c59-4eb6-ac26-d5f7440eb2ed req-a4d83b53-addf-471a-b4bd-a784d4304014 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-deleted-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:12:20 compute-0 nova_compute[248510]: 2025-12-13 09:12:20.066 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:20 compute-0 podman[394772]: 2025-12-13 09:12:20.274898886 +0000 UTC m=+0.050582489 container create b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hofstadter, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 09:12:20 compute-0 systemd[1]: Started libpod-conmon-b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71.scope.
Dec 13 09:12:20 compute-0 podman[394772]: 2025-12-13 09:12:20.251303365 +0000 UTC m=+0.026987038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:12:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:12:20 compute-0 podman[394772]: 2025-12-13 09:12:20.387195373 +0000 UTC m=+0.162879086 container init b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hofstadter, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 09:12:20 compute-0 podman[394772]: 2025-12-13 09:12:20.39746076 +0000 UTC m=+0.173144353 container start b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 09:12:20 compute-0 podman[394772]: 2025-12-13 09:12:20.402531758 +0000 UTC m=+0.178215401 container attach b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hofstadter, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:12:20 compute-0 elegant_hofstadter[394789]: 167 167
Dec 13 09:12:20 compute-0 systemd[1]: libpod-b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71.scope: Deactivated successfully.
Dec 13 09:12:20 compute-0 conmon[394789]: conmon b333856ad8653ff0899b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71.scope/container/memory.events
Dec 13 09:12:20 compute-0 podman[394772]: 2025-12-13 09:12:20.408254101 +0000 UTC m=+0.183937734 container died b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-05745bca6dd78adba63758da1c048b1176c90dabadea03b75a42adcb058abba0-merged.mount: Deactivated successfully.
Dec 13 09:12:20 compute-0 podman[394772]: 2025-12-13 09:12:20.463083076 +0000 UTC m=+0.238766679 container remove b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hofstadter, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:12:20 compute-0 systemd[1]: libpod-conmon-b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71.scope: Deactivated successfully.
Dec 13 09:12:20 compute-0 ceph-mon[76537]: pgmap v3336: 321 pgs: 321 active+clean; 94 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.3 KiB/s wr, 38 op/s
Dec 13 09:12:20 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:12:20 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:12:20 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:12:20 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:12:20 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:12:20 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:12:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1210104534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:12:20 compute-0 podman[394813]: 2025-12-13 09:12:20.68972661 +0000 UTC m=+0.069073583 container create 7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:12:20 compute-0 systemd[1]: Started libpod-conmon-7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b.scope.
Dec 13 09:12:20 compute-0 podman[394813]: 2025-12-13 09:12:20.658927568 +0000 UTC m=+0.038274591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:12:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3337: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 KiB/s wr, 43 op/s
Dec 13 09:12:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986bf0c42fade488771d2153ba16833702e136e2e9d62a7d58c255f686e6f2c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986bf0c42fade488771d2153ba16833702e136e2e9d62a7d58c255f686e6f2c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986bf0c42fade488771d2153ba16833702e136e2e9d62a7d58c255f686e6f2c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986bf0c42fade488771d2153ba16833702e136e2e9d62a7d58c255f686e6f2c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986bf0c42fade488771d2153ba16833702e136e2e9d62a7d58c255f686e6f2c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:20 compute-0 podman[394813]: 2025-12-13 09:12:20.799292018 +0000 UTC m=+0.178639001 container init 7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:12:20 compute-0 podman[394813]: 2025-12-13 09:12:20.810985872 +0000 UTC m=+0.190332815 container start 7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 09:12:20 compute-0 podman[394813]: 2025-12-13 09:12:20.815205178 +0000 UTC m=+0.194552111 container attach 7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:12:20 compute-0 nova_compute[248510]: 2025-12-13 09:12:20.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:21 compute-0 sweet_goldwasser[394828]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:12:21 compute-0 sweet_goldwasser[394828]: --> All data devices are unavailable
Dec 13 09:12:21 compute-0 systemd[1]: libpod-7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b.scope: Deactivated successfully.
Dec 13 09:12:21 compute-0 podman[394813]: 2025-12-13 09:12:21.394519348 +0000 UTC m=+0.773866301 container died 7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4640145867863087e-05 of space, bias 1.0, pg target 0.004392043760358926 quantized to 32 (current 32)
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697318854645266 of space, bias 1.0, pg target 0.20091956563935798 quantized to 32 (current 32)
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:12:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:12:22 compute-0 ceph-mon[76537]: pgmap v3337: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 KiB/s wr, 43 op/s
Dec 13 09:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-986bf0c42fade488771d2153ba16833702e136e2e9d62a7d58c255f686e6f2c6-merged.mount: Deactivated successfully.
Dec 13 09:12:22 compute-0 podman[394813]: 2025-12-13 09:12:22.132138318 +0000 UTC m=+1.511485291 container remove 7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:12:22 compute-0 systemd[1]: libpod-conmon-7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b.scope: Deactivated successfully.
Dec 13 09:12:22 compute-0 sudo[394732]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:22 compute-0 sudo[394862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:12:22 compute-0 sudo[394862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:12:22 compute-0 sudo[394862]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:22 compute-0 sudo[394887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:12:22 compute-0 sudo[394887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:12:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:12:22 compute-0 podman[394924]: 2025-12-13 09:12:22.666770547 +0000 UTC m=+0.040944398 container create c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:12:22 compute-0 systemd[1]: Started libpod-conmon-c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971.scope.
Dec 13 09:12:22 compute-0 podman[394924]: 2025-12-13 09:12:22.6473477 +0000 UTC m=+0.021521551 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:12:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:12:22 compute-0 podman[394924]: 2025-12-13 09:12:22.772309174 +0000 UTC m=+0.146483015 container init c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:12:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3338: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:12:22 compute-0 podman[394924]: 2025-12-13 09:12:22.780684124 +0000 UTC m=+0.154857965 container start c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 09:12:22 compute-0 podman[394924]: 2025-12-13 09:12:22.785464104 +0000 UTC m=+0.159637915 container attach c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:12:22 compute-0 strange_franklin[394940]: 167 167
Dec 13 09:12:22 compute-0 systemd[1]: libpod-c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971.scope: Deactivated successfully.
Dec 13 09:12:22 compute-0 podman[394924]: 2025-12-13 09:12:22.789132566 +0000 UTC m=+0.163306377 container died c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc861060424efba21eca2bf2672320a3fa51ec5bb13f7e32b2c06f2e049cf11d-merged.mount: Deactivated successfully.
Dec 13 09:12:22 compute-0 podman[394924]: 2025-12-13 09:12:22.833261263 +0000 UTC m=+0.207435074 container remove c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:12:22 compute-0 systemd[1]: libpod-conmon-c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971.scope: Deactivated successfully.
Dec 13 09:12:23 compute-0 podman[394962]: 2025-12-13 09:12:23.036297374 +0000 UTC m=+0.046819475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:12:23 compute-0 podman[394962]: 2025-12-13 09:12:23.151156335 +0000 UTC m=+0.161678416 container create eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:12:23 compute-0 nova_compute[248510]: 2025-12-13 09:12:23.176 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:23.176 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:12:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:23.178 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:12:23 compute-0 systemd[1]: Started libpod-conmon-eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48.scope.
Dec 13 09:12:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5e8858449f2ca17f8bde6b8a8756793c37fb4eaa3d4beddd2b82007f87a03c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5e8858449f2ca17f8bde6b8a8756793c37fb4eaa3d4beddd2b82007f87a03c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5e8858449f2ca17f8bde6b8a8756793c37fb4eaa3d4beddd2b82007f87a03c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5e8858449f2ca17f8bde6b8a8756793c37fb4eaa3d4beddd2b82007f87a03c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:23 compute-0 podman[394962]: 2025-12-13 09:12:23.255636465 +0000 UTC m=+0.266158596 container init eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 09:12:23 compute-0 podman[394962]: 2025-12-13 09:12:23.26741395 +0000 UTC m=+0.277936031 container start eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jang, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:12:23 compute-0 podman[394962]: 2025-12-13 09:12:23.272514658 +0000 UTC m=+0.283036799 container attach eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:12:23 compute-0 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 09:12:23 compute-0 musing_jang[394978]: {
Dec 13 09:12:23 compute-0 musing_jang[394978]:     "0": [
Dec 13 09:12:23 compute-0 musing_jang[394978]:         {
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "devices": [
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "/dev/loop3"
Dec 13 09:12:23 compute-0 musing_jang[394978]:             ],
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_name": "ceph_lv0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_size": "21470642176",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "name": "ceph_lv0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "tags": {
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.cluster_name": "ceph",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.crush_device_class": "",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.encrypted": "0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.objectstore": "bluestore",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.osd_id": "0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.type": "block",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.vdo": "0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.with_tpm": "0"
Dec 13 09:12:23 compute-0 musing_jang[394978]:             },
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "type": "block",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "vg_name": "ceph_vg0"
Dec 13 09:12:23 compute-0 musing_jang[394978]:         }
Dec 13 09:12:23 compute-0 musing_jang[394978]:     ],
Dec 13 09:12:23 compute-0 musing_jang[394978]:     "1": [
Dec 13 09:12:23 compute-0 musing_jang[394978]:         {
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "devices": [
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "/dev/loop4"
Dec 13 09:12:23 compute-0 musing_jang[394978]:             ],
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_name": "ceph_lv1",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_size": "21470642176",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "name": "ceph_lv1",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "tags": {
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.cluster_name": "ceph",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.crush_device_class": "",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.encrypted": "0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.objectstore": "bluestore",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.osd_id": "1",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.type": "block",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.vdo": "0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.with_tpm": "0"
Dec 13 09:12:23 compute-0 musing_jang[394978]:             },
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "type": "block",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "vg_name": "ceph_vg1"
Dec 13 09:12:23 compute-0 musing_jang[394978]:         }
Dec 13 09:12:23 compute-0 musing_jang[394978]:     ],
Dec 13 09:12:23 compute-0 musing_jang[394978]:     "2": [
Dec 13 09:12:23 compute-0 musing_jang[394978]:         {
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "devices": [
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "/dev/loop5"
Dec 13 09:12:23 compute-0 musing_jang[394978]:             ],
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_name": "ceph_lv2",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_size": "21470642176",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "name": "ceph_lv2",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "tags": {
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.cluster_name": "ceph",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.crush_device_class": "",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.encrypted": "0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.objectstore": "bluestore",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.osd_id": "2",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.type": "block",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.vdo": "0",
Dec 13 09:12:23 compute-0 musing_jang[394978]:                 "ceph.with_tpm": "0"
Dec 13 09:12:23 compute-0 musing_jang[394978]:             },
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "type": "block",
Dec 13 09:12:23 compute-0 musing_jang[394978]:             "vg_name": "ceph_vg2"
Dec 13 09:12:23 compute-0 musing_jang[394978]:         }
Dec 13 09:12:23 compute-0 musing_jang[394978]:     ]
Dec 13 09:12:23 compute-0 musing_jang[394978]: }
Dec 13 09:12:23 compute-0 systemd[1]: libpod-eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48.scope: Deactivated successfully.
Dec 13 09:12:23 compute-0 podman[394962]: 2025-12-13 09:12:23.609775847 +0000 UTC m=+0.620297898 container died eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jang, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 09:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec5e8858449f2ca17f8bde6b8a8756793c37fb4eaa3d4beddd2b82007f87a03c-merged.mount: Deactivated successfully.
Dec 13 09:12:23 compute-0 podman[394962]: 2025-12-13 09:12:23.659251908 +0000 UTC m=+0.669773959 container remove eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jang, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:12:23 compute-0 systemd[1]: libpod-conmon-eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48.scope: Deactivated successfully.
Dec 13 09:12:23 compute-0 sudo[394887]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:23 compute-0 sudo[395001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:12:23 compute-0 sudo[395001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:12:23 compute-0 sudo[395001]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:23 compute-0 nova_compute[248510]: 2025-12-13 09:12:23.802 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:23 compute-0 sudo[395026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:12:23 compute-0 sudo[395026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:12:24 compute-0 podman[395064]: 2025-12-13 09:12:24.148851077 +0000 UTC m=+0.042111417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:12:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3339: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:12:25 compute-0 nova_compute[248510]: 2025-12-13 09:12:25.170 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:12:25 compute-0 nova_compute[248510]: 2025-12-13 09:12:25.171 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:12:25 compute-0 nova_compute[248510]: 2025-12-13 09:12:25.172 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:12:25 compute-0 nova_compute[248510]: 2025-12-13 09:12:25.190 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:12:25 compute-0 ceph-mon[76537]: pgmap v3338: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:12:25 compute-0 podman[395064]: 2025-12-13 09:12:25.413356001 +0000 UTC m=+1.306616351 container create cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:12:25 compute-0 nova_compute[248510]: 2025-12-13 09:12:25.440 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617130.4387507, 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:12:25 compute-0 nova_compute[248510]: 2025-12-13 09:12:25.441 248514 INFO nova.compute.manager [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] VM Stopped (Lifecycle Event)
Dec 13 09:12:25 compute-0 nova_compute[248510]: 2025-12-13 09:12:25.746 248514 DEBUG nova.compute.manager [None req-05bbff23-3e4c-42b6-b80c-b95265d70dd0 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:12:25 compute-0 nova_compute[248510]: 2025-12-13 09:12:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:12:25 compute-0 nova_compute[248510]: 2025-12-13 09:12:25.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:25 compute-0 systemd[1]: Started libpod-conmon-cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07.scope.
Dec 13 09:12:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:12:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:26.181 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:12:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3340: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Dec 13 09:12:27 compute-0 podman[395064]: 2025-12-13 09:12:27.22987177 +0000 UTC m=+3.123132140 container init cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:12:27 compute-0 podman[395064]: 2025-12-13 09:12:27.241383459 +0000 UTC m=+3.134643799 container start cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:12:27 compute-0 serene_keldysh[395110]: 167 167
Dec 13 09:12:27 compute-0 systemd[1]: libpod-cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07.scope: Deactivated successfully.
Dec 13 09:12:27 compute-0 ceph-mon[76537]: pgmap v3339: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:12:27 compute-0 podman[395064]: 2025-12-13 09:12:27.47509761 +0000 UTC m=+3.368357930 container attach cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 09:12:27 compute-0 podman[395064]: 2025-12-13 09:12:27.477819648 +0000 UTC m=+3.371079968 container died cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 09:12:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-37beb4cb1dbebfda2390f74992059a35e3bfb0b6d9346c7845776239a8b77b16-merged.mount: Deactivated successfully.
Dec 13 09:12:28 compute-0 podman[395064]: 2025-12-13 09:12:28.754503917 +0000 UTC m=+4.647764247 container remove cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:12:28 compute-0 nova_compute[248510]: 2025-12-13 09:12:28.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:12:28 compute-0 ceph-mon[76537]: pgmap v3340: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Dec 13 09:12:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3341: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Dec 13 09:12:28 compute-0 nova_compute[248510]: 2025-12-13 09:12:28.804 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:28 compute-0 systemd[1]: libpod-conmon-cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07.scope: Deactivated successfully.
Dec 13 09:12:28 compute-0 nova_compute[248510]: 2025-12-13 09:12:28.832 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:28 compute-0 nova_compute[248510]: 2025-12-13 09:12:28.833 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:28 compute-0 nova_compute[248510]: 2025-12-13 09:12:28.834 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:28 compute-0 nova_compute[248510]: 2025-12-13 09:12:28.834 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:12:28 compute-0 nova_compute[248510]: 2025-12-13 09:12:28.835 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:12:28 compute-0 podman[395079]: 2025-12-13 09:12:28.857726887 +0000 UTC m=+3.404615050 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true)
Dec 13 09:12:28 compute-0 podman[395080]: 2025-12-13 09:12:28.877442421 +0000 UTC m=+3.413698327 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:12:28 compute-0 podman[395078]: 2025-12-13 09:12:28.897269448 +0000 UTC m=+3.444133460 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 09:12:29 compute-0 podman[395169]: 2025-12-13 09:12:28.981419399 +0000 UTC m=+0.038789184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:12:29 compute-0 podman[395169]: 2025-12-13 09:12:29.108627129 +0000 UTC m=+0.165996914 container create d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:12:29 compute-0 systemd[1]: Started libpod-conmon-d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05.scope.
Dec 13 09:12:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.204 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baa47ef64a1bb761cc75ab8ee0284c77f8553dc9ac0edc202be465a9a7a152f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baa47ef64a1bb761cc75ab8ee0284c77f8553dc9ac0edc202be465a9a7a152f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baa47ef64a1bb761cc75ab8ee0284c77f8553dc9ac0edc202be465a9a7a152f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baa47ef64a1bb761cc75ab8ee0284c77f8553dc9ac0edc202be465a9a7a152f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:12:29 compute-0 podman[395169]: 2025-12-13 09:12:29.269811192 +0000 UTC m=+0.327180937 container init d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:12:29 compute-0 podman[395169]: 2025-12-13 09:12:29.279476464 +0000 UTC m=+0.336846199 container start d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.308 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:29 compute-0 podman[395169]: 2025-12-13 09:12:29.326189676 +0000 UTC m=+0.383559421 container attach d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:12:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:12:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832433437' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.430 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.608 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.610 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3477MB free_disk=59.98740301281214GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.610 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.611 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.693 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.695 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.711 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.736 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.737 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.769 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.797 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 09:12:29 compute-0 nova_compute[248510]: 2025-12-13 09:12:29.817 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:12:29 compute-0 lvm[395307]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:12:29 compute-0 lvm[395306]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:12:29 compute-0 lvm[395306]: VG ceph_vg0 finished
Dec 13 09:12:29 compute-0 lvm[395307]: VG ceph_vg1 finished
Dec 13 09:12:29 compute-0 lvm[395309]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:12:29 compute-0 lvm[395309]: VG ceph_vg2 finished
Dec 13 09:12:30 compute-0 amazing_kapitsa[395204]: {}
Dec 13 09:12:30 compute-0 systemd[1]: libpod-d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05.scope: Deactivated successfully.
Dec 13 09:12:30 compute-0 systemd[1]: libpod-d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05.scope: Consumed 1.323s CPU time.
Dec 13 09:12:30 compute-0 podman[395169]: 2025-12-13 09:12:30.085386686 +0000 UTC m=+1.142756471 container died d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Dec 13 09:12:30 compute-0 ceph-mon[76537]: pgmap v3341: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Dec 13 09:12:30 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/832433437' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2baa47ef64a1bb761cc75ab8ee0284c77f8553dc9ac0edc202be465a9a7a152f-merged.mount: Deactivated successfully.
Dec 13 09:12:30 compute-0 podman[395169]: 2025-12-13 09:12:30.447236171 +0000 UTC m=+1.504605926 container remove d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:12:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:12:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1461985336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:12:30 compute-0 sudo[395026]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:30 compute-0 nova_compute[248510]: 2025-12-13 09:12:30.503 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:12:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:12:30 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:12:30 compute-0 nova_compute[248510]: 2025-12-13 09:12:30.513 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:12:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:12:30 compute-0 nova_compute[248510]: 2025-12-13 09:12:30.535 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:12:30 compute-0 systemd[1]: libpod-conmon-d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05.scope: Deactivated successfully.
Dec 13 09:12:30 compute-0 nova_compute[248510]: 2025-12-13 09:12:30.559 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:12:30 compute-0 nova_compute[248510]: 2025-12-13 09:12:30.560 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:30 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:12:30 compute-0 sudo[395326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:12:30 compute-0 sudo[395326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:12:30 compute-0 sudo[395326]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3342: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 341 B/s wr, 16 op/s
Dec 13 09:12:30 compute-0 nova_compute[248510]: 2025-12-13 09:12:30.932 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1461985336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:12:31 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:12:31 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:12:32 compute-0 ceph-mon[76537]: pgmap v3342: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 341 B/s wr, 16 op/s
Dec 13 09:12:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:12:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3343: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:33 compute-0 ceph-mon[76537]: pgmap v3343: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:33 compute-0 nova_compute[248510]: 2025-12-13 09:12:33.560 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:12:33 compute-0 nova_compute[248510]: 2025-12-13 09:12:33.560 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:12:33 compute-0 nova_compute[248510]: 2025-12-13 09:12:33.561 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:12:33 compute-0 nova_compute[248510]: 2025-12-13 09:12:33.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3344: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:35 compute-0 nova_compute[248510]: 2025-12-13 09:12:35.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:12:35 compute-0 nova_compute[248510]: 2025-12-13 09:12:35.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:35 compute-0 ceph-mon[76537]: pgmap v3344: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3345: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:12:38 compute-0 ceph-mon[76537]: pgmap v3345: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3346: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:38 compute-0 nova_compute[248510]: 2025-12-13 09:12:38.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:39 compute-0 nova_compute[248510]: 2025-12-13 09:12:39.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:12:40 compute-0 ceph-mon[76537]: pgmap v3346: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:12:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:12:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:12:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:12:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:12:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:12:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3347: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:40 compute-0 nova_compute[248510]: 2025-12-13 09:12:40.937 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:42 compute-0 ceph-mon[76537]: pgmap v3347: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:12:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3348: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:43 compute-0 nova_compute[248510]: 2025-12-13 09:12:43.812 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:44 compute-0 ceph-mon[76537]: pgmap v3348: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3349: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:45 compute-0 nova_compute[248510]: 2025-12-13 09:12:45.938 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:46 compute-0 ceph-mon[76537]: pgmap v3349: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3350: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:12:48 compute-0 ceph-mon[76537]: pgmap v3350: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3351: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:48 compute-0 nova_compute[248510]: 2025-12-13 09:12:48.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:50 compute-0 ceph-mon[76537]: pgmap v3351: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3352: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:50 compute-0 nova_compute[248510]: 2025-12-13 09:12:50.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:52 compute-0 ceph-mon[76537]: pgmap v3352: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:52.585 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:6b:85 2001:db8:0:1:f816:3eff:feb7:6b85 2001:db8::f816:3eff:feb7:6b85'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb7:6b85/64 2001:db8::f816:3eff:feb7:6b85/64', 'neutron:device_id': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d42d98d-b09c-47fa-98dd-69f99f24cca5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a155cc1b-19d3-4430-9d83-b19e30ef1a95) old=Port_Binding(mac=['fa:16:3e:b7:6b:85 2001:db8::f816:3eff:feb7:6b85'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb7:6b85/64', 'neutron:device_id': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:12:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:52.586 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a155cc1b-19d3-4430-9d83-b19e30ef1a95 in datapath ff80f4de-0e76-47f2-b06e-6d3900b63130 updated
Dec 13 09:12:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:52.587 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff80f4de-0e76-47f2-b06e-6d3900b63130, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:12:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:52.588 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0eb68a-6f42-40ee-b6d8-91b8a16020f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:12:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:12:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3353: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:53 compute-0 ceph-mon[76537]: pgmap v3353: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:53 compute-0 nova_compute[248510]: 2025-12-13 09:12:53.816 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:54 compute-0 sshd-session[394270]: Connection closed by 61.245.11.87 port 56924 [preauth]
Dec 13 09:12:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3354: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:55.450 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:12:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:55.451 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:12:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:12:55.451 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:12:55 compute-0 nova_compute[248510]: 2025-12-13 09:12:55.942 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:55 compute-0 ceph-mon[76537]: pgmap v3354: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3355: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:12:58 compute-0 ceph-mon[76537]: pgmap v3355: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3356: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:12:58 compute-0 nova_compute[248510]: 2025-12-13 09:12:58.869 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:12:58 compute-0 podman[395353]: 2025-12-13 09:12:58.999331391 +0000 UTC m=+0.073522575 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 09:12:59 compute-0 podman[395351]: 2025-12-13 09:12:59.010421649 +0000 UTC m=+0.086375117 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 09:12:59 compute-0 podman[395352]: 2025-12-13 09:12:59.049447088 +0000 UTC m=+0.126469593 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 09:13:00 compute-0 ceph-mon[76537]: pgmap v3356: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:13:00 compute-0 nova_compute[248510]: 2025-12-13 09:13:00.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:13:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3357: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:13:00 compute-0 nova_compute[248510]: 2025-12-13 09:13:00.944 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:01 compute-0 nova_compute[248510]: 2025-12-13 09:13:01.947 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:01 compute-0 nova_compute[248510]: 2025-12-13 09:13:01.947 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:01 compute-0 nova_compute[248510]: 2025-12-13 09:13:01.966 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:13:02 compute-0 nova_compute[248510]: 2025-12-13 09:13:02.058 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:02 compute-0 nova_compute[248510]: 2025-12-13 09:13:02.060 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:02 compute-0 nova_compute[248510]: 2025-12-13 09:13:02.070 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:13:02 compute-0 nova_compute[248510]: 2025-12-13 09:13:02.070 248514 INFO nova.compute.claims [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:13:02 compute-0 nova_compute[248510]: 2025-12-13 09:13:02.176 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:02 compute-0 ceph-mon[76537]: pgmap v3357: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:13:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:13:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:13:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3104872258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:13:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3358: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:13:02 compute-0 nova_compute[248510]: 2025-12-13 09:13:02.797 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:02 compute-0 nova_compute[248510]: 2025-12-13 09:13:02.805 248514 DEBUG nova.compute.provider_tree [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:13:02 compute-0 nova_compute[248510]: 2025-12-13 09:13:02.939 248514 DEBUG nova.scheduler.client.report [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:13:02 compute-0 nova_compute[248510]: 2025-12-13 09:13:02.968 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:02 compute-0 nova_compute[248510]: 2025-12-13 09:13:02.969 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.114 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.115 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.141 248514 INFO nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.480 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:13:03 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3104872258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:13:03 compute-0 ceph-mon[76537]: pgmap v3358: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.588 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.589 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.590 248514 INFO nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Creating image(s)
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.619 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.648 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.674 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.678 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.776 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.777 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.778 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.778 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.806 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.813 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.856 248514 DEBUG nova.policy [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:13:03 compute-0 nova_compute[248510]: 2025-12-13 09:13:03.872 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:04 compute-0 nova_compute[248510]: 2025-12-13 09:13:04.141 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:04 compute-0 nova_compute[248510]: 2025-12-13 09:13:04.217 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:13:04 compute-0 nova_compute[248510]: 2025-12-13 09:13:04.315 248514 DEBUG nova.objects.instance [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 707e75d9-f6d2-413d-a727-c3ecbfea90c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:13:04 compute-0 nova_compute[248510]: 2025-12-13 09:13:04.490 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:13:04 compute-0 nova_compute[248510]: 2025-12-13 09:13:04.491 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Ensure instance console log exists: /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:13:04 compute-0 nova_compute[248510]: 2025-12-13 09:13:04.492 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:04 compute-0 nova_compute[248510]: 2025-12-13 09:13:04.492 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:04 compute-0 nova_compute[248510]: 2025-12-13 09:13:04.492 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3359: 321 pgs: 321 active+clean; 76 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 13 09:13:05 compute-0 ceph-mon[76537]: pgmap v3359: 321 pgs: 321 active+clean; 76 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 13 09:13:05 compute-0 nova_compute[248510]: 2025-12-13 09:13:05.946 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:06 compute-0 nova_compute[248510]: 2025-12-13 09:13:06.316 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Successfully created port: 37c91936-0589-4bd3-9413-3af7db3e8feb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:13:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3360: 321 pgs: 321 active+clean; 76 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 13 09:13:07 compute-0 nova_compute[248510]: 2025-12-13 09:13:07.395 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Successfully created port: 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:13:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:13:07 compute-0 ceph-mon[76537]: pgmap v3360: 321 pgs: 321 active+clean; 76 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 13 09:13:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3361: 321 pgs: 321 active+clean; 88 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Dec 13 09:13:08 compute-0 nova_compute[248510]: 2025-12-13 09:13:08.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:13:09
Dec 13 09:13:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:13:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:13:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.meta', '.rgw.root']
Dec 13 09:13:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:13:09 compute-0 ceph-mon[76537]: pgmap v3361: 321 pgs: 321 active+clean; 88 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Dec 13 09:13:09 compute-0 nova_compute[248510]: 2025-12-13 09:13:09.922 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Successfully updated port: 37c91936-0589-4bd3-9413-3af7db3e8feb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:13:10 compute-0 nova_compute[248510]: 2025-12-13 09:13:10.023 248514 DEBUG nova.compute.manager [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-changed-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:10 compute-0 nova_compute[248510]: 2025-12-13 09:13:10.024 248514 DEBUG nova.compute.manager [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing instance network info cache due to event network-changed-37c91936-0589-4bd3-9413-3af7db3e8feb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:13:10 compute-0 nova_compute[248510]: 2025-12-13 09:13:10.025 248514 DEBUG oslo_concurrency.lockutils [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:13:10 compute-0 nova_compute[248510]: 2025-12-13 09:13:10.025 248514 DEBUG oslo_concurrency.lockutils [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:13:10 compute-0 nova_compute[248510]: 2025-12-13 09:13:10.025 248514 DEBUG nova.network.neutron [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing network info cache for port 37c91936-0589-4bd3-9413-3af7db3e8feb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:13:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:13:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:13:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:13:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:13:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:13:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:13:10 compute-0 nova_compute[248510]: 2025-12-13 09:13:10.254 248514 DEBUG nova.network.neutron [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:13:10 compute-0 nova_compute[248510]: 2025-12-13 09:13:10.708 248514 DEBUG nova.network.neutron [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:13:10 compute-0 nova_compute[248510]: 2025-12-13 09:13:10.728 248514 DEBUG oslo_concurrency.lockutils [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:13:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3362: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:10 compute-0 nova_compute[248510]: 2025-12-13 09:13:10.948 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:13:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:13:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:13:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:13:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:13:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:13:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:13:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:13:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:13:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:13:11 compute-0 sshd-session[395599]: Invalid user polkadot from 80.94.92.165 port 54338
Dec 13 09:13:11 compute-0 sshd-session[395599]: Connection closed by invalid user polkadot 80.94.92.165 port 54338 [preauth]
Dec 13 09:13:11 compute-0 nova_compute[248510]: 2025-12-13 09:13:11.530 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Successfully updated port: 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:13:11 compute-0 nova_compute[248510]: 2025-12-13 09:13:11.555 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:13:11 compute-0 nova_compute[248510]: 2025-12-13 09:13:11.556 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:13:11 compute-0 nova_compute[248510]: 2025-12-13 09:13:11.556 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:13:11 compute-0 nova_compute[248510]: 2025-12-13 09:13:11.767 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:13:11 compute-0 ceph-mon[76537]: pgmap v3362: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:12 compute-0 nova_compute[248510]: 2025-12-13 09:13:12.117 248514 DEBUG nova.compute.manager [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-changed-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:12 compute-0 nova_compute[248510]: 2025-12-13 09:13:12.117 248514 DEBUG nova.compute.manager [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing instance network info cache due to event network-changed-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:13:12 compute-0 nova_compute[248510]: 2025-12-13 09:13:12.117 248514 DEBUG oslo_concurrency.lockutils [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:13:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:13:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3363: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:13 compute-0 nova_compute[248510]: 2025-12-13 09:13:13.879 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:13 compute-0 ceph-mon[76537]: pgmap v3363: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3364: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:13:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713716095' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:13:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:13:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713716095' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:13:15 compute-0 ovn_controller[148476]: 2025-12-13T09:13:15Z|01515|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec 13 09:13:15 compute-0 ceph-mon[76537]: pgmap v3364: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3713716095' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:13:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3713716095' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:13:15 compute-0 nova_compute[248510]: 2025-12-13 09:13:15.944 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:13:15 compute-0 nova_compute[248510]: 2025-12-13 09:13:15.950 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:15 compute-0 nova_compute[248510]: 2025-12-13 09:13:15.983 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:13:15 compute-0 nova_compute[248510]: 2025-12-13 09:13:15.984 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance network_info: |[{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:13:15 compute-0 nova_compute[248510]: 2025-12-13 09:13:15.984 248514 DEBUG oslo_concurrency.lockutils [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:13:15 compute-0 nova_compute[248510]: 2025-12-13 09:13:15.984 248514 DEBUG nova.network.neutron [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing network info cache for port 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:13:15 compute-0 nova_compute[248510]: 2025-12-13 09:13:15.989 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Start _get_guest_xml network_info=[{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:13:15 compute-0 nova_compute[248510]: 2025-12-13 09:13:15.996 248514 WARNING nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.011 248514 DEBUG nova.virt.libvirt.host [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.013 248514 DEBUG nova.virt.libvirt.host [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.019 248514 DEBUG nova.virt.libvirt.host [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.020 248514 DEBUG nova.virt.libvirt.host [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.021 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.021 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.022 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.022 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.022 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.023 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.023 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.023 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.023 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.024 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.024 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.024 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.028 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:13:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352414849' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.610 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.651 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:16 compute-0 nova_compute[248510]: 2025-12-13 09:13:16.657 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3365: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 395 KiB/s wr, 2 op/s
Dec 13 09:13:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3352414849' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:13:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:13:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271516082' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.227 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.229 248514 DEBUG nova.virt.libvirt.vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-132103940',display_name='tempest-TestGettingAddress-server-132103940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-132103940',id=143,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-eqjt2kv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:03Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=707e75d9-f6d2-413d-a727-c3ecbfea90c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.229 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.230 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.231 248514 DEBUG nova.virt.libvirt.vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-132103940',display_name='tempest-TestGettingAddress-server-132103940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-132103940',id=143,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-eqjt2kv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:03Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=707e75d9-f6d2-413d-a727-c3ecbfea90c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.232 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.232 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.234 248514 DEBUG nova.objects.instance [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 707e75d9-f6d2-413d-a727-c3ecbfea90c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.259 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:13:17 compute-0 nova_compute[248510]:   <uuid>707e75d9-f6d2-413d-a727-c3ecbfea90c1</uuid>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   <name>instance-0000008f</name>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-132103940</nova:name>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:13:15</nova:creationTime>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <nova:port uuid="37c91936-0589-4bd3-9413-3af7db3e8feb">
Dec 13 09:13:17 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <nova:port uuid="9538b93c-6b21-4b5f-b6d3-89fd1b840f5d">
Dec 13 09:13:17 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe39:da84" ipVersion="6"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe39:da84" ipVersion="6"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <system>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <entry name="serial">707e75d9-f6d2-413d-a727-c3ecbfea90c1</entry>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <entry name="uuid">707e75d9-f6d2-413d-a727-c3ecbfea90c1</entry>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     </system>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   <os>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   </os>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   <features>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   </features>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk">
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       </source>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk.config">
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       </source>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:13:17 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:77:6c:85"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <target dev="tap37c91936-05"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:39:da:84"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <target dev="tap9538b93c-6b"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/console.log" append="off"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <video>
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     </video>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:13:17 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:13:17 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:13:17 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:13:17 compute-0 nova_compute[248510]: </domain>
Dec 13 09:13:17 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.261 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Preparing to wait for external event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.261 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.261 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.262 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.262 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Preparing to wait for external event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.262 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.263 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.263 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.264 248514 DEBUG nova.virt.libvirt.vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-132103940',display_name='tempest-TestGettingAddress-server-132103940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-132103940',id=143,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-eqjt2kv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:03Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=707e75d9-f6d2-413d-a727-c3ecbfea90c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.264 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.265 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.265 248514 DEBUG os_vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.266 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.266 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.267 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.270 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.270 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37c91936-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.271 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37c91936-05, col_values=(('external_ids', {'iface-id': '37c91936-0589-4bd3-9413-3af7db3e8feb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:6c:85', 'vm-uuid': '707e75d9-f6d2-413d-a727-c3ecbfea90c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:17 compute-0 NetworkManager[50376]: <info>  [1765617197.2748] manager: (tap37c91936-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.275 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.284 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.286 248514 INFO os_vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05')
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.287 248514 DEBUG nova.virt.libvirt.vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-132103940',display_name='tempest-TestGettingAddress-server-132103940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-132103940',id=143,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-eqjt2kv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:03Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=707e75d9-f6d2-413d-a727-c3ecbfea90c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.288 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.289 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.290 248514 DEBUG os_vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.291 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.292 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.292 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.295 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.296 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9538b93c-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.297 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9538b93c-6b, col_values=(('external_ids', {'iface-id': '9538b93c-6b21-4b5f-b6d3-89fd1b840f5d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:da:84', 'vm-uuid': '707e75d9-f6d2-413d-a727-c3ecbfea90c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.299 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:17 compute-0 NetworkManager[50376]: <info>  [1765617197.3004] manager: (tap9538b93c-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/627)
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.302 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.307 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.308 248514 INFO os_vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b')
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.370 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.370 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.371 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:77:6c:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.371 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:39:da:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.372 248514 INFO nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Using config drive
Dec 13 09:13:17 compute-0 nova_compute[248510]: 2025-12-13 09:13:17.397 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:13:17 compute-0 ceph-mon[76537]: pgmap v3365: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 395 KiB/s wr, 2 op/s
Dec 13 09:13:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1271516082' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.102 248514 INFO nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Creating config drive at /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/disk.config
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.107 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpui2vk4ei execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.282 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpui2vk4ei" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.313 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.317 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/disk.config 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.478 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/disk.config 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.479 248514 INFO nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Deleting local config drive /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/disk.config because it was imported into RBD.
Dec 13 09:13:18 compute-0 NetworkManager[50376]: <info>  [1765617198.5564] manager: (tap37c91936-05): new Tun device (/org/freedesktop/NetworkManager/Devices/628)
Dec 13 09:13:18 compute-0 kernel: tap37c91936-05: entered promiscuous mode
Dec 13 09:13:18 compute-0 ovn_controller[148476]: 2025-12-13T09:13:18Z|01516|binding|INFO|Claiming lport 37c91936-0589-4bd3-9413-3af7db3e8feb for this chassis.
Dec 13 09:13:18 compute-0 ovn_controller[148476]: 2025-12-13T09:13:18Z|01517|binding|INFO|37c91936-0589-4bd3-9413-3af7db3e8feb: Claiming fa:16:3e:77:6c:85 10.100.0.11
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.560 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.567 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:18 compute-0 kernel: tap9538b93c-6b: entered promiscuous mode
Dec 13 09:13:18 compute-0 NetworkManager[50376]: <info>  [1765617198.5745] manager: (tap9538b93c-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/629)
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.581 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:6c:85 10.100.0.11'], port_security=['fa:16:3e:77:6c:85 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '707e75d9-f6d2-413d-a727-c3ecbfea90c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9e8c352-1166-4841-b5aa-79bfa0acca5d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=37c91936-0589-4bd3-9413-3af7db3e8feb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.582 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 37c91936-0589-4bd3-9413-3af7db3e8feb in datapath ef1a3009-9f3e-4c4a-9ee4-04bec3653434 bound to our chassis
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.584 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef1a3009-9f3e-4c4a-9ee4-04bec3653434
Dec 13 09:13:18 compute-0 systemd-udevd[395743]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:13:18 compute-0 systemd-udevd[395742]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.598 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f2b6ab0-0a96-4b5e-b73e-bb75b28ebcc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.600 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef1a3009-91 in ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.602 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef1a3009-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5748dc73-435f-4ee8-bf88-f46bce7dc4fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8ecdf364-0254-4f63-9cc6-25fbc3180f0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 NetworkManager[50376]: <info>  [1765617198.6134] device (tap9538b93c-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:13:18 compute-0 NetworkManager[50376]: <info>  [1765617198.6145] device (tap9538b93c-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:13:18 compute-0 NetworkManager[50376]: <info>  [1765617198.6151] device (tap37c91936-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:13:18 compute-0 NetworkManager[50376]: <info>  [1765617198.6159] device (tap37c91936-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:13:18 compute-0 systemd-machined[210538]: New machine qemu-174-instance-0000008f.
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.623 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[10ed4b20-a038-413d-82d2-74411d4ec27d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.654 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2ae638-3d1c-47cd-8336-2a8fe19dbf58]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 systemd[1]: Started Virtual Machine qemu-174-instance-0000008f.
Dec 13 09:13:18 compute-0 ovn_controller[148476]: 2025-12-13T09:13:18Z|01518|binding|INFO|Claiming lport 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d for this chassis.
Dec 13 09:13:18 compute-0 ovn_controller[148476]: 2025-12-13T09:13:18Z|01519|binding|INFO|9538b93c-6b21-4b5f-b6d3-89fd1b840f5d: Claiming fa:16:3e:39:da:84 2001:db8:0:1:f816:3eff:fe39:da84 2001:db8::f816:3eff:fe39:da84
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.669 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:18 compute-0 ovn_controller[148476]: 2025-12-13T09:13:18Z|01520|binding|INFO|Setting lport 37c91936-0589-4bd3-9413-3af7db3e8feb ovn-installed in OVS
Dec 13 09:13:18 compute-0 ovn_controller[148476]: 2025-12-13T09:13:18Z|01521|binding|INFO|Setting lport 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d ovn-installed in OVS
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.689 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.703 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[82bd100d-4996-4fdf-b78d-c3cf55c7b8be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 NetworkManager[50376]: <info>  [1765617198.7122] manager: (tapef1a3009-90): new Veth device (/org/freedesktop/NetworkManager/Devices/630)
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.712 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57bdad6b-8220-4201-ba4e-ea1985c51c62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.755 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b80a570a-e672-43b4-8484-bfa7e0770ba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.760 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[83f172e2-024c-4ef8-95db-2a6396c27e40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 ovn_controller[148476]: 2025-12-13T09:13:18Z|01522|binding|INFO|Setting lport 37c91936-0589-4bd3-9413-3af7db3e8feb up in Southbound
Dec 13 09:13:18 compute-0 ovn_controller[148476]: 2025-12-13T09:13:18Z|01523|binding|INFO|Setting lport 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d up in Southbound
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.772 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:da:84 2001:db8:0:1:f816:3eff:fe39:da84 2001:db8::f816:3eff:fe39:da84'], port_security=['fa:16:3e:39:da:84 2001:db8:0:1:f816:3eff:fe39:da84 2001:db8::f816:3eff:fe39:da84'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe39:da84/64 2001:db8::f816:3eff:fe39:da84/64', 'neutron:device_id': '707e75d9-f6d2-413d-a727-c3ecbfea90c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d42d98d-b09c-47fa-98dd-69f99f24cca5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:13:18 compute-0 NetworkManager[50376]: <info>  [1765617198.7884] device (tapef1a3009-90): carrier: link connected
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.796 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[936e3dea-b394-45f7-bbf1-8da6c457ed2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3366: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 395 KiB/s wr, 2 op/s
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.821 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[217fed17-4ac5-4b2d-9fe1-9948b78d28ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef1a3009-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:51:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 436], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977600, 'reachable_time': 27921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395779, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.837 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6391f351-1aef-4f0d-94d1-863ea72b9813]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:5195'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977600, 'tstamp': 977600}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395780, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.836 248514 DEBUG nova.network.neutron [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updated VIF entry in instance network info cache for port 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.836 248514 DEBUG nova.network.neutron [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.859 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[114b6d44-5150-4548-9728-a62455b32eb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef1a3009-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:51:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 436], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977600, 'reachable_time': 27921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395781, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.882 248514 DEBUG oslo_concurrency.lockutils [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:13:18 compute-0 nova_compute[248510]: 2025-12-13 09:13:18.883 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.891 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35468b07-ac0c-40ac-a888-e049db1bcd5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.976 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1df77a08-13d2-4421-9817-0048e1420ad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.978 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef1a3009-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.978 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.980 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef1a3009-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:18 compute-0 kernel: tapef1a3009-90: entered promiscuous mode
Dec 13 09:13:18 compute-0 NetworkManager[50376]: <info>  [1765617198.9834] manager: (tapef1a3009-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/631)
Dec 13 09:13:18 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.985 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef1a3009-90, col_values=(('external_ids', {'iface-id': '365ef2b2-09ce-4255-947a-81f3896d22ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:18 compute-0 ovn_controller[148476]: 2025-12-13T09:13:18Z|01524|binding|INFO|Releasing lport 365ef2b2-09ce-4255-947a-81f3896d22ee from this chassis (sb_readonly=0)
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.006 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef1a3009-9f3e-4c4a-9ee4-04bec3653434.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef1a3009-9f3e-4c4a-9ee4-04bec3653434.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.008 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[06d73351-3270-480d-89eb-c79f7891a462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.009 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-ef1a3009-9f3e-4c4a-9ee4-04bec3653434
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/ef1a3009-9f3e-4c4a-9ee4-04bec3653434.pid.haproxy
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID ef1a3009-9f3e-4c4a-9ee4-04bec3653434
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.011 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'env', 'PROCESS_TAG=haproxy-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef1a3009-9f3e-4c4a-9ee4-04bec3653434.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.039 248514 DEBUG nova.compute.manager [req-f3c62ee2-b6c9-4c57-94d0-1cd5ca179ba0 req-71873ca2-aa8c-4876-86db-bfee15d5ec80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.042 248514 DEBUG oslo_concurrency.lockutils [req-f3c62ee2-b6c9-4c57-94d0-1cd5ca179ba0 req-71873ca2-aa8c-4876-86db-bfee15d5ec80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.042 248514 DEBUG oslo_concurrency.lockutils [req-f3c62ee2-b6c9-4c57-94d0-1cd5ca179ba0 req-71873ca2-aa8c-4876-86db-bfee15d5ec80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.043 248514 DEBUG oslo_concurrency.lockutils [req-f3c62ee2-b6c9-4c57-94d0-1cd5ca179ba0 req-71873ca2-aa8c-4876-86db-bfee15d5ec80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.043 248514 DEBUG nova.compute.manager [req-f3c62ee2-b6c9-4c57-94d0-1cd5ca179ba0 req-71873ca2-aa8c-4876-86db-bfee15d5ec80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Processing event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.063 248514 DEBUG nova.compute.manager [req-676a9607-14a5-4ad3-81ba-583870e8554c req-f9c371af-2d84-4d3e-b600-d18593cc375f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.064 248514 DEBUG oslo_concurrency.lockutils [req-676a9607-14a5-4ad3-81ba-583870e8554c req-f9c371af-2d84-4d3e-b600-d18593cc375f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.064 248514 DEBUG oslo_concurrency.lockutils [req-676a9607-14a5-4ad3-81ba-583870e8554c req-f9c371af-2d84-4d3e-b600-d18593cc375f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.064 248514 DEBUG oslo_concurrency.lockutils [req-676a9607-14a5-4ad3-81ba-583870e8554c req-f9c371af-2d84-4d3e-b600-d18593cc375f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.065 248514 DEBUG nova.compute.manager [req-676a9607-14a5-4ad3-81ba-583870e8554c req-f9c371af-2d84-4d3e-b600-d18593cc375f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Processing event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.284 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617199.2840216, 707e75d9-f6d2-413d-a727-c3ecbfea90c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.285 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] VM Started (Lifecycle Event)
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.287 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.292 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.296 248514 INFO nova.virt.libvirt.driver [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance spawned successfully.
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.296 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.329 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.336 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.337 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.338 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.338 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.339 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.339 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.344 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.393 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.394 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617199.2841537, 707e75d9-f6d2-413d-a727-c3ecbfea90c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.394 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] VM Paused (Lifecycle Event)
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.422 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.426 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617199.2901063, 707e75d9-f6d2-413d-a727-c3ecbfea90c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.426 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] VM Resumed (Lifecycle Event)
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.438 248514 INFO nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Took 15.85 seconds to spawn the instance on the hypervisor.
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.438 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:13:19 compute-0 podman[395856]: 2025-12-13 09:13:19.438979722 +0000 UTC m=+0.049061702 container create 1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.447 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.455 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.483 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:13:19 compute-0 systemd[1]: Started libpod-conmon-1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a.scope.
Dec 13 09:13:19 compute-0 podman[395856]: 2025-12-13 09:13:19.411836291 +0000 UTC m=+0.021918311 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:13:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.521 248514 INFO nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Took 17.50 seconds to build instance.
Dec 13 09:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e219d71c2457e91e96bcf457fd2d2a6b9c400e89e423017a2889e15b6fb339/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.543 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:19 compute-0 podman[395856]: 2025-12-13 09:13:19.552729055 +0000 UTC m=+0.162811035 container init 1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 09:13:19 compute-0 podman[395856]: 2025-12-13 09:13:19.557895594 +0000 UTC m=+0.167977574 container start 1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 09:13:19 compute-0 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [NOTICE]   (395876) : New worker (395878) forked
Dec 13 09:13:19 compute-0 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [NOTICE]   (395876) : Loading success.
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.627 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d in datapath ff80f4de-0e76-47f2-b06e-6d3900b63130 unbound from our chassis
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.631 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff80f4de-0e76-47f2-b06e-6d3900b63130
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.645 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[963669cf-5ab6-414b-9793-b253a3a19de9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.646 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapff80f4de-01 in ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.649 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapff80f4de-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.649 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3160cbd2-9be3-4709-9e8b-543a44f11b3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.651 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[996bdab5-03ba-4686-8ad4-3e43811ca534]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.664 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[39326667-9ab6-4ea9-8274-2881d23105e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.687 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7837abc-941f-4cbb-a07e-f3c99b95e242]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.711 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf1eb83-81ca-471f-b880-c8f4a517fbf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.725 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[07f54147-df29-460e-88c8-32465d8dc5f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 NetworkManager[50376]: <info>  [1765617199.7273] manager: (tapff80f4de-00): new Veth device (/org/freedesktop/NetworkManager/Devices/632)
Dec 13 09:13:19 compute-0 systemd-udevd[395774]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:13:19 compute-0 nova_compute[248510]: 2025-12-13 09:13:19.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.774 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5d2828-ee22-4be2-8ae4-786033385f5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.779 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3856c9de-8aca-4900-a35f-b51458846b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 NetworkManager[50376]: <info>  [1765617199.8177] device (tapff80f4de-00): carrier: link connected
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.825 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[264724d5-fedd-4344-8b26-386b1c353287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.846 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb973c39-48d9-4178-a145-975584af65bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff80f4de-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:6b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977703, 'reachable_time': 20848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395899, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.876 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7166d0f-851f-4616-aa2d-cb7a4a175b5e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:6b85'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977703, 'tstamp': 977703}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395900, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.905 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55020b81-1694-4c36-915a-60e886d0e6b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff80f4de-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:6b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977703, 'reachable_time': 20848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395901, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.947 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[32356e02-c725-4415-b405-6cc852a18b8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.998 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc1dea0-b8d1-4ff6-8d1f-ea693df0435a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.000 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff80f4de-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.001 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.002 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff80f4de-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:20 compute-0 nova_compute[248510]: 2025-12-13 09:13:20.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:20 compute-0 NetworkManager[50376]: <info>  [1765617200.0046] manager: (tapff80f4de-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/633)
Dec 13 09:13:20 compute-0 kernel: tapff80f4de-00: entered promiscuous mode
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.010 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff80f4de-00, col_values=(('external_ids', {'iface-id': 'a155cc1b-19d3-4430-9d83-b19e30ef1a95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:20 compute-0 ovn_controller[148476]: 2025-12-13T09:13:20Z|01525|binding|INFO|Releasing lport a155cc1b-19d3-4430-9d83-b19e30ef1a95 from this chassis (sb_readonly=0)
Dec 13 09:13:20 compute-0 nova_compute[248510]: 2025-12-13 09:13:20.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.015 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ff80f4de-0e76-47f2-b06e-6d3900b63130.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ff80f4de-0e76-47f2-b06e-6d3900b63130.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.016 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a6428577-3a7c-4e05-b8b5-9e1c00cfd580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.017 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-ff80f4de-0e76-47f2-b06e-6d3900b63130
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/ff80f4de-0e76-47f2-b06e-6d3900b63130.pid.haproxy
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID ff80f4de-0e76-47f2-b06e-6d3900b63130
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:13:20 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.018 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'env', 'PROCESS_TAG=haproxy-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ff80f4de-0e76-47f2-b06e-6d3900b63130.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:13:20 compute-0 nova_compute[248510]: 2025-12-13 09:13:20.026 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:20 compute-0 ceph-mon[76537]: pgmap v3366: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 395 KiB/s wr, 2 op/s
Dec 13 09:13:20 compute-0 podman[395932]: 2025-12-13 09:13:20.45927251 +0000 UTC m=+0.064460478 container create 8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 09:13:20 compute-0 systemd[1]: Started libpod-conmon-8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6.scope.
Dec 13 09:13:20 compute-0 podman[395932]: 2025-12-13 09:13:20.428140039 +0000 UTC m=+0.033328037 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:13:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5335169775ac2f2f0d9de9ac62c16a1b6366679c57b207e1bc023d06103d75f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:20 compute-0 podman[395932]: 2025-12-13 09:13:20.552612261 +0000 UTC m=+0.157800249 container init 8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Dec 13 09:13:20 compute-0 podman[395932]: 2025-12-13 09:13:20.561495514 +0000 UTC m=+0.166683492 container start 8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:13:20 compute-0 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [NOTICE]   (395952) : New worker (395954) forked
Dec 13 09:13:20 compute-0 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [NOTICE]   (395952) : Loading success.
Dec 13 09:13:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3367: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 12 KiB/s wr, 4 op/s
Dec 13 09:13:21 compute-0 nova_compute[248510]: 2025-12-13 09:13:21.153 248514 DEBUG nova.compute.manager [req-fe8ce97c-4f33-4bb1-ac22-163f5caed679 req-cc32b4aa-5e00-44c3-84b2-516b1913097e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:21 compute-0 nova_compute[248510]: 2025-12-13 09:13:21.154 248514 DEBUG oslo_concurrency.lockutils [req-fe8ce97c-4f33-4bb1-ac22-163f5caed679 req-cc32b4aa-5e00-44c3-84b2-516b1913097e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:21 compute-0 nova_compute[248510]: 2025-12-13 09:13:21.154 248514 DEBUG oslo_concurrency.lockutils [req-fe8ce97c-4f33-4bb1-ac22-163f5caed679 req-cc32b4aa-5e00-44c3-84b2-516b1913097e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:21 compute-0 nova_compute[248510]: 2025-12-13 09:13:21.154 248514 DEBUG oslo_concurrency.lockutils [req-fe8ce97c-4f33-4bb1-ac22-163f5caed679 req-cc32b4aa-5e00-44c3-84b2-516b1913097e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:21 compute-0 nova_compute[248510]: 2025-12-13 09:13:21.154 248514 DEBUG nova.compute.manager [req-fe8ce97c-4f33-4bb1-ac22-163f5caed679 req-cc32b4aa-5e00-44c3-84b2-516b1913097e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] No waiting events found dispatching network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:13:21 compute-0 nova_compute[248510]: 2025-12-13 09:13:21.155 248514 WARNING nova.compute.manager [req-fe8ce97c-4f33-4bb1-ac22-163f5caed679 req-cc32b4aa-5e00-44c3-84b2-516b1913097e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received unexpected event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d for instance with vm_state active and task_state None.
Dec 13 09:13:21 compute-0 nova_compute[248510]: 2025-12-13 09:13:21.240 248514 DEBUG nova.compute.manager [req-89989770-38d0-462e-a0f0-42aba80ca6d5 req-24321a0b-18f0-44a4-a183-c739ada7cab4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:21 compute-0 nova_compute[248510]: 2025-12-13 09:13:21.241 248514 DEBUG oslo_concurrency.lockutils [req-89989770-38d0-462e-a0f0-42aba80ca6d5 req-24321a0b-18f0-44a4-a183-c739ada7cab4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:21 compute-0 nova_compute[248510]: 2025-12-13 09:13:21.241 248514 DEBUG oslo_concurrency.lockutils [req-89989770-38d0-462e-a0f0-42aba80ca6d5 req-24321a0b-18f0-44a4-a183-c739ada7cab4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:21 compute-0 nova_compute[248510]: 2025-12-13 09:13:21.241 248514 DEBUG oslo_concurrency.lockutils [req-89989770-38d0-462e-a0f0-42aba80ca6d5 req-24321a0b-18f0-44a4-a183-c739ada7cab4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:21 compute-0 nova_compute[248510]: 2025-12-13 09:13:21.241 248514 DEBUG nova.compute.manager [req-89989770-38d0-462e-a0f0-42aba80ca6d5 req-24321a0b-18f0-44a4-a183-c739ada7cab4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] No waiting events found dispatching network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:13:21 compute-0 nova_compute[248510]: 2025-12-13 09:13:21.242 248514 WARNING nova.compute.manager [req-89989770-38d0-462e-a0f0-42aba80ca6d5 req-24321a0b-18f0-44a4-a183-c739ada7cab4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received unexpected event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb for instance with vm_state active and task_state None.
Dec 13 09:13:21 compute-0 ceph-mon[76537]: pgmap v3367: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 12 KiB/s wr, 4 op/s
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003629306754213902 of space, bias 1.0, pg target 0.10887920262641705 quantized to 32 (current 32)
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697318854645266 of space, bias 1.0, pg target 0.20091956563935798 quantized to 32 (current 32)
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:13:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:13:22 compute-0 nova_compute[248510]: 2025-12-13 09:13:22.300 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:13:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3368: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 12 KiB/s wr, 3 op/s
Dec 13 09:13:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:23.245 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:13:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:23.248 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:13:23 compute-0 nova_compute[248510]: 2025-12-13 09:13:23.251 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:23 compute-0 nova_compute[248510]: 2025-12-13 09:13:23.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:23 compute-0 ovn_controller[148476]: 2025-12-13T09:13:23Z|01526|binding|INFO|Releasing lport 365ef2b2-09ce-4255-947a-81f3896d22ee from this chassis (sb_readonly=0)
Dec 13 09:13:23 compute-0 ovn_controller[148476]: 2025-12-13T09:13:23Z|01527|binding|INFO|Releasing lport a155cc1b-19d3-4430-9d83-b19e30ef1a95 from this chassis (sb_readonly=0)
Dec 13 09:13:23 compute-0 NetworkManager[50376]: <info>  [1765617203.8914] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Dec 13 09:13:23 compute-0 NetworkManager[50376]: <info>  [1765617203.8922] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Dec 13 09:13:23 compute-0 ovn_controller[148476]: 2025-12-13T09:13:23Z|01528|binding|INFO|Releasing lport 365ef2b2-09ce-4255-947a-81f3896d22ee from this chassis (sb_readonly=0)
Dec 13 09:13:23 compute-0 ovn_controller[148476]: 2025-12-13T09:13:23Z|01529|binding|INFO|Releasing lport a155cc1b-19d3-4430-9d83-b19e30ef1a95 from this chassis (sb_readonly=0)
Dec 13 09:13:23 compute-0 nova_compute[248510]: 2025-12-13 09:13:23.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:23 compute-0 nova_compute[248510]: 2025-12-13 09:13:23.934 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:24 compute-0 nova_compute[248510]: 2025-12-13 09:13:24.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:13:24 compute-0 nova_compute[248510]: 2025-12-13 09:13:24.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:13:24 compute-0 nova_compute[248510]: 2025-12-13 09:13:24.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:13:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3369: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:13:25 compute-0 nova_compute[248510]: 2025-12-13 09:13:25.026 248514 DEBUG nova.compute.manager [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-changed-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:25 compute-0 nova_compute[248510]: 2025-12-13 09:13:25.027 248514 DEBUG nova.compute.manager [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing instance network info cache due to event network-changed-37c91936-0589-4bd3-9413-3af7db3e8feb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:13:25 compute-0 nova_compute[248510]: 2025-12-13 09:13:25.028 248514 DEBUG oslo_concurrency.lockutils [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:13:25 compute-0 nova_compute[248510]: 2025-12-13 09:13:25.029 248514 DEBUG oslo_concurrency.lockutils [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:13:25 compute-0 nova_compute[248510]: 2025-12-13 09:13:25.029 248514 DEBUG nova.network.neutron [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing network info cache for port 37c91936-0589-4bd3-9413-3af7db3e8feb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:13:25 compute-0 nova_compute[248510]: 2025-12-13 09:13:25.096 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:13:25 compute-0 ceph-mon[76537]: pgmap v3368: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 12 KiB/s wr, 3 op/s
Dec 13 09:13:26 compute-0 ceph-mon[76537]: pgmap v3369: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:13:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3370: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:13:27 compute-0 nova_compute[248510]: 2025-12-13 09:13:27.176 248514 DEBUG nova.network.neutron [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updated VIF entry in instance network info cache for port 37c91936-0589-4bd3-9413-3af7db3e8feb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:13:27 compute-0 nova_compute[248510]: 2025-12-13 09:13:27.177 248514 DEBUG nova.network.neutron [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:13:27 compute-0 nova_compute[248510]: 2025-12-13 09:13:27.252 248514 DEBUG oslo_concurrency.lockutils [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:13:27 compute-0 nova_compute[248510]: 2025-12-13 09:13:27.252 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:13:27 compute-0 nova_compute[248510]: 2025-12-13 09:13:27.253 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 09:13:27 compute-0 nova_compute[248510]: 2025-12-13 09:13:27.253 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 707e75d9-f6d2-413d-a727-c3ecbfea90c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:13:27 compute-0 nova_compute[248510]: 2025-12-13 09:13:27.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:27 compute-0 ceph-mon[76537]: pgmap v3370: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:13:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:13:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3371: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:13:28 compute-0 nova_compute[248510]: 2025-12-13 09:13:28.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:29 compute-0 nova_compute[248510]: 2025-12-13 09:13:29.691 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:13:29 compute-0 nova_compute[248510]: 2025-12-13 09:13:29.712 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:13:29 compute-0 nova_compute[248510]: 2025-12-13 09:13:29.713 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 09:13:29 compute-0 nova_compute[248510]: 2025-12-13 09:13:29.713 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:13:29 compute-0 nova_compute[248510]: 2025-12-13 09:13:29.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:13:29 compute-0 nova_compute[248510]: 2025-12-13 09:13:29.831 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:29 compute-0 nova_compute[248510]: 2025-12-13 09:13:29.832 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:29 compute-0 nova_compute[248510]: 2025-12-13 09:13:29.833 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:29 compute-0 nova_compute[248510]: 2025-12-13 09:13:29.833 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:13:29 compute-0 nova_compute[248510]: 2025-12-13 09:13:29.834 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:29 compute-0 podman[395967]: 2025-12-13 09:13:29.986481274 +0000 UTC m=+0.062214501 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:13:29 compute-0 podman[395966]: 2025-12-13 09:13:29.998001433 +0000 UTC m=+0.083003413 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:13:30 compute-0 ceph-mon[76537]: pgmap v3371: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:13:30 compute-0 podman[395965]: 2025-12-13 09:13:30.036394026 +0000 UTC m=+0.125292193 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 13 09:13:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:13:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114344371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:13:30 compute-0 nova_compute[248510]: 2025-12-13 09:13:30.514 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3372: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:13:30 compute-0 sudo[396046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:13:30 compute-0 sudo[396046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:13:30 compute-0 nova_compute[248510]: 2025-12-13 09:13:30.847 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:13:30 compute-0 nova_compute[248510]: 2025-12-13 09:13:30.849 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:13:30 compute-0 sudo[396046]: pam_unix(sudo:session): session closed for user root
Dec 13 09:13:30 compute-0 sudo[396071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:13:30 compute-0 sudo[396071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:13:31 compute-0 nova_compute[248510]: 2025-12-13 09:13:31.105 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:13:31 compute-0 nova_compute[248510]: 2025-12-13 09:13:31.106 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3322MB free_disk=59.96650728955865GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:13:31 compute-0 nova_compute[248510]: 2025-12-13 09:13:31.106 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:31 compute-0 nova_compute[248510]: 2025-12-13 09:13:31.107 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:31.252 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:31 compute-0 nova_compute[248510]: 2025-12-13 09:13:31.270 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 707e75d9-f6d2-413d-a727-c3ecbfea90c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:13:31 compute-0 nova_compute[248510]: 2025-12-13 09:13:31.271 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:13:31 compute-0 nova_compute[248510]: 2025-12-13 09:13:31.271 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:13:31 compute-0 nova_compute[248510]: 2025-12-13 09:13:31.521 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:31 compute-0 sudo[396071]: pam_unix(sudo:session): session closed for user root
Dec 13 09:13:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:13:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:13:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:13:31 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:13:31 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Dec 13 09:13:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:13:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3114344371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:13:31 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:13:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:13:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:13:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:13:31 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:13:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:13:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:13:32 compute-0 sudo[396147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:13:32 compute-0 sudo[396147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:13:32 compute-0 sudo[396147]: pam_unix(sudo:session): session closed for user root
Dec 13 09:13:32 compute-0 sudo[396172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:13:32 compute-0 sudo[396172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:13:32 compute-0 nova_compute[248510]: 2025-12-13 09:13:32.308 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:13:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500816159' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:13:32 compute-0 nova_compute[248510]: 2025-12-13 09:13:32.371 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.851s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:32 compute-0 nova_compute[248510]: 2025-12-13 09:13:32.383 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:13:32 compute-0 podman[396212]: 2025-12-13 09:13:32.475725445 +0000 UTC m=+0.095887656 container create a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 09:13:32 compute-0 podman[396212]: 2025-12-13 09:13:32.40698285 +0000 UTC m=+0.027145061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:13:32 compute-0 systemd[1]: Started libpod-conmon-a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54.scope.
Dec 13 09:13:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:13:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:13:32 compute-0 podman[396212]: 2025-12-13 09:13:32.737783767 +0000 UTC m=+0.357946028 container init a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_almeida, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:13:32 compute-0 podman[396212]: 2025-12-13 09:13:32.748858915 +0000 UTC m=+0.369021096 container start a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 09:13:32 compute-0 gifted_almeida[396228]: 167 167
Dec 13 09:13:32 compute-0 systemd[1]: libpod-a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54.scope: Deactivated successfully.
Dec 13 09:13:32 compute-0 ceph-mon[76537]: pgmap v3372: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:13:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:13:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:13:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:13:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:13:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:13:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:13:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3500816159' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:13:32 compute-0 podman[396212]: 2025-12-13 09:13:32.801111096 +0000 UTC m=+0.421273317 container attach a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:13:32 compute-0 podman[396212]: 2025-12-13 09:13:32.803491965 +0000 UTC m=+0.423654146 container died a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_almeida, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 09:13:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3373: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 69 op/s
Dec 13 09:13:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f3d708c4143ea8749c738a2eae980460585ea2052cea3cda4137f8caaa12548-merged.mount: Deactivated successfully.
Dec 13 09:13:33 compute-0 podman[396212]: 2025-12-13 09:13:33.151488073 +0000 UTC m=+0.771650244 container remove a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_almeida, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:13:33 compute-0 nova_compute[248510]: 2025-12-13 09:13:33.163 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:13:33 compute-0 systemd[1]: libpod-conmon-a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54.scope: Deactivated successfully.
Dec 13 09:13:33 compute-0 nova_compute[248510]: 2025-12-13 09:13:33.299 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:13:33 compute-0 nova_compute[248510]: 2025-12-13 09:13:33.300 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:33 compute-0 podman[396251]: 2025-12-13 09:13:33.389752019 +0000 UTC m=+0.065089094 container create 30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mestorf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 09:13:33 compute-0 podman[396251]: 2025-12-13 09:13:33.348617147 +0000 UTC m=+0.023954242 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:13:33 compute-0 systemd[1]: Started libpod-conmon-30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1.scope.
Dec 13 09:13:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6572ed8c130784f2a6016a5fe67a98c6968dbe0809a7ee378e6a63918e1fb50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6572ed8c130784f2a6016a5fe67a98c6968dbe0809a7ee378e6a63918e1fb50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6572ed8c130784f2a6016a5fe67a98c6968dbe0809a7ee378e6a63918e1fb50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6572ed8c130784f2a6016a5fe67a98c6968dbe0809a7ee378e6a63918e1fb50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6572ed8c130784f2a6016a5fe67a98c6968dbe0809a7ee378e6a63918e1fb50/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:33 compute-0 podman[396251]: 2025-12-13 09:13:33.571907117 +0000 UTC m=+0.247244212 container init 30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:13:33 compute-0 podman[396251]: 2025-12-13 09:13:33.581765495 +0000 UTC m=+0.257102570 container start 30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 09:13:33 compute-0 podman[396251]: 2025-12-13 09:13:33.608455854 +0000 UTC m=+0.283792969 container attach 30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mestorf, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 09:13:33 compute-0 ceph-mon[76537]: pgmap v3373: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 69 op/s
Dec 13 09:13:33 compute-0 nova_compute[248510]: 2025-12-13 09:13:33.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:34 compute-0 gracious_mestorf[396268]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:13:34 compute-0 gracious_mestorf[396268]: --> All data devices are unavailable
Dec 13 09:13:34 compute-0 systemd[1]: libpod-30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1.scope: Deactivated successfully.
Dec 13 09:13:34 compute-0 podman[396251]: 2025-12-13 09:13:34.08635 +0000 UTC m=+0.761687075 container died 30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 09:13:34 compute-0 ovn_controller[148476]: 2025-12-13T09:13:34Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:6c:85 10.100.0.11
Dec 13 09:13:34 compute-0 ovn_controller[148476]: 2025-12-13T09:13:34Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:6c:85 10.100.0.11
Dec 13 09:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6572ed8c130784f2a6016a5fe67a98c6968dbe0809a7ee378e6a63918e1fb50-merged.mount: Deactivated successfully.
Dec 13 09:13:34 compute-0 podman[396251]: 2025-12-13 09:13:34.591759915 +0000 UTC m=+1.267097030 container remove 30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 09:13:34 compute-0 systemd[1]: libpod-conmon-30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1.scope: Deactivated successfully.
Dec 13 09:13:34 compute-0 sudo[396172]: pam_unix(sudo:session): session closed for user root
Dec 13 09:13:34 compute-0 sudo[396302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:13:34 compute-0 sudo[396302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:13:34 compute-0 sudo[396302]: pam_unix(sudo:session): session closed for user root
Dec 13 09:13:34 compute-0 sudo[396327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:13:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3374: 321 pgs: 321 active+clean; 113 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 118 op/s
Dec 13 09:13:34 compute-0 sudo[396327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:13:35 compute-0 podman[396363]: 2025-12-13 09:13:35.14700083 +0000 UTC m=+0.024053334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:13:35 compute-0 podman[396363]: 2025-12-13 09:13:35.259684237 +0000 UTC m=+0.136736731 container create 33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:13:35 compute-0 nova_compute[248510]: 2025-12-13 09:13:35.301 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:13:35 compute-0 nova_compute[248510]: 2025-12-13 09:13:35.302 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:13:35 compute-0 nova_compute[248510]: 2025-12-13 09:13:35.302 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:13:35 compute-0 systemd[1]: Started libpod-conmon-33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c.scope.
Dec 13 09:13:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:13:35 compute-0 podman[396363]: 2025-12-13 09:13:35.804346177 +0000 UTC m=+0.681398671 container init 33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:13:35 compute-0 podman[396363]: 2025-12-13 09:13:35.814241806 +0000 UTC m=+0.691294280 container start 33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 09:13:35 compute-0 thirsty_lehmann[396380]: 167 167
Dec 13 09:13:35 compute-0 systemd[1]: libpod-33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c.scope: Deactivated successfully.
Dec 13 09:13:35 compute-0 podman[396363]: 2025-12-13 09:13:35.993963203 +0000 UTC m=+0.871015697 container attach 33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 09:13:35 compute-0 podman[396363]: 2025-12-13 09:13:35.994768243 +0000 UTC m=+0.871820717 container died 33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 09:13:36 compute-0 ceph-mon[76537]: pgmap v3374: 321 pgs: 321 active+clean; 113 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 118 op/s
Dec 13 09:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-03053423c19a82c35f949015a07e58b0d63e30b2610544e8bc341c28286565a9-merged.mount: Deactivated successfully.
Dec 13 09:13:36 compute-0 podman[396363]: 2025-12-13 09:13:36.333714544 +0000 UTC m=+1.210767018 container remove 33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 09:13:36 compute-0 systemd[1]: libpod-conmon-33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c.scope: Deactivated successfully.
Dec 13 09:13:36 compute-0 podman[396404]: 2025-12-13 09:13:36.616710552 +0000 UTC m=+0.103827555 container create f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shaw, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:13:36 compute-0 podman[396404]: 2025-12-13 09:13:36.558628325 +0000 UTC m=+0.045745358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:13:36 compute-0 systemd[1]: Started libpod-conmon-f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296.scope.
Dec 13 09:13:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feecaeddac6f9357948fd3cbbaf562ed900471dd01bddc588c38c30ad6c2fa87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feecaeddac6f9357948fd3cbbaf562ed900471dd01bddc588c38c30ad6c2fa87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feecaeddac6f9357948fd3cbbaf562ed900471dd01bddc588c38c30ad6c2fa87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feecaeddac6f9357948fd3cbbaf562ed900471dd01bddc588c38c30ad6c2fa87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:36 compute-0 nova_compute[248510]: 2025-12-13 09:13:36.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:13:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3375: 321 pgs: 321 active+clean; 113 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 242 KiB/s rd, 2.0 MiB/s wr, 48 op/s
Dec 13 09:13:36 compute-0 podman[396404]: 2025-12-13 09:13:36.842995027 +0000 UTC m=+0.330112020 container init f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shaw, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:13:36 compute-0 podman[396404]: 2025-12-13 09:13:36.85468354 +0000 UTC m=+0.341800533 container start f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:13:36 compute-0 podman[396404]: 2025-12-13 09:13:36.932773259 +0000 UTC m=+0.419890272 container attach f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 09:13:37 compute-0 zen_shaw[396420]: {
Dec 13 09:13:37 compute-0 zen_shaw[396420]:     "0": [
Dec 13 09:13:37 compute-0 zen_shaw[396420]:         {
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "devices": [
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "/dev/loop3"
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             ],
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_name": "ceph_lv0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_size": "21470642176",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "name": "ceph_lv0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "tags": {
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.cluster_name": "ceph",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.crush_device_class": "",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.encrypted": "0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.objectstore": "bluestore",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.osd_id": "0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.type": "block",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.vdo": "0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.with_tpm": "0"
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             },
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "type": "block",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "vg_name": "ceph_vg0"
Dec 13 09:13:37 compute-0 zen_shaw[396420]:         }
Dec 13 09:13:37 compute-0 zen_shaw[396420]:     ],
Dec 13 09:13:37 compute-0 zen_shaw[396420]:     "1": [
Dec 13 09:13:37 compute-0 zen_shaw[396420]:         {
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "devices": [
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "/dev/loop4"
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             ],
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_name": "ceph_lv1",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_size": "21470642176",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "name": "ceph_lv1",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "tags": {
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.cluster_name": "ceph",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.crush_device_class": "",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.encrypted": "0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.objectstore": "bluestore",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.osd_id": "1",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.type": "block",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.vdo": "0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.with_tpm": "0"
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             },
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "type": "block",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "vg_name": "ceph_vg1"
Dec 13 09:13:37 compute-0 zen_shaw[396420]:         }
Dec 13 09:13:37 compute-0 zen_shaw[396420]:     ],
Dec 13 09:13:37 compute-0 zen_shaw[396420]:     "2": [
Dec 13 09:13:37 compute-0 zen_shaw[396420]:         {
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "devices": [
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "/dev/loop5"
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             ],
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_name": "ceph_lv2",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_size": "21470642176",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "name": "ceph_lv2",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "tags": {
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.cluster_name": "ceph",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.crush_device_class": "",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.encrypted": "0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.objectstore": "bluestore",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.osd_id": "2",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.type": "block",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.vdo": "0",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:                 "ceph.with_tpm": "0"
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             },
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "type": "block",
Dec 13 09:13:37 compute-0 zen_shaw[396420]:             "vg_name": "ceph_vg2"
Dec 13 09:13:37 compute-0 zen_shaw[396420]:         }
Dec 13 09:13:37 compute-0 zen_shaw[396420]:     ]
Dec 13 09:13:37 compute-0 zen_shaw[396420]: }
Dec 13 09:13:37 compute-0 systemd[1]: libpod-f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296.scope: Deactivated successfully.
Dec 13 09:13:37 compute-0 podman[396404]: 2025-12-13 09:13:37.159959196 +0000 UTC m=+0.647076189 container died f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:13:37 compute-0 nova_compute[248510]: 2025-12-13 09:13:37.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-feecaeddac6f9357948fd3cbbaf562ed900471dd01bddc588c38c30ad6c2fa87-merged.mount: Deactivated successfully.
Dec 13 09:13:37 compute-0 podman[396404]: 2025-12-13 09:13:37.54605736 +0000 UTC m=+1.033174363 container remove f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shaw, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:13:37 compute-0 sudo[396327]: pam_unix(sudo:session): session closed for user root
Dec 13 09:13:37 compute-0 systemd[1]: libpod-conmon-f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296.scope: Deactivated successfully.
Dec 13 09:13:37 compute-0 sudo[396442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:13:37 compute-0 sudo[396442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:13:37 compute-0 sudo[396442]: pam_unix(sudo:session): session closed for user root
Dec 13 09:13:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:13:37 compute-0 sudo[396467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:13:37 compute-0 sudo[396467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:13:38 compute-0 podman[396504]: 2025-12-13 09:13:38.0835609 +0000 UTC m=+0.105052095 container create 522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:13:38 compute-0 podman[396504]: 2025-12-13 09:13:38.004406006 +0000 UTC m=+0.025897231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:13:38 compute-0 ceph-mon[76537]: pgmap v3375: 321 pgs: 321 active+clean; 113 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 242 KiB/s rd, 2.0 MiB/s wr, 48 op/s
Dec 13 09:13:38 compute-0 systemd[1]: Started libpod-conmon-522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1.scope.
Dec 13 09:13:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:13:38 compute-0 podman[396504]: 2025-12-13 09:13:38.20756941 +0000 UTC m=+0.229060635 container init 522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sutherland, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 09:13:38 compute-0 podman[396504]: 2025-12-13 09:13:38.214155145 +0000 UTC m=+0.235646340 container start 522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:13:38 compute-0 elastic_sutherland[396521]: 167 167
Dec 13 09:13:38 compute-0 systemd[1]: libpod-522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1.scope: Deactivated successfully.
Dec 13 09:13:38 compute-0 podman[396504]: 2025-12-13 09:13:38.217954891 +0000 UTC m=+0.239446106 container attach 522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sutherland, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:13:38 compute-0 podman[396504]: 2025-12-13 09:13:38.218370411 +0000 UTC m=+0.239861606 container died 522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 09:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd8371c3e500d2619489dd22ba125346f1ec4429767e4d34b6ac371465194dbc-merged.mount: Deactivated successfully.
Dec 13 09:13:38 compute-0 podman[396504]: 2025-12-13 09:13:38.258819106 +0000 UTC m=+0.280310301 container remove 522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sutherland, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:13:38 compute-0 systemd[1]: libpod-conmon-522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1.scope: Deactivated successfully.
Dec 13 09:13:38 compute-0 podman[396543]: 2025-12-13 09:13:38.474875634 +0000 UTC m=+0.081431513 container create 0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 09:13:38 compute-0 podman[396543]: 2025-12-13 09:13:38.419126176 +0000 UTC m=+0.025681905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:13:38 compute-0 systemd[1]: Started libpod-conmon-0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767.scope.
Dec 13 09:13:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a663a84d3dd12222edecee6819ca0c463a0cf74e6705ad8b15ccc402e3a48542/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a663a84d3dd12222edecee6819ca0c463a0cf74e6705ad8b15ccc402e3a48542/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a663a84d3dd12222edecee6819ca0c463a0cf74e6705ad8b15ccc402e3a48542/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a663a84d3dd12222edecee6819ca0c463a0cf74e6705ad8b15ccc402e3a48542/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:13:38 compute-0 podman[396543]: 2025-12-13 09:13:38.612816474 +0000 UTC m=+0.219372153 container init 0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 09:13:38 compute-0 podman[396543]: 2025-12-13 09:13:38.626370754 +0000 UTC m=+0.232926443 container start 0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:13:38 compute-0 podman[396543]: 2025-12-13 09:13:38.637530564 +0000 UTC m=+0.244086213 container attach 0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mclean, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 09:13:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3376: 321 pgs: 321 active+clean; 117 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Dec 13 09:13:38 compute-0 nova_compute[248510]: 2025-12-13 09:13:38.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:39 compute-0 lvm[396638]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:13:39 compute-0 lvm[396639]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:13:39 compute-0 lvm[396638]: VG ceph_vg0 finished
Dec 13 09:13:39 compute-0 lvm[396639]: VG ceph_vg1 finished
Dec 13 09:13:39 compute-0 lvm[396641]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:13:39 compute-0 lvm[396641]: VG ceph_vg2 finished
Dec 13 09:13:39 compute-0 stoic_mclean[396560]: {}
Dec 13 09:13:39 compute-0 systemd[1]: libpod-0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767.scope: Deactivated successfully.
Dec 13 09:13:39 compute-0 podman[396543]: 2025-12-13 09:13:39.493613315 +0000 UTC m=+1.100168984 container died 0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 09:13:39 compute-0 systemd[1]: libpod-0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767.scope: Consumed 1.461s CPU time.
Dec 13 09:13:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a663a84d3dd12222edecee6819ca0c463a0cf74e6705ad8b15ccc402e3a48542-merged.mount: Deactivated successfully.
Dec 13 09:13:39 compute-0 podman[396543]: 2025-12-13 09:13:39.54166342 +0000 UTC m=+1.148219069 container remove 0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:13:39 compute-0 systemd[1]: libpod-conmon-0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767.scope: Deactivated successfully.
Dec 13 09:13:39 compute-0 sudo[396467]: pam_unix(sudo:session): session closed for user root
Dec 13 09:13:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:13:39 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:13:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:13:39 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:13:39 compute-0 sudo[396656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:13:39 compute-0 sudo[396656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:13:39 compute-0 sudo[396656]: pam_unix(sudo:session): session closed for user root
Dec 13 09:13:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:13:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:13:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:13:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:13:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:13:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:13:40 compute-0 ceph-mon[76537]: pgmap v3376: 321 pgs: 321 active+clean; 117 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Dec 13 09:13:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:13:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:13:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3377: 321 pgs: 321 active+clean; 121 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:13:41 compute-0 ceph-mon[76537]: pgmap v3377: 321 pgs: 321 active+clean; 121 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:13:41 compute-0 nova_compute[248510]: 2025-12-13 09:13:41.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:13:42 compute-0 nova_compute[248510]: 2025-12-13 09:13:42.319 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:13:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3378: 321 pgs: 321 active+clean; 121 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:13:43 compute-0 ceph-mon[76537]: pgmap v3378: 321 pgs: 321 active+clean; 121 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:13:43 compute-0 nova_compute[248510]: 2025-12-13 09:13:43.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:44 compute-0 nova_compute[248510]: 2025-12-13 09:13:44.401 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:44 compute-0 nova_compute[248510]: 2025-12-13 09:13:44.401 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:44 compute-0 nova_compute[248510]: 2025-12-13 09:13:44.424 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:13:44 compute-0 nova_compute[248510]: 2025-12-13 09:13:44.511 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:44 compute-0 nova_compute[248510]: 2025-12-13 09:13:44.511 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:44 compute-0 nova_compute[248510]: 2025-12-13 09:13:44.524 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:13:44 compute-0 nova_compute[248510]: 2025-12-13 09:13:44.524 248514 INFO nova.compute.claims [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:13:44 compute-0 nova_compute[248510]: 2025-12-13 09:13:44.677 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3379: 321 pgs: 321 active+clean; 121 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:13:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:13:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788640244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.241 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.251 248514 DEBUG nova.compute.provider_tree [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.411 248514 DEBUG nova.scheduler.client.report [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.437 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.438 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.503 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.503 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.536 248514 INFO nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.555 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.881 248514 DEBUG nova.policy [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.993 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.995 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:13:45 compute-0 nova_compute[248510]: 2025-12-13 09:13:45.996 248514 INFO nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Creating image(s)
Dec 13 09:13:46 compute-0 nova_compute[248510]: 2025-12-13 09:13:46.044 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:46 compute-0 nova_compute[248510]: 2025-12-13 09:13:46.076 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:46 compute-0 ceph-mon[76537]: pgmap v3379: 321 pgs: 321 active+clean; 121 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:13:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1788640244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:13:46 compute-0 nova_compute[248510]: 2025-12-13 09:13:46.285 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:46 compute-0 nova_compute[248510]: 2025-12-13 09:13:46.289 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:46 compute-0 nova_compute[248510]: 2025-12-13 09:13:46.375 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:46 compute-0 nova_compute[248510]: 2025-12-13 09:13:46.376 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:46 compute-0 nova_compute[248510]: 2025-12-13 09:13:46.377 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:46 compute-0 nova_compute[248510]: 2025-12-13 09:13:46.378 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:46 compute-0 nova_compute[248510]: 2025-12-13 09:13:46.412 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:46 compute-0 nova_compute[248510]: 2025-12-13 09:13:46.418 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3380: 321 pgs: 321 active+clean; 121 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 103 KiB/s wr, 15 op/s
Dec 13 09:13:47 compute-0 nova_compute[248510]: 2025-12-13 09:13:47.075 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:47 compute-0 nova_compute[248510]: 2025-12-13 09:13:47.152 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:13:47 compute-0 nova_compute[248510]: 2025-12-13 09:13:47.239 248514 DEBUG nova.objects.instance [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:13:47 compute-0 nova_compute[248510]: 2025-12-13 09:13:47.315 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:13:47 compute-0 nova_compute[248510]: 2025-12-13 09:13:47.315 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Ensure instance console log exists: /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:13:47 compute-0 nova_compute[248510]: 2025-12-13 09:13:47.316 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:47 compute-0 nova_compute[248510]: 2025-12-13 09:13:47.316 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:47 compute-0 nova_compute[248510]: 2025-12-13 09:13:47.316 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:47 compute-0 nova_compute[248510]: 2025-12-13 09:13:47.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:13:48 compute-0 nova_compute[248510]: 2025-12-13 09:13:48.010 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Successfully created port: fc1f4300-f39f-49b1-8d88-be328fb018fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:13:48 compute-0 ceph-mon[76537]: pgmap v3380: 321 pgs: 321 active+clean; 121 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 103 KiB/s wr, 15 op/s
Dec 13 09:13:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3381: 321 pgs: 321 active+clean; 132 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 481 KiB/s wr, 27 op/s
Dec 13 09:13:48 compute-0 nova_compute[248510]: 2025-12-13 09:13:48.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:48 compute-0 nova_compute[248510]: 2025-12-13 09:13:48.934 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Successfully created port: e86a788a-8070-426b-8b30-20117bfe712f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:13:49 compute-0 nova_compute[248510]: 2025-12-13 09:13:49.664 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Successfully updated port: fc1f4300-f39f-49b1-8d88-be328fb018fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:13:49 compute-0 nova_compute[248510]: 2025-12-13 09:13:49.757 248514 DEBUG nova.compute.manager [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-changed-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:49 compute-0 nova_compute[248510]: 2025-12-13 09:13:49.757 248514 DEBUG nova.compute.manager [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing instance network info cache due to event network-changed-fc1f4300-f39f-49b1-8d88-be328fb018fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:13:49 compute-0 nova_compute[248510]: 2025-12-13 09:13:49.758 248514 DEBUG oslo_concurrency.lockutils [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:13:49 compute-0 nova_compute[248510]: 2025-12-13 09:13:49.758 248514 DEBUG oslo_concurrency.lockutils [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:13:49 compute-0 nova_compute[248510]: 2025-12-13 09:13:49.759 248514 DEBUG nova.network.neutron [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing network info cache for port fc1f4300-f39f-49b1-8d88-be328fb018fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:13:49 compute-0 nova_compute[248510]: 2025-12-13 09:13:49.949 248514 DEBUG nova.network.neutron [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:13:50 compute-0 nova_compute[248510]: 2025-12-13 09:13:50.277 248514 DEBUG nova.network.neutron [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:13:50 compute-0 ceph-mon[76537]: pgmap v3381: 321 pgs: 321 active+clean; 132 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 481 KiB/s wr, 27 op/s
Dec 13 09:13:50 compute-0 nova_compute[248510]: 2025-12-13 09:13:50.303 248514 DEBUG oslo_concurrency.lockutils [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:13:50 compute-0 nova_compute[248510]: 2025-12-13 09:13:50.501 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Successfully updated port: e86a788a-8070-426b-8b30-20117bfe712f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:13:50 compute-0 nova_compute[248510]: 2025-12-13 09:13:50.532 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:13:50 compute-0 nova_compute[248510]: 2025-12-13 09:13:50.532 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:13:50 compute-0 nova_compute[248510]: 2025-12-13 09:13:50.532 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:13:50 compute-0 nova_compute[248510]: 2025-12-13 09:13:50.679 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:13:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3382: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 1.9 MiB/s wr, 40 op/s
Dec 13 09:13:51 compute-0 ceph-mon[76537]: pgmap v3382: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 1.9 MiB/s wr, 40 op/s
Dec 13 09:13:51 compute-0 nova_compute[248510]: 2025-12-13 09:13:51.885 248514 DEBUG nova.compute.manager [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-changed-e86a788a-8070-426b-8b30-20117bfe712f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:51 compute-0 nova_compute[248510]: 2025-12-13 09:13:51.886 248514 DEBUG nova.compute.manager [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing instance network info cache due to event network-changed-e86a788a-8070-426b-8b30-20117bfe712f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:13:51 compute-0 nova_compute[248510]: 2025-12-13 09:13:51.886 248514 DEBUG oslo_concurrency.lockutils [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:13:52 compute-0 nova_compute[248510]: 2025-12-13 09:13:52.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:13:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3383: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.484 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.513 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.513 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance network_info: |[{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.514 248514 DEBUG oslo_concurrency.lockutils [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.514 248514 DEBUG nova.network.neutron [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing network info cache for port e86a788a-8070-426b-8b30-20117bfe712f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.519 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Start _get_guest_xml network_info=[{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.526 248514 WARNING nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.532 248514 DEBUG nova.virt.libvirt.host [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.533 248514 DEBUG nova.virt.libvirt.host [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.544 248514 DEBUG nova.virt.libvirt.host [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.545 248514 DEBUG nova.virt.libvirt.host [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.545 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.546 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.546 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.546 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.546 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.547 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.547 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.547 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.547 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.548 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.548 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.548 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.551 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:53 compute-0 nova_compute[248510]: 2025-12-13 09:13:53.901 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:53 compute-0 ceph-mon[76537]: pgmap v3383: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:13:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150298232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.098 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.121 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.126 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:13:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1527906899' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.693 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.697 248514 DEBUG nova.virt.libvirt.vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1859978450',display_name='tempest-TestGettingAddress-server-1859978450',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1859978450',id=144,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ju51941',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:45Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.699 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.700 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.702 248514 DEBUG nova.virt.libvirt.vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1859978450',display_name='tempest-TestGettingAddress-server-1859978450',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1859978450',id=144,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ju51941',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:45Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.703 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.704 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.706 248514 DEBUG nova.objects.instance [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.735 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:13:54 compute-0 nova_compute[248510]:   <uuid>ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4</uuid>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   <name>instance-00000090</name>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-1859978450</nova:name>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:13:53</nova:creationTime>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <nova:port uuid="fc1f4300-f39f-49b1-8d88-be328fb018fe">
Dec 13 09:13:54 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <nova:port uuid="e86a788a-8070-426b-8b30-20117bfe712f">
Dec 13 09:13:54 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feb1:dc7" ipVersion="6"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feb1:dc7" ipVersion="6"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <system>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <entry name="serial">ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4</entry>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <entry name="uuid">ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4</entry>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     </system>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   <os>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   </os>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   <features>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   </features>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk">
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       </source>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk.config">
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       </source>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:13:54 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:dc:f2:dc"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <target dev="tapfc1f4300-f3"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:b1:0d:c7"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <target dev="tape86a788a-80"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/console.log" append="off"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <video>
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     </video>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:13:54 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:13:54 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:13:54 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:13:54 compute-0 nova_compute[248510]: </domain>
Dec 13 09:13:54 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.737 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Preparing to wait for external event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.738 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.738 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.738 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.739 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Preparing to wait for external event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.739 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.739 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.739 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.741 248514 DEBUG nova.virt.libvirt.vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1859978450',display_name='tempest-TestGettingAddress-server-1859978450',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1859978450',id=144,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ju51941',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:45Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.741 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.742 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.743 248514 DEBUG os_vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.744 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.745 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.750 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc1f4300-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.750 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc1f4300-f3, col_values=(('external_ids', {'iface-id': 'fc1f4300-f39f-49b1-8d88-be328fb018fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:f2:dc', 'vm-uuid': 'ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:54 compute-0 NetworkManager[50376]: <info>  [1765617234.7535] manager: (tapfc1f4300-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/636)
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.753 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.761 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.763 248514 INFO os_vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3')
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.763 248514 DEBUG nova.virt.libvirt.vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1859978450',display_name='tempest-TestGettingAddress-server-1859978450',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1859978450',id=144,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ju51941',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:45Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.764 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.765 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.765 248514 DEBUG os_vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.766 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.766 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.766 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.770 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape86a788a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.771 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape86a788a-80, col_values=(('external_ids', {'iface-id': 'e86a788a-8070-426b-8b30-20117bfe712f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:0d:c7', 'vm-uuid': 'ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:54 compute-0 NetworkManager[50376]: <info>  [1765617234.7732] manager: (tape86a788a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.774 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.778 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.779 248514 INFO os_vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80')
Dec 13 09:13:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3384: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.853 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.853 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.854 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:dc:f2:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.854 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:b1:0d:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.855 248514 INFO nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Using config drive
Dec 13 09:13:54 compute-0 nova_compute[248510]: 2025-12-13 09:13:54.883 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1150298232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:13:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1527906899' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:13:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:55.450 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:55.451 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:55.452 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:56 compute-0 ceph-mon[76537]: pgmap v3384: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.062 248514 INFO nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Creating config drive at /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/disk.config
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.067 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyrj_42al execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.220 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyrj_42al" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.250 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.253 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/disk.config ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.398 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/disk.config ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.399 248514 INFO nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Deleting local config drive /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/disk.config because it was imported into RBD.
Dec 13 09:13:56 compute-0 NetworkManager[50376]: <info>  [1765617236.4655] manager: (tapfc1f4300-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/638)
Dec 13 09:13:56 compute-0 kernel: tapfc1f4300-f3: entered promiscuous mode
Dec 13 09:13:56 compute-0 ovn_controller[148476]: 2025-12-13T09:13:56Z|01530|binding|INFO|Claiming lport fc1f4300-f39f-49b1-8d88-be328fb018fe for this chassis.
Dec 13 09:13:56 compute-0 ovn_controller[148476]: 2025-12-13T09:13:56Z|01531|binding|INFO|fc1f4300-f39f-49b1-8d88-be328fb018fe: Claiming fa:16:3e:dc:f2:dc 10.100.0.9
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.516 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.526 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:f2:dc 10.100.0.9'], port_security=['fa:16:3e:dc:f2:dc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9e8c352-1166-4841-b5aa-79bfa0acca5d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=fc1f4300-f39f-49b1-8d88-be328fb018fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:13:56 compute-0 NetworkManager[50376]: <info>  [1765617236.5280] manager: (tape86a788a-80): new Tun device (/org/freedesktop/NetworkManager/Devices/639)
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.527 158419 INFO neutron.agent.ovn.metadata.agent [-] Port fc1f4300-f39f-49b1-8d88-be328fb018fe in datapath ef1a3009-9f3e-4c4a-9ee4-04bec3653434 bound to our chassis
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.530 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef1a3009-9f3e-4c4a-9ee4-04bec3653434
Dec 13 09:13:56 compute-0 kernel: tape86a788a-80: entered promiscuous mode
Dec 13 09:13:56 compute-0 ovn_controller[148476]: 2025-12-13T09:13:56Z|01532|binding|INFO|Setting lport fc1f4300-f39f-49b1-8d88-be328fb018fe ovn-installed in OVS
Dec 13 09:13:56 compute-0 ovn_controller[148476]: 2025-12-13T09:13:56Z|01533|binding|INFO|Setting lport fc1f4300-f39f-49b1-8d88-be328fb018fe up in Southbound
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:56 compute-0 ovn_controller[148476]: 2025-12-13T09:13:56Z|01534|if_status|INFO|Dropped 5 log messages in last 143 seconds (most recently, 143 seconds ago) due to excessive rate
Dec 13 09:13:56 compute-0 ovn_controller[148476]: 2025-12-13T09:13:56Z|01535|if_status|INFO|Not updating pb chassis for e86a788a-8070-426b-8b30-20117bfe712f now as sb is readonly
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:56 compute-0 ovn_controller[148476]: 2025-12-13T09:13:56Z|01536|binding|INFO|Claiming lport e86a788a-8070-426b-8b30-20117bfe712f for this chassis.
Dec 13 09:13:56 compute-0 ovn_controller[148476]: 2025-12-13T09:13:56Z|01537|binding|INFO|e86a788a-8070-426b-8b30-20117bfe712f: Claiming fa:16:3e:b1:0d:c7 2001:db8:0:1:f816:3eff:feb1:dc7 2001:db8::f816:3eff:feb1:dc7
Dec 13 09:13:56 compute-0 systemd-udevd[397010]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:13:56 compute-0 systemd-udevd[397011]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.555 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:0d:c7 2001:db8:0:1:f816:3eff:feb1:dc7 2001:db8::f816:3eff:feb1:dc7'], port_security=['fa:16:3e:b1:0d:c7 2001:db8:0:1:f816:3eff:feb1:dc7 2001:db8::f816:3eff:feb1:dc7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb1:dc7/64 2001:db8::f816:3eff:feb1:dc7/64', 'neutron:device_id': 'ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d42d98d-b09c-47fa-98dd-69f99f24cca5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=e86a788a-8070-426b-8b30-20117bfe712f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.554 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed613a1-495a-4b02-8979-d57e59c41d54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:56 compute-0 ovn_controller[148476]: 2025-12-13T09:13:56Z|01538|binding|INFO|Setting lport e86a788a-8070-426b-8b30-20117bfe712f ovn-installed in OVS
Dec 13 09:13:56 compute-0 ovn_controller[148476]: 2025-12-13T09:13:56Z|01539|binding|INFO|Setting lport e86a788a-8070-426b-8b30-20117bfe712f up in Southbound
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.560 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:56 compute-0 NetworkManager[50376]: <info>  [1765617236.5650] device (tape86a788a-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:13:56 compute-0 NetworkManager[50376]: <info>  [1765617236.5662] device (tape86a788a-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:13:56 compute-0 NetworkManager[50376]: <info>  [1765617236.5668] device (tapfc1f4300-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:13:56 compute-0 NetworkManager[50376]: <info>  [1765617236.5674] device (tapfc1f4300-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:13:56 compute-0 systemd-machined[210538]: New machine qemu-175-instance-00000090.
Dec 13 09:13:56 compute-0 systemd[1]: Started Virtual Machine qemu-175-instance-00000090.
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.597 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a8447734-c857-48f1-ac19-0fa2267a9354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.601 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cb407d11-f1b7-48ef-95aa-c4e78bca0718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.642 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[538dfb14-6550-4573-bd43-7a6dcab91569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.659 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[21f530ac-f11a-4ac8-b09b-fe2559d57f5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef1a3009-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:51:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 436], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977600, 'reachable_time': 27921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397025, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4dca82-215d-4e92-a4c5-410c707a0716]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef1a3009-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977613, 'tstamp': 977613}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397027, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef1a3009-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977618, 'tstamp': 977618}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397027, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.688 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef1a3009-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.690 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.691 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.694 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef1a3009-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.694 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.695 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef1a3009-90, col_values=(('external_ids', {'iface-id': '365ef2b2-09ce-4255-947a-81f3896d22ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.695 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.697 158419 INFO neutron.agent.ovn.metadata.agent [-] Port e86a788a-8070-426b-8b30-20117bfe712f in datapath ff80f4de-0e76-47f2-b06e-6d3900b63130 unbound from our chassis
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.699 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff80f4de-0e76-47f2-b06e-6d3900b63130
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.721 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a504d476-558f-4043-a845-dea8a4348612]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.756 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8178bfa8-9ce8-488c-bbca-3aabbebc8ff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.760 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[097636bc-f784-4a2f-a52b-115716199f14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.796 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d61c512a-911f-425f-92f6-885a5c86d86b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.819 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3e7fe4-285f-4bef-ac2a-044e996fd23e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff80f4de-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:6b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977703, 'reachable_time': 20848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397049, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3385: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.836 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1bc63ea-271f-40a7-985b-2575179b9d0d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapff80f4de-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977720, 'tstamp': 977720}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397060, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.837 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff80f4de-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.840 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff80f4de-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.840 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.841 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff80f4de-00, col_values=(('external_ids', {'iface-id': 'a155cc1b-19d3-4430-9d83-b19e30ef1a95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:13:56 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.841 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.992 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617236.9916697, ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:13:56 compute-0 nova_compute[248510]: 2025-12-13 09:13:56.992 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] VM Started (Lifecycle Event)
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.014 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.019 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617236.9917843, ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.019 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] VM Paused (Lifecycle Event)
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.039 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.043 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.062 248514 DEBUG nova.compute.manager [req-289fbb1f-79c3-4b5f-9822-e4e35364cf7f req-2a1448a4-e83c-44fc-88d1-ed515ee8a8e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.064 248514 DEBUG oslo_concurrency.lockutils [req-289fbb1f-79c3-4b5f-9822-e4e35364cf7f req-2a1448a4-e83c-44fc-88d1-ed515ee8a8e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.065 248514 DEBUG oslo_concurrency.lockutils [req-289fbb1f-79c3-4b5f-9822-e4e35364cf7f req-2a1448a4-e83c-44fc-88d1-ed515ee8a8e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.065 248514 DEBUG oslo_concurrency.lockutils [req-289fbb1f-79c3-4b5f-9822-e4e35364cf7f req-2a1448a4-e83c-44fc-88d1-ed515ee8a8e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.066 248514 DEBUG nova.compute.manager [req-289fbb1f-79c3-4b5f-9822-e4e35364cf7f req-2a1448a4-e83c-44fc-88d1-ed515ee8a8e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Processing event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.069 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.196 248514 DEBUG nova.network.neutron [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updated VIF entry in instance network info cache for port e86a788a-8070-426b-8b30-20117bfe712f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.197 248514 DEBUG nova.network.neutron [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:13:57 compute-0 nova_compute[248510]: 2025-12-13 09:13:57.218 248514 DEBUG oslo_concurrency.lockutils [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:13:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:13:58 compute-0 ceph-mon[76537]: pgmap v3385: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3386: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:13:58 compute-0 nova_compute[248510]: 2025-12-13 09:13:58.904 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.164 248514 DEBUG nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.165 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.165 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.166 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.166 248514 DEBUG nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] No event matching network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe in dict_keys([('network-vif-plugged', 'e86a788a-8070-426b-8b30-20117bfe712f')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.166 248514 WARNING nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received unexpected event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe for instance with vm_state building and task_state spawning.
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.167 248514 DEBUG nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.167 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.168 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.168 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.169 248514 DEBUG nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Processing event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.169 248514 DEBUG nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.169 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.170 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.170 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.171 248514 DEBUG nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] No waiting events found dispatching network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.172 248514 WARNING nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received unexpected event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f for instance with vm_state building and task_state spawning.
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.174 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.179 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617239.1788175, ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.180 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] VM Resumed (Lifecycle Event)
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.185 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.190 248514 INFO nova.virt.libvirt.driver [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance spawned successfully.
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.191 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.209 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.217 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.223 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.224 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.225 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.225 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.225 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.226 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.238 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.291 248514 INFO nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Took 13.30 seconds to spawn the instance on the hypervisor.
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.292 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.357 248514 INFO nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Took 14.88 seconds to build instance.
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.378 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:13:59 compute-0 nova_compute[248510]: 2025-12-13 09:13:59.773 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:00 compute-0 ceph-mon[76537]: pgmap v3386: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:14:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3387: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 MiB/s wr, 30 op/s
Dec 13 09:14:01 compute-0 podman[397081]: 2025-12-13 09:14:01.002806617 +0000 UTC m=+0.070923300 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:14:01 compute-0 podman[397080]: 2025-12-13 09:14:01.012257864 +0000 UTC m=+0.082390277 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Dec 13 09:14:01 compute-0 podman[397079]: 2025-12-13 09:14:01.069236513 +0000 UTC m=+0.145681815 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:14:01 compute-0 ceph-mon[76537]: pgmap v3387: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 MiB/s wr, 30 op/s
Dec 13 09:14:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:14:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3388: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 13 KiB/s wr, 14 op/s
Dec 13 09:14:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Dec 13 09:14:03 compute-0 nova_compute[248510]: 2025-12-13 09:14:03.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:03 compute-0 ceph-mon[76537]: pgmap v3388: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 13 KiB/s wr, 14 op/s
Dec 13 09:14:04 compute-0 nova_compute[248510]: 2025-12-13 09:14:04.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3389: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 104 op/s
Dec 13 09:14:05 compute-0 nova_compute[248510]: 2025-12-13 09:14:05.472 248514 DEBUG nova.compute.manager [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-changed-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:05 compute-0 nova_compute[248510]: 2025-12-13 09:14:05.473 248514 DEBUG nova.compute.manager [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing instance network info cache due to event network-changed-fc1f4300-f39f-49b1-8d88-be328fb018fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:14:05 compute-0 nova_compute[248510]: 2025-12-13 09:14:05.473 248514 DEBUG oslo_concurrency.lockutils [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:14:05 compute-0 nova_compute[248510]: 2025-12-13 09:14:05.473 248514 DEBUG oslo_concurrency.lockutils [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:14:05 compute-0 nova_compute[248510]: 2025-12-13 09:14:05.474 248514 DEBUG nova.network.neutron [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing network info cache for port fc1f4300-f39f-49b1-8d88-be328fb018fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:14:05 compute-0 ceph-mon[76537]: pgmap v3389: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 104 op/s
Dec 13 09:14:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3390: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 104 op/s
Dec 13 09:14:07 compute-0 nova_compute[248510]: 2025-12-13 09:14:07.039 248514 DEBUG nova.network.neutron [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updated VIF entry in instance network info cache for port fc1f4300-f39f-49b1-8d88-be328fb018fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:14:07 compute-0 nova_compute[248510]: 2025-12-13 09:14:07.040 248514 DEBUG nova.network.neutron [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:14:07 compute-0 nova_compute[248510]: 2025-12-13 09:14:07.067 248514 DEBUG oslo_concurrency.lockutils [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:14:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:14:08 compute-0 ceph-mon[76537]: pgmap v3390: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 104 op/s
Dec 13 09:14:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3391: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 120 op/s
Dec 13 09:14:08 compute-0 nova_compute[248510]: 2025-12-13 09:14:08.909 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:14:09
Dec 13 09:14:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:14:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:14:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta']
Dec 13 09:14:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:14:09 compute-0 nova_compute[248510]: 2025-12-13 09:14:09.779 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:10 compute-0 ceph-mon[76537]: pgmap v3391: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 120 op/s
Dec 13 09:14:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:14:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:14:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:14:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:14:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:14:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:14:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3392: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 134 op/s
Dec 13 09:14:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:14:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:14:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:14:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:14:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:14:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:14:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:14:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:14:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:14:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:14:11 compute-0 ovn_controller[148476]: 2025-12-13T09:14:11Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:f2:dc 10.100.0.9
Dec 13 09:14:11 compute-0 ovn_controller[148476]: 2025-12-13T09:14:11Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:f2:dc 10.100.0.9
Dec 13 09:14:12 compute-0 ceph-mon[76537]: pgmap v3392: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 134 op/s
Dec 13 09:14:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:14:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3393: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 KiB/s wr, 120 op/s
Dec 13 09:14:13 compute-0 nova_compute[248510]: 2025-12-13 09:14:13.912 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:14 compute-0 ceph-mon[76537]: pgmap v3393: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 KiB/s wr, 120 op/s
Dec 13 09:14:14 compute-0 nova_compute[248510]: 2025-12-13 09:14:14.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3394: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Dec 13 09:14:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:14:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2577487572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:14:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:14:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2577487572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:14:16 compute-0 ceph-mon[76537]: pgmap v3394: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Dec 13 09:14:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2577487572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:14:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2577487572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:14:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3395: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Dec 13 09:14:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:14:18 compute-0 ceph-mon[76537]: pgmap v3395: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Dec 13 09:14:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3396: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Dec 13 09:14:18 compute-0 nova_compute[248510]: 2025-12-13 09:14:18.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:19 compute-0 nova_compute[248510]: 2025-12-13 09:14:19.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:14:19 compute-0 nova_compute[248510]: 2025-12-13 09:14:19.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:20 compute-0 ceph-mon[76537]: pgmap v3396: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Dec 13 09:14:20 compute-0 nova_compute[248510]: 2025-12-13 09:14:20.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:14:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3397: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015327102653432375 of space, bias 1.0, pg target 0.45981307960297124 quantized to 32 (current 32)
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697272279419812 of space, bias 1.0, pg target 0.20091816838259435 quantized to 32 (current 32)
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:14:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:14:22 compute-0 ceph-mon[76537]: pgmap v3397: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 13 09:14:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:14:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3398: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:14:23 compute-0 sshd-session[397146]: Invalid user solv from 193.32.162.146 port 42782
Dec 13 09:14:23 compute-0 sshd-session[397146]: Connection closed by invalid user solv 193.32.162.146 port 42782 [preauth]
Dec 13 09:14:23 compute-0 nova_compute[248510]: 2025-12-13 09:14:23.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:24 compute-0 nova_compute[248510]: 2025-12-13 09:14:24.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3399: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 09:14:25 compute-0 ceph-mon[76537]: pgmap v3398: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:14:25 compute-0 nova_compute[248510]: 2025-12-13 09:14:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:14:25 compute-0 nova_compute[248510]: 2025-12-13 09:14:25.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:14:25 compute-0 nova_compute[248510]: 2025-12-13 09:14:25.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:14:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:25.856 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:14:25 compute-0 nova_compute[248510]: 2025-12-13 09:14:25.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:25.858 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:14:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:25.859 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:26 compute-0 ceph-mon[76537]: pgmap v3399: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.198 248514 DEBUG nova.compute.manager [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-changed-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.199 248514 DEBUG nova.compute.manager [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing instance network info cache due to event network-changed-fc1f4300-f39f-49b1-8d88-be328fb018fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.199 248514 DEBUG oslo_concurrency.lockutils [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.199 248514 DEBUG oslo_concurrency.lockutils [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.200 248514 DEBUG nova.network.neutron [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing network info cache for port fc1f4300-f39f-49b1-8d88-be328fb018fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.297 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.298 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.299 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.299 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.299 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.300 248514 INFO nova.compute.manager [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Terminating instance
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.301 248514 DEBUG nova.compute.manager [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:14:26 compute-0 kernel: tapfc1f4300-f3 (unregistering): left promiscuous mode
Dec 13 09:14:26 compute-0 NetworkManager[50376]: <info>  [1765617266.3544] device (tapfc1f4300-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.354 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.355 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.355 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.356 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 707e75d9-f6d2-413d-a727-c3ecbfea90c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 ovn_controller[148476]: 2025-12-13T09:14:26Z|01540|binding|INFO|Releasing lport fc1f4300-f39f-49b1-8d88-be328fb018fe from this chassis (sb_readonly=0)
Dec 13 09:14:26 compute-0 ovn_controller[148476]: 2025-12-13T09:14:26Z|01541|binding|INFO|Setting lport fc1f4300-f39f-49b1-8d88-be328fb018fe down in Southbound
Dec 13 09:14:26 compute-0 ovn_controller[148476]: 2025-12-13T09:14:26Z|01542|binding|INFO|Removing iface tapfc1f4300-f3 ovn-installed in OVS
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.375 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:f2:dc 10.100.0.9'], port_security=['fa:16:3e:dc:f2:dc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9e8c352-1166-4841-b5aa-79bfa0acca5d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=fc1f4300-f39f-49b1-8d88-be328fb018fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.377 158419 INFO neutron.agent.ovn.metadata.agent [-] Port fc1f4300-f39f-49b1-8d88-be328fb018fe in datapath ef1a3009-9f3e-4c4a-9ee4-04bec3653434 unbound from our chassis
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.380 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef1a3009-9f3e-4c4a-9ee4-04bec3653434
Dec 13 09:14:26 compute-0 kernel: tape86a788a-80 (unregistering): left promiscuous mode
Dec 13 09:14:26 compute-0 NetworkManager[50376]: <info>  [1765617266.3873] device (tape86a788a-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.389 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.396 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c45843-05f3-43c8-9879-9c4d6e4a1c67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:26 compute-0 ovn_controller[148476]: 2025-12-13T09:14:26Z|01543|binding|INFO|Releasing lport e86a788a-8070-426b-8b30-20117bfe712f from this chassis (sb_readonly=0)
Dec 13 09:14:26 compute-0 ovn_controller[148476]: 2025-12-13T09:14:26Z|01544|binding|INFO|Setting lport e86a788a-8070-426b-8b30-20117bfe712f down in Southbound
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.400 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 ovn_controller[148476]: 2025-12-13T09:14:26Z|01545|binding|INFO|Removing iface tape86a788a-80 ovn-installed in OVS
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.402 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.408 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:0d:c7 2001:db8:0:1:f816:3eff:feb1:dc7 2001:db8::f816:3eff:feb1:dc7'], port_security=['fa:16:3e:b1:0d:c7 2001:db8:0:1:f816:3eff:feb1:dc7 2001:db8::f816:3eff:feb1:dc7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb1:dc7/64 2001:db8::f816:3eff:feb1:dc7/64', 'neutron:device_id': 'ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d42d98d-b09c-47fa-98dd-69f99f24cca5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=e86a788a-8070-426b-8b30-20117bfe712f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.441 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d58259-eaf9-4ec3-8a72-f629bcb07a81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.444 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bf066882-4c5c-4d3a-b439-352df9212621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:26 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000090.scope: Deactivated successfully.
Dec 13 09:14:26 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000090.scope: Consumed 13.340s CPU time.
Dec 13 09:14:26 compute-0 systemd-machined[210538]: Machine qemu-175-instance-00000090 terminated.
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.484 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0f568865-d44b-42ba-9dd1-8340c1c687d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.503 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[50fc0df2-efb3-439b-afe0-430139a32570]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef1a3009-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:51:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 436], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977600, 'reachable_time': 27921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397165, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.525 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4b261254-b014-4ad1-a0e9-2882d77b0fc7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef1a3009-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977613, 'tstamp': 977613}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397166, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef1a3009-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977618, 'tstamp': 977618}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397166, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.528 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef1a3009-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 NetworkManager[50376]: <info>  [1765617266.5570] manager: (tape86a788a-80): new Tun device (/org/freedesktop/NetworkManager/Devices/640)
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.561 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.561 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef1a3009-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.562 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.562 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef1a3009-90, col_values=(('external_ids', {'iface-id': '365ef2b2-09ce-4255-947a-81f3896d22ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.563 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.564 158419 INFO neutron.agent.ovn.metadata.agent [-] Port e86a788a-8070-426b-8b30-20117bfe712f in datapath ff80f4de-0e76-47f2-b06e-6d3900b63130 unbound from our chassis
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.565 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff80f4de-0e76-47f2-b06e-6d3900b63130
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.571 248514 INFO nova.virt.libvirt.driver [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance destroyed successfully.
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.572 248514 DEBUG nova.objects.instance [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.580 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[117bdafc-1383-4fca-9950-b20890375a9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.596 248514 DEBUG nova.virt.libvirt.vif [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1859978450',display_name='tempest-TestGettingAddress-server-1859978450',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1859978450',id=144,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:13:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ju51941',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:13:59Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.597 248514 DEBUG nova.network.os_vif_util [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.598 248514 DEBUG nova.network.os_vif_util [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.598 248514 DEBUG os_vif [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.599 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.600 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc1f4300-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.605 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.608 248514 INFO os_vif [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3')
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.609 248514 DEBUG nova.virt.libvirt.vif [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1859978450',display_name='tempest-TestGettingAddress-server-1859978450',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1859978450',id=144,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:13:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ju51941',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:13:59Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.609 248514 DEBUG nova.network.os_vif_util [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.610 248514 DEBUG nova.network.os_vif_util [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.610 248514 DEBUG os_vif [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.612 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape86a788a-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.613 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.615 248514 INFO os_vif [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80')
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.618 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[70521fda-b164-4879-83d0-ff68d18f59b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.620 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fc939233-6f33-4a19-add5-4bdb04764ed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.655 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a78f0251-e9ee-4759-9884-a814b67cc937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.674 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b9e5af-feec-457b-a034-42086006f2d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff80f4de-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:6b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977703, 'reachable_time': 20848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397215, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.691 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5ecac775-3b63-4c23-af70-ac0dd234fac3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapff80f4de-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977720, 'tstamp': 977720}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397216, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.693 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff80f4de-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.696 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.697 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff80f4de-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.697 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.697 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff80f4de-00, col_values=(('external_ids', {'iface-id': 'a155cc1b-19d3-4430-9d83-b19e30ef1a95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.698 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.711 248514 DEBUG nova.compute.manager [req-f05f3d7c-5e2f-4e03-812e-c897970d5766 req-866e9901-3412-4a2c-82c3-2780e415dee4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-unplugged-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.711 248514 DEBUG oslo_concurrency.lockutils [req-f05f3d7c-5e2f-4e03-812e-c897970d5766 req-866e9901-3412-4a2c-82c3-2780e415dee4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.712 248514 DEBUG oslo_concurrency.lockutils [req-f05f3d7c-5e2f-4e03-812e-c897970d5766 req-866e9901-3412-4a2c-82c3-2780e415dee4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.712 248514 DEBUG oslo_concurrency.lockutils [req-f05f3d7c-5e2f-4e03-812e-c897970d5766 req-866e9901-3412-4a2c-82c3-2780e415dee4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.712 248514 DEBUG nova.compute.manager [req-f05f3d7c-5e2f-4e03-812e-c897970d5766 req-866e9901-3412-4a2c-82c3-2780e415dee4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] No waiting events found dispatching network-vif-unplugged-fc1f4300-f39f-49b1-8d88-be328fb018fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.712 248514 DEBUG nova.compute.manager [req-f05f3d7c-5e2f-4e03-812e-c897970d5766 req-866e9901-3412-4a2c-82c3-2780e415dee4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-unplugged-fc1f4300-f39f-49b1-8d88-be328fb018fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:14:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3400: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 12 KiB/s wr, 1 op/s
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.864 248514 INFO nova.virt.libvirt.driver [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Deleting instance files /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_del
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.865 248514 INFO nova.virt.libvirt.driver [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Deletion of /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_del complete
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.947 248514 INFO nova.compute.manager [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Took 0.65 seconds to destroy the instance on the hypervisor.
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.948 248514 DEBUG oslo.service.loopingcall [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.948 248514 DEBUG nova.compute.manager [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:14:26 compute-0 nova_compute[248510]: 2025-12-13 09:14:26.948 248514 DEBUG nova.network.neutron [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:14:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:14:28 compute-0 ceph-mon[76537]: pgmap v3400: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 12 KiB/s wr, 1 op/s
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.309 248514 DEBUG nova.network.neutron [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.313 248514 DEBUG nova.compute.manager [req-98668b05-d762-45c2-be55-84515e6e2468 req-037ba3b4-f797-48a3-9210-879329df36a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-deleted-e86a788a-8070-426b-8b30-20117bfe712f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.313 248514 INFO nova.compute.manager [req-98668b05-d762-45c2-be55-84515e6e2468 req-037ba3b4-f797-48a3-9210-879329df36a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Neutron deleted interface e86a788a-8070-426b-8b30-20117bfe712f; detaching it from the instance and deleting it from the info cache
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.313 248514 DEBUG nova.network.neutron [req-98668b05-d762-45c2-be55-84515e6e2468 req-037ba3b4-f797-48a3-9210-879329df36a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.321 248514 DEBUG nova.network.neutron [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updated VIF entry in instance network info cache for port fc1f4300-f39f-49b1-8d88-be328fb018fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.321 248514 DEBUG nova.network.neutron [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.346 248514 INFO nova.compute.manager [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Took 1.40 seconds to deallocate network for instance.
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.347 248514 DEBUG oslo_concurrency.lockutils [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.352 248514 DEBUG nova.compute.manager [req-98668b05-d762-45c2-be55-84515e6e2468 req-037ba3b4-f797-48a3-9210-879329df36a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Detach interface failed, port_id=e86a788a-8070-426b-8b30-20117bfe712f, reason: Instance ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.395 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.395 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.487 248514 DEBUG oslo_concurrency.processutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.798 248514 DEBUG nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.799 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.799 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.799 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.799 248514 DEBUG nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] No waiting events found dispatching network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.799 248514 WARNING nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received unexpected event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe for instance with vm_state deleted and task_state None.
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 DEBUG nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-unplugged-e86a788a-8070-426b-8b30-20117bfe712f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 DEBUG nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] No waiting events found dispatching network-vif-unplugged-e86a788a-8070-426b-8b30-20117bfe712f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 WARNING nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received unexpected event network-vif-unplugged-e86a788a-8070-426b-8b30-20117bfe712f for instance with vm_state deleted and task_state None.
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 DEBUG nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.801 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.801 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.801 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.801 248514 DEBUG nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] No waiting events found dispatching network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.801 248514 WARNING nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received unexpected event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f for instance with vm_state deleted and task_state None.
Dec 13 09:14:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3401: 321 pgs: 321 active+clean; 177 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 7 op/s
Dec 13 09:14:28 compute-0 nova_compute[248510]: 2025-12-13 09:14:28.960 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:14:29 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329259479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:14:29 compute-0 nova_compute[248510]: 2025-12-13 09:14:29.047 248514 DEBUG oslo_concurrency.processutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:14:29 compute-0 nova_compute[248510]: 2025-12-13 09:14:29.056 248514 DEBUG nova.compute.provider_tree [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:14:29 compute-0 nova_compute[248510]: 2025-12-13 09:14:29.075 248514 DEBUG nova.scheduler.client.report [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:14:29 compute-0 nova_compute[248510]: 2025-12-13 09:14:29.105 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:29 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/329259479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:14:29 compute-0 nova_compute[248510]: 2025-12-13 09:14:29.181 248514 INFO nova.scheduler.client.report [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4
Dec 13 09:14:29 compute-0 nova_compute[248510]: 2025-12-13 09:14:29.283 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:30 compute-0 ceph-mon[76537]: pgmap v3401: 321 pgs: 321 active+clean; 177 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 7 op/s
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.426 248514 DEBUG nova.compute.manager [req-e771d645-979a-41f0-9cbc-2126227d3c06 req-71f0af84-a910-4112-b17d-405fc66ecb57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-deleted-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.427 248514 INFO nova.compute.manager [req-e771d645-979a-41f0-9cbc-2126227d3c06 req-71f0af84-a910-4112-b17d-405fc66ecb57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Neutron deleted interface fc1f4300-f39f-49b1-8d88-be328fb018fe; detaching it from the instance and deleting it from the info cache
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.427 248514 DEBUG nova.network.neutron [req-e771d645-979a-41f0-9cbc-2126227d3c06 req-71f0af84-a910-4112-b17d-405fc66ecb57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.431 248514 DEBUG nova.compute.manager [req-e771d645-979a-41f0-9cbc-2126227d3c06 req-71f0af84-a910-4112-b17d-405fc66ecb57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Detach interface failed, port_id=fc1f4300-f39f-49b1-8d88-be328fb018fe, reason: Instance ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:14:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3402: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 29 op/s
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.879 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.903 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.904 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.904 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.905 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.932 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.933 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.933 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.934 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:14:30 compute-0 nova_compute[248510]: 2025-12-13 09:14:30.934 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.408 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.410 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.410 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.411 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.411 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.414 248514 INFO nova.compute.manager [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Terminating instance
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.415 248514 DEBUG nova.compute.manager [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:14:31 compute-0 kernel: tap37c91936-05 (unregistering): left promiscuous mode
Dec 13 09:14:31 compute-0 NetworkManager[50376]: <info>  [1765617271.4726] device (tap37c91936-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:14:31 compute-0 ovn_controller[148476]: 2025-12-13T09:14:31Z|01546|binding|INFO|Releasing lport 37c91936-0589-4bd3-9413-3af7db3e8feb from this chassis (sb_readonly=0)
Dec 13 09:14:31 compute-0 ovn_controller[148476]: 2025-12-13T09:14:31Z|01547|binding|INFO|Setting lport 37c91936-0589-4bd3-9413-3af7db3e8feb down in Southbound
Dec 13 09:14:31 compute-0 ovn_controller[148476]: 2025-12-13T09:14:31Z|01548|binding|INFO|Removing iface tap37c91936-05 ovn-installed in OVS
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.479 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.495 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:6c:85 10.100.0.11'], port_security=['fa:16:3e:77:6c:85 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '707e75d9-f6d2-413d-a727-c3ecbfea90c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9e8c352-1166-4841-b5aa-79bfa0acca5d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=37c91936-0589-4bd3-9413-3af7db3e8feb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.503 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 37c91936-0589-4bd3-9413-3af7db3e8feb in datapath ef1a3009-9f3e-4c4a-9ee4-04bec3653434 unbound from our chassis
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.503 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.505 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef1a3009-9f3e-4c4a-9ee4-04bec3653434, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.505 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[afcb63c0-dc0f-4db0-98f2-833a284ed583]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.509 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434 namespace which is not needed anymore
Dec 13 09:14:31 compute-0 kernel: tap9538b93c-6b (unregistering): left promiscuous mode
Dec 13 09:14:31 compute-0 NetworkManager[50376]: <info>  [1765617271.5205] device (tap9538b93c-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 ovn_controller[148476]: 2025-12-13T09:14:31Z|01549|binding|INFO|Releasing lport 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d from this chassis (sb_readonly=0)
Dec 13 09:14:31 compute-0 ovn_controller[148476]: 2025-12-13T09:14:31Z|01550|binding|INFO|Setting lport 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d down in Southbound
Dec 13 09:14:31 compute-0 ovn_controller[148476]: 2025-12-13T09:14:31Z|01551|binding|INFO|Removing iface tap9538b93c-6b ovn-installed in OVS
Dec 13 09:14:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:14:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1005340049' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.549 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:da:84 2001:db8:0:1:f816:3eff:fe39:da84 2001:db8::f816:3eff:fe39:da84'], port_security=['fa:16:3e:39:da:84 2001:db8:0:1:f816:3eff:fe39:da84 2001:db8::f816:3eff:fe39:da84'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe39:da84/64 2001:db8::f816:3eff:fe39:da84/64', 'neutron:device_id': '707e75d9-f6d2-413d-a727-c3ecbfea90c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d42d98d-b09c-47fa-98dd-69f99f24cca5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.563 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.582 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:14:31 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Dec 13 09:14:31 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008f.scope: Consumed 16.538s CPU time.
Dec 13 09:14:31 compute-0 systemd-machined[210538]: Machine qemu-174-instance-0000008f terminated.
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.613 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 podman[397264]: 2025-12-13 09:14:31.627056205 +0000 UTC m=+0.102015110 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:14:31 compute-0 podman[397263]: 2025-12-13 09:14:31.633373983 +0000 UTC m=+0.106992964 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=multipathd)
Dec 13 09:14:31 compute-0 podman[397262]: 2025-12-13 09:14:31.665778136 +0000 UTC m=+0.131382146 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.671 248514 INFO nova.virt.libvirt.driver [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance destroyed successfully.
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.672 248514 DEBUG nova.objects.instance [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 707e75d9-f6d2-413d-a727-c3ecbfea90c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:14:31 compute-0 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [NOTICE]   (395876) : haproxy version is 2.8.14-c23fe91
Dec 13 09:14:31 compute-0 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [NOTICE]   (395876) : path to executable is /usr/sbin/haproxy
Dec 13 09:14:31 compute-0 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [WARNING]  (395876) : Exiting Master process...
Dec 13 09:14:31 compute-0 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [WARNING]  (395876) : Exiting Master process...
Dec 13 09:14:31 compute-0 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [ALERT]    (395876) : Current worker (395878) exited with code 143 (Terminated)
Dec 13 09:14:31 compute-0 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [WARNING]  (395876) : All workers exited. Exiting... (0)
Dec 13 09:14:31 compute-0 systemd[1]: libpod-1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a.scope: Deactivated successfully.
Dec 13 09:14:31 compute-0 podman[397341]: 2025-12-13 09:14:31.689932901 +0000 UTC m=+0.060949779 container died 1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.690 248514 DEBUG nova.virt.libvirt.vif [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-132103940',display_name='tempest-TestGettingAddress-server-132103940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-132103940',id=143,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:13:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-eqjt2kv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:13:19Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=707e75d9-f6d2-413d-a727-c3ecbfea90c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.691 248514 DEBUG nova.network.os_vif_util [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.692 248514 DEBUG nova.network.os_vif_util [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.693 248514 DEBUG os_vif [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.695 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37c91936-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.702 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.705 248514 INFO os_vif [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05')
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.706 248514 DEBUG nova.virt.libvirt.vif [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-132103940',display_name='tempest-TestGettingAddress-server-132103940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-132103940',id=143,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:13:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-eqjt2kv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:13:19Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=707e75d9-f6d2-413d-a727-c3ecbfea90c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.707 248514 DEBUG nova.network.os_vif_util [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.708 248514 DEBUG nova.network.os_vif_util [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.708 248514 DEBUG os_vif [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.712 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.712 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9538b93c-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.717 248514 INFO os_vif [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b')
Dec 13 09:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a-userdata-shm.mount: Deactivated successfully.
Dec 13 09:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-95e219d71c2457e91e96bcf457fd2d2a6b9c400e89e423017a2889e15b6fb339-merged.mount: Deactivated successfully.
Dec 13 09:14:31 compute-0 podman[397341]: 2025-12-13 09:14:31.735383201 +0000 UTC m=+0.106400079 container cleanup 1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:14:31 compute-0 systemd[1]: libpod-conmon-1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a.scope: Deactivated successfully.
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.771 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.772 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.809 248514 DEBUG nova.compute.manager [req-6a98f265-69a2-4d41-97e7-0473dc6a3653 req-d9faeb2b-c55c-42e1-a827-a8e49227164c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-unplugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.809 248514 DEBUG oslo_concurrency.lockutils [req-6a98f265-69a2-4d41-97e7-0473dc6a3653 req-d9faeb2b-c55c-42e1-a827-a8e49227164c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.810 248514 DEBUG oslo_concurrency.lockutils [req-6a98f265-69a2-4d41-97e7-0473dc6a3653 req-d9faeb2b-c55c-42e1-a827-a8e49227164c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.810 248514 DEBUG oslo_concurrency.lockutils [req-6a98f265-69a2-4d41-97e7-0473dc6a3653 req-d9faeb2b-c55c-42e1-a827-a8e49227164c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.810 248514 DEBUG nova.compute.manager [req-6a98f265-69a2-4d41-97e7-0473dc6a3653 req-d9faeb2b-c55c-42e1-a827-a8e49227164c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] No waiting events found dispatching network-vif-unplugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.810 248514 DEBUG nova.compute.manager [req-6a98f265-69a2-4d41-97e7-0473dc6a3653 req-d9faeb2b-c55c-42e1-a827-a8e49227164c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-unplugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:14:31 compute-0 podman[397418]: 2025-12-13 09:14:31.814862824 +0000 UTC m=+0.055440102 container remove 1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.822 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c58bddde-0ab0-4e0f-a1f9-3122d027cbd1]: (4, ('Sat Dec 13 09:14:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434 (1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a)\n1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a\nSat Dec 13 09:14:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434 (1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a)\n1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.825 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5d004b-4c70-4ae3-99e0-69df8533c7c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.826 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef1a3009-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.829 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 kernel: tapef1a3009-90: left promiscuous mode
Dec 13 09:14:31 compute-0 nova_compute[248510]: 2025-12-13 09:14:31.843 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.847 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0f36a0-8499-49ea-b2a7-2b9f67f7582d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.866 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b55d66a-44ee-4243-97fe-5835514d6e3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.868 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c60fc68-5761-45ed-822c-40f027197170]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.884 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c187e78b-e96d-454d-b44e-93364e1db578]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977591, 'reachable_time': 19949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397435, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:31 compute-0 systemd[1]: run-netns-ovnmeta\x2def1a3009\x2d9f3e\x2d4c4a\x2d9ee4\x2d04bec3653434.mount: Deactivated successfully.
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.891 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.892 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b99435a1-94e7-43ef-b0bd-e1214e8023fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.892 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d in datapath ff80f4de-0e76-47f2-b06e-6d3900b63130 unbound from our chassis
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.894 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff80f4de-0e76-47f2-b06e-6d3900b63130, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.894 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b90ed9a5-be50-49a8-bca0-3682eebc5f66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.895 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130 namespace which is not needed anymore
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.001 248514 INFO nova.virt.libvirt.driver [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Deleting instance files /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1_del
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.002 248514 INFO nova.virt.libvirt.driver [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Deletion of /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1_del complete
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.008 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.009 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3439MB free_disk=59.941806520335376GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.009 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.010 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:32 compute-0 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [NOTICE]   (395952) : haproxy version is 2.8.14-c23fe91
Dec 13 09:14:32 compute-0 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [NOTICE]   (395952) : path to executable is /usr/sbin/haproxy
Dec 13 09:14:32 compute-0 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [WARNING]  (395952) : Exiting Master process...
Dec 13 09:14:32 compute-0 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [ALERT]    (395952) : Current worker (395954) exited with code 143 (Terminated)
Dec 13 09:14:32 compute-0 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [WARNING]  (395952) : All workers exited. Exiting... (0)
Dec 13 09:14:32 compute-0 systemd[1]: libpod-8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6.scope: Deactivated successfully.
Dec 13 09:14:32 compute-0 podman[397454]: 2025-12-13 09:14:32.042253967 +0000 UTC m=+0.043422680 container died 8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6-userdata-shm.mount: Deactivated successfully.
Dec 13 09:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5335169775ac2f2f0d9de9ac62c16a1b6366679c57b207e1bc023d06103d75f6-merged.mount: Deactivated successfully.
Dec 13 09:14:32 compute-0 podman[397454]: 2025-12-13 09:14:32.077684795 +0000 UTC m=+0.078853498 container cleanup 8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:14:32 compute-0 systemd[1]: libpod-conmon-8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6.scope: Deactivated successfully.
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.101 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 707e75d9-f6d2-413d-a727-c3ecbfea90c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.101 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.102 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.109 248514 INFO nova.compute.manager [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Took 0.69 seconds to destroy the instance on the hypervisor.
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.109 248514 DEBUG oslo.service.loopingcall [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.109 248514 DEBUG nova.compute.manager [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.110 248514 DEBUG nova.network.neutron [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:14:32 compute-0 podman[397483]: 2025-12-13 09:14:32.141833464 +0000 UTC m=+0.043045490 container remove 8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:14:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.147 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c2cc4361-31a0-4656-bc52-3580d41f3058]: (4, ('Sat Dec 13 09:14:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130 (8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6)\n8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6\nSat Dec 13 09:14:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130 (8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6)\n8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.149 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:14:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.149 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[23f73562-33dd-4aa7-a21c-89d50d9f98e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.150 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff80f4de-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:14:32 compute-0 kernel: tapff80f4de-00: left promiscuous mode
Dec 13 09:14:32 compute-0 ceph-mon[76537]: pgmap v3402: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 29 op/s
Dec 13 09:14:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1005340049' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:14:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.169 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[50f9bd48-a50a-49ce-b4f4-b301c0a1e376]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.183 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0183e1-1867-4472-959f-832ff0adfa4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.185 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[40aaed85-b496-4d3f-86ea-49d922368f02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.193 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.204 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[191d745b-a977-403e-a406-a140b8cb9eda]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977692, 'reachable_time': 25316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397499, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.206 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:14:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.206 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f2ee75-989d-41d8-94ab-08545691d75a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.508 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-changed-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.509 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing instance network info cache due to event network-changed-37c91936-0589-4bd3-9413-3af7db3e8feb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.510 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.510 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.510 248514 DEBUG nova.network.neutron [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing network info cache for port 37c91936-0589-4bd3-9413-3af7db3e8feb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:14:32 compute-0 systemd[1]: run-netns-ovnmeta\x2dff80f4de\x2d0e76\x2d47f2\x2db06e\x2d6d3900b63130.mount: Deactivated successfully.
Dec 13 09:14:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:14:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111490107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:14:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.745 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.751 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.774 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.800 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:14:32 compute-0 nova_compute[248510]: 2025-12-13 09:14:32.800 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3403: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 11 KiB/s wr, 29 op/s
Dec 13 09:14:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2111490107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.667 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.963 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.995 248514 DEBUG nova.compute.manager [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.996 248514 DEBUG oslo_concurrency.lockutils [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.997 248514 DEBUG oslo_concurrency.lockutils [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.997 248514 DEBUG oslo_concurrency.lockutils [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.997 248514 DEBUG nova.compute.manager [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] No waiting events found dispatching network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.998 248514 WARNING nova.compute.manager [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received unexpected event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d for instance with vm_state active and task_state deleting.
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.998 248514 DEBUG nova.compute.manager [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-deleted-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.998 248514 INFO nova.compute.manager [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Neutron deleted interface 37c91936-0589-4bd3-9413-3af7db3e8feb; detaching it from the instance and deleting it from the info cache
Dec 13 09:14:33 compute-0 nova_compute[248510]: 2025-12-13 09:14:33.999 248514 DEBUG nova.network.neutron [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.111 248514 DEBUG nova.compute.manager [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Detach interface failed, port_id=37c91936-0589-4bd3-9413-3af7db3e8feb, reason: Instance 707e75d9-f6d2-413d-a727-c3ecbfea90c1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:14:34 compute-0 ceph-mon[76537]: pgmap v3403: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 11 KiB/s wr, 29 op/s
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.297 248514 DEBUG nova.network.neutron [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.315 248514 INFO nova.compute.manager [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Took 2.21 seconds to deallocate network for instance.
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.619 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.620 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.691 248514 DEBUG oslo_concurrency.processutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:14:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3404: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 12 KiB/s wr, 57 op/s
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.959 248514 DEBUG nova.network.neutron [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updated VIF entry in instance network info cache for port 37c91936-0589-4bd3-9413-3af7db3e8feb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.961 248514 DEBUG nova.network.neutron [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.982 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.983 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-unplugged-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.983 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.984 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.984 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.984 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] No waiting events found dispatching network-vif-unplugged-37c91936-0589-4bd3-9413-3af7db3e8feb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.985 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-unplugged-37c91936-0589-4bd3-9413-3af7db3e8feb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.985 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.985 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.986 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.986 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.986 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] No waiting events found dispatching network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:14:34 compute-0 nova_compute[248510]: 2025-12-13 09:14:34.987 248514 WARNING nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received unexpected event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb for instance with vm_state active and task_state deleting.
Dec 13 09:14:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:14:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/83806067' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:14:35 compute-0 nova_compute[248510]: 2025-12-13 09:14:35.290 248514 DEBUG oslo_concurrency.processutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:14:35 compute-0 nova_compute[248510]: 2025-12-13 09:14:35.298 248514 DEBUG nova.compute.provider_tree [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:14:35 compute-0 nova_compute[248510]: 2025-12-13 09:14:35.321 248514 DEBUG nova.scheduler.client.report [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:14:35 compute-0 nova_compute[248510]: 2025-12-13 09:14:35.360 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:35 compute-0 nova_compute[248510]: 2025-12-13 09:14:35.408 248514 INFO nova.scheduler.client.report [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 707e75d9-f6d2-413d-a727-c3ecbfea90c1
Dec 13 09:14:35 compute-0 nova_compute[248510]: 2025-12-13 09:14:35.479 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:36 compute-0 nova_compute[248510]: 2025-12-13 09:14:36.188 248514 DEBUG nova.compute.manager [req-1f944e3a-19e8-476c-a440-5a5d13870c4a req-da96ce95-e8fb-4ddd-9347-9d7f9b36f889 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-deleted-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:14:36 compute-0 nova_compute[248510]: 2025-12-13 09:14:36.188 248514 INFO nova.compute.manager [req-1f944e3a-19e8-476c-a440-5a5d13870c4a req-da96ce95-e8fb-4ddd-9347-9d7f9b36f889 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Neutron deleted interface 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d; detaching it from the instance and deleting it from the info cache
Dec 13 09:14:36 compute-0 nova_compute[248510]: 2025-12-13 09:14:36.189 248514 DEBUG nova.network.neutron [req-1f944e3a-19e8-476c-a440-5a5d13870c4a req-da96ce95-e8fb-4ddd-9347-9d7f9b36f889 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Dec 13 09:14:36 compute-0 ceph-mon[76537]: pgmap v3404: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 12 KiB/s wr, 57 op/s
Dec 13 09:14:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/83806067' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:14:36 compute-0 nova_compute[248510]: 2025-12-13 09:14:36.191 248514 DEBUG nova.compute.manager [req-1f944e3a-19e8-476c-a440-5a5d13870c4a req-da96ce95-e8fb-4ddd-9347-9d7f9b36f889 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Detach interface failed, port_id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d, reason: Instance 707e75d9-f6d2-413d-a727-c3ecbfea90c1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:14:36 compute-0 nova_compute[248510]: 2025-12-13 09:14:36.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:36 compute-0 nova_compute[248510]: 2025-12-13 09:14:36.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:14:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3405: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 55 op/s
Dec 13 09:14:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:14:38 compute-0 ceph-mon[76537]: pgmap v3405: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 55 op/s
Dec 13 09:14:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3406: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 55 op/s
Dec 13 09:14:38 compute-0 nova_compute[248510]: 2025-12-13 09:14:38.965 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:39 compute-0 ceph-mon[76537]: pgmap v3406: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 55 op/s
Dec 13 09:14:39 compute-0 sudo[397543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:14:39 compute-0 sudo[397543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:14:39 compute-0 sudo[397543]: pam_unix(sudo:session): session closed for user root
Dec 13 09:14:39 compute-0 sudo[397568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:14:39 compute-0 sudo[397568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:14:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:14:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:14:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:14:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:14:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:14:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:14:40 compute-0 sudo[397568]: pam_unix(sudo:session): session closed for user root
Dec 13 09:14:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:14:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:14:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:14:40 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:14:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:14:40 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:14:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:14:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:14:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:14:40 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:14:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:14:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:14:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:14:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:14:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:14:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:14:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:14:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:14:40 compute-0 sudo[397623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:14:40 compute-0 sudo[397623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:14:40 compute-0 sudo[397623]: pam_unix(sudo:session): session closed for user root
Dec 13 09:14:40 compute-0 sudo[397648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:14:40 compute-0 sudo[397648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:14:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3407: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 6.8 KiB/s wr, 50 op/s
Dec 13 09:14:40 compute-0 podman[397685]: 2025-12-13 09:14:40.992721366 +0000 UTC m=+0.098072760 container create fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_taussig, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 09:14:41 compute-0 podman[397685]: 2025-12-13 09:14:40.919442828 +0000 UTC m=+0.024794242 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:14:41 compute-0 systemd[1]: Started libpod-conmon-fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9.scope.
Dec 13 09:14:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:14:41 compute-0 podman[397685]: 2025-12-13 09:14:41.100820067 +0000 UTC m=+0.206171481 container init fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_taussig, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 09:14:41 compute-0 podman[397685]: 2025-12-13 09:14:41.112414968 +0000 UTC m=+0.217766362 container start fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 09:14:41 compute-0 podman[397685]: 2025-12-13 09:14:41.116607293 +0000 UTC m=+0.221958737 container attach fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 09:14:41 compute-0 heuristic_taussig[397701]: 167 167
Dec 13 09:14:41 compute-0 systemd[1]: libpod-fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9.scope: Deactivated successfully.
Dec 13 09:14:41 compute-0 conmon[397701]: conmon fdf047bfa4400cf388bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9.scope/container/memory.events
Dec 13 09:14:41 compute-0 podman[397685]: 2025-12-13 09:14:41.120972543 +0000 UTC m=+0.226323937 container died fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 09:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-82a85eee03b34e73ba7fa06479a74d9ec30f67b644d570cdc37ba5052f4f04a8-merged.mount: Deactivated successfully.
Dec 13 09:14:41 compute-0 podman[397685]: 2025-12-13 09:14:41.29309573 +0000 UTC m=+0.398447134 container remove fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_taussig, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:14:41 compute-0 systemd[1]: libpod-conmon-fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9.scope: Deactivated successfully.
Dec 13 09:14:41 compute-0 podman[397726]: 2025-12-13 09:14:41.461691358 +0000 UTC m=+0.051670577 container create dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 09:14:41 compute-0 systemd[1]: Started libpod-conmon-dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8.scope.
Dec 13 09:14:41 compute-0 ceph-mon[76537]: pgmap v3407: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 6.8 KiB/s wr, 50 op/s
Dec 13 09:14:41 compute-0 podman[397726]: 2025-12-13 09:14:41.437714247 +0000 UTC m=+0.027693466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:14:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb9612867d51e4107b8a3cc06dda5c430f447cd1a4a6b7f5fd0f6e40de3d85e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb9612867d51e4107b8a3cc06dda5c430f447cd1a4a6b7f5fd0f6e40de3d85e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb9612867d51e4107b8a3cc06dda5c430f447cd1a4a6b7f5fd0f6e40de3d85e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb9612867d51e4107b8a3cc06dda5c430f447cd1a4a6b7f5fd0f6e40de3d85e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb9612867d51e4107b8a3cc06dda5c430f447cd1a4a6b7f5fd0f6e40de3d85e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:41 compute-0 podman[397726]: 2025-12-13 09:14:41.572581559 +0000 UTC m=+0.162560768 container init dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Dec 13 09:14:41 compute-0 nova_compute[248510]: 2025-12-13 09:14:41.575 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617266.569178, ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:14:41 compute-0 nova_compute[248510]: 2025-12-13 09:14:41.576 248514 INFO nova.compute.manager [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] VM Stopped (Lifecycle Event)
Dec 13 09:14:41 compute-0 podman[397726]: 2025-12-13 09:14:41.591985676 +0000 UTC m=+0.181964865 container start dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 09:14:41 compute-0 podman[397726]: 2025-12-13 09:14:41.595932525 +0000 UTC m=+0.185911714 container attach dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 09:14:41 compute-0 nova_compute[248510]: 2025-12-13 09:14:41.604 248514 DEBUG nova.compute.manager [None req-2ffe027d-610e-48d9-99c6-501324836918 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:14:41 compute-0 nova_compute[248510]: 2025-12-13 09:14:41.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:42 compute-0 strange_swanson[397742]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:14:42 compute-0 strange_swanson[397742]: --> All data devices are unavailable
Dec 13 09:14:42 compute-0 systemd[1]: libpod-dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8.scope: Deactivated successfully.
Dec 13 09:14:42 compute-0 podman[397762]: 2025-12-13 09:14:42.198624001 +0000 UTC m=+0.051007631 container died dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 09:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eb9612867d51e4107b8a3cc06dda5c430f447cd1a4a6b7f5fd0f6e40de3d85e-merged.mount: Deactivated successfully.
Dec 13 09:14:42 compute-0 podman[397762]: 2025-12-13 09:14:42.252353488 +0000 UTC m=+0.104737078 container remove dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 09:14:42 compute-0 systemd[1]: libpod-conmon-dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8.scope: Deactivated successfully.
Dec 13 09:14:42 compute-0 sudo[397648]: pam_unix(sudo:session): session closed for user root
Dec 13 09:14:42 compute-0 sudo[397777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:14:42 compute-0 sudo[397777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:14:42 compute-0 sudo[397777]: pam_unix(sudo:session): session closed for user root
Dec 13 09:14:42 compute-0 sudo[397802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:14:42 compute-0 sudo[397802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:14:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:14:42 compute-0 podman[397839]: 2025-12-13 09:14:42.747887525 +0000 UTC m=+0.058843846 container create ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 09:14:42 compute-0 systemd[1]: Started libpod-conmon-ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1.scope.
Dec 13 09:14:42 compute-0 podman[397839]: 2025-12-13 09:14:42.71699114 +0000 UTC m=+0.027947521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:14:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:14:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3408: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:14:42 compute-0 podman[397839]: 2025-12-13 09:14:42.857023193 +0000 UTC m=+0.167979544 container init ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 09:14:42 compute-0 podman[397839]: 2025-12-13 09:14:42.865780892 +0000 UTC m=+0.176737213 container start ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 09:14:42 compute-0 podman[397839]: 2025-12-13 09:14:42.869842934 +0000 UTC m=+0.180799255 container attach ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:14:42 compute-0 pensive_golick[397856]: 167 167
Dec 13 09:14:42 compute-0 systemd[1]: libpod-ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1.scope: Deactivated successfully.
Dec 13 09:14:42 compute-0 podman[397839]: 2025-12-13 09:14:42.872818269 +0000 UTC m=+0.183774600 container died ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6e433f08f3d8145ee0386b597303e263bf14dd2da9a5f2469b0cf1ab966dc53-merged.mount: Deactivated successfully.
Dec 13 09:14:42 compute-0 podman[397839]: 2025-12-13 09:14:42.915033167 +0000 UTC m=+0.225989488 container remove ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 09:14:42 compute-0 systemd[1]: libpod-conmon-ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1.scope: Deactivated successfully.
Dec 13 09:14:43 compute-0 podman[397880]: 2025-12-13 09:14:43.122017889 +0000 UTC m=+0.051889563 container create 735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_gagarin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:14:43 compute-0 systemd[1]: Started libpod-conmon-735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70.scope.
Dec 13 09:14:43 compute-0 podman[397880]: 2025-12-13 09:14:43.098984701 +0000 UTC m=+0.028856395 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:14:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5da30275fdac41162d0d74046818b77a0745a34ee8404ba84f49896c442b12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5da30275fdac41162d0d74046818b77a0745a34ee8404ba84f49896c442b12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5da30275fdac41162d0d74046818b77a0745a34ee8404ba84f49896c442b12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5da30275fdac41162d0d74046818b77a0745a34ee8404ba84f49896c442b12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:43 compute-0 podman[397880]: 2025-12-13 09:14:43.217047882 +0000 UTC m=+0.146919576 container init 735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_gagarin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:14:43 compute-0 podman[397880]: 2025-12-13 09:14:43.225762071 +0000 UTC m=+0.155633765 container start 735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_gagarin, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 09:14:43 compute-0 podman[397880]: 2025-12-13 09:14:43.230014867 +0000 UTC m=+0.159886561 container attach 735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]: {
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:     "0": [
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:         {
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "devices": [
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "/dev/loop3"
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             ],
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_name": "ceph_lv0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_size": "21470642176",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "name": "ceph_lv0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "tags": {
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.cluster_name": "ceph",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.crush_device_class": "",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.encrypted": "0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.objectstore": "bluestore",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.osd_id": "0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.type": "block",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.vdo": "0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.with_tpm": "0"
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             },
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "type": "block",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "vg_name": "ceph_vg0"
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:         }
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:     ],
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:     "1": [
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:         {
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "devices": [
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "/dev/loop4"
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             ],
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_name": "ceph_lv1",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_size": "21470642176",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "name": "ceph_lv1",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "tags": {
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.cluster_name": "ceph",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.crush_device_class": "",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.encrypted": "0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.objectstore": "bluestore",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.osd_id": "1",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.type": "block",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.vdo": "0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.with_tpm": "0"
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             },
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "type": "block",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "vg_name": "ceph_vg1"
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:         }
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:     ],
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:     "2": [
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:         {
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "devices": [
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "/dev/loop5"
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             ],
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_name": "ceph_lv2",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_size": "21470642176",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "name": "ceph_lv2",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "tags": {
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.cluster_name": "ceph",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.crush_device_class": "",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.encrypted": "0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.objectstore": "bluestore",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.osd_id": "2",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.type": "block",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.vdo": "0",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:                 "ceph.with_tpm": "0"
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             },
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "type": "block",
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:             "vg_name": "ceph_vg2"
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:         }
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]:     ]
Dec 13 09:14:43 compute-0 eloquent_gagarin[397897]: }
Dec 13 09:14:43 compute-0 systemd[1]: libpod-735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70.scope: Deactivated successfully.
Dec 13 09:14:43 compute-0 podman[397880]: 2025-12-13 09:14:43.563381688 +0000 UTC m=+0.493253362 container died 735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_gagarin, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 09:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c5da30275fdac41162d0d74046818b77a0745a34ee8404ba84f49896c442b12-merged.mount: Deactivated successfully.
Dec 13 09:14:43 compute-0 podman[397880]: 2025-12-13 09:14:43.614534791 +0000 UTC m=+0.544406485 container remove 735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 13 09:14:43 compute-0 systemd[1]: libpod-conmon-735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70.scope: Deactivated successfully.
Dec 13 09:14:43 compute-0 sudo[397802]: pam_unix(sudo:session): session closed for user root
Dec 13 09:14:43 compute-0 sudo[397918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:14:43 compute-0 sudo[397918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:14:43 compute-0 sudo[397918]: pam_unix(sudo:session): session closed for user root
Dec 13 09:14:43 compute-0 nova_compute[248510]: 2025-12-13 09:14:43.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:14:43 compute-0 sudo[397943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:14:43 compute-0 sudo[397943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:14:43 compute-0 ceph-mon[76537]: pgmap v3408: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:14:43 compute-0 nova_compute[248510]: 2025-12-13 09:14:43.968 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:44 compute-0 podman[397981]: 2025-12-13 09:14:44.106399727 +0000 UTC m=+0.056733603 container create 848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_torvalds, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 09:14:44 compute-0 systemd[1]: Started libpod-conmon-848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34.scope.
Dec 13 09:14:44 compute-0 podman[397981]: 2025-12-13 09:14:44.081093353 +0000 UTC m=+0.031427259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:14:44 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:14:44 compute-0 podman[397981]: 2025-12-13 09:14:44.214656903 +0000 UTC m=+0.164990789 container init 848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 09:14:44 compute-0 podman[397981]: 2025-12-13 09:14:44.228421278 +0000 UTC m=+0.178755134 container start 848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_torvalds, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:14:44 compute-0 podman[397981]: 2025-12-13 09:14:44.232532191 +0000 UTC m=+0.182866147 container attach 848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 09:14:44 compute-0 strange_torvalds[397998]: 167 167
Dec 13 09:14:44 compute-0 systemd[1]: libpod-848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34.scope: Deactivated successfully.
Dec 13 09:14:44 compute-0 conmon[397998]: conmon 848847b13eb16cd50181 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34.scope/container/memory.events
Dec 13 09:14:44 compute-0 podman[397981]: 2025-12-13 09:14:44.237321481 +0000 UTC m=+0.187655377 container died 848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 09:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e9e0a42229df10f2d6689cc8f3658afbbe6a8b8fb5da65db42c7778a72074e7-merged.mount: Deactivated successfully.
Dec 13 09:14:44 compute-0 podman[397981]: 2025-12-13 09:14:44.297117471 +0000 UTC m=+0.247451337 container remove 848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 09:14:44 compute-0 systemd[1]: libpod-conmon-848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34.scope: Deactivated successfully.
Dec 13 09:14:44 compute-0 podman[398021]: 2025-12-13 09:14:44.467110324 +0000 UTC m=+0.051673557 container create b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:14:44 compute-0 systemd[1]: Started libpod-conmon-b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb.scope.
Dec 13 09:14:44 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190acd4d6a63f3cd42f7b7accc0f255f75c0307a3af1ffe0bf68bf69f48be904/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190acd4d6a63f3cd42f7b7accc0f255f75c0307a3af1ffe0bf68bf69f48be904/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190acd4d6a63f3cd42f7b7accc0f255f75c0307a3af1ffe0bf68bf69f48be904/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190acd4d6a63f3cd42f7b7accc0f255f75c0307a3af1ffe0bf68bf69f48be904/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:14:44 compute-0 podman[398021]: 2025-12-13 09:14:44.447742938 +0000 UTC m=+0.032306161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:14:44 compute-0 podman[398021]: 2025-12-13 09:14:44.555970523 +0000 UTC m=+0.140533786 container init b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_banach, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:14:44 compute-0 podman[398021]: 2025-12-13 09:14:44.564917697 +0000 UTC m=+0.149480920 container start b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_banach, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 09:14:44 compute-0 podman[398021]: 2025-12-13 09:14:44.568985699 +0000 UTC m=+0.153549012 container attach b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:14:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3409: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:14:45 compute-0 lvm[398116]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:14:45 compute-0 lvm[398116]: VG ceph_vg0 finished
Dec 13 09:14:45 compute-0 lvm[398117]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:14:45 compute-0 lvm[398117]: VG ceph_vg1 finished
Dec 13 09:14:45 compute-0 lvm[398119]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:14:45 compute-0 lvm[398119]: VG ceph_vg2 finished
Dec 13 09:14:45 compute-0 tender_banach[398038]: {}
Dec 13 09:14:45 compute-0 systemd[1]: libpod-b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb.scope: Deactivated successfully.
Dec 13 09:14:45 compute-0 systemd[1]: libpod-b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb.scope: Consumed 1.423s CPU time.
Dec 13 09:14:45 compute-0 podman[398021]: 2025-12-13 09:14:45.427912911 +0000 UTC m=+1.012476174 container died b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-190acd4d6a63f3cd42f7b7accc0f255f75c0307a3af1ffe0bf68bf69f48be904-merged.mount: Deactivated successfully.
Dec 13 09:14:45 compute-0 podman[398021]: 2025-12-13 09:14:45.493981439 +0000 UTC m=+1.078544702 container remove b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:14:45 compute-0 systemd[1]: libpod-conmon-b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb.scope: Deactivated successfully.
Dec 13 09:14:45 compute-0 sudo[397943]: pam_unix(sudo:session): session closed for user root
Dec 13 09:14:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:14:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:14:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:14:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:14:45 compute-0 sudo[398134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:14:45 compute-0 sudo[398134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:14:45 compute-0 sudo[398134]: pam_unix(sudo:session): session closed for user root
Dec 13 09:14:45 compute-0 ceph-mon[76537]: pgmap v3409: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:14:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:14:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:14:46 compute-0 nova_compute[248510]: 2025-12-13 09:14:46.666 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617271.6649587, 707e75d9-f6d2-413d-a727-c3ecbfea90c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:14:46 compute-0 nova_compute[248510]: 2025-12-13 09:14:46.668 248514 INFO nova.compute.manager [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] VM Stopped (Lifecycle Event)
Dec 13 09:14:46 compute-0 nova_compute[248510]: 2025-12-13 09:14:46.703 248514 DEBUG nova.compute.manager [None req-606fc9f8-7447-4a40-a3af-cc74e0026038 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:14:46 compute-0 nova_compute[248510]: 2025-12-13 09:14:46.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3410: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:47 compute-0 nova_compute[248510]: 2025-12-13 09:14:47.056 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:47 compute-0 nova_compute[248510]: 2025-12-13 09:14:47.152 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:47 compute-0 ceph-mon[76537]: pgmap v3410: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:14:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3411: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:48 compute-0 nova_compute[248510]: 2025-12-13 09:14:48.987 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:50 compute-0 ceph-mon[76537]: pgmap v3411: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3412: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:51 compute-0 ceph-mon[76537]: pgmap v3412: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:51 compute-0 nova_compute[248510]: 2025-12-13 09:14:51.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:14:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3413: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:53 compute-0 nova_compute[248510]: 2025-12-13 09:14:53.989 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:54 compute-0 ceph-mon[76537]: pgmap v3413: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.033426) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617294033535, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1507, "num_deletes": 251, "total_data_size": 2521833, "memory_usage": 2565040, "flush_reason": "Manual Compaction"}
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617294051836, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2454808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66754, "largest_seqno": 68260, "table_properties": {"data_size": 2447735, "index_size": 4143, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14685, "raw_average_key_size": 20, "raw_value_size": 2433650, "raw_average_value_size": 3315, "num_data_blocks": 185, "num_entries": 734, "num_filter_entries": 734, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617138, "oldest_key_time": 1765617138, "file_creation_time": 1765617294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 18472 microseconds, and 7421 cpu microseconds.
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.051908) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2454808 bytes OK
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.051941) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.053478) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.053495) EVENT_LOG_v1 {"time_micros": 1765617294053490, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.053523) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2515209, prev total WAL file size 2515209, number of live WAL files 2.
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.054574) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2397KB)], [158(10MB)]
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617294054718, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 13738490, "oldest_snapshot_seqno": -1}
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8746 keys, 11940151 bytes, temperature: kUnknown
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617294153131, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11940151, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11882873, "index_size": 34317, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21893, "raw_key_size": 229488, "raw_average_key_size": 26, "raw_value_size": 11728109, "raw_average_value_size": 1340, "num_data_blocks": 1329, "num_entries": 8746, "num_filter_entries": 8746, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.153588) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11940151 bytes
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.156280) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.4 rd, 121.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.8 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(10.5) write-amplify(4.9) OK, records in: 9260, records dropped: 514 output_compression: NoCompression
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.156302) EVENT_LOG_v1 {"time_micros": 1765617294156291, "job": 98, "event": "compaction_finished", "compaction_time_micros": 98551, "compaction_time_cpu_micros": 40445, "output_level": 6, "num_output_files": 1, "total_output_size": 11940151, "num_input_records": 9260, "num_output_records": 8746, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617294157012, "job": 98, "event": "table_file_deletion", "file_number": 160}
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617294159712, "job": 98, "event": "table_file_deletion", "file_number": 158}
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.054456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.159835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.159843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.159845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.159847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:14:54 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.159849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:14:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3414: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:55.451 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:14:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:55.452 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:14:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:14:55.452 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:14:56 compute-0 ceph-mon[76537]: pgmap v3414: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:56 compute-0 nova_compute[248510]: 2025-12-13 09:14:56.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:14:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3415: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:57 compute-0 ceph-mon[76537]: pgmap v3415: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:14:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3416: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:14:58 compute-0 nova_compute[248510]: 2025-12-13 09:14:58.991 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:00 compute-0 ceph-mon[76537]: pgmap v3416: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3417: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:01 compute-0 ceph-mon[76537]: pgmap v3417: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:01 compute-0 nova_compute[248510]: 2025-12-13 09:15:01.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:02 compute-0 podman[398161]: 2025-12-13 09:15:02.001244729 +0000 UTC m=+0.084764942 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 13 09:15:02 compute-0 podman[398162]: 2025-12-13 09:15:02.009965287 +0000 UTC m=+0.085709867 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:15:02 compute-0 podman[398160]: 2025-12-13 09:15:02.050009824 +0000 UTC m=+0.130046031 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 09:15:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:15:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3418: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:03 compute-0 nova_compute[248510]: 2025-12-13 09:15:03.994 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:04 compute-0 ceph-mon[76537]: pgmap v3418: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3419: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:05 compute-0 nova_compute[248510]: 2025-12-13 09:15:05.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:15:06 compute-0 ceph-mon[76537]: pgmap v3419: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:06 compute-0 nova_compute[248510]: 2025-12-13 09:15:06.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3420: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:07 compute-0 ceph-mon[76537]: pgmap v3420: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:15:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3421: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:08 compute-0 nova_compute[248510]: 2025-12-13 09:15:08.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:15:09
Dec 13 09:15:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:15:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:15:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'volumes', 'vms', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Dec 13 09:15:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:15:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:15:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:15:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:15:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:15:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:15:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:15:10 compute-0 ceph-mon[76537]: pgmap v3421: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3422: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:15:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:15:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:15:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:15:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:15:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:15:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:15:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:15:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:15:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:15:11 compute-0 ceph-mon[76537]: pgmap v3422: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:11 compute-0 nova_compute[248510]: 2025-12-13 09:15:11.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:15:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3423: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:13 compute-0 nova_compute[248510]: 2025-12-13 09:15:13.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:14 compute-0 ceph-mon[76537]: pgmap v3423: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3424: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:15:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3564067138' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:15:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:15:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3564067138' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:15:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3564067138' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:15:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3564067138' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:15:16 compute-0 ceph-mon[76537]: pgmap v3424: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:16 compute-0 nova_compute[248510]: 2025-12-13 09:15:16.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3425: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:17 compute-0 ceph-mon[76537]: pgmap v3425: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:15:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3426: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:19 compute-0 nova_compute[248510]: 2025-12-13 09:15:19.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:19 compute-0 nova_compute[248510]: 2025-12-13 09:15:19.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:15:19 compute-0 ceph-mon[76537]: pgmap v3426: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3427: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4722195890038757e-05 of space, bias 1.0, pg target 0.004416658767011627 quantized to 32 (current 32)
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697272279419812 of space, bias 1.0, pg target 0.20091816838259435 quantized to 32 (current 32)
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:15:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:15:21 compute-0 nova_compute[248510]: 2025-12-13 09:15:21.741 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:21 compute-0 ceph-mon[76537]: pgmap v3427: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:22 compute-0 nova_compute[248510]: 2025-12-13 09:15:22.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:15:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:15:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3428: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:23 compute-0 ceph-mon[76537]: pgmap v3428: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:24 compute-0 nova_compute[248510]: 2025-12-13 09:15:24.002 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3429: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:25 compute-0 nova_compute[248510]: 2025-12-13 09:15:25.622 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:25 compute-0 nova_compute[248510]: 2025-12-13 09:15:25.623 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:25 compute-0 nova_compute[248510]: 2025-12-13 09:15:25.648 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:15:25 compute-0 nova_compute[248510]: 2025-12-13 09:15:25.762 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:25 compute-0 nova_compute[248510]: 2025-12-13 09:15:25.763 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:25 compute-0 nova_compute[248510]: 2025-12-13 09:15:25.790 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:15:25 compute-0 nova_compute[248510]: 2025-12-13 09:15:25.791 248514 INFO nova.compute.claims [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:15:25 compute-0 nova_compute[248510]: 2025-12-13 09:15:25.917 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:15:25 compute-0 ceph-mon[76537]: pgmap v3429: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:15:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/576176643' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.551 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.558 248514 DEBUG nova.compute.provider_tree [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.582 248514 DEBUG nova.scheduler.client.report [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.609 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.610 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.659 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.660 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.686 248514 INFO nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.708 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.809 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.826 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.827 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.827 248514 INFO nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Creating image(s)
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.854 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:15:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3430: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.882 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.910 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:15:26 compute-0 nova_compute[248510]: 2025-12-13 09:15:26.916 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:15:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/576176643' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.014 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.015 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.017 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.017 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.041 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.045 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.394 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.466 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.553 248514 DEBUG nova.objects.instance [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 44149dbe-362b-4930-a63b-d04c9a3b3b4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.684 248514 DEBUG nova.policy [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.700 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.701 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Ensure instance console log exists: /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.701 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.702 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:27 compute-0 nova_compute[248510]: 2025-12-13 09:15:27.702 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:15:27 compute-0 ceph-mon[76537]: pgmap v3430: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3431: 321 pgs: 321 active+clean; 62 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 1002 KiB/s wr, 1 op/s
Dec 13 09:15:29 compute-0 nova_compute[248510]: 2025-12-13 09:15:29.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:29 compute-0 ceph-mon[76537]: pgmap v3431: 321 pgs: 321 active+clean; 62 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 1002 KiB/s wr, 1 op/s
Dec 13 09:15:30 compute-0 nova_compute[248510]: 2025-12-13 09:15:30.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:15:30 compute-0 nova_compute[248510]: 2025-12-13 09:15:30.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:15:30 compute-0 nova_compute[248510]: 2025-12-13 09:15:30.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:30 compute-0 nova_compute[248510]: 2025-12-13 09:15:30.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:30 compute-0 nova_compute[248510]: 2025-12-13 09:15:30.802 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:30 compute-0 nova_compute[248510]: 2025-12-13 09:15:30.802 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:15:30 compute-0 nova_compute[248510]: 2025-12-13 09:15:30.802 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:15:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3432: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:15:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:15:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990506696' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:15:31 compute-0 nova_compute[248510]: 2025-12-13 09:15:31.352 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:15:31 compute-0 nova_compute[248510]: 2025-12-13 09:15:31.543 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:15:31 compute-0 nova_compute[248510]: 2025-12-13 09:15:31.544 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3487MB free_disk=59.96664601098746GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:15:31 compute-0 nova_compute[248510]: 2025-12-13 09:15:31.544 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:31 compute-0 nova_compute[248510]: 2025-12-13 09:15:31.545 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:31 compute-0 nova_compute[248510]: 2025-12-13 09:15:31.632 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 44149dbe-362b-4930-a63b-d04c9a3b3b4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:15:31 compute-0 nova_compute[248510]: 2025-12-13 09:15:31.632 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:15:31 compute-0 nova_compute[248510]: 2025-12-13 09:15:31.632 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:15:31 compute-0 nova_compute[248510]: 2025-12-13 09:15:31.688 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:15:31 compute-0 nova_compute[248510]: 2025-12-13 09:15:31.748 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:31 compute-0 nova_compute[248510]: 2025-12-13 09:15:31.952 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Successfully created port: 66abeeb9-b4e2-4901-9437-be8cd001222f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:15:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:15:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627510535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:15:32 compute-0 nova_compute[248510]: 2025-12-13 09:15:32.231 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:15:32 compute-0 nova_compute[248510]: 2025-12-13 09:15:32.238 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:15:32 compute-0 nova_compute[248510]: 2025-12-13 09:15:32.257 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:15:32 compute-0 nova_compute[248510]: 2025-12-13 09:15:32.287 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:15:32 compute-0 nova_compute[248510]: 2025-12-13 09:15:32.288 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:32 compute-0 ceph-mon[76537]: pgmap v3432: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:15:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3990506696' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:15:32 compute-0 nova_compute[248510]: 2025-12-13 09:15:32.583 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Successfully created port: 3b47d34a-6968-411d-8f9d-38a835c0fa77 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:15:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:15:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3433: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:15:33 compute-0 podman[398459]: 2025-12-13 09:15:33.018437094 +0000 UTC m=+0.091767037 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 09:15:33 compute-0 podman[398460]: 2025-12-13 09:15:33.020864995 +0000 UTC m=+0.090400184 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 13 09:15:33 compute-0 podman[398458]: 2025-12-13 09:15:33.093545216 +0000 UTC m=+0.170794477 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 09:15:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3627510535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:15:33 compute-0 ceph-mon[76537]: pgmap v3433: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:15:34 compute-0 nova_compute[248510]: 2025-12-13 09:15:34.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:34 compute-0 nova_compute[248510]: 2025-12-13 09:15:34.043 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Successfully updated port: 66abeeb9-b4e2-4901-9437-be8cd001222f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:15:34 compute-0 nova_compute[248510]: 2025-12-13 09:15:34.216 248514 DEBUG nova.compute.manager [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-changed-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:15:34 compute-0 nova_compute[248510]: 2025-12-13 09:15:34.217 248514 DEBUG nova.compute.manager [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing instance network info cache due to event network-changed-66abeeb9-b4e2-4901-9437-be8cd001222f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:15:34 compute-0 nova_compute[248510]: 2025-12-13 09:15:34.218 248514 DEBUG oslo_concurrency.lockutils [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:15:34 compute-0 nova_compute[248510]: 2025-12-13 09:15:34.218 248514 DEBUG oslo_concurrency.lockutils [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:15:34 compute-0 nova_compute[248510]: 2025-12-13 09:15:34.218 248514 DEBUG nova.network.neutron [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing network info cache for port 66abeeb9-b4e2-4901-9437-be8cd001222f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:15:34 compute-0 nova_compute[248510]: 2025-12-13 09:15:34.288 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:15:34 compute-0 nova_compute[248510]: 2025-12-13 09:15:34.289 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:15:34 compute-0 nova_compute[248510]: 2025-12-13 09:15:34.289 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:15:34 compute-0 nova_compute[248510]: 2025-12-13 09:15:34.477 248514 DEBUG nova.network.neutron [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:15:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3434: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:15:35 compute-0 nova_compute[248510]: 2025-12-13 09:15:35.018 248514 DEBUG nova.network.neutron [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:15:35 compute-0 nova_compute[248510]: 2025-12-13 09:15:35.058 248514 DEBUG oslo_concurrency.lockutils [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:15:36 compute-0 ceph-mon[76537]: pgmap v3434: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:15:36 compute-0 nova_compute[248510]: 2025-12-13 09:15:36.017 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Successfully updated port: 3b47d34a-6968-411d-8f9d-38a835c0fa77 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:15:36 compute-0 nova_compute[248510]: 2025-12-13 09:15:36.137 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:15:36 compute-0 nova_compute[248510]: 2025-12-13 09:15:36.138 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:15:36 compute-0 nova_compute[248510]: 2025-12-13 09:15:36.139 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:15:36 compute-0 nova_compute[248510]: 2025-12-13 09:15:36.270 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:15:36 compute-0 nova_compute[248510]: 2025-12-13 09:15:36.319 248514 DEBUG nova.compute.manager [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-changed-3b47d34a-6968-411d-8f9d-38a835c0fa77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:15:36 compute-0 nova_compute[248510]: 2025-12-13 09:15:36.319 248514 DEBUG nova.compute.manager [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing instance network info cache due to event network-changed-3b47d34a-6968-411d-8f9d-38a835c0fa77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:15:36 compute-0 nova_compute[248510]: 2025-12-13 09:15:36.319 248514 DEBUG oslo_concurrency.lockutils [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:15:36 compute-0 nova_compute[248510]: 2025-12-13 09:15:36.750 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:36 compute-0 nova_compute[248510]: 2025-12-13 09:15:36.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:15:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3435: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:15:37 compute-0 ovn_controller[148476]: 2025-12-13T09:15:37Z|01552|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec 13 09:15:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:15:38 compute-0 ceph-mon[76537]: pgmap v3435: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:15:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3436: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:15:38 compute-0 nova_compute[248510]: 2025-12-13 09:15:38.973 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.756 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.757 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Instance network_info: |[{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.758 248514 DEBUG oslo_concurrency.lockutils [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.759 248514 DEBUG nova.network.neutron [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing network info cache for port 3b47d34a-6968-411d-8f9d-38a835c0fa77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.763 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Start _get_guest_xml network_info=[{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.769 248514 WARNING nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.777 248514 DEBUG nova.virt.libvirt.host [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.778 248514 DEBUG nova.virt.libvirt.host [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.781 248514 DEBUG nova.virt.libvirt.host [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.782 248514 DEBUG nova.virt.libvirt.host [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.782 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.782 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.783 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.783 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.783 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.783 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.784 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.784 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.784 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.784 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.785 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.785 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:15:39 compute-0 nova_compute[248510]: 2025-12-13 09:15:39.788 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:15:40 compute-0 ceph-mon[76537]: pgmap v3436: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:15:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:15:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:15:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:15:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:15:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:15:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:15:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:15:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4016951900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:15:40 compute-0 nova_compute[248510]: 2025-12-13 09:15:40.380 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:15:40 compute-0 nova_compute[248510]: 2025-12-13 09:15:40.410 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:15:40 compute-0 nova_compute[248510]: 2025-12-13 09:15:40.415 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:15:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3437: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 812 KiB/s wr, 25 op/s
Dec 13 09:15:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:15:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3175098924' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:15:40 compute-0 nova_compute[248510]: 2025-12-13 09:15:40.966 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:15:40 compute-0 nova_compute[248510]: 2025-12-13 09:15:40.969 248514 DEBUG nova.virt.libvirt.vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:15:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1513449558',display_name='tempest-TestGettingAddress-server-1513449558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1513449558',id=145,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-g1atpgf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:15:26Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=44149dbe-362b-4930-a63b-d04c9a3b3b4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:15:40 compute-0 nova_compute[248510]: 2025-12-13 09:15:40.969 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:15:40 compute-0 nova_compute[248510]: 2025-12-13 09:15:40.970 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:15:40 compute-0 nova_compute[248510]: 2025-12-13 09:15:40.971 248514 DEBUG nova.virt.libvirt.vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:15:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1513449558',display_name='tempest-TestGettingAddress-server-1513449558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1513449558',id=145,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-g1atpgf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:15:26Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=44149dbe-362b-4930-a63b-d04c9a3b3b4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:15:40 compute-0 nova_compute[248510]: 2025-12-13 09:15:40.971 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:15:40 compute-0 nova_compute[248510]: 2025-12-13 09:15:40.972 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:15:40 compute-0 nova_compute[248510]: 2025-12-13 09:15:40.973 248514 DEBUG nova.objects.instance [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44149dbe-362b-4930-a63b-d04c9a3b3b4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:15:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4016951900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:15:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3175098924' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.615 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:15:41 compute-0 nova_compute[248510]:   <uuid>44149dbe-362b-4930-a63b-d04c9a3b3b4c</uuid>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   <name>instance-00000091</name>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-1513449558</nova:name>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:15:39</nova:creationTime>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <nova:port uuid="66abeeb9-b4e2-4901-9437-be8cd001222f">
Dec 13 09:15:41 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <nova:port uuid="3b47d34a-6968-411d-8f9d-38a835c0fa77">
Dec 13 09:15:41 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe1a:e9f0" ipVersion="6"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <system>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <entry name="serial">44149dbe-362b-4930-a63b-d04c9a3b3b4c</entry>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <entry name="uuid">44149dbe-362b-4930-a63b-d04c9a3b3b4c</entry>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     </system>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   <os>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   </os>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   <features>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   </features>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk">
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       </source>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk.config">
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       </source>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:15:41 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:08:09:e2"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <target dev="tap66abeeb9-b4"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:1a:e9:f0"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <target dev="tap3b47d34a-69"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/console.log" append="off"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <video>
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     </video>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:15:41 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:15:41 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:15:41 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:15:41 compute-0 nova_compute[248510]: </domain>
Dec 13 09:15:41 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.617 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Preparing to wait for external event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.618 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.618 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.618 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.619 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Preparing to wait for external event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.619 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.619 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.619 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.620 248514 DEBUG nova.virt.libvirt.vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:15:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1513449558',display_name='tempest-TestGettingAddress-server-1513449558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1513449558',id=145,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-g1atpgf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:15:26Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=44149dbe-362b-4930-a63b-d04c9a3b3b4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.620 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.621 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.621 248514 DEBUG os_vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.622 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.623 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.627 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.628 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66abeeb9-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.629 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66abeeb9-b4, col_values=(('external_ids', {'iface-id': '66abeeb9-b4e2-4901-9437-be8cd001222f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:09:e2', 'vm-uuid': '44149dbe-362b-4930-a63b-d04c9a3b3b4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:41 compute-0 NetworkManager[50376]: <info>  [1765617341.6328] manager: (tap66abeeb9-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.635 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.644 248514 INFO os_vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4')
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.645 248514 DEBUG nova.virt.libvirt.vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:15:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1513449558',display_name='tempest-TestGettingAddress-server-1513449558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1513449558',id=145,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-g1atpgf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:15:26Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=44149dbe-362b-4930-a63b-d04c9a3b3b4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.646 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.647 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.648 248514 DEBUG os_vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.649 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.650 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.653 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b47d34a-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.653 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b47d34a-69, col_values=(('external_ids', {'iface-id': '3b47d34a-6968-411d-8f9d-38a835c0fa77', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:e9:f0', 'vm-uuid': '44149dbe-362b-4930-a63b-d04c9a3b3b4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.655 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:41 compute-0 NetworkManager[50376]: <info>  [1765617341.6570] manager: (tap3b47d34a-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/642)
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.667 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:41 compute-0 nova_compute[248510]: 2025-12-13 09:15:41.669 248514 INFO os_vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69')
Dec 13 09:15:42 compute-0 ceph-mon[76537]: pgmap v3437: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 812 KiB/s wr, 25 op/s
Dec 13 09:15:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:15:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3438: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:42 compute-0 nova_compute[248510]: 2025-12-13 09:15:42.961 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:15:42 compute-0 nova_compute[248510]: 2025-12-13 09:15:42.961 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:15:42 compute-0 nova_compute[248510]: 2025-12-13 09:15:42.962 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:08:09:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:15:42 compute-0 nova_compute[248510]: 2025-12-13 09:15:42.962 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:1a:e9:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:15:42 compute-0 nova_compute[248510]: 2025-12-13 09:15:42.963 248514 INFO nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Using config drive
Dec 13 09:15:42 compute-0 nova_compute[248510]: 2025-12-13 09:15:42.995 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:15:43 compute-0 nova_compute[248510]: 2025-12-13 09:15:43.509 248514 INFO nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Creating config drive at /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/disk.config
Dec 13 09:15:43 compute-0 nova_compute[248510]: 2025-12-13 09:15:43.520 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq4jvc5cf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:15:43 compute-0 nova_compute[248510]: 2025-12-13 09:15:43.586 248514 DEBUG nova.network.neutron [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updated VIF entry in instance network info cache for port 3b47d34a-6968-411d-8f9d-38a835c0fa77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:15:43 compute-0 nova_compute[248510]: 2025-12-13 09:15:43.588 248514 DEBUG nova.network.neutron [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:15:43 compute-0 nova_compute[248510]: 2025-12-13 09:15:43.637 248514 DEBUG oslo_concurrency.lockutils [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:15:43 compute-0 nova_compute[248510]: 2025-12-13 09:15:43.695 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq4jvc5cf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:15:43 compute-0 nova_compute[248510]: 2025-12-13 09:15:43.731 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:15:43 compute-0 nova_compute[248510]: 2025-12-13 09:15:43.735 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/disk.config 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:15:43 compute-0 nova_compute[248510]: 2025-12-13 09:15:43.886 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/disk.config 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:15:43 compute-0 nova_compute[248510]: 2025-12-13 09:15:43.888 248514 INFO nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Deleting local config drive /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/disk.config because it was imported into RBD.
Dec 13 09:15:43 compute-0 NetworkManager[50376]: <info>  [1765617343.9590] manager: (tap66abeeb9-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/643)
Dec 13 09:15:43 compute-0 kernel: tap66abeeb9-b4: entered promiscuous mode
Dec 13 09:15:43 compute-0 nova_compute[248510]: 2025-12-13 09:15:43.962 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:43 compute-0 ovn_controller[148476]: 2025-12-13T09:15:43Z|01553|binding|INFO|Claiming lport 66abeeb9-b4e2-4901-9437-be8cd001222f for this chassis.
Dec 13 09:15:43 compute-0 ovn_controller[148476]: 2025-12-13T09:15:43Z|01554|binding|INFO|66abeeb9-b4e2-4901-9437-be8cd001222f: Claiming fa:16:3e:08:09:e2 10.100.0.7
Dec 13 09:15:43 compute-0 kernel: tap3b47d34a-69: entered promiscuous mode
Dec 13 09:15:43 compute-0 nova_compute[248510]: 2025-12-13 09:15:43.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:43 compute-0 NetworkManager[50376]: <info>  [1765617343.9816] manager: (tap3b47d34a-69): new Tun device (/org/freedesktop/NetworkManager/Devices/644)
Dec 13 09:15:43 compute-0 ovn_controller[148476]: 2025-12-13T09:15:43Z|01555|if_status|INFO|Dropped 2 log messages in last 108 seconds (most recently, 108 seconds ago) due to excessive rate
Dec 13 09:15:43 compute-0 ovn_controller[148476]: 2025-12-13T09:15:43Z|01556|if_status|INFO|Not updating pb chassis for 3b47d34a-6968-411d-8f9d-38a835c0fa77 now as sb is readonly
Dec 13 09:15:43 compute-0 ovn_controller[148476]: 2025-12-13T09:15:43Z|01557|binding|INFO|Claiming lport 3b47d34a-6968-411d-8f9d-38a835c0fa77 for this chassis.
Dec 13 09:15:43 compute-0 ovn_controller[148476]: 2025-12-13T09:15:43Z|01558|binding|INFO|3b47d34a-6968-411d-8f9d-38a835c0fa77: Claiming fa:16:3e:1a:e9:f0 2001:db8::f816:3eff:fe1a:e9f0
Dec 13 09:15:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:43.989 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:09:e2 10.100.0.7'], port_security=['fa:16:3e:08:09:e2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '44149dbe-362b-4930-a63b-d04c9a3b3b4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fc9ec4-4452-4225-b100-75f3859e091a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74bfbdc0-d183-4405-bf5b-7ce9dc4c0882, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=66abeeb9-b4e2-4901-9437-be8cd001222f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:15:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:43.993 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 66abeeb9-b4e2-4901-9437-be8cd001222f in datapath d3fc9ec4-4452-4225-b100-75f3859e091a bound to our chassis
Dec 13 09:15:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:43.995 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:e9:f0 2001:db8::f816:3eff:fe1a:e9f0'], port_security=['fa:16:3e:1a:e9:f0 2001:db8::f816:3eff:fe1a:e9f0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe1a:e9f0/64', 'neutron:device_id': '44149dbe-362b-4930-a63b-d04c9a3b3b4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=637bfd4d-c545-47be-afdb-625fd0ecaffb, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b47d34a-6968-411d-8f9d-38a835c0fa77) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:15:43 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:43.999 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3fc9ec4-4452-4225-b100-75f3859e091a
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.017 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dc6cd296-0ba4-4058-af46-871d71ea9e6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.019 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd3fc9ec4-41 in ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.021 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd3fc9ec4-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:15:44 compute-0 systemd-udevd[398660]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:15:44 compute-0 systemd-udevd[398661]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.021 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f9818e5a-c0a2-4b4d-9ac4-411383ea4ae8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.022 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[10c375dd-0a27-4f5c-a60f-8f7309987f00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 systemd-machined[210538]: New machine qemu-176-instance-00000091.
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.040 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c9086443-d4cc-4c57-9431-06049b61445c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 NetworkManager[50376]: <info>  [1765617344.0423] device (tap66abeeb9-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:15:44 compute-0 NetworkManager[50376]: <info>  [1765617344.0435] device (tap66abeeb9-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:15:44 compute-0 NetworkManager[50376]: <info>  [1765617344.0443] device (tap3b47d34a-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:15:44 compute-0 NetworkManager[50376]: <info>  [1765617344.0453] device (tap3b47d34a-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:15:44 compute-0 systemd[1]: Started Virtual Machine qemu-176-instance-00000091.
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.066 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfb2b0d-d987-4331-873f-20716b0db226]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:44 compute-0 ovn_controller[148476]: 2025-12-13T09:15:44Z|01559|binding|INFO|Setting lport 66abeeb9-b4e2-4901-9437-be8cd001222f ovn-installed in OVS
Dec 13 09:15:44 compute-0 ovn_controller[148476]: 2025-12-13T09:15:44Z|01560|binding|INFO|Setting lport 66abeeb9-b4e2-4901-9437-be8cd001222f up in Southbound
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.076 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:44 compute-0 ovn_controller[148476]: 2025-12-13T09:15:44Z|01561|binding|INFO|Setting lport 3b47d34a-6968-411d-8f9d-38a835c0fa77 ovn-installed in OVS
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.087 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.102 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c133e0-02db-4489-8fa1-a6da1c6f7fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 NetworkManager[50376]: <info>  [1765617344.1091] manager: (tapd3fc9ec4-40): new Veth device (/org/freedesktop/NetworkManager/Devices/645)
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.108 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[89fd99d5-46cb-40d9-931a-ec6ba9a47dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 ovn_controller[148476]: 2025-12-13T09:15:44Z|01562|binding|INFO|Setting lport 3b47d34a-6968-411d-8f9d-38a835c0fa77 up in Southbound
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.143 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[214a853a-9b85-48cf-b8fd-c1a119cea5c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.146 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bbaf9a95-42b0-466b-b0c4-f79c6893464c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 NetworkManager[50376]: <info>  [1765617344.1704] device (tapd3fc9ec4-40): carrier: link connected
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.178 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e97c620c-0de3-413f-9913-2eb3341c193d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 ceph-mon[76537]: pgmap v3438: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.197 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ac64a9df-21b5-4d78-93b8-39df7b61329e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fc9ec4-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:82:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 446], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992138, 'reachable_time': 20078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398694, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.221 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b2cb8c3e-18a8-43dd-a236-12e7137d62d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febc:82c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992138, 'tstamp': 992138}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398695, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.247 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57032f95-44cf-4464-b847-b46c3b55840a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fc9ec4-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:82:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 446], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992138, 'reachable_time': 20078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398696, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.297 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[753daf97-5e5f-40ef-9d5c-b7ed8073bdce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.378 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6497679a-14ff-4875-8a73-54e30ee487ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.381 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fc9ec4-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.381 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.382 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3fc9ec4-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.384 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:44 compute-0 NetworkManager[50376]: <info>  [1765617344.3849] manager: (tapd3fc9ec4-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/646)
Dec 13 09:15:44 compute-0 kernel: tapd3fc9ec4-40: entered promiscuous mode
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.386 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3fc9ec4-40, col_values=(('external_ids', {'iface-id': 'f954b813-27c7-46fa-b00a-23e0bb5d820d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.387 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:44 compute-0 ovn_controller[148476]: 2025-12-13T09:15:44Z|01563|binding|INFO|Releasing lport f954b813-27c7-46fa-b00a-23e0bb5d820d from this chassis (sb_readonly=0)
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.405 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d3fc9ec4-4452-4225-b100-75f3859e091a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d3fc9ec4-4452-4225-b100-75f3859e091a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.407 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62a9b8a6-4658-40ca-bab3-f22f8fce182c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.407 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-d3fc9ec4-4452-4225-b100-75f3859e091a
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/d3fc9ec4-4452-4225-b100-75f3859e091a.pid.haproxy
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID d3fc9ec4-4452-4225-b100-75f3859e091a
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.409 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'env', 'PROCESS_TAG=haproxy-d3fc9ec4-4452-4225-b100-75f3859e091a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d3fc9ec4-4452-4225-b100-75f3859e091a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.667 248514 DEBUG nova.compute.manager [req-daac2a19-454a-425a-9505-dc7130eae2a5 req-01fc90bd-5f2a-4ba1-95a4-1e8e70e356b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.668 248514 DEBUG oslo_concurrency.lockutils [req-daac2a19-454a-425a-9505-dc7130eae2a5 req-01fc90bd-5f2a-4ba1-95a4-1e8e70e356b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.668 248514 DEBUG oslo_concurrency.lockutils [req-daac2a19-454a-425a-9505-dc7130eae2a5 req-01fc90bd-5f2a-4ba1-95a4-1e8e70e356b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.668 248514 DEBUG oslo_concurrency.lockutils [req-daac2a19-454a-425a-9505-dc7130eae2a5 req-01fc90bd-5f2a-4ba1-95a4-1e8e70e356b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.669 248514 DEBUG nova.compute.manager [req-daac2a19-454a-425a-9505-dc7130eae2a5 req-01fc90bd-5f2a-4ba1-95a4-1e8e70e356b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Processing event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.688 248514 DEBUG nova.compute.manager [req-cf83ad62-100d-47e7-81fb-2a0879a68b95 req-d1a96109-15b2-42fd-8f59-a67d4bad6d8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.688 248514 DEBUG oslo_concurrency.lockutils [req-cf83ad62-100d-47e7-81fb-2a0879a68b95 req-d1a96109-15b2-42fd-8f59-a67d4bad6d8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.689 248514 DEBUG oslo_concurrency.lockutils [req-cf83ad62-100d-47e7-81fb-2a0879a68b95 req-d1a96109-15b2-42fd-8f59-a67d4bad6d8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.689 248514 DEBUG oslo_concurrency.lockutils [req-cf83ad62-100d-47e7-81fb-2a0879a68b95 req-d1a96109-15b2-42fd-8f59-a67d4bad6d8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.690 248514 DEBUG nova.compute.manager [req-cf83ad62-100d-47e7-81fb-2a0879a68b95 req-d1a96109-15b2-42fd-8f59-a67d4bad6d8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Processing event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.690 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617344.6895018, 44149dbe-362b-4930-a63b-d04c9a3b3b4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.690 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] VM Started (Lifecycle Event)
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.693 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.696 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.701 248514 INFO nova.virt.libvirt.driver [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Instance spawned successfully.
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.701 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.724 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.731 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.736 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.736 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.736 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.737 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.737 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.737 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.770 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.771 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617344.689662, 44149dbe-362b-4930-a63b-d04c9a3b3b4c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.771 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] VM Paused (Lifecycle Event)
Dec 13 09:15:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.808 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.810 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.813 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617344.6956758, 44149dbe-362b-4930-a63b-d04c9a3b3b4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.813 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] VM Resumed (Lifecycle Event)
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.824 248514 INFO nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Took 18.00 seconds to spawn the instance on the hypervisor.
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.825 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.854 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.856 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:15:44 compute-0 podman[398771]: 2025-12-13 09:15:44.87925182 +0000 UTC m=+0.093676215 container create 6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:15:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3439: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.889 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.903 248514 INFO nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Took 19.18 seconds to build instance.
Dec 13 09:15:44 compute-0 podman[398771]: 2025-12-13 09:15:44.807972834 +0000 UTC m=+0.022397259 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:15:44 compute-0 nova_compute[248510]: 2025-12-13 09:15:44.919 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:44 compute-0 systemd[1]: Started libpod-conmon-6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb.scope.
Dec 13 09:15:44 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72ad838abac5f9443ea2e98c9a253d9a9467a978cbeac3dbb207a420c8706cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:44 compute-0 podman[398771]: 2025-12-13 09:15:44.993692741 +0000 UTC m=+0.208117156 container init 6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 09:15:45 compute-0 podman[398771]: 2025-12-13 09:15:45.000561642 +0000 UTC m=+0.214986057 container start 6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:15:45 compute-0 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [NOTICE]   (398790) : New worker (398792) forked
Dec 13 09:15:45 compute-0 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [NOTICE]   (398790) : Loading success.
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.070 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b47d34a-6968-411d-8f9d-38a835c0fa77 in datapath 9f757ab8-2a8f-4771-8492-2bcf521016bb unbound from our chassis
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.072 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f757ab8-2a8f-4771-8492-2bcf521016bb
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.085 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[726093cb-7575-43c2-822c-45bd5fa920b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.087 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f757ab8-21 in ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.089 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f757ab8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.089 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c8bed7-ec4f-49a4-9a6d-79dd45b37dc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.090 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5814855e-61e5-450f-8973-0c3fada5a30e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.102 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[df8e8183-545a-4ea2-a98a-8e0f6c0c53c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.118 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f234cdc5-6f4b-4ee8-92c2-1b97968115a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.155 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[555bce7b-629c-4f8f-8159-d3c467b35c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.161 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c227b4c0-4f8e-47b4-bf21-1130f1eb6279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 NetworkManager[50376]: <info>  [1765617345.1631] manager: (tap9f757ab8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/647)
Dec 13 09:15:45 compute-0 systemd-udevd[398690]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.218 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[509606e9-c9e8-4730-a9ea-f49fb270c046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.224 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6562d251-d3b2-48a6-991b-e2a998ae26a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 NetworkManager[50376]: <info>  [1765617345.2554] device (tap9f757ab8-20): carrier: link connected
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.261 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eae2a628-ad93-4785-bd30-8766338daaa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.279 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[02b99b13-bfa1-4ded-a5fc-288ff8c59d77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f757ab8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c4:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992247, 'reachable_time': 37493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398811, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.298 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53a8d9ed-6180-4c8e-aa17-8158bae88596]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:c4fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992247, 'tstamp': 992247}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398812, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.320 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa367df1-e34b-49d4-ab94-6da42bce2bbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f757ab8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c4:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992247, 'reachable_time': 37493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398813, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.357 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[251ddd37-91c8-4af1-b802-6dd566937f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.404 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2154ef2c-0019-42b8-b7e8-ea6cbd4838d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.408 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f757ab8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.409 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.409 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f757ab8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:45 compute-0 nova_compute[248510]: 2025-12-13 09:15:45.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:45 compute-0 kernel: tap9f757ab8-20: entered promiscuous mode
Dec 13 09:15:45 compute-0 NetworkManager[50376]: <info>  [1765617345.4125] manager: (tap9f757ab8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/648)
Dec 13 09:15:45 compute-0 nova_compute[248510]: 2025-12-13 09:15:45.414 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.417 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f757ab8-20, col_values=(('external_ids', {'iface-id': '8c81cc53-1ad8-47d5-9def-f8710a05a423'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:45 compute-0 nova_compute[248510]: 2025-12-13 09:15:45.418 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:45 compute-0 ovn_controller[148476]: 2025-12-13T09:15:45Z|01564|binding|INFO|Releasing lport 8c81cc53-1ad8-47d5-9def-f8710a05a423 from this chassis (sb_readonly=0)
Dec 13 09:15:45 compute-0 nova_compute[248510]: 2025-12-13 09:15:45.433 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.435 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f757ab8-2a8f-4771-8492-2bcf521016bb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f757ab8-2a8f-4771-8492-2bcf521016bb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.436 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f706cf-f5cf-421c-a235-9db849e3b715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.437 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-9f757ab8-2a8f-4771-8492-2bcf521016bb
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/9f757ab8-2a8f-4771-8492-2bcf521016bb.pid.haproxy
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID 9f757ab8-2a8f-4771-8492-2bcf521016bb
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:15:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.439 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'env', 'PROCESS_TAG=haproxy-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f757ab8-2a8f-4771-8492-2bcf521016bb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:15:45 compute-0 sudo[398822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:15:45 compute-0 sudo[398822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:15:45 compute-0 sudo[398822]: pam_unix(sudo:session): session closed for user root
Dec 13 09:15:45 compute-0 nova_compute[248510]: 2025-12-13 09:15:45.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:15:45 compute-0 sudo[398857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:15:45 compute-0 sudo[398857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:15:45 compute-0 podman[398891]: 2025-12-13 09:15:45.875027462 +0000 UTC m=+0.059225837 container create 2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:15:45 compute-0 systemd[1]: Started libpod-conmon-2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a.scope.
Dec 13 09:15:45 compute-0 podman[398891]: 2025-12-13 09:15:45.84125683 +0000 UTC m=+0.025455225 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:15:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f319c184b06470198864244c31cd681dddd4dfb8c67f2ea0cdfe3b222afde25a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:45 compute-0 podman[398891]: 2025-12-13 09:15:45.959319392 +0000 UTC m=+0.143517777 container init 2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:15:45 compute-0 podman[398891]: 2025-12-13 09:15:45.965629359 +0000 UTC m=+0.149827744 container start 2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 09:15:45 compute-0 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [NOTICE]   (398911) : New worker (398913) forked
Dec 13 09:15:45 compute-0 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [NOTICE]   (398911) : Loading success.
Dec 13 09:15:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:46.033 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:15:46 compute-0 ceph-mon[76537]: pgmap v3439: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Dec 13 09:15:46 compute-0 sudo[398857]: pam_unix(sudo:session): session closed for user root
Dec 13 09:15:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:15:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:15:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:15:46 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:15:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:15:46 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:15:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:15:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:15:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:15:46 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:15:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:15:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:15:46 compute-0 sudo[398952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:15:46 compute-0 sudo[398952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:15:46 compute-0 sudo[398952]: pam_unix(sudo:session): session closed for user root
Dec 13 09:15:46 compute-0 sudo[398977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:15:46 compute-0 sudo[398977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.656 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.803 248514 DEBUG nova.compute.manager [req-fbfc73b6-d644-449f-86c3-896020825ce0 req-856e9641-286a-4b66-89d5-ad7cd8907310 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.804 248514 DEBUG oslo_concurrency.lockutils [req-fbfc73b6-d644-449f-86c3-896020825ce0 req-856e9641-286a-4b66-89d5-ad7cd8907310 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.804 248514 DEBUG oslo_concurrency.lockutils [req-fbfc73b6-d644-449f-86c3-896020825ce0 req-856e9641-286a-4b66-89d5-ad7cd8907310 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.804 248514 DEBUG oslo_concurrency.lockutils [req-fbfc73b6-d644-449f-86c3-896020825ce0 req-856e9641-286a-4b66-89d5-ad7cd8907310 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.804 248514 DEBUG nova.compute.manager [req-fbfc73b6-d644-449f-86c3-896020825ce0 req-856e9641-286a-4b66-89d5-ad7cd8907310 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] No waiting events found dispatching network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.805 248514 WARNING nova.compute.manager [req-fbfc73b6-d644-449f-86c3-896020825ce0 req-856e9641-286a-4b66-89d5-ad7cd8907310 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received unexpected event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f for instance with vm_state active and task_state None.
Dec 13 09:15:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3440: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.957 248514 DEBUG nova.compute.manager [req-51b28370-993b-4fa0-8771-913c4a63aa07 req-3013be1c-4bff-4d2c-a7c1-bb27f5c1f9a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.957 248514 DEBUG oslo_concurrency.lockutils [req-51b28370-993b-4fa0-8771-913c4a63aa07 req-3013be1c-4bff-4d2c-a7c1-bb27f5c1f9a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.958 248514 DEBUG oslo_concurrency.lockutils [req-51b28370-993b-4fa0-8771-913c4a63aa07 req-3013be1c-4bff-4d2c-a7c1-bb27f5c1f9a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.959 248514 DEBUG oslo_concurrency.lockutils [req-51b28370-993b-4fa0-8771-913c4a63aa07 req-3013be1c-4bff-4d2c-a7c1-bb27f5c1f9a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.959 248514 DEBUG nova.compute.manager [req-51b28370-993b-4fa0-8771-913c4a63aa07 req-3013be1c-4bff-4d2c-a7c1-bb27f5c1f9a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] No waiting events found dispatching network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:15:46 compute-0 nova_compute[248510]: 2025-12-13 09:15:46.960 248514 WARNING nova.compute.manager [req-51b28370-993b-4fa0-8771-913c4a63aa07 req-3013be1c-4bff-4d2c-a7c1-bb27f5c1f9a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received unexpected event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 for instance with vm_state active and task_state None.
Dec 13 09:15:47 compute-0 podman[399016]: 2025-12-13 09:15:47.021419716 +0000 UTC m=+0.079303317 container create 1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:15:47 compute-0 podman[399016]: 2025-12-13 09:15:46.983755937 +0000 UTC m=+0.041639548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:15:47 compute-0 systemd[1]: Started libpod-conmon-1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2.scope.
Dec 13 09:15:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:15:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:15:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:15:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:15:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:15:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:15:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:15:47 compute-0 podman[399016]: 2025-12-13 09:15:47.385816476 +0000 UTC m=+0.443700147 container init 1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chatelet, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:15:47 compute-0 podman[399016]: 2025-12-13 09:15:47.401200069 +0000 UTC m=+0.459083690 container start 1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 09:15:47 compute-0 strange_chatelet[399032]: 167 167
Dec 13 09:15:47 compute-0 systemd[1]: libpod-1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2.scope: Deactivated successfully.
Dec 13 09:15:47 compute-0 podman[399016]: 2025-12-13 09:15:47.413758592 +0000 UTC m=+0.471642263 container attach 1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chatelet, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:15:47 compute-0 podman[399016]: 2025-12-13 09:15:47.414704185 +0000 UTC m=+0.472587806 container died 1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chatelet, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-74057b586b1a3c924a34bb7263d0dd78f5b14ff80544de0d85793a0b0e68633c-merged.mount: Deactivated successfully.
Dec 13 09:15:47 compute-0 podman[399016]: 2025-12-13 09:15:47.469843499 +0000 UTC m=+0.527727080 container remove 1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 09:15:47 compute-0 systemd[1]: libpod-conmon-1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2.scope: Deactivated successfully.
Dec 13 09:15:47 compute-0 podman[399056]: 2025-12-13 09:15:47.665179387 +0000 UTC m=+0.042284115 container create d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:15:47 compute-0 systemd[1]: Started libpod-conmon-d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828.scope.
Dec 13 09:15:47 compute-0 podman[399056]: 2025-12-13 09:15:47.649170128 +0000 UTC m=+0.026274876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:15:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bca42ce734c6ed5a3216f77b2af6bbe9b60163d5312d2a9ebeea3eaca3b67af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bca42ce734c6ed5a3216f77b2af6bbe9b60163d5312d2a9ebeea3eaca3b67af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bca42ce734c6ed5a3216f77b2af6bbe9b60163d5312d2a9ebeea3eaca3b67af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bca42ce734c6ed5a3216f77b2af6bbe9b60163d5312d2a9ebeea3eaca3b67af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bca42ce734c6ed5a3216f77b2af6bbe9b60163d5312d2a9ebeea3eaca3b67af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:47 compute-0 podman[399056]: 2025-12-13 09:15:47.782559471 +0000 UTC m=+0.159664229 container init d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 09:15:47 compute-0 podman[399056]: 2025-12-13 09:15:47.796917769 +0000 UTC m=+0.174022497 container start d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 09:15:47 compute-0 podman[399056]: 2025-12-13 09:15:47.807031581 +0000 UTC m=+0.184136339 container attach d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:15:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:15:48 compute-0 sweet_davinci[399073]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:15:48 compute-0 sweet_davinci[399073]: --> All data devices are unavailable
Dec 13 09:15:48 compute-0 systemd[1]: libpod-d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828.scope: Deactivated successfully.
Dec 13 09:15:48 compute-0 podman[399056]: 2025-12-13 09:15:48.381932546 +0000 UTC m=+0.759037274 container died d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:15:48 compute-0 ceph-mon[76537]: pgmap v3440: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Dec 13 09:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bca42ce734c6ed5a3216f77b2af6bbe9b60163d5312d2a9ebeea3eaca3b67af-merged.mount: Deactivated successfully.
Dec 13 09:15:48 compute-0 podman[399056]: 2025-12-13 09:15:48.447446119 +0000 UTC m=+0.824550847 container remove d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:15:48 compute-0 systemd[1]: libpod-conmon-d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828.scope: Deactivated successfully.
Dec 13 09:15:48 compute-0 sudo[398977]: pam_unix(sudo:session): session closed for user root
Dec 13 09:15:48 compute-0 sudo[399106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:15:48 compute-0 sudo[399106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:15:48 compute-0 sudo[399106]: pam_unix(sudo:session): session closed for user root
Dec 13 09:15:48 compute-0 sudo[399131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:15:48 compute-0 sudo[399131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:15:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3441: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 44 op/s
Dec 13 09:15:48 compute-0 podman[399170]: 2025-12-13 09:15:48.95880901 +0000 UTC m=+0.049994236 container create 5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_boyd, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 09:15:48 compute-0 systemd[1]: Started libpod-conmon-5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3.scope.
Dec 13 09:15:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:15:49 compute-0 podman[399170]: 2025-12-13 09:15:48.939218912 +0000 UTC m=+0.030404128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:15:49 compute-0 podman[399170]: 2025-12-13 09:15:49.052854954 +0000 UTC m=+0.144040200 container init 5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_boyd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 09:15:49 compute-0 podman[399170]: 2025-12-13 09:15:49.064705019 +0000 UTC m=+0.155890255 container start 5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:15:49 compute-0 podman[399170]: 2025-12-13 09:15:49.068914824 +0000 UTC m=+0.160100070 container attach 5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_boyd, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:15:49 compute-0 festive_boyd[399186]: 167 167
Dec 13 09:15:49 compute-0 systemd[1]: libpod-5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3.scope: Deactivated successfully.
Dec 13 09:15:49 compute-0 conmon[399186]: conmon 5c29feabcaf12f6a2140 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3.scope/container/memory.events
Dec 13 09:15:49 compute-0 podman[399170]: 2025-12-13 09:15:49.107064604 +0000 UTC m=+0.198249800 container died 5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_boyd, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 09:15:49 compute-0 nova_compute[248510]: 2025-12-13 09:15:49.108 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-abbcfe0964e2c6444d1fbe8a0afe41d9c79f0667730f1d583c204f858b609767-merged.mount: Deactivated successfully.
Dec 13 09:15:49 compute-0 podman[399170]: 2025-12-13 09:15:49.162170128 +0000 UTC m=+0.253355334 container remove 5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 09:15:49 compute-0 systemd[1]: libpod-conmon-5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3.scope: Deactivated successfully.
Dec 13 09:15:49 compute-0 podman[399209]: 2025-12-13 09:15:49.436539514 +0000 UTC m=+0.070593240 container create 3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:15:49 compute-0 ceph-mon[76537]: pgmap v3441: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 44 op/s
Dec 13 09:15:49 compute-0 podman[399209]: 2025-12-13 09:15:49.403331447 +0000 UTC m=+0.037385223 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:15:49 compute-0 systemd[1]: Started libpod-conmon-3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab.scope.
Dec 13 09:15:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a42790e8ec5733dbcf5e7bb893eb4e5ff5a1eafd180fda7cd5ff7c13b052d05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a42790e8ec5733dbcf5e7bb893eb4e5ff5a1eafd180fda7cd5ff7c13b052d05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a42790e8ec5733dbcf5e7bb893eb4e5ff5a1eafd180fda7cd5ff7c13b052d05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a42790e8ec5733dbcf5e7bb893eb4e5ff5a1eafd180fda7cd5ff7c13b052d05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:49 compute-0 podman[399209]: 2025-12-13 09:15:49.570719278 +0000 UTC m=+0.204772964 container init 3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:15:49 compute-0 podman[399209]: 2025-12-13 09:15:49.5800336 +0000 UTC m=+0.214087296 container start 3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 09:15:49 compute-0 podman[399209]: 2025-12-13 09:15:49.656220148 +0000 UTC m=+0.290273864 container attach 3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_curie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:15:49 compute-0 quizzical_curie[399225]: {
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:     "0": [
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:         {
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "devices": [
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "/dev/loop3"
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             ],
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_name": "ceph_lv0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_size": "21470642176",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "name": "ceph_lv0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "tags": {
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.cluster_name": "ceph",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.crush_device_class": "",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.encrypted": "0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.objectstore": "bluestore",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.osd_id": "0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.type": "block",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.vdo": "0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.with_tpm": "0"
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             },
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "type": "block",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "vg_name": "ceph_vg0"
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:         }
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:     ],
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:     "1": [
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:         {
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "devices": [
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "/dev/loop4"
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             ],
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_name": "ceph_lv1",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_size": "21470642176",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "name": "ceph_lv1",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "tags": {
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.cluster_name": "ceph",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.crush_device_class": "",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.encrypted": "0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.objectstore": "bluestore",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.osd_id": "1",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.type": "block",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.vdo": "0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.with_tpm": "0"
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             },
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "type": "block",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "vg_name": "ceph_vg1"
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:         }
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:     ],
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:     "2": [
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:         {
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "devices": [
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "/dev/loop5"
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             ],
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_name": "ceph_lv2",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_size": "21470642176",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "name": "ceph_lv2",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "tags": {
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.cluster_name": "ceph",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.crush_device_class": "",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.encrypted": "0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.objectstore": "bluestore",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.osd_id": "2",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.type": "block",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.vdo": "0",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:                 "ceph.with_tpm": "0"
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             },
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "type": "block",
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:             "vg_name": "ceph_vg2"
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:         }
Dec 13 09:15:49 compute-0 quizzical_curie[399225]:     ]
Dec 13 09:15:49 compute-0 quizzical_curie[399225]: }
Dec 13 09:15:49 compute-0 systemd[1]: libpod-3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab.scope: Deactivated successfully.
Dec 13 09:15:49 compute-0 podman[399209]: 2025-12-13 09:15:49.892224978 +0000 UTC m=+0.526278694 container died 3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_curie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a42790e8ec5733dbcf5e7bb893eb4e5ff5a1eafd180fda7cd5ff7c13b052d05-merged.mount: Deactivated successfully.
Dec 13 09:15:49 compute-0 podman[399209]: 2025-12-13 09:15:49.955608258 +0000 UTC m=+0.589661944 container remove 3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_curie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:15:49 compute-0 systemd[1]: libpod-conmon-3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab.scope: Deactivated successfully.
Dec 13 09:15:49 compute-0 sudo[399131]: pam_unix(sudo:session): session closed for user root
Dec 13 09:15:50 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:50.040 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:15:50 compute-0 sudo[399245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:15:50 compute-0 sudo[399245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:15:50 compute-0 sudo[399245]: pam_unix(sudo:session): session closed for user root
Dec 13 09:15:50 compute-0 sudo[399270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:15:50 compute-0 sudo[399270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:15:50 compute-0 podman[399307]: 2025-12-13 09:15:50.493202603 +0000 UTC m=+0.056626972 container create 5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_swirles, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:15:50 compute-0 systemd[1]: Started libpod-conmon-5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b.scope.
Dec 13 09:15:50 compute-0 podman[399307]: 2025-12-13 09:15:50.464264982 +0000 UTC m=+0.027689411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:15:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:15:50 compute-0 podman[399307]: 2025-12-13 09:15:50.606401843 +0000 UTC m=+0.169826232 container init 5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 09:15:50 compute-0 podman[399307]: 2025-12-13 09:15:50.616153466 +0000 UTC m=+0.179577815 container start 5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:15:50 compute-0 boring_swirles[399323]: 167 167
Dec 13 09:15:50 compute-0 podman[399307]: 2025-12-13 09:15:50.623477218 +0000 UTC m=+0.186901577 container attach 5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_swirles, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:15:50 compute-0 systemd[1]: libpod-5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b.scope: Deactivated successfully.
Dec 13 09:15:50 compute-0 podman[399307]: 2025-12-13 09:15:50.624713529 +0000 UTC m=+0.188137898 container died 5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_swirles, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:15:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c85ddf19f596662a01d7fbc58fb15bea37b58720d198e143415268d59e6db68b-merged.mount: Deactivated successfully.
Dec 13 09:15:50 compute-0 podman[399307]: 2025-12-13 09:15:50.679870393 +0000 UTC m=+0.243294772 container remove 5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 09:15:50 compute-0 systemd[1]: libpod-conmon-5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b.scope: Deactivated successfully.
Dec 13 09:15:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3442: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:15:50 compute-0 podman[399347]: 2025-12-13 09:15:50.90606172 +0000 UTC m=+0.050718775 container create ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_golick, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:15:50 compute-0 systemd[1]: Started libpod-conmon-ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347.scope.
Dec 13 09:15:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:15:50 compute-0 podman[399347]: 2025-12-13 09:15:50.881849406 +0000 UTC m=+0.026506491 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735fd7be7d6db211db5e8b3d262f648fad1e84b9c6f74ee743f0dbd1b5d802e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735fd7be7d6db211db5e8b3d262f648fad1e84b9c6f74ee743f0dbd1b5d802e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735fd7be7d6db211db5e8b3d262f648fad1e84b9c6f74ee743f0dbd1b5d802e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735fd7be7d6db211db5e8b3d262f648fad1e84b9c6f74ee743f0dbd1b5d802e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:15:51 compute-0 podman[399347]: 2025-12-13 09:15:51.015895976 +0000 UTC m=+0.160553051 container init ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_golick, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 09:15:51 compute-0 podman[399347]: 2025-12-13 09:15:51.025354492 +0000 UTC m=+0.170011547 container start ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 09:15:51 compute-0 podman[399347]: 2025-12-13 09:15:51.02927971 +0000 UTC m=+0.173936765 container attach ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_golick, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 09:15:51 compute-0 nova_compute[248510]: 2025-12-13 09:15:51.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:51 compute-0 lvm[399441]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:15:51 compute-0 lvm[399442]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:15:51 compute-0 lvm[399441]: VG ceph_vg0 finished
Dec 13 09:15:51 compute-0 lvm[399442]: VG ceph_vg1 finished
Dec 13 09:15:51 compute-0 lvm[399444]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:15:51 compute-0 lvm[399444]: VG ceph_vg2 finished
Dec 13 09:15:51 compute-0 ovn_controller[148476]: 2025-12-13T09:15:51Z|01565|binding|INFO|Releasing lport 8c81cc53-1ad8-47d5-9def-f8710a05a423 from this chassis (sb_readonly=0)
Dec 13 09:15:51 compute-0 ovn_controller[148476]: 2025-12-13T09:15:51Z|01566|binding|INFO|Releasing lport f954b813-27c7-46fa-b00a-23e0bb5d820d from this chassis (sb_readonly=0)
Dec 13 09:15:51 compute-0 NetworkManager[50376]: <info>  [1765617351.8408] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/649)
Dec 13 09:15:51 compute-0 nova_compute[248510]: 2025-12-13 09:15:51.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:51 compute-0 NetworkManager[50376]: <info>  [1765617351.8424] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Dec 13 09:15:51 compute-0 ovn_controller[148476]: 2025-12-13T09:15:51Z|01567|binding|INFO|Releasing lport 8c81cc53-1ad8-47d5-9def-f8710a05a423 from this chassis (sb_readonly=0)
Dec 13 09:15:51 compute-0 ovn_controller[148476]: 2025-12-13T09:15:51Z|01568|binding|INFO|Releasing lport f954b813-27c7-46fa-b00a-23e0bb5d820d from this chassis (sb_readonly=0)
Dec 13 09:15:51 compute-0 nova_compute[248510]: 2025-12-13 09:15:51.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:51 compute-0 nova_compute[248510]: 2025-12-13 09:15:51.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:51 compute-0 nostalgic_golick[399363]: {}
Dec 13 09:15:51 compute-0 systemd[1]: libpod-ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347.scope: Deactivated successfully.
Dec 13 09:15:51 compute-0 podman[399347]: 2025-12-13 09:15:51.962900403 +0000 UTC m=+1.107557458 container died ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:15:51 compute-0 systemd[1]: libpod-ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347.scope: Consumed 1.494s CPU time.
Dec 13 09:15:51 compute-0 ceph-mon[76537]: pgmap v3442: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-735fd7be7d6db211db5e8b3d262f648fad1e84b9c6f74ee743f0dbd1b5d802e5-merged.mount: Deactivated successfully.
Dec 13 09:15:52 compute-0 podman[399347]: 2025-12-13 09:15:52.028941939 +0000 UTC m=+1.173598994 container remove ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 09:15:52 compute-0 systemd[1]: libpod-conmon-ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347.scope: Deactivated successfully.
Dec 13 09:15:52 compute-0 sudo[399270]: pam_unix(sudo:session): session closed for user root
Dec 13 09:15:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:15:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:15:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:15:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:15:52 compute-0 sudo[399459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:15:52 compute-0 sudo[399459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:15:52 compute-0 sudo[399459]: pam_unix(sudo:session): session closed for user root
Dec 13 09:15:52 compute-0 nova_compute[248510]: 2025-12-13 09:15:52.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:15:52 compute-0 nova_compute[248510]: 2025-12-13 09:15:52.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 09:15:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:15:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3443: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:15:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:15:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:15:53 compute-0 nova_compute[248510]: 2025-12-13 09:15:53.700 248514 DEBUG nova.compute.manager [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-changed-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:15:53 compute-0 nova_compute[248510]: 2025-12-13 09:15:53.701 248514 DEBUG nova.compute.manager [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing instance network info cache due to event network-changed-66abeeb9-b4e2-4901-9437-be8cd001222f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:15:53 compute-0 nova_compute[248510]: 2025-12-13 09:15:53.701 248514 DEBUG oslo_concurrency.lockutils [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:15:53 compute-0 nova_compute[248510]: 2025-12-13 09:15:53.702 248514 DEBUG oslo_concurrency.lockutils [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:15:53 compute-0 nova_compute[248510]: 2025-12-13 09:15:53.702 248514 DEBUG nova.network.neutron [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing network info cache for port 66abeeb9-b4e2-4901-9437-be8cd001222f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:15:54 compute-0 nova_compute[248510]: 2025-12-13 09:15:54.108 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:54 compute-0 ceph-mon[76537]: pgmap v3443: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:15:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3444: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:15:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:55.452 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:15:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:55.453 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:15:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:15:55.455 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:15:56 compute-0 ceph-mon[76537]: pgmap v3444: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:15:56 compute-0 nova_compute[248510]: 2025-12-13 09:15:56.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:15:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3445: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 68 op/s
Dec 13 09:15:57 compute-0 nova_compute[248510]: 2025-12-13 09:15:57.683 248514 DEBUG nova.network.neutron [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updated VIF entry in instance network info cache for port 66abeeb9-b4e2-4901-9437-be8cd001222f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:15:57 compute-0 nova_compute[248510]: 2025-12-13 09:15:57.684 248514 DEBUG nova.network.neutron [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:15:57 compute-0 nova_compute[248510]: 2025-12-13 09:15:57.708 248514 DEBUG oslo_concurrency.lockutils [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:15:57 compute-0 nova_compute[248510]: 2025-12-13 09:15:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:15:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.071471) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617358071546, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 765, "num_deletes": 251, "total_data_size": 1027448, "memory_usage": 1041872, "flush_reason": "Manual Compaction"}
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617358089424, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 651224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68261, "largest_seqno": 69025, "table_properties": {"data_size": 647948, "index_size": 1119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8809, "raw_average_key_size": 20, "raw_value_size": 640984, "raw_average_value_size": 1501, "num_data_blocks": 50, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617295, "oldest_key_time": 1765617295, "file_creation_time": 1765617358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 17997 microseconds, and 4615 cpu microseconds.
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.089474) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 651224 bytes OK
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.089497) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.098838) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.098898) EVENT_LOG_v1 {"time_micros": 1765617358098885, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.098935) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1023570, prev total WAL file size 1023570, number of live WAL files 2.
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.099872) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373631' seq:72057594037927935, type:22 .. '6D6772737461740033303133' seq:0, type:0; will stop at (end)
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(635KB)], [161(11MB)]
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617358099950, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12591375, "oldest_snapshot_seqno": -1}
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8685 keys, 9475647 bytes, temperature: kUnknown
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617358198690, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9475647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9422809, "index_size": 30005, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 228384, "raw_average_key_size": 26, "raw_value_size": 9273167, "raw_average_value_size": 1067, "num_data_blocks": 1152, "num_entries": 8685, "num_filter_entries": 8685, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.198944) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9475647 bytes
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.201247) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.4 rd, 95.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.4 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(33.9) write-amplify(14.6) OK, records in: 9173, records dropped: 488 output_compression: NoCompression
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.201264) EVENT_LOG_v1 {"time_micros": 1765617358201256, "job": 100, "event": "compaction_finished", "compaction_time_micros": 98807, "compaction_time_cpu_micros": 45732, "output_level": 6, "num_output_files": 1, "total_output_size": 9475647, "num_input_records": 9173, "num_output_records": 8685, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617358201512, "job": 100, "event": "table_file_deletion", "file_number": 163}
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617358203459, "job": 100, "event": "table_file_deletion", "file_number": 161}
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.099808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.203496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.203500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.203644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.203653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:15:58 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.203655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:15:58 compute-0 ceph-mon[76537]: pgmap v3445: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 68 op/s
Dec 13 09:15:58 compute-0 ovn_controller[148476]: 2025-12-13T09:15:58Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:09:e2 10.100.0.7
Dec 13 09:15:58 compute-0 ovn_controller[148476]: 2025-12-13T09:15:58Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:09:e2 10.100.0.7
Dec 13 09:15:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3446: 321 pgs: 321 active+clean; 92 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 693 KiB/s wr, 86 op/s
Dec 13 09:15:59 compute-0 nova_compute[248510]: 2025-12-13 09:15:59.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:00 compute-0 ceph-mon[76537]: pgmap v3446: 321 pgs: 321 active+clean; 92 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 693 KiB/s wr, 86 op/s
Dec 13 09:16:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3447: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Dec 13 09:16:01 compute-0 nova_compute[248510]: 2025-12-13 09:16:01.665 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:02 compute-0 ceph-mon[76537]: pgmap v3447: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Dec 13 09:16:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3448: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Dec 13 09:16:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:16:03 compute-0 ceph-mon[76537]: pgmap v3448: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Dec 13 09:16:04 compute-0 podman[399487]: 2025-12-13 09:16:04.011353295 +0000 UTC m=+0.084472126 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 13 09:16:04 compute-0 podman[399486]: 2025-12-13 09:16:04.041296281 +0000 UTC m=+0.123020646 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:16:04 compute-0 podman[399485]: 2025-12-13 09:16:04.055447034 +0000 UTC m=+0.135755124 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:16:04 compute-0 nova_compute[248510]: 2025-12-13 09:16:04.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3449: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:16:05 compute-0 ceph-mon[76537]: pgmap v3449: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:16:06 compute-0 nova_compute[248510]: 2025-12-13 09:16:06.667 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3450: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:16:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:16:07 compute-0 ceph-mon[76537]: pgmap v3450: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:16:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3451: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:16:09 compute-0 nova_compute[248510]: 2025-12-13 09:16:09.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:16:09
Dec 13 09:16:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:16:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:16:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', 'images']
Dec 13 09:16:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:16:09 compute-0 ceph-mon[76537]: pgmap v3451: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:16:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:16:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:16:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:16:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:16:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:16:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:16:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3452: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Dec 13 09:16:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:16:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:16:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:16:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:16:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:16:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:16:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:16:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:16:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:16:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:16:11 compute-0 nova_compute[248510]: 2025-12-13 09:16:11.671 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:11 compute-0 ceph-mon[76537]: pgmap v3452: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Dec 13 09:16:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3453: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 43 KiB/s wr, 8 op/s
Dec 13 09:16:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:16:13 compute-0 ceph-mon[76537]: pgmap v3453: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 43 KiB/s wr, 8 op/s
Dec 13 09:16:14 compute-0 nova_compute[248510]: 2025-12-13 09:16:14.118 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:14 compute-0 nova_compute[248510]: 2025-12-13 09:16:14.239 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:14 compute-0 nova_compute[248510]: 2025-12-13 09:16:14.240 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:14 compute-0 nova_compute[248510]: 2025-12-13 09:16:14.268 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:16:14 compute-0 nova_compute[248510]: 2025-12-13 09:16:14.376 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:14 compute-0 nova_compute[248510]: 2025-12-13 09:16:14.376 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:14 compute-0 nova_compute[248510]: 2025-12-13 09:16:14.389 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:16:14 compute-0 nova_compute[248510]: 2025-12-13 09:16:14.389 248514 INFO nova.compute.claims [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:16:14 compute-0 nova_compute[248510]: 2025-12-13 09:16:14.552 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:16:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3454: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 45 KiB/s wr, 8 op/s
Dec 13 09:16:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:16:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220735548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.124 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.133 248514 DEBUG nova.compute.provider_tree [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:16:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:16:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1793862639' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:16:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:16:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1793862639' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.159 248514 DEBUG nova.scheduler.client.report [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.193 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.195 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.252 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.253 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.282 248514 INFO nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.307 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.403 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.405 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.405 248514 INFO nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Creating image(s)
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.439 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.479 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.518 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.525 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.575 248514 DEBUG nova.policy [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.610 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.611 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.612 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.612 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.643 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:16:15 compute-0 nova_compute[248510]: 2025-12-13 09:16:15.648 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:16:16 compute-0 nova_compute[248510]: 2025-12-13 09:16:16.003 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:16:16 compute-0 ceph-mon[76537]: pgmap v3454: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 45 KiB/s wr, 8 op/s
Dec 13 09:16:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2220735548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:16:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1793862639' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:16:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1793862639' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:16:16 compute-0 nova_compute[248510]: 2025-12-13 09:16:16.076 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:16:16 compute-0 nova_compute[248510]: 2025-12-13 09:16:16.204 248514 DEBUG nova.objects.instance [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:16:16 compute-0 nova_compute[248510]: 2025-12-13 09:16:16.227 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:16:16 compute-0 nova_compute[248510]: 2025-12-13 09:16:16.227 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Ensure instance console log exists: /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:16:16 compute-0 nova_compute[248510]: 2025-12-13 09:16:16.228 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:16 compute-0 nova_compute[248510]: 2025-12-13 09:16:16.229 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:16 compute-0 nova_compute[248510]: 2025-12-13 09:16:16.229 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:16 compute-0 nova_compute[248510]: 2025-12-13 09:16:16.673 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:16 compute-0 nova_compute[248510]: 2025-12-13 09:16:16.791 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:16:16 compute-0 nova_compute[248510]: 2025-12-13 09:16:16.792 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 09:16:16 compute-0 nova_compute[248510]: 2025-12-13 09:16:16.816 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 09:16:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3455: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Dec 13 09:16:17 compute-0 nova_compute[248510]: 2025-12-13 09:16:17.673 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Successfully created port: fb0ef518-cd2c-4c22-8d93-009f10641109 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:16:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:16:18 compute-0 ceph-mon[76537]: pgmap v3455: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Dec 13 09:16:18 compute-0 nova_compute[248510]: 2025-12-13 09:16:18.623 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Successfully created port: 398cfb7c-bd58-4bb7-8778-e485fd13a934 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:16:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3456: 321 pgs: 321 active+clean; 131 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 161 KiB/s wr, 2 op/s
Dec 13 09:16:19 compute-0 nova_compute[248510]: 2025-12-13 09:16:19.119 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:19 compute-0 nova_compute[248510]: 2025-12-13 09:16:19.797 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:16:20 compute-0 ceph-mon[76537]: pgmap v3456: 321 pgs: 321 active+clean; 131 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 161 KiB/s wr, 2 op/s
Dec 13 09:16:20 compute-0 nova_compute[248510]: 2025-12-13 09:16:20.295 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Successfully updated port: fb0ef518-cd2c-4c22-8d93-009f10641109 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:16:20 compute-0 nova_compute[248510]: 2025-12-13 09:16:20.477 248514 DEBUG nova.compute.manager [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-changed-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:16:20 compute-0 nova_compute[248510]: 2025-12-13 09:16:20.477 248514 DEBUG nova.compute.manager [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing instance network info cache due to event network-changed-fb0ef518-cd2c-4c22-8d93-009f10641109. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:16:20 compute-0 nova_compute[248510]: 2025-12-13 09:16:20.477 248514 DEBUG oslo_concurrency.lockutils [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:16:20 compute-0 nova_compute[248510]: 2025-12-13 09:16:20.477 248514 DEBUG oslo_concurrency.lockutils [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:16:20 compute-0 nova_compute[248510]: 2025-12-13 09:16:20.477 248514 DEBUG nova.network.neutron [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing network info cache for port fb0ef518-cd2c-4c22-8d93-009f10641109 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:16:20 compute-0 nova_compute[248510]: 2025-12-13 09:16:20.728 248514 DEBUG nova.network.neutron [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:16:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3457: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:16:21 compute-0 nova_compute[248510]: 2025-12-13 09:16:21.364 248514 DEBUG nova.network.neutron [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:16:21 compute-0 nova_compute[248510]: 2025-12-13 09:16:21.383 248514 DEBUG oslo_concurrency.lockutils [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:16:21 compute-0 nova_compute[248510]: 2025-12-13 09:16:21.419 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Successfully updated port: 398cfb7c-bd58-4bb7-8778-e485fd13a934 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:16:21 compute-0 nova_compute[248510]: 2025-12-13 09:16:21.445 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:16:21 compute-0 nova_compute[248510]: 2025-12-13 09:16:21.446 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:16:21 compute-0 nova_compute[248510]: 2025-12-13 09:16:21.446 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011199236055863373 of space, bias 1.0, pg target 0.3359770816759012 quantized to 32 (current 32)
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697230982719909 of space, bias 1.0, pg target 0.20091692948159728 quantized to 32 (current 32)
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:16:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:16:21 compute-0 nova_compute[248510]: 2025-12-13 09:16:21.654 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:16:21 compute-0 nova_compute[248510]: 2025-12-13 09:16:21.675 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:22 compute-0 ceph-mon[76537]: pgmap v3457: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:16:22 compute-0 nova_compute[248510]: 2025-12-13 09:16:22.574 248514 DEBUG nova.compute.manager [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-changed-398cfb7c-bd58-4bb7-8778-e485fd13a934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:16:22 compute-0 nova_compute[248510]: 2025-12-13 09:16:22.575 248514 DEBUG nova.compute.manager [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing instance network info cache due to event network-changed-398cfb7c-bd58-4bb7-8778-e485fd13a934. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:16:22 compute-0 nova_compute[248510]: 2025-12-13 09:16:22.575 248514 DEBUG oslo_concurrency.lockutils [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:16:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3458: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:16:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:16:23 compute-0 sshd-session[399738]: Invalid user solana from 80.94.92.165 port 57010
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.612 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [{"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.638 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.639 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Instance network_info: |[{"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.640 248514 DEBUG oslo_concurrency.lockutils [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.641 248514 DEBUG nova.network.neutron [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing network info cache for port 398cfb7c-bd58-4bb7-8778-e485fd13a934 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.647 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Start _get_guest_xml network_info=[{"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:16:23 compute-0 sshd-session[399738]: Connection closed by invalid user solana 80.94.92.165 port 57010 [preauth]
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.655 248514 WARNING nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.664 248514 DEBUG nova.virt.libvirt.host [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.666 248514 DEBUG nova.virt.libvirt.host [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.677 248514 DEBUG nova.virt.libvirt.host [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.678 248514 DEBUG nova.virt.libvirt.host [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.678 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.679 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.680 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.680 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.681 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.681 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.682 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.682 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.683 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.683 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.684 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.684 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:16:23 compute-0 nova_compute[248510]: 2025-12-13 09:16:23.690 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:16:24 compute-0 ceph-mon[76537]: pgmap v3458: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:16:24 compute-0 nova_compute[248510]: 2025-12-13 09:16:24.121 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:16:24 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3415804229' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:16:24 compute-0 nova_compute[248510]: 2025-12-13 09:16:24.395 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.705s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:16:24 compute-0 nova_compute[248510]: 2025-12-13 09:16:24.430 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:16:24 compute-0 nova_compute[248510]: 2025-12-13 09:16:24.435 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:16:24 compute-0 nova_compute[248510]: 2025-12-13 09:16:24.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:16:24 compute-0 nova_compute[248510]: 2025-12-13 09:16:24.903 248514 DEBUG nova.network.neutron [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updated VIF entry in instance network info cache for port 398cfb7c-bd58-4bb7-8778-e485fd13a934. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:16:24 compute-0 nova_compute[248510]: 2025-12-13 09:16:24.904 248514 DEBUG nova.network.neutron [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [{"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:16:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3459: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:16:24 compute-0 nova_compute[248510]: 2025-12-13 09:16:24.943 248514 DEBUG oslo_concurrency.lockutils [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:16:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:16:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3258990419' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:16:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3415804229' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:16:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3258990419' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.078 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.080 248514 DEBUG nova.virt.libvirt.vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:16:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2101964116',display_name='tempest-TestGettingAddress-server-2101964116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2101964116',id=146,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-mpwdn9yq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:16:15Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=c6e4d841-78ee-4a00-87ca-a6c2d542a9b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.080 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.082 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.083 248514 DEBUG nova.virt.libvirt.vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:16:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2101964116',display_name='tempest-TestGettingAddress-server-2101964116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2101964116',id=146,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-mpwdn9yq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:16:15Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=c6e4d841-78ee-4a00-87ca-a6c2d542a9b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.083 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.084 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.085 248514 DEBUG nova.objects.instance [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.108 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:16:25 compute-0 nova_compute[248510]:   <uuid>c6e4d841-78ee-4a00-87ca-a6c2d542a9b7</uuid>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   <name>instance-00000092</name>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-2101964116</nova:name>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:16:23</nova:creationTime>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <nova:port uuid="fb0ef518-cd2c-4c22-8d93-009f10641109">
Dec 13 09:16:25 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <nova:port uuid="398cfb7c-bd58-4bb7-8778-e485fd13a934">
Dec 13 09:16:25 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fee3:e271" ipVersion="6"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <system>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <entry name="serial">c6e4d841-78ee-4a00-87ca-a6c2d542a9b7</entry>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <entry name="uuid">c6e4d841-78ee-4a00-87ca-a6c2d542a9b7</entry>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     </system>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   <os>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   </os>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   <features>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   </features>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk">
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       </source>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk.config">
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       </source>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:16:25 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:43:ef:0d"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <target dev="tapfb0ef518-cd"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:e3:e2:71"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <target dev="tap398cfb7c-bd"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/console.log" append="off"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <video>
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     </video>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:16:25 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:16:25 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:16:25 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:16:25 compute-0 nova_compute[248510]: </domain>
Dec 13 09:16:25 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.110 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Preparing to wait for external event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.111 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.111 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.111 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.112 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Preparing to wait for external event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.112 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.112 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.112 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.113 248514 DEBUG nova.virt.libvirt.vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:16:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2101964116',display_name='tempest-TestGettingAddress-server-2101964116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2101964116',id=146,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-mpwdn9yq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:16:15Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=c6e4d841-78ee-4a00-87ca-a6c2d542a9b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.113 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.114 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.114 248514 DEBUG os_vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.115 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.116 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.120 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.120 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb0ef518-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.121 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfb0ef518-cd, col_values=(('external_ids', {'iface-id': 'fb0ef518-cd2c-4c22-8d93-009f10641109', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:ef:0d', 'vm-uuid': 'c6e4d841-78ee-4a00-87ca-a6c2d542a9b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.122 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:25 compute-0 NetworkManager[50376]: <info>  [1765617385.1235] manager: (tapfb0ef518-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.125 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.133 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.134 248514 INFO os_vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd')
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.135 248514 DEBUG nova.virt.libvirt.vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:16:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2101964116',display_name='tempest-TestGettingAddress-server-2101964116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2101964116',id=146,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-mpwdn9yq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:16:15Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=c6e4d841-78ee-4a00-87ca-a6c2d542a9b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.135 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.136 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.136 248514 DEBUG os_vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.137 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.137 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.137 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.140 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap398cfb7c-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.140 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap398cfb7c-bd, col_values=(('external_ids', {'iface-id': '398cfb7c-bd58-4bb7-8778-e485fd13a934', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:e2:71', 'vm-uuid': 'c6e4d841-78ee-4a00-87ca-a6c2d542a9b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.142 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:25 compute-0 NetworkManager[50376]: <info>  [1765617385.1426] manager: (tap398cfb7c-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.151 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.152 248514 INFO os_vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd')
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.217 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.218 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.218 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:43:ef:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.218 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:e3:e2:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.218 248514 INFO nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Using config drive
Dec 13 09:16:25 compute-0 nova_compute[248510]: 2025-12-13 09:16:25.243 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:16:26 compute-0 ceph-mon[76537]: pgmap v3459: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:16:26 compute-0 nova_compute[248510]: 2025-12-13 09:16:26.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:16:26 compute-0 nova_compute[248510]: 2025-12-13 09:16:26.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:16:26 compute-0 nova_compute[248510]: 2025-12-13 09:16:26.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:16:26 compute-0 nova_compute[248510]: 2025-12-13 09:16:26.813 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 09:16:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3460: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.244 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.244 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.244 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.245 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 44149dbe-362b-4930-a63b-d04c9a3b3b4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.302 248514 INFO nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Creating config drive at /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/disk.config
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.311 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9m_lobec execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.478 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9m_lobec" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.511 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.519 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/disk.config c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.726 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/disk.config c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.727 248514 INFO nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Deleting local config drive /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/disk.config because it was imported into RBD.
Dec 13 09:16:27 compute-0 kernel: tapfb0ef518-cd: entered promiscuous mode
Dec 13 09:16:27 compute-0 NetworkManager[50376]: <info>  [1765617387.7994] manager: (tapfb0ef518-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/653)
Dec 13 09:16:27 compute-0 ovn_controller[148476]: 2025-12-13T09:16:27Z|01569|binding|INFO|Claiming lport fb0ef518-cd2c-4c22-8d93-009f10641109 for this chassis.
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.804 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:27 compute-0 ovn_controller[148476]: 2025-12-13T09:16:27Z|01570|binding|INFO|fb0ef518-cd2c-4c22-8d93-009f10641109: Claiming fa:16:3e:43:ef:0d 10.100.0.6
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.815 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:ef:0d 10.100.0.6'], port_security=['fa:16:3e:43:ef:0d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c6e4d841-78ee-4a00-87ca-a6c2d542a9b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fc9ec4-4452-4225-b100-75f3859e091a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74bfbdc0-d183-4405-bf5b-7ce9dc4c0882, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=fb0ef518-cd2c-4c22-8d93-009f10641109) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.818 158419 INFO neutron.agent.ovn.metadata.agent [-] Port fb0ef518-cd2c-4c22-8d93-009f10641109 in datapath d3fc9ec4-4452-4225-b100-75f3859e091a bound to our chassis
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.820 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3fc9ec4-4452-4225-b100-75f3859e091a
Dec 13 09:16:27 compute-0 NetworkManager[50376]: <info>  [1765617387.8217] manager: (tap398cfb7c-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/654)
Dec 13 09:16:27 compute-0 kernel: tap398cfb7c-bd: entered promiscuous mode
Dec 13 09:16:27 compute-0 ovn_controller[148476]: 2025-12-13T09:16:27Z|01571|binding|INFO|Setting lport fb0ef518-cd2c-4c22-8d93-009f10641109 ovn-installed in OVS
Dec 13 09:16:27 compute-0 ovn_controller[148476]: 2025-12-13T09:16:27Z|01572|binding|INFO|Setting lport fb0ef518-cd2c-4c22-8d93-009f10641109 up in Southbound
Dec 13 09:16:27 compute-0 ovn_controller[148476]: 2025-12-13T09:16:27Z|01573|if_status|INFO|Not updating pb chassis for 398cfb7c-bd58-4bb7-8778-e485fd13a934 now as sb is readonly
Dec 13 09:16:27 compute-0 ovn_controller[148476]: 2025-12-13T09:16:27Z|01574|binding|INFO|Claiming lport 398cfb7c-bd58-4bb7-8778-e485fd13a934 for this chassis.
Dec 13 09:16:27 compute-0 ovn_controller[148476]: 2025-12-13T09:16:27Z|01575|binding|INFO|398cfb7c-bd58-4bb7-8778-e485fd13a934: Claiming fa:16:3e:e3:e2:71 2001:db8::f816:3eff:fee3:e271
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:27 compute-0 ovn_controller[148476]: 2025-12-13T09:16:27Z|01576|binding|INFO|Setting lport 398cfb7c-bd58-4bb7-8778-e485fd13a934 ovn-installed in OVS
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:27 compute-0 systemd-udevd[399882]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:16:27 compute-0 systemd-udevd[399881]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:16:27 compute-0 ovn_controller[148476]: 2025-12-13T09:16:27Z|01577|binding|INFO|Setting lport 398cfb7c-bd58-4bb7-8778-e485fd13a934 up in Southbound
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.851 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:e2:71 2001:db8::f816:3eff:fee3:e271'], port_security=['fa:16:3e:e3:e2:71 2001:db8::f816:3eff:fee3:e271'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee3:e271/64', 'neutron:device_id': 'c6e4d841-78ee-4a00-87ca-a6c2d542a9b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=637bfd4d-c545-47be-afdb-625fd0ecaffb, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=398cfb7c-bd58-4bb7-8778-e485fd13a934) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:16:27 compute-0 NetworkManager[50376]: <info>  [1765617387.8675] device (tapfb0ef518-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:16:27 compute-0 NetworkManager[50376]: <info>  [1765617387.8683] device (tapfb0ef518-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.862 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3a94be-526e-4960-a423-d976daba31c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:27 compute-0 NetworkManager[50376]: <info>  [1765617387.8688] device (tap398cfb7c-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:16:27 compute-0 NetworkManager[50376]: <info>  [1765617387.8693] device (tap398cfb7c-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:16:27 compute-0 systemd-machined[210538]: New machine qemu-177-instance-00000092.
Dec 13 09:16:27 compute-0 systemd[1]: Started Virtual Machine qemu-177-instance-00000092.
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.901 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f5ce95-1783-4542-8985-8047459b9404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.905 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[625c4715-bd97-48ea-bc38-4a459d7dbf2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.943 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[99554286-60fe-4009-98cd-f61cfae63ebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.963 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[16fb628c-02a0-4e7d-b604-65a8871e9650]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fc9ec4-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:82:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 446], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992138, 'reachable_time': 20078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399898, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.986 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1f3ac5-cee0-4291-b155-0db17be60a52]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd3fc9ec4-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992154, 'tstamp': 992154}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399899, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd3fc9ec4-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992158, 'tstamp': 992158}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399899, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.988 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fc9ec4-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:27 compute-0 nova_compute[248510]: 2025-12-13 09:16:27.990 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.991 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3fc9ec4-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.991 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.992 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3fc9ec4-40, col_values=(('external_ids', {'iface-id': 'f954b813-27c7-46fa-b00a-23e0bb5d820d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.992 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.994 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 398cfb7c-bd58-4bb7-8778-e485fd13a934 in datapath 9f757ab8-2a8f-4771-8492-2bcf521016bb unbound from our chassis
Dec 13 09:16:27 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.995 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f757ab8-2a8f-4771-8492-2bcf521016bb
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.011 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[054e14b0-5bc7-43d8-b2a9-d38694180f76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.043 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f81e6e7e-9df4-4a85-9b44-0445f58aa14a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.048 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b2abfadc-8411-4b04-bca7-3560ce5431e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.077 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[572d8da6-4fdd-457d-b736-ffc9827e2c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5009a761-0004-476b-b294-feb2ff16f32b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f757ab8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c4:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992247, 'reachable_time': 37493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399906, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:28 compute-0 ceph-mon[76537]: pgmap v3460: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.116 248514 DEBUG nova.compute.manager [req-1dc39f39-a12e-4186-a822-c65ff9f99b75 req-4a8cd31b-3814-49b0-8d5b-ca2948a9f584 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.116 248514 DEBUG oslo_concurrency.lockutils [req-1dc39f39-a12e-4186-a822-c65ff9f99b75 req-4a8cd31b-3814-49b0-8d5b-ca2948a9f584 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.116 248514 DEBUG oslo_concurrency.lockutils [req-1dc39f39-a12e-4186-a822-c65ff9f99b75 req-4a8cd31b-3814-49b0-8d5b-ca2948a9f584 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.117 248514 DEBUG oslo_concurrency.lockutils [req-1dc39f39-a12e-4186-a822-c65ff9f99b75 req-4a8cd31b-3814-49b0-8d5b-ca2948a9f584 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.117 248514 DEBUG nova.compute.manager [req-1dc39f39-a12e-4186-a822-c65ff9f99b75 req-4a8cd31b-3814-49b0-8d5b-ca2948a9f584 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Processing event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.123 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[87d3402d-c9a2-47b2-abdd-e368ff733dbf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f757ab8-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992261, 'tstamp': 992261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399907, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.125 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f757ab8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.126 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.127 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.128 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f757ab8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.128 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.128 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f757ab8-20, col_values=(('external_ids', {'iface-id': '8c81cc53-1ad8-47d5-9def-f8710a05a423'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.129 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.164 248514 DEBUG nova.compute.manager [req-aeedce0b-39c0-4492-969f-f45f94a4af97 req-106b2533-249d-4adb-a785-3294a4f9e9d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.165 248514 DEBUG oslo_concurrency.lockutils [req-aeedce0b-39c0-4492-969f-f45f94a4af97 req-106b2533-249d-4adb-a785-3294a4f9e9d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.165 248514 DEBUG oslo_concurrency.lockutils [req-aeedce0b-39c0-4492-969f-f45f94a4af97 req-106b2533-249d-4adb-a785-3294a4f9e9d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.166 248514 DEBUG oslo_concurrency.lockutils [req-aeedce0b-39c0-4492-969f-f45f94a4af97 req-106b2533-249d-4adb-a785-3294a4f9e9d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.166 248514 DEBUG nova.compute.manager [req-aeedce0b-39c0-4492-969f-f45f94a4af97 req-106b2533-249d-4adb-a785-3294a4f9e9d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Processing event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.321 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:16:28 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.323 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.493 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617388.4931283, c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.494 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] VM Started (Lifecycle Event)
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.497 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.501 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.504 248514 INFO nova.virt.libvirt.driver [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Instance spawned successfully.
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.504 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.519 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.526 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.531 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.531 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.532 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.532 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.532 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.533 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.561 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.562 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617388.4943027, c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.562 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] VM Paused (Lifecycle Event)
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.591 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.596 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617388.5000918, c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.596 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] VM Resumed (Lifecycle Event)
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.609 248514 INFO nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Took 13.20 seconds to spawn the instance on the hypervisor.
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.609 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.621 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.624 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.649 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.689 248514 INFO nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Took 14.35 seconds to build instance.
Dec 13 09:16:28 compute-0 nova_compute[248510]: 2025-12-13 09:16:28.705 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3461: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 13 09:16:29 compute-0 nova_compute[248510]: 2025-12-13 09:16:29.123 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.193 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.224 248514 DEBUG nova.compute.manager [req-22334cfa-7c05-429a-ac87-d4d7c90799dc req-442cf451-7eb3-49a3-9b30-78ac99494e7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.224 248514 DEBUG oslo_concurrency.lockutils [req-22334cfa-7c05-429a-ac87-d4d7c90799dc req-442cf451-7eb3-49a3-9b30-78ac99494e7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.225 248514 DEBUG oslo_concurrency.lockutils [req-22334cfa-7c05-429a-ac87-d4d7c90799dc req-442cf451-7eb3-49a3-9b30-78ac99494e7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.225 248514 DEBUG oslo_concurrency.lockutils [req-22334cfa-7c05-429a-ac87-d4d7c90799dc req-442cf451-7eb3-49a3-9b30-78ac99494e7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.225 248514 DEBUG nova.compute.manager [req-22334cfa-7c05-429a-ac87-d4d7c90799dc req-442cf451-7eb3-49a3-9b30-78ac99494e7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] No waiting events found dispatching network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.225 248514 WARNING nova.compute.manager [req-22334cfa-7c05-429a-ac87-d4d7c90799dc req-442cf451-7eb3-49a3-9b30-78ac99494e7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received unexpected event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 for instance with vm_state active and task_state None.
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.259 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:16:30 compute-0 ceph-mon[76537]: pgmap v3461: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.306 248514 DEBUG nova.compute.manager [req-c4c51b53-93e8-4014-bad0-77025b7bc80c req-de23f7c0-d8a5-4985-91b3-35a08e706efa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.306 248514 DEBUG oslo_concurrency.lockutils [req-c4c51b53-93e8-4014-bad0-77025b7bc80c req-de23f7c0-d8a5-4985-91b3-35a08e706efa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.307 248514 DEBUG oslo_concurrency.lockutils [req-c4c51b53-93e8-4014-bad0-77025b7bc80c req-de23f7c0-d8a5-4985-91b3-35a08e706efa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.307 248514 DEBUG oslo_concurrency.lockutils [req-c4c51b53-93e8-4014-bad0-77025b7bc80c req-de23f7c0-d8a5-4985-91b3-35a08e706efa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.307 248514 DEBUG nova.compute.manager [req-c4c51b53-93e8-4014-bad0-77025b7bc80c req-de23f7c0-d8a5-4985-91b3-35a08e706efa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] No waiting events found dispatching network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.307 248514 WARNING nova.compute.manager [req-c4c51b53-93e8-4014-bad0-77025b7bc80c req-de23f7c0-d8a5-4985-91b3-35a08e706efa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received unexpected event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 for instance with vm_state active and task_state None.
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.312 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.312 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 09:16:30 compute-0 nova_compute[248510]: 2025-12-13 09:16:30.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:16:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3462: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 72 op/s
Dec 13 09:16:32 compute-0 ceph-mon[76537]: pgmap v3462: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 72 op/s
Dec 13 09:16:32 compute-0 nova_compute[248510]: 2025-12-13 09:16:32.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:16:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3463: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 14 KiB/s wr, 47 op/s
Dec 13 09:16:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:16:33 compute-0 nova_compute[248510]: 2025-12-13 09:16:33.131 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:33 compute-0 nova_compute[248510]: 2025-12-13 09:16:33.131 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:33 compute-0 nova_compute[248510]: 2025-12-13 09:16:33.131 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:33 compute-0 nova_compute[248510]: 2025-12-13 09:16:33.132 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:16:33 compute-0 nova_compute[248510]: 2025-12-13 09:16:33.132 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:16:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:33.326 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:33 compute-0 ceph-mon[76537]: pgmap v3463: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 14 KiB/s wr, 47 op/s
Dec 13 09:16:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:16:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/408043946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:16:33 compute-0 nova_compute[248510]: 2025-12-13 09:16:33.744 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:16:33 compute-0 nova_compute[248510]: 2025-12-13 09:16:33.879 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:16:33 compute-0 nova_compute[248510]: 2025-12-13 09:16:33.881 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:16:33 compute-0 nova_compute[248510]: 2025-12-13 09:16:33.888 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:16:33 compute-0 nova_compute[248510]: 2025-12-13 09:16:33.889 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.112 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.114 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3095MB free_disk=59.92095153965056GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.115 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.115 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.125 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.206 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 44149dbe-362b-4930-a63b-d04c9a3b3b4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.207 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.207 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.207 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.283 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:16:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/408043946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.825 248514 DEBUG nova.compute.manager [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-changed-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.825 248514 DEBUG nova.compute.manager [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing instance network info cache due to event network-changed-fb0ef518-cd2c-4c22-8d93-009f10641109. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.826 248514 DEBUG oslo_concurrency.lockutils [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.826 248514 DEBUG oslo_concurrency.lockutils [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.827 248514 DEBUG nova.network.neutron [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing network info cache for port fb0ef518-cd2c-4c22-8d93-009f10641109 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:16:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:16:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378584537' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.887 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.893 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:16:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3464: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.914 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.947 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:16:34 compute-0 nova_compute[248510]: 2025-12-13 09:16:34.948 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:34 compute-0 podman[399998]: 2025-12-13 09:16:34.983772975 +0000 UTC m=+0.066136059 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 09:16:34 compute-0 podman[399997]: 2025-12-13 09:16:34.985814306 +0000 UTC m=+0.070773735 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 09:16:35 compute-0 podman[399996]: 2025-12-13 09:16:35.019747161 +0000 UTC m=+0.104702380 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:16:35 compute-0 nova_compute[248510]: 2025-12-13 09:16:35.195 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2378584537' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:16:35 compute-0 ceph-mon[76537]: pgmap v3464: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:16:35 compute-0 nova_compute[248510]: 2025-12-13 09:16:35.946 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:16:35 compute-0 nova_compute[248510]: 2025-12-13 09:16:35.946 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:16:35 compute-0 nova_compute[248510]: 2025-12-13 09:16:35.946 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:16:36 compute-0 nova_compute[248510]: 2025-12-13 09:16:36.088 248514 DEBUG nova.network.neutron [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updated VIF entry in instance network info cache for port fb0ef518-cd2c-4c22-8d93-009f10641109. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:16:36 compute-0 nova_compute[248510]: 2025-12-13 09:16:36.088 248514 DEBUG nova.network.neutron [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [{"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:16:36 compute-0 nova_compute[248510]: 2025-12-13 09:16:36.129 248514 DEBUG oslo_concurrency.lockutils [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:16:36 compute-0 nova_compute[248510]: 2025-12-13 09:16:36.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:16:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3465: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:16:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:16:38 compute-0 ceph-mon[76537]: pgmap v3465: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:16:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3466: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:16:39 compute-0 nova_compute[248510]: 2025-12-13 09:16:39.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:39 compute-0 ceph-mon[76537]: pgmap v3466: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:16:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:16:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:16:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:16:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:16:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:16:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:16:40 compute-0 nova_compute[248510]: 2025-12-13 09:16:40.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3467: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Dec 13 09:16:41 compute-0 ceph-mon[76537]: pgmap v3467: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Dec 13 09:16:41 compute-0 ovn_controller[148476]: 2025-12-13T09:16:41Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:ef:0d 10.100.0.6
Dec 13 09:16:41 compute-0 ovn_controller[148476]: 2025-12-13T09:16:41Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:ef:0d 10.100.0.6
Dec 13 09:16:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3468: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 733 KiB/s rd, 1.3 KiB/s wr, 27 op/s
Dec 13 09:16:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:16:43 compute-0 ceph-mon[76537]: pgmap v3468: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 733 KiB/s rd, 1.3 KiB/s wr, 27 op/s
Dec 13 09:16:44 compute-0 nova_compute[248510]: 2025-12-13 09:16:44.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3469: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 13 09:16:45 compute-0 nova_compute[248510]: 2025-12-13 09:16:45.200 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:45 compute-0 ceph-mon[76537]: pgmap v3469: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 13 09:16:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3470: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:16:47 compute-0 nova_compute[248510]: 2025-12-13 09:16:47.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:16:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:16:48 compute-0 ceph-mon[76537]: pgmap v3470: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:16:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3471: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:16:49 compute-0 nova_compute[248510]: 2025-12-13 09:16:49.134 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:50 compute-0 ceph-mon[76537]: pgmap v3471: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:16:50 compute-0 nova_compute[248510]: 2025-12-13 09:16:50.204 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3472: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:16:52 compute-0 ceph-mon[76537]: pgmap v3472: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:16:52 compute-0 sudo[400057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:16:52 compute-0 sudo[400057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:16:52 compute-0 sudo[400057]: pam_unix(sudo:session): session closed for user root
Dec 13 09:16:52 compute-0 sudo[400082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 09:16:52 compute-0 sudo[400082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:16:52 compute-0 sudo[400082]: pam_unix(sudo:session): session closed for user root
Dec 13 09:16:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:16:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3473: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:16:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:16:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:16:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:16:53 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:16:53 compute-0 sudo[400127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:16:53 compute-0 sudo[400127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:16:53 compute-0 sudo[400127]: pam_unix(sudo:session): session closed for user root
Dec 13 09:16:53 compute-0 sudo[400152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:16:53 compute-0 sudo[400152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:16:53 compute-0 sudo[400152]: pam_unix(sudo:session): session closed for user root
Dec 13 09:16:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:16:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:16:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:16:54 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:16:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:16:54 compute-0 nova_compute[248510]: 2025-12-13 09:16:54.138 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:54 compute-0 ceph-mon[76537]: pgmap v3473: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:16:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:16:54 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:16:54 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:16:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:16:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:16:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:16:54 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:16:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:16:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:16:54 compute-0 sudo[400209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:16:54 compute-0 sudo[400209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:16:54 compute-0 sudo[400209]: pam_unix(sudo:session): session closed for user root
Dec 13 09:16:54 compute-0 sudo[400234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:16:54 compute-0 sudo[400234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:16:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3474: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 09:16:55 compute-0 nova_compute[248510]: 2025-12-13 09:16:55.206 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:55 compute-0 podman[400271]: 2025-12-13 09:16:55.235672881 +0000 UTC m=+0.050226633 container create 83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_agnesi, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 09:16:55 compute-0 systemd[1]: Started libpod-conmon-83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0.scope.
Dec 13 09:16:55 compute-0 podman[400271]: 2025-12-13 09:16:55.215312973 +0000 UTC m=+0.029866755 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:16:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:16:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:55.453 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:55.453 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:55.455 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:55 compute-0 podman[400271]: 2025-12-13 09:16:55.575664072 +0000 UTC m=+0.390217844 container init 83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_agnesi, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 09:16:55 compute-0 podman[400271]: 2025-12-13 09:16:55.585527698 +0000 UTC m=+0.400081440 container start 83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_agnesi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 09:16:55 compute-0 elated_agnesi[400287]: 167 167
Dec 13 09:16:55 compute-0 systemd[1]: libpod-83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0.scope: Deactivated successfully.
Dec 13 09:16:55 compute-0 conmon[400287]: conmon 83fed1b9815e4cef10a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0.scope/container/memory.events
Dec 13 09:16:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:16:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:16:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:16:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:16:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:16:55 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:16:55 compute-0 ceph-mon[76537]: pgmap v3474: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 09:16:55 compute-0 podman[400271]: 2025-12-13 09:16:55.661910781 +0000 UTC m=+0.476464583 container attach 83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_agnesi, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:16:55 compute-0 podman[400271]: 2025-12-13 09:16:55.66386739 +0000 UTC m=+0.478421162 container died 83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_agnesi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5788fe39492688df29275c2f179edc036cdd1f33fd32b9b2f8b2e304a8a9ba01-merged.mount: Deactivated successfully.
Dec 13 09:16:55 compute-0 podman[400271]: 2025-12-13 09:16:55.739913565 +0000 UTC m=+0.554467327 container remove 83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_agnesi, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 09:16:55 compute-0 systemd[1]: libpod-conmon-83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0.scope: Deactivated successfully.
Dec 13 09:16:56 compute-0 podman[400313]: 2025-12-13 09:16:55.941908608 +0000 UTC m=+0.030939232 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:16:56 compute-0 podman[400313]: 2025-12-13 09:16:56.309017576 +0000 UTC m=+0.398048210 container create 94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_goldstine, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:16:56 compute-0 systemd[1]: Started libpod-conmon-94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353.scope.
Dec 13 09:16:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767b43a3966adcc4a4455ae237869c103847be2250faafa70f39f2ac72d6dde5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767b43a3966adcc4a4455ae237869c103847be2250faafa70f39f2ac72d6dde5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767b43a3966adcc4a4455ae237869c103847be2250faafa70f39f2ac72d6dde5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767b43a3966adcc4a4455ae237869c103847be2250faafa70f39f2ac72d6dde5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767b43a3966adcc4a4455ae237869c103847be2250faafa70f39f2ac72d6dde5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:16:56 compute-0 podman[400313]: 2025-12-13 09:16:56.785723674 +0000 UTC m=+0.874754268 container init 94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_goldstine, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 09:16:56 compute-0 podman[400313]: 2025-12-13 09:16:56.792990735 +0000 UTC m=+0.882021329 container start 94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_goldstine, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:16:56 compute-0 podman[400313]: 2025-12-13 09:16:56.862198419 +0000 UTC m=+0.951229013 container attach 94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:16:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3475: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 12 KiB/s wr, 2 op/s
Dec 13 09:16:57 compute-0 thirsty_goldstine[400330]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:16:57 compute-0 thirsty_goldstine[400330]: --> All data devices are unavailable
Dec 13 09:16:57 compute-0 systemd[1]: libpod-94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353.scope: Deactivated successfully.
Dec 13 09:16:57 compute-0 conmon[400330]: conmon 94b5e8882c2f4e22b494 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353.scope/container/memory.events
Dec 13 09:16:57 compute-0 podman[400313]: 2025-12-13 09:16:57.324819137 +0000 UTC m=+1.413849731 container died 94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:16:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-767b43a3966adcc4a4455ae237869c103847be2250faafa70f39f2ac72d6dde5-merged.mount: Deactivated successfully.
Dec 13 09:16:57 compute-0 podman[400313]: 2025-12-13 09:16:57.396307148 +0000 UTC m=+1.485337742 container remove 94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_goldstine, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:16:57 compute-0 systemd[1]: libpod-conmon-94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353.scope: Deactivated successfully.
Dec 13 09:16:57 compute-0 sudo[400234]: pam_unix(sudo:session): session closed for user root
Dec 13 09:16:57 compute-0 sudo[400360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:16:57 compute-0 sudo[400360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:16:57 compute-0 sudo[400360]: pam_unix(sudo:session): session closed for user root
Dec 13 09:16:57 compute-0 sudo[400385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:16:57 compute-0 sudo[400385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:16:57 compute-0 podman[400423]: 2025-12-13 09:16:57.864188966 +0000 UTC m=+0.032769727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:16:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.102 248514 DEBUG nova.compute.manager [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-changed-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.104 248514 DEBUG nova.compute.manager [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing instance network info cache due to event network-changed-fb0ef518-cd2c-4c22-8d93-009f10641109. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.104 248514 DEBUG oslo_concurrency.lockutils [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.105 248514 DEBUG oslo_concurrency.lockutils [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.105 248514 DEBUG nova.network.neutron [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing network info cache for port fb0ef518-cd2c-4c22-8d93-009f10641109 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:16:58 compute-0 podman[400423]: 2025-12-13 09:16:58.164594432 +0000 UTC m=+0.333175103 container create 4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.185 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.186 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.186 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.187 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.187 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.190 248514 INFO nova.compute.manager [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Terminating instance
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.192 248514 DEBUG nova.compute.manager [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:16:58 compute-0 ceph-mon[76537]: pgmap v3475: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 12 KiB/s wr, 2 op/s
Dec 13 09:16:58 compute-0 systemd[1]: Started libpod-conmon-4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3.scope.
Dec 13 09:16:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:16:58 compute-0 podman[400423]: 2025-12-13 09:16:58.914545857 +0000 UTC m=+1.083126618 container init 4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:16:58 compute-0 podman[400423]: 2025-12-13 09:16:58.925780417 +0000 UTC m=+1.094361118 container start 4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 09:16:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3476: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 12 KiB/s wr, 2 op/s
Dec 13 09:16:58 compute-0 nice_chatelet[400440]: 167 167
Dec 13 09:16:58 compute-0 systemd[1]: libpod-4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3.scope: Deactivated successfully.
Dec 13 09:16:58 compute-0 kernel: tapfb0ef518-cd (unregistering): left promiscuous mode
Dec 13 09:16:58 compute-0 NetworkManager[50376]: <info>  [1765617418.9686] device (tapfb0ef518-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:16:58 compute-0 ovn_controller[148476]: 2025-12-13T09:16:58Z|01578|binding|INFO|Releasing lport fb0ef518-cd2c-4c22-8d93-009f10641109 from this chassis (sb_readonly=0)
Dec 13 09:16:58 compute-0 ovn_controller[148476]: 2025-12-13T09:16:58Z|01579|binding|INFO|Setting lport fb0ef518-cd2c-4c22-8d93-009f10641109 down in Southbound
Dec 13 09:16:58 compute-0 ovn_controller[148476]: 2025-12-13T09:16:58Z|01580|binding|INFO|Removing iface tapfb0ef518-cd ovn-installed in OVS
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.984 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:58 compute-0 nova_compute[248510]: 2025-12-13 09:16:58.986 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:58.992 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:ef:0d 10.100.0.6'], port_security=['fa:16:3e:43:ef:0d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c6e4d841-78ee-4a00-87ca-a6c2d542a9b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fc9ec4-4452-4225-b100-75f3859e091a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74bfbdc0-d183-4405-bf5b-7ce9dc4c0882, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=fb0ef518-cd2c-4c22-8d93-009f10641109) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:16:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:58.994 158419 INFO neutron.agent.ovn.metadata.agent [-] Port fb0ef518-cd2c-4c22-8d93-009f10641109 in datapath d3fc9ec4-4452-4225-b100-75f3859e091a unbound from our chassis
Dec 13 09:16:58 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:58.995 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3fc9ec4-4452-4225-b100-75f3859e091a
Dec 13 09:16:58 compute-0 kernel: tap398cfb7c-bd (unregistering): left promiscuous mode
Dec 13 09:16:59 compute-0 NetworkManager[50376]: <info>  [1765617419.0038] device (tap398cfb7c-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.006 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 ovn_controller[148476]: 2025-12-13T09:16:59Z|01581|binding|INFO|Releasing lport 398cfb7c-bd58-4bb7-8778-e485fd13a934 from this chassis (sb_readonly=0)
Dec 13 09:16:59 compute-0 ovn_controller[148476]: 2025-12-13T09:16:59Z|01582|binding|INFO|Setting lport 398cfb7c-bd58-4bb7-8778-e485fd13a934 down in Southbound
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.022 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 ovn_controller[148476]: 2025-12-13T09:16:59Z|01583|binding|INFO|Removing iface tap398cfb7c-bd ovn-installed in OVS
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.021 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3177051f-4a5e-41a1-a652-285132ee4853]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.026 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.032 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:e2:71 2001:db8::f816:3eff:fee3:e271'], port_security=['fa:16:3e:e3:e2:71 2001:db8::f816:3eff:fee3:e271'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee3:e271/64', 'neutron:device_id': 'c6e4d841-78ee-4a00-87ca-a6c2d542a9b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=637bfd4d-c545-47be-afdb-625fd0ecaffb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=398cfb7c-bd58-4bb7-8778-e485fd13a934) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.040 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.065 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[befce846-63e3-42bf-bc0f-6bd54eaaa57b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.070 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d973e70d-567d-4293-9330-89614817b1af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:59 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000092.scope: Deactivated successfully.
Dec 13 09:16:59 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000092.scope: Consumed 14.185s CPU time.
Dec 13 09:16:59 compute-0 systemd-machined[210538]: Machine qemu-177-instance-00000092 terminated.
Dec 13 09:16:59 compute-0 podman[400423]: 2025-12-13 09:16:59.099210349 +0000 UTC m=+1.267791020 container attach 4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 09:16:59 compute-0 podman[400423]: 2025-12-13 09:16:59.101401193 +0000 UTC m=+1.269981864 container died 4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.105 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e1cce48b-9f2e-4ef6-a3dc-63321028c7d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.125 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f81719e-3f5f-442d-9b81-00d768ba6a2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fc9ec4-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:82:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 446], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992138, 'reachable_time': 20078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400474, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.141 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.147 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[359fd760-f86a-41b0-9b0c-ca484450c472]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd3fc9ec4-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992154, 'tstamp': 992154}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400475, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd3fc9ec4-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992158, 'tstamp': 992158}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400475, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.149 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fc9ec4-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.152 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.159 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3fc9ec4-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.160 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.161 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3fc9ec4-40, col_values=(('external_ids', {'iface-id': 'f954b813-27c7-46fa-b00a-23e0bb5d820d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.162 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.165 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 398cfb7c-bd58-4bb7-8778-e485fd13a934 in datapath 9f757ab8-2a8f-4771-8492-2bcf521016bb unbound from our chassis
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.168 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f757ab8-2a8f-4771-8492-2bcf521016bb
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.192 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[26d6bc2a-57c6-4a95-a370-b02584144aca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.222 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[790f2d21-2613-403b-9ee8-b84c46e16245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.228 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd0c323-68fc-4e15-a638-ece561b426ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 NetworkManager[50376]: <info>  [1765617419.2391] manager: (tap398cfb7c-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.260 248514 INFO nova.virt.libvirt.driver [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Instance destroyed successfully.
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.261 248514 DEBUG nova.objects.instance [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.267 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[466c69dd-f362-4403-b682-ba011c443253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.285 248514 DEBUG nova.virt.libvirt.vif [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:16:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2101964116',display_name='tempest-TestGettingAddress-server-2101964116',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2101964116',id=146,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:16:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-mpwdn9yq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:16:28Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=c6e4d841-78ee-4a00-87ca-a6c2d542a9b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.286 248514 DEBUG nova.network.os_vif_util [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.287 248514 DEBUG nova.network.os_vif_util [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.286 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea4b3b5-b975-4e40-a719-00fc3a576b4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f757ab8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c4:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992247, 'reachable_time': 37493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400504, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.288 248514 DEBUG os_vif [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.292 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.292 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb0ef518-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.297 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.299 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.304 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.306 248514 INFO os_vif [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd')
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.307 248514 DEBUG nova.virt.libvirt.vif [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:16:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2101964116',display_name='tempest-TestGettingAddress-server-2101964116',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2101964116',id=146,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:16:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-mpwdn9yq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:16:28Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=c6e4d841-78ee-4a00-87ca-a6c2d542a9b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.307 248514 DEBUG nova.network.os_vif_util [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.308 248514 DEBUG nova.network.os_vif_util [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.308 248514 DEBUG os_vif [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.309 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0165461d-4e5c-400d-9cbf-b13b3efcad85]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f757ab8-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992261, 'tstamp': 992261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400505, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.310 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap398cfb7c-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.311 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f757ab8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.311 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.314 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:16:59 compute-0 nova_compute[248510]: 2025-12-13 09:16:59.315 248514 INFO os_vif [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd')
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.316 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f757ab8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.317 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.317 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f757ab8-20, col_values=(('external_ids', {'iface-id': '8c81cc53-1ad8-47d5-9def-f8710a05a423'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:16:59 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.318 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.047 248514 DEBUG nova.compute.manager [req-cce9b55c-5a66-4e41-a695-caeb0fceb9ec req-63204bf7-1cdf-423c-87b5-8aaa78c6163b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-unplugged-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.048 248514 DEBUG oslo_concurrency.lockutils [req-cce9b55c-5a66-4e41-a695-caeb0fceb9ec req-63204bf7-1cdf-423c-87b5-8aaa78c6163b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.048 248514 DEBUG oslo_concurrency.lockutils [req-cce9b55c-5a66-4e41-a695-caeb0fceb9ec req-63204bf7-1cdf-423c-87b5-8aaa78c6163b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.049 248514 DEBUG oslo_concurrency.lockutils [req-cce9b55c-5a66-4e41-a695-caeb0fceb9ec req-63204bf7-1cdf-423c-87b5-8aaa78c6163b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.049 248514 DEBUG nova.compute.manager [req-cce9b55c-5a66-4e41-a695-caeb0fceb9ec req-63204bf7-1cdf-423c-87b5-8aaa78c6163b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] No waiting events found dispatching network-vif-unplugged-fb0ef518-cd2c-4c22-8d93-009f10641109 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.049 248514 DEBUG nova.compute.manager [req-cce9b55c-5a66-4e41-a695-caeb0fceb9ec req-63204bf7-1cdf-423c-87b5-8aaa78c6163b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-unplugged-fb0ef518-cd2c-4c22-8d93-009f10641109 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.232 248514 DEBUG nova.compute.manager [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-unplugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.233 248514 DEBUG oslo_concurrency.lockutils [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.233 248514 DEBUG oslo_concurrency.lockutils [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.234 248514 DEBUG oslo_concurrency.lockutils [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.234 248514 DEBUG nova.compute.manager [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] No waiting events found dispatching network-vif-unplugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.235 248514 DEBUG nova.compute.manager [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-unplugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.235 248514 DEBUG nova.compute.manager [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.236 248514 DEBUG oslo_concurrency.lockutils [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.236 248514 DEBUG oslo_concurrency.lockutils [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.237 248514 DEBUG oslo_concurrency.lockutils [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.237 248514 DEBUG nova.compute.manager [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] No waiting events found dispatching network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:17:00 compute-0 nova_compute[248510]: 2025-12-13 09:17:00.238 248514 WARNING nova.compute.manager [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received unexpected event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 for instance with vm_state active and task_state deleting.
Dec 13 09:17:00 compute-0 ceph-mon[76537]: pgmap v3476: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 12 KiB/s wr, 2 op/s
Dec 13 09:17:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d15a022f82b9e20854443099096a553386da9089999dd92ae994af7f378ef6d-merged.mount: Deactivated successfully.
Dec 13 09:17:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3477: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 21 KiB/s wr, 3 op/s
Dec 13 09:17:01 compute-0 podman[400423]: 2025-12-13 09:17:01.001707113 +0000 UTC m=+3.170287774 container remove 4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 09:17:01 compute-0 systemd[1]: libpod-conmon-4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3.scope: Deactivated successfully.
Dec 13 09:17:01 compute-0 nova_compute[248510]: 2025-12-13 09:17:01.231 248514 INFO nova.virt.libvirt.driver [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Deleting instance files /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_del
Dec 13 09:17:01 compute-0 nova_compute[248510]: 2025-12-13 09:17:01.232 248514 INFO nova.virt.libvirt.driver [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Deletion of /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_del complete
Dec 13 09:17:01 compute-0 podman[400534]: 2025-12-13 09:17:01.252661576 +0000 UTC m=+0.058465047 container create 79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ritchie, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:17:01 compute-0 systemd[1]: Started libpod-conmon-79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0.scope.
Dec 13 09:17:01 compute-0 podman[400534]: 2025-12-13 09:17:01.230087404 +0000 UTC m=+0.035890905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:17:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddb6dce944d5bb2dc09a36f1252c062b698b280d505a3d6dac7780bee09eeaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddb6dce944d5bb2dc09a36f1252c062b698b280d505a3d6dac7780bee09eeaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddb6dce944d5bb2dc09a36f1252c062b698b280d505a3d6dac7780bee09eeaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddb6dce944d5bb2dc09a36f1252c062b698b280d505a3d6dac7780bee09eeaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:17:01 compute-0 podman[400534]: 2025-12-13 09:17:01.360503063 +0000 UTC m=+0.166306564 container init 79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 09:17:01 compute-0 podman[400534]: 2025-12-13 09:17:01.369362784 +0000 UTC m=+0.175166255 container start 79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ritchie, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 09:17:01 compute-0 podman[400534]: 2025-12-13 09:17:01.374155074 +0000 UTC m=+0.179958545 container attach 79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:17:01 compute-0 ceph-mon[76537]: pgmap v3477: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 21 KiB/s wr, 3 op/s
Dec 13 09:17:01 compute-0 nova_compute[248510]: 2025-12-13 09:17:01.559 248514 INFO nova.compute.manager [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Took 3.37 seconds to destroy the instance on the hypervisor.
Dec 13 09:17:01 compute-0 nova_compute[248510]: 2025-12-13 09:17:01.561 248514 DEBUG oslo.service.loopingcall [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:17:01 compute-0 nova_compute[248510]: 2025-12-13 09:17:01.561 248514 DEBUG nova.compute.manager [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:17:01 compute-0 nova_compute[248510]: 2025-12-13 09:17:01.561 248514 DEBUG nova.network.neutron [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]: {
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:     "0": [
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:         {
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "devices": [
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "/dev/loop3"
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             ],
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_name": "ceph_lv0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_size": "21470642176",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "name": "ceph_lv0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "tags": {
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.cluster_name": "ceph",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.crush_device_class": "",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.encrypted": "0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.objectstore": "bluestore",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.osd_id": "0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.type": "block",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.vdo": "0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.with_tpm": "0"
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             },
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "type": "block",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "vg_name": "ceph_vg0"
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:         }
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:     ],
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:     "1": [
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:         {
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "devices": [
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "/dev/loop4"
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             ],
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_name": "ceph_lv1",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_size": "21470642176",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "name": "ceph_lv1",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "tags": {
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.cluster_name": "ceph",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.crush_device_class": "",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.encrypted": "0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.objectstore": "bluestore",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.osd_id": "1",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.type": "block",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.vdo": "0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.with_tpm": "0"
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             },
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "type": "block",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "vg_name": "ceph_vg1"
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:         }
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:     ],
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:     "2": [
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:         {
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "devices": [
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "/dev/loop5"
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             ],
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_name": "ceph_lv2",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_size": "21470642176",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "name": "ceph_lv2",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "tags": {
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.cluster_name": "ceph",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.crush_device_class": "",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.encrypted": "0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.objectstore": "bluestore",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.osd_id": "2",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.type": "block",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.vdo": "0",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:                 "ceph.with_tpm": "0"
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             },
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "type": "block",
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:             "vg_name": "ceph_vg2"
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:         }
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]:     ]
Dec 13 09:17:01 compute-0 mystifying_ritchie[400550]: }
Dec 13 09:17:01 compute-0 systemd[1]: libpod-79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0.scope: Deactivated successfully.
Dec 13 09:17:01 compute-0 conmon[400550]: conmon 79f0a44fa66771032e39 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0.scope/container/memory.events
Dec 13 09:17:01 compute-0 podman[400534]: 2025-12-13 09:17:01.728728939 +0000 UTC m=+0.534532400 container died 79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ddb6dce944d5bb2dc09a36f1252c062b698b280d505a3d6dac7780bee09eeaf-merged.mount: Deactivated successfully.
Dec 13 09:17:01 compute-0 podman[400534]: 2025-12-13 09:17:01.774240703 +0000 UTC m=+0.580044184 container remove 79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ritchie, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:17:01 compute-0 systemd[1]: libpod-conmon-79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0.scope: Deactivated successfully.
Dec 13 09:17:01 compute-0 nova_compute[248510]: 2025-12-13 09:17:01.822 248514 DEBUG nova.network.neutron [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updated VIF entry in instance network info cache for port fb0ef518-cd2c-4c22-8d93-009f10641109. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:17:01 compute-0 nova_compute[248510]: 2025-12-13 09:17:01.823 248514 DEBUG nova.network.neutron [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [{"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:17:01 compute-0 sudo[400385]: pam_unix(sudo:session): session closed for user root
Dec 13 09:17:01 compute-0 nova_compute[248510]: 2025-12-13 09:17:01.850 248514 DEBUG oslo_concurrency.lockutils [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:17:01 compute-0 sudo[400572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:17:01 compute-0 sudo[400572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:17:01 compute-0 sudo[400572]: pam_unix(sudo:session): session closed for user root
Dec 13 09:17:01 compute-0 sudo[400597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:17:01 compute-0 sudo[400597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:17:02 compute-0 nova_compute[248510]: 2025-12-13 09:17:02.154 248514 DEBUG nova.compute.manager [req-69ed8f3c-b417-4e4e-b04d-7d7d1480a546 req-2e9bdd2e-876d-4ead-ad0a-be7d42d5433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:02 compute-0 nova_compute[248510]: 2025-12-13 09:17:02.154 248514 DEBUG oslo_concurrency.lockutils [req-69ed8f3c-b417-4e4e-b04d-7d7d1480a546 req-2e9bdd2e-876d-4ead-ad0a-be7d42d5433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:02 compute-0 nova_compute[248510]: 2025-12-13 09:17:02.154 248514 DEBUG oslo_concurrency.lockutils [req-69ed8f3c-b417-4e4e-b04d-7d7d1480a546 req-2e9bdd2e-876d-4ead-ad0a-be7d42d5433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:02 compute-0 nova_compute[248510]: 2025-12-13 09:17:02.155 248514 DEBUG oslo_concurrency.lockutils [req-69ed8f3c-b417-4e4e-b04d-7d7d1480a546 req-2e9bdd2e-876d-4ead-ad0a-be7d42d5433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:02 compute-0 nova_compute[248510]: 2025-12-13 09:17:02.155 248514 DEBUG nova.compute.manager [req-69ed8f3c-b417-4e4e-b04d-7d7d1480a546 req-2e9bdd2e-876d-4ead-ad0a-be7d42d5433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] No waiting events found dispatching network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:17:02 compute-0 nova_compute[248510]: 2025-12-13 09:17:02.155 248514 WARNING nova.compute.manager [req-69ed8f3c-b417-4e4e-b04d-7d7d1480a546 req-2e9bdd2e-876d-4ead-ad0a-be7d42d5433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received unexpected event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 for instance with vm_state active and task_state deleting.
Dec 13 09:17:02 compute-0 podman[400634]: 2025-12-13 09:17:02.270693192 +0000 UTC m=+0.046101680 container create 97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 09:17:02 compute-0 systemd[1]: Started libpod-conmon-97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923.scope.
Dec 13 09:17:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:17:02 compute-0 podman[400634]: 2025-12-13 09:17:02.25135581 +0000 UTC m=+0.026764318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:17:02 compute-0 podman[400634]: 2025-12-13 09:17:02.362174691 +0000 UTC m=+0.137583189 container init 97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:17:02 compute-0 podman[400634]: 2025-12-13 09:17:02.371247598 +0000 UTC m=+0.146656076 container start 97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wing, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 13 09:17:02 compute-0 podman[400634]: 2025-12-13 09:17:02.376019386 +0000 UTC m=+0.151427894 container attach 97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wing, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:17:02 compute-0 practical_wing[400650]: 167 167
Dec 13 09:17:02 compute-0 podman[400634]: 2025-12-13 09:17:02.378649042 +0000 UTC m=+0.154057530 container died 97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wing, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:17:02 compute-0 systemd[1]: libpod-97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923.scope: Deactivated successfully.
Dec 13 09:17:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-919d115e9804a4cd973bef2adb61562c7305dea26db8248e2dfcc274eb712789-merged.mount: Deactivated successfully.
Dec 13 09:17:02 compute-0 podman[400634]: 2025-12-13 09:17:02.420847263 +0000 UTC m=+0.196255741 container remove 97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 09:17:02 compute-0 systemd[1]: libpod-conmon-97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923.scope: Deactivated successfully.
Dec 13 09:17:02 compute-0 nova_compute[248510]: 2025-12-13 09:17:02.462 248514 DEBUG nova.compute.manager [req-bfb4e517-8a5d-4202-81fb-0e0fbfb185bd req-723979a7-8b5a-4c9a-a32e-80210c81c920 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-deleted-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:02 compute-0 nova_compute[248510]: 2025-12-13 09:17:02.463 248514 INFO nova.compute.manager [req-bfb4e517-8a5d-4202-81fb-0e0fbfb185bd req-723979a7-8b5a-4c9a-a32e-80210c81c920 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Neutron deleted interface fb0ef518-cd2c-4c22-8d93-009f10641109; detaching it from the instance and deleting it from the info cache
Dec 13 09:17:02 compute-0 nova_compute[248510]: 2025-12-13 09:17:02.463 248514 DEBUG nova.network.neutron [req-bfb4e517-8a5d-4202-81fb-0e0fbfb185bd req-723979a7-8b5a-4c9a-a32e-80210c81c920 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [{"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:17:02 compute-0 nova_compute[248510]: 2025-12-13 09:17:02.486 248514 DEBUG nova.compute.manager [req-bfb4e517-8a5d-4202-81fb-0e0fbfb185bd req-723979a7-8b5a-4c9a-a32e-80210c81c920 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Detach interface failed, port_id=fb0ef518-cd2c-4c22-8d93-009f10641109, reason: Instance c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:17:02 compute-0 podman[400673]: 2025-12-13 09:17:02.645578363 +0000 UTC m=+0.068658482 container create a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:17:02 compute-0 systemd[1]: Started libpod-conmon-a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf.scope.
Dec 13 09:17:02 compute-0 podman[400673]: 2025-12-13 09:17:02.612802716 +0000 UTC m=+0.035882845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:17:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d43b064a60bd18064d068fec2cbccc166d8ed6bf0678c5d0ff977af7b76bf07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d43b064a60bd18064d068fec2cbccc166d8ed6bf0678c5d0ff977af7b76bf07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d43b064a60bd18064d068fec2cbccc166d8ed6bf0678c5d0ff977af7b76bf07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d43b064a60bd18064d068fec2cbccc166d8ed6bf0678c5d0ff977af7b76bf07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:17:02 compute-0 podman[400673]: 2025-12-13 09:17:02.772179018 +0000 UTC m=+0.195259227 container init a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 09:17:02 compute-0 podman[400673]: 2025-12-13 09:17:02.785856649 +0000 UTC m=+0.208936758 container start a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:17:02 compute-0 podman[400673]: 2025-12-13 09:17:02.789811057 +0000 UTC m=+0.212891166 container attach a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:17:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3478: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 9.4 KiB/s wr, 2 op/s
Dec 13 09:17:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:17:03 compute-0 nova_compute[248510]: 2025-12-13 09:17:03.153 248514 DEBUG nova.network.neutron [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:17:03 compute-0 nova_compute[248510]: 2025-12-13 09:17:03.171 248514 INFO nova.compute.manager [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Took 1.61 seconds to deallocate network for instance.
Dec 13 09:17:03 compute-0 nova_compute[248510]: 2025-12-13 09:17:03.220 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:03 compute-0 nova_compute[248510]: 2025-12-13 09:17:03.221 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:03 compute-0 nova_compute[248510]: 2025-12-13 09:17:03.301 248514 DEBUG oslo_concurrency.processutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:17:03 compute-0 lvm[400788]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:17:03 compute-0 lvm[400789]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:17:03 compute-0 lvm[400789]: VG ceph_vg1 finished
Dec 13 09:17:03 compute-0 lvm[400788]: VG ceph_vg0 finished
Dec 13 09:17:03 compute-0 lvm[400791]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:17:03 compute-0 lvm[400791]: VG ceph_vg2 finished
Dec 13 09:17:03 compute-0 recursing_thompson[400690]: {}
Dec 13 09:17:03 compute-0 systemd[1]: libpod-a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf.scope: Deactivated successfully.
Dec 13 09:17:03 compute-0 systemd[1]: libpod-a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf.scope: Consumed 1.382s CPU time.
Dec 13 09:17:03 compute-0 podman[400673]: 2025-12-13 09:17:03.700589321 +0000 UTC m=+1.123669430 container died a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 09:17:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d43b064a60bd18064d068fec2cbccc166d8ed6bf0678c5d0ff977af7b76bf07-merged.mount: Deactivated successfully.
Dec 13 09:17:03 compute-0 podman[400673]: 2025-12-13 09:17:03.778210875 +0000 UTC m=+1.201290984 container remove a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:17:03 compute-0 systemd[1]: libpod-conmon-a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf.scope: Deactivated successfully.
Dec 13 09:17:03 compute-0 sudo[400597]: pam_unix(sudo:session): session closed for user root
Dec 13 09:17:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:17:03 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:17:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:17:03 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:17:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:17:03 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1721760534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:17:03 compute-0 nova_compute[248510]: 2025-12-13 09:17:03.925 248514 DEBUG oslo_concurrency.processutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:17:03 compute-0 nova_compute[248510]: 2025-12-13 09:17:03.935 248514 DEBUG nova.compute.provider_tree [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:17:03 compute-0 sudo[400807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:17:03 compute-0 sudo[400807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:17:03 compute-0 sudo[400807]: pam_unix(sudo:session): session closed for user root
Dec 13 09:17:03 compute-0 nova_compute[248510]: 2025-12-13 09:17:03.958 248514 DEBUG nova.scheduler.client.report [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:17:03 compute-0 ceph-mon[76537]: pgmap v3478: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 9.4 KiB/s wr, 2 op/s
Dec 13 09:17:03 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:17:03 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:17:03 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1721760534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:17:03 compute-0 nova_compute[248510]: 2025-12-13 09:17:03.990 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:04 compute-0 nova_compute[248510]: 2025-12-13 09:17:04.013 248514 INFO nova.scheduler.client.report [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance c6e4d841-78ee-4a00-87ca-a6c2d542a9b7
Dec 13 09:17:04 compute-0 nova_compute[248510]: 2025-12-13 09:17:04.096 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:04 compute-0 nova_compute[248510]: 2025-12-13 09:17:04.143 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:04 compute-0 nova_compute[248510]: 2025-12-13 09:17:04.312 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:04 compute-0 nova_compute[248510]: 2025-12-13 09:17:04.565 248514 DEBUG nova.compute.manager [req-769ee749-bef1-4440-93b9-9d6673b476bf req-cb743aad-0776-4a34-a91e-acfd08d82789 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-deleted-398cfb7c-bd58-4bb7-8778-e485fd13a934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3479: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 11 KiB/s wr, 30 op/s
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.853 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.854 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.854 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.854 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.854 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.855 248514 INFO nova.compute.manager [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Terminating instance
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.856 248514 DEBUG nova.compute.manager [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:17:05 compute-0 kernel: tap66abeeb9-b4 (unregistering): left promiscuous mode
Dec 13 09:17:05 compute-0 NetworkManager[50376]: <info>  [1765617425.9090] device (tap66abeeb9-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:17:05 compute-0 ovn_controller[148476]: 2025-12-13T09:17:05Z|01584|binding|INFO|Releasing lport 66abeeb9-b4e2-4901-9437-be8cd001222f from this chassis (sb_readonly=0)
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:05 compute-0 ovn_controller[148476]: 2025-12-13T09:17:05Z|01585|binding|INFO|Setting lport 66abeeb9-b4e2-4901-9437-be8cd001222f down in Southbound
Dec 13 09:17:05 compute-0 ovn_controller[148476]: 2025-12-13T09:17:05Z|01586|binding|INFO|Removing iface tap66abeeb9-b4 ovn-installed in OVS
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:05 compute-0 kernel: tap3b47d34a-69 (unregistering): left promiscuous mode
Dec 13 09:17:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:05.932 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:09:e2 10.100.0.7'], port_security=['fa:16:3e:08:09:e2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '44149dbe-362b-4930-a63b-d04c9a3b3b4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fc9ec4-4452-4225-b100-75f3859e091a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74bfbdc0-d183-4405-bf5b-7ce9dc4c0882, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=66abeeb9-b4e2-4901-9437-be8cd001222f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:17:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:05.933 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 66abeeb9-b4e2-4901-9437-be8cd001222f in datapath d3fc9ec4-4452-4225-b100-75f3859e091a unbound from our chassis
Dec 13 09:17:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:05.935 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3fc9ec4-4452-4225-b100-75f3859e091a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:17:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:05.937 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[24b1147a-474a-4eb8-bdb0-cdbc089e9a4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:05.937 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a namespace which is not needed anymore
Dec 13 09:17:05 compute-0 NetworkManager[50376]: <info>  [1765617425.9411] device (tap3b47d34a-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.943 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.949 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:05 compute-0 ovn_controller[148476]: 2025-12-13T09:17:05Z|01587|binding|INFO|Releasing lport 3b47d34a-6968-411d-8f9d-38a835c0fa77 from this chassis (sb_readonly=0)
Dec 13 09:17:05 compute-0 ovn_controller[148476]: 2025-12-13T09:17:05Z|01588|binding|INFO|Setting lport 3b47d34a-6968-411d-8f9d-38a835c0fa77 down in Southbound
Dec 13 09:17:05 compute-0 ovn_controller[148476]: 2025-12-13T09:17:05Z|01589|binding|INFO|Removing iface tap3b47d34a-69 ovn-installed in OVS
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.952 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:05 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:05.961 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:e9:f0 2001:db8::f816:3eff:fe1a:e9f0'], port_security=['fa:16:3e:1a:e9:f0 2001:db8::f816:3eff:fe1a:e9f0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe1a:e9f0/64', 'neutron:device_id': '44149dbe-362b-4930-a63b-d04c9a3b3b4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=637bfd4d-c545-47be-afdb-625fd0ecaffb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b47d34a-6968-411d-8f9d-38a835c0fa77) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:17:05 compute-0 nova_compute[248510]: 2025-12-13 09:17:05.972 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:05 compute-0 ceph-mon[76537]: pgmap v3479: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 11 KiB/s wr, 30 op/s
Dec 13 09:17:05 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000091.scope: Deactivated successfully.
Dec 13 09:17:05 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000091.scope: Consumed 16.873s CPU time.
Dec 13 09:17:06 compute-0 systemd-machined[210538]: Machine qemu-176-instance-00000091 terminated.
Dec 13 09:17:06 compute-0 podman[400836]: 2025-12-13 09:17:06.003877822 +0000 UTC m=+0.079685597 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec 13 09:17:06 compute-0 podman[400835]: 2025-12-13 09:17:06.030983547 +0000 UTC m=+0.114968395 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 09:17:06 compute-0 podman[400834]: 2025-12-13 09:17:06.031312656 +0000 UTC m=+0.115211652 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 13 09:17:06 compute-0 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [NOTICE]   (398790) : haproxy version is 2.8.14-c23fe91
Dec 13 09:17:06 compute-0 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [NOTICE]   (398790) : path to executable is /usr/sbin/haproxy
Dec 13 09:17:06 compute-0 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [WARNING]  (398790) : Exiting Master process...
Dec 13 09:17:06 compute-0 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [ALERT]    (398790) : Current worker (398792) exited with code 143 (Terminated)
Dec 13 09:17:06 compute-0 NetworkManager[50376]: <info>  [1765617426.0875] manager: (tap66abeeb9-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/656)
Dec 13 09:17:06 compute-0 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [WARNING]  (398790) : All workers exited. Exiting... (0)
Dec 13 09:17:06 compute-0 systemd[1]: libpod-6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb.scope: Deactivated successfully.
Dec 13 09:17:06 compute-0 conmon[398786]: conmon 6ea39ac5e945181bf086 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb.scope/container/memory.events
Dec 13 09:17:06 compute-0 podman[400923]: 2025-12-13 09:17:06.093262549 +0000 UTC m=+0.054554300 container died 6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.118 248514 INFO nova.virt.libvirt.driver [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Instance destroyed successfully.
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.118 248514 DEBUG nova.objects.instance [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 44149dbe-362b-4930-a63b-d04c9a3b3b4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:17:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb-userdata-shm.mount: Deactivated successfully.
Dec 13 09:17:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a72ad838abac5f9443ea2e98c9a253d9a9467a978cbeac3dbb207a420c8706cd-merged.mount: Deactivated successfully.
Dec 13 09:17:06 compute-0 podman[400923]: 2025-12-13 09:17:06.137907802 +0000 UTC m=+0.099199553 container cleanup 6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Dec 13 09:17:06 compute-0 systemd[1]: libpod-conmon-6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb.scope: Deactivated successfully.
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.147 248514 DEBUG nova.virt.libvirt.vif [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:15:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1513449558',display_name='tempest-TestGettingAddress-server-1513449558',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1513449558',id=145,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:15:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-g1atpgf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:15:44Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=44149dbe-362b-4930-a63b-d04c9a3b3b4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.147 248514 DEBUG nova.network.os_vif_util [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.148 248514 DEBUG nova.network.os_vif_util [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.148 248514 DEBUG os_vif [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.151 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.151 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66abeeb9-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.154 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.160 248514 INFO os_vif [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4')
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.160 248514 DEBUG nova.virt.libvirt.vif [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:15:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1513449558',display_name='tempest-TestGettingAddress-server-1513449558',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1513449558',id=145,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:15:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-g1atpgf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:15:44Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=44149dbe-362b-4930-a63b-d04c9a3b3b4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.160 248514 DEBUG nova.network.os_vif_util [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.161 248514 DEBUG nova.network.os_vif_util [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.161 248514 DEBUG os_vif [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.162 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.162 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b47d34a-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.164 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.165 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.168 248514 INFO os_vif [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69')
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.191 248514 DEBUG nova.compute.manager [req-c05b5021-b382-4a8b-901f-7f6f8aed249f req-a925e0e0-260a-4f2b-9632-4fcff79b20a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-unplugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.191 248514 DEBUG oslo_concurrency.lockutils [req-c05b5021-b382-4a8b-901f-7f6f8aed249f req-a925e0e0-260a-4f2b-9632-4fcff79b20a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.192 248514 DEBUG oslo_concurrency.lockutils [req-c05b5021-b382-4a8b-901f-7f6f8aed249f req-a925e0e0-260a-4f2b-9632-4fcff79b20a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.192 248514 DEBUG oslo_concurrency.lockutils [req-c05b5021-b382-4a8b-901f-7f6f8aed249f req-a925e0e0-260a-4f2b-9632-4fcff79b20a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.193 248514 DEBUG nova.compute.manager [req-c05b5021-b382-4a8b-901f-7f6f8aed249f req-a925e0e0-260a-4f2b-9632-4fcff79b20a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] No waiting events found dispatching network-vif-unplugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.193 248514 DEBUG nova.compute.manager [req-c05b5021-b382-4a8b-901f-7f6f8aed249f req-a925e0e0-260a-4f2b-9632-4fcff79b20a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-unplugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:17:06 compute-0 podman[400968]: 2025-12-13 09:17:06.758518306 +0000 UTC m=+0.592540376 container remove 6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.766 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a0fabb-65f1-497b-b59f-3e0500924da0]: (4, ('Sat Dec 13 09:17:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a (6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb)\n6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb\nSat Dec 13 09:17:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a (6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb)\n6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.768 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64275cd7-7ab0-429d-bdb9-caa9d68d1840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.769 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fc9ec4-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:06 compute-0 kernel: tapd3fc9ec4-40: left promiscuous mode
Dec 13 09:17:06 compute-0 nova_compute[248510]: 2025-12-13 09:17:06.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.790 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[06544b0c-9e8c-42c2-a030-05b6615f1671]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.808 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3d1455-16cd-4ff3-b563-f61499b0a54e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.810 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[168137d6-c5f1-41db-ad40-94f50daf192e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.829 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6aba7ee-9643-42e5-8f76-f95e9218a08a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992131, 'reachable_time': 17859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400998, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:06 compute-0 systemd[1]: run-netns-ovnmeta\x2dd3fc9ec4\x2d4452\x2d4225\x2db100\x2d75f3859e091a.mount: Deactivated successfully.
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.832 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.833 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd767e2-1b16-4942-922c-b43e8dad6f3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.837 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b47d34a-6968-411d-8f9d-38a835c0fa77 in datapath 9f757ab8-2a8f-4771-8492-2bcf521016bb unbound from our chassis
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.838 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f757ab8-2a8f-4771-8492-2bcf521016bb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.839 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e78af46d-dea1-41b2-942d-346bbe8ccb44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:06 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.840 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb namespace which is not needed anymore
Dec 13 09:17:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3480: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 28 op/s
Dec 13 09:17:07 compute-0 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [NOTICE]   (398911) : haproxy version is 2.8.14-c23fe91
Dec 13 09:17:07 compute-0 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [NOTICE]   (398911) : path to executable is /usr/sbin/haproxy
Dec 13 09:17:07 compute-0 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [WARNING]  (398911) : Exiting Master process...
Dec 13 09:17:07 compute-0 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [ALERT]    (398911) : Current worker (398913) exited with code 143 (Terminated)
Dec 13 09:17:07 compute-0 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [WARNING]  (398911) : All workers exited. Exiting... (0)
Dec 13 09:17:07 compute-0 systemd[1]: libpod-2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a.scope: Deactivated successfully.
Dec 13 09:17:07 compute-0 podman[401020]: 2025-12-13 09:17:07.368524185 +0000 UTC m=+0.434745693 container died 2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:17:07 compute-0 ceph-mon[76537]: pgmap v3480: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 28 op/s
Dec 13 09:17:07 compute-0 nova_compute[248510]: 2025-12-13 09:17:07.661 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-changed-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:07 compute-0 nova_compute[248510]: 2025-12-13 09:17:07.662 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing instance network info cache due to event network-changed-66abeeb9-b4e2-4901-9437-be8cd001222f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:17:07 compute-0 nova_compute[248510]: 2025-12-13 09:17:07.662 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:17:07 compute-0 nova_compute[248510]: 2025-12-13 09:17:07.662 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:17:07 compute-0 nova_compute[248510]: 2025-12-13 09:17:07.663 248514 DEBUG nova.network.neutron [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing network info cache for port 66abeeb9-b4e2-4901-9437-be8cd001222f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:17:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a-userdata-shm.mount: Deactivated successfully.
Dec 13 09:17:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f319c184b06470198864244c31cd681dddd4dfb8c67f2ea0cdfe3b222afde25a-merged.mount: Deactivated successfully.
Dec 13 09:17:07 compute-0 nova_compute[248510]: 2025-12-13 09:17:07.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:17:07 compute-0 podman[401020]: 2025-12-13 09:17:07.767549688 +0000 UTC m=+0.833771196 container cleanup 2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:17:07 compute-0 nova_compute[248510]: 2025-12-13 09:17:07.789 248514 INFO nova.virt.libvirt.driver [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Deleting instance files /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c_del
Dec 13 09:17:07 compute-0 nova_compute[248510]: 2025-12-13 09:17:07.790 248514 INFO nova.virt.libvirt.driver [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Deletion of /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c_del complete
Dec 13 09:17:07 compute-0 podman[401048]: 2025-12-13 09:17:07.854650908 +0000 UTC m=+0.060067637 container remove 2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:17:07 compute-0 systemd[1]: libpod-conmon-2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a.scope: Deactivated successfully.
Dec 13 09:17:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.863 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1225ddc2-0867-44b0-848c-587757fe8c25]: (4, ('Sat Dec 13 09:17:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb (2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a)\n2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a\nSat Dec 13 09:17:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb (2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a)\n2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.865 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c5ccbb-914e-4ed4-a0d8-99ed5673b186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.865 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f757ab8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:17:07 compute-0 nova_compute[248510]: 2025-12-13 09:17:07.867 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:07 compute-0 kernel: tap9f757ab8-20: left promiscuous mode
Dec 13 09:17:07 compute-0 nova_compute[248510]: 2025-12-13 09:17:07.880 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.883 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8c137f63-fed7-421a-825d-cef48160cf82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.910 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f581315a-506f-4d69-ac93-211a28d96de1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.911 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1152716e-c388-4a2b-993d-bddf0cfbcc78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.927 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[555ce82c-11e3-4fd1-98de-fb0e27711894]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992236, 'reachable_time': 15176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401064, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.929 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:17:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.930 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[041dfa20-865a-43d0-8d20-fff8bc3374a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f757ab8\x2d2a8f\x2d4771\x2d8492\x2d2bcf521016bb.mount: Deactivated successfully.
Dec 13 09:17:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:17:08 compute-0 nova_compute[248510]: 2025-12-13 09:17:08.348 248514 DEBUG nova.compute.manager [req-d74193db-3ff2-459f-9e56-331ff08f5edd req-dbb07bb4-b717-4245-831d-aba66404614b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:08 compute-0 nova_compute[248510]: 2025-12-13 09:17:08.349 248514 DEBUG oslo_concurrency.lockutils [req-d74193db-3ff2-459f-9e56-331ff08f5edd req-dbb07bb4-b717-4245-831d-aba66404614b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:08 compute-0 nova_compute[248510]: 2025-12-13 09:17:08.349 248514 DEBUG oslo_concurrency.lockutils [req-d74193db-3ff2-459f-9e56-331ff08f5edd req-dbb07bb4-b717-4245-831d-aba66404614b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:08 compute-0 nova_compute[248510]: 2025-12-13 09:17:08.349 248514 DEBUG oslo_concurrency.lockutils [req-d74193db-3ff2-459f-9e56-331ff08f5edd req-dbb07bb4-b717-4245-831d-aba66404614b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:08 compute-0 nova_compute[248510]: 2025-12-13 09:17:08.349 248514 DEBUG nova.compute.manager [req-d74193db-3ff2-459f-9e56-331ff08f5edd req-dbb07bb4-b717-4245-831d-aba66404614b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] No waiting events found dispatching network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:17:08 compute-0 nova_compute[248510]: 2025-12-13 09:17:08.350 248514 WARNING nova.compute.manager [req-d74193db-3ff2-459f-9e56-331ff08f5edd req-dbb07bb4-b717-4245-831d-aba66404614b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received unexpected event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 for instance with vm_state active and task_state deleting.
Dec 13 09:17:08 compute-0 nova_compute[248510]: 2025-12-13 09:17:08.354 248514 INFO nova.compute.manager [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Took 2.50 seconds to destroy the instance on the hypervisor.
Dec 13 09:17:08 compute-0 nova_compute[248510]: 2025-12-13 09:17:08.354 248514 DEBUG oslo.service.loopingcall [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:17:08 compute-0 nova_compute[248510]: 2025-12-13 09:17:08.354 248514 DEBUG nova.compute.manager [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:17:08 compute-0 nova_compute[248510]: 2025-12-13 09:17:08.355 248514 DEBUG nova.network.neutron [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:17:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3481: 321 pgs: 321 active+clean; 93 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 33 op/s
Dec 13 09:17:09 compute-0 nova_compute[248510]: 2025-12-13 09:17:09.145 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:17:09
Dec 13 09:17:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:17:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:17:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'default.rgw.control', 'volumes', 'default.rgw.log', 'vms', '.mgr', 'backups', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec 13 09:17:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:17:10 compute-0 ceph-mon[76537]: pgmap v3481: 321 pgs: 321 active+clean; 93 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 33 op/s
Dec 13 09:17:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:17:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:17:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:17:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:17:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:17:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.450 248514 DEBUG nova.compute.manager [req-e5f77e51-fe6d-4373-89a3-5ec40d4c7f1a req-6c1478e6-b055-473c-a445-bb9b14c017d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-deleted-3b47d34a-6968-411d-8f9d-38a835c0fa77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.450 248514 INFO nova.compute.manager [req-e5f77e51-fe6d-4373-89a3-5ec40d4c7f1a req-6c1478e6-b055-473c-a445-bb9b14c017d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Neutron deleted interface 3b47d34a-6968-411d-8f9d-38a835c0fa77; detaching it from the instance and deleting it from the info cache
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.451 248514 DEBUG nova.network.neutron [req-e5f77e51-fe6d-4373-89a3-5ec40d4c7f1a req-6c1478e6-b055-473c-a445-bb9b14c017d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.486 248514 DEBUG nova.network.neutron [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updated VIF entry in instance network info cache for port 66abeeb9-b4e2-4901-9437-be8cd001222f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.486 248514 DEBUG nova.network.neutron [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.497 248514 DEBUG nova.compute.manager [req-e5f77e51-fe6d-4373-89a3-5ec40d4c7f1a req-6c1478e6-b055-473c-a445-bb9b14c017d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Detach interface failed, port_id=3b47d34a-6968-411d-8f9d-38a835c0fa77, reason: Instance 44149dbe-362b-4930-a63b-d04c9a3b3b4c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.513 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.513 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-unplugged-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.514 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.514 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.514 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.514 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] No waiting events found dispatching network-vif-unplugged-66abeeb9-b4e2-4901-9437-be8cd001222f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.515 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-unplugged-66abeeb9-b4e2-4901-9437-be8cd001222f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.515 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.515 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.515 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.516 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.516 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] No waiting events found dispatching network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:17:10 compute-0 nova_compute[248510]: 2025-12-13 09:17:10.516 248514 WARNING nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received unexpected event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f for instance with vm_state active and task_state deleting.
Dec 13 09:17:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3482: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 55 op/s
Dec 13 09:17:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:17:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:17:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:17:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:17:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:17:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:17:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:17:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:17:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:17:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:17:11 compute-0 nova_compute[248510]: 2025-12-13 09:17:11.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:11 compute-0 nova_compute[248510]: 2025-12-13 09:17:11.322 248514 DEBUG nova.network.neutron [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:17:11 compute-0 nova_compute[248510]: 2025-12-13 09:17:11.345 248514 INFO nova.compute.manager [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Took 2.99 seconds to deallocate network for instance.
Dec 13 09:17:11 compute-0 nova_compute[248510]: 2025-12-13 09:17:11.426 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:11 compute-0 nova_compute[248510]: 2025-12-13 09:17:11.426 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:11 compute-0 nova_compute[248510]: 2025-12-13 09:17:11.488 248514 DEBUG oslo_concurrency.processutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:17:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:17:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1068009244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:17:12 compute-0 nova_compute[248510]: 2025-12-13 09:17:12.057 248514 DEBUG oslo_concurrency.processutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:17:12 compute-0 ceph-mon[76537]: pgmap v3482: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 55 op/s
Dec 13 09:17:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1068009244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:17:12 compute-0 nova_compute[248510]: 2025-12-13 09:17:12.067 248514 DEBUG nova.compute.provider_tree [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:17:12 compute-0 nova_compute[248510]: 2025-12-13 09:17:12.089 248514 DEBUG nova.scheduler.client.report [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:17:12 compute-0 nova_compute[248510]: 2025-12-13 09:17:12.120 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:12 compute-0 nova_compute[248510]: 2025-12-13 09:17:12.155 248514 INFO nova.scheduler.client.report [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 44149dbe-362b-4930-a63b-d04c9a3b3b4c
Dec 13 09:17:12 compute-0 nova_compute[248510]: 2025-12-13 09:17:12.255 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:12 compute-0 nova_compute[248510]: 2025-12-13 09:17:12.536 248514 DEBUG nova.compute.manager [req-b15739ed-1aad-4a82-86a6-260ba658af48 req-f7a32e79-0bb4-4d48-ae0b-13ea6be6d08d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-deleted-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3483: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Dec 13 09:17:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:17:14 compute-0 ceph-mon[76537]: pgmap v3483: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Dec 13 09:17:14 compute-0 nova_compute[248510]: 2025-12-13 09:17:14.147 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:14 compute-0 nova_compute[248510]: 2025-12-13 09:17:14.258 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617419.2561684, c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:17:14 compute-0 nova_compute[248510]: 2025-12-13 09:17:14.258 248514 INFO nova.compute.manager [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] VM Stopped (Lifecycle Event)
Dec 13 09:17:14 compute-0 nova_compute[248510]: 2025-12-13 09:17:14.288 248514 DEBUG nova.compute.manager [None req-44bd7916-3c46-4f0b-bccb-cda8b78173ad - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:17:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3484: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Dec 13 09:17:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:17:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1324666648' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:17:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:17:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1324666648' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:17:16 compute-0 ceph-mon[76537]: pgmap v3484: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Dec 13 09:17:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1324666648' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:17:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1324666648' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:17:16 compute-0 nova_compute[248510]: 2025-12-13 09:17:16.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3485: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:17:17 compute-0 nova_compute[248510]: 2025-12-13 09:17:17.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:17 compute-0 nova_compute[248510]: 2025-12-13 09:17:17.094 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:17:18 compute-0 ceph-mon[76537]: pgmap v3485: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:17:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3486: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:17:19 compute-0 nova_compute[248510]: 2025-12-13 09:17:19.149 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:20 compute-0 ceph-mon[76537]: pgmap v3486: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:17:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3487: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 341 B/s wr, 22 op/s
Dec 13 09:17:21 compute-0 nova_compute[248510]: 2025-12-13 09:17:21.115 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617426.114808, 44149dbe-362b-4930-a63b-d04c9a3b3b4c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:17:21 compute-0 nova_compute[248510]: 2025-12-13 09:17:21.116 248514 INFO nova.compute.manager [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] VM Stopped (Lifecycle Event)
Dec 13 09:17:21 compute-0 nova_compute[248510]: 2025-12-13 09:17:21.135 248514 DEBUG nova.compute.manager [None req-bfd1b427-5ecb-442c-842c-8adade80865c - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:17:21 compute-0 nova_compute[248510]: 2025-12-13 09:17:21.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4732504539939352e-05 of space, bias 1.0, pg target 0.004419751361981806 quantized to 32 (current 32)
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697116252414539 of space, bias 1.0, pg target 0.20091348757243618 quantized to 32 (current 32)
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:17:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:17:21 compute-0 nova_compute[248510]: 2025-12-13 09:17:21.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:17:22 compute-0 ceph-mon[76537]: pgmap v3487: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 341 B/s wr, 22 op/s
Dec 13 09:17:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3488: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:17:23 compute-0 ceph-mon[76537]: pgmap v3488: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:24 compute-0 nova_compute[248510]: 2025-12-13 09:17:24.152 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:24 compute-0 nova_compute[248510]: 2025-12-13 09:17:24.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:17:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3489: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:26 compute-0 nova_compute[248510]: 2025-12-13 09:17:26.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:26 compute-0 ceph-mon[76537]: pgmap v3489: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3490: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:27 compute-0 ceph-mon[76537]: pgmap v3490: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:27 compute-0 nova_compute[248510]: 2025-12-13 09:17:27.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:17:27 compute-0 nova_compute[248510]: 2025-12-13 09:17:27.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:17:27 compute-0 nova_compute[248510]: 2025-12-13 09:17:27.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:17:27 compute-0 nova_compute[248510]: 2025-12-13 09:17:27.883 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:17:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:17:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3491: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:29 compute-0 nova_compute[248510]: 2025-12-13 09:17:29.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:29.635 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:17:29 compute-0 nova_compute[248510]: 2025-12-13 09:17:29.636 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:29.636 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:17:29 compute-0 ceph-mon[76537]: pgmap v3491: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3492: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:31 compute-0 nova_compute[248510]: 2025-12-13 09:17:31.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:31 compute-0 nova_compute[248510]: 2025-12-13 09:17:31.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:17:32 compute-0 ceph-mon[76537]: pgmap v3492: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:32 compute-0 nova_compute[248510]: 2025-12-13 09:17:32.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:17:32 compute-0 nova_compute[248510]: 2025-12-13 09:17:32.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:32 compute-0 nova_compute[248510]: 2025-12-13 09:17:32.812 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:32 compute-0 nova_compute[248510]: 2025-12-13 09:17:32.812 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:32 compute-0 nova_compute[248510]: 2025-12-13 09:17:32.812 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:17:32 compute-0 nova_compute[248510]: 2025-12-13 09:17:32.812 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:17:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3493: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:17:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:33.016 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:3e:92 10.100.0.2 2001:db8::f816:3eff:fe95:3e92'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe95:3e92/64', 'neutron:device_id': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f555052b-c852-42e8-9f82-1988cdbda87d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=311a1d17-b8d0-425a-a0e2-260061ce5d5d) old=Port_Binding(mac=['fa:16:3e:95:3e:92 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:17:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:33.018 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 311a1d17-b8d0-425a-a0e2-260061ce5d5d in datapath e5815775-e5d0-4b72-a008-efd9f04c6ee4 updated
Dec 13 09:17:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:33.021 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5815775-e5d0-4b72-a008-efd9f04c6ee4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:17:33 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:33.022 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ff960c10-5f08-4f26-9e7a-0e5208415f50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:17:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4268610521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.345 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.534 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.536 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3468MB free_disk=59.98739747237414GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.537 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.537 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.660 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.661 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.682 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.713 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.714 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.735 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.771 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 09:17:33 compute-0 nova_compute[248510]: 2025-12-13 09:17:33.793 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:17:34 compute-0 ceph-mon[76537]: pgmap v3493: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4268610521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:17:34 compute-0 nova_compute[248510]: 2025-12-13 09:17:34.155 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:17:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/55505373' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:17:34 compute-0 nova_compute[248510]: 2025-12-13 09:17:34.431 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:17:34 compute-0 nova_compute[248510]: 2025-12-13 09:17:34.438 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:17:34 compute-0 nova_compute[248510]: 2025-12-13 09:17:34.457 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:17:34 compute-0 nova_compute[248510]: 2025-12-13 09:17:34.482 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:17:34 compute-0 nova_compute[248510]: 2025-12-13 09:17:34.483 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:34 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:34.639 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:17:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3494: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/55505373' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:17:36 compute-0 ceph-mon[76537]: pgmap v3494: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:36 compute-0 nova_compute[248510]: 2025-12-13 09:17:36.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3495: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:36 compute-0 podman[401136]: 2025-12-13 09:17:36.981670727 +0000 UTC m=+0.055377541 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 13 09:17:36 compute-0 podman[401135]: 2025-12-13 09:17:36.989035471 +0000 UTC m=+0.068241272 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 13 09:17:37 compute-0 podman[401134]: 2025-12-13 09:17:37.022698229 +0000 UTC m=+0.104798962 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 13 09:17:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:37.048 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:3e:92 10.100.0.2 2001:db8:0:1:f816:3eff:fe95:3e92 2001:db8::f816:3eff:fe95:3e92'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe95:3e92/64 2001:db8::f816:3eff:fe95:3e92/64', 'neutron:device_id': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f555052b-c852-42e8-9f82-1988cdbda87d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=311a1d17-b8d0-425a-a0e2-260061ce5d5d) old=Port_Binding(mac=['fa:16:3e:95:3e:92 10.100.0.2 2001:db8::f816:3eff:fe95:3e92'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe95:3e92/64', 'neutron:device_id': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:17:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:37.049 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 311a1d17-b8d0-425a-a0e2-260061ce5d5d in datapath e5815775-e5d0-4b72-a008-efd9f04c6ee4 updated
Dec 13 09:17:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:37.051 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5815775-e5d0-4b72-a008-efd9f04c6ee4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:17:37 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:37.052 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0663c8a5-3f09-4e46-a107-d324ee8730bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:37 compute-0 nova_compute[248510]: 2025-12-13 09:17:37.483 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:17:37 compute-0 nova_compute[248510]: 2025-12-13 09:17:37.484 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:17:37 compute-0 nova_compute[248510]: 2025-12-13 09:17:37.484 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:17:37 compute-0 ceph-mon[76537]: pgmap v3495: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:37 compute-0 nova_compute[248510]: 2025-12-13 09:17:37.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:17:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:17:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3496: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:39 compute-0 nova_compute[248510]: 2025-12-13 09:17:39.157 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:17:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:17:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:17:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:17:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:17:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:17:40 compute-0 ceph-mon[76537]: pgmap v3496: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3497: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:41 compute-0 nova_compute[248510]: 2025-12-13 09:17:41.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:42 compute-0 nova_compute[248510]: 2025-12-13 09:17:42.122 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:42 compute-0 nova_compute[248510]: 2025-12-13 09:17:42.122 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:42 compute-0 nova_compute[248510]: 2025-12-13 09:17:42.142 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:17:42 compute-0 nova_compute[248510]: 2025-12-13 09:17:42.234 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:42 compute-0 nova_compute[248510]: 2025-12-13 09:17:42.235 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:42 compute-0 nova_compute[248510]: 2025-12-13 09:17:42.243 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:17:42 compute-0 nova_compute[248510]: 2025-12-13 09:17:42.243 248514 INFO nova.compute.claims [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:17:42 compute-0 ceph-mon[76537]: pgmap v3497: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:42 compute-0 nova_compute[248510]: 2025-12-13 09:17:42.357 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:17:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3498: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:17:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:17:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2738697607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.041 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.050 248514 DEBUG nova.compute.provider_tree [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.082 248514 DEBUG nova.scheduler.client.report [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.113 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.114 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.164 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.165 248514 DEBUG nova.network.neutron [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.193 248514 INFO nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.215 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.324 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.325 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.326 248514 INFO nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Creating image(s)
Dec 13 09:17:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2738697607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.425 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.450 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.479 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.483 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.545 248514 DEBUG nova.policy [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.591 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.592 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.593 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.593 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.617 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:17:43 compute-0 nova_compute[248510]: 2025-12-13 09:17:43.621 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:17:44 compute-0 nova_compute[248510]: 2025-12-13 09:17:44.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:44 compute-0 nova_compute[248510]: 2025-12-13 09:17:44.804 248514 DEBUG nova.network.neutron [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Successfully created port: 1271974a-de11-42b5-87b8-470c51840315 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:17:44 compute-0 ceph-mon[76537]: pgmap v3498: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:17:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3499: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 13 09:17:46 compute-0 ceph-mon[76537]: pgmap v3499: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.053 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.146 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.318 248514 DEBUG nova.network.neutron [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Successfully updated port: 1271974a-de11-42b5-87b8-470c51840315 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.342 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.343 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.343 248514 DEBUG nova.network.neutron [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.407 248514 DEBUG nova.compute.manager [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-changed-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.407 248514 DEBUG nova.compute.manager [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Refreshing instance network info cache due to event network-changed-1271974a-de11-42b5-87b8-470c51840315. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.407 248514 DEBUG oslo_concurrency.lockutils [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.468 248514 DEBUG nova.objects.instance [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 7aadb3d0-64f2-4531-9896-93b087cdea5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.487 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.488 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Ensure instance console log exists: /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.488 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.489 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.489 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:46 compute-0 nova_compute[248510]: 2025-12-13 09:17:46.519 248514 DEBUG nova.network.neutron [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:17:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3500: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 13 09:17:47 compute-0 nova_compute[248510]: 2025-12-13 09:17:47.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:17:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.009 248514 DEBUG nova.network.neutron [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updating instance_info_cache with network_info: [{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.032 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.034 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Instance network_info: |[{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.035 248514 DEBUG oslo_concurrency.lockutils [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.035 248514 DEBUG nova.network.neutron [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Refreshing network info cache for port 1271974a-de11-42b5-87b8-470c51840315 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.038 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Start _get_guest_xml network_info=[{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.042 248514 WARNING nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.047 248514 DEBUG nova.virt.libvirt.host [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.048 248514 DEBUG nova.virt.libvirt.host [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.053 248514 DEBUG nova.virt.libvirt.host [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.054 248514 DEBUG nova.virt.libvirt.host [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.054 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.055 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.055 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.055 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.056 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.056 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.056 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.056 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.057 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.057 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.057 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.058 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.060 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:17:48 compute-0 ceph-mon[76537]: pgmap v3500: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 13 09:17:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:17:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2311718874' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.634 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.663 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:17:48 compute-0 nova_compute[248510]: 2025-12-13 09:17:48.668 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:17:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3501: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:17:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964636419' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.255 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.257 248514 DEBUG nova.virt.libvirt.vif [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:17:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1903797139',display_name='tempest-TestGettingAddress-server-1903797139',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1903797139',id=147,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKfVIjC+c2E48WoFz2Ud3i73TUL/pY6+s2/66GaU+lrLkwx0f9N1LcjeXJdOzvtjVb2jNhFM65SYiWkjK3RJ3eDcO6vcs74xDf/lQ4jF6SyHOGQ3syJuqY4MMzp7LfTqAw==',key_name='tempest-TestGettingAddress-444069759',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-zvq7q3d0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:17:43Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=7aadb3d0-64f2-4531-9896-93b087cdea5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.258 248514 DEBUG nova.network.os_vif_util [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.259 248514 DEBUG nova.network.os_vif_util [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.260 248514 DEBUG nova.objects.instance [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7aadb3d0-64f2-4531-9896-93b087cdea5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.280 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:17:49 compute-0 nova_compute[248510]:   <uuid>7aadb3d0-64f2-4531-9896-93b087cdea5c</uuid>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   <name>instance-00000093</name>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-1903797139</nova:name>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:17:48</nova:creationTime>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:17:49 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:17:49 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:17:49 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:17:49 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:17:49 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:17:49 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:17:49 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:17:49 compute-0 nova_compute[248510]:         <nova:port uuid="1271974a-de11-42b5-87b8-470c51840315">
Dec 13 09:17:49 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fed5:f68c" ipVersion="6"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fed5:f68c" ipVersion="6"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <system>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <entry name="serial">7aadb3d0-64f2-4531-9896-93b087cdea5c</entry>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <entry name="uuid">7aadb3d0-64f2-4531-9896-93b087cdea5c</entry>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     </system>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   <os>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   </os>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   <features>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   </features>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7aadb3d0-64f2-4531-9896-93b087cdea5c_disk">
Dec 13 09:17:49 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       </source>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:17:49 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7aadb3d0-64f2-4531-9896-93b087cdea5c_disk.config">
Dec 13 09:17:49 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       </source>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:17:49 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:d5:f6:8c"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <target dev="tap1271974a-de"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/console.log" append="off"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <video>
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     </video>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:17:49 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:17:49 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:17:49 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:17:49 compute-0 nova_compute[248510]: </domain>
Dec 13 09:17:49 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.282 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Preparing to wait for external event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.282 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.282 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.283 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.283 248514 DEBUG nova.virt.libvirt.vif [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:17:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1903797139',display_name='tempest-TestGettingAddress-server-1903797139',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1903797139',id=147,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKfVIjC+c2E48WoFz2Ud3i73TUL/pY6+s2/66GaU+lrLkwx0f9N1LcjeXJdOzvtjVb2jNhFM65SYiWkjK3RJ3eDcO6vcs74xDf/lQ4jF6SyHOGQ3syJuqY4MMzp7LfTqAw==',key_name='tempest-TestGettingAddress-444069759',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-zvq7q3d0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:17:43Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=7aadb3d0-64f2-4531-9896-93b087cdea5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.284 248514 DEBUG nova.network.os_vif_util [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.284 248514 DEBUG nova.network.os_vif_util [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.285 248514 DEBUG os_vif [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.285 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.286 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.286 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.290 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.291 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1271974a-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.292 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1271974a-de, col_values=(('external_ids', {'iface-id': '1271974a-de11-42b5-87b8-470c51840315', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:f6:8c', 'vm-uuid': '7aadb3d0-64f2-4531-9896-93b087cdea5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.294 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:49 compute-0 NetworkManager[50376]: <info>  [1765617469.2959] manager: (tap1271974a-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/657)
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.297 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.301 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.302 248514 INFO os_vif [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de')
Dec 13 09:17:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2311718874' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:17:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3964636419' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.572 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.573 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.573 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:d5:f6:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.575 248514 INFO nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Using config drive
Dec 13 09:17:49 compute-0 nova_compute[248510]: 2025-12-13 09:17:49.595 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:17:50 compute-0 nova_compute[248510]: 2025-12-13 09:17:50.157 248514 INFO nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Creating config drive at /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/disk.config
Dec 13 09:17:50 compute-0 nova_compute[248510]: 2025-12-13 09:17:50.163 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmgp4ody4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:17:50 compute-0 nova_compute[248510]: 2025-12-13 09:17:50.320 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmgp4ody4" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:17:50 compute-0 nova_compute[248510]: 2025-12-13 09:17:50.352 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:17:50 compute-0 nova_compute[248510]: 2025-12-13 09:17:50.358 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/disk.config 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:17:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3502: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:17:51 compute-0 ceph-mon[76537]: pgmap v3501: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:17:51 compute-0 nova_compute[248510]: 2025-12-13 09:17:51.397 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/disk.config 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:17:51 compute-0 nova_compute[248510]: 2025-12-13 09:17:51.399 248514 INFO nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Deleting local config drive /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/disk.config because it was imported into RBD.
Dec 13 09:17:51 compute-0 kernel: tap1271974a-de: entered promiscuous mode
Dec 13 09:17:51 compute-0 NetworkManager[50376]: <info>  [1765617471.4566] manager: (tap1271974a-de): new Tun device (/org/freedesktop/NetworkManager/Devices/658)
Dec 13 09:17:51 compute-0 nova_compute[248510]: 2025-12-13 09:17:51.458 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:51 compute-0 ovn_controller[148476]: 2025-12-13T09:17:51Z|01590|binding|INFO|Claiming lport 1271974a-de11-42b5-87b8-470c51840315 for this chassis.
Dec 13 09:17:51 compute-0 ovn_controller[148476]: 2025-12-13T09:17:51Z|01591|binding|INFO|1271974a-de11-42b5-87b8-470c51840315: Claiming fa:16:3e:d5:f6:8c 10.100.0.6 2001:db8:0:1:f816:3eff:fed5:f68c 2001:db8::f816:3eff:fed5:f68c
Dec 13 09:17:51 compute-0 nova_compute[248510]: 2025-12-13 09:17:51.461 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:51 compute-0 nova_compute[248510]: 2025-12-13 09:17:51.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:51 compute-0 systemd-machined[210538]: New machine qemu-178-instance-00000093.
Dec 13 09:17:51 compute-0 systemd-udevd[401522]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:17:51 compute-0 systemd[1]: Started Virtual Machine qemu-178-instance-00000093.
Dec 13 09:17:51 compute-0 nova_compute[248510]: 2025-12-13 09:17:51.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:51 compute-0 ovn_controller[148476]: 2025-12-13T09:17:51Z|01592|binding|INFO|Setting lport 1271974a-de11-42b5-87b8-470c51840315 ovn-installed in OVS
Dec 13 09:17:51 compute-0 nova_compute[248510]: 2025-12-13 09:17:51.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:51 compute-0 NetworkManager[50376]: <info>  [1765617471.5433] device (tap1271974a-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:17:51 compute-0 NetworkManager[50376]: <info>  [1765617471.5441] device (tap1271974a-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:17:51 compute-0 ovn_controller[148476]: 2025-12-13T09:17:51Z|01593|binding|INFO|Setting lport 1271974a-de11-42b5-87b8-470c51840315 up in Southbound
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.661 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:f6:8c 10.100.0.6 2001:db8:0:1:f816:3eff:fed5:f68c 2001:db8::f816:3eff:fed5:f68c'], port_security=['fa:16:3e:d5:f6:8c 10.100.0.6 2001:db8:0:1:f816:3eff:fed5:f68c 2001:db8::f816:3eff:fed5:f68c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fed5:f68c/64 2001:db8::f816:3eff:fed5:f68c/64', 'neutron:device_id': '7aadb3d0-64f2-4531-9896-93b087cdea5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '747095ec-05b6-4395-bc25-989b7630b3c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f555052b-c852-42e8-9f82-1988cdbda87d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1271974a-de11-42b5-87b8-470c51840315) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.662 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1271974a-de11-42b5-87b8-470c51840315 in datapath e5815775-e5d0-4b72-a008-efd9f04c6ee4 bound to our chassis
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.664 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5815775-e5d0-4b72-a008-efd9f04c6ee4
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.677 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe98fc33-dfb6-475b-bfd2-e8a757dd8e0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.678 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5815775-e1 in ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.680 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5815775-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.680 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7abd16b8-48c9-4413-8b53-4fe51344f6bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.681 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b04420f-303e-4a40-99ef-717a63e6efbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.706 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c61e8dfc-e0c4-4336-8d65-505888902133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.742 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf7e39d-b983-4ae8-872b-799a64d55719]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.779 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf5f173-5a39-4e96-ab9c-578bd9afb9e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.788 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[01c2f587-5e71-4a88-95fb-11f70aa46c82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 NetworkManager[50376]: <info>  [1765617471.7903] manager: (tape5815775-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/659)
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.824 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebf83dc-a4e1-4db0-b3d6-2d4cd3e88c0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.828 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[529e5748-629c-409c-b86d-6a7b6fef19a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 NetworkManager[50376]: <info>  [1765617471.8550] device (tape5815775-e0): carrier: link connected
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.864 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[04652054-3a5b-45ec-8acd-ca7b4f413331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.892 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3b83472a-6cbe-4ef3-bf08-8395eb3aca54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5815775-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:3e:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1004907, 'reachable_time': 36718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401555, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.915 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[85de9519-15b4-4b00-9e88-4f5eedafa305]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:3e92'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1004907, 'tstamp': 1004907}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401556, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.936 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e53000f5-5bcf-4797-8c04-56b21bc176ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5815775-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:3e:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1004907, 'reachable_time': 36718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 401557, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:51 compute-0 nova_compute[248510]: 2025-12-13 09:17:51.953 248514 DEBUG nova.network.neutron [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updated VIF entry in instance network info cache for port 1271974a-de11-42b5-87b8-470c51840315. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:17:51 compute-0 nova_compute[248510]: 2025-12-13 09:17:51.954 248514 DEBUG nova.network.neutron [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updating instance_info_cache with network_info: [{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:17:51 compute-0 nova_compute[248510]: 2025-12-13 09:17:51.976 248514 DEBUG oslo_concurrency.lockutils [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.999 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86a3b989-5d11-4724-afbd-dc8bbda4b0fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.088 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5e613d36-18ed-4636-954a-44286ab459e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.091 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5815775-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.092 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.093 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5815775-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:17:52 compute-0 NetworkManager[50376]: <info>  [1765617472.0974] manager: (tape5815775-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.096 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:52 compute-0 kernel: tape5815775-e0: entered promiscuous mode
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.100 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.101 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5815775-e0, col_values=(('external_ids', {'iface-id': '311a1d17-b8d0-425a-a0e2-260061ce5d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.103 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:52 compute-0 ovn_controller[148476]: 2025-12-13T09:17:52Z|01594|binding|INFO|Releasing lport 311a1d17-b8d0-425a-a0e2-260061ce5d5d from this chassis (sb_readonly=0)
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.131 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5815775-e5d0-4b72-a008-efd9f04c6ee4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5815775-e5d0-4b72-a008-efd9f04c6ee4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.133 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ca11b9-e363-411e-995e-df57f33e63fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.135 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-e5815775-e5d0-4b72-a008-efd9f04c6ee4
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/e5815775-e5d0-4b72-a008-efd9f04c6ee4.pid.haproxy
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID e5815775-e5d0-4b72-a008-efd9f04c6ee4
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:17:52 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.138 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'env', 'PROCESS_TAG=haproxy-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5815775-e5d0-4b72-a008-efd9f04c6ee4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:17:52 compute-0 ceph-mon[76537]: pgmap v3502: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.377 248514 DEBUG nova.compute.manager [req-54b2b11b-527e-491a-bce5-14b789755b44 req-5f67564d-e831-4ee5-96c9-7c84104f4043 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.378 248514 DEBUG oslo_concurrency.lockutils [req-54b2b11b-527e-491a-bce5-14b789755b44 req-5f67564d-e831-4ee5-96c9-7c84104f4043 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.378 248514 DEBUG oslo_concurrency.lockutils [req-54b2b11b-527e-491a-bce5-14b789755b44 req-5f67564d-e831-4ee5-96c9-7c84104f4043 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.378 248514 DEBUG oslo_concurrency.lockutils [req-54b2b11b-527e-491a-bce5-14b789755b44 req-5f67564d-e831-4ee5-96c9-7c84104f4043 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.379 248514 DEBUG nova.compute.manager [req-54b2b11b-527e-491a-bce5-14b789755b44 req-5f67564d-e831-4ee5-96c9-7c84104f4043 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Processing event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:17:52 compute-0 podman[401607]: 2025-12-13 09:17:52.567833947 +0000 UTC m=+0.029465405 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.671 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617472.6704443, 7aadb3d0-64f2-4531-9896-93b087cdea5c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.671 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] VM Started (Lifecycle Event)
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.675 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.679 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.683 248514 INFO nova.virt.libvirt.driver [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Instance spawned successfully.
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.684 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.693 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.697 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.720 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.721 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617472.6705973, 7aadb3d0-64f2-4531-9896-93b087cdea5c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.721 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] VM Paused (Lifecycle Event)
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.728 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.728 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.729 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.730 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.731 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.731 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.810 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.815 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617472.6790617, 7aadb3d0-64f2-4531-9896-93b087cdea5c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:17:52 compute-0 nova_compute[248510]: 2025-12-13 09:17:52.815 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] VM Resumed (Lifecycle Event)
Dec 13 09:17:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3503: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:17:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:17:53 compute-0 nova_compute[248510]: 2025-12-13 09:17:53.002 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:17:53 compute-0 nova_compute[248510]: 2025-12-13 09:17:53.007 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:17:53 compute-0 nova_compute[248510]: 2025-12-13 09:17:53.016 248514 INFO nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Took 9.69 seconds to spawn the instance on the hypervisor.
Dec 13 09:17:53 compute-0 nova_compute[248510]: 2025-12-13 09:17:53.017 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:17:53 compute-0 nova_compute[248510]: 2025-12-13 09:17:53.061 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:17:53 compute-0 podman[401607]: 2025-12-13 09:17:53.077926577 +0000 UTC m=+0.539558025 container create 5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 09:17:53 compute-0 systemd[1]: Started libpod-conmon-5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763.scope.
Dec 13 09:17:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cab876830acab31fce5dbc2e972338fc2e9949b13d6a5f72f360a2dbded2beb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:17:53 compute-0 ceph-mon[76537]: pgmap v3503: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:17:53 compute-0 podman[401607]: 2025-12-13 09:17:53.557608189 +0000 UTC m=+1.019239617 container init 5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 09:17:53 compute-0 podman[401607]: 2025-12-13 09:17:53.565004464 +0000 UTC m=+1.026635892 container start 5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 09:17:53 compute-0 nova_compute[248510]: 2025-12-13 09:17:53.584 248514 INFO nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Took 11.38 seconds to build instance.
Dec 13 09:17:53 compute-0 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [NOTICE]   (401650) : New worker (401652) forked
Dec 13 09:17:53 compute-0 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [NOTICE]   (401650) : Loading success.
Dec 13 09:17:53 compute-0 nova_compute[248510]: 2025-12-13 09:17:53.623 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:54 compute-0 nova_compute[248510]: 2025-12-13 09:17:54.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:54 compute-0 nova_compute[248510]: 2025-12-13 09:17:54.345 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:54 compute-0 nova_compute[248510]: 2025-12-13 09:17:54.462 248514 DEBUG nova.compute.manager [req-451a75ef-806c-428b-8582-a5e9651b3ca2 req-2add6f74-36c2-4500-910a-4029ec3c9dfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:54 compute-0 nova_compute[248510]: 2025-12-13 09:17:54.463 248514 DEBUG oslo_concurrency.lockutils [req-451a75ef-806c-428b-8582-a5e9651b3ca2 req-2add6f74-36c2-4500-910a-4029ec3c9dfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:54 compute-0 nova_compute[248510]: 2025-12-13 09:17:54.464 248514 DEBUG oslo_concurrency.lockutils [req-451a75ef-806c-428b-8582-a5e9651b3ca2 req-2add6f74-36c2-4500-910a-4029ec3c9dfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:54 compute-0 nova_compute[248510]: 2025-12-13 09:17:54.464 248514 DEBUG oslo_concurrency.lockutils [req-451a75ef-806c-428b-8582-a5e9651b3ca2 req-2add6f74-36c2-4500-910a-4029ec3c9dfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:54 compute-0 nova_compute[248510]: 2025-12-13 09:17:54.465 248514 DEBUG nova.compute.manager [req-451a75ef-806c-428b-8582-a5e9651b3ca2 req-2add6f74-36c2-4500-910a-4029ec3c9dfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] No waiting events found dispatching network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:17:54 compute-0 nova_compute[248510]: 2025-12-13 09:17:54.465 248514 WARNING nova.compute.manager [req-451a75ef-806c-428b-8582-a5e9651b3ca2 req-2add6f74-36c2-4500-910a-4029ec3c9dfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received unexpected event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 for instance with vm_state active and task_state None.
Dec 13 09:17:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3504: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Dec 13 09:17:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:55.454 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:17:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:55.455 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:17:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:17:55.456 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:17:56 compute-0 ceph-mon[76537]: pgmap v3504: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Dec 13 09:17:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3505: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Dec 13 09:17:57 compute-0 ceph-mon[76537]: pgmap v3505: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Dec 13 09:17:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:17:58 compute-0 ovn_controller[148476]: 2025-12-13T09:17:58Z|01595|binding|INFO|Releasing lport 311a1d17-b8d0-425a-a0e2-260061ce5d5d from this chassis (sb_readonly=0)
Dec 13 09:17:58 compute-0 nova_compute[248510]: 2025-12-13 09:17:58.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:58 compute-0 NetworkManager[50376]: <info>  [1765617478.8147] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Dec 13 09:17:58 compute-0 NetworkManager[50376]: <info>  [1765617478.8164] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Dec 13 09:17:58 compute-0 ovn_controller[148476]: 2025-12-13T09:17:58Z|01596|binding|INFO|Releasing lport 311a1d17-b8d0-425a-a0e2-260061ce5d5d from this chassis (sb_readonly=0)
Dec 13 09:17:58 compute-0 nova_compute[248510]: 2025-12-13 09:17:58.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:58 compute-0 nova_compute[248510]: 2025-12-13 09:17:58.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3506: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 09:17:59 compute-0 nova_compute[248510]: 2025-12-13 09:17:59.220 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:59 compute-0 nova_compute[248510]: 2025-12-13 09:17:59.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:17:59 compute-0 ceph-mon[76537]: pgmap v3506: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 09:17:59 compute-0 nova_compute[248510]: 2025-12-13 09:17:59.695 248514 DEBUG nova.compute.manager [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-changed-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:17:59 compute-0 nova_compute[248510]: 2025-12-13 09:17:59.697 248514 DEBUG nova.compute.manager [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Refreshing instance network info cache due to event network-changed-1271974a-de11-42b5-87b8-470c51840315. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:17:59 compute-0 nova_compute[248510]: 2025-12-13 09:17:59.698 248514 DEBUG oslo_concurrency.lockutils [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:17:59 compute-0 nova_compute[248510]: 2025-12-13 09:17:59.698 248514 DEBUG oslo_concurrency.lockutils [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:17:59 compute-0 nova_compute[248510]: 2025-12-13 09:17:59.699 248514 DEBUG nova.network.neutron [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Refreshing network info cache for port 1271974a-de11-42b5-87b8-470c51840315 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:18:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3507: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:18:02 compute-0 ceph-mon[76537]: pgmap v3507: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:18:02 compute-0 nova_compute[248510]: 2025-12-13 09:18:02.470 248514 DEBUG nova.network.neutron [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updated VIF entry in instance network info cache for port 1271974a-de11-42b5-87b8-470c51840315. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:18:02 compute-0 nova_compute[248510]: 2025-12-13 09:18:02.471 248514 DEBUG nova.network.neutron [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updating instance_info_cache with network_info: [{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:18:02 compute-0 nova_compute[248510]: 2025-12-13 09:18:02.514 248514 DEBUG oslo_concurrency.lockutils [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:18:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3508: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:18:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:18:04 compute-0 ceph-mon[76537]: pgmap v3508: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:18:04 compute-0 sudo[401662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:18:04 compute-0 sudo[401662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:18:04 compute-0 sudo[401662]: pam_unix(sudo:session): session closed for user root
Dec 13 09:18:04 compute-0 sudo[401687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:18:04 compute-0 sudo[401687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:18:04 compute-0 nova_compute[248510]: 2025-12-13 09:18:04.224 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:04 compute-0 nova_compute[248510]: 2025-12-13 09:18:04.350 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:04 compute-0 sudo[401687]: pam_unix(sudo:session): session closed for user root
Dec 13 09:18:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 09:18:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 09:18:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:18:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:18:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:18:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:18:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:18:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:18:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:18:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:18:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:18:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:18:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:18:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:18:04 compute-0 sudo[401741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:18:04 compute-0 sudo[401741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:18:04 compute-0 sudo[401741]: pam_unix(sudo:session): session closed for user root
Dec 13 09:18:04 compute-0 sudo[401766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:18:04 compute-0 sudo[401766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:18:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3509: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:18:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 09:18:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:18:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:18:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:18:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:18:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:18:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:18:05 compute-0 podman[401802]: 2025-12-13 09:18:05.13124471 +0000 UTC m=+0.044099790 container create ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_gould, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 09:18:05 compute-0 systemd[1]: Started libpod-conmon-ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e.scope.
Dec 13 09:18:05 compute-0 podman[401802]: 2025-12-13 09:18:05.109720324 +0000 UTC m=+0.022575444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:18:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:18:05 compute-0 podman[401802]: 2025-12-13 09:18:05.231784095 +0000 UTC m=+0.144639215 container init ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:18:05 compute-0 podman[401802]: 2025-12-13 09:18:05.24120374 +0000 UTC m=+0.154058830 container start ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 09:18:05 compute-0 podman[401802]: 2025-12-13 09:18:05.245158819 +0000 UTC m=+0.158013939 container attach ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:18:05 compute-0 hungry_gould[401819]: 167 167
Dec 13 09:18:05 compute-0 systemd[1]: libpod-ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e.scope: Deactivated successfully.
Dec 13 09:18:05 compute-0 podman[401802]: 2025-12-13 09:18:05.249303732 +0000 UTC m=+0.162158842 container died ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:18:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f361aa35f1edb18d587d2e7898f6cc3951fd7a67a0e7bcc48218740df0cbff22-merged.mount: Deactivated successfully.
Dec 13 09:18:05 compute-0 podman[401802]: 2025-12-13 09:18:05.313477621 +0000 UTC m=+0.226332711 container remove ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:18:05 compute-0 systemd[1]: libpod-conmon-ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e.scope: Deactivated successfully.
Dec 13 09:18:05 compute-0 podman[401843]: 2025-12-13 09:18:05.524663743 +0000 UTC m=+0.046337896 container create 8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_beaver, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 09:18:05 compute-0 systemd[1]: Started libpod-conmon-8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2.scope.
Dec 13 09:18:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4eec641f2d5e9cba6f7d39f9c1946ce6285ad70f9db18c14504af3420ab27a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4eec641f2d5e9cba6f7d39f9c1946ce6285ad70f9db18c14504af3420ab27a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4eec641f2d5e9cba6f7d39f9c1946ce6285ad70f9db18c14504af3420ab27a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4eec641f2d5e9cba6f7d39f9c1946ce6285ad70f9db18c14504af3420ab27a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4eec641f2d5e9cba6f7d39f9c1946ce6285ad70f9db18c14504af3420ab27a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:05 compute-0 podman[401843]: 2025-12-13 09:18:05.508815338 +0000 UTC m=+0.030489521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:18:05 compute-0 podman[401843]: 2025-12-13 09:18:05.608918882 +0000 UTC m=+0.130593055 container init 8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_beaver, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:18:05 compute-0 podman[401843]: 2025-12-13 09:18:05.614529882 +0000 UTC m=+0.136204035 container start 8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 09:18:05 compute-0 podman[401843]: 2025-12-13 09:18:05.61845302 +0000 UTC m=+0.140127213 container attach 8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_beaver, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Dec 13 09:18:05 compute-0 ovn_controller[148476]: 2025-12-13T09:18:05Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:f6:8c 10.100.0.6
Dec 13 09:18:05 compute-0 ovn_controller[148476]: 2025-12-13T09:18:05Z|00206|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:f6:8c 10.100.0.6
Dec 13 09:18:06 compute-0 ceph-mon[76537]: pgmap v3509: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:18:06 compute-0 musing_beaver[401859]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:18:06 compute-0 musing_beaver[401859]: --> All data devices are unavailable
Dec 13 09:18:06 compute-0 systemd[1]: libpod-8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2.scope: Deactivated successfully.
Dec 13 09:18:06 compute-0 podman[401843]: 2025-12-13 09:18:06.118287234 +0000 UTC m=+0.639961417 container died 8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_beaver, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 09:18:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d4eec641f2d5e9cba6f7d39f9c1946ce6285ad70f9db18c14504af3420ab27a-merged.mount: Deactivated successfully.
Dec 13 09:18:06 compute-0 podman[401843]: 2025-12-13 09:18:06.181718635 +0000 UTC m=+0.703392828 container remove 8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 09:18:06 compute-0 systemd[1]: libpod-conmon-8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2.scope: Deactivated successfully.
Dec 13 09:18:06 compute-0 sudo[401766]: pam_unix(sudo:session): session closed for user root
Dec 13 09:18:06 compute-0 sudo[401891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:18:06 compute-0 sudo[401891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:18:06 compute-0 sudo[401891]: pam_unix(sudo:session): session closed for user root
Dec 13 09:18:06 compute-0 sudo[401916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:18:06 compute-0 sudo[401916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:18:06 compute-0 podman[401953]: 2025-12-13 09:18:06.71934525 +0000 UTC m=+0.052403956 container create c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 09:18:06 compute-0 systemd[1]: Started libpod-conmon-c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe.scope.
Dec 13 09:18:06 compute-0 podman[401953]: 2025-12-13 09:18:06.699234399 +0000 UTC m=+0.032293135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:18:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:18:06 compute-0 podman[401953]: 2025-12-13 09:18:06.814145262 +0000 UTC m=+0.147203968 container init c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 09:18:06 compute-0 podman[401953]: 2025-12-13 09:18:06.822041149 +0000 UTC m=+0.155099845 container start c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 09:18:06 compute-0 podman[401953]: 2025-12-13 09:18:06.8249049 +0000 UTC m=+0.157963686 container attach c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:18:06 compute-0 zealous_wing[401969]: 167 167
Dec 13 09:18:06 compute-0 systemd[1]: libpod-c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe.scope: Deactivated successfully.
Dec 13 09:18:06 compute-0 podman[401953]: 2025-12-13 09:18:06.828862809 +0000 UTC m=+0.161921505 container died c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 09:18:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1262a75e45ef039ec507822985d5528fcf7be03208f895049da71bc72467a3f-merged.mount: Deactivated successfully.
Dec 13 09:18:06 compute-0 podman[401953]: 2025-12-13 09:18:06.872828875 +0000 UTC m=+0.205887611 container remove c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:18:06 compute-0 systemd[1]: libpod-conmon-c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe.scope: Deactivated successfully.
Dec 13 09:18:06 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3510: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 51 op/s
Dec 13 09:18:07 compute-0 podman[401992]: 2025-12-13 09:18:07.100855376 +0000 UTC m=+0.065262097 container create dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_newton, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 09:18:07 compute-0 systemd[1]: Started libpod-conmon-dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65.scope.
Dec 13 09:18:07 compute-0 podman[401992]: 2025-12-13 09:18:07.071394412 +0000 UTC m=+0.035801203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:18:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c168b3edd34b1d2bb22895e137d39247a199d6c62d15fe94954d2ae54f630b37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c168b3edd34b1d2bb22895e137d39247a199d6c62d15fe94954d2ae54f630b37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c168b3edd34b1d2bb22895e137d39247a199d6c62d15fe94954d2ae54f630b37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c168b3edd34b1d2bb22895e137d39247a199d6c62d15fe94954d2ae54f630b37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:07 compute-0 podman[401992]: 2025-12-13 09:18:07.21859095 +0000 UTC m=+0.182997691 container init dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:18:07 compute-0 podman[401992]: 2025-12-13 09:18:07.226952438 +0000 UTC m=+0.191359189 container start dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_newton, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:18:07 compute-0 podman[401992]: 2025-12-13 09:18:07.231476521 +0000 UTC m=+0.195883282 container attach dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 09:18:07 compute-0 podman[402010]: 2025-12-13 09:18:07.253867469 +0000 UTC m=+0.091044570 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:18:07 compute-0 podman[402011]: 2025-12-13 09:18:07.286754608 +0000 UTC m=+0.111948220 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 09:18:07 compute-0 podman[402006]: 2025-12-13 09:18:07.300022269 +0000 UTC m=+0.138228635 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 09:18:07 compute-0 recursing_newton[402022]: {
Dec 13 09:18:07 compute-0 recursing_newton[402022]:     "0": [
Dec 13 09:18:07 compute-0 recursing_newton[402022]:         {
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "devices": [
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "/dev/loop3"
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             ],
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_name": "ceph_lv0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_size": "21470642176",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "name": "ceph_lv0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "tags": {
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.cluster_name": "ceph",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.crush_device_class": "",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.encrypted": "0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.objectstore": "bluestore",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.osd_id": "0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.type": "block",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.vdo": "0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.with_tpm": "0"
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             },
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "type": "block",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "vg_name": "ceph_vg0"
Dec 13 09:18:07 compute-0 recursing_newton[402022]:         }
Dec 13 09:18:07 compute-0 recursing_newton[402022]:     ],
Dec 13 09:18:07 compute-0 recursing_newton[402022]:     "1": [
Dec 13 09:18:07 compute-0 recursing_newton[402022]:         {
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "devices": [
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "/dev/loop4"
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             ],
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_name": "ceph_lv1",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_size": "21470642176",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "name": "ceph_lv1",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "tags": {
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.cluster_name": "ceph",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.crush_device_class": "",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.encrypted": "0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.objectstore": "bluestore",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.osd_id": "1",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.type": "block",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.vdo": "0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.with_tpm": "0"
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             },
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "type": "block",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "vg_name": "ceph_vg1"
Dec 13 09:18:07 compute-0 recursing_newton[402022]:         }
Dec 13 09:18:07 compute-0 recursing_newton[402022]:     ],
Dec 13 09:18:07 compute-0 recursing_newton[402022]:     "2": [
Dec 13 09:18:07 compute-0 recursing_newton[402022]:         {
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "devices": [
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "/dev/loop5"
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             ],
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_name": "ceph_lv2",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_size": "21470642176",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "name": "ceph_lv2",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "tags": {
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.cluster_name": "ceph",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.crush_device_class": "",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.encrypted": "0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.objectstore": "bluestore",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.osd_id": "2",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.type": "block",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.vdo": "0",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:                 "ceph.with_tpm": "0"
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             },
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "type": "block",
Dec 13 09:18:07 compute-0 recursing_newton[402022]:             "vg_name": "ceph_vg2"
Dec 13 09:18:07 compute-0 recursing_newton[402022]:         }
Dec 13 09:18:07 compute-0 recursing_newton[402022]:     ]
Dec 13 09:18:07 compute-0 recursing_newton[402022]: }
Dec 13 09:18:07 compute-0 systemd[1]: libpod-dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65.scope: Deactivated successfully.
Dec 13 09:18:07 compute-0 conmon[402022]: conmon dd25b3b20459392efed4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65.scope/container/memory.events
Dec 13 09:18:07 compute-0 podman[401992]: 2025-12-13 09:18:07.555852654 +0000 UTC m=+0.520259405 container died dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:18:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c168b3edd34b1d2bb22895e137d39247a199d6c62d15fe94954d2ae54f630b37-merged.mount: Deactivated successfully.
Dec 13 09:18:07 compute-0 podman[401992]: 2025-12-13 09:18:07.623975131 +0000 UTC m=+0.588381832 container remove dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_newton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 09:18:07 compute-0 systemd[1]: libpod-conmon-dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65.scope: Deactivated successfully.
Dec 13 09:18:07 compute-0 sudo[401916]: pam_unix(sudo:session): session closed for user root
Dec 13 09:18:07 compute-0 sudo[402091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:18:07 compute-0 sudo[402091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:18:07 compute-0 sudo[402091]: pam_unix(sudo:session): session closed for user root
Dec 13 09:18:07 compute-0 sudo[402116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:18:07 compute-0 sudo[402116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:18:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:18:08 compute-0 ceph-mon[76537]: pgmap v3510: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 51 op/s
Dec 13 09:18:08 compute-0 podman[402153]: 2025-12-13 09:18:08.189605135 +0000 UTC m=+0.045047803 container create 03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 13 09:18:08 compute-0 systemd[1]: Started libpod-conmon-03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b.scope.
Dec 13 09:18:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:18:08 compute-0 podman[402153]: 2025-12-13 09:18:08.168462328 +0000 UTC m=+0.023904986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:18:08 compute-0 podman[402153]: 2025-12-13 09:18:08.281529066 +0000 UTC m=+0.136971744 container init 03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 09:18:08 compute-0 podman[402153]: 2025-12-13 09:18:08.289478384 +0000 UTC m=+0.144921022 container start 03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 09:18:08 compute-0 podman[402153]: 2025-12-13 09:18:08.292986551 +0000 UTC m=+0.148429209 container attach 03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:18:08 compute-0 vigorous_lovelace[402170]: 167 167
Dec 13 09:18:08 compute-0 systemd[1]: libpod-03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b.scope: Deactivated successfully.
Dec 13 09:18:08 compute-0 podman[402153]: 2025-12-13 09:18:08.298845677 +0000 UTC m=+0.154288315 container died 03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lovelace, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1fbbc82bde121c026aaa9bd85ced7f8dc58de0daad75e7ebcfbb920c4925748-merged.mount: Deactivated successfully.
Dec 13 09:18:08 compute-0 podman[402153]: 2025-12-13 09:18:08.341385997 +0000 UTC m=+0.196828645 container remove 03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lovelace, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:18:08 compute-0 systemd[1]: libpod-conmon-03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b.scope: Deactivated successfully.
Dec 13 09:18:08 compute-0 podman[402192]: 2025-12-13 09:18:08.533863663 +0000 UTC m=+0.043371562 container create 5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_napier, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:18:08 compute-0 systemd[1]: Started libpod-conmon-5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da.scope.
Dec 13 09:18:08 compute-0 podman[402192]: 2025-12-13 09:18:08.514332286 +0000 UTC m=+0.023840195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:18:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2867ee08693af5e3779c34bc4194809944b77b29757bbb414bc3a6f73d507f07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2867ee08693af5e3779c34bc4194809944b77b29757bbb414bc3a6f73d507f07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2867ee08693af5e3779c34bc4194809944b77b29757bbb414bc3a6f73d507f07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2867ee08693af5e3779c34bc4194809944b77b29757bbb414bc3a6f73d507f07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:18:08 compute-0 podman[402192]: 2025-12-13 09:18:08.636938451 +0000 UTC m=+0.146446370 container init 5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Dec 13 09:18:08 compute-0 podman[402192]: 2025-12-13 09:18:08.646141531 +0000 UTC m=+0.155649430 container start 5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:18:08 compute-0 podman[402192]: 2025-12-13 09:18:08.649284669 +0000 UTC m=+0.158792568 container attach 5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_napier, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 09:18:08 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3511: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Dec 13 09:18:09 compute-0 nova_compute[248510]: 2025-12-13 09:18:09.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:09 compute-0 nova_compute[248510]: 2025-12-13 09:18:09.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:09 compute-0 lvm[402288]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:18:09 compute-0 lvm[402288]: VG ceph_vg2 finished
Dec 13 09:18:09 compute-0 lvm[402289]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:18:09 compute-0 lvm[402289]: VG ceph_vg1 finished
Dec 13 09:18:09 compute-0 lvm[402285]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:18:09 compute-0 lvm[402285]: VG ceph_vg0 finished
Dec 13 09:18:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:18:09
Dec 13 09:18:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:18:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:18:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'backups', '.mgr', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.meta']
Dec 13 09:18:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:18:09 compute-0 admiring_napier[402208]: {}
Dec 13 09:18:09 compute-0 systemd[1]: libpod-5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da.scope: Deactivated successfully.
Dec 13 09:18:09 compute-0 systemd[1]: libpod-5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da.scope: Consumed 1.516s CPU time.
Dec 13 09:18:09 compute-0 podman[402192]: 2025-12-13 09:18:09.535238045 +0000 UTC m=+1.044745964 container died 5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_napier, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 09:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2867ee08693af5e3779c34bc4194809944b77b29757bbb414bc3a6f73d507f07-merged.mount: Deactivated successfully.
Dec 13 09:18:09 compute-0 podman[402192]: 2025-12-13 09:18:09.587455436 +0000 UTC m=+1.096963335 container remove 5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:18:09 compute-0 systemd[1]: libpod-conmon-5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da.scope: Deactivated successfully.
Dec 13 09:18:09 compute-0 sudo[402116]: pam_unix(sudo:session): session closed for user root
Dec 13 09:18:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:18:09 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:18:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:18:09 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:18:09 compute-0 sudo[402304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:18:09 compute-0 sudo[402304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:18:09 compute-0 sudo[402304]: pam_unix(sudo:session): session closed for user root
Dec 13 09:18:10 compute-0 ceph-mon[76537]: pgmap v3511: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Dec 13 09:18:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:18:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:18:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:18:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:18:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:18:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:18:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:18:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:18:10 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3512: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:18:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:18:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:18:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:18:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:18:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:18:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:18:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:18:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:18:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:18:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:18:12 compute-0 ceph-mon[76537]: pgmap v3512: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:18:12 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3513: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:18:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:18:14 compute-0 ceph-mon[76537]: pgmap v3513: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:18:14 compute-0 nova_compute[248510]: 2025-12-13 09:18:14.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:14 compute-0 nova_compute[248510]: 2025-12-13 09:18:14.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:14 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3514: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:18:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:18:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1696508029' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:18:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:18:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1696508029' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:18:16 compute-0 ceph-mon[76537]: pgmap v3514: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:18:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1696508029' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:18:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1696508029' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:18:16 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3515: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:18:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:18:18 compute-0 ceph-mon[76537]: pgmap v3515: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:18:18 compute-0 nova_compute[248510]: 2025-12-13 09:18:18.419 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:18 compute-0 nova_compute[248510]: 2025-12-13 09:18:18.420 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:18 compute-0 nova_compute[248510]: 2025-12-13 09:18:18.445 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:18:18 compute-0 nova_compute[248510]: 2025-12-13 09:18:18.527 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:18 compute-0 nova_compute[248510]: 2025-12-13 09:18:18.528 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:18 compute-0 nova_compute[248510]: 2025-12-13 09:18:18.543 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:18:18 compute-0 nova_compute[248510]: 2025-12-13 09:18:18.544 248514 INFO nova.compute.claims [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:18:18 compute-0 nova_compute[248510]: 2025-12-13 09:18:18.711 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:18:18 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3516: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.232 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:18:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1236208476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.345 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.354 248514 DEBUG nova.compute.provider_tree [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.358 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.378 248514 DEBUG nova.scheduler.client.report [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.408 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.410 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.457 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.458 248514 DEBUG nova.network.neutron [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.476 248514 INFO nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.493 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.597 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.599 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.599 248514 INFO nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Creating image(s)
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.625 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.655 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.683 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.688 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.741 248514 DEBUG nova.policy [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.787 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.788 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.789 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.789 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.820 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:18:19 compute-0 nova_compute[248510]: 2025-12-13 09:18:19.825 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:18:20 compute-0 ceph-mon[76537]: pgmap v3516: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:18:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1236208476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:18:20 compute-0 nova_compute[248510]: 2025-12-13 09:18:20.191 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:18:20 compute-0 nova_compute[248510]: 2025-12-13 09:18:20.299 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:18:20 compute-0 nova_compute[248510]: 2025-12-13 09:18:20.384 248514 DEBUG nova.objects.instance [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:18:20 compute-0 nova_compute[248510]: 2025-12-13 09:18:20.404 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:18:20 compute-0 nova_compute[248510]: 2025-12-13 09:18:20.405 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Ensure instance console log exists: /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:18:20 compute-0 nova_compute[248510]: 2025-12-13 09:18:20.406 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:20 compute-0 nova_compute[248510]: 2025-12-13 09:18:20.406 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:20 compute-0 nova_compute[248510]: 2025-12-13 09:18:20.406 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:20 compute-0 nova_compute[248510]: 2025-12-13 09:18:20.763 248514 DEBUG nova.network.neutron [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Successfully created port: 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:18:20 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3517: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007742746986199878 of space, bias 1.0, pg target 0.23228240958599636 quantized to 32 (current 32)
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697116252414539 of space, bias 1.0, pg target 0.20091348757243618 quantized to 32 (current 32)
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:18:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:18:21 compute-0 nova_compute[248510]: 2025-12-13 09:18:21.596 248514 DEBUG nova.network.neutron [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Successfully updated port: 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:18:21 compute-0 nova_compute[248510]: 2025-12-13 09:18:21.620 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:18:21 compute-0 nova_compute[248510]: 2025-12-13 09:18:21.621 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:18:21 compute-0 nova_compute[248510]: 2025-12-13 09:18:21.621 248514 DEBUG nova.network.neutron [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:18:21 compute-0 nova_compute[248510]: 2025-12-13 09:18:21.743 248514 DEBUG nova.compute.manager [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-changed-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:18:21 compute-0 nova_compute[248510]: 2025-12-13 09:18:21.744 248514 DEBUG nova.compute.manager [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Refreshing instance network info cache due to event network-changed-051f9c2f-0627-4ddc-b79c-5b2542f0efa7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:18:21 compute-0 nova_compute[248510]: 2025-12-13 09:18:21.744 248514 DEBUG oslo_concurrency.lockutils [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:18:21 compute-0 nova_compute[248510]: 2025-12-13 09:18:21.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:18:21 compute-0 nova_compute[248510]: 2025-12-13 09:18:21.796 248514 DEBUG nova.network.neutron [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:18:22 compute-0 ceph-mon[76537]: pgmap v3517: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Dec 13 09:18:22 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3518: 321 pgs: 321 active+clean; 143 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.0 MiB/s wr, 3 op/s
Dec 13 09:18:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:18:24 compute-0 ceph-mon[76537]: pgmap v3518: 321 pgs: 321 active+clean; 143 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.0 MiB/s wr, 3 op/s
Dec 13 09:18:24 compute-0 nova_compute[248510]: 2025-12-13 09:18:24.233 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:24 compute-0 nova_compute[248510]: 2025-12-13 09:18:24.360 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:24 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3519: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:18:25 compute-0 ceph-mon[76537]: pgmap v3519: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.502 248514 DEBUG nova.network.neutron [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updating instance_info_cache with network_info: [{"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.574 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.574 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Instance network_info: |[{"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.576 248514 DEBUG oslo_concurrency.lockutils [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.576 248514 DEBUG nova.network.neutron [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Refreshing network info cache for port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.582 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Start _get_guest_xml network_info=[{"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.591 248514 WARNING nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.606 248514 DEBUG nova.virt.libvirt.host [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.607 248514 DEBUG nova.virt.libvirt.host [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.612 248514 DEBUG nova.virt.libvirt.host [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.613 248514 DEBUG nova.virt.libvirt.host [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.614 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.614 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.615 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.616 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.616 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.617 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.617 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.618 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.618 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.619 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.619 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.619 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.625 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:18:26 compute-0 nova_compute[248510]: 2025-12-13 09:18:26.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:18:26 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3520: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:18:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:18:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286539412' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.235 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.266 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.271 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.836 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 09:18:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:18:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252489878' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.887 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.889 248514 DEBUG nova.virt.libvirt.vif [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:18:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1366374546',display_name='tempest-TestGettingAddress-server-1366374546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1366374546',id=148,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKfVIjC+c2E48WoFz2Ud3i73TUL/pY6+s2/66GaU+lrLkwx0f9N1LcjeXJdOzvtjVb2jNhFM65SYiWkjK3RJ3eDcO6vcs74xDf/lQ4jF6SyHOGQ3syJuqY4MMzp7LfTqAw==',key_name='tempest-TestGettingAddress-444069759',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-3gsokvio',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:18:19Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=db3ef8e8-1a8a-42cc-a5ed-d3e401098f07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.889 248514 DEBUG nova.network.os_vif_util [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.890 248514 DEBUG nova.network.os_vif_util [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.892 248514 DEBUG nova.objects.instance [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.919 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:18:27 compute-0 nova_compute[248510]:   <uuid>db3ef8e8-1a8a-42cc-a5ed-d3e401098f07</uuid>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   <name>instance-00000094</name>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-1366374546</nova:name>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:18:26</nova:creationTime>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:18:27 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:18:27 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:18:27 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:18:27 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:18:27 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:18:27 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:18:27 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:18:27 compute-0 nova_compute[248510]:         <nova:port uuid="051f9c2f-0627-4ddc-b79c-5b2542f0efa7">
Dec 13 09:18:27 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe5c:fd37" ipVersion="6"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe5c:fd37" ipVersion="6"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <system>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <entry name="serial">db3ef8e8-1a8a-42cc-a5ed-d3e401098f07</entry>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <entry name="uuid">db3ef8e8-1a8a-42cc-a5ed-d3e401098f07</entry>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     </system>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   <os>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   </os>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   <features>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   </features>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk">
Dec 13 09:18:27 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       </source>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:18:27 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk.config">
Dec 13 09:18:27 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       </source>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:18:27 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:5c:fd:37"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <target dev="tap051f9c2f-06"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/console.log" append="off"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <video>
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     </video>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:18:27 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:18:27 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:18:27 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:18:27 compute-0 nova_compute[248510]: </domain>
Dec 13 09:18:27 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.921 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Preparing to wait for external event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.922 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.922 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.922 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.923 248514 DEBUG nova.virt.libvirt.vif [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:18:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1366374546',display_name='tempest-TestGettingAddress-server-1366374546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1366374546',id=148,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKfVIjC+c2E48WoFz2Ud3i73TUL/pY6+s2/66GaU+lrLkwx0f9N1LcjeXJdOzvtjVb2jNhFM65SYiWkjK3RJ3eDcO6vcs74xDf/lQ4jF6SyHOGQ3syJuqY4MMzp7LfTqAw==',key_name='tempest-TestGettingAddress-444069759',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-3gsokvio',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:18:19Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=db3ef8e8-1a8a-42cc-a5ed-d3e401098f07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.923 248514 DEBUG nova.network.os_vif_util [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.924 248514 DEBUG nova.network.os_vif_util [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.925 248514 DEBUG os_vif [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.926 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.926 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.931 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.932 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap051f9c2f-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.932 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap051f9c2f-06, col_values=(('external_ids', {'iface-id': '051f9c2f-0627-4ddc-b79c-5b2542f0efa7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:fd:37', 'vm-uuid': 'db3ef8e8-1a8a-42cc-a5ed-d3e401098f07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.934 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:27 compute-0 NetworkManager[50376]: <info>  [1765617507.9357] manager: (tap051f9c2f-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/663)
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.948 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:27 compute-0 nova_compute[248510]: 2025-12-13 09:18:27.950 248514 INFO os_vif [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06')
Dec 13 09:18:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:18:28 compute-0 ceph-mon[76537]: pgmap v3520: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:18:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4286539412' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:18:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/252489878' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.153 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.153 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.154 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:5c:fd:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.155 248514 INFO nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Using config drive
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.191 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.310 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.310 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.311 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.311 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aadb3d0-64f2-4531-9896-93b087cdea5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.578 248514 INFO nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Creating config drive at /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/disk.config
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.584 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdyfxvsn9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.740 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdyfxvsn9" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.771 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.775 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/disk.config db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.946 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/disk.config db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:18:28 compute-0 nova_compute[248510]: 2025-12-13 09:18:28.947 248514 INFO nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Deleting local config drive /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/disk.config because it was imported into RBD.
Dec 13 09:18:28 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3521: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:18:29 compute-0 kernel: tap051f9c2f-06: entered promiscuous mode
Dec 13 09:18:29 compute-0 NetworkManager[50376]: <info>  [1765617509.0241] manager: (tap051f9c2f-06): new Tun device (/org/freedesktop/NetworkManager/Devices/664)
Dec 13 09:18:29 compute-0 ovn_controller[148476]: 2025-12-13T09:18:29Z|01597|binding|INFO|Claiming lport 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 for this chassis.
Dec 13 09:18:29 compute-0 ovn_controller[148476]: 2025-12-13T09:18:29Z|01598|binding|INFO|051f9c2f-0627-4ddc-b79c-5b2542f0efa7: Claiming fa:16:3e:5c:fd:37 10.100.0.13 2001:db8:0:1:f816:3eff:fe5c:fd37 2001:db8::f816:3eff:fe5c:fd37
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.026 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:29 compute-0 ovn_controller[148476]: 2025-12-13T09:18:29Z|01599|binding|INFO|Setting lport 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 ovn-installed in OVS
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.051 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:29 compute-0 systemd-udevd[402652]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:18:29 compute-0 systemd-machined[210538]: New machine qemu-179-instance-00000094.
Dec 13 09:18:29 compute-0 NetworkManager[50376]: <info>  [1765617509.0819] device (tap051f9c2f-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:18:29 compute-0 NetworkManager[50376]: <info>  [1765617509.0826] device (tap051f9c2f-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:18:29 compute-0 systemd[1]: Started Virtual Machine qemu-179-instance-00000094.
Dec 13 09:18:29 compute-0 ovn_controller[148476]: 2025-12-13T09:18:29Z|01600|binding|INFO|Setting lport 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 up in Southbound
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.134 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:fd:37 10.100.0.13 2001:db8:0:1:f816:3eff:fe5c:fd37 2001:db8::f816:3eff:fe5c:fd37'], port_security=['fa:16:3e:5c:fd:37 10.100.0.13 2001:db8:0:1:f816:3eff:fe5c:fd37 2001:db8::f816:3eff:fe5c:fd37'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe5c:fd37/64 2001:db8::f816:3eff:fe5c:fd37/64', 'neutron:device_id': 'db3ef8e8-1a8a-42cc-a5ed-d3e401098f07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '747095ec-05b6-4395-bc25-989b7630b3c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f555052b-c852-42e8-9f82-1988cdbda87d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=051f9c2f-0627-4ddc-b79c-5b2542f0efa7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.137 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 in datapath e5815775-e5d0-4b72-a008-efd9f04c6ee4 bound to our chassis
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.140 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5815775-e5d0-4b72-a008-efd9f04c6ee4
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.162 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f106e3b-6490-4393-80b5-e142136a442c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.206 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7f366397-1921-4470-9786-d125e5fcac81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.211 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[52a7733e-9c5a-48fc-85b0-ce9d776bc16a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.259 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[13888107-9c42-4cab-b715-cbbeccdb62ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.284 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef04ddd-1730-432c-b1cf-3fcc093eba60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5815775-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:3e:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1004907, 'reachable_time': 36718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402668, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.307 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[194fa4ad-f656-474c-a059-0d26def8eca4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape5815775-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1004925, 'tstamp': 1004925}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402669, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape5815775-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1004929, 'tstamp': 1004929}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402669, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.309 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5815775-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.311 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.312 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.313 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5815775-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.313 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.314 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5815775-e0, col_values=(('external_ids', {'iface-id': '311a1d17-b8d0-425a-a0e2-260061ce5d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.315 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.459 248514 DEBUG nova.compute.manager [req-058893b3-6624-4038-8ecc-2df1497e6d9b req-78be10e3-b73b-428b-bebe-96b9ff671f70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.460 248514 DEBUG oslo_concurrency.lockutils [req-058893b3-6624-4038-8ecc-2df1497e6d9b req-78be10e3-b73b-428b-bebe-96b9ff671f70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.460 248514 DEBUG oslo_concurrency.lockutils [req-058893b3-6624-4038-8ecc-2df1497e6d9b req-78be10e3-b73b-428b-bebe-96b9ff671f70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.460 248514 DEBUG oslo_concurrency.lockutils [req-058893b3-6624-4038-8ecc-2df1497e6d9b req-78be10e3-b73b-428b-bebe-96b9ff671f70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.461 248514 DEBUG nova.compute.manager [req-058893b3-6624-4038-8ecc-2df1497e6d9b req-78be10e3-b73b-428b-bebe-96b9ff671f70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Processing event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.643 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.644 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617509.642649, db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.644 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] VM Started (Lifecycle Event)
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.646 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.649 248514 INFO nova.virt.libvirt.driver [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Instance spawned successfully.
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.650 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.755 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.774 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.779 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.780 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.781 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.781 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.782 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.783 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.826 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.827 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617509.6458447, db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.827 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] VM Paused (Lifecycle Event)
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.898 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:29 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.900 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.908 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.913 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617509.6460905, db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.913 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] VM Resumed (Lifecycle Event)
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.921 248514 INFO nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Took 10.32 seconds to spawn the instance on the hypervisor.
Dec 13 09:18:29 compute-0 nova_compute[248510]: 2025-12-13 09:18:29.922 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:18:30 compute-0 nova_compute[248510]: 2025-12-13 09:18:30.029 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:18:30 compute-0 nova_compute[248510]: 2025-12-13 09:18:30.033 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:18:30 compute-0 ceph-mon[76537]: pgmap v3521: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:18:30 compute-0 nova_compute[248510]: 2025-12-13 09:18:30.068 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:18:30 compute-0 nova_compute[248510]: 2025-12-13 09:18:30.092 248514 INFO nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Took 11.60 seconds to build instance.
Dec 13 09:18:30 compute-0 nova_compute[248510]: 2025-12-13 09:18:30.115 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:30 compute-0 nova_compute[248510]: 2025-12-13 09:18:30.708 248514 DEBUG nova.network.neutron [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updated VIF entry in instance network info cache for port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:18:30 compute-0 nova_compute[248510]: 2025-12-13 09:18:30.709 248514 DEBUG nova.network.neutron [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updating instance_info_cache with network_info: [{"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:18:30 compute-0 nova_compute[248510]: 2025-12-13 09:18:30.729 248514 DEBUG oslo_concurrency.lockutils [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:18:30 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3522: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:18:31 compute-0 nova_compute[248510]: 2025-12-13 09:18:31.448 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updating instance_info_cache with network_info: [{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:18:31 compute-0 nova_compute[248510]: 2025-12-13 09:18:31.468 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:18:31 compute-0 nova_compute[248510]: 2025-12-13 09:18:31.469 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 09:18:31 compute-0 nova_compute[248510]: 2025-12-13 09:18:31.548 248514 DEBUG nova.compute.manager [req-1559859c-5c34-488d-9092-edf4468b2639 req-f6f1408a-f5e7-45cb-9010-6297f1bac533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:18:31 compute-0 nova_compute[248510]: 2025-12-13 09:18:31.549 248514 DEBUG oslo_concurrency.lockutils [req-1559859c-5c34-488d-9092-edf4468b2639 req-f6f1408a-f5e7-45cb-9010-6297f1bac533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:31 compute-0 nova_compute[248510]: 2025-12-13 09:18:31.549 248514 DEBUG oslo_concurrency.lockutils [req-1559859c-5c34-488d-9092-edf4468b2639 req-f6f1408a-f5e7-45cb-9010-6297f1bac533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:31 compute-0 nova_compute[248510]: 2025-12-13 09:18:31.549 248514 DEBUG oslo_concurrency.lockutils [req-1559859c-5c34-488d-9092-edf4468b2639 req-f6f1408a-f5e7-45cb-9010-6297f1bac533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:31 compute-0 nova_compute[248510]: 2025-12-13 09:18:31.550 248514 DEBUG nova.compute.manager [req-1559859c-5c34-488d-9092-edf4468b2639 req-f6f1408a-f5e7-45cb-9010-6297f1bac533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] No waiting events found dispatching network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:18:31 compute-0 nova_compute[248510]: 2025-12-13 09:18:31.550 248514 WARNING nova.compute.manager [req-1559859c-5c34-488d-9092-edf4468b2639 req-f6f1408a-f5e7-45cb-9010-6297f1bac533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received unexpected event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 for instance with vm_state active and task_state None.
Dec 13 09:18:32 compute-0 ceph-mon[76537]: pgmap v3522: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:18:32 compute-0 nova_compute[248510]: 2025-12-13 09:18:32.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:18:32 compute-0 nova_compute[248510]: 2025-12-13 09:18:32.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:32 compute-0 nova_compute[248510]: 2025-12-13 09:18:32.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:32 compute-0 nova_compute[248510]: 2025-12-13 09:18:32.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:32 compute-0 nova_compute[248510]: 2025-12-13 09:18:32.799 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:18:32 compute-0 nova_compute[248510]: 2025-12-13 09:18:32.799 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:18:32 compute-0 nova_compute[248510]: 2025-12-13 09:18:32.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:32 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3523: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 13 09:18:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:18:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:18:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2056976498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:18:33 compute-0 ceph-mon[76537]: pgmap v3523: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 13 09:18:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2056976498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.398 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.491 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.492 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.497 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.497 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.660 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.661 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3126MB free_disk=59.92107794713229GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.662 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.662 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.855 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 7aadb3d0-64f2-4531-9896-93b087cdea5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.855 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.856 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.856 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:18:33 compute-0 nova_compute[248510]: 2025-12-13 09:18:33.996 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:18:34 compute-0 nova_compute[248510]: 2025-12-13 09:18:34.055 248514 DEBUG nova.compute.manager [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-changed-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:18:34 compute-0 nova_compute[248510]: 2025-12-13 09:18:34.056 248514 DEBUG nova.compute.manager [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Refreshing instance network info cache due to event network-changed-051f9c2f-0627-4ddc-b79c-5b2542f0efa7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:18:34 compute-0 nova_compute[248510]: 2025-12-13 09:18:34.056 248514 DEBUG oslo_concurrency.lockutils [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:18:34 compute-0 nova_compute[248510]: 2025-12-13 09:18:34.057 248514 DEBUG oslo_concurrency.lockutils [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:18:34 compute-0 nova_compute[248510]: 2025-12-13 09:18:34.057 248514 DEBUG nova.network.neutron [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Refreshing network info cache for port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:18:34 compute-0 nova_compute[248510]: 2025-12-13 09:18:34.238 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:18:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427994080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:18:34 compute-0 nova_compute[248510]: 2025-12-13 09:18:34.590 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:18:34 compute-0 nova_compute[248510]: 2025-12-13 09:18:34.598 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:18:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3427994080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:18:34 compute-0 nova_compute[248510]: 2025-12-13 09:18:34.628 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:18:34 compute-0 nova_compute[248510]: 2025-12-13 09:18:34.661 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:18:34 compute-0 nova_compute[248510]: 2025-12-13 09:18:34.661 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:34 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3524: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 799 KiB/s wr, 97 op/s
Dec 13 09:18:35 compute-0 ceph-mon[76537]: pgmap v3524: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 799 KiB/s wr, 97 op/s
Dec 13 09:18:35 compute-0 nova_compute[248510]: 2025-12-13 09:18:35.661 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:18:36 compute-0 nova_compute[248510]: 2025-12-13 09:18:36.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:18:36 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3525: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:18:37 compute-0 nova_compute[248510]: 2025-12-13 09:18:37.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:18:37 compute-0 nova_compute[248510]: 2025-12-13 09:18:37.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:18:37 compute-0 nova_compute[248510]: 2025-12-13 09:18:37.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:18:37 compute-0 nova_compute[248510]: 2025-12-13 09:18:37.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:18:37 compute-0 podman[402759]: 2025-12-13 09:18:37.998145995 +0000 UTC m=+0.075885972 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 13 09:18:38 compute-0 podman[402758]: 2025-12-13 09:18:38.032146702 +0000 UTC m=+0.109628093 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 09:18:38 compute-0 ceph-mon[76537]: pgmap v3525: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:18:38 compute-0 podman[402757]: 2025-12-13 09:18:38.056425697 +0000 UTC m=+0.128933504 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 09:18:38 compute-0 nova_compute[248510]: 2025-12-13 09:18:38.543 248514 DEBUG nova.network.neutron [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updated VIF entry in instance network info cache for port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:18:38 compute-0 nova_compute[248510]: 2025-12-13 09:18:38.544 248514 DEBUG nova.network.neutron [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updating instance_info_cache with network_info: [{"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:18:38 compute-0 nova_compute[248510]: 2025-12-13 09:18:38.786 248514 DEBUG oslo_concurrency.lockutils [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:18:38 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3526: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:18:39 compute-0 nova_compute[248510]: 2025-12-13 09:18:39.238 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:39.905 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:18:40 compute-0 ceph-mon[76537]: pgmap v3526: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.057962) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617520058036, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1541, "num_deletes": 251, "total_data_size": 2613330, "memory_usage": 2656688, "flush_reason": "Manual Compaction"}
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617520077983, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 2544917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69026, "largest_seqno": 70566, "table_properties": {"data_size": 2537639, "index_size": 4284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14932, "raw_average_key_size": 20, "raw_value_size": 2523261, "raw_average_value_size": 3382, "num_data_blocks": 191, "num_entries": 746, "num_filter_entries": 746, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617358, "oldest_key_time": 1765617358, "file_creation_time": 1765617520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 20066 microseconds, and 6905 cpu microseconds.
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.078034) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 2544917 bytes OK
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.078058) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.080198) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.080214) EVENT_LOG_v1 {"time_micros": 1765617520080209, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.080231) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 2606588, prev total WAL file size 2606588, number of live WAL files 2.
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.081147) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(2485KB)], [164(9253KB)]
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617520081292, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 12020564, "oldest_snapshot_seqno": -1}
Dec 13 09:18:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:18:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:18:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:18:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8917 keys, 10227131 bytes, temperature: kUnknown
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617520192629, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 10227131, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10171956, "index_size": 31766, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22341, "raw_key_size": 233889, "raw_average_key_size": 26, "raw_value_size": 10017447, "raw_average_value_size": 1123, "num_data_blocks": 1221, "num_entries": 8917, "num_filter_entries": 8917, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:18:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:18:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.192921) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10227131 bytes
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.196529) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.9 rd, 91.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.0 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(8.7) write-amplify(4.0) OK, records in: 9431, records dropped: 514 output_compression: NoCompression
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.196553) EVENT_LOG_v1 {"time_micros": 1765617520196542, "job": 102, "event": "compaction_finished", "compaction_time_micros": 111443, "compaction_time_cpu_micros": 32727, "output_level": 6, "num_output_files": 1, "total_output_size": 10227131, "num_input_records": 9431, "num_output_records": 8917, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617520197043, "job": 102, "event": "table_file_deletion", "file_number": 166}
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617520198854, "job": 102, "event": "table_file_deletion", "file_number": 164}
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.080969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.198960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.198966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.198967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.198969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:18:40 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.198970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:18:40 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3527: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:18:42 compute-0 ceph-mon[76537]: pgmap v3527: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:18:42 compute-0 ovn_controller[148476]: 2025-12-13T09:18:42Z|00207|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5c:fd:37 10.100.0.13
Dec 13 09:18:42 compute-0 ovn_controller[148476]: 2025-12-13T09:18:42Z|00208|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5c:fd:37 10.100.0.13
Dec 13 09:18:42 compute-0 nova_compute[248510]: 2025-12-13 09:18:42.947 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:42 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3528: 321 pgs: 321 active+clean; 175 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 761 KiB/s wr, 91 op/s
Dec 13 09:18:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:18:44 compute-0 ceph-mon[76537]: pgmap v3528: 321 pgs: 321 active+clean; 175 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 761 KiB/s wr, 91 op/s
Dec 13 09:18:44 compute-0 nova_compute[248510]: 2025-12-13 09:18:44.241 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:44 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3529: 321 pgs: 321 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Dec 13 09:18:46 compute-0 ceph-mon[76537]: pgmap v3529: 321 pgs: 321 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Dec 13 09:18:46 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3530: 321 pgs: 321 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Dec 13 09:18:47 compute-0 nova_compute[248510]: 2025-12-13 09:18:47.950 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:18:48 compute-0 ceph-mon[76537]: pgmap v3530: 321 pgs: 321 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Dec 13 09:18:48 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3531: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:18:49 compute-0 nova_compute[248510]: 2025-12-13 09:18:49.242 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:49 compute-0 nova_compute[248510]: 2025-12-13 09:18:49.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:18:50 compute-0 ceph-mon[76537]: pgmap v3531: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:18:50 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3532: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 09:18:52 compute-0 ceph-mon[76537]: pgmap v3532: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 09:18:52 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3533: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:18:52 compute-0 nova_compute[248510]: 2025-12-13 09:18:52.992 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.084 248514 DEBUG nova.compute.manager [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-changed-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.085 248514 DEBUG nova.compute.manager [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Refreshing instance network info cache due to event network-changed-051f9c2f-0627-4ddc-b79c-5b2542f0efa7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.085 248514 DEBUG oslo_concurrency.lockutils [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.085 248514 DEBUG oslo_concurrency.lockutils [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.085 248514 DEBUG nova.network.neutron [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Refreshing network info cache for port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:18:54 compute-0 ceph-mon[76537]: pgmap v3533: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.245 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.265 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.266 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.266 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.266 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.266 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.268 248514 INFO nova.compute.manager [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Terminating instance
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.268 248514 DEBUG nova.compute.manager [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:18:54 compute-0 kernel: tap051f9c2f-06 (unregistering): left promiscuous mode
Dec 13 09:18:54 compute-0 NetworkManager[50376]: <info>  [1765617534.3251] device (tap051f9c2f-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:18:54 compute-0 ovn_controller[148476]: 2025-12-13T09:18:54Z|01601|binding|INFO|Releasing lport 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 from this chassis (sb_readonly=0)
Dec 13 09:18:54 compute-0 ovn_controller[148476]: 2025-12-13T09:18:54Z|01602|binding|INFO|Setting lport 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 down in Southbound
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:54 compute-0 ovn_controller[148476]: 2025-12-13T09:18:54Z|01603|binding|INFO|Removing iface tap051f9c2f-06 ovn-installed in OVS
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.342 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.369 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:54 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000094.scope: Deactivated successfully.
Dec 13 09:18:54 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000094.scope: Consumed 14.828s CPU time.
Dec 13 09:18:54 compute-0 systemd-machined[210538]: Machine qemu-179-instance-00000094 terminated.
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.477 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:fd:37 10.100.0.13 2001:db8:0:1:f816:3eff:fe5c:fd37 2001:db8::f816:3eff:fe5c:fd37'], port_security=['fa:16:3e:5c:fd:37 10.100.0.13 2001:db8:0:1:f816:3eff:fe5c:fd37 2001:db8::f816:3eff:fe5c:fd37'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe5c:fd37/64 2001:db8::f816:3eff:fe5c:fd37/64', 'neutron:device_id': 'db3ef8e8-1a8a-42cc-a5ed-d3e401098f07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '747095ec-05b6-4395-bc25-989b7630b3c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f555052b-c852-42e8-9f82-1988cdbda87d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=051f9c2f-0627-4ddc-b79c-5b2542f0efa7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.478 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 in datapath e5815775-e5d0-4b72-a008-efd9f04c6ee4 unbound from our chassis
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.480 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5815775-e5d0-4b72-a008-efd9f04c6ee4
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.501 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4b3ec5-da19-4143-a26d-3a114d03a536]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.514 248514 INFO nova.virt.libvirt.driver [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Instance destroyed successfully.
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.515 248514 DEBUG nova.objects.instance [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.533 248514 DEBUG nova.virt.libvirt.vif [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:18:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1366374546',display_name='tempest-TestGettingAddress-server-1366374546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1366374546',id=148,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKfVIjC+c2E48WoFz2Ud3i73TUL/pY6+s2/66GaU+lrLkwx0f9N1LcjeXJdOzvtjVb2jNhFM65SYiWkjK3RJ3eDcO6vcs74xDf/lQ4jF6SyHOGQ3syJuqY4MMzp7LfTqAw==',key_name='tempest-TestGettingAddress-444069759',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:18:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-3gsokvio',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:18:30Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=db3ef8e8-1a8a-42cc-a5ed-d3e401098f07,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.535 248514 DEBUG nova.network.os_vif_util [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.536 248514 DEBUG nova.network.os_vif_util [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.537 248514 DEBUG os_vif [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.539 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap051f9c2f-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.547 248514 INFO os_vif [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06')
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.557 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0f731302-28e7-4774-9266-679e1e147ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.562 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[df634efe-6157-4dd5-bb6b-ae53e6a63ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.616 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8293dbda-fa8b-466a-a82d-6077b7580e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.642 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2a286de2-7944-46b5-8d55-0f8062953afd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5815775-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:3e:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1004907, 'reachable_time': 36718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402855, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.667 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f4a611-e9de-4ee5-abb0-424b7a5da9e1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape5815775-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1004925, 'tstamp': 1004925}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402856, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape5815775-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1004929, 'tstamp': 1004929}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402856, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.670 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5815775-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.673 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5815775-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.674 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:18:54 compute-0 nova_compute[248510]: 2025-12-13 09:18:54.673 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.674 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5815775-e0, col_values=(('external_ids', {'iface-id': '311a1d17-b8d0-425a-a0e2-260061ce5d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:18:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.675 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:18:54 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3534: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 1.4 MiB/s wr, 48 op/s
Dec 13 09:18:55 compute-0 nova_compute[248510]: 2025-12-13 09:18:55.208 248514 INFO nova.virt.libvirt.driver [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Deleting instance files /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_del
Dec 13 09:18:55 compute-0 nova_compute[248510]: 2025-12-13 09:18:55.210 248514 INFO nova.virt.libvirt.driver [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Deletion of /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_del complete
Dec 13 09:18:55 compute-0 nova_compute[248510]: 2025-12-13 09:18:55.402 248514 INFO nova.compute.manager [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Took 1.13 seconds to destroy the instance on the hypervisor.
Dec 13 09:18:55 compute-0 nova_compute[248510]: 2025-12-13 09:18:55.403 248514 DEBUG oslo.service.loopingcall [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:18:55 compute-0 nova_compute[248510]: 2025-12-13 09:18:55.404 248514 DEBUG nova.compute.manager [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:18:55 compute-0 nova_compute[248510]: 2025-12-13 09:18:55.405 248514 DEBUG nova.network.neutron [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:18:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:55.455 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:55.456 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:18:55.456 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:56 compute-0 ceph-mon[76537]: pgmap v3534: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 1.4 MiB/s wr, 48 op/s
Dec 13 09:18:56 compute-0 nova_compute[248510]: 2025-12-13 09:18:56.608 248514 DEBUG nova.compute.manager [req-a6c49155-494c-4fab-b3c7-86e3a56eb6d1 req-55aa8e09-3229-461b-be30-89beb2c9afbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-vif-unplugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:18:56 compute-0 nova_compute[248510]: 2025-12-13 09:18:56.609 248514 DEBUG oslo_concurrency.lockutils [req-a6c49155-494c-4fab-b3c7-86e3a56eb6d1 req-55aa8e09-3229-461b-be30-89beb2c9afbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:56 compute-0 nova_compute[248510]: 2025-12-13 09:18:56.610 248514 DEBUG oslo_concurrency.lockutils [req-a6c49155-494c-4fab-b3c7-86e3a56eb6d1 req-55aa8e09-3229-461b-be30-89beb2c9afbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:56 compute-0 nova_compute[248510]: 2025-12-13 09:18:56.610 248514 DEBUG oslo_concurrency.lockutils [req-a6c49155-494c-4fab-b3c7-86e3a56eb6d1 req-55aa8e09-3229-461b-be30-89beb2c9afbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:56 compute-0 nova_compute[248510]: 2025-12-13 09:18:56.611 248514 DEBUG nova.compute.manager [req-a6c49155-494c-4fab-b3c7-86e3a56eb6d1 req-55aa8e09-3229-461b-be30-89beb2c9afbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] No waiting events found dispatching network-vif-unplugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:18:56 compute-0 nova_compute[248510]: 2025-12-13 09:18:56.611 248514 DEBUG nova.compute.manager [req-a6c49155-494c-4fab-b3c7-86e3a56eb6d1 req-55aa8e09-3229-461b-be30-89beb2c9afbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-vif-unplugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:18:56 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3535: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 66 KiB/s wr, 12 op/s
Dec 13 09:18:57 compute-0 nova_compute[248510]: 2025-12-13 09:18:57.794 248514 DEBUG nova.network.neutron [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:18:57 compute-0 nova_compute[248510]: 2025-12-13 09:18:57.919 248514 INFO nova.compute.manager [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Took 2.51 seconds to deallocate network for instance.
Dec 13 09:18:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:18:58 compute-0 ceph-mon[76537]: pgmap v3535: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 66 KiB/s wr, 12 op/s
Dec 13 09:18:58 compute-0 nova_compute[248510]: 2025-12-13 09:18:58.182 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:58 compute-0 nova_compute[248510]: 2025-12-13 09:18:58.183 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:58 compute-0 nova_compute[248510]: 2025-12-13 09:18:58.276 248514 DEBUG oslo_concurrency.processutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:18:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:18:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373072729' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:18:58 compute-0 nova_compute[248510]: 2025-12-13 09:18:58.930 248514 DEBUG oslo_concurrency.processutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:18:58 compute-0 nova_compute[248510]: 2025-12-13 09:18:58.939 248514 DEBUG nova.compute.provider_tree [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:18:58 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3536: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 70 KiB/s wr, 40 op/s
Dec 13 09:18:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2373072729' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.247 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.287 248514 DEBUG nova.compute.manager [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.288 248514 DEBUG oslo_concurrency.lockutils [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.289 248514 DEBUG oslo_concurrency.lockutils [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.289 248514 DEBUG oslo_concurrency.lockutils [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.290 248514 DEBUG nova.compute.manager [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] No waiting events found dispatching network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.290 248514 WARNING nova.compute.manager [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received unexpected event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 for instance with vm_state deleted and task_state None.
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.291 248514 DEBUG nova.compute.manager [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-vif-deleted-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.300 248514 DEBUG nova.scheduler.client.report [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.425 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.474 248514 INFO nova.scheduler.client.report [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance db3ef8e8-1a8a-42cc-a5ed-d3e401098f07
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.497 248514 DEBUG nova.network.neutron [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updated VIF entry in instance network info cache for port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.497 248514 DEBUG nova.network.neutron [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updating instance_info_cache with network_info: [{"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.818 248514 DEBUG oslo_concurrency.lockutils [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:18:59 compute-0 nova_compute[248510]: 2025-12-13 09:18:59.859 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:19:00 compute-0 ceph-mon[76537]: pgmap v3536: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 70 KiB/s wr, 40 op/s
Dec 13 09:19:00 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3537: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 30 op/s
Dec 13 09:19:02 compute-0 ceph-mon[76537]: pgmap v3537: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 30 op/s
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.258 248514 DEBUG nova.compute.manager [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-changed-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.258 248514 DEBUG nova.compute.manager [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Refreshing instance network info cache due to event network-changed-1271974a-de11-42b5-87b8-470c51840315. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.259 248514 DEBUG oslo_concurrency.lockutils [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.259 248514 DEBUG oslo_concurrency.lockutils [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.259 248514 DEBUG nova.network.neutron [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Refreshing network info cache for port 1271974a-de11-42b5-87b8-470c51840315 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.326 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.326 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.327 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.327 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.327 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.328 248514 INFO nova.compute.manager [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Terminating instance
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.330 248514 DEBUG nova.compute.manager [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:19:02 compute-0 kernel: tap1271974a-de (unregistering): left promiscuous mode
Dec 13 09:19:02 compute-0 NetworkManager[50376]: <info>  [1765617542.3821] device (tap1271974a-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:19:02 compute-0 ovn_controller[148476]: 2025-12-13T09:19:02Z|01604|binding|INFO|Releasing lport 1271974a-de11-42b5-87b8-470c51840315 from this chassis (sb_readonly=0)
Dec 13 09:19:02 compute-0 ovn_controller[148476]: 2025-12-13T09:19:02Z|01605|binding|INFO|Setting lport 1271974a-de11-42b5-87b8-470c51840315 down in Southbound
Dec 13 09:19:02 compute-0 ovn_controller[148476]: 2025-12-13T09:19:02Z|01606|binding|INFO|Removing iface tap1271974a-de ovn-installed in OVS
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.429 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.439 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:f6:8c 10.100.0.6 2001:db8:0:1:f816:3eff:fed5:f68c 2001:db8::f816:3eff:fed5:f68c'], port_security=['fa:16:3e:d5:f6:8c 10.100.0.6 2001:db8:0:1:f816:3eff:fed5:f68c 2001:db8::f816:3eff:fed5:f68c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fed5:f68c/64 2001:db8::f816:3eff:fed5:f68c/64', 'neutron:device_id': '7aadb3d0-64f2-4531-9896-93b087cdea5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '747095ec-05b6-4395-bc25-989b7630b3c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f555052b-c852-42e8-9f82-1988cdbda87d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1271974a-de11-42b5-87b8-470c51840315) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.440 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1271974a-de11-42b5-87b8-470c51840315 in datapath e5815775-e5d0-4b72-a008-efd9f04c6ee4 unbound from our chassis
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.442 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5815775-e5d0-4b72-a008-efd9f04c6ee4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.443 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2af1db-293e-494e-9fdd-167cbdfd0474]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.444 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4 namespace which is not needed anymore
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:02 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000093.scope: Deactivated successfully.
Dec 13 09:19:02 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000093.scope: Consumed 16.383s CPU time.
Dec 13 09:19:02 compute-0 systemd-machined[210538]: Machine qemu-178-instance-00000093 terminated.
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.571 248514 INFO nova.virt.libvirt.driver [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Instance destroyed successfully.
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.572 248514 DEBUG nova.objects.instance [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 7aadb3d0-64f2-4531-9896-93b087cdea5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:19:02 compute-0 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [NOTICE]   (401650) : haproxy version is 2.8.14-c23fe91
Dec 13 09:19:02 compute-0 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [NOTICE]   (401650) : path to executable is /usr/sbin/haproxy
Dec 13 09:19:02 compute-0 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [WARNING]  (401650) : Exiting Master process...
Dec 13 09:19:02 compute-0 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [ALERT]    (401650) : Current worker (401652) exited with code 143 (Terminated)
Dec 13 09:19:02 compute-0 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [WARNING]  (401650) : All workers exited. Exiting... (0)
Dec 13 09:19:02 compute-0 systemd[1]: libpod-5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763.scope: Deactivated successfully.
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.598 248514 DEBUG nova.virt.libvirt.vif [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:17:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1903797139',display_name='tempest-TestGettingAddress-server-1903797139',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1903797139',id=147,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKfVIjC+c2E48WoFz2Ud3i73TUL/pY6+s2/66GaU+lrLkwx0f9N1LcjeXJdOzvtjVb2jNhFM65SYiWkjK3RJ3eDcO6vcs74xDf/lQ4jF6SyHOGQ3syJuqY4MMzp7LfTqAw==',key_name='tempest-TestGettingAddress-444069759',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:17:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-zvq7q3d0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:17:53Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=7aadb3d0-64f2-4531-9896-93b087cdea5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.598 248514 DEBUG nova.network.os_vif_util [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.600 248514 DEBUG nova.network.os_vif_util [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.600 248514 DEBUG os_vif [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:19:02 compute-0 podman[402904]: 2025-12-13 09:19:02.602508242 +0000 UTC m=+0.051887823 container died 5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.602 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.603 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1271974a-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.608 248514 INFO os_vif [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de')
Dec 13 09:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763-userdata-shm.mount: Deactivated successfully.
Dec 13 09:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cab876830acab31fce5dbc2e972338fc2e9949b13d6a5f72f360a2dbded2beb-merged.mount: Deactivated successfully.
Dec 13 09:19:02 compute-0 podman[402904]: 2025-12-13 09:19:02.649125804 +0000 UTC m=+0.098505425 container cleanup 5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 09:19:02 compute-0 systemd[1]: libpod-conmon-5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763.scope: Deactivated successfully.
Dec 13 09:19:02 compute-0 podman[402959]: 2025-12-13 09:19:02.77098042 +0000 UTC m=+0.079803689 container remove 5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.778 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e43fdf81-7406-4f47-a4c0-a17f4dcac17a]: (4, ('Sat Dec 13 09:19:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4 (5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763)\n5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763\nSat Dec 13 09:19:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4 (5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763)\n5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.781 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64e9522f-9cdf-4b0d-a5b2-d772560c5eea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.781 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5815775-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:19:02 compute-0 kernel: tape5815775-e0: left promiscuous mode
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.796 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.799 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a090df-06e6-4537-b897-8f382441aee3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.813 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[17be6174-9311-4e45-9dbf-2a5682111381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.815 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2c67a109-ed5c-4191-9a19-69a328c8985b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.832 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3b581ddd-7565-44f2-b880-2137f731a772]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1004898, 'reachable_time': 25872, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402975, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:19:02 compute-0 systemd[1]: run-netns-ovnmeta\x2de5815775\x2de5d0\x2d4b72\x2da008\x2defd9f04c6ee4.mount: Deactivated successfully.
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.836 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:19:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.837 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4c877d2a-9acf-477a-b833-b0f24ed587e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.909 248514 INFO nova.virt.libvirt.driver [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Deleting instance files /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c_del
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.911 248514 INFO nova.virt.libvirt.driver [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Deletion of /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c_del complete
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.986 248514 INFO nova.compute.manager [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Took 0.66 seconds to destroy the instance on the hypervisor.
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.986 248514 DEBUG oslo.service.loopingcall [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.987 248514 DEBUG nova.compute.manager [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:19:02 compute-0 nova_compute[248510]: 2025-12-13 09:19:02.987 248514 DEBUG nova.network.neutron [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:19:02 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3538: 321 pgs: 321 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 13 KiB/s wr, 35 op/s
Dec 13 09:19:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:19:03 compute-0 nova_compute[248510]: 2025-12-13 09:19:03.829 248514 DEBUG nova.network.neutron [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:19:03 compute-0 nova_compute[248510]: 2025-12-13 09:19:03.856 248514 INFO nova.compute.manager [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Took 0.87 seconds to deallocate network for instance.
Dec 13 09:19:03 compute-0 nova_compute[248510]: 2025-12-13 09:19:03.939 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:19:03 compute-0 nova_compute[248510]: 2025-12-13 09:19:03.940 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:19:03 compute-0 nova_compute[248510]: 2025-12-13 09:19:03.948 248514 DEBUG nova.compute.manager [req-8c64a81e-85bb-4c99-93c7-4ca94429135b req-e094ef57-43d9-4f65-82e9-760a78fcca65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-vif-deleted-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:19:03 compute-0 nova_compute[248510]: 2025-12-13 09:19:03.985 248514 DEBUG oslo_concurrency.processutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:19:04 compute-0 ceph-mon[76537]: pgmap v3538: 321 pgs: 321 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 13 KiB/s wr, 35 op/s
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.250 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.369 248514 DEBUG nova.compute.manager [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-vif-unplugged-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.370 248514 DEBUG oslo_concurrency.lockutils [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.371 248514 DEBUG oslo_concurrency.lockutils [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.372 248514 DEBUG oslo_concurrency.lockutils [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.372 248514 DEBUG nova.compute.manager [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] No waiting events found dispatching network-vif-unplugged-1271974a-de11-42b5-87b8-470c51840315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.373 248514 WARNING nova.compute.manager [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received unexpected event network-vif-unplugged-1271974a-de11-42b5-87b8-470c51840315 for instance with vm_state deleted and task_state None.
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.374 248514 DEBUG nova.compute.manager [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.375 248514 DEBUG oslo_concurrency.lockutils [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.375 248514 DEBUG oslo_concurrency.lockutils [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.376 248514 DEBUG oslo_concurrency.lockutils [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.377 248514 DEBUG nova.compute.manager [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] No waiting events found dispatching network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.377 248514 WARNING nova.compute.manager [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received unexpected event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 for instance with vm_state deleted and task_state None.
Dec 13 09:19:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:19:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046399464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.589 248514 DEBUG oslo_concurrency.processutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.595 248514 DEBUG nova.compute.provider_tree [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.620 248514 DEBUG nova.scheduler.client.report [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.657 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.692 248514 INFO nova.scheduler.client.report [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 7aadb3d0-64f2-4531-9896-93b087cdea5c
Dec 13 09:19:04 compute-0 nova_compute[248510]: 2025-12-13 09:19:04.767 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:19:04 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3539: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 55 op/s
Dec 13 09:19:05 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3046399464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:19:06 compute-0 ceph-mon[76537]: pgmap v3539: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 55 op/s
Dec 13 09:19:06 compute-0 nova_compute[248510]: 2025-12-13 09:19:06.313 248514 DEBUG nova.network.neutron [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updated VIF entry in instance network info cache for port 1271974a-de11-42b5-87b8-470c51840315. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:19:06 compute-0 nova_compute[248510]: 2025-12-13 09:19:06.313 248514 DEBUG nova.network.neutron [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updating instance_info_cache with network_info: [{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:19:06 compute-0 nova_compute[248510]: 2025-12-13 09:19:06.341 248514 DEBUG oslo_concurrency.lockutils [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:19:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3540: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 4.4 KiB/s wr, 54 op/s
Dec 13 09:19:07 compute-0 nova_compute[248510]: 2025-12-13 09:19:07.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:07 compute-0 nova_compute[248510]: 2025-12-13 09:19:07.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:19:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:19:08 compute-0 ceph-mon[76537]: pgmap v3540: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 4.4 KiB/s wr, 54 op/s
Dec 13 09:19:08 compute-0 podman[403000]: 2025-12-13 09:19:08.977641012 +0000 UTC m=+0.056153530 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:19:08 compute-0 podman[402999]: 2025-12-13 09:19:08.989412085 +0000 UTC m=+0.074008235 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 09:19:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3541: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 4.7 KiB/s wr, 55 op/s
Dec 13 09:19:09 compute-0 podman[402998]: 2025-12-13 09:19:09.044567439 +0000 UTC m=+0.126012760 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Dec 13 09:19:09 compute-0 nova_compute[248510]: 2025-12-13 09:19:09.275 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:19:09
Dec 13 09:19:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:19:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:19:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'volumes', 'vms', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta']
Dec 13 09:19:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:19:09 compute-0 ceph-mon[76537]: pgmap v3541: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 4.7 KiB/s wr, 55 op/s
Dec 13 09:19:09 compute-0 nova_compute[248510]: 2025-12-13 09:19:09.511 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617534.5099504, db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:19:09 compute-0 nova_compute[248510]: 2025-12-13 09:19:09.512 248514 INFO nova.compute.manager [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] VM Stopped (Lifecycle Event)
Dec 13 09:19:09 compute-0 nova_compute[248510]: 2025-12-13 09:19:09.544 248514 DEBUG nova.compute.manager [None req-b80e8ac5-f84c-4138-b40a-4764013f1218 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:19:09 compute-0 sudo[403056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:19:09 compute-0 sudo[403056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:19:09 compute-0 sudo[403056]: pam_unix(sudo:session): session closed for user root
Dec 13 09:19:09 compute-0 sudo[403081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 09:19:09 compute-0 sudo[403081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:19:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:19:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:19:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:19:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:19:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:19:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:19:10 compute-0 podman[403148]: 2025-12-13 09:19:10.394308711 +0000 UTC m=+0.070947728 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:19:10 compute-0 podman[403148]: 2025-12-13 09:19:10.54757754 +0000 UTC m=+0.224216547 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 09:19:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3542: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:19:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:19:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:19:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:19:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:19:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:19:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:19:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:19:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:19:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:19:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:19:11 compute-0 sudo[403081]: pam_unix(sudo:session): session closed for user root
Dec 13 09:19:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:19:11 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:19:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:19:11 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:19:11 compute-0 sudo[403331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:19:11 compute-0 sudo[403331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:19:11 compute-0 sudo[403331]: pam_unix(sudo:session): session closed for user root
Dec 13 09:19:11 compute-0 sudo[403356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:19:11 compute-0 sudo[403356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:19:12 compute-0 ceph-mon[76537]: pgmap v3542: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:19:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:19:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:19:12 compute-0 sudo[403356]: pam_unix(sudo:session): session closed for user root
Dec 13 09:19:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:19:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:19:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:19:12 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:19:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:19:12 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:19:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:19:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:19:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:19:12 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:19:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:19:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:19:12 compute-0 sudo[403413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:19:12 compute-0 sudo[403413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:19:12 compute-0 sudo[403413]: pam_unix(sudo:session): session closed for user root
Dec 13 09:19:12 compute-0 auditd[696]: Audit daemon rotating log files
Dec 13 09:19:12 compute-0 sudo[403438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:19:12 compute-0 sudo[403438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:19:12 compute-0 nova_compute[248510]: 2025-12-13 09:19:12.609 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:12 compute-0 podman[403474]: 2025-12-13 09:19:12.775296518 +0000 UTC m=+0.033027354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:19:12 compute-0 podman[403474]: 2025-12-13 09:19:12.913737328 +0000 UTC m=+0.171468084 container create aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 09:19:12 compute-0 systemd[1]: Started libpod-conmon-aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a.scope.
Dec 13 09:19:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:19:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:19:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3543: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:19:13 compute-0 podman[403474]: 2025-12-13 09:19:13.01336932 +0000 UTC m=+0.271100146 container init aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 09:19:13 compute-0 podman[403474]: 2025-12-13 09:19:13.023843111 +0000 UTC m=+0.281573857 container start aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 09:19:13 compute-0 podman[403474]: 2025-12-13 09:19:13.027642486 +0000 UTC m=+0.285373322 container attach aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 09:19:13 compute-0 brave_chatterjee[403490]: 167 167
Dec 13 09:19:13 compute-0 systemd[1]: libpod-aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a.scope: Deactivated successfully.
Dec 13 09:19:13 compute-0 podman[403474]: 2025-12-13 09:19:13.030961129 +0000 UTC m=+0.288691915 container died aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 09:19:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5f74a7255a0b9d3170abaf185392807ab88c9709bd84f32f407f2c2d4725a38-merged.mount: Deactivated successfully.
Dec 13 09:19:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:19:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:19:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:19:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:19:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:19:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:19:13 compute-0 podman[403474]: 2025-12-13 09:19:13.085373875 +0000 UTC m=+0.343104651 container remove aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:19:13 compute-0 systemd[1]: libpod-conmon-aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a.scope: Deactivated successfully.
Dec 13 09:19:13 compute-0 podman[403513]: 2025-12-13 09:19:13.309883419 +0000 UTC m=+0.053613787 container create 2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 09:19:13 compute-0 systemd[1]: Started libpod-conmon-2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88.scope.
Dec 13 09:19:13 compute-0 podman[403513]: 2025-12-13 09:19:13.287134212 +0000 UTC m=+0.030864580 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:19:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2e98e310c0cda839731ffa49d842b0f948405adb44459f24aeb9680f1ab7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2e98e310c0cda839731ffa49d842b0f948405adb44459f24aeb9680f1ab7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2e98e310c0cda839731ffa49d842b0f948405adb44459f24aeb9680f1ab7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2e98e310c0cda839731ffa49d842b0f948405adb44459f24aeb9680f1ab7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2e98e310c0cda839731ffa49d842b0f948405adb44459f24aeb9680f1ab7b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:13 compute-0 podman[403513]: 2025-12-13 09:19:13.421244134 +0000 UTC m=+0.164974582 container init 2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 09:19:13 compute-0 podman[403513]: 2025-12-13 09:19:13.435340155 +0000 UTC m=+0.179070543 container start 2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 09:19:13 compute-0 podman[403513]: 2025-12-13 09:19:13.443732984 +0000 UTC m=+0.187463432 container attach 2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 09:19:14 compute-0 practical_knuth[403530]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:19:14 compute-0 practical_knuth[403530]: --> All data devices are unavailable
Dec 13 09:19:14 compute-0 systemd[1]: libpod-2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88.scope: Deactivated successfully.
Dec 13 09:19:14 compute-0 conmon[403530]: conmon 2e57cc744d3430914349 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88.scope/container/memory.events
Dec 13 09:19:14 compute-0 podman[403513]: 2025-12-13 09:19:14.088032538 +0000 UTC m=+0.831762926 container died 2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 09:19:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-83e2e98e310c0cda839731ffa49d842b0f948405adb44459f24aeb9680f1ab7b-merged.mount: Deactivated successfully.
Dec 13 09:19:14 compute-0 podman[403513]: 2025-12-13 09:19:14.145979402 +0000 UTC m=+0.889709760 container remove 2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 09:19:14 compute-0 systemd[1]: libpod-conmon-2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88.scope: Deactivated successfully.
Dec 13 09:19:14 compute-0 sudo[403438]: pam_unix(sudo:session): session closed for user root
Dec 13 09:19:14 compute-0 ceph-mon[76537]: pgmap v3543: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:19:14 compute-0 sudo[403561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:19:14 compute-0 sudo[403561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:19:14 compute-0 sudo[403561]: pam_unix(sudo:session): session closed for user root
Dec 13 09:19:14 compute-0 nova_compute[248510]: 2025-12-13 09:19:14.277 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:14 compute-0 sudo[403586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:19:14 compute-0 sudo[403586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:19:14 compute-0 podman[403622]: 2025-12-13 09:19:14.70256117 +0000 UTC m=+0.067274378 container create d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:19:14 compute-0 systemd[1]: Started libpod-conmon-d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba.scope.
Dec 13 09:19:14 compute-0 podman[403622]: 2025-12-13 09:19:14.680701855 +0000 UTC m=+0.045415083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:19:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:19:14 compute-0 podman[403622]: 2025-12-13 09:19:14.795047214 +0000 UTC m=+0.159760422 container init d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:19:14 compute-0 podman[403622]: 2025-12-13 09:19:14.803236238 +0000 UTC m=+0.167949426 container start d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:19:14 compute-0 podman[403622]: 2025-12-13 09:19:14.806441278 +0000 UTC m=+0.171154466 container attach d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:19:14 compute-0 unruffled_shtern[403639]: 167 167
Dec 13 09:19:14 compute-0 systemd[1]: libpod-d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba.scope: Deactivated successfully.
Dec 13 09:19:14 compute-0 podman[403622]: 2025-12-13 09:19:14.812456258 +0000 UTC m=+0.177169456 container died d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:19:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-82905772f25c4c34acd5ab19c3cfe0c0f4b3a435b097d06c694f75d682c538e9-merged.mount: Deactivated successfully.
Dec 13 09:19:14 compute-0 podman[403622]: 2025-12-13 09:19:14.856223169 +0000 UTC m=+0.220936357 container remove d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 09:19:14 compute-0 systemd[1]: libpod-conmon-d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba.scope: Deactivated successfully.
Dec 13 09:19:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3544: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 597 B/s wr, 23 op/s
Dec 13 09:19:15 compute-0 podman[403664]: 2025-12-13 09:19:15.040454649 +0000 UTC m=+0.046837408 container create 54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:19:15 compute-0 systemd[1]: Started libpod-conmon-54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21.scope.
Dec 13 09:19:15 compute-0 podman[403664]: 2025-12-13 09:19:15.021524177 +0000 UTC m=+0.027906956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:19:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:19:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:19:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3034764914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:19:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:19:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3034764914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dfaf7bd55f4b1a84f3242188a16293501aaf353f217cee679f68ae880bf7df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dfaf7bd55f4b1a84f3242188a16293501aaf353f217cee679f68ae880bf7df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dfaf7bd55f4b1a84f3242188a16293501aaf353f217cee679f68ae880bf7df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dfaf7bd55f4b1a84f3242188a16293501aaf353f217cee679f68ae880bf7df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:15 compute-0 podman[403664]: 2025-12-13 09:19:15.13841816 +0000 UTC m=+0.144800949 container init 54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:19:15 compute-0 podman[403664]: 2025-12-13 09:19:15.144302357 +0000 UTC m=+0.150685116 container start 54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 09:19:15 compute-0 podman[403664]: 2025-12-13 09:19:15.14724909 +0000 UTC m=+0.153631849 container attach 54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:19:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3034764914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:19:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3034764914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]: {
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:     "0": [
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:         {
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "devices": [
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "/dev/loop3"
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             ],
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_name": "ceph_lv0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_size": "21470642176",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "name": "ceph_lv0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "tags": {
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.cluster_name": "ceph",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.crush_device_class": "",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.encrypted": "0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.objectstore": "bluestore",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.osd_id": "0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.type": "block",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.vdo": "0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.with_tpm": "0"
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             },
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "type": "block",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "vg_name": "ceph_vg0"
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:         }
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:     ],
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:     "1": [
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:         {
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "devices": [
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "/dev/loop4"
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             ],
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_name": "ceph_lv1",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_size": "21470642176",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "name": "ceph_lv1",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "tags": {
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.cluster_name": "ceph",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.crush_device_class": "",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.encrypted": "0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.objectstore": "bluestore",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.osd_id": "1",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.type": "block",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.vdo": "0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.with_tpm": "0"
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             },
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "type": "block",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "vg_name": "ceph_vg1"
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:         }
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:     ],
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:     "2": [
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:         {
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "devices": [
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "/dev/loop5"
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             ],
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_name": "ceph_lv2",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_size": "21470642176",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "name": "ceph_lv2",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "tags": {
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.cluster_name": "ceph",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.crush_device_class": "",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.encrypted": "0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.objectstore": "bluestore",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.osd_id": "2",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.type": "block",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.vdo": "0",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:                 "ceph.with_tpm": "0"
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             },
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "type": "block",
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:             "vg_name": "ceph_vg2"
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:         }
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]:     ]
Dec 13 09:19:15 compute-0 amazing_jepsen[403680]: }
Dec 13 09:19:15 compute-0 systemd[1]: libpod-54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21.scope: Deactivated successfully.
Dec 13 09:19:15 compute-0 podman[403664]: 2025-12-13 09:19:15.466750561 +0000 UTC m=+0.473133330 container died 54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8dfaf7bd55f4b1a84f3242188a16293501aaf353f217cee679f68ae880bf7df-merged.mount: Deactivated successfully.
Dec 13 09:19:15 compute-0 podman[403664]: 2025-12-13 09:19:15.509735252 +0000 UTC m=+0.516118031 container remove 54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 09:19:15 compute-0 systemd[1]: libpod-conmon-54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21.scope: Deactivated successfully.
Dec 13 09:19:15 compute-0 sudo[403586]: pam_unix(sudo:session): session closed for user root
Dec 13 09:19:15 compute-0 sudo[403699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:19:15 compute-0 sudo[403699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:19:15 compute-0 sudo[403699]: pam_unix(sudo:session): session closed for user root
Dec 13 09:19:15 compute-0 sudo[403724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:19:15 compute-0 sudo[403724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:19:16 compute-0 podman[403762]: 2025-12-13 09:19:16.063286485 +0000 UTC m=+0.048337925 container create 187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tharp, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:19:16 compute-0 systemd[1]: Started libpod-conmon-187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a.scope.
Dec 13 09:19:16 compute-0 podman[403762]: 2025-12-13 09:19:16.042560619 +0000 UTC m=+0.027612059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:19:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:19:16 compute-0 podman[403762]: 2025-12-13 09:19:16.173061601 +0000 UTC m=+0.158113061 container init 187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:19:16 compute-0 podman[403762]: 2025-12-13 09:19:16.187554092 +0000 UTC m=+0.172605502 container start 187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 09:19:16 compute-0 podman[403762]: 2025-12-13 09:19:16.191386567 +0000 UTC m=+0.176438037 container attach 187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 09:19:16 compute-0 exciting_tharp[403779]: 167 167
Dec 13 09:19:16 compute-0 systemd[1]: libpod-187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a.scope: Deactivated successfully.
Dec 13 09:19:16 compute-0 podman[403762]: 2025-12-13 09:19:16.194902975 +0000 UTC m=+0.179954425 container died 187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tharp, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Dec 13 09:19:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5ac879590d5ab5159b845e7b560dafc34f85de39f394cc4891d43dcf3de2521-merged.mount: Deactivated successfully.
Dec 13 09:19:16 compute-0 podman[403762]: 2025-12-13 09:19:16.237227969 +0000 UTC m=+0.222279379 container remove 187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:19:16 compute-0 systemd[1]: libpod-conmon-187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a.scope: Deactivated successfully.
Dec 13 09:19:16 compute-0 ceph-mon[76537]: pgmap v3544: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 597 B/s wr, 23 op/s
Dec 13 09:19:16 compute-0 podman[403802]: 2025-12-13 09:19:16.434741301 +0000 UTC m=+0.047713240 container create 4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 09:19:16 compute-0 systemd[1]: Started libpod-conmon-4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5.scope.
Dec 13 09:19:16 compute-0 podman[403802]: 2025-12-13 09:19:16.41464385 +0000 UTC m=+0.027615819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:19:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c8377d036e0d9dbb1446c442e9b70b7a43933dec10b5916cd279392735b8ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c8377d036e0d9dbb1446c442e9b70b7a43933dec10b5916cd279392735b8ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c8377d036e0d9dbb1446c442e9b70b7a43933dec10b5916cd279392735b8ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c8377d036e0d9dbb1446c442e9b70b7a43933dec10b5916cd279392735b8ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:19:16 compute-0 podman[403802]: 2025-12-13 09:19:16.55027778 +0000 UTC m=+0.163249719 container init 4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 09:19:16 compute-0 podman[403802]: 2025-12-13 09:19:16.560047943 +0000 UTC m=+0.173019882 container start 4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 09:19:16 compute-0 podman[403802]: 2025-12-13 09:19:16.563954261 +0000 UTC m=+0.176926210 container attach 4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 09:19:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3545: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 255 B/s wr, 1 op/s
Dec 13 09:19:17 compute-0 lvm[403897]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:19:17 compute-0 lvm[403897]: VG ceph_vg0 finished
Dec 13 09:19:17 compute-0 lvm[403898]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:19:17 compute-0 lvm[403898]: VG ceph_vg1 finished
Dec 13 09:19:17 compute-0 lvm[403900]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:19:17 compute-0 lvm[403900]: VG ceph_vg2 finished
Dec 13 09:19:17 compute-0 elated_germain[403818]: {}
Dec 13 09:19:17 compute-0 systemd[1]: libpod-4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5.scope: Deactivated successfully.
Dec 13 09:19:17 compute-0 podman[403802]: 2025-12-13 09:19:17.509939562 +0000 UTC m=+1.122911501 container died 4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 09:19:17 compute-0 systemd[1]: libpod-4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5.scope: Consumed 1.576s CPU time.
Dec 13 09:19:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1c8377d036e0d9dbb1446c442e9b70b7a43933dec10b5916cd279392735b8ce-merged.mount: Deactivated successfully.
Dec 13 09:19:17 compute-0 podman[403802]: 2025-12-13 09:19:17.565328642 +0000 UTC m=+1.178300591 container remove 4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:19:17 compute-0 nova_compute[248510]: 2025-12-13 09:19:17.570 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617542.5681205, 7aadb3d0-64f2-4531-9896-93b087cdea5c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:19:17 compute-0 nova_compute[248510]: 2025-12-13 09:19:17.572 248514 INFO nova.compute.manager [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] VM Stopped (Lifecycle Event)
Dec 13 09:19:17 compute-0 systemd[1]: libpod-conmon-4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5.scope: Deactivated successfully.
Dec 13 09:19:17 compute-0 nova_compute[248510]: 2025-12-13 09:19:17.598 248514 DEBUG nova.compute.manager [None req-c9d65d3c-5327-498f-9461-1ff528634278 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:19:17 compute-0 nova_compute[248510]: 2025-12-13 09:19:17.613 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:17 compute-0 sudo[403724]: pam_unix(sudo:session): session closed for user root
Dec 13 09:19:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:19:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:19:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:19:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:19:17 compute-0 sudo[403915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:19:17 compute-0 sudo[403915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:19:17 compute-0 sudo[403915]: pam_unix(sudo:session): session closed for user root
Dec 13 09:19:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:19:18 compute-0 ceph-mon[76537]: pgmap v3545: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 255 B/s wr, 1 op/s
Dec 13 09:19:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:19:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:19:18 compute-0 nova_compute[248510]: 2025-12-13 09:19:18.572 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:18 compute-0 nova_compute[248510]: 2025-12-13 09:19:18.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3546: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 1 op/s
Dec 13 09:19:19 compute-0 nova_compute[248510]: 2025-12-13 09:19:19.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:19 compute-0 ceph-mon[76537]: pgmap v3546: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 1 op/s
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3547: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4798206657980494e-05 of space, bias 1.0, pg target 0.004439461997394148 quantized to 32 (current 32)
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697110352885981 of space, bias 1.0, pg target 0.20091331058657944 quantized to 32 (current 32)
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.723939957605983e-07 of space, bias 4.0, pg target 0.0006868727949127179 quantized to 16 (current 32)
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:19:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:19:22 compute-0 ceph-mon[76537]: pgmap v3547: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:22 compute-0 nova_compute[248510]: 2025-12-13 09:19:22.616 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:19:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3548: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:23 compute-0 nova_compute[248510]: 2025-12-13 09:19:23.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:19:24 compute-0 ceph-mon[76537]: pgmap v3548: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:24 compute-0 nova_compute[248510]: 2025-12-13 09:19:24.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3549: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:25 compute-0 ceph-mon[76537]: pgmap v3549: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:26 compute-0 nova_compute[248510]: 2025-12-13 09:19:26.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:19:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3550: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:27 compute-0 nova_compute[248510]: 2025-12-13 09:19:27.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:19:28 compute-0 ceph-mon[76537]: pgmap v3550: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3551: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:29 compute-0 nova_compute[248510]: 2025-12-13 09:19:29.369 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:29 compute-0 nova_compute[248510]: 2025-12-13 09:19:29.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:19:29 compute-0 nova_compute[248510]: 2025-12-13 09:19:29.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:19:29 compute-0 nova_compute[248510]: 2025-12-13 09:19:29.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:19:29 compute-0 nova_compute[248510]: 2025-12-13 09:19:29.820 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:19:30 compute-0 ceph-mon[76537]: pgmap v3551: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3552: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:31 compute-0 sshd-session[403941]: Invalid user tezos from 80.94.92.165 port 59674
Dec 13 09:19:31 compute-0 sshd-session[403941]: Connection closed by invalid user tezos 80.94.92.165 port 59674 [preauth]
Dec 13 09:19:31 compute-0 ceph-mon[76537]: pgmap v3552: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:31.869 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:19:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:31.870 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:19:31 compute-0 nova_compute[248510]: 2025-12-13 09:19:31.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:32 compute-0 nova_compute[248510]: 2025-12-13 09:19:32.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:19:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3553: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:34 compute-0 ceph-mon[76537]: pgmap v3553: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:34 compute-0 nova_compute[248510]: 2025-12-13 09:19:34.372 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:34 compute-0 nova_compute[248510]: 2025-12-13 09:19:34.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:19:34 compute-0 nova_compute[248510]: 2025-12-13 09:19:34.962 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:19:34 compute-0 nova_compute[248510]: 2025-12-13 09:19:34.962 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:19:34 compute-0 nova_compute[248510]: 2025-12-13 09:19:34.963 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:19:34 compute-0 nova_compute[248510]: 2025-12-13 09:19:34.963 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:19:34 compute-0 nova_compute[248510]: 2025-12-13 09:19:34.963 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:19:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3554: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:35 compute-0 ceph-mon[76537]: pgmap v3554: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:19:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/164859578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:19:35 compute-0 nova_compute[248510]: 2025-12-13 09:19:35.714 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:19:35 compute-0 nova_compute[248510]: 2025-12-13 09:19:35.897 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:19:35 compute-0 nova_compute[248510]: 2025-12-13 09:19:35.898 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3491MB free_disk=59.987393531017005GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:19:35 compute-0 nova_compute[248510]: 2025-12-13 09:19:35.898 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:19:35 compute-0 nova_compute[248510]: 2025-12-13 09:19:35.899 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:19:35 compute-0 nova_compute[248510]: 2025-12-13 09:19:35.977 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:19:35 compute-0 nova_compute[248510]: 2025-12-13 09:19:35.977 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:19:36 compute-0 nova_compute[248510]: 2025-12-13 09:19:36.005 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:19:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:19:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/621910952' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:19:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/164859578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:19:36 compute-0 nova_compute[248510]: 2025-12-13 09:19:36.681 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:19:36 compute-0 nova_compute[248510]: 2025-12-13 09:19:36.689 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:19:36 compute-0 nova_compute[248510]: 2025-12-13 09:19:36.712 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:19:36 compute-0 nova_compute[248510]: 2025-12-13 09:19:36.751 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:19:36 compute-0 nova_compute[248510]: 2025-12-13 09:19:36.752 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:19:36 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:36.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:19:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3555: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:37 compute-0 nova_compute[248510]: 2025-12-13 09:19:37.625 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:37 compute-0 nova_compute[248510]: 2025-12-13 09:19:37.753 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:19:37 compute-0 nova_compute[248510]: 2025-12-13 09:19:37.753 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:19:37 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/621910952' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:19:37 compute-0 ceph-mon[76537]: pgmap v3555: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:19:38 compute-0 nova_compute[248510]: 2025-12-13 09:19:38.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:19:38 compute-0 nova_compute[248510]: 2025-12-13 09:19:38.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:19:38 compute-0 nova_compute[248510]: 2025-12-13 09:19:38.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:19:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3556: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:39 compute-0 nova_compute[248510]: 2025-12-13 09:19:39.374 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:39 compute-0 podman[403990]: 2025-12-13 09:19:39.983145897 +0000 UTC m=+0.065322699 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:19:39 compute-0 podman[403989]: 2025-12-13 09:19:39.988030358 +0000 UTC m=+0.072304792 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd)
Dec 13 09:19:40 compute-0 podman[403988]: 2025-12-13 09:19:40.01658942 +0000 UTC m=+0.101030089 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 13 09:19:40 compute-0 ceph-mon[76537]: pgmap v3556: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:19:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:19:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:19:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:19:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:19:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:19:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:40.283 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:85:21 10.100.0.2 2001:db8::f816:3eff:fe5f:8521'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe5f:8521/64', 'neutron:device_id': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60522aa7-9184-4e18-b84a-db845e6d2800, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0c3d5666-7453-4eb7-8973-bef187574418) old=Port_Binding(mac=['fa:16:3e:5f:85:21 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:19:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:40.285 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0c3d5666-7453-4eb7-8973-bef187574418 in datapath e41a163e-7597-4a26-aa5d-b894f952cca4 updated
Dec 13 09:19:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:40.288 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e41a163e-7597-4a26-aa5d-b894f952cca4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:19:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:40.289 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f5247e23-f5cd-44f5-a515-09628d9185da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:19:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3557: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:42 compute-0 ceph-mon[76537]: pgmap v3557: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:42 compute-0 nova_compute[248510]: 2025-12-13 09:19:42.629 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:19:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3558: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:44 compute-0 nova_compute[248510]: 2025-12-13 09:19:44.377 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:44 compute-0 ceph-mon[76537]: pgmap v3558: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3559: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:45 compute-0 ceph-mon[76537]: pgmap v3559: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:46.179 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:85:21 10.100.0.2 2001:db8:0:1:f816:3eff:fe5f:8521 2001:db8::f816:3eff:fe5f:8521'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe5f:8521/64 2001:db8::f816:3eff:fe5f:8521/64', 'neutron:device_id': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60522aa7-9184-4e18-b84a-db845e6d2800, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0c3d5666-7453-4eb7-8973-bef187574418) old=Port_Binding(mac=['fa:16:3e:5f:85:21 10.100.0.2 2001:db8::f816:3eff:fe5f:8521'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe5f:8521/64', 'neutron:device_id': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:19:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:46.180 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0c3d5666-7453-4eb7-8973-bef187574418 in datapath e41a163e-7597-4a26-aa5d-b894f952cca4 updated
Dec 13 09:19:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:46.181 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e41a163e-7597-4a26-aa5d-b894f952cca4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:19:46 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:46.182 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1204dfc2-1ac1-4a65-9aff-94b70c05c2ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:19:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3560: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:47 compute-0 nova_compute[248510]: 2025-12-13 09:19:47.633 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:19:48 compute-0 ceph-mon[76537]: pgmap v3560: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:19:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3561: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 13 09:19:49 compute-0 nova_compute[248510]: 2025-12-13 09:19:49.380 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:50 compute-0 ceph-mon[76537]: pgmap v3561: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 13 09:19:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3562: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 09:19:51 compute-0 nova_compute[248510]: 2025-12-13 09:19:51.675 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:19:51 compute-0 nova_compute[248510]: 2025-12-13 09:19:51.675 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:19:51 compute-0 nova_compute[248510]: 2025-12-13 09:19:51.695 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:19:51 compute-0 nova_compute[248510]: 2025-12-13 09:19:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:19:51 compute-0 nova_compute[248510]: 2025-12-13 09:19:51.785 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:19:51 compute-0 nova_compute[248510]: 2025-12-13 09:19:51.786 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:19:51 compute-0 nova_compute[248510]: 2025-12-13 09:19:51.794 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:19:51 compute-0 nova_compute[248510]: 2025-12-13 09:19:51.795 248514 INFO nova.compute.claims [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:19:51 compute-0 nova_compute[248510]: 2025-12-13 09:19:51.960 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:19:52 compute-0 ceph-mon[76537]: pgmap v3562: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 09:19:52 compute-0 nova_compute[248510]: 2025-12-13 09:19:52.636 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:19:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4276888792' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:19:52 compute-0 nova_compute[248510]: 2025-12-13 09:19:52.697 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.737s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:19:52 compute-0 nova_compute[248510]: 2025-12-13 09:19:52.707 248514 DEBUG nova.compute.provider_tree [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:19:52 compute-0 nova_compute[248510]: 2025-12-13 09:19:52.741 248514 DEBUG nova.scheduler.client.report [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:19:52 compute-0 nova_compute[248510]: 2025-12-13 09:19:52.778 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:19:52 compute-0 nova_compute[248510]: 2025-12-13 09:19:52.779 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:19:52 compute-0 nova_compute[248510]: 2025-12-13 09:19:52.831 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:19:52 compute-0 nova_compute[248510]: 2025-12-13 09:19:52.832 248514 DEBUG nova.network.neutron [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:19:52 compute-0 nova_compute[248510]: 2025-12-13 09:19:52.863 248514 INFO nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:19:52 compute-0 nova_compute[248510]: 2025-12-13 09:19:52.927 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:19:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:19:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3563: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.036 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.038 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.038 248514 INFO nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Creating image(s)
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.078330) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617593078458, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 847, "num_deletes": 257, "total_data_size": 1191091, "memory_usage": 1218104, "flush_reason": "Manual Compaction"}
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.093 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617593108472, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 1159516, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70567, "largest_seqno": 71413, "table_properties": {"data_size": 1155293, "index_size": 1938, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9217, "raw_average_key_size": 19, "raw_value_size": 1146772, "raw_average_value_size": 2364, "num_data_blocks": 87, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617521, "oldest_key_time": 1765617521, "file_creation_time": 1765617593, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 30185 microseconds, and 5112 cpu microseconds.
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.108530) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 1159516 bytes OK
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.108559) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.110253) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.110274) EVENT_LOG_v1 {"time_micros": 1765617593110267, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.110300) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 1186897, prev total WAL file size 1186897, number of live WAL files 2.
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.110958) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303134' seq:72057594037927935, type:22 .. '6C6F676D0033323637' seq:0, type:0; will stop at (end)
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(1132KB)], [167(9987KB)]
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617593111009, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 11386647, "oldest_snapshot_seqno": -1}
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.125 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.155 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.160 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8877 keys, 11269453 bytes, temperature: kUnknown
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617593205464, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 11269453, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11212994, "index_size": 33178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 233952, "raw_average_key_size": 26, "raw_value_size": 11057586, "raw_average_value_size": 1245, "num_data_blocks": 1281, "num_entries": 8877, "num_filter_entries": 8877, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617593, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.205865) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 11269453 bytes
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.207893) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.4 rd, 119.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.8 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(19.5) write-amplify(9.7) OK, records in: 9402, records dropped: 525 output_compression: NoCompression
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.207915) EVENT_LOG_v1 {"time_micros": 1765617593207905, "job": 104, "event": "compaction_finished", "compaction_time_micros": 94572, "compaction_time_cpu_micros": 27646, "output_level": 6, "num_output_files": 1, "total_output_size": 11269453, "num_input_records": 9402, "num_output_records": 8877, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617593212734, "job": 104, "event": "table_file_deletion", "file_number": 169}
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.213 248514 DEBUG nova.policy [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:19:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4276888792' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617593216812, "job": 104, "event": "table_file_deletion", "file_number": 167}
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.110849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.216968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.216973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.216975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.216977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:19:53 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.216979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.256 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.257 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.258 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.258 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.287 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:19:53 compute-0 nova_compute[248510]: 2025-12-13 09:19:53.292 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:19:54 compute-0 nova_compute[248510]: 2025-12-13 09:19:54.382 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:54 compute-0 ceph-mon[76537]: pgmap v3563: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 09:19:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3564: 321 pgs: 321 active+clean; 52 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 77 KiB/s wr, 17 op/s
Dec 13 09:19:55 compute-0 nova_compute[248510]: 2025-12-13 09:19:55.071 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.778s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:19:55 compute-0 nova_compute[248510]: 2025-12-13 09:19:55.146 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:19:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:55.456 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:19:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:55.456 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:19:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:19:55.457 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:19:55 compute-0 nova_compute[248510]: 2025-12-13 09:19:55.459 248514 DEBUG nova.network.neutron [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Successfully created port: 76d94bc0-cb6b-4bd0-8366-5158fdaece5b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:19:55 compute-0 nova_compute[248510]: 2025-12-13 09:19:55.726 248514 DEBUG nova.objects.instance [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 387ffe9e-b998-43aa-bb42-e87639c1ba6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:19:55 compute-0 nova_compute[248510]: 2025-12-13 09:19:55.751 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:19:55 compute-0 nova_compute[248510]: 2025-12-13 09:19:55.752 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Ensure instance console log exists: /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:19:55 compute-0 nova_compute[248510]: 2025-12-13 09:19:55.753 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:19:55 compute-0 nova_compute[248510]: 2025-12-13 09:19:55.754 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:19:55 compute-0 nova_compute[248510]: 2025-12-13 09:19:55.754 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:19:55 compute-0 ceph-mon[76537]: pgmap v3564: 321 pgs: 321 active+clean; 52 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 77 KiB/s wr, 17 op/s
Dec 13 09:19:56 compute-0 nova_compute[248510]: 2025-12-13 09:19:56.352 248514 DEBUG nova.network.neutron [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Successfully updated port: 76d94bc0-cb6b-4bd0-8366-5158fdaece5b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:19:56 compute-0 nova_compute[248510]: 2025-12-13 09:19:56.384 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:19:56 compute-0 nova_compute[248510]: 2025-12-13 09:19:56.385 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:19:56 compute-0 nova_compute[248510]: 2025-12-13 09:19:56.385 248514 DEBUG nova.network.neutron [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:19:56 compute-0 nova_compute[248510]: 2025-12-13 09:19:56.465 248514 DEBUG nova.compute.manager [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-changed-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:19:56 compute-0 nova_compute[248510]: 2025-12-13 09:19:56.466 248514 DEBUG nova.compute.manager [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Refreshing instance network info cache due to event network-changed-76d94bc0-cb6b-4bd0-8366-5158fdaece5b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:19:56 compute-0 nova_compute[248510]: 2025-12-13 09:19:56.466 248514 DEBUG oslo_concurrency.lockutils [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:19:56 compute-0 nova_compute[248510]: 2025-12-13 09:19:56.596 248514 DEBUG nova.network.neutron [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:19:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3565: 321 pgs: 321 active+clean; 52 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 77 KiB/s wr, 17 op/s
Dec 13 09:19:57 compute-0 nova_compute[248510]: 2025-12-13 09:19:57.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:19:58 compute-0 ceph-mon[76537]: pgmap v3565: 321 pgs: 321 active+clean; 52 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 77 KiB/s wr, 17 op/s
Dec 13 09:19:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3566: 321 pgs: 321 active+clean; 80 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.5 MiB/s wr, 42 op/s
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.383 248514 DEBUG nova.network.neutron [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updating instance_info_cache with network_info: [{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.421 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.421 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Instance network_info: |[{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.421 248514 DEBUG oslo_concurrency.lockutils [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.422 248514 DEBUG nova.network.neutron [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Refreshing network info cache for port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.424 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Start _get_guest_xml network_info=[{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.430 248514 WARNING nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.436 248514 DEBUG nova.virt.libvirt.host [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.437 248514 DEBUG nova.virt.libvirt.host [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.442 248514 DEBUG nova.virt.libvirt.host [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.443 248514 DEBUG nova.virt.libvirt.host [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.445 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.446 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.446 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.447 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.447 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.447 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.447 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.447 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.448 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.448 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.448 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.448 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:19:59 compute-0 nova_compute[248510]: 2025-12-13 09:19:59.451 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:20:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068317904' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.037 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.066 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.072 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:00 compute-0 ceph-mon[76537]: pgmap v3566: 321 pgs: 321 active+clean; 80 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.5 MiB/s wr, 42 op/s
Dec 13 09:20:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4068317904' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:20:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:20:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705364153' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.674 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.677 248514 DEBUG nova.virt.libvirt.vif [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569396822',display_name='tempest-TestGettingAddress-server-569396822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569396822',id=149,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIpcx/5vuboJVwrV0nq+WfHP6t5LA2OndBCH/1ll7d/PpOHljwzW/JjtFSE1pOt+PUX/l8C8ALgzEgIgSQ3b5WDhZcogX0HQPVvqXPOMGr4RM3k5j3wZaHphWzCWA2h6Q==',key_name='tempest-TestGettingAddress-483066452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-r3ss7vci',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:19:52Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=387ffe9e-b998-43aa-bb42-e87639c1ba6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.677 248514 DEBUG nova.network.os_vif_util [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.679 248514 DEBUG nova.network.os_vif_util [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.680 248514 DEBUG nova.objects.instance [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 387ffe9e-b998-43aa-bb42-e87639c1ba6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.707 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:20:00 compute-0 nova_compute[248510]:   <uuid>387ffe9e-b998-43aa-bb42-e87639c1ba6a</uuid>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   <name>instance-00000095</name>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-569396822</nova:name>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:19:59</nova:creationTime>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:20:00 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:20:00 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:20:00 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:20:00 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:20:00 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:20:00 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:20:00 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:20:00 compute-0 nova_compute[248510]:         <nova:port uuid="76d94bc0-cb6b-4bd0-8366-5158fdaece5b">
Dec 13 09:20:00 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe30:6b81" ipVersion="6"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe30:6b81" ipVersion="6"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <system>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <entry name="serial">387ffe9e-b998-43aa-bb42-e87639c1ba6a</entry>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <entry name="uuid">387ffe9e-b998-43aa-bb42-e87639c1ba6a</entry>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     </system>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   <os>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   </os>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   <features>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   </features>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk">
Dec 13 09:20:00 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       </source>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:20:00 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk.config">
Dec 13 09:20:00 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       </source>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:20:00 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:30:6b:81"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <target dev="tap76d94bc0-cb"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/console.log" append="off"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <video>
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     </video>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:20:00 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:20:00 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:20:00 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:20:00 compute-0 nova_compute[248510]: </domain>
Dec 13 09:20:00 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.709 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Preparing to wait for external event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.710 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.711 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.711 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.712 248514 DEBUG nova.virt.libvirt.vif [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569396822',display_name='tempest-TestGettingAddress-server-569396822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569396822',id=149,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIpcx/5vuboJVwrV0nq+WfHP6t5LA2OndBCH/1ll7d/PpOHljwzW/JjtFSE1pOt+PUX/l8C8ALgzEgIgSQ3b5WDhZcogX0HQPVvqXPOMGr4RM3k5j3wZaHphWzCWA2h6Q==',key_name='tempest-TestGettingAddress-483066452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-r3ss7vci',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:19:52Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=387ffe9e-b998-43aa-bb42-e87639c1ba6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.713 248514 DEBUG nova.network.os_vif_util [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.714 248514 DEBUG nova.network.os_vif_util [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.715 248514 DEBUG os_vif [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.718 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.718 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.723 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76d94bc0-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.724 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76d94bc0-cb, col_values=(('external_ids', {'iface-id': '76d94bc0-cb6b-4bd0-8366-5158fdaece5b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:6b:81', 'vm-uuid': '387ffe9e-b998-43aa-bb42-e87639c1ba6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:00 compute-0 NetworkManager[50376]: <info>  [1765617600.7278] manager: (tap76d94bc0-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/665)
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.730 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.737 248514 INFO os_vif [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb')
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.806 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.807 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.808 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:30:6b:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.809 248514 INFO nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Using config drive
Dec 13 09:20:00 compute-0 nova_compute[248510]: 2025-12-13 09:20:00.838 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:20:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3567: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 13 09:20:01 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2705364153' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.340 248514 INFO nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Creating config drive at /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/disk.config
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.345 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp26o15cx6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.489 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp26o15cx6" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.522 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.529 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/disk.config 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.697 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/disk.config 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.698 248514 INFO nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Deleting local config drive /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/disk.config because it was imported into RBD.
Dec 13 09:20:01 compute-0 kernel: tap76d94bc0-cb: entered promiscuous mode
Dec 13 09:20:01 compute-0 NetworkManager[50376]: <info>  [1765617601.7586] manager: (tap76d94bc0-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/666)
Dec 13 09:20:01 compute-0 ovn_controller[148476]: 2025-12-13T09:20:01Z|01607|binding|INFO|Claiming lport 76d94bc0-cb6b-4bd0-8366-5158fdaece5b for this chassis.
Dec 13 09:20:01 compute-0 ovn_controller[148476]: 2025-12-13T09:20:01Z|01608|binding|INFO|76d94bc0-cb6b-4bd0-8366-5158fdaece5b: Claiming fa:16:3e:30:6b:81 10.100.0.11 2001:db8:0:1:f816:3eff:fe30:6b81 2001:db8::f816:3eff:fe30:6b81
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.759 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.766 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.770 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:01 compute-0 systemd-udevd[404373]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:20:01 compute-0 systemd-machined[210538]: New machine qemu-180-instance-00000095.
Dec 13 09:20:01 compute-0 NetworkManager[50376]: <info>  [1765617601.8103] device (tap76d94bc0-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:20:01 compute-0 NetworkManager[50376]: <info>  [1765617601.8115] device (tap76d94bc0-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.816 248514 DEBUG nova.network.neutron [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updated VIF entry in instance network info cache for port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:20:01 compute-0 systemd[1]: Started Virtual Machine qemu-180-instance-00000095.
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.817 248514 DEBUG nova.network.neutron [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updating instance_info_cache with network_info: [{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:20:01 compute-0 ovn_controller[148476]: 2025-12-13T09:20:01Z|01609|binding|INFO|Setting lport 76d94bc0-cb6b-4bd0-8366-5158fdaece5b ovn-installed in OVS
Dec 13 09:20:01 compute-0 nova_compute[248510]: 2025-12-13 09:20:01.835 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.171 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617602.170666, 387ffe9e-b998-43aa-bb42-e87639c1ba6a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.171 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] VM Started (Lifecycle Event)
Dec 13 09:20:02 compute-0 ceph-mon[76537]: pgmap v3567: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 13 09:20:02 compute-0 ovn_controller[148476]: 2025-12-13T09:20:02Z|01610|binding|INFO|Setting lport 76d94bc0-cb6b-4bd0-8366-5158fdaece5b up in Southbound
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.224 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:6b:81 10.100.0.11 2001:db8:0:1:f816:3eff:fe30:6b81 2001:db8::f816:3eff:fe30:6b81'], port_security=['fa:16:3e:30:6b:81 10.100.0.11 2001:db8:0:1:f816:3eff:fe30:6b81 2001:db8::f816:3eff:fe30:6b81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8:0:1:f816:3eff:fe30:6b81/64 2001:db8::f816:3eff:fe30:6b81/64', 'neutron:device_id': '387ffe9e-b998-43aa-bb42-e87639c1ba6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7518f8bf-76d2-4439-8293-d0d6a2979cbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60522aa7-9184-4e18-b84a-db845e6d2800, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=76d94bc0-cb6b-4bd0-8366-5158fdaece5b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.228 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b in datapath e41a163e-7597-4a26-aa5d-b894f952cca4 bound to our chassis
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.232 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e41a163e-7597-4a26-aa5d-b894f952cca4
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.248 248514 DEBUG oslo_concurrency.lockutils [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.249 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.253 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef77f01-69c1-46c8-beba-b183b68cc819]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.254 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape41a163e-71 in ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.254 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617602.1753275, 387ffe9e-b998-43aa-bb42-e87639c1ba6a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.255 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] VM Paused (Lifecycle Event)
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.257 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape41a163e-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.257 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0a0fe4-3e03-496e-b568-4f8b6d8f6f63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.259 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11669d67-72ef-4b7f-ab75-1770884769c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.276 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[765015bd-aaa2-41a2-8197-f6caa0aba718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.286 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.292 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.295 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[25bbbd6f-326e-4728-a19f-fda8cbadad5a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.316 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.339 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0be82de0-5848-4531-8faa-e39a1f6b44a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 systemd-udevd[404376]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:20:02 compute-0 NetworkManager[50376]: <info>  [1765617602.3506] manager: (tape41a163e-70): new Veth device (/org/freedesktop/NetworkManager/Devices/667)
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.349 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f471fe34-32b3-42cd-873b-9e08f0122262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.393 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b07ecb14-420c-4457-9774-817c51e082cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.398 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cef4dbc8-4fc8-464b-834d-8a3d46b24dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 NetworkManager[50376]: <info>  [1765617602.4313] device (tape41a163e-70): carrier: link connected
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.440 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eecfff2e-0cf8-4bc7-987e-78d448849aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.462 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5b93f2f2-4135-4eec-bfb3-e20638559d47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape41a163e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:85:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017964, 'reachable_time': 24214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404449, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.488 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c23a1d8-f61b-4ff9-9780-0e0290e889d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:8521'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1017964, 'tstamp': 1017964}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404450, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.517 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[21b78a5d-fc50-4c75-80e9-42fe839a0d7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape41a163e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:85:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017964, 'reachable_time': 24214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404451, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.560 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fc219224-0537-4132-801f-44402e0d9fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.641 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6eae7e-6d8e-47dd-9d1a-fdb129734a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.643 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape41a163e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.643 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.643 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape41a163e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:02 compute-0 kernel: tape41a163e-70: entered promiscuous mode
Dec 13 09:20:02 compute-0 NetworkManager[50376]: <info>  [1765617602.6475] manager: (tape41a163e-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.649 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape41a163e-70, col_values=(('external_ids', {'iface-id': '0c3d5666-7453-4eb7-8973-bef187574418'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:02 compute-0 ovn_controller[148476]: 2025-12-13T09:20:02Z|01611|binding|INFO|Releasing lport 0c3d5666-7453-4eb7-8973-bef187574418 from this chassis (sb_readonly=0)
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.663 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.665 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e41a163e-7597-4a26-aa5d-b894f952cca4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e41a163e-7597-4a26-aa5d-b894f952cca4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.666 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52cfe3bd-a5a1-4414-9bfd-9f8e961aa192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.667 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-e41a163e-7597-4a26-aa5d-b894f952cca4
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/e41a163e-7597-4a26-aa5d-b894f952cca4.pid.haproxy
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID e41a163e-7597-4a26-aa5d-b894f952cca4
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:20:02 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.667 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'env', 'PROCESS_TAG=haproxy-e41a163e-7597-4a26-aa5d-b894f952cca4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e41a163e-7597-4a26-aa5d-b894f952cca4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.959 248514 DEBUG nova.compute.manager [req-22a2aa77-ae76-4ce5-b351-3fa1fddb51ee req-866fc08e-ce65-40d7-a1ca-8ecc6c624f65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.960 248514 DEBUG oslo_concurrency.lockutils [req-22a2aa77-ae76-4ce5-b351-3fa1fddb51ee req-866fc08e-ce65-40d7-a1ca-8ecc6c624f65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.960 248514 DEBUG oslo_concurrency.lockutils [req-22a2aa77-ae76-4ce5-b351-3fa1fddb51ee req-866fc08e-ce65-40d7-a1ca-8ecc6c624f65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.961 248514 DEBUG oslo_concurrency.lockutils [req-22a2aa77-ae76-4ce5-b351-3fa1fddb51ee req-866fc08e-ce65-40d7-a1ca-8ecc6c624f65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.961 248514 DEBUG nova.compute.manager [req-22a2aa77-ae76-4ce5-b351-3fa1fddb51ee req-866fc08e-ce65-40d7-a1ca-8ecc6c624f65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Processing event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.961 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.965 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617602.9654477, 387ffe9e-b998-43aa-bb42-e87639c1ba6a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.965 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] VM Resumed (Lifecycle Event)
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.967 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.971 248514 INFO nova.virt.libvirt.driver [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Instance spawned successfully.
Dec 13 09:20:02 compute-0 nova_compute[248510]: 2025-12-13 09:20:02.971 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.001 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.008 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.009 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.009 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.010 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.010 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.011 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.017 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:20:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3568: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.052 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:20:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.077 248514 INFO nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Took 10.04 seconds to spawn the instance on the hypervisor.
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.077 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:20:03 compute-0 podman[404483]: 2025-12-13 09:20:03.092512805 +0000 UTC m=+0.056823327 container create d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:20:03 compute-0 systemd[1]: Started libpod-conmon-d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c.scope.
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.151 248514 INFO nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Took 11.41 seconds to build instance.
Dec 13 09:20:03 compute-0 podman[404483]: 2025-12-13 09:20:03.059858401 +0000 UTC m=+0.024168933 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:20:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fcb212c1c75939221274e39c5248b874df50042250d2b2667a9a1d38806218d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:03 compute-0 nova_compute[248510]: 2025-12-13 09:20:03.176 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:03 compute-0 podman[404483]: 2025-12-13 09:20:03.382850399 +0000 UTC m=+0.347160951 container init d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 09:20:03 compute-0 podman[404483]: 2025-12-13 09:20:03.388532231 +0000 UTC m=+0.352842753 container start d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 09:20:03 compute-0 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [NOTICE]   (404502) : New worker (404504) forked
Dec 13 09:20:03 compute-0 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [NOTICE]   (404502) : Loading success.
Dec 13 09:20:03 compute-0 ceph-mon[76537]: pgmap v3568: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 09:20:04 compute-0 nova_compute[248510]: 2025-12-13 09:20:04.406 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3569: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Dec 13 09:20:05 compute-0 nova_compute[248510]: 2025-12-13 09:20:05.090 248514 DEBUG nova.compute.manager [req-6ff05117-db1d-4281-a4a1-265b418d5cc0 req-d1c1a5d6-23d4-4c28-97b7-6514d5565d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:20:05 compute-0 nova_compute[248510]: 2025-12-13 09:20:05.091 248514 DEBUG oslo_concurrency.lockutils [req-6ff05117-db1d-4281-a4a1-265b418d5cc0 req-d1c1a5d6-23d4-4c28-97b7-6514d5565d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:05 compute-0 nova_compute[248510]: 2025-12-13 09:20:05.091 248514 DEBUG oslo_concurrency.lockutils [req-6ff05117-db1d-4281-a4a1-265b418d5cc0 req-d1c1a5d6-23d4-4c28-97b7-6514d5565d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:05 compute-0 nova_compute[248510]: 2025-12-13 09:20:05.091 248514 DEBUG oslo_concurrency.lockutils [req-6ff05117-db1d-4281-a4a1-265b418d5cc0 req-d1c1a5d6-23d4-4c28-97b7-6514d5565d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:05 compute-0 nova_compute[248510]: 2025-12-13 09:20:05.092 248514 DEBUG nova.compute.manager [req-6ff05117-db1d-4281-a4a1-265b418d5cc0 req-d1c1a5d6-23d4-4c28-97b7-6514d5565d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] No waiting events found dispatching network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:20:05 compute-0 nova_compute[248510]: 2025-12-13 09:20:05.092 248514 WARNING nova.compute.manager [req-6ff05117-db1d-4281-a4a1-265b418d5cc0 req-d1c1a5d6-23d4-4c28-97b7-6514d5565d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received unexpected event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b for instance with vm_state active and task_state None.
Dec 13 09:20:05 compute-0 nova_compute[248510]: 2025-12-13 09:20:05.727 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:06 compute-0 ceph-mon[76537]: pgmap v3569: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Dec 13 09:20:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3570: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Dec 13 09:20:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:20:08 compute-0 ceph-mon[76537]: pgmap v3570: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Dec 13 09:20:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3571: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 99 op/s
Dec 13 09:20:09 compute-0 nova_compute[248510]: 2025-12-13 09:20:09.409 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:20:09
Dec 13 09:20:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:20:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:20:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['vms', '.mgr', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Dec 13 09:20:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:20:10 compute-0 ovn_controller[148476]: 2025-12-13T09:20:10Z|01612|binding|INFO|Releasing lport 0c3d5666-7453-4eb7-8973-bef187574418 from this chassis (sb_readonly=0)
Dec 13 09:20:10 compute-0 NetworkManager[50376]: <info>  [1765617610.0038] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/669)
Dec 13 09:20:10 compute-0 NetworkManager[50376]: <info>  [1765617610.0044] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/670)
Dec 13 09:20:10 compute-0 nova_compute[248510]: 2025-12-13 09:20:10.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:10 compute-0 nova_compute[248510]: 2025-12-13 09:20:10.054 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:10 compute-0 ovn_controller[148476]: 2025-12-13T09:20:10Z|01613|binding|INFO|Releasing lport 0c3d5666-7453-4eb7-8973-bef187574418 from this chassis (sb_readonly=0)
Dec 13 09:20:10 compute-0 nova_compute[248510]: 2025-12-13 09:20:10.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:20:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:20:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:20:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:20:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:20:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:20:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:10.302 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:20:10 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:10.303 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:20:10 compute-0 nova_compute[248510]: 2025-12-13 09:20:10.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:10 compute-0 ceph-mon[76537]: pgmap v3571: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 99 op/s
Dec 13 09:20:10 compute-0 nova_compute[248510]: 2025-12-13 09:20:10.383 248514 DEBUG nova.compute.manager [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-changed-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:20:10 compute-0 nova_compute[248510]: 2025-12-13 09:20:10.384 248514 DEBUG nova.compute.manager [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Refreshing instance network info cache due to event network-changed-76d94bc0-cb6b-4bd0-8366-5158fdaece5b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:20:10 compute-0 nova_compute[248510]: 2025-12-13 09:20:10.384 248514 DEBUG oslo_concurrency.lockutils [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:20:10 compute-0 nova_compute[248510]: 2025-12-13 09:20:10.384 248514 DEBUG oslo_concurrency.lockutils [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:20:10 compute-0 nova_compute[248510]: 2025-12-13 09:20:10.385 248514 DEBUG nova.network.neutron [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Refreshing network info cache for port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:20:10 compute-0 nova_compute[248510]: 2025-12-13 09:20:10.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:11 compute-0 podman[404515]: 2025-12-13 09:20:10.999870992 +0000 UTC m=+0.083160863 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 09:20:11 compute-0 podman[404516]: 2025-12-13 09:20:11.007717898 +0000 UTC m=+0.091111481 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:20:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3572: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 312 KiB/s wr, 74 op/s
Dec 13 09:20:11 compute-0 podman[404514]: 2025-12-13 09:20:11.042262939 +0000 UTC m=+0.122746170 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 09:20:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:20:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:20:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:20:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:20:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:20:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:20:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:20:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:20:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:20:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:20:12 compute-0 ceph-mon[76537]: pgmap v3572: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 312 KiB/s wr, 74 op/s
Dec 13 09:20:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3573: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:20:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:20:14 compute-0 nova_compute[248510]: 2025-12-13 09:20:14.000 248514 DEBUG nova.network.neutron [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updated VIF entry in instance network info cache for port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:20:14 compute-0 nova_compute[248510]: 2025-12-13 09:20:14.001 248514 DEBUG nova.network.neutron [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updating instance_info_cache with network_info: [{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:20:14 compute-0 nova_compute[248510]: 2025-12-13 09:20:14.036 248514 DEBUG oslo_concurrency.lockutils [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:20:14 compute-0 ceph-mon[76537]: pgmap v3573: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:20:14 compute-0 nova_compute[248510]: 2025-12-13 09:20:14.442 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3574: 321 pgs: 321 active+clean; 96 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 849 KiB/s wr, 77 op/s
Dec 13 09:20:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:20:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2840370517' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:20:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:20:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2840370517' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:20:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2840370517' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:20:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2840370517' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:20:15 compute-0 nova_compute[248510]: 2025-12-13 09:20:15.732 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:15 compute-0 ovn_controller[148476]: 2025-12-13T09:20:15Z|00209|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:6b:81 10.100.0.11
Dec 13 09:20:15 compute-0 ovn_controller[148476]: 2025-12-13T09:20:15Z|00210|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:6b:81 10.100.0.11
Dec 13 09:20:16 compute-0 ceph-mon[76537]: pgmap v3574: 321 pgs: 321 active+clean; 96 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 849 KiB/s wr, 77 op/s
Dec 13 09:20:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3575: 321 pgs: 321 active+clean; 96 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 848 KiB/s wr, 56 op/s
Dec 13 09:20:17 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:17.306 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:17 compute-0 ceph-mon[76537]: pgmap v3575: 321 pgs: 321 active+clean; 96 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 848 KiB/s wr, 56 op/s
Dec 13 09:20:17 compute-0 sudo[404578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:20:17 compute-0 sudo[404578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:20:17 compute-0 sudo[404578]: pam_unix(sudo:session): session closed for user root
Dec 13 09:20:17 compute-0 sudo[404603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:20:17 compute-0 sudo[404603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:20:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:20:18 compute-0 sudo[404603]: pam_unix(sudo:session): session closed for user root
Dec 13 09:20:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:20:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:20:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:20:18 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:20:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:20:18 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:20:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:20:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:20:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:20:18 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:20:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:20:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:20:18 compute-0 sudo[404660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:20:18 compute-0 sudo[404660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:20:18 compute-0 sudo[404660]: pam_unix(sudo:session): session closed for user root
Dec 13 09:20:18 compute-0 sudo[404685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:20:18 compute-0 sudo[404685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:20:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:20:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:20:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:20:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:20:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:20:18 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:20:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3576: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Dec 13 09:20:19 compute-0 podman[404723]: 2025-12-13 09:20:19.125657572 +0000 UTC m=+0.059915624 container create 838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 09:20:19 compute-0 systemd[1]: Started libpod-conmon-838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd.scope.
Dec 13 09:20:19 compute-0 podman[404723]: 2025-12-13 09:20:19.102912725 +0000 UTC m=+0.037170807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:20:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:20:19 compute-0 podman[404723]: 2025-12-13 09:20:19.229555861 +0000 UTC m=+0.163813963 container init 838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:20:19 compute-0 podman[404723]: 2025-12-13 09:20:19.244151144 +0000 UTC m=+0.178409206 container start 838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 09:20:19 compute-0 podman[404723]: 2025-12-13 09:20:19.248943884 +0000 UTC m=+0.183201976 container attach 838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:20:19 compute-0 dazzling_hodgkin[404739]: 167 167
Dec 13 09:20:19 compute-0 systemd[1]: libpod-838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd.scope: Deactivated successfully.
Dec 13 09:20:19 compute-0 podman[404723]: 2025-12-13 09:20:19.256178914 +0000 UTC m=+0.190437006 container died 838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:20:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2143cfb170410ee97406c607ab72f9c08fb210bcd8ca5efdebfda551077eca17-merged.mount: Deactivated successfully.
Dec 13 09:20:19 compute-0 podman[404723]: 2025-12-13 09:20:19.327899431 +0000 UTC m=+0.262157513 container remove 838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 09:20:19 compute-0 systemd[1]: libpod-conmon-838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd.scope: Deactivated successfully.
Dec 13 09:20:19 compute-0 nova_compute[248510]: 2025-12-13 09:20:19.445 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:19 compute-0 podman[404764]: 2025-12-13 09:20:19.554406055 +0000 UTC m=+0.052101109 container create 1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:20:19 compute-0 systemd[1]: Started libpod-conmon-1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a.scope.
Dec 13 09:20:19 compute-0 podman[404764]: 2025-12-13 09:20:19.530868289 +0000 UTC m=+0.028563393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:20:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c55f23ae9a095029945a0ed340b6f14c44b78c81dcefc6a328b7ed3244d9a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c55f23ae9a095029945a0ed340b6f14c44b78c81dcefc6a328b7ed3244d9a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c55f23ae9a095029945a0ed340b6f14c44b78c81dcefc6a328b7ed3244d9a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c55f23ae9a095029945a0ed340b6f14c44b78c81dcefc6a328b7ed3244d9a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c55f23ae9a095029945a0ed340b6f14c44b78c81dcefc6a328b7ed3244d9a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:19 compute-0 podman[404764]: 2025-12-13 09:20:19.659884673 +0000 UTC m=+0.157579757 container init 1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sammet, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:20:19 compute-0 podman[404764]: 2025-12-13 09:20:19.671721918 +0000 UTC m=+0.169416972 container start 1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 09:20:19 compute-0 podman[404764]: 2025-12-13 09:20:19.67620469 +0000 UTC m=+0.173899744 container attach 1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sammet, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:20:19 compute-0 ceph-mon[76537]: pgmap v3576: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Dec 13 09:20:20 compute-0 angry_sammet[404781]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:20:20 compute-0 angry_sammet[404781]: --> All data devices are unavailable
Dec 13 09:20:20 compute-0 systemd[1]: libpod-1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a.scope: Deactivated successfully.
Dec 13 09:20:20 compute-0 conmon[404781]: conmon 1acd90fbaa8ae3a68115 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a.scope/container/memory.events
Dec 13 09:20:20 compute-0 podman[404801]: 2025-12-13 09:20:20.308667249 +0000 UTC m=+0.030794068 container died 1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sammet, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 09:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5c55f23ae9a095029945a0ed340b6f14c44b78c81dcefc6a328b7ed3244d9a8-merged.mount: Deactivated successfully.
Dec 13 09:20:20 compute-0 podman[404801]: 2025-12-13 09:20:20.359511496 +0000 UTC m=+0.081638295 container remove 1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sammet, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:20:20 compute-0 systemd[1]: libpod-conmon-1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a.scope: Deactivated successfully.
Dec 13 09:20:20 compute-0 sudo[404685]: pam_unix(sudo:session): session closed for user root
Dec 13 09:20:20 compute-0 sudo[404817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:20:20 compute-0 sudo[404817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:20:20 compute-0 sudo[404817]: pam_unix(sudo:session): session closed for user root
Dec 13 09:20:20 compute-0 sudo[404842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:20:20 compute-0 sudo[404842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:20:20 compute-0 nova_compute[248510]: 2025-12-13 09:20:20.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:20 compute-0 podman[404879]: 2025-12-13 09:20:20.970594143 +0000 UTC m=+0.049418033 container create 16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:20:21 compute-0 systemd[1]: Started libpod-conmon-16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539.scope.
Dec 13 09:20:21 compute-0 podman[404879]: 2025-12-13 09:20:20.944151524 +0000 UTC m=+0.022975434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3577: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:20:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:20:21 compute-0 podman[404879]: 2025-12-13 09:20:21.083850685 +0000 UTC m=+0.162674605 container init 16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_austin, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:20:21 compute-0 podman[404879]: 2025-12-13 09:20:21.090668344 +0000 UTC m=+0.169492234 container start 16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_austin, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:20:21 compute-0 podman[404879]: 2025-12-13 09:20:21.094450389 +0000 UTC m=+0.173274299 container attach 16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_austin, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:20:21 compute-0 magical_austin[404895]: 167 167
Dec 13 09:20:21 compute-0 systemd[1]: libpod-16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539.scope: Deactivated successfully.
Dec 13 09:20:21 compute-0 podman[404879]: 2025-12-13 09:20:21.099034973 +0000 UTC m=+0.177858863 container died 16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_austin, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e501d0296170ffebea6cd9505ad55761893c17f9a86a74d70af4b5231abdcad-merged.mount: Deactivated successfully.
Dec 13 09:20:21 compute-0 podman[404879]: 2025-12-13 09:20:21.134300372 +0000 UTC m=+0.213124262 container remove 16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:20:21 compute-0 systemd[1]: libpod-conmon-16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539.scope: Deactivated successfully.
Dec 13 09:20:21 compute-0 podman[404920]: 2025-12-13 09:20:21.341427763 +0000 UTC m=+0.069640757 container create 8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:20:21 compute-0 systemd[1]: Started libpod-conmon-8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3.scope.
Dec 13 09:20:21 compute-0 podman[404920]: 2025-12-13 09:20:21.318709057 +0000 UTC m=+0.046922091 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:20:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b03ae189e4ba6036ffa77eca1475c5b49f30b18bd9b9f1b7549f91bdd9787f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b03ae189e4ba6036ffa77eca1475c5b49f30b18bd9b9f1b7549f91bdd9787f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b03ae189e4ba6036ffa77eca1475c5b49f30b18bd9b9f1b7549f91bdd9787f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b03ae189e4ba6036ffa77eca1475c5b49f30b18bd9b9f1b7549f91bdd9787f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:21 compute-0 podman[404920]: 2025-12-13 09:20:21.458628763 +0000 UTC m=+0.186841807 container init 8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:20:21 compute-0 podman[404920]: 2025-12-13 09:20:21.467850903 +0000 UTC m=+0.196063947 container start 8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:20:21 compute-0 podman[404920]: 2025-12-13 09:20:21.472597511 +0000 UTC m=+0.200810555 container attach 8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007714119368623521 of space, bias 1.0, pg target 0.2314235810587056 quantized to 32 (current 32)
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697110352885981 of space, bias 1.0, pg target 0.20091331058657944 quantized to 32 (current 32)
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.723939957605983e-07 of space, bias 4.0, pg target 0.0006868727949127179 quantized to 16 (current 32)
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:20:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:20:21 compute-0 youthful_golick[404936]: {
Dec 13 09:20:21 compute-0 youthful_golick[404936]:     "0": [
Dec 13 09:20:21 compute-0 youthful_golick[404936]:         {
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "devices": [
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "/dev/loop3"
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             ],
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_name": "ceph_lv0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_size": "21470642176",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "name": "ceph_lv0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "tags": {
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.cluster_name": "ceph",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.crush_device_class": "",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.encrypted": "0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.objectstore": "bluestore",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.osd_id": "0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.type": "block",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.vdo": "0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.with_tpm": "0"
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             },
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "type": "block",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "vg_name": "ceph_vg0"
Dec 13 09:20:21 compute-0 youthful_golick[404936]:         }
Dec 13 09:20:21 compute-0 youthful_golick[404936]:     ],
Dec 13 09:20:21 compute-0 youthful_golick[404936]:     "1": [
Dec 13 09:20:21 compute-0 youthful_golick[404936]:         {
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "devices": [
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "/dev/loop4"
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             ],
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_name": "ceph_lv1",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_size": "21470642176",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "name": "ceph_lv1",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "tags": {
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.cluster_name": "ceph",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.crush_device_class": "",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.encrypted": "0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.objectstore": "bluestore",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.osd_id": "1",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.type": "block",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.vdo": "0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.with_tpm": "0"
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             },
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "type": "block",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "vg_name": "ceph_vg1"
Dec 13 09:20:21 compute-0 youthful_golick[404936]:         }
Dec 13 09:20:21 compute-0 youthful_golick[404936]:     ],
Dec 13 09:20:21 compute-0 youthful_golick[404936]:     "2": [
Dec 13 09:20:21 compute-0 youthful_golick[404936]:         {
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "devices": [
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "/dev/loop5"
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             ],
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_name": "ceph_lv2",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_size": "21470642176",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "name": "ceph_lv2",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "tags": {
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.cluster_name": "ceph",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.crush_device_class": "",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.encrypted": "0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.objectstore": "bluestore",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.osd_id": "2",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.type": "block",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.vdo": "0",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:                 "ceph.with_tpm": "0"
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             },
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "type": "block",
Dec 13 09:20:21 compute-0 youthful_golick[404936]:             "vg_name": "ceph_vg2"
Dec 13 09:20:21 compute-0 youthful_golick[404936]:         }
Dec 13 09:20:21 compute-0 youthful_golick[404936]:     ]
Dec 13 09:20:21 compute-0 youthful_golick[404936]: }
Dec 13 09:20:21 compute-0 systemd[1]: libpod-8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3.scope: Deactivated successfully.
Dec 13 09:20:21 compute-0 podman[404920]: 2025-12-13 09:20:21.817504065 +0000 UTC m=+0.545717099 container died 8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 09:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-08b03ae189e4ba6036ffa77eca1475c5b49f30b18bd9b9f1b7549f91bdd9787f-merged.mount: Deactivated successfully.
Dec 13 09:20:21 compute-0 podman[404920]: 2025-12-13 09:20:21.885445608 +0000 UTC m=+0.613658612 container remove 8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 09:20:21 compute-0 systemd[1]: libpod-conmon-8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3.scope: Deactivated successfully.
Dec 13 09:20:21 compute-0 sudo[404842]: pam_unix(sudo:session): session closed for user root
Dec 13 09:20:22 compute-0 sudo[404957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:20:22 compute-0 sudo[404957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:20:22 compute-0 sudo[404957]: pam_unix(sudo:session): session closed for user root
Dec 13 09:20:22 compute-0 ceph-mon[76537]: pgmap v3577: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:20:22 compute-0 sudo[404982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:20:22 compute-0 sudo[404982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:20:22 compute-0 podman[405019]: 2025-12-13 09:20:22.446220161 +0000 UTC m=+0.042596652 container create a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:20:22 compute-0 systemd[1]: Started libpod-conmon-a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af.scope.
Dec 13 09:20:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:20:22 compute-0 podman[405019]: 2025-12-13 09:20:22.428477179 +0000 UTC m=+0.024853700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:20:22 compute-0 podman[405019]: 2025-12-13 09:20:22.536553101 +0000 UTC m=+0.132929642 container init a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:20:22 compute-0 podman[405019]: 2025-12-13 09:20:22.547597216 +0000 UTC m=+0.143973717 container start a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:20:22 compute-0 podman[405019]: 2025-12-13 09:20:22.550821147 +0000 UTC m=+0.147197678 container attach a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:20:22 compute-0 musing_kilby[405036]: 167 167
Dec 13 09:20:22 compute-0 systemd[1]: libpod-a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af.scope: Deactivated successfully.
Dec 13 09:20:22 compute-0 podman[405019]: 2025-12-13 09:20:22.555515644 +0000 UTC m=+0.151892125 container died a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a36a236b973814a06452819059709aff9c22b7c8e10712e110a52d5cb1df24f2-merged.mount: Deactivated successfully.
Dec 13 09:20:22 compute-0 podman[405019]: 2025-12-13 09:20:22.593635523 +0000 UTC m=+0.190012014 container remove a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:20:22 compute-0 systemd[1]: libpod-conmon-a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af.scope: Deactivated successfully.
Dec 13 09:20:22 compute-0 podman[405059]: 2025-12-13 09:20:22.844180486 +0000 UTC m=+0.081421059 container create b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kalam, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 09:20:22 compute-0 systemd[1]: Started libpod-conmon-b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa.scope.
Dec 13 09:20:22 compute-0 podman[405059]: 2025-12-13 09:20:22.808358954 +0000 UTC m=+0.045599577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:20:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d5889dd3d894feee07807786169675f3c4021f63710e688c3c9d4459173da6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d5889dd3d894feee07807786169675f3c4021f63710e688c3c9d4459173da6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d5889dd3d894feee07807786169675f3c4021f63710e688c3c9d4459173da6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d5889dd3d894feee07807786169675f3c4021f63710e688c3c9d4459173da6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:20:22 compute-0 podman[405059]: 2025-12-13 09:20:22.939655925 +0000 UTC m=+0.176896528 container init b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kalam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 09:20:22 compute-0 podman[405059]: 2025-12-13 09:20:22.955293085 +0000 UTC m=+0.192533668 container start b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kalam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 09:20:22 compute-0 podman[405059]: 2025-12-13 09:20:22.959298335 +0000 UTC m=+0.196538938 container attach b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 09:20:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3578: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:20:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:20:23 compute-0 lvm[405153]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:20:23 compute-0 lvm[405153]: VG ceph_vg0 finished
Dec 13 09:20:23 compute-0 lvm[405154]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:20:23 compute-0 lvm[405154]: VG ceph_vg1 finished
Dec 13 09:20:23 compute-0 lvm[405156]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:20:23 compute-0 lvm[405156]: VG ceph_vg2 finished
Dec 13 09:20:23 compute-0 lvm[405158]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:20:23 compute-0 lvm[405158]: VG ceph_vg2 finished
Dec 13 09:20:23 compute-0 practical_kalam[405075]: {}
Dec 13 09:20:23 compute-0 nova_compute[248510]: 2025-12-13 09:20:23.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:20:23 compute-0 systemd[1]: libpod-b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa.scope: Deactivated successfully.
Dec 13 09:20:23 compute-0 podman[405059]: 2025-12-13 09:20:23.79666819 +0000 UTC m=+1.033908813 container died b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kalam, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:20:23 compute-0 systemd[1]: libpod-b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa.scope: Consumed 1.362s CPU time.
Dec 13 09:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-73d5889dd3d894feee07807786169675f3c4021f63710e688c3c9d4459173da6-merged.mount: Deactivated successfully.
Dec 13 09:20:23 compute-0 podman[405059]: 2025-12-13 09:20:23.85405629 +0000 UTC m=+1.091296873 container remove b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 09:20:23 compute-0 systemd[1]: libpod-conmon-b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa.scope: Deactivated successfully.
Dec 13 09:20:23 compute-0 sudo[404982]: pam_unix(sudo:session): session closed for user root
Dec 13 09:20:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:20:24 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:20:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:20:24 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:20:24 compute-0 ceph-mon[76537]: pgmap v3578: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:20:24 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:20:24 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:20:24 compute-0 sudo[405170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:20:24 compute-0 sudo[405170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:20:24 compute-0 sudo[405170]: pam_unix(sudo:session): session closed for user root
Dec 13 09:20:24 compute-0 nova_compute[248510]: 2025-12-13 09:20:24.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3579: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:20:25 compute-0 nova_compute[248510]: 2025-12-13 09:20:25.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:26 compute-0 ceph-mon[76537]: pgmap v3579: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:20:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3580: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 1.3 MiB/s wr, 55 op/s
Dec 13 09:20:27 compute-0 ceph-mon[76537]: pgmap v3580: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 1.3 MiB/s wr, 55 op/s
Dec 13 09:20:27 compute-0 nova_compute[248510]: 2025-12-13 09:20:27.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:20:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:20:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3581: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 304 KiB/s rd, 1.3 MiB/s wr, 55 op/s
Dec 13 09:20:29 compute-0 nova_compute[248510]: 2025-12-13 09:20:29.504 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:29 compute-0 nova_compute[248510]: 2025-12-13 09:20:29.665 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:29 compute-0 nova_compute[248510]: 2025-12-13 09:20:29.666 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:29 compute-0 nova_compute[248510]: 2025-12-13 09:20:29.689 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:20:29 compute-0 nova_compute[248510]: 2025-12-13 09:20:29.792 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:29 compute-0 nova_compute[248510]: 2025-12-13 09:20:29.793 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:29 compute-0 nova_compute[248510]: 2025-12-13 09:20:29.807 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:20:29 compute-0 nova_compute[248510]: 2025-12-13 09:20:29.808 248514 INFO nova.compute.claims [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:20:29 compute-0 nova_compute[248510]: 2025-12-13 09:20:29.950 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:30 compute-0 ceph-mon[76537]: pgmap v3581: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 304 KiB/s rd, 1.3 MiB/s wr, 55 op/s
Dec 13 09:20:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:20:30 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/32728649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:20:30 compute-0 nova_compute[248510]: 2025-12-13 09:20:30.544 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:30 compute-0 nova_compute[248510]: 2025-12-13 09:20:30.552 248514 DEBUG nova.compute.provider_tree [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:20:30 compute-0 nova_compute[248510]: 2025-12-13 09:20:30.580 248514 DEBUG nova.scheduler.client.report [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:20:30 compute-0 nova_compute[248510]: 2025-12-13 09:20:30.626 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:30 compute-0 nova_compute[248510]: 2025-12-13 09:20:30.628 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:20:30 compute-0 nova_compute[248510]: 2025-12-13 09:20:30.688 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:20:30 compute-0 nova_compute[248510]: 2025-12-13 09:20:30.689 248514 DEBUG nova.network.neutron [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:20:30 compute-0 nova_compute[248510]: 2025-12-13 09:20:30.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:30 compute-0 nova_compute[248510]: 2025-12-13 09:20:30.929 248514 INFO nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:20:30 compute-0 nova_compute[248510]: 2025-12-13 09:20:30.951 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:20:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3582: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 61 KiB/s wr, 4 op/s
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.067 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.069 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.070 248514 INFO nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Creating image(s)
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.103 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:20:31 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/32728649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.135 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.159 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.164 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.276 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.277 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.278 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.278 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.304 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.310 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.560 248514 DEBUG nova.policy [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.669 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.737 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.812 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.822 248514 DEBUG nova.objects.instance [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 776d7b4e-dc82-47f6-b1bb-53188ad804b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.837 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.837 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Ensure instance console log exists: /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.838 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.838 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:31 compute-0 nova_compute[248510]: 2025-12-13 09:20:31.838 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:32 compute-0 ceph-mon[76537]: pgmap v3582: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 61 KiB/s wr, 4 op/s
Dec 13 09:20:32 compute-0 nova_compute[248510]: 2025-12-13 09:20:32.366 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:20:32 compute-0 nova_compute[248510]: 2025-12-13 09:20:32.366 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:20:32 compute-0 nova_compute[248510]: 2025-12-13 09:20:32.367 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 09:20:32 compute-0 nova_compute[248510]: 2025-12-13 09:20:32.367 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 387ffe9e-b998-43aa-bb42-e87639c1ba6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:20:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3583: 321 pgs: 321 active+clean; 139 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 418 KiB/s wr, 12 op/s
Dec 13 09:20:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:20:33 compute-0 nova_compute[248510]: 2025-12-13 09:20:33.344 248514 DEBUG nova.network.neutron [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Successfully created port: 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:20:34 compute-0 ceph-mon[76537]: pgmap v3583: 321 pgs: 321 active+clean; 139 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 418 KiB/s wr, 12 op/s
Dec 13 09:20:34 compute-0 nova_compute[248510]: 2025-12-13 09:20:34.506 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:34 compute-0 nova_compute[248510]: 2025-12-13 09:20:34.838 248514 DEBUG nova.network.neutron [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Successfully updated port: 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:20:34 compute-0 nova_compute[248510]: 2025-12-13 09:20:34.856 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:20:34 compute-0 nova_compute[248510]: 2025-12-13 09:20:34.857 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:20:34 compute-0 nova_compute[248510]: 2025-12-13 09:20:34.857 248514 DEBUG nova.network.neutron [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:20:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3584: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:20:35 compute-0 nova_compute[248510]: 2025-12-13 09:20:35.284 248514 DEBUG nova.compute.manager [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-changed-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:20:35 compute-0 nova_compute[248510]: 2025-12-13 09:20:35.285 248514 DEBUG nova.compute.manager [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Refreshing instance network info cache due to event network-changed-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:20:35 compute-0 nova_compute[248510]: 2025-12-13 09:20:35.285 248514 DEBUG oslo_concurrency.lockutils [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:20:35 compute-0 nova_compute[248510]: 2025-12-13 09:20:35.385 248514 DEBUG nova.network.neutron [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:20:35 compute-0 nova_compute[248510]: 2025-12-13 09:20:35.421 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updating instance_info_cache with network_info: [{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:20:35 compute-0 nova_compute[248510]: 2025-12-13 09:20:35.441 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:20:35 compute-0 nova_compute[248510]: 2025-12-13 09:20:35.441 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 09:20:35 compute-0 nova_compute[248510]: 2025-12-13 09:20:35.791 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:36 compute-0 ceph-mon[76537]: pgmap v3584: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:20:36 compute-0 nova_compute[248510]: 2025-12-13 09:20:36.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:20:36 compute-0 nova_compute[248510]: 2025-12-13 09:20:36.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:20:36 compute-0 nova_compute[248510]: 2025-12-13 09:20:36.820 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:36 compute-0 nova_compute[248510]: 2025-12-13 09:20:36.821 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:36 compute-0 nova_compute[248510]: 2025-12-13 09:20:36.821 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:36 compute-0 nova_compute[248510]: 2025-12-13 09:20:36.822 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:20:36 compute-0 nova_compute[248510]: 2025-12-13 09:20:36.822 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3585: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:20:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:20:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259731585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.421 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.516 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.517 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.749 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.751 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3219MB free_disk=59.92107870336622GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.751 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.752 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.754 248514 DEBUG nova.network.neutron [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updating instance_info_cache with network_info: [{"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.792 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.792 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Instance network_info: |[{"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.793 248514 DEBUG oslo_concurrency.lockutils [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.793 248514 DEBUG nova.network.neutron [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Refreshing network info cache for port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.797 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Start _get_guest_xml network_info=[{"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.802 248514 WARNING nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.810 248514 DEBUG nova.virt.libvirt.host [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.810 248514 DEBUG nova.virt.libvirt.host [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.819 248514 DEBUG nova.virt.libvirt.host [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.819 248514 DEBUG nova.virt.libvirt.host [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.820 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.820 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.821 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.821 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.821 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.822 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.822 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.822 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.823 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.823 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.823 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.823 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.827 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.935 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 387ffe9e-b998-43aa-bb42-e87639c1ba6a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.936 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 776d7b4e-dc82-47f6-b1bb-53188ad804b6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.937 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:20:37 compute-0 nova_compute[248510]: 2025-12-13 09:20:37.937 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:20:38 compute-0 nova_compute[248510]: 2025-12-13 09:20:38.006 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:20:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:20:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/952357148' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:20:38 compute-0 nova_compute[248510]: 2025-12-13 09:20:38.456 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:38 compute-0 nova_compute[248510]: 2025-12-13 09:20:38.488 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:20:38 compute-0 nova_compute[248510]: 2025-12-13 09:20:38.494 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:38 compute-0 ceph-mon[76537]: pgmap v3585: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:20:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2259731585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:20:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3586: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:20:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:20:39 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966682541' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:20:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:20:39 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894919964' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.113 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.118 248514 DEBUG nova.virt.libvirt.vif [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-272771977',display_name='tempest-TestGettingAddress-server-272771977',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-272771977',id=150,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIpcx/5vuboJVwrV0nq+WfHP6t5LA2OndBCH/1ll7d/PpOHljwzW/JjtFSE1pOt+PUX/l8C8ALgzEgIgSQ3b5WDhZcogX0HQPVvqXPOMGr4RM3k5j3wZaHphWzCWA2h6Q==',key_name='tempest-TestGettingAddress-483066452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-nwca5oxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:20:30Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=776d7b4e-dc82-47f6-b1bb-53188ad804b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.120 248514 DEBUG nova.network.os_vif_util [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.122 248514 DEBUG nova.network.os_vif_util [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.124 248514 DEBUG nova.objects.instance [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 776d7b4e-dc82-47f6-b1bb-53188ad804b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.127 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.136 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.150 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:20:39 compute-0 nova_compute[248510]:   <uuid>776d7b4e-dc82-47f6-b1bb-53188ad804b6</uuid>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   <name>instance-00000096</name>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-272771977</nova:name>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:20:37</nova:creationTime>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:20:39 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:20:39 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:20:39 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:20:39 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:20:39 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:20:39 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:20:39 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:20:39 compute-0 nova_compute[248510]:         <nova:port uuid="3e8a3ef7-51fa-466a-b6ae-c5e00a04d987">
Dec 13 09:20:39 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feae:a19" ipVersion="6"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feae:a19" ipVersion="6"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <system>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <entry name="serial">776d7b4e-dc82-47f6-b1bb-53188ad804b6</entry>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <entry name="uuid">776d7b4e-dc82-47f6-b1bb-53188ad804b6</entry>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     </system>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   <os>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   </os>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   <features>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   </features>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk">
Dec 13 09:20:39 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       </source>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:20:39 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk.config">
Dec 13 09:20:39 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       </source>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:20:39 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:ae:0a:19"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <target dev="tap3e8a3ef7-51"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/console.log" append="off"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <video>
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     </video>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:20:39 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:20:39 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:20:39 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:20:39 compute-0 nova_compute[248510]: </domain>
Dec 13 09:20:39 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.152 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Preparing to wait for external event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.153 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.153 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.153 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.154 248514 DEBUG nova.virt.libvirt.vif [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-272771977',display_name='tempest-TestGettingAddress-server-272771977',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-272771977',id=150,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIpcx/5vuboJVwrV0nq+WfHP6t5LA2OndBCH/1ll7d/PpOHljwzW/JjtFSE1pOt+PUX/l8C8ALgzEgIgSQ3b5WDhZcogX0HQPVvqXPOMGr4RM3k5j3wZaHphWzCWA2h6Q==',key_name='tempest-TestGettingAddress-483066452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-nwca5oxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:20:30Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=776d7b4e-dc82-47f6-b1bb-53188ad804b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.155 248514 DEBUG nova.network.os_vif_util [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.156 248514 DEBUG nova.network.os_vif_util [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.156 248514 DEBUG os_vif [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.157 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.158 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.158 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.161 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.167 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e8a3ef7-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.167 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e8a3ef7-51, col_values=(('external_ids', {'iface-id': '3e8a3ef7-51fa-466a-b6ae-c5e00a04d987', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:0a:19', 'vm-uuid': '776d7b4e-dc82-47f6-b1bb-53188ad804b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.171 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:20:39 compute-0 NetworkManager[50376]: <info>  [1765617639.1716] manager: (tap3e8a3ef7-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/671)
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.183 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.184 248514 INFO os_vif [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51')
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.187 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.187 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.585 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.586 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.586 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:ae:0a:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.587 248514 INFO nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Using config drive
Dec 13 09:20:39 compute-0 nova_compute[248510]: 2025-12-13 09:20:39.630 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:20:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:20:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:20:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:20:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:20:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:20:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:20:40 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/952357148' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:20:40 compute-0 ceph-mon[76537]: pgmap v3586: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:20:40 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2966682541' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:20:40 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2894919964' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:20:40 compute-0 nova_compute[248510]: 2025-12-13 09:20:40.589 248514 INFO nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Creating config drive at /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/disk.config
Dec 13 09:20:40 compute-0 nova_compute[248510]: 2025-12-13 09:20:40.597 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_8ygcv5e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:40 compute-0 nova_compute[248510]: 2025-12-13 09:20:40.638 248514 DEBUG nova.network.neutron [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updated VIF entry in instance network info cache for port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:20:40 compute-0 nova_compute[248510]: 2025-12-13 09:20:40.640 248514 DEBUG nova.network.neutron [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updating instance_info_cache with network_info: [{"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:20:40 compute-0 nova_compute[248510]: 2025-12-13 09:20:40.662 248514 DEBUG oslo_concurrency.lockutils [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:20:40 compute-0 nova_compute[248510]: 2025-12-13 09:20:40.743 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_8ygcv5e" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:40 compute-0 nova_compute[248510]: 2025-12-13 09:20:40.789 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:20:40 compute-0 nova_compute[248510]: 2025-12-13 09:20:40.794 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/disk.config 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:20:40 compute-0 nova_compute[248510]: 2025-12-13 09:20:40.965 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/disk.config 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:20:40 compute-0 nova_compute[248510]: 2025-12-13 09:20:40.966 248514 INFO nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Deleting local config drive /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/disk.config because it was imported into RBD.
Dec 13 09:20:41 compute-0 kernel: tap3e8a3ef7-51: entered promiscuous mode
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:41 compute-0 NetworkManager[50376]: <info>  [1765617641.0375] manager: (tap3e8a3ef7-51): new Tun device (/org/freedesktop/NetworkManager/Devices/672)
Dec 13 09:20:41 compute-0 ovn_controller[148476]: 2025-12-13T09:20:41Z|01614|binding|INFO|Claiming lport 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 for this chassis.
Dec 13 09:20:41 compute-0 ovn_controller[148476]: 2025-12-13T09:20:41Z|01615|binding|INFO|3e8a3ef7-51fa-466a-b6ae-c5e00a04d987: Claiming fa:16:3e:ae:0a:19 10.100.0.14 2001:db8:0:1:f816:3eff:feae:a19 2001:db8::f816:3eff:feae:a19
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.046 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:0a:19 10.100.0.14 2001:db8:0:1:f816:3eff:feae:a19 2001:db8::f816:3eff:feae:a19'], port_security=['fa:16:3e:ae:0a:19 10.100.0.14 2001:db8:0:1:f816:3eff:feae:a19 2001:db8::f816:3eff:feae:a19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:feae:a19/64 2001:db8::f816:3eff:feae:a19/64', 'neutron:device_id': '776d7b4e-dc82-47f6-b1bb-53188ad804b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7518f8bf-76d2-4439-8293-d0d6a2979cbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60522aa7-9184-4e18-b84a-db845e6d2800, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.048 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 in datapath e41a163e-7597-4a26-aa5d-b894f952cca4 bound to our chassis
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.050 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e41a163e-7597-4a26-aa5d-b894f952cca4
Dec 13 09:20:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3587: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:20:41 compute-0 ovn_controller[148476]: 2025-12-13T09:20:41Z|01616|binding|INFO|Setting lport 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 ovn-installed in OVS
Dec 13 09:20:41 compute-0 ovn_controller[148476]: 2025-12-13T09:20:41Z|01617|binding|INFO|Setting lport 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 up in Southbound
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.054 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.076 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7872e92-486c-4889-8dca-c190807e674f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:41 compute-0 systemd-udevd[405590]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:20:41 compute-0 systemd-machined[210538]: New machine qemu-181-instance-00000096.
Dec 13 09:20:41 compute-0 systemd[1]: Started Virtual Machine qemu-181-instance-00000096.
Dec 13 09:20:41 compute-0 NetworkManager[50376]: <info>  [1765617641.1150] device (tap3e8a3ef7-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:20:41 compute-0 NetworkManager[50376]: <info>  [1765617641.1161] device (tap3e8a3ef7-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.120 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[546eec4a-311e-4728-ba29-44fc256d0d82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.123 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[67ac630d-6378-4cc2-b7ed-1aeff95fe249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:41 compute-0 podman[405562]: 2025-12-13 09:20:41.150032924 +0000 UTC m=+0.075288837 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.154 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[acb6f206-9a0b-47a8-a2cf-8116f591d048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:41 compute-0 podman[405560]: 2025-12-13 09:20:41.155910821 +0000 UTC m=+0.080919908 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.176 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d11a7368-a2c0-41c6-b588-0932ee2e22fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape41a163e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:85:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017964, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405631, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.186 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.187 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.187 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.187 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.194 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5abdf3a9-745b-486e-8e2d-c16f1a1e9b2b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape41a163e-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1017980, 'tstamp': 1017980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405639, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape41a163e-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1017984, 'tstamp': 1017984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405639, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.196 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape41a163e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.199 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape41a163e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.199 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.199 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape41a163e-70, col_values=(('external_ids', {'iface-id': '0c3d5666-7453-4eb7-8973-bef187574418'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:20:41 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.200 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:20:41 compute-0 podman[405563]: 2025-12-13 09:20:41.212661695 +0000 UTC m=+0.133836686 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.470 248514 DEBUG nova.compute.manager [req-b2379fa7-5b3d-4eb8-b03e-546910ceb47d req-ebb2cff3-809e-4f9d-a726-1fc4cfc838e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.472 248514 DEBUG oslo_concurrency.lockutils [req-b2379fa7-5b3d-4eb8-b03e-546910ceb47d req-ebb2cff3-809e-4f9d-a726-1fc4cfc838e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.472 248514 DEBUG oslo_concurrency.lockutils [req-b2379fa7-5b3d-4eb8-b03e-546910ceb47d req-ebb2cff3-809e-4f9d-a726-1fc4cfc838e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.473 248514 DEBUG oslo_concurrency.lockutils [req-b2379fa7-5b3d-4eb8-b03e-546910ceb47d req-ebb2cff3-809e-4f9d-a726-1fc4cfc838e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.473 248514 DEBUG nova.compute.manager [req-b2379fa7-5b3d-4eb8-b03e-546910ceb47d req-ebb2cff3-809e-4f9d-a726-1fc4cfc838e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Processing event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.670 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.672 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617641.6702237, 776d7b4e-dc82-47f6-b1bb-53188ad804b6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.672 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] VM Started (Lifecycle Event)
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.676 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.680 248514 INFO nova.virt.libvirt.driver [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Instance spawned successfully.
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.680 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.704 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.710 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.714 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.714 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.715 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.715 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.715 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.716 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.743 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.744 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617641.6717417, 776d7b4e-dc82-47f6-b1bb-53188ad804b6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.745 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] VM Paused (Lifecycle Event)
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.782 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.788 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617641.6759708, 776d7b4e-dc82-47f6-b1bb-53188ad804b6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.788 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] VM Resumed (Lifecycle Event)
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.796 248514 INFO nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Took 10.73 seconds to spawn the instance on the hypervisor.
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.796 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.808 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.812 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.844 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.871 248514 INFO nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Took 12.12 seconds to build instance.
Dec 13 09:20:41 compute-0 nova_compute[248510]: 2025-12-13 09:20:41.902 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:42 compute-0 ceph-mon[76537]: pgmap v3587: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:20:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3588: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Dec 13 09:20:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:20:43 compute-0 nova_compute[248510]: 2025-12-13 09:20:43.576 248514 DEBUG nova.compute.manager [req-64883493-2ea6-43d5-b08e-b26573f17bc8 req-ce06c684-7670-4b4c-92f4-158024a16652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:20:43 compute-0 nova_compute[248510]: 2025-12-13 09:20:43.576 248514 DEBUG oslo_concurrency.lockutils [req-64883493-2ea6-43d5-b08e-b26573f17bc8 req-ce06c684-7670-4b4c-92f4-158024a16652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:43 compute-0 nova_compute[248510]: 2025-12-13 09:20:43.576 248514 DEBUG oslo_concurrency.lockutils [req-64883493-2ea6-43d5-b08e-b26573f17bc8 req-ce06c684-7670-4b4c-92f4-158024a16652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:43 compute-0 nova_compute[248510]: 2025-12-13 09:20:43.576 248514 DEBUG oslo_concurrency.lockutils [req-64883493-2ea6-43d5-b08e-b26573f17bc8 req-ce06c684-7670-4b4c-92f4-158024a16652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:43 compute-0 nova_compute[248510]: 2025-12-13 09:20:43.577 248514 DEBUG nova.compute.manager [req-64883493-2ea6-43d5-b08e-b26573f17bc8 req-ce06c684-7670-4b4c-92f4-158024a16652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] No waiting events found dispatching network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:20:43 compute-0 nova_compute[248510]: 2025-12-13 09:20:43.577 248514 WARNING nova.compute.manager [req-64883493-2ea6-43d5-b08e-b26573f17bc8 req-ce06c684-7670-4b4c-92f4-158024a16652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received unexpected event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 for instance with vm_state active and task_state None.
Dec 13 09:20:44 compute-0 nova_compute[248510]: 2025-12-13 09:20:44.170 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:44 compute-0 ceph-mon[76537]: pgmap v3588: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Dec 13 09:20:44 compute-0 nova_compute[248510]: 2025-12-13 09:20:44.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3589: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 88 op/s
Dec 13 09:20:45 compute-0 nova_compute[248510]: 2025-12-13 09:20:45.963 248514 DEBUG nova.compute.manager [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-changed-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:20:45 compute-0 nova_compute[248510]: 2025-12-13 09:20:45.964 248514 DEBUG nova.compute.manager [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Refreshing instance network info cache due to event network-changed-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:20:45 compute-0 nova_compute[248510]: 2025-12-13 09:20:45.964 248514 DEBUG oslo_concurrency.lockutils [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:20:45 compute-0 nova_compute[248510]: 2025-12-13 09:20:45.964 248514 DEBUG oslo_concurrency.lockutils [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:20:45 compute-0 nova_compute[248510]: 2025-12-13 09:20:45.964 248514 DEBUG nova.network.neutron [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Refreshing network info cache for port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:20:46 compute-0 ceph-mon[76537]: pgmap v3589: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 88 op/s
Dec 13 09:20:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3590: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:20:47 compute-0 ceph-mon[76537]: pgmap v3590: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:20:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:20:48 compute-0 nova_compute[248510]: 2025-12-13 09:20:48.444 248514 DEBUG nova.network.neutron [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updated VIF entry in instance network info cache for port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:20:48 compute-0 nova_compute[248510]: 2025-12-13 09:20:48.446 248514 DEBUG nova.network.neutron [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updating instance_info_cache with network_info: [{"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:20:48 compute-0 nova_compute[248510]: 2025-12-13 09:20:48.473 248514 DEBUG oslo_concurrency.lockutils [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:20:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3591: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 13 09:20:49 compute-0 nova_compute[248510]: 2025-12-13 09:20:49.172 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:20:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 15K writes, 71K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1342 writes, 6337 keys, 1342 commit groups, 1.0 writes per commit group, ingest: 9.21 MB, 0.02 MB/s
                                           Interval WAL: 1343 writes, 1343 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     23.4      3.92              0.32        52    0.075       0      0       0.0       0.0
                                             L6      1/0   10.75 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.9     83.6     70.9      6.37              1.46        51    0.125    350K    27K       0.0       0.0
                                            Sum      1/0   10.75 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.9     51.7     52.8     10.29              1.79       103    0.100    350K    27K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     64.7     66.9      1.08              0.29        12    0.090     55K   3080       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     83.6     70.9      6.37              1.46        51    0.125    350K    27K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     23.4      3.92              0.32        51    0.077       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.089, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.53 GB write, 0.08 MB/s write, 0.52 GB read, 0.08 MB/s read, 10.3 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 61.01 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.00065 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3770,58.51 MB,19.2459%) FilterBlock(104,973.55 KB,0.31274%) IndexBlock(104,1.56 MB,0.511807%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 09:20:49 compute-0 nova_compute[248510]: 2025-12-13 09:20:49.587 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:50 compute-0 ceph-mon[76537]: pgmap v3591: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 13 09:20:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3592: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:20:51 compute-0 nova_compute[248510]: 2025-12-13 09:20:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:20:52 compute-0 ceph-mon[76537]: pgmap v3592: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 09:20:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3593: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Dec 13 09:20:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:20:54 compute-0 ceph-mon[76537]: pgmap v3593: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Dec 13 09:20:54 compute-0 nova_compute[248510]: 2025-12-13 09:20:54.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:54 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Dec 13 09:20:54 compute-0 nova_compute[248510]: 2025-12-13 09:20:54.589 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:54 compute-0 nova_compute[248510]: 2025-12-13 09:20:54.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:20:54 compute-0 nova_compute[248510]: 2025-12-13 09:20:54.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 09:20:54 compute-0 ovn_controller[148476]: 2025-12-13T09:20:54Z|00211|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:0a:19 10.100.0.14
Dec 13 09:20:54 compute-0 ovn_controller[148476]: 2025-12-13T09:20:54Z|00212|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:0a:19 10.100.0.14
Dec 13 09:20:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3594: 321 pgs: 321 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.4 MiB/s wr, 83 op/s
Dec 13 09:20:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:55.457 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:20:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:55.458 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:20:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:20:55.459 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:20:56 compute-0 ceph-mon[76537]: pgmap v3594: 321 pgs: 321 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.4 MiB/s wr, 83 op/s
Dec 13 09:20:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3595: 321 pgs: 321 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Dec 13 09:20:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:20:58 compute-0 ceph-mon[76537]: pgmap v3595: 321 pgs: 321 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Dec 13 09:20:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3596: 321 pgs: 321 active+clean; 199 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 09:20:59 compute-0 nova_compute[248510]: 2025-12-13 09:20:59.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:20:59 compute-0 nova_compute[248510]: 2025-12-13 09:20:59.591 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:00 compute-0 ceph-mon[76537]: pgmap v3596: 321 pgs: 321 active+clean; 199 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 09:21:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3597: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:21:02 compute-0 ceph-mon[76537]: pgmap v3597: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:21:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3598: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:21:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:21:03 compute-0 nova_compute[248510]: 2025-12-13 09:21:03.868 248514 DEBUG nova.compute.manager [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-changed-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:21:03 compute-0 nova_compute[248510]: 2025-12-13 09:21:03.869 248514 DEBUG nova.compute.manager [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Refreshing instance network info cache due to event network-changed-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:21:03 compute-0 nova_compute[248510]: 2025-12-13 09:21:03.869 248514 DEBUG oslo_concurrency.lockutils [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:21:03 compute-0 nova_compute[248510]: 2025-12-13 09:21:03.870 248514 DEBUG oslo_concurrency.lockutils [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:21:03 compute-0 nova_compute[248510]: 2025-12-13 09:21:03.870 248514 DEBUG nova.network.neutron [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Refreshing network info cache for port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:21:03 compute-0 nova_compute[248510]: 2025-12-13 09:21:03.950 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:03 compute-0 nova_compute[248510]: 2025-12-13 09:21:03.951 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:03 compute-0 nova_compute[248510]: 2025-12-13 09:21:03.951 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:03 compute-0 nova_compute[248510]: 2025-12-13 09:21:03.952 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:03 compute-0 nova_compute[248510]: 2025-12-13 09:21:03.953 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:03 compute-0 nova_compute[248510]: 2025-12-13 09:21:03.956 248514 INFO nova.compute.manager [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Terminating instance
Dec 13 09:21:03 compute-0 nova_compute[248510]: 2025-12-13 09:21:03.957 248514 DEBUG nova.compute.manager [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:21:04 compute-0 kernel: tap3e8a3ef7-51 (unregistering): left promiscuous mode
Dec 13 09:21:04 compute-0 NetworkManager[50376]: <info>  [1765617664.0154] device (tap3e8a3ef7-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:21:04 compute-0 ovn_controller[148476]: 2025-12-13T09:21:04Z|01618|binding|INFO|Releasing lport 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 from this chassis (sb_readonly=0)
Dec 13 09:21:04 compute-0 ovn_controller[148476]: 2025-12-13T09:21:04Z|01619|binding|INFO|Setting lport 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 down in Southbound
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.030 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:04 compute-0 ovn_controller[148476]: 2025-12-13T09:21:04Z|01620|binding|INFO|Removing iface tap3e8a3ef7-51 ovn-installed in OVS
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.032 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.041 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:0a:19 10.100.0.14 2001:db8:0:1:f816:3eff:feae:a19 2001:db8::f816:3eff:feae:a19'], port_security=['fa:16:3e:ae:0a:19 10.100.0.14 2001:db8:0:1:f816:3eff:feae:a19 2001:db8::f816:3eff:feae:a19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:feae:a19/64 2001:db8::f816:3eff:feae:a19/64', 'neutron:device_id': '776d7b4e-dc82-47f6-b1bb-53188ad804b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7518f8bf-76d2-4439-8293-d0d6a2979cbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60522aa7-9184-4e18-b84a-db845e6d2800, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.042 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 in datapath e41a163e-7597-4a26-aa5d-b894f952cca4 unbound from our chassis
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.045 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e41a163e-7597-4a26-aa5d-b894f952cca4
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.045 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.066 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d95fec9-1674-438d-9239-a33a9de33302]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:04 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000096.scope: Deactivated successfully.
Dec 13 09:21:04 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000096.scope: Consumed 13.626s CPU time.
Dec 13 09:21:04 compute-0 systemd-machined[210538]: Machine qemu-181-instance-00000096 terminated.
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.104 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cbaae919-a882-41e9-b262-a767235ad5ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.108 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1851258e-5969-4e42-9ae7-c4b030519588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.146 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[02ee4b7e-738a-4d12-a9ef-3e6ca89b3f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.168 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79f233a3-7313-4de3-8ea6-ebd41f5ff239]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape41a163e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:85:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017964, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405695, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.179 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:04 compute-0 ceph-mon[76537]: pgmap v3598: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.191 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[84795b8f-c4a3-48ce-88d3-5ebdb71675e1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape41a163e-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1017980, 'tstamp': 1017980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405697, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape41a163e-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1017984, 'tstamp': 1017984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405697, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.194 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape41a163e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.201 248514 INFO nova.virt.libvirt.driver [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Instance destroyed successfully.
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.201 248514 DEBUG nova.objects.instance [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 776d7b4e-dc82-47f6-b1bb-53188ad804b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.210 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.211 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape41a163e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.212 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.212 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape41a163e-70, col_values=(('external_ids', {'iface-id': '0c3d5666-7453-4eb7-8973-bef187574418'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.213 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.220 248514 DEBUG nova.virt.libvirt.vif [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-272771977',display_name='tempest-TestGettingAddress-server-272771977',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-272771977',id=150,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIpcx/5vuboJVwrV0nq+WfHP6t5LA2OndBCH/1ll7d/PpOHljwzW/JjtFSE1pOt+PUX/l8C8ALgzEgIgSQ3b5WDhZcogX0HQPVvqXPOMGr4RM3k5j3wZaHphWzCWA2h6Q==',key_name='tempest-TestGettingAddress-483066452',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:20:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-nwca5oxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:20:41Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=776d7b4e-dc82-47f6-b1bb-53188ad804b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.221 248514 DEBUG nova.network.os_vif_util [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.222 248514 DEBUG nova.network.os_vif_util [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.223 248514 DEBUG os_vif [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.227 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e8a3ef7-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.231 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.235 248514 INFO os_vif [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51')
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.557 248514 INFO nova.virt.libvirt.driver [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Deleting instance files /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6_del
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.558 248514 INFO nova.virt.libvirt.driver [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Deletion of /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6_del complete
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.666 248514 INFO nova.compute.manager [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Took 0.71 seconds to destroy the instance on the hypervisor.
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.667 248514 DEBUG oslo.service.loopingcall [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.668 248514 DEBUG nova.compute.manager [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.669 248514 DEBUG nova.network.neutron [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:21:04 compute-0 nova_compute[248510]: 2025-12-13 09:21:04.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3599: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:21:05 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.812 248514 DEBUG nova.network.neutron [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:21:05 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.833 248514 INFO nova.compute.manager [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Took 1.16 seconds to deallocate network for instance.
Dec 13 09:21:05 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.900 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:05 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.901 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:05 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.994 248514 DEBUG nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-vif-unplugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:21:05 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.995 248514 DEBUG oslo_concurrency.lockutils [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:05 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.996 248514 DEBUG oslo_concurrency.lockutils [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:05 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.996 248514 DEBUG oslo_concurrency.lockutils [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:05 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.997 248514 DEBUG nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] No waiting events found dispatching network-vif-unplugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:21:05 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.998 248514 WARNING nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received unexpected event network-vif-unplugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 for instance with vm_state deleted and task_state None.
Dec 13 09:21:05 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.998 248514 DEBUG nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:21:05 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.999 248514 DEBUG oslo_concurrency.lockutils [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:05.999 248514 DEBUG oslo_concurrency.lockutils [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.000 248514 DEBUG oslo_concurrency.lockutils [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.000 248514 DEBUG nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] No waiting events found dispatching network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.001 248514 WARNING nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received unexpected event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 for instance with vm_state deleted and task_state None.
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.001 248514 DEBUG nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-vif-deleted-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.004 248514 DEBUG oslo_concurrency.processutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.090 248514 DEBUG nova.network.neutron [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updated VIF entry in instance network info cache for port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.091 248514 DEBUG nova.network.neutron [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updating instance_info_cache with network_info: [{"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.119 248514 DEBUG oslo_concurrency.lockutils [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:21:06 compute-0 ceph-mon[76537]: pgmap v3599: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:21:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:21:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/902950929' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.586 248514 DEBUG oslo_concurrency.processutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.593 248514 DEBUG nova.compute.provider_tree [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.615 248514 DEBUG nova.scheduler.client.report [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.646 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.675 248514 INFO nova.scheduler.client.report [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 776d7b4e-dc82-47f6-b1bb-53188ad804b6
Dec 13 09:21:06 compute-0 nova_compute[248510]: 2025-12-13 09:21:06.780 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3600: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 764 KiB/s wr, 40 op/s
Dec 13 09:21:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/902950929' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.786 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.787 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.787 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.788 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.788 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.789 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.789 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.812 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.832 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.833 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Image id 0ed20320-9c25-4108-ad76-64b3cb3500ce yields fingerprint 7e19890462cb757da298333dcef0801755c35301 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.833 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] image 0ed20320-9c25-4108-ad76-64b3cb3500ce at (/var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301): checking
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.834 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] image 0ed20320-9c25-4108-ad76-64b3cb3500ce at (/var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.837 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.837 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] 387ffe9e-b998-43aa-bb42-e87639c1ba6a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.837 248514 WARNING nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.838 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Active base files: /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.838 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Removable base files: /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.838 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.838 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.839 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.839 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.839 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.938 248514 DEBUG nova.compute.manager [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-changed-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.939 248514 DEBUG nova.compute.manager [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Refreshing instance network info cache due to event network-changed-76d94bc0-cb6b-4bd0-8366-5158fdaece5b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.939 248514 DEBUG oslo_concurrency.lockutils [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.939 248514 DEBUG oslo_concurrency.lockutils [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:21:07 compute-0 nova_compute[248510]: 2025-12-13 09:21:07.939 248514 DEBUG nova.network.neutron [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Refreshing network info cache for port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.011 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.012 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.012 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.013 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.013 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.014 248514 INFO nova.compute.manager [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Terminating instance
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.016 248514 DEBUG nova.compute.manager [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:21:08 compute-0 kernel: tap76d94bc0-cb (unregistering): left promiscuous mode
Dec 13 09:21:08 compute-0 NetworkManager[50376]: <info>  [1765617668.0768] device (tap76d94bc0-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:21:08 compute-0 ovn_controller[148476]: 2025-12-13T09:21:08Z|01621|binding|INFO|Releasing lport 76d94bc0-cb6b-4bd0-8366-5158fdaece5b from this chassis (sb_readonly=0)
Dec 13 09:21:08 compute-0 ovn_controller[148476]: 2025-12-13T09:21:08Z|01622|binding|INFO|Setting lport 76d94bc0-cb6b-4bd0-8366-5158fdaece5b down in Southbound
Dec 13 09:21:08 compute-0 ovn_controller[148476]: 2025-12-13T09:21:08Z|01623|binding|INFO|Removing iface tap76d94bc0-cb ovn-installed in OVS
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.089 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.092 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:6b:81 10.100.0.11 2001:db8:0:1:f816:3eff:fe30:6b81 2001:db8::f816:3eff:fe30:6b81'], port_security=['fa:16:3e:30:6b:81 10.100.0.11 2001:db8:0:1:f816:3eff:fe30:6b81 2001:db8::f816:3eff:fe30:6b81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8:0:1:f816:3eff:fe30:6b81/64 2001:db8::f816:3eff:fe30:6b81/64', 'neutron:device_id': '387ffe9e-b998-43aa-bb42-e87639c1ba6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7518f8bf-76d2-4439-8293-d0d6a2979cbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60522aa7-9184-4e18-b84a-db845e6d2800, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=76d94bc0-cb6b-4bd0-8366-5158fdaece5b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.094 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b in datapath e41a163e-7597-4a26-aa5d-b894f952cca4 unbound from our chassis
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.097 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e41a163e-7597-4a26-aa5d-b894f952cca4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.098 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[28e63c90-d01c-40d8-b881-c15c31431893]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.100 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4 namespace which is not needed anymore
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.109 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:08 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000095.scope: Deactivated successfully.
Dec 13 09:21:08 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000095.scope: Consumed 15.403s CPU time.
Dec 13 09:21:08 compute-0 systemd-machined[210538]: Machine qemu-180-instance-00000095 terminated.
Dec 13 09:21:08 compute-0 ceph-mon[76537]: pgmap v3600: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 764 KiB/s wr, 40 op/s
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.259 248514 INFO nova.virt.libvirt.driver [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Instance destroyed successfully.
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.260 248514 DEBUG nova.objects.instance [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 387ffe9e-b998-43aa-bb42-e87639c1ba6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.278 248514 DEBUG nova.virt.libvirt.vif [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569396822',display_name='tempest-TestGettingAddress-server-569396822',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569396822',id=149,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIpcx/5vuboJVwrV0nq+WfHP6t5LA2OndBCH/1ll7d/PpOHljwzW/JjtFSE1pOt+PUX/l8C8ALgzEgIgSQ3b5WDhZcogX0HQPVvqXPOMGr4RM3k5j3wZaHphWzCWA2h6Q==',key_name='tempest-TestGettingAddress-483066452',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:20:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-r3ss7vci',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:20:03Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=387ffe9e-b998-43aa-bb42-e87639c1ba6a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.278 248514 DEBUG nova.network.os_vif_util [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.279 248514 DEBUG nova.network.os_vif_util [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.280 248514 DEBUG os_vif [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.282 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.282 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76d94bc0-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.307 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.309 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:08 compute-0 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [NOTICE]   (404502) : haproxy version is 2.8.14-c23fe91
Dec 13 09:21:08 compute-0 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [NOTICE]   (404502) : path to executable is /usr/sbin/haproxy
Dec 13 09:21:08 compute-0 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [WARNING]  (404502) : Exiting Master process...
Dec 13 09:21:08 compute-0 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [WARNING]  (404502) : Exiting Master process...
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.312 248514 INFO os_vif [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb')
Dec 13 09:21:08 compute-0 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [ALERT]    (404502) : Current worker (404504) exited with code 143 (Terminated)
Dec 13 09:21:08 compute-0 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [WARNING]  (404502) : All workers exited. Exiting... (0)
Dec 13 09:21:08 compute-0 systemd[1]: libpod-d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c.scope: Deactivated successfully.
Dec 13 09:21:08 compute-0 podman[405775]: 2025-12-13 09:21:08.321262111 +0000 UTC m=+0.087498931 container died d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 09:21:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c-userdata-shm.mount: Deactivated successfully.
Dec 13 09:21:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fcb212c1c75939221274e39c5248b874df50042250d2b2667a9a1d38806218d-merged.mount: Deactivated successfully.
Dec 13 09:21:08 compute-0 podman[405775]: 2025-12-13 09:21:08.369552514 +0000 UTC m=+0.135789334 container cleanup d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 09:21:08 compute-0 systemd[1]: libpod-conmon-d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c.scope: Deactivated successfully.
Dec 13 09:21:08 compute-0 podman[405833]: 2025-12-13 09:21:08.462123271 +0000 UTC m=+0.061231847 container remove d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.470 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b7a874-18f6-4713-8209-99a898bf4ced]: (4, ('Sat Dec 13 09:21:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4 (d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c)\nd73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c\nSat Dec 13 09:21:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4 (d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c)\nd73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.473 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bf655df2-146b-43aa-8832-7b807c8585ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.474 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape41a163e-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:08 compute-0 kernel: tape41a163e-70: left promiscuous mode
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.490 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd16346b-4b91-4b26-bcf1-408f85ec749f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.509 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1e53fbde-e9a2-4f82-93cc-3103236d6163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.511 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[82b47ace-cc35-4d2f-8fb3-8207d89641c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.527 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ad94e5-7a65-4bd0-abec-04512fda14a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017954, 'reachable_time': 20185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405849, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:08 compute-0 systemd[1]: run-netns-ovnmeta\x2de41a163e\x2d7597\x2d4a26\x2daa5d\x2db894f952cca4.mount: Deactivated successfully.
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.531 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:21:08 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.531 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[2517588a-e44c-47d1-a120-7bc523f1f6b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.615 248514 INFO nova.virt.libvirt.driver [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Deleting instance files /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a_del
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.616 248514 INFO nova.virt.libvirt.driver [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Deletion of /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a_del complete
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.687 248514 INFO nova.compute.manager [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Took 0.67 seconds to destroy the instance on the hypervisor.
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.687 248514 DEBUG oslo.service.loopingcall [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.687 248514 DEBUG nova.compute.manager [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:21:08 compute-0 nova_compute[248510]: 2025-12-13 09:21:08.688 248514 DEBUG nova.network.neutron [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:21:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3601: 321 pgs: 321 active+clean; 136 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 204 KiB/s rd, 766 KiB/s wr, 70 op/s
Dec 13 09:21:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:21:09
Dec 13 09:21:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:21:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:21:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'images']
Dec 13 09:21:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:21:09 compute-0 nova_compute[248510]: 2025-12-13 09:21:09.474 248514 DEBUG nova.network.neutron [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:21:09 compute-0 nova_compute[248510]: 2025-12-13 09:21:09.499 248514 INFO nova.compute.manager [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Took 0.81 seconds to deallocate network for instance.
Dec 13 09:21:09 compute-0 nova_compute[248510]: 2025-12-13 09:21:09.563 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:09 compute-0 nova_compute[248510]: 2025-12-13 09:21:09.564 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:09 compute-0 nova_compute[248510]: 2025-12-13 09:21:09.573 248514 DEBUG nova.compute.manager [req-ecc7fbd4-505f-4b72-8efa-e431aaa1f05c req-7a099754-7e46-4889-8b84-c974b07c384d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-vif-deleted-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:21:09 compute-0 nova_compute[248510]: 2025-12-13 09:21:09.619 248514 DEBUG oslo_concurrency.processutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:21:09 compute-0 nova_compute[248510]: 2025-12-13 09:21:09.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:09 compute-0 nova_compute[248510]: 2025-12-13 09:21:09.919 248514 DEBUG nova.network.neutron [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updated VIF entry in instance network info cache for port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:21:09 compute-0 nova_compute[248510]: 2025-12-13 09:21:09.920 248514 DEBUG nova.network.neutron [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updating instance_info_cache with network_info: [{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:21:09 compute-0 nova_compute[248510]: 2025-12-13 09:21:09.943 248514 DEBUG oslo_concurrency.lockutils [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.046 248514 DEBUG nova.compute.manager [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-vif-unplugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.047 248514 DEBUG oslo_concurrency.lockutils [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.048 248514 DEBUG oslo_concurrency.lockutils [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.048 248514 DEBUG oslo_concurrency.lockutils [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.048 248514 DEBUG nova.compute.manager [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] No waiting events found dispatching network-vif-unplugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.049 248514 WARNING nova.compute.manager [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received unexpected event network-vif-unplugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b for instance with vm_state deleted and task_state None.
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.049 248514 DEBUG nova.compute.manager [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.049 248514 DEBUG oslo_concurrency.lockutils [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.050 248514 DEBUG oslo_concurrency.lockutils [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.050 248514 DEBUG oslo_concurrency.lockutils [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.050 248514 DEBUG nova.compute.manager [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] No waiting events found dispatching network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.051 248514 WARNING nova.compute.manager [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received unexpected event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b for instance with vm_state deleted and task_state None.
Dec 13 09:21:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:21:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:21:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:21:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:21:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:21:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:21:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:21:10 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182366854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:21:10 compute-0 ceph-mon[76537]: pgmap v3601: 321 pgs: 321 active+clean; 136 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 204 KiB/s rd, 766 KiB/s wr, 70 op/s
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.245 248514 DEBUG oslo_concurrency.processutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.255 248514 DEBUG nova.compute.provider_tree [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.285 248514 DEBUG nova.scheduler.client.report [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.315 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.358 248514 INFO nova.scheduler.client.report [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 387ffe9e-b998-43aa-bb42-e87639c1ba6a
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.431 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:10 compute-0 nova_compute[248510]: 2025-12-13 09:21:10.819 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3602: 321 pgs: 321 active+clean; 79 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 26 KiB/s wr, 46 op/s
Dec 13 09:21:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:21:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:21:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:21:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:21:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:21:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:21:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:21:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:21:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:21:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:21:11 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/182366854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:21:12 compute-0 podman[405873]: 2025-12-13 09:21:12.013984353 +0000 UTC m=+0.082594189 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 13 09:21:12 compute-0 podman[405874]: 2025-12-13 09:21:12.02749345 +0000 UTC m=+0.095794008 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 09:21:12 compute-0 podman[405872]: 2025-12-13 09:21:12.042618556 +0000 UTC m=+0.114387411 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Dec 13 09:21:12 compute-0 ceph-mon[76537]: pgmap v3602: 321 pgs: 321 active+clean; 79 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 26 KiB/s wr, 46 op/s
Dec 13 09:21:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3603: 321 pgs: 321 active+clean; 46 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 26 KiB/s wr, 56 op/s
Dec 13 09:21:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:21:13 compute-0 nova_compute[248510]: 2025-12-13 09:21:13.309 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:13.536 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:21:13 compute-0 nova_compute[248510]: 2025-12-13 09:21:13.537 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:13 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:13.538 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:21:14 compute-0 ceph-mon[76537]: pgmap v3603: 321 pgs: 321 active+clean; 46 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 26 KiB/s wr, 56 op/s
Dec 13 09:21:14 compute-0 nova_compute[248510]: 2025-12-13 09:21:14.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3604: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 24 KiB/s wr, 58 op/s
Dec 13 09:21:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:21:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/452393289' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:21:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:21:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/452393289' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:21:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/452393289' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:21:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/452393289' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:21:16 compute-0 ceph-mon[76537]: pgmap v3604: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 24 KiB/s wr, 58 op/s
Dec 13 09:21:16 compute-0 nova_compute[248510]: 2025-12-13 09:21:16.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:16 compute-0 nova_compute[248510]: 2025-12-13 09:21:16.958 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3605: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Dec 13 09:21:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:21:18 compute-0 nova_compute[248510]: 2025-12-13 09:21:18.312 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:18 compute-0 ceph-mon[76537]: pgmap v3605: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Dec 13 09:21:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3606: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Dec 13 09:21:19 compute-0 nova_compute[248510]: 2025-12-13 09:21:19.201 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617664.199182, 776d7b4e-dc82-47f6-b1bb-53188ad804b6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:21:19 compute-0 nova_compute[248510]: 2025-12-13 09:21:19.201 248514 INFO nova.compute.manager [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] VM Stopped (Lifecycle Event)
Dec 13 09:21:19 compute-0 nova_compute[248510]: 2025-12-13 09:21:19.229 248514 DEBUG nova.compute.manager [None req-d2aa4f23-4e27-4eef-abe2-6ffd2fc1c42a - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:21:19 compute-0 nova_compute[248510]: 2025-12-13 09:21:19.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:20 compute-0 ceph-mon[76537]: pgmap v3606: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3607: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.3 KiB/s wr, 25 op/s
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.497708657387606e-05 of space, bias 1.0, pg target 0.004493125972162818 quantized to 32 (current 32)
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696862883188067 of space, bias 1.0, pg target 0.200905886495642 quantized to 32 (current 32)
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.723939957605983e-07 of space, bias 4.0, pg target 0.0006868727949127179 quantized to 16 (current 32)
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:21:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:21:22 compute-0 ceph-mon[76537]: pgmap v3607: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.3 KiB/s wr, 25 op/s
Dec 13 09:21:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3608: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 682 B/s wr, 12 op/s
Dec 13 09:21:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:21:23 compute-0 nova_compute[248510]: 2025-12-13 09:21:23.255 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617668.25371, 387ffe9e-b998-43aa-bb42-e87639c1ba6a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:21:23 compute-0 nova_compute[248510]: 2025-12-13 09:21:23.256 248514 INFO nova.compute.manager [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] VM Stopped (Lifecycle Event)
Dec 13 09:21:23 compute-0 nova_compute[248510]: 2025-12-13 09:21:23.283 248514 DEBUG nova.compute.manager [None req-e54e4e89-532d-43c1-a452-6b12d8a36f8c - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:21:23 compute-0 nova_compute[248510]: 2025-12-13 09:21:23.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:23.541 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:24 compute-0 sudo[405938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:21:24 compute-0 sudo[405938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:21:24 compute-0 sudo[405938]: pam_unix(sudo:session): session closed for user root
Dec 13 09:21:24 compute-0 sudo[405963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:21:24 compute-0 sudo[405963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:21:24 compute-0 ceph-mon[76537]: pgmap v3608: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 682 B/s wr, 12 op/s
Dec 13 09:21:24 compute-0 nova_compute[248510]: 2025-12-13 09:21:24.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3609: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Dec 13 09:21:25 compute-0 sudo[405963]: pam_unix(sudo:session): session closed for user root
Dec 13 09:21:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:21:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:21:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:21:25 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:21:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:21:25 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:21:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:21:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:21:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:21:25 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:21:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:21:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:21:25 compute-0 sudo[406020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:21:25 compute-0 sudo[406020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:21:25 compute-0 sudo[406020]: pam_unix(sudo:session): session closed for user root
Dec 13 09:21:25 compute-0 ceph-mon[76537]: pgmap v3609: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Dec 13 09:21:25 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:21:25 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:21:25 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:21:25 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:21:25 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:21:25 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:21:25 compute-0 sudo[406045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:21:25 compute-0 sudo[406045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:21:25 compute-0 nova_compute[248510]: 2025-12-13 09:21:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:25 compute-0 podman[406082]: 2025-12-13 09:21:25.935307711 +0000 UTC m=+0.108605007 container create 94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec 13 09:21:25 compute-0 podman[406082]: 2025-12-13 09:21:25.85339856 +0000 UTC m=+0.026695886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:21:26 compute-0 systemd[1]: Started libpod-conmon-94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322.scope.
Dec 13 09:21:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:21:26 compute-0 podman[406082]: 2025-12-13 09:21:26.181596308 +0000 UTC m=+0.354893624 container init 94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_meitner, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:21:26 compute-0 podman[406082]: 2025-12-13 09:21:26.189108636 +0000 UTC m=+0.362405932 container start 94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_meitner, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:21:26 compute-0 eager_meitner[406098]: 167 167
Dec 13 09:21:26 compute-0 systemd[1]: libpod-94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322.scope: Deactivated successfully.
Dec 13 09:21:26 compute-0 conmon[406098]: conmon 94876c858e2883d6b07e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322.scope/container/memory.events
Dec 13 09:21:26 compute-0 podman[406082]: 2025-12-13 09:21:26.301461985 +0000 UTC m=+0.474759321 container attach 94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 09:21:26 compute-0 podman[406082]: 2025-12-13 09:21:26.30205355 +0000 UTC m=+0.475350886 container died 94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 09:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6159560793f0aedec36f604f1b7529852b4d0a50373f6bcf585bffead3e1f8a2-merged.mount: Deactivated successfully.
Dec 13 09:21:26 compute-0 podman[406082]: 2025-12-13 09:21:26.612184007 +0000 UTC m=+0.785481313 container remove 94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 09:21:26 compute-0 systemd[1]: libpod-conmon-94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322.scope: Deactivated successfully.
Dec 13 09:21:26 compute-0 podman[406120]: 2025-12-13 09:21:26.800245283 +0000 UTC m=+0.043032713 container create 1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_goldberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 09:21:26 compute-0 podman[406120]: 2025-12-13 09:21:26.78168395 +0000 UTC m=+0.024471400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:21:27 compute-0 systemd[1]: Started libpod-conmon-1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79.scope.
Dec 13 09:21:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3610: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8f9c440e039a231c973d2e121d5cac44035549957d4983cab371de0163695f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8f9c440e039a231c973d2e121d5cac44035549957d4983cab371de0163695f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8f9c440e039a231c973d2e121d5cac44035549957d4983cab371de0163695f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8f9c440e039a231c973d2e121d5cac44035549957d4983cab371de0163695f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8f9c440e039a231c973d2e121d5cac44035549957d4983cab371de0163695f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:27 compute-0 podman[406120]: 2025-12-13 09:21:27.109627941 +0000 UTC m=+0.352415391 container init 1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:21:27 compute-0 podman[406120]: 2025-12-13 09:21:27.117950059 +0000 UTC m=+0.360737489 container start 1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:21:27 compute-0 podman[406120]: 2025-12-13 09:21:27.121374674 +0000 UTC m=+0.364162104 container attach 1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 13 09:21:27 compute-0 laughing_goldberg[406137]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:21:27 compute-0 laughing_goldberg[406137]: --> All data devices are unavailable
Dec 13 09:21:27 compute-0 systemd[1]: libpod-1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79.scope: Deactivated successfully.
Dec 13 09:21:27 compute-0 podman[406120]: 2025-12-13 09:21:27.644227242 +0000 UTC m=+0.887014712 container died 1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_goldberg, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a8f9c440e039a231c973d2e121d5cac44035549957d4983cab371de0163695f-merged.mount: Deactivated successfully.
Dec 13 09:21:27 compute-0 nova_compute[248510]: 2025-12-13 09:21:27.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:27 compute-0 nova_compute[248510]: 2025-12-13 09:21:27.775 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 09:21:27 compute-0 nova_compute[248510]: 2025-12-13 09:21:27.807 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 09:21:27 compute-0 podman[406120]: 2025-12-13 09:21:27.840622405 +0000 UTC m=+1.083409845 container remove 1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 09:21:27 compute-0 systemd[1]: libpod-conmon-1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79.scope: Deactivated successfully.
Dec 13 09:21:27 compute-0 sudo[406045]: pam_unix(sudo:session): session closed for user root
Dec 13 09:21:27 compute-0 sudo[406171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:21:27 compute-0 sudo[406171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:21:27 compute-0 sudo[406171]: pam_unix(sudo:session): session closed for user root
Dec 13 09:21:28 compute-0 sudo[406196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:21:28 compute-0 sudo[406196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:21:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:21:28 compute-0 nova_compute[248510]: 2025-12-13 09:21:28.317 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:28 compute-0 ceph-mon[76537]: pgmap v3610: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:28 compute-0 podman[406233]: 2025-12-13 09:21:28.331405534 +0000 UTC m=+0.046054508 container create 3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_knuth, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:21:28 compute-0 podman[406233]: 2025-12-13 09:21:28.311596641 +0000 UTC m=+0.026245625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:21:28 compute-0 systemd[1]: Started libpod-conmon-3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7.scope.
Dec 13 09:21:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:21:28 compute-0 podman[406233]: 2025-12-13 09:21:28.711976978 +0000 UTC m=+0.426625962 container init 3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:21:28 compute-0 podman[406233]: 2025-12-13 09:21:28.719749591 +0000 UTC m=+0.434398565 container start 3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:21:28 compute-0 sweet_knuth[406249]: 167 167
Dec 13 09:21:28 compute-0 systemd[1]: libpod-3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7.scope: Deactivated successfully.
Dec 13 09:21:28 compute-0 podman[406233]: 2025-12-13 09:21:28.73616085 +0000 UTC m=+0.450809844 container attach 3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec 13 09:21:28 compute-0 podman[406233]: 2025-12-13 09:21:28.738364675 +0000 UTC m=+0.453013679 container died 3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_knuth, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:21:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f9af6ca7bb067ef107378ecd0a6a06c346514356b600e890f64242678a9cefe-merged.mount: Deactivated successfully.
Dec 13 09:21:28 compute-0 podman[406233]: 2025-12-13 09:21:28.780362602 +0000 UTC m=+0.495011576 container remove 3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:21:28 compute-0 systemd[1]: libpod-conmon-3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7.scope: Deactivated successfully.
Dec 13 09:21:28 compute-0 nova_compute[248510]: 2025-12-13 09:21:28.801 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:28 compute-0 podman[406274]: 2025-12-13 09:21:28.959996857 +0000 UTC m=+0.042992492 container create 91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:21:29 compute-0 systemd[1]: Started libpod-conmon-91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757.scope.
Dec 13 09:21:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f8ff01510d98d1c8fb5492a0d10be8eef77418742d2c2b2bd1b55aebbe0a33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f8ff01510d98d1c8fb5492a0d10be8eef77418742d2c2b2bd1b55aebbe0a33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f8ff01510d98d1c8fb5492a0d10be8eef77418742d2c2b2bd1b55aebbe0a33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f8ff01510d98d1c8fb5492a0d10be8eef77418742d2c2b2bd1b55aebbe0a33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:29 compute-0 podman[406274]: 2025-12-13 09:21:28.942227254 +0000 UTC m=+0.025222919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:21:29 compute-0 podman[406274]: 2025-12-13 09:21:29.044746669 +0000 UTC m=+0.127742344 container init 91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 09:21:29 compute-0 podman[406274]: 2025-12-13 09:21:29.053588889 +0000 UTC m=+0.136584524 container start 91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:21:29 compute-0 podman[406274]: 2025-12-13 09:21:29.05683241 +0000 UTC m=+0.139828065 container attach 91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bassi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:21:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3611: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]: {
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:     "0": [
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:         {
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "devices": [
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "/dev/loop3"
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             ],
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_name": "ceph_lv0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_size": "21470642176",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "name": "ceph_lv0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "tags": {
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.cluster_name": "ceph",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.crush_device_class": "",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.encrypted": "0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.objectstore": "bluestore",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.osd_id": "0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.type": "block",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.vdo": "0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.with_tpm": "0"
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             },
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "type": "block",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "vg_name": "ceph_vg0"
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:         }
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:     ],
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:     "1": [
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:         {
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "devices": [
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "/dev/loop4"
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             ],
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_name": "ceph_lv1",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_size": "21470642176",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "name": "ceph_lv1",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "tags": {
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.cluster_name": "ceph",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.crush_device_class": "",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.encrypted": "0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.objectstore": "bluestore",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.osd_id": "1",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.type": "block",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.vdo": "0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.with_tpm": "0"
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             },
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "type": "block",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "vg_name": "ceph_vg1"
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:         }
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:     ],
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:     "2": [
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:         {
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "devices": [
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "/dev/loop5"
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             ],
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_name": "ceph_lv2",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_size": "21470642176",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "name": "ceph_lv2",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "tags": {
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.cluster_name": "ceph",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.crush_device_class": "",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.encrypted": "0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.objectstore": "bluestore",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.osd_id": "2",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.type": "block",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.vdo": "0",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:                 "ceph.with_tpm": "0"
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             },
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "type": "block",
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:             "vg_name": "ceph_vg2"
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:         }
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]:     ]
Dec 13 09:21:29 compute-0 flamboyant_bassi[406290]: }
Dec 13 09:21:29 compute-0 systemd[1]: libpod-91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757.scope: Deactivated successfully.
Dec 13 09:21:29 compute-0 podman[406274]: 2025-12-13 09:21:29.407124958 +0000 UTC m=+0.490120623 container died 91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 09:21:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1f8ff01510d98d1c8fb5492a0d10be8eef77418742d2c2b2bd1b55aebbe0a33-merged.mount: Deactivated successfully.
Dec 13 09:21:29 compute-0 podman[406274]: 2025-12-13 09:21:29.455809091 +0000 UTC m=+0.538804726 container remove 91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bassi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 13 09:21:29 compute-0 systemd[1]: libpod-conmon-91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757.scope: Deactivated successfully.
Dec 13 09:21:29 compute-0 sudo[406196]: pam_unix(sudo:session): session closed for user root
Dec 13 09:21:29 compute-0 sudo[406310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:21:29 compute-0 sudo[406310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:21:29 compute-0 sudo[406310]: pam_unix(sudo:session): session closed for user root
Dec 13 09:21:29 compute-0 sudo[406335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:21:29 compute-0 nova_compute[248510]: 2025-12-13 09:21:29.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:29 compute-0 sudo[406335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:21:30 compute-0 podman[406374]: 2025-12-13 09:21:30.006706328 +0000 UTC m=+0.057146455 container create ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:21:30 compute-0 systemd[1]: Started libpod-conmon-ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3.scope.
Dec 13 09:21:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:21:30 compute-0 podman[406374]: 2025-12-13 09:21:29.98914079 +0000 UTC m=+0.039580937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:21:30 compute-0 podman[406374]: 2025-12-13 09:21:30.090771853 +0000 UTC m=+0.141212000 container init ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:21:30 compute-0 podman[406374]: 2025-12-13 09:21:30.097239874 +0000 UTC m=+0.147680011 container start ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 09:21:30 compute-0 podman[406374]: 2025-12-13 09:21:30.101776427 +0000 UTC m=+0.152216564 container attach ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 09:21:30 compute-0 dreamy_ptolemy[406390]: 167 167
Dec 13 09:21:30 compute-0 systemd[1]: libpod-ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3.scope: Deactivated successfully.
Dec 13 09:21:30 compute-0 podman[406374]: 2025-12-13 09:21:30.104806872 +0000 UTC m=+0.155246999 container died ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:21:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8979391525bde997567861d586ab3fbc40b97ee3b952f62b85a50f699e3124d9-merged.mount: Deactivated successfully.
Dec 13 09:21:30 compute-0 podman[406374]: 2025-12-13 09:21:30.140519742 +0000 UTC m=+0.190959869 container remove ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 09:21:30 compute-0 systemd[1]: libpod-conmon-ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3.scope: Deactivated successfully.
Dec 13 09:21:30 compute-0 nova_compute[248510]: 2025-12-13 09:21:30.224 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:30 compute-0 podman[406414]: 2025-12-13 09:21:30.324277381 +0000 UTC m=+0.051051743 container create 7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_knuth, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:21:30 compute-0 ceph-mon[76537]: pgmap v3611: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:30 compute-0 systemd[1]: Started libpod-conmon-7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0.scope.
Dec 13 09:21:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:21:30 compute-0 podman[406414]: 2025-12-13 09:21:30.302427517 +0000 UTC m=+0.029201909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebaed8f01766ed33c5f0f568ce8f0d6374ecd1c18c1c3b3327c54d1256e5014f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebaed8f01766ed33c5f0f568ce8f0d6374ecd1c18c1c3b3327c54d1256e5014f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebaed8f01766ed33c5f0f568ce8f0d6374ecd1c18c1c3b3327c54d1256e5014f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebaed8f01766ed33c5f0f568ce8f0d6374ecd1c18c1c3b3327c54d1256e5014f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:30 compute-0 podman[406414]: 2025-12-13 09:21:30.417228987 +0000 UTC m=+0.144003369 container init 7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_knuth, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 09:21:30 compute-0 podman[406414]: 2025-12-13 09:21:30.425479833 +0000 UTC m=+0.152254195 container start 7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_knuth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 09:21:30 compute-0 podman[406414]: 2025-12-13 09:21:30.429293918 +0000 UTC m=+0.156068290 container attach 7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:21:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3612: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:31 compute-0 lvm[406508]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:21:31 compute-0 lvm[406509]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:21:31 compute-0 lvm[406509]: VG ceph_vg0 finished
Dec 13 09:21:31 compute-0 lvm[406508]: VG ceph_vg1 finished
Dec 13 09:21:31 compute-0 lvm[406511]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:21:31 compute-0 lvm[406511]: VG ceph_vg2 finished
Dec 13 09:21:31 compute-0 nostalgic_knuth[406430]: {}
Dec 13 09:21:31 compute-0 systemd[1]: libpod-7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0.scope: Deactivated successfully.
Dec 13 09:21:31 compute-0 systemd[1]: libpod-7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0.scope: Consumed 1.524s CPU time.
Dec 13 09:21:31 compute-0 podman[406414]: 2025-12-13 09:21:31.354783997 +0000 UTC m=+1.081558359 container died 7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 09:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebaed8f01766ed33c5f0f568ce8f0d6374ecd1c18c1c3b3327c54d1256e5014f-merged.mount: Deactivated successfully.
Dec 13 09:21:31 compute-0 podman[406414]: 2025-12-13 09:21:31.401379558 +0000 UTC m=+1.128153920 container remove 7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 09:21:31 compute-0 systemd[1]: libpod-conmon-7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0.scope: Deactivated successfully.
Dec 13 09:21:31 compute-0 sudo[406335]: pam_unix(sudo:session): session closed for user root
Dec 13 09:21:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:21:31 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:21:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:21:31 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:21:31 compute-0 sudo[406526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:21:31 compute-0 sudo[406526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:21:31 compute-0 sudo[406526]: pam_unix(sudo:session): session closed for user root
Dec 13 09:21:32 compute-0 ceph-mon[76537]: pgmap v3612: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:21:32 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:21:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3613: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:21:33 compute-0 nova_compute[248510]: 2025-12-13 09:21:33.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:33 compute-0 nova_compute[248510]: 2025-12-13 09:21:33.799 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:33 compute-0 nova_compute[248510]: 2025-12-13 09:21:33.800 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:21:33 compute-0 nova_compute[248510]: 2025-12-13 09:21:33.801 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:21:34 compute-0 nova_compute[248510]: 2025-12-13 09:21:34.284 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:21:34 compute-0 ceph-mon[76537]: pgmap v3613: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:34 compute-0 nova_compute[248510]: 2025-12-13 09:21:34.654 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3614: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:35.903 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:66:81 10.100.0.2 2001:db8::f816:3eff:fe2c:6681'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe2c:6681/64', 'neutron:device_id': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=681ece70-9463-4a56-a445-6b0b679ff248, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f3bff06d-212e-4afb-99d4-011e9e890967) old=Port_Binding(mac=['fa:16:3e:2c:66:81 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:35.904 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f3bff06d-212e-4afb-99d4-011e9e890967 in datapath e4b9bd04-ada7-4867-9918-3cd5d21d273d updated
Dec 13 09:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:35.905 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4b9bd04-ada7-4867-9918-3cd5d21d273d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:21:35 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:35.907 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd765596-1e39-4cc9-aa67-ab874f3f2a27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:36 compute-0 ceph-mon[76537]: pgmap v3614: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:36 compute-0 nova_compute[248510]: 2025-12-13 09:21:36.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3615: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:21:38 compute-0 nova_compute[248510]: 2025-12-13 09:21:38.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:38 compute-0 ceph-mon[76537]: pgmap v3615: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:38 compute-0 nova_compute[248510]: 2025-12-13 09:21:38.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:38 compute-0 nova_compute[248510]: 2025-12-13 09:21:38.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:38 compute-0 nova_compute[248510]: 2025-12-13 09:21:38.813 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:38 compute-0 nova_compute[248510]: 2025-12-13 09:21:38.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:38 compute-0 nova_compute[248510]: 2025-12-13 09:21:38.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:38 compute-0 nova_compute[248510]: 2025-12-13 09:21:38.814 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:21:38 compute-0 nova_compute[248510]: 2025-12-13 09:21:38.815 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:21:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3616: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:21:39 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2287222542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:21:39 compute-0 nova_compute[248510]: 2025-12-13 09:21:39.423 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:21:39 compute-0 nova_compute[248510]: 2025-12-13 09:21:39.642 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:21:39 compute-0 nova_compute[248510]: 2025-12-13 09:21:39.643 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3459MB free_disk=59.9873828003183GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:21:39 compute-0 nova_compute[248510]: 2025-12-13 09:21:39.644 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:39 compute-0 nova_compute[248510]: 2025-12-13 09:21:39.644 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:39 compute-0 nova_compute[248510]: 2025-12-13 09:21:39.655 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:39 compute-0 nova_compute[248510]: 2025-12-13 09:21:39.975 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:21:39 compute-0 nova_compute[248510]: 2025-12-13 09:21:39.976 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:21:39 compute-0 nova_compute[248510]: 2025-12-13 09:21:39.999 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:21:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:21:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:21:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:21:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:21:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:21:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:21:40 compute-0 ceph-mon[76537]: pgmap v3616: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:40 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2287222542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:21:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:21:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1614901868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:21:40 compute-0 nova_compute[248510]: 2025-12-13 09:21:40.588 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:21:40 compute-0 nova_compute[248510]: 2025-12-13 09:21:40.595 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:21:40 compute-0 nova_compute[248510]: 2025-12-13 09:21:40.618 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:21:40 compute-0 nova_compute[248510]: 2025-12-13 09:21:40.638 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:21:40 compute-0 nova_compute[248510]: 2025-12-13 09:21:40.639 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3617: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1614901868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:21:41 compute-0 nova_compute[248510]: 2025-12-13 09:21:41.639 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:41 compute-0 nova_compute[248510]: 2025-12-13 09:21:41.640 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:41 compute-0 nova_compute[248510]: 2025-12-13 09:21:41.640 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:21:42 compute-0 ceph-mon[76537]: pgmap v3617: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:42 compute-0 podman[406598]: 2025-12-13 09:21:42.978151418 +0000 UTC m=+0.062236532 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:21:42 compute-0 podman[406597]: 2025-12-13 09:21:42.986028464 +0000 UTC m=+0.070938938 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_id=multipathd)
Dec 13 09:21:43 compute-0 podman[406596]: 2025-12-13 09:21:43.011908539 +0000 UTC m=+0.096818963 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 09:21:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3618: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:21:43 compute-0 nova_compute[248510]: 2025-12-13 09:21:43.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:43 compute-0 ceph-mon[76537]: pgmap v3618: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:43 compute-0 nova_compute[248510]: 2025-12-13 09:21:43.477 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:43 compute-0 nova_compute[248510]: 2025-12-13 09:21:43.477 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:43 compute-0 nova_compute[248510]: 2025-12-13 09:21:43.498 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:21:43 compute-0 nova_compute[248510]: 2025-12-13 09:21:43.591 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:43 compute-0 nova_compute[248510]: 2025-12-13 09:21:43.592 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:43 compute-0 nova_compute[248510]: 2025-12-13 09:21:43.600 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:21:43 compute-0 nova_compute[248510]: 2025-12-13 09:21:43.601 248514 INFO nova.compute.claims [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:21:43 compute-0 nova_compute[248510]: 2025-12-13 09:21:43.713 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:21:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:21:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820773070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.310 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.321 248514 DEBUG nova.compute.provider_tree [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:21:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3820773070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.475 248514 DEBUG nova.scheduler.client.report [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.514 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.516 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.583 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.584 248514 DEBUG nova.network.neutron [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.615 248514 INFO nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.637 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.655 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.736 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.737 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.738 248514 INFO nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Creating image(s)
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.773 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.813 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.855 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.862 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.921 248514 DEBUG nova.policy [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.960 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.961 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.962 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.963 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.993 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:21:44 compute-0 nova_compute[248510]: 2025-12-13 09:21:44.999 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1437fc03-8f31-440f-8928-2fe388a22bbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:21:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3619: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:45 compute-0 nova_compute[248510]: 2025-12-13 09:21:45.337 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1437fc03-8f31-440f-8928-2fe388a22bbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:21:45 compute-0 ceph-mon[76537]: pgmap v3619: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:45 compute-0 nova_compute[248510]: 2025-12-13 09:21:45.411 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:21:45 compute-0 nova_compute[248510]: 2025-12-13 09:21:45.497 248514 DEBUG nova.objects.instance [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 1437fc03-8f31-440f-8928-2fe388a22bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:21:45 compute-0 nova_compute[248510]: 2025-12-13 09:21:45.517 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:21:45 compute-0 nova_compute[248510]: 2025-12-13 09:21:45.517 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Ensure instance console log exists: /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:21:45 compute-0 nova_compute[248510]: 2025-12-13 09:21:45.518 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:45 compute-0 nova_compute[248510]: 2025-12-13 09:21:45.518 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:45 compute-0 nova_compute[248510]: 2025-12-13 09:21:45.519 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3620: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:47 compute-0 nova_compute[248510]: 2025-12-13 09:21:47.522 248514 DEBUG nova.network.neutron [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Successfully created port: 2777ba55-72e3-4334-96ae-48077ed6a8d5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:21:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:21:48 compute-0 ceph-mon[76537]: pgmap v3620: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:21:48 compute-0 nova_compute[248510]: 2025-12-13 09:21:48.365 248514 DEBUG nova.network.neutron [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Successfully updated port: 2777ba55-72e3-4334-96ae-48077ed6a8d5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:21:48 compute-0 nova_compute[248510]: 2025-12-13 09:21:48.370 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:48 compute-0 nova_compute[248510]: 2025-12-13 09:21:48.384 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:21:48 compute-0 nova_compute[248510]: 2025-12-13 09:21:48.384 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:21:48 compute-0 nova_compute[248510]: 2025-12-13 09:21:48.384 248514 DEBUG nova.network.neutron [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:21:48 compute-0 nova_compute[248510]: 2025-12-13 09:21:48.506 248514 DEBUG nova.compute.manager [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-changed-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:21:48 compute-0 nova_compute[248510]: 2025-12-13 09:21:48.506 248514 DEBUG nova.compute.manager [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Refreshing instance network info cache due to event network-changed-2777ba55-72e3-4334-96ae-48077ed6a8d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:21:48 compute-0 nova_compute[248510]: 2025-12-13 09:21:48.506 248514 DEBUG oslo_concurrency.lockutils [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:21:48 compute-0 nova_compute[248510]: 2025-12-13 09:21:48.597 248514 DEBUG nova.network.neutron [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:21:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3621: 321 pgs: 321 active+clean; 82 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 1.7 MiB/s wr, 2 op/s
Dec 13 09:21:49 compute-0 nova_compute[248510]: 2025-12-13 09:21:49.657 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:50 compute-0 ceph-mon[76537]: pgmap v3621: 321 pgs: 321 active+clean; 82 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 1.7 MiB/s wr, 2 op/s
Dec 13 09:21:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3622: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.548 248514 DEBUG nova.network.neutron [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updating instance_info_cache with network_info: [{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.573 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.573 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Instance network_info: |[{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.574 248514 DEBUG oslo_concurrency.lockutils [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.574 248514 DEBUG nova.network.neutron [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Refreshing network info cache for port 2777ba55-72e3-4334-96ae-48077ed6a8d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.577 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Start _get_guest_xml network_info=[{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.582 248514 WARNING nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.590 248514 DEBUG nova.virt.libvirt.host [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.590 248514 DEBUG nova.virt.libvirt.host [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.599 248514 DEBUG nova.virt.libvirt.host [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.600 248514 DEBUG nova.virt.libvirt.host [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.601 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.601 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.602 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.602 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.602 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.602 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.603 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.603 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.603 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.604 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.604 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.604 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:21:51 compute-0 nova_compute[248510]: 2025-12-13 09:21:51.608 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:21:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:21:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2117008094' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.188 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:21:52 compute-0 ceph-mon[76537]: pgmap v3622: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:21:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2117008094' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.308 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.314 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:21:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:21:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4110810512' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.893 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.895 248514 DEBUG nova.virt.libvirt.vif [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:21:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1012009154',display_name='tempest-TestGettingAddress-server-1012009154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1012009154',id=151,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPtRBq+IYkmbvmIJh/QIOTOUFFBD8bcvC1qbRw2zjdn4nFUb754ilWe6cE2YmyI9CIp/UsSOGtzaJ1FH6v8ozWcGolUeM9n52RqDlU5UB2iwMcylMROMdWHIoAhEPM94oA==',key_name='tempest-TestGettingAddress-343990024',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-4xgqh80f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:21:44Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=1437fc03-8f31-440f-8928-2fe388a22bbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.896 248514 DEBUG nova.network.os_vif_util [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.898 248514 DEBUG nova.network.os_vif_util [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.900 248514 DEBUG nova.objects.instance [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1437fc03-8f31-440f-8928-2fe388a22bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.924 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:21:52 compute-0 nova_compute[248510]:   <uuid>1437fc03-8f31-440f-8928-2fe388a22bbe</uuid>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   <name>instance-00000097</name>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-1012009154</nova:name>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:21:51</nova:creationTime>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:21:52 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:21:52 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:21:52 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:21:52 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:21:52 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:21:52 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:21:52 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:21:52 compute-0 nova_compute[248510]:         <nova:port uuid="2777ba55-72e3-4334-96ae-48077ed6a8d5">
Dec 13 09:21:52 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe4e:1751" ipVersion="6"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <system>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <entry name="serial">1437fc03-8f31-440f-8928-2fe388a22bbe</entry>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <entry name="uuid">1437fc03-8f31-440f-8928-2fe388a22bbe</entry>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     </system>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   <os>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   </os>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   <features>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   </features>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/1437fc03-8f31-440f-8928-2fe388a22bbe_disk">
Dec 13 09:21:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       </source>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:21:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/1437fc03-8f31-440f-8928-2fe388a22bbe_disk.config">
Dec 13 09:21:52 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       </source>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:21:52 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:4e:17:51"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <target dev="tap2777ba55-72"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/console.log" append="off"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <video>
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     </video>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:21:52 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:21:52 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:21:52 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:21:52 compute-0 nova_compute[248510]: </domain>
Dec 13 09:21:52 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.926 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Preparing to wait for external event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.926 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.926 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.927 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.928 248514 DEBUG nova.virt.libvirt.vif [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:21:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1012009154',display_name='tempest-TestGettingAddress-server-1012009154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1012009154',id=151,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPtRBq+IYkmbvmIJh/QIOTOUFFBD8bcvC1qbRw2zjdn4nFUb754ilWe6cE2YmyI9CIp/UsSOGtzaJ1FH6v8ozWcGolUeM9n52RqDlU5UB2iwMcylMROMdWHIoAhEPM94oA==',key_name='tempest-TestGettingAddress-343990024',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-4xgqh80f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:21:44Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=1437fc03-8f31-440f-8928-2fe388a22bbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.928 248514 DEBUG nova.network.os_vif_util [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.929 248514 DEBUG nova.network.os_vif_util [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.929 248514 DEBUG os_vif [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.930 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.931 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.935 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2777ba55-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.936 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2777ba55-72, col_values=(('external_ids', {'iface-id': '2777ba55-72e3-4334-96ae-48077ed6a8d5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:17:51', 'vm-uuid': '1437fc03-8f31-440f-8928-2fe388a22bbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.938 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:52 compute-0 NetworkManager[50376]: <info>  [1765617712.9397] manager: (tap2777ba55-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/673)
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.951 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:52 compute-0 nova_compute[248510]: 2025-12-13 09:21:52.953 248514 INFO os_vif [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72')
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.013 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.013 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.014 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:4e:17:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.014 248514 INFO nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Using config drive
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.040 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:21:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3623: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:21:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:21:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4110810512' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.352 248514 DEBUG nova.network.neutron [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updated VIF entry in instance network info cache for port 2777ba55-72e3-4334-96ae-48077ed6a8d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.353 248514 DEBUG nova.network.neutron [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updating instance_info_cache with network_info: [{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.370 248514 DEBUG oslo_concurrency.lockutils [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.457 248514 INFO nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Creating config drive at /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/disk.config
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.463 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn78rasmo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.633 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn78rasmo" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.665 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.670 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/disk.config 1437fc03-8f31-440f-8928-2fe388a22bbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.888 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/disk.config 1437fc03-8f31-440f-8928-2fe388a22bbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.889 248514 INFO nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Deleting local config drive /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/disk.config because it was imported into RBD.
Dec 13 09:21:53 compute-0 kernel: tap2777ba55-72: entered promiscuous mode
Dec 13 09:21:53 compute-0 NetworkManager[50376]: <info>  [1765617713.9424] manager: (tap2777ba55-72): new Tun device (/org/freedesktop/NetworkManager/Devices/674)
Dec 13 09:21:53 compute-0 ovn_controller[148476]: 2025-12-13T09:21:53Z|01624|binding|INFO|Claiming lport 2777ba55-72e3-4334-96ae-48077ed6a8d5 for this chassis.
Dec 13 09:21:53 compute-0 ovn_controller[148476]: 2025-12-13T09:21:53Z|01625|binding|INFO|2777ba55-72e3-4334-96ae-48077ed6a8d5: Claiming fa:16:3e:4e:17:51 10.100.0.14 2001:db8::f816:3eff:fe4e:1751
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.944 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.947 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:53 compute-0 nova_compute[248510]: 2025-12-13 09:21:53.951 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.960 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:17:51 10.100.0.14 2001:db8::f816:3eff:fe4e:1751'], port_security=['fa:16:3e:4e:17:51 10.100.0.14 2001:db8::f816:3eff:fe4e:1751'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe4e:1751/64', 'neutron:device_id': '1437fc03-8f31-440f-8928-2fe388a22bbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd4a89b2-79c1-411e-b3a6-9349b772360d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=681ece70-9463-4a56-a445-6b0b679ff248, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2777ba55-72e3-4334-96ae-48077ed6a8d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:21:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.962 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2777ba55-72e3-4334-96ae-48077ed6a8d5 in datapath e4b9bd04-ada7-4867-9918-3cd5d21d273d bound to our chassis
Dec 13 09:21:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.965 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4b9bd04-ada7-4867-9918-3cd5d21d273d
Dec 13 09:21:53 compute-0 systemd-udevd[406985]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:21:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.977 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[54f406cd-54da-4eff-bf06-e26cfdc96aac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.978 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4b9bd04-a1 in ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 09:21:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.981 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4b9bd04-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 09:21:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.981 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0bfe14-25e4-4472-bd1d-8c611dbb1e55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:53 compute-0 systemd-machined[210538]: New machine qemu-182-instance-00000097.
Dec 13 09:21:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.982 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[20f75c83-dc99-4074-9763-6bc02fd2b13c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:53 compute-0 NetworkManager[50376]: <info>  [1765617713.9906] device (tap2777ba55-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:21:53 compute-0 NetworkManager[50376]: <info>  [1765617713.9927] device (tap2777ba55-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:21:53 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.997 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[280c73b7-3a11-473a-8d98-a81cbec967a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:54 compute-0 ovn_controller[148476]: 2025-12-13T09:21:54Z|01626|binding|INFO|Setting lport 2777ba55-72e3-4334-96ae-48077ed6a8d5 ovn-installed in OVS
Dec 13 09:21:54 compute-0 ovn_controller[148476]: 2025-12-13T09:21:54Z|01627|binding|INFO|Setting lport 2777ba55-72e3-4334-96ae-48077ed6a8d5 up in Southbound
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.012 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:54 compute-0 systemd[1]: Started Virtual Machine qemu-182-instance-00000097.
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.021 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c703d88-4a9d-4fcb-876b-0b5ad12f123b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.051 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[60579ae0-21e4-4c53-8c34-67d8af1299ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.057 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a88783f0-6e10-4a61-b1cb-e7c03118bfb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 NetworkManager[50376]: <info>  [1765617714.0592] manager: (tape4b9bd04-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/675)
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.113 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[39ec972a-38a5-4b22-9aec-4475784e6458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.117 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[424e1ad5-ede1-43c3-b04b-3ff46941e8af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 NetworkManager[50376]: <info>  [1765617714.1534] device (tape4b9bd04-a0): carrier: link connected
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.165 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[95409b57-36e5-4a95-9a10-a80166792222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.192 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f4473a-a7e3-4cae-b813-f0032b56a193]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4b9bd04-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:66:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1029136, 'reachable_time': 27101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407019, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.214 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c539c68f-bd25-4875-aeea-89fe66f295b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2c:6681'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1029136, 'tstamp': 1029136}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407020, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.236 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9ea036-3df0-41f9-83c4-dab42c4921bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4b9bd04-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:66:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1029136, 'reachable_time': 27101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407021, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.250 248514 DEBUG nova.compute.manager [req-5b6a93c4-441e-41d1-9bc3-1c79d7254c13 req-571fe7b4-3623-48b3-8948-5690f3ad2bb0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.251 248514 DEBUG oslo_concurrency.lockutils [req-5b6a93c4-441e-41d1-9bc3-1c79d7254c13 req-571fe7b4-3623-48b3-8948-5690f3ad2bb0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.252 248514 DEBUG oslo_concurrency.lockutils [req-5b6a93c4-441e-41d1-9bc3-1c79d7254c13 req-571fe7b4-3623-48b3-8948-5690f3ad2bb0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.253 248514 DEBUG oslo_concurrency.lockutils [req-5b6a93c4-441e-41d1-9bc3-1c79d7254c13 req-571fe7b4-3623-48b3-8948-5690f3ad2bb0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.253 248514 DEBUG nova.compute.manager [req-5b6a93c4-441e-41d1-9bc3-1c79d7254c13 req-571fe7b4-3623-48b3-8948-5690f3ad2bb0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Processing event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.279 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d6474da8-f241-4fbe-ba1c-3f6fb7ad84bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 ceph-mon[76537]: pgmap v3623: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.376 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce920d5-4ab4-40de-9657-edcc54e2f496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.378 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4b9bd04-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.379 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.380 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4b9bd04-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.382 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:54 compute-0 NetworkManager[50376]: <info>  [1765617714.3836] manager: (tape4b9bd04-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/676)
Dec 13 09:21:54 compute-0 kernel: tape4b9bd04-a0: entered promiscuous mode
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.386 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4b9bd04-a0, col_values=(('external_ids', {'iface-id': 'f3bff06d-212e-4afb-99d4-011e9e890967'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:21:54 compute-0 ovn_controller[148476]: 2025-12-13T09:21:54Z|01628|binding|INFO|Releasing lport f3bff06d-212e-4afb-99d4-011e9e890967 from this chassis (sb_readonly=0)
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.390 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.416 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.423 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4b9bd04-ada7-4867-9918-3cd5d21d273d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4b9bd04-ada7-4867-9918-3cd5d21d273d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.425 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[43b78ceb-d789-4d2d-8c3a-89fb1f4ab482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.426 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: global
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     log         /dev/log local0 debug
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     log-tag     haproxy-metadata-proxy-e4b9bd04-ada7-4867-9918-3cd5d21d273d
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     user        root
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     group       root
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     maxconn     1024
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     pidfile     /var/lib/neutron/external/pids/e4b9bd04-ada7-4867-9918-3cd5d21d273d.pid.haproxy
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     daemon
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: defaults
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     log global
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     mode http
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     option httplog
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     option dontlognull
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     option http-server-close
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     option forwardfor
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     retries                 3
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     timeout http-request    30s
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     timeout connect         30s
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     timeout client          32s
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     timeout server          32s
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     timeout http-keep-alive 30s
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: listen listener
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     bind 169.254.169.254:80
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:     http-request add-header X-OVN-Network-ID e4b9bd04-ada7-4867-9918-3cd5d21d273d
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 09:21:54 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.427 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'env', 'PROCESS_TAG=haproxy-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4b9bd04-ada7-4867-9918-3cd5d21d273d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.652 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.655 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617714.650619, 1437fc03-8f31-440f-8928-2fe388a22bbe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.656 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] VM Started (Lifecycle Event)
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.710 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.715 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.719 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.722 248514 INFO nova.virt.libvirt.driver [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Instance spawned successfully.
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.723 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.747 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.748 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617714.6507325, 1437fc03-8f31-440f-8928-2fe388a22bbe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.748 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] VM Paused (Lifecycle Event)
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.755 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.756 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.756 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.756 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.757 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.757 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.789 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.793 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617714.7114062, 1437fc03-8f31-440f-8928-2fe388a22bbe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.793 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] VM Resumed (Lifecycle Event)
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.817 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.822 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.832 248514 INFO nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Took 10.10 seconds to spawn the instance on the hypervisor.
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.832 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.844 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:21:54 compute-0 podman[407092]: 2025-12-13 09:21:54.881419402 +0000 UTC m=+0.065703108 container create 5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.895 248514 INFO nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Took 11.33 seconds to build instance.
Dec 13 09:21:54 compute-0 nova_compute[248510]: 2025-12-13 09:21:54.916 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:54 compute-0 systemd[1]: Started libpod-conmon-5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8.scope.
Dec 13 09:21:54 compute-0 podman[407092]: 2025-12-13 09:21:54.851811824 +0000 UTC m=+0.036095550 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 09:21:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd99150eceb1e5ac1a3f0f2d3168cf6b269a5c94e89b47366e7abdf1ad0507ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 09:21:54 compute-0 podman[407092]: 2025-12-13 09:21:54.983395913 +0000 UTC m=+0.167679639 container init 5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:21:54 compute-0 podman[407092]: 2025-12-13 09:21:54.990383807 +0000 UTC m=+0.174667533 container start 5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:21:55 compute-0 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [NOTICE]   (407112) : New worker (407114) forked
Dec 13 09:21:55 compute-0 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [NOTICE]   (407112) : Loading success.
Dec 13 09:21:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3624: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:21:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:55.458 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:55.459 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:21:55.460 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:56 compute-0 ceph-mon[76537]: pgmap v3624: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:21:56 compute-0 nova_compute[248510]: 2025-12-13 09:21:56.341 248514 DEBUG nova.compute.manager [req-59c4d379-8c91-4a75-a206-23ec5f9a7655 req-aa80cafd-3507-4180-9dfb-7b102109a913 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:21:56 compute-0 nova_compute[248510]: 2025-12-13 09:21:56.342 248514 DEBUG oslo_concurrency.lockutils [req-59c4d379-8c91-4a75-a206-23ec5f9a7655 req-aa80cafd-3507-4180-9dfb-7b102109a913 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:21:56 compute-0 nova_compute[248510]: 2025-12-13 09:21:56.342 248514 DEBUG oslo_concurrency.lockutils [req-59c4d379-8c91-4a75-a206-23ec5f9a7655 req-aa80cafd-3507-4180-9dfb-7b102109a913 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:21:56 compute-0 nova_compute[248510]: 2025-12-13 09:21:56.343 248514 DEBUG oslo_concurrency.lockutils [req-59c4d379-8c91-4a75-a206-23ec5f9a7655 req-aa80cafd-3507-4180-9dfb-7b102109a913 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:21:56 compute-0 nova_compute[248510]: 2025-12-13 09:21:56.343 248514 DEBUG nova.compute.manager [req-59c4d379-8c91-4a75-a206-23ec5f9a7655 req-aa80cafd-3507-4180-9dfb-7b102109a913 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] No waiting events found dispatching network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:21:56 compute-0 nova_compute[248510]: 2025-12-13 09:21:56.343 248514 WARNING nova.compute.manager [req-59c4d379-8c91-4a75-a206-23ec5f9a7655 req-aa80cafd-3507-4180-9dfb-7b102109a913 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received unexpected event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 for instance with vm_state active and task_state None.
Dec 13 09:21:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3625: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:21:57 compute-0 nova_compute[248510]: 2025-12-13 09:21:57.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:21:58 compute-0 ceph-mon[76537]: pgmap v3625: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:21:58 compute-0 ovn_controller[148476]: 2025-12-13T09:21:58Z|01629|binding|INFO|Releasing lport f3bff06d-212e-4afb-99d4-011e9e890967 from this chassis (sb_readonly=0)
Dec 13 09:21:58 compute-0 nova_compute[248510]: 2025-12-13 09:21:58.973 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:58 compute-0 NetworkManager[50376]: <info>  [1765617718.9793] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/677)
Dec 13 09:21:58 compute-0 NetworkManager[50376]: <info>  [1765617718.9807] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/678)
Dec 13 09:21:59 compute-0 ovn_controller[148476]: 2025-12-13T09:21:59Z|01630|binding|INFO|Releasing lport f3bff06d-212e-4afb-99d4-011e9e890967 from this chassis (sb_readonly=0)
Dec 13 09:21:59 compute-0 nova_compute[248510]: 2025-12-13 09:21:59.019 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:59 compute-0 nova_compute[248510]: 2025-12-13 09:21:59.030 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:21:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3626: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 13 09:21:59 compute-0 nova_compute[248510]: 2025-12-13 09:21:59.445 248514 DEBUG nova.compute.manager [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-changed-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:21:59 compute-0 nova_compute[248510]: 2025-12-13 09:21:59.446 248514 DEBUG nova.compute.manager [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Refreshing instance network info cache due to event network-changed-2777ba55-72e3-4334-96ae-48077ed6a8d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:21:59 compute-0 nova_compute[248510]: 2025-12-13 09:21:59.446 248514 DEBUG oslo_concurrency.lockutils [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:21:59 compute-0 nova_compute[248510]: 2025-12-13 09:21:59.447 248514 DEBUG oslo_concurrency.lockutils [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:21:59 compute-0 nova_compute[248510]: 2025-12-13 09:21:59.447 248514 DEBUG nova.network.neutron [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Refreshing network info cache for port 2777ba55-72e3-4334-96ae-48077ed6a8d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:21:59 compute-0 ceph-mon[76537]: pgmap v3626: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 13 09:21:59 compute-0 nova_compute[248510]: 2025-12-13 09:21:59.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:01 compute-0 nova_compute[248510]: 2025-12-13 09:22:01.056 248514 DEBUG nova.network.neutron [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updated VIF entry in instance network info cache for port 2777ba55-72e3-4334-96ae-48077ed6a8d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:22:01 compute-0 nova_compute[248510]: 2025-12-13 09:22:01.057 248514 DEBUG nova.network.neutron [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updating instance_info_cache with network_info: [{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:22:01 compute-0 nova_compute[248510]: 2025-12-13 09:22:01.091 248514 DEBUG oslo_concurrency.lockutils [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:22:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3627: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 131 KiB/s wr, 98 op/s
Dec 13 09:22:02 compute-0 ceph-mon[76537]: pgmap v3627: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 131 KiB/s wr, 98 op/s
Dec 13 09:22:02 compute-0 nova_compute[248510]: 2025-12-13 09:22:02.942 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3628: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:22:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:22:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:22:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 183K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.76 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2584 writes, 10K keys, 2584 commit groups, 1.0 writes per commit group, ingest: 11.17 MB, 0.02 MB/s
                                           Interval WAL: 2585 writes, 1016 syncs, 2.54 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:22:04 compute-0 ceph-mon[76537]: pgmap v3628: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:22:04 compute-0 nova_compute[248510]: 2025-12-13 09:22:04.716 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3629: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:22:06 compute-0 ceph-mon[76537]: pgmap v3629: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:22:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3630: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:22:07 compute-0 nova_compute[248510]: 2025-12-13 09:22:07.945 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:22:08 compute-0 ceph-mon[76537]: pgmap v3630: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:22:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3631: 321 pgs: 321 active+clean; 92 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 828 KiB/s wr, 84 op/s
Dec 13 09:22:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:22:09
Dec 13 09:22:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:22:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:22:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'volumes']
Dec 13 09:22:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:22:09 compute-0 ceph-mon[76537]: pgmap v3631: 321 pgs: 321 active+clean; 92 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 828 KiB/s wr, 84 op/s
Dec 13 09:22:09 compute-0 nova_compute[248510]: 2025-12-13 09:22:09.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:10 compute-0 ovn_controller[148476]: 2025-12-13T09:22:10Z|00213|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4e:17:51 10.100.0.14
Dec 13 09:22:10 compute-0 ovn_controller[148476]: 2025-12-13T09:22:10Z|00214|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4e:17:51 10.100.0.14
Dec 13 09:22:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:22:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:22:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:22:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:22:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:22:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:22:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:22:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6601.8 total, 600.0 interval
                                           Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2279 writes, 9430 keys, 2279 commit groups, 1.0 writes per commit group, ingest: 11.32 MB, 0.02 MB/s
                                           Interval WAL: 2279 writes, 877 syncs, 2.60 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:22:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3632: 321 pgs: 321 active+clean; 96 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 1.1 MiB/s wr, 28 op/s
Dec 13 09:22:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:22:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:22:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:22:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:22:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:22:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:22:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:22:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:22:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:22:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:22:12 compute-0 ceph-mon[76537]: pgmap v3632: 321 pgs: 321 active+clean; 96 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 1.1 MiB/s wr, 28 op/s
Dec 13 09:22:12 compute-0 nova_compute[248510]: 2025-12-13 09:22:12.948 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3633: 321 pgs: 321 active+clean; 105 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Dec 13 09:22:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:22:13 compute-0 podman[407127]: 2025-12-13 09:22:13.979280263 +0000 UTC m=+0.060307564 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:22:14 compute-0 podman[407125]: 2025-12-13 09:22:14.001416775 +0000 UTC m=+0.090517487 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Dec 13 09:22:14 compute-0 podman[407126]: 2025-12-13 09:22:14.009767143 +0000 UTC m=+0.093827919 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:22:14 compute-0 ceph-mon[76537]: pgmap v3633: 321 pgs: 321 active+clean; 105 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Dec 13 09:22:14 compute-0 nova_compute[248510]: 2025-12-13 09:22:14.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3634: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 09:22:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:22:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/751250222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:22:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:22:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/751250222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:22:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/751250222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:22:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/751250222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:22:16 compute-0 ceph-mon[76537]: pgmap v3634: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 09:22:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3635: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 09:22:17 compute-0 nova_compute[248510]: 2025-12-13 09:22:17.950 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:18 compute-0 ceph-mon[76537]: pgmap v3635: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 09:22:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:22:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3636: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 09:22:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:22:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.2 total, 600.0 interval
                                           Cumulative writes: 37K writes, 150K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.82 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1838 writes, 7373 keys, 1838 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s
                                           Interval WAL: 1838 writes, 718 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:22:19 compute-0 nova_compute[248510]: 2025-12-13 09:22:19.762 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:20 compute-0 ceph-mon[76537]: pgmap v3636: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3637: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 1.3 MiB/s wr, 54 op/s
Dec 13 09:22:21 compute-0 ceph-mon[76537]: pgmap v3637: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 1.3 MiB/s wr, 54 op/s
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.497792) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617741497898, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1457, "num_deletes": 251, "total_data_size": 2381224, "memory_usage": 2418400, "flush_reason": "Manual Compaction"}
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617741516856, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 2327064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71414, "largest_seqno": 72870, "table_properties": {"data_size": 2320303, "index_size": 3895, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14214, "raw_average_key_size": 19, "raw_value_size": 2306742, "raw_average_value_size": 3235, "num_data_blocks": 174, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617593, "oldest_key_time": 1765617593, "file_creation_time": 1765617741, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 19104 microseconds, and 6969 cpu microseconds.
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.516913) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 2327064 bytes OK
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.516936) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.520054) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.520088) EVENT_LOG_v1 {"time_micros": 1765617741520084, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.520110) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2374806, prev total WAL file size 2374806, number of live WAL files 2.
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.520840) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(2272KB)], [170(10MB)]
Dec 13 09:22:21 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617741520920, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 13596517, "oldest_snapshot_seqno": -1}
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007738823178706089 of space, bias 1.0, pg target 0.23216469536118267 quantized to 32 (current 32)
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696852326136964 of space, bias 1.0, pg target 0.20090556978410892 quantized to 32 (current 32)
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.723939957605983e-07 of space, bias 4.0, pg target 0.0006868727949127179 quantized to 16 (current 32)
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:22:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 9076 keys, 11754775 bytes, temperature: kUnknown
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617742022704, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 11754775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11696528, "index_size": 34454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 238739, "raw_average_key_size": 26, "raw_value_size": 11537172, "raw_average_value_size": 1271, "num_data_blocks": 1329, "num_entries": 9076, "num_filter_entries": 9076, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617741, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.023029) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 11754775 bytes
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.103522) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.1 rd, 23.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 10.7 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(10.9) write-amplify(5.1) OK, records in: 9590, records dropped: 514 output_compression: NoCompression
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.103575) EVENT_LOG_v1 {"time_micros": 1765617742103553, "job": 106, "event": "compaction_finished", "compaction_time_micros": 501873, "compaction_time_cpu_micros": 35191, "output_level": 6, "num_output_files": 1, "total_output_size": 11754775, "num_input_records": 9590, "num_output_records": 9076, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617742104730, "job": 106, "event": "table_file_deletion", "file_number": 172}
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617742109133, "job": 106, "event": "table_file_deletion", "file_number": 170}
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.520732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.109244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.109252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.109256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.109259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:22:22 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.109262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:22:22 compute-0 nova_compute[248510]: 2025-12-13 09:22:22.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3638: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 277 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Dec 13 09:22:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:22:23 compute-0 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 09:22:24 compute-0 ceph-mon[76537]: pgmap v3638: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 277 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Dec 13 09:22:24 compute-0 nova_compute[248510]: 2025-12-13 09:22:24.763 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3639: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 804 KiB/s wr, 26 op/s
Dec 13 09:22:25 compute-0 ceph-mon[76537]: pgmap v3639: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 804 KiB/s wr, 26 op/s
Dec 13 09:22:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3640: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Dec 13 09:22:27 compute-0 nova_compute[248510]: 2025-12-13 09:22:27.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:22:27 compute-0 nova_compute[248510]: 2025-12-13 09:22:27.956 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:22:28 compute-0 ceph-mon[76537]: pgmap v3640: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Dec 13 09:22:28 compute-0 ovn_controller[148476]: 2025-12-13T09:22:28Z|01631|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Dec 13 09:22:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3641: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Dec 13 09:22:29 compute-0 nova_compute[248510]: 2025-12-13 09:22:29.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:29 compute-0 ceph-mon[76537]: pgmap v3641: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Dec 13 09:22:30 compute-0 nova_compute[248510]: 2025-12-13 09:22:30.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:22:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3642: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Dec 13 09:22:31 compute-0 sudo[407186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:22:31 compute-0 sudo[407186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:22:31 compute-0 sudo[407186]: pam_unix(sudo:session): session closed for user root
Dec 13 09:22:31 compute-0 sudo[407211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:22:31 compute-0 sudo[407211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:22:32 compute-0 ceph-mon[76537]: pgmap v3642: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Dec 13 09:22:32 compute-0 sudo[407211]: pam_unix(sudo:session): session closed for user root
Dec 13 09:22:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:22:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:22:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:22:32 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:22:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:22:32 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:22:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:22:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:22:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:22:32 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:22:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:22:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:22:32 compute-0 sudo[407267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:22:32 compute-0 sudo[407267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:22:32 compute-0 sudo[407267]: pam_unix(sudo:session): session closed for user root
Dec 13 09:22:32 compute-0 sudo[407292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:22:32 compute-0 sudo[407292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:22:32 compute-0 podman[407330]: 2025-12-13 09:22:32.866204679 +0000 UTC m=+0.045963096 container create 734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_feistel, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:22:32 compute-0 systemd[1]: Started libpod-conmon-734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3.scope.
Dec 13 09:22:32 compute-0 podman[407330]: 2025-12-13 09:22:32.847189935 +0000 UTC m=+0.026948332 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:22:32 compute-0 nova_compute[248510]: 2025-12-13 09:22:32.959 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:22:32 compute-0 podman[407330]: 2025-12-13 09:22:32.99268013 +0000 UTC m=+0.172438527 container init 734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_feistel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 09:22:33 compute-0 podman[407330]: 2025-12-13 09:22:33.011447928 +0000 UTC m=+0.191206305 container start 734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 09:22:33 compute-0 podman[407330]: 2025-12-13 09:22:33.016958775 +0000 UTC m=+0.196717152 container attach 734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 09:22:33 compute-0 determined_feistel[407346]: 167 167
Dec 13 09:22:33 compute-0 systemd[1]: libpod-734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3.scope: Deactivated successfully.
Dec 13 09:22:33 compute-0 podman[407330]: 2025-12-13 09:22:33.020006291 +0000 UTC m=+0.199764688 container died 734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:22:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-41bc9a6af237de3cc9692ee091f973a65e1f9ad6087bf9e57a552d7a5fac2faa-merged.mount: Deactivated successfully.
Dec 13 09:22:33 compute-0 podman[407330]: 2025-12-13 09:22:33.081469163 +0000 UTC m=+0.261227580 container remove 734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_feistel, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:22:33 compute-0 systemd[1]: libpod-conmon-734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3.scope: Deactivated successfully.
Dec 13 09:22:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3643: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 13 09:22:33 compute-0 podman[407369]: 2025-12-13 09:22:33.253600852 +0000 UTC m=+0.033956947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:22:33 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:22:33 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:22:33 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:22:33 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:22:33 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:22:33 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:22:33 compute-0 podman[407369]: 2025-12-13 09:22:33.365631913 +0000 UTC m=+0.145987948 container create a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_raman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:22:33 compute-0 systemd[1]: Started libpod-conmon-a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad.scope.
Dec 13 09:22:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:22:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28455591d463506f4fed3fb7a30303a5c3f3ca5ab3ede909450d32674d884d73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28455591d463506f4fed3fb7a30303a5c3f3ca5ab3ede909450d32674d884d73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28455591d463506f4fed3fb7a30303a5c3f3ca5ab3ede909450d32674d884d73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28455591d463506f4fed3fb7a30303a5c3f3ca5ab3ede909450d32674d884d73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28455591d463506f4fed3fb7a30303a5c3f3ca5ab3ede909450d32674d884d73/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:33 compute-0 podman[407369]: 2025-12-13 09:22:33.483585923 +0000 UTC m=+0.263941988 container init a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_raman, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:22:33 compute-0 podman[407369]: 2025-12-13 09:22:33.495001797 +0000 UTC m=+0.275357832 container start a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 09:22:33 compute-0 podman[407369]: 2025-12-13 09:22:33.500254758 +0000 UTC m=+0.280610833 container attach a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:22:34 compute-0 amazing_raman[407386]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:22:34 compute-0 amazing_raman[407386]: --> All data devices are unavailable
Dec 13 09:22:34 compute-0 systemd[1]: libpod-a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad.scope: Deactivated successfully.
Dec 13 09:22:34 compute-0 conmon[407386]: conmon a84ab27a0f80ce2329a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad.scope/container/memory.events
Dec 13 09:22:34 compute-0 podman[407369]: 2025-12-13 09:22:34.137790484 +0000 UTC m=+0.918146519 container died a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_raman, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 09:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-28455591d463506f4fed3fb7a30303a5c3f3ca5ab3ede909450d32674d884d73-merged.mount: Deactivated successfully.
Dec 13 09:22:34 compute-0 podman[407369]: 2025-12-13 09:22:34.189847751 +0000 UTC m=+0.970203836 container remove a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_raman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:22:34 compute-0 systemd[1]: libpod-conmon-a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad.scope: Deactivated successfully.
Dec 13 09:22:34 compute-0 sudo[407292]: pam_unix(sudo:session): session closed for user root
Dec 13 09:22:34 compute-0 sudo[407416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:22:34 compute-0 sudo[407416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:22:34 compute-0 sudo[407416]: pam_unix(sudo:session): session closed for user root
Dec 13 09:22:34 compute-0 ceph-mon[76537]: pgmap v3643: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 13 09:22:34 compute-0 sudo[407441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:22:34 compute-0 sudo[407441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:22:34 compute-0 podman[407478]: 2025-12-13 09:22:34.717868988 +0000 UTC m=+0.047284530 container create 3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Dec 13 09:22:34 compute-0 systemd[1]: Started libpod-conmon-3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b.scope.
Dec 13 09:22:34 compute-0 nova_compute[248510]: 2025-12-13 09:22:34.767 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:34 compute-0 podman[407478]: 2025-12-13 09:22:34.698839363 +0000 UTC m=+0.028254915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:22:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:22:34 compute-0 podman[407478]: 2025-12-13 09:22:34.823537021 +0000 UTC m=+0.152952613 container init 3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 09:22:34 compute-0 podman[407478]: 2025-12-13 09:22:34.831789436 +0000 UTC m=+0.161204948 container start 3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_khayyam, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 09:22:34 compute-0 podman[407478]: 2025-12-13 09:22:34.836295648 +0000 UTC m=+0.165711220 container attach 3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:22:34 compute-0 sleepy_khayyam[407494]: 167 167
Dec 13 09:22:34 compute-0 systemd[1]: libpod-3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b.scope: Deactivated successfully.
Dec 13 09:22:34 compute-0 podman[407478]: 2025-12-13 09:22:34.840142294 +0000 UTC m=+0.169557836 container died 3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 09:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fc349560b66e4d3dd28cd7f2106bd4109e1bd5051afdfa816233a4440af3c60-merged.mount: Deactivated successfully.
Dec 13 09:22:34 compute-0 podman[407478]: 2025-12-13 09:22:34.883553216 +0000 UTC m=+0.212968768 container remove 3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:22:34 compute-0 systemd[1]: libpod-conmon-3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b.scope: Deactivated successfully.
Dec 13 09:22:35 compute-0 podman[407519]: 2025-12-13 09:22:35.106692675 +0000 UTC m=+0.060788326 container create 1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:22:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3644: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 13 09:22:35 compute-0 systemd[1]: Started libpod-conmon-1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d.scope.
Dec 13 09:22:35 compute-0 podman[407519]: 2025-12-13 09:22:35.075874827 +0000 UTC m=+0.029970548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:22:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f8d0bdb212c71d90ba9bc79050e18b276566ba9bbb7057d21b7ce540fb9f36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f8d0bdb212c71d90ba9bc79050e18b276566ba9bbb7057d21b7ce540fb9f36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f8d0bdb212c71d90ba9bc79050e18b276566ba9bbb7057d21b7ce540fb9f36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f8d0bdb212c71d90ba9bc79050e18b276566ba9bbb7057d21b7ce540fb9f36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:35 compute-0 podman[407519]: 2025-12-13 09:22:35.203778294 +0000 UTC m=+0.157873935 container init 1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:22:35 compute-0 podman[407519]: 2025-12-13 09:22:35.21687168 +0000 UTC m=+0.170967321 container start 1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:22:35 compute-0 podman[407519]: 2025-12-13 09:22:35.220729596 +0000 UTC m=+0.174825247 container attach 1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_babbage, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:22:35 compute-0 happy_babbage[407537]: {
Dec 13 09:22:35 compute-0 happy_babbage[407537]:     "0": [
Dec 13 09:22:35 compute-0 happy_babbage[407537]:         {
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "devices": [
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "/dev/loop3"
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             ],
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_name": "ceph_lv0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_size": "21470642176",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "name": "ceph_lv0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "tags": {
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.cluster_name": "ceph",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.crush_device_class": "",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.encrypted": "0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.objectstore": "bluestore",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.osd_id": "0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.type": "block",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.vdo": "0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.with_tpm": "0"
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             },
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "type": "block",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "vg_name": "ceph_vg0"
Dec 13 09:22:35 compute-0 happy_babbage[407537]:         }
Dec 13 09:22:35 compute-0 happy_babbage[407537]:     ],
Dec 13 09:22:35 compute-0 happy_babbage[407537]:     "1": [
Dec 13 09:22:35 compute-0 happy_babbage[407537]:         {
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "devices": [
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "/dev/loop4"
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             ],
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_name": "ceph_lv1",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_size": "21470642176",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "name": "ceph_lv1",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "tags": {
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.cluster_name": "ceph",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.crush_device_class": "",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.encrypted": "0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.objectstore": "bluestore",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.osd_id": "1",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.type": "block",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.vdo": "0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.with_tpm": "0"
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             },
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "type": "block",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "vg_name": "ceph_vg1"
Dec 13 09:22:35 compute-0 happy_babbage[407537]:         }
Dec 13 09:22:35 compute-0 happy_babbage[407537]:     ],
Dec 13 09:22:35 compute-0 happy_babbage[407537]:     "2": [
Dec 13 09:22:35 compute-0 happy_babbage[407537]:         {
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "devices": [
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "/dev/loop5"
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             ],
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_name": "ceph_lv2",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_size": "21470642176",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "name": "ceph_lv2",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "tags": {
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.cluster_name": "ceph",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.crush_device_class": "",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.encrypted": "0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.objectstore": "bluestore",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.osd_id": "2",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.type": "block",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.vdo": "0",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:                 "ceph.with_tpm": "0"
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             },
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "type": "block",
Dec 13 09:22:35 compute-0 happy_babbage[407537]:             "vg_name": "ceph_vg2"
Dec 13 09:22:35 compute-0 happy_babbage[407537]:         }
Dec 13 09:22:35 compute-0 happy_babbage[407537]:     ]
Dec 13 09:22:35 compute-0 happy_babbage[407537]: }
Dec 13 09:22:35 compute-0 systemd[1]: libpod-1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d.scope: Deactivated successfully.
Dec 13 09:22:35 compute-0 podman[407519]: 2025-12-13 09:22:35.562781889 +0000 UTC m=+0.516877580 container died 1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Dec 13 09:22:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-26f8d0bdb212c71d90ba9bc79050e18b276566ba9bbb7057d21b7ce540fb9f36-merged.mount: Deactivated successfully.
Dec 13 09:22:35 compute-0 podman[407519]: 2025-12-13 09:22:35.611553715 +0000 UTC m=+0.565649356 container remove 1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:22:35 compute-0 systemd[1]: libpod-conmon-1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d.scope: Deactivated successfully.
Dec 13 09:22:35 compute-0 sudo[407441]: pam_unix(sudo:session): session closed for user root
Dec 13 09:22:35 compute-0 sudo[407559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:22:35 compute-0 sudo[407559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:22:35 compute-0 sudo[407559]: pam_unix(sudo:session): session closed for user root
Dec 13 09:22:35 compute-0 nova_compute[248510]: 2025-12-13 09:22:35.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:22:35 compute-0 nova_compute[248510]: 2025-12-13 09:22:35.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:22:35 compute-0 nova_compute[248510]: 2025-12-13 09:22:35.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:22:35 compute-0 sudo[407584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:22:35 compute-0 sudo[407584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:22:36 compute-0 podman[407621]: 2025-12-13 09:22:36.154974995 +0000 UTC m=+0.058139489 container create e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:22:36 compute-0 systemd[1]: Started libpod-conmon-e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966.scope.
Dec 13 09:22:36 compute-0 podman[407621]: 2025-12-13 09:22:36.125115421 +0000 UTC m=+0.028279955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:22:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:22:36 compute-0 podman[407621]: 2025-12-13 09:22:36.251917641 +0000 UTC m=+0.155082095 container init e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 09:22:36 compute-0 podman[407621]: 2025-12-13 09:22:36.259472629 +0000 UTC m=+0.162637083 container start e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 09:22:36 compute-0 podman[407621]: 2025-12-13 09:22:36.263014607 +0000 UTC m=+0.166179081 container attach e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 09:22:36 compute-0 unruffled_diffie[407637]: 167 167
Dec 13 09:22:36 compute-0 systemd[1]: libpod-e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966.scope: Deactivated successfully.
Dec 13 09:22:36 compute-0 podman[407621]: 2025-12-13 09:22:36.267172461 +0000 UTC m=+0.170336925 container died e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:22:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-726c4799ddc752383ab68694b782979c5a50b1a68a9254de88c03a40375051fa-merged.mount: Deactivated successfully.
Dec 13 09:22:36 compute-0 podman[407621]: 2025-12-13 09:22:36.311364242 +0000 UTC m=+0.214528706 container remove e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:22:36 compute-0 systemd[1]: libpod-conmon-e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966.scope: Deactivated successfully.
Dec 13 09:22:36 compute-0 ceph-mon[76537]: pgmap v3644: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 13 09:22:36 compute-0 sshd-session[407531]: Invalid user algorand from 80.94.92.165 port 34096
Dec 13 09:22:36 compute-0 nova_compute[248510]: 2025-12-13 09:22:36.461 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:22:36 compute-0 nova_compute[248510]: 2025-12-13 09:22:36.461 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:22:36 compute-0 nova_compute[248510]: 2025-12-13 09:22:36.461 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 09:22:36 compute-0 nova_compute[248510]: 2025-12-13 09:22:36.461 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1437fc03-8f31-440f-8928-2fe388a22bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:22:36 compute-0 podman[407660]: 2025-12-13 09:22:36.559502605 +0000 UTC m=+0.064058397 container create ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 09:22:36 compute-0 systemd[1]: Started libpod-conmon-ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92.scope.
Dec 13 09:22:36 compute-0 podman[407660]: 2025-12-13 09:22:36.533962909 +0000 UTC m=+0.038518751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:22:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c184ed9048e2ea0f8192a1cb5b7231307a053db43e3bfbbd86f085f994137a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c184ed9048e2ea0f8192a1cb5b7231307a053db43e3bfbbd86f085f994137a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c184ed9048e2ea0f8192a1cb5b7231307a053db43e3bfbbd86f085f994137a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c184ed9048e2ea0f8192a1cb5b7231307a053db43e3bfbbd86f085f994137a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:22:36 compute-0 podman[407660]: 2025-12-13 09:22:36.653285512 +0000 UTC m=+0.157841344 container init ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Dec 13 09:22:36 compute-0 podman[407660]: 2025-12-13 09:22:36.665671961 +0000 UTC m=+0.170227753 container start ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:22:36 compute-0 podman[407660]: 2025-12-13 09:22:36.669445725 +0000 UTC m=+0.174001517 container attach ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:22:36 compute-0 sshd-session[407531]: Connection closed by invalid user algorand 80.94.92.165 port 34096 [preauth]
Dec 13 09:22:36 compute-0 nova_compute[248510]: 2025-12-13 09:22:36.898 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:22:36 compute-0 nova_compute[248510]: 2025-12-13 09:22:36.900 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:22:36 compute-0 nova_compute[248510]: 2025-12-13 09:22:36.919 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.007 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.008 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.017 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.017 248514 INFO nova.compute.claims [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:22:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3645: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.122 248514 DEBUG nova.scheduler.client.report [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.154 248514 DEBUG nova.scheduler.client.report [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.154 248514 DEBUG nova.compute.provider_tree [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.177 248514 DEBUG nova.scheduler.client.report [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.210 248514 DEBUG nova.scheduler.client.report [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.266 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:22:37 compute-0 lvm[407757]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:22:37 compute-0 lvm[407757]: VG ceph_vg1 finished
Dec 13 09:22:37 compute-0 lvm[407760]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:22:37 compute-0 lvm[407760]: VG ceph_vg2 finished
Dec 13 09:22:37 compute-0 lvm[407756]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:22:37 compute-0 lvm[407756]: VG ceph_vg0 finished
Dec 13 09:22:37 compute-0 gracious_meninsky[407677]: {}
Dec 13 09:22:37 compute-0 systemd[1]: libpod-ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92.scope: Deactivated successfully.
Dec 13 09:22:37 compute-0 systemd[1]: libpod-ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92.scope: Consumed 1.433s CPU time.
Dec 13 09:22:37 compute-0 podman[407660]: 2025-12-13 09:22:37.565216645 +0000 UTC m=+1.069772437 container died ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:22:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-30c184ed9048e2ea0f8192a1cb5b7231307a053db43e3bfbbd86f085f994137a-merged.mount: Deactivated successfully.
Dec 13 09:22:37 compute-0 podman[407660]: 2025-12-13 09:22:37.608416701 +0000 UTC m=+1.112972483 container remove ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 09:22:37 compute-0 systemd[1]: libpod-conmon-ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92.scope: Deactivated successfully.
Dec 13 09:22:37 compute-0 sudo[407584]: pam_unix(sudo:session): session closed for user root
Dec 13 09:22:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:22:37 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:22:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:22:37 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:22:37 compute-0 sudo[407791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:22:37 compute-0 sudo[407791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:22:37 compute-0 sudo[407791]: pam_unix(sudo:session): session closed for user root
Dec 13 09:22:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:22:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4232831819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.866 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.876 248514 DEBUG nova.compute.provider_tree [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.900 248514 DEBUG nova.scheduler.client.report [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.933 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.934 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.966 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.988 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:22:37 compute-0 nova_compute[248510]: 2025-12-13 09:22:37.989 248514 DEBUG nova.network.neutron [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.011 248514 INFO nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.030 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.126 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.127 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.128 248514 INFO nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Creating image(s)
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.150 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.176 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.203 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.208 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.310 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.312 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.312 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.313 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.338 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.344 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:22:38 compute-0 ceph-mon[76537]: pgmap v3645: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 13 09:22:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:22:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:22:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4232831819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:22:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.533 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updating instance_info_cache with network_info: [{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.539 248514 DEBUG nova.policy [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.556 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.557 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.558 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.712 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.798 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.911 248514 DEBUG nova.objects.instance [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f12ff6b-a944-402c-9e58-ee4338d7eca4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.929 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.930 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Ensure instance console log exists: /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.931 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.932 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:22:38 compute-0 nova_compute[248510]: 2025-12-13 09:22:38.932 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:22:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3646: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 170 B/s wr, 2 op/s
Dec 13 09:22:39 compute-0 nova_compute[248510]: 2025-12-13 09:22:39.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:39 compute-0 nova_compute[248510]: 2025-12-13 09:22:39.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:22:39 compute-0 nova_compute[248510]: 2025-12-13 09:22:39.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:22:39 compute-0 nova_compute[248510]: 2025-12-13 09:22:39.816 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:22:39 compute-0 nova_compute[248510]: 2025-12-13 09:22:39.817 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:22:39 compute-0 nova_compute[248510]: 2025-12-13 09:22:39.817 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:22:39 compute-0 nova_compute[248510]: 2025-12-13 09:22:39.817 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:22:39 compute-0 nova_compute[248510]: 2025-12-13 09:22:39.818 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.019 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:40.020 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:22:40 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:40.021 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:22:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:22:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:22:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:22:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:22:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:22:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:22:40 compute-0 ceph-mon[76537]: pgmap v3646: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 170 B/s wr, 2 op/s
Dec 13 09:22:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:22:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2624575856' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.455 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.535 248514 DEBUG nova.network.neutron [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Successfully created port: f4743e20-bc79-4ca1-87e3-c94183b9a23f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.554 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.554 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.709 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.710 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3187MB free_disk=59.94185793865472GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.710 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.710 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.785 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 1437fc03-8f31-440f-8928-2fe388a22bbe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.786 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 7f12ff6b-a944-402c-9e58-ee4338d7eca4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.786 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.786 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:22:40 compute-0 nova_compute[248510]: 2025-12-13 09:22:40.846 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:22:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3647: 321 pgs: 321 active+clean; 130 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 503 KiB/s wr, 16 op/s
Dec 13 09:22:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2624575856' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:22:41 compute-0 ceph-mon[76537]: pgmap v3647: 321 pgs: 321 active+clean; 130 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 503 KiB/s wr, 16 op/s
Dec 13 09:22:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:22:41 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/196965802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:22:41 compute-0 nova_compute[248510]: 2025-12-13 09:22:41.451 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:22:41 compute-0 nova_compute[248510]: 2025-12-13 09:22:41.460 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:22:41 compute-0 nova_compute[248510]: 2025-12-13 09:22:41.485 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:22:41 compute-0 nova_compute[248510]: 2025-12-13 09:22:41.514 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:22:41 compute-0 nova_compute[248510]: 2025-12-13 09:22:41.515 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:22:41 compute-0 nova_compute[248510]: 2025-12-13 09:22:41.841 248514 DEBUG nova.network.neutron [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Successfully updated port: f4743e20-bc79-4ca1-87e3-c94183b9a23f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:22:41 compute-0 nova_compute[248510]: 2025-12-13 09:22:41.869 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:22:41 compute-0 nova_compute[248510]: 2025-12-13 09:22:41.870 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:22:41 compute-0 nova_compute[248510]: 2025-12-13 09:22:41.870 248514 DEBUG nova.network.neutron [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:22:41 compute-0 nova_compute[248510]: 2025-12-13 09:22:41.918 248514 DEBUG nova.compute.manager [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-changed-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:22:41 compute-0 nova_compute[248510]: 2025-12-13 09:22:41.918 248514 DEBUG nova.compute.manager [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Refreshing instance network info cache due to event network-changed-f4743e20-bc79-4ca1-87e3-c94183b9a23f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:22:41 compute-0 nova_compute[248510]: 2025-12-13 09:22:41.919 248514 DEBUG oslo_concurrency.lockutils [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:22:42 compute-0 nova_compute[248510]: 2025-12-13 09:22:42.025 248514 DEBUG nova.network.neutron [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:22:42 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/196965802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:22:42 compute-0 nova_compute[248510]: 2025-12-13 09:22:42.516 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:22:42 compute-0 nova_compute[248510]: 2025-12-13 09:22:42.516 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:22:42 compute-0 nova_compute[248510]: 2025-12-13 09:22:42.517 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:22:42 compute-0 nova_compute[248510]: 2025-12-13 09:22:42.970 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3648: 321 pgs: 321 active+clean; 155 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 18 op/s
Dec 13 09:22:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:22:43 compute-0 ceph-mon[76537]: pgmap v3648: 321 pgs: 321 active+clean; 155 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 18 op/s
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.472 248514 DEBUG nova.network.neutron [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updating instance_info_cache with network_info: [{"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.494 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.495 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Instance network_info: |[{"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.496 248514 DEBUG oslo_concurrency.lockutils [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.497 248514 DEBUG nova.network.neutron [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Refreshing network info cache for port f4743e20-bc79-4ca1-87e3-c94183b9a23f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.505 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Start _get_guest_xml network_info=[{"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.513 248514 WARNING nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.527 248514 DEBUG nova.virt.libvirt.host [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.528 248514 DEBUG nova.virt.libvirt.host [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.531 248514 DEBUG nova.virt.libvirt.host [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.532 248514 DEBUG nova.virt.libvirt.host [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.533 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.533 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.534 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.534 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.534 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.535 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.535 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.535 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.536 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.536 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.536 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.537 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:22:43 compute-0 nova_compute[248510]: 2025-12-13 09:22:43.541 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:22:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:22:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1575609293' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.134 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.164 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.171 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:22:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1575609293' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:22:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:22:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230898467' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.783 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.785 248514 DEBUG nova.virt.libvirt.vif [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1374339452',display_name='tempest-TestGettingAddress-server-1374339452',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1374339452',id=152,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPtRBq+IYkmbvmIJh/QIOTOUFFBD8bcvC1qbRw2zjdn4nFUb754ilWe6cE2YmyI9CIp/UsSOGtzaJ1FH6v8ozWcGolUeM9n52RqDlU5UB2iwMcylMROMdWHIoAhEPM94oA==',key_name='tempest-TestGettingAddress-343990024',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-330yrx20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:22:38Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=7f12ff6b-a944-402c-9e58-ee4338d7eca4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.785 248514 DEBUG nova.network.os_vif_util [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.786 248514 DEBUG nova.network.os_vif_util [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.787 248514 DEBUG nova.objects.instance [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f12ff6b-a944-402c-9e58-ee4338d7eca4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.806 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:22:44 compute-0 nova_compute[248510]:   <uuid>7f12ff6b-a944-402c-9e58-ee4338d7eca4</uuid>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   <name>instance-00000098</name>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <nova:name>tempest-TestGettingAddress-server-1374339452</nova:name>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:22:43</nova:creationTime>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:22:44 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:22:44 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:22:44 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:22:44 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:22:44 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:22:44 compute-0 nova_compute[248510]:         <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 09:22:44 compute-0 nova_compute[248510]:         <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:22:44 compute-0 nova_compute[248510]:         <nova:port uuid="f4743e20-bc79-4ca1-87e3-c94183b9a23f">
Dec 13 09:22:44 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:febd:a8f9" ipVersion="6"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <system>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <entry name="serial">7f12ff6b-a944-402c-9e58-ee4338d7eca4</entry>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <entry name="uuid">7f12ff6b-a944-402c-9e58-ee4338d7eca4</entry>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     </system>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   <os>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   </os>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   <features>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   </features>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk">
Dec 13 09:22:44 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       </source>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:22:44 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk.config">
Dec 13 09:22:44 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       </source>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:22:44 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:bd:a8:f9"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <target dev="tapf4743e20-bc"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/console.log" append="off"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <video>
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     </video>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:22:44 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:22:44 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:22:44 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:22:44 compute-0 nova_compute[248510]: </domain>
Dec 13 09:22:44 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.807 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Preparing to wait for external event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.808 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.808 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.808 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.809 248514 DEBUG nova.virt.libvirt.vif [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1374339452',display_name='tempest-TestGettingAddress-server-1374339452',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1374339452',id=152,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPtRBq+IYkmbvmIJh/QIOTOUFFBD8bcvC1qbRw2zjdn4nFUb754ilWe6cE2YmyI9CIp/UsSOGtzaJ1FH6v8ozWcGolUeM9n52RqDlU5UB2iwMcylMROMdWHIoAhEPM94oA==',key_name='tempest-TestGettingAddress-343990024',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-330yrx20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:22:38Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=7f12ff6b-a944-402c-9e58-ee4338d7eca4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.810 248514 DEBUG nova.network.os_vif_util [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.811 248514 DEBUG nova.network.os_vif_util [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.811 248514 DEBUG os_vif [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.812 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.813 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.813 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.818 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.818 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4743e20-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.819 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4743e20-bc, col_values=(('external_ids', {'iface-id': 'f4743e20-bc79-4ca1-87e3-c94183b9a23f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:a8:f9', 'vm-uuid': '7f12ff6b-a944-402c-9e58-ee4338d7eca4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:22:44 compute-0 NetworkManager[50376]: <info>  [1765617764.8228] manager: (tapf4743e20-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/679)
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.826 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.833 248514 INFO os_vif [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc')
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.891 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.892 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.893 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:bd:a8:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.894 248514 INFO nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Using config drive
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.928 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.935 248514 DEBUG nova.network.neutron [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updated VIF entry in instance network info cache for port f4743e20-bc79-4ca1-87e3-c94183b9a23f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.936 248514 DEBUG nova.network.neutron [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updating instance_info_cache with network_info: [{"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:22:44 compute-0 nova_compute[248510]: 2025-12-13 09:22:44.967 248514 DEBUG oslo_concurrency.lockutils [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:22:44 compute-0 podman[408095]: 2025-12-13 09:22:44.982003029 +0000 UTC m=+0.060847187 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 09:22:44 compute-0 podman[408094]: 2025-12-13 09:22:44.994129851 +0000 UTC m=+0.072249461 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:22:45 compute-0 podman[408093]: 2025-12-13 09:22:45.036940888 +0000 UTC m=+0.115014157 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:22:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3649: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:22:45 compute-0 nova_compute[248510]: 2025-12-13 09:22:45.265 248514 INFO nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Creating config drive at /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/disk.config
Dec 13 09:22:45 compute-0 nova_compute[248510]: 2025-12-13 09:22:45.274 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi1c9uy30 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:22:45 compute-0 nova_compute[248510]: 2025-12-13 09:22:45.431 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi1c9uy30" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:22:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3230898467' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:22:45 compute-0 ceph-mon[76537]: pgmap v3649: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:22:45 compute-0 nova_compute[248510]: 2025-12-13 09:22:45.482 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:22:45 compute-0 nova_compute[248510]: 2025-12-13 09:22:45.488 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/disk.config 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:22:45 compute-0 nova_compute[248510]: 2025-12-13 09:22:45.671 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/disk.config 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:22:45 compute-0 nova_compute[248510]: 2025-12-13 09:22:45.672 248514 INFO nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Deleting local config drive /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/disk.config because it was imported into RBD.
Dec 13 09:22:45 compute-0 kernel: tapf4743e20-bc: entered promiscuous mode
Dec 13 09:22:45 compute-0 NetworkManager[50376]: <info>  [1765617765.7511] manager: (tapf4743e20-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/680)
Dec 13 09:22:45 compute-0 ovn_controller[148476]: 2025-12-13T09:22:45Z|01632|binding|INFO|Claiming lport f4743e20-bc79-4ca1-87e3-c94183b9a23f for this chassis.
Dec 13 09:22:45 compute-0 ovn_controller[148476]: 2025-12-13T09:22:45Z|01633|binding|INFO|f4743e20-bc79-4ca1-87e3-c94183b9a23f: Claiming fa:16:3e:bd:a8:f9 10.100.0.13 2001:db8::f816:3eff:febd:a8f9
Dec 13 09:22:45 compute-0 nova_compute[248510]: 2025-12-13 09:22:45.753 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.761 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:a8:f9 10.100.0.13 2001:db8::f816:3eff:febd:a8f9'], port_security=['fa:16:3e:bd:a8:f9 10.100.0.13 2001:db8::f816:3eff:febd:a8f9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:febd:a8f9/64', 'neutron:device_id': '7f12ff6b-a944-402c-9e58-ee4338d7eca4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd4a89b2-79c1-411e-b3a6-9349b772360d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=681ece70-9463-4a56-a445-6b0b679ff248, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f4743e20-bc79-4ca1-87e3-c94183b9a23f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.763 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f4743e20-bc79-4ca1-87e3-c94183b9a23f in datapath e4b9bd04-ada7-4867-9918-3cd5d21d273d bound to our chassis
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.765 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4b9bd04-ada7-4867-9918-3cd5d21d273d
Dec 13 09:22:45 compute-0 ovn_controller[148476]: 2025-12-13T09:22:45Z|01634|binding|INFO|Setting lport f4743e20-bc79-4ca1-87e3-c94183b9a23f ovn-installed in OVS
Dec 13 09:22:45 compute-0 ovn_controller[148476]: 2025-12-13T09:22:45Z|01635|binding|INFO|Setting lport f4743e20-bc79-4ca1-87e3-c94183b9a23f up in Southbound
Dec 13 09:22:45 compute-0 nova_compute[248510]: 2025-12-13 09:22:45.770 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.784 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[92c020c0-c109-4373-93cd-d6badd13f111]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:22:45 compute-0 systemd-udevd[408225]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:22:45 compute-0 systemd-machined[210538]: New machine qemu-183-instance-00000098.
Dec 13 09:22:45 compute-0 NetworkManager[50376]: <info>  [1765617765.8104] device (tapf4743e20-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:22:45 compute-0 NetworkManager[50376]: <info>  [1765617765.8112] device (tapf4743e20-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:22:45 compute-0 systemd[1]: Started Virtual Machine qemu-183-instance-00000098.
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.824 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6d819270-bd26-4bb7-adb3-23f58edb66aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.827 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3f7284f0-32a8-45ad-b6f3-93ed239c5dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.859 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2a405386-1fce-47df-aa12-89bd0372735c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.880 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4698f4a6-142e-4900-adb2-552cfcd58746]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4b9bd04-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:66:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1029136, 'reachable_time': 27101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408238, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.899 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5781a5f-5cee-4947-89f7-4b87ec4af849]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape4b9bd04-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1029153, 'tstamp': 1029153}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408240, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape4b9bd04-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1029158, 'tstamp': 1029158}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408240, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.901 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4b9bd04-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:22:45 compute-0 nova_compute[248510]: 2025-12-13 09:22:45.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:45 compute-0 nova_compute[248510]: 2025-12-13 09:22:45.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.906 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4b9bd04-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.907 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.907 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4b9bd04-a0, col_values=(('external_ids', {'iface-id': 'f3bff06d-212e-4afb-99d4-011e9e890967'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:22:45 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.908 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.218 248514 DEBUG nova.compute.manager [req-1f6ec00a-5c75-4257-b26a-efa61839da92 req-a099a50a-1e38-4dc7-a81b-8ac9858f4585 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.219 248514 DEBUG oslo_concurrency.lockutils [req-1f6ec00a-5c75-4257-b26a-efa61839da92 req-a099a50a-1e38-4dc7-a81b-8ac9858f4585 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.220 248514 DEBUG oslo_concurrency.lockutils [req-1f6ec00a-5c75-4257-b26a-efa61839da92 req-a099a50a-1e38-4dc7-a81b-8ac9858f4585 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.220 248514 DEBUG oslo_concurrency.lockutils [req-1f6ec00a-5c75-4257-b26a-efa61839da92 req-a099a50a-1e38-4dc7-a81b-8ac9858f4585 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.220 248514 DEBUG nova.compute.manager [req-1f6ec00a-5c75-4257-b26a-efa61839da92 req-a099a50a-1e38-4dc7-a81b-8ac9858f4585 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Processing event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.645 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617766.6445847, 7f12ff6b-a944-402c-9e58-ee4338d7eca4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.646 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] VM Started (Lifecycle Event)
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.652 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.670 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.677 248514 INFO nova.virt.libvirt.driver [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Instance spawned successfully.
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.677 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.719 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.731 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.733 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.734 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.734 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.735 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.736 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.745 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.788 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.789 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617766.6499858, 7f12ff6b-a944-402c-9e58-ee4338d7eca4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.789 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] VM Paused (Lifecycle Event)
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.839 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.845 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617766.656597, 7f12ff6b-a944-402c-9e58-ee4338d7eca4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.845 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] VM Resumed (Lifecycle Event)
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.868 248514 INFO nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Took 8.74 seconds to spawn the instance on the hypervisor.
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.869 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.909 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.916 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.958 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:22:46 compute-0 nova_compute[248510]: 2025-12-13 09:22:46.987 248514 INFO nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Took 10.01 seconds to build instance.
Dec 13 09:22:47 compute-0 nova_compute[248510]: 2025-12-13 09:22:47.006 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:22:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:47.025 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:22:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3650: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:22:48 compute-0 ceph-mon[76537]: pgmap v3650: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:22:48 compute-0 nova_compute[248510]: 2025-12-13 09:22:48.320 248514 DEBUG nova.compute.manager [req-eb3efb06-cc0e-4d8d-93cb-ee1913ee3426 req-6dfd71a8-9672-4223-97af-716242ecbf36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:22:48 compute-0 nova_compute[248510]: 2025-12-13 09:22:48.321 248514 DEBUG oslo_concurrency.lockutils [req-eb3efb06-cc0e-4d8d-93cb-ee1913ee3426 req-6dfd71a8-9672-4223-97af-716242ecbf36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:22:48 compute-0 nova_compute[248510]: 2025-12-13 09:22:48.321 248514 DEBUG oslo_concurrency.lockutils [req-eb3efb06-cc0e-4d8d-93cb-ee1913ee3426 req-6dfd71a8-9672-4223-97af-716242ecbf36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:22:48 compute-0 nova_compute[248510]: 2025-12-13 09:22:48.322 248514 DEBUG oslo_concurrency.lockutils [req-eb3efb06-cc0e-4d8d-93cb-ee1913ee3426 req-6dfd71a8-9672-4223-97af-716242ecbf36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:22:48 compute-0 nova_compute[248510]: 2025-12-13 09:22:48.322 248514 DEBUG nova.compute.manager [req-eb3efb06-cc0e-4d8d-93cb-ee1913ee3426 req-6dfd71a8-9672-4223-97af-716242ecbf36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] No waiting events found dispatching network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:22:48 compute-0 nova_compute[248510]: 2025-12-13 09:22:48.323 248514 WARNING nova.compute.manager [req-eb3efb06-cc0e-4d8d-93cb-ee1913ee3426 req-6dfd71a8-9672-4223-97af-716242ecbf36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received unexpected event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f for instance with vm_state active and task_state None.
Dec 13 09:22:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:22:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3651: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Dec 13 09:22:49 compute-0 nova_compute[248510]: 2025-12-13 09:22:49.774 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:49 compute-0 nova_compute[248510]: 2025-12-13 09:22:49.821 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:50 compute-0 ceph-mon[76537]: pgmap v3651: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Dec 13 09:22:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3652: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Dec 13 09:22:52 compute-0 ceph-mon[76537]: pgmap v3652: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Dec 13 09:22:52 compute-0 nova_compute[248510]: 2025-12-13 09:22:52.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:22:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3653: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 85 op/s
Dec 13 09:22:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:22:53 compute-0 nova_compute[248510]: 2025-12-13 09:22:53.433 248514 DEBUG nova.compute.manager [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-changed-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:22:53 compute-0 nova_compute[248510]: 2025-12-13 09:22:53.433 248514 DEBUG nova.compute.manager [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Refreshing instance network info cache due to event network-changed-f4743e20-bc79-4ca1-87e3-c94183b9a23f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:22:53 compute-0 nova_compute[248510]: 2025-12-13 09:22:53.433 248514 DEBUG oslo_concurrency.lockutils [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:22:53 compute-0 nova_compute[248510]: 2025-12-13 09:22:53.434 248514 DEBUG oslo_concurrency.lockutils [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:22:53 compute-0 nova_compute[248510]: 2025-12-13 09:22:53.434 248514 DEBUG nova.network.neutron [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Refreshing network info cache for port f4743e20-bc79-4ca1-87e3-c94183b9a23f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:22:54 compute-0 ceph-mon[76537]: pgmap v3653: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 85 op/s
Dec 13 09:22:54 compute-0 nova_compute[248510]: 2025-12-13 09:22:54.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:54 compute-0 nova_compute[248510]: 2025-12-13 09:22:54.823 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:22:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3654: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 569 KiB/s wr, 83 op/s
Dec 13 09:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:55.458 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:55.459 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:22:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:22:55.460 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:22:56 compute-0 ceph-mon[76537]: pgmap v3654: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 569 KiB/s wr, 83 op/s
Dec 13 09:22:56 compute-0 nova_compute[248510]: 2025-12-13 09:22:56.507 248514 DEBUG nova.network.neutron [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updated VIF entry in instance network info cache for port f4743e20-bc79-4ca1-87e3-c94183b9a23f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:22:56 compute-0 nova_compute[248510]: 2025-12-13 09:22:56.509 248514 DEBUG nova.network.neutron [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updating instance_info_cache with network_info: [{"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:22:56 compute-0 nova_compute[248510]: 2025-12-13 09:22:56.545 248514 DEBUG oslo_concurrency.lockutils [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:22:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3655: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:22:58 compute-0 ceph-mon[76537]: pgmap v3655: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 09:22:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:22:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3656: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 76 op/s
Dec 13 09:22:59 compute-0 nova_compute[248510]: 2025-12-13 09:22:59.824 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:00 compute-0 ceph-mon[76537]: pgmap v3656: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 76 op/s
Dec 13 09:23:00 compute-0 ovn_controller[148476]: 2025-12-13T09:23:00Z|00215|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bd:a8:f9 10.100.0.13
Dec 13 09:23:00 compute-0 ovn_controller[148476]: 2025-12-13T09:23:00Z|00216|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bd:a8:f9 10.100.0.13
Dec 13 09:23:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3657: 321 pgs: 321 active+clean; 173 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 662 KiB/s rd, 351 KiB/s wr, 35 op/s
Dec 13 09:23:02 compute-0 ceph-mon[76537]: pgmap v3657: 321 pgs: 321 active+clean; 173 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 662 KiB/s rd, 351 KiB/s wr, 35 op/s
Dec 13 09:23:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3658: 321 pgs: 321 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Dec 13 09:23:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:23:03 compute-0 ceph-mon[76537]: pgmap v3658: 321 pgs: 321 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Dec 13 09:23:04 compute-0 nova_compute[248510]: 2025-12-13 09:23:04.825 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3659: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:23:06 compute-0 ceph-mon[76537]: pgmap v3659: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:23:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3660: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:23:07 compute-0 ceph-mon[76537]: pgmap v3660: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:23:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:23:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3661: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:23:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:23:09
Dec 13 09:23:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:23:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:23:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.meta', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control']
Dec 13 09:23:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:23:09 compute-0 ceph-mon[76537]: pgmap v3661: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 09:23:09 compute-0 nova_compute[248510]: 2025-12-13 09:23:09.828 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:23:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:23:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:23:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:23:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:23:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:23:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:23:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:23:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:23:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:23:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:23:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:23:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:23:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:23:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:23:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:23:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3662: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.660 248514 DEBUG nova.compute.manager [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-changed-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.661 248514 DEBUG nova.compute.manager [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Refreshing instance network info cache due to event network-changed-f4743e20-bc79-4ca1-87e3-c94183b9a23f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.661 248514 DEBUG oslo_concurrency.lockutils [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.662 248514 DEBUG oslo_concurrency.lockutils [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.662 248514 DEBUG nova.network.neutron [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Refreshing network info cache for port f4743e20-bc79-4ca1-87e3-c94183b9a23f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.725 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.726 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.726 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.727 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.728 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.729 248514 INFO nova.compute.manager [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Terminating instance
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.730 248514 DEBUG nova.compute.manager [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:23:11 compute-0 kernel: tapf4743e20-bc (unregistering): left promiscuous mode
Dec 13 09:23:11 compute-0 NetworkManager[50376]: <info>  [1765617791.7954] device (tapf4743e20-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:23:11 compute-0 ovn_controller[148476]: 2025-12-13T09:23:11Z|01636|binding|INFO|Releasing lport f4743e20-bc79-4ca1-87e3-c94183b9a23f from this chassis (sb_readonly=0)
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.806 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:11 compute-0 ovn_controller[148476]: 2025-12-13T09:23:11Z|01637|binding|INFO|Setting lport f4743e20-bc79-4ca1-87e3-c94183b9a23f down in Southbound
Dec 13 09:23:11 compute-0 ovn_controller[148476]: 2025-12-13T09:23:11Z|01638|binding|INFO|Removing iface tapf4743e20-bc ovn-installed in OVS
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.827 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:a8:f9 10.100.0.13 2001:db8::f816:3eff:febd:a8f9'], port_security=['fa:16:3e:bd:a8:f9 10.100.0.13 2001:db8::f816:3eff:febd:a8f9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:febd:a8f9/64', 'neutron:device_id': '7f12ff6b-a944-402c-9e58-ee4338d7eca4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd4a89b2-79c1-411e-b3a6-9349b772360d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=681ece70-9463-4a56-a445-6b0b679ff248, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f4743e20-bc79-4ca1-87e3-c94183b9a23f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:23:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.829 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f4743e20-bc79-4ca1-87e3-c94183b9a23f in datapath e4b9bd04-ada7-4867-9918-3cd5d21d273d unbound from our chassis
Dec 13 09:23:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.831 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4b9bd04-ada7-4867-9918-3cd5d21d273d
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:11 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000098.scope: Deactivated successfully.
Dec 13 09:23:11 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000098.scope: Consumed 14.723s CPU time.
Dec 13 09:23:11 compute-0 systemd-machined[210538]: Machine qemu-183-instance-00000098 terminated.
Dec 13 09:23:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.873 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc34f33-239c-494f-8a17-8947e5111a12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.922 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c7461754-3087-43d1-aadd-4988ecd9a054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.926 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5c406b85-f2b5-4158-8679-5066886a395c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.969 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5e7f90-cae4-46e1-9a9f-831ac5a7dfa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.973 248514 INFO nova.virt.libvirt.driver [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Instance destroyed successfully.
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.973 248514 DEBUG nova.objects.instance [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 7f12ff6b-a944-402c-9e58-ee4338d7eca4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.990 248514 DEBUG nova.virt.libvirt.vif [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1374339452',display_name='tempest-TestGettingAddress-server-1374339452',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1374339452',id=152,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPtRBq+IYkmbvmIJh/QIOTOUFFBD8bcvC1qbRw2zjdn4nFUb754ilWe6cE2YmyI9CIp/UsSOGtzaJ1FH6v8ozWcGolUeM9n52RqDlU5UB2iwMcylMROMdWHIoAhEPM94oA==',key_name='tempest-TestGettingAddress-343990024',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:22:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-330yrx20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:22:46Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=7f12ff6b-a944-402c-9e58-ee4338d7eca4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.990 248514 DEBUG nova.network.os_vif_util [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:23:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.990 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1c06f3-b08f-4286-8aab-5b08a2af948f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4b9bd04-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:66:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1029136, 'reachable_time': 27101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408305, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.991 248514 DEBUG nova.network.os_vif_util [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.993 248514 DEBUG os_vif [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.995 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4743e20-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:11 compute-0 nova_compute[248510]: 2025-12-13 09:23:11.999 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:12 compute-0 nova_compute[248510]: 2025-12-13 09:23:12.002 248514 INFO os_vif [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc')
Dec 13 09:23:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:12.014 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc71fe9a-10c9-45dc-9b36-beafbe0f9c59]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape4b9bd04-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1029153, 'tstamp': 1029153}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408307, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape4b9bd04-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1029158, 'tstamp': 1029158}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408307, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:12.016 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4b9bd04-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:23:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:12.020 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4b9bd04-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:23:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:12.020 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:23:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:12.021 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4b9bd04-a0, col_values=(('external_ids', {'iface-id': 'f3bff06d-212e-4afb-99d4-011e9e890967'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:23:12 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:12.021 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:23:12 compute-0 nova_compute[248510]: 2025-12-13 09:23:12.023 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:12 compute-0 ceph-mon[76537]: pgmap v3662: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 09:23:12 compute-0 nova_compute[248510]: 2025-12-13 09:23:12.325 248514 INFO nova.virt.libvirt.driver [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Deleting instance files /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4_del
Dec 13 09:23:12 compute-0 nova_compute[248510]: 2025-12-13 09:23:12.326 248514 INFO nova.virt.libvirt.driver [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Deletion of /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4_del complete
Dec 13 09:23:12 compute-0 nova_compute[248510]: 2025-12-13 09:23:12.393 248514 INFO nova.compute.manager [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Took 0.66 seconds to destroy the instance on the hypervisor.
Dec 13 09:23:12 compute-0 nova_compute[248510]: 2025-12-13 09:23:12.394 248514 DEBUG oslo.service.loopingcall [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:23:12 compute-0 nova_compute[248510]: 2025-12-13 09:23:12.394 248514 DEBUG nova.compute.manager [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:23:12 compute-0 nova_compute[248510]: 2025-12-13 09:23:12.394 248514 DEBUG nova.network.neutron [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:23:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3663: 321 pgs: 321 active+clean; 168 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 13 09:23:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:23:13 compute-0 nova_compute[248510]: 2025-12-13 09:23:13.742 248514 DEBUG nova.compute.manager [req-8a118b22-8b9f-4ae7-93be-e15a8c6f5e1d req-30fc612c-daac-43a8-9fee-428df901ba41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-vif-unplugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:23:13 compute-0 nova_compute[248510]: 2025-12-13 09:23:13.742 248514 DEBUG oslo_concurrency.lockutils [req-8a118b22-8b9f-4ae7-93be-e15a8c6f5e1d req-30fc612c-daac-43a8-9fee-428df901ba41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:13 compute-0 nova_compute[248510]: 2025-12-13 09:23:13.742 248514 DEBUG oslo_concurrency.lockutils [req-8a118b22-8b9f-4ae7-93be-e15a8c6f5e1d req-30fc612c-daac-43a8-9fee-428df901ba41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:13 compute-0 nova_compute[248510]: 2025-12-13 09:23:13.743 248514 DEBUG oslo_concurrency.lockutils [req-8a118b22-8b9f-4ae7-93be-e15a8c6f5e1d req-30fc612c-daac-43a8-9fee-428df901ba41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:13 compute-0 nova_compute[248510]: 2025-12-13 09:23:13.743 248514 DEBUG nova.compute.manager [req-8a118b22-8b9f-4ae7-93be-e15a8c6f5e1d req-30fc612c-daac-43a8-9fee-428df901ba41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] No waiting events found dispatching network-vif-unplugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:23:13 compute-0 nova_compute[248510]: 2025-12-13 09:23:13.743 248514 DEBUG nova.compute.manager [req-8a118b22-8b9f-4ae7-93be-e15a8c6f5e1d req-30fc612c-daac-43a8-9fee-428df901ba41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-vif-unplugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:23:14 compute-0 ceph-mon[76537]: pgmap v3663: 321 pgs: 321 active+clean; 168 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 13 09:23:14 compute-0 nova_compute[248510]: 2025-12-13 09:23:14.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3664: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 1.1 MiB/s wr, 52 op/s
Dec 13 09:23:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:23:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/903405779' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:23:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:23:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/903405779' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:23:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/903405779' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:23:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/903405779' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:23:15 compute-0 nova_compute[248510]: 2025-12-13 09:23:15.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:23:15 compute-0 nova_compute[248510]: 2025-12-13 09:23:15.812 248514 DEBUG nova.network.neutron [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:23:15 compute-0 nova_compute[248510]: 2025-12-13 09:23:15.830 248514 INFO nova.compute.manager [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Took 3.44 seconds to deallocate network for instance.
Dec 13 09:23:15 compute-0 nova_compute[248510]: 2025-12-13 09:23:15.856 248514 DEBUG nova.compute.manager [req-a42a6d54-c951-4c73-b740-9c483ed0dc8e req-c16d3bb3-2c00-445f-895c-aecb8fa1ddee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:23:15 compute-0 nova_compute[248510]: 2025-12-13 09:23:15.857 248514 DEBUG oslo_concurrency.lockutils [req-a42a6d54-c951-4c73-b740-9c483ed0dc8e req-c16d3bb3-2c00-445f-895c-aecb8fa1ddee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:15 compute-0 nova_compute[248510]: 2025-12-13 09:23:15.857 248514 DEBUG oslo_concurrency.lockutils [req-a42a6d54-c951-4c73-b740-9c483ed0dc8e req-c16d3bb3-2c00-445f-895c-aecb8fa1ddee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:15 compute-0 nova_compute[248510]: 2025-12-13 09:23:15.857 248514 DEBUG oslo_concurrency.lockutils [req-a42a6d54-c951-4c73-b740-9c483ed0dc8e req-c16d3bb3-2c00-445f-895c-aecb8fa1ddee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:15 compute-0 nova_compute[248510]: 2025-12-13 09:23:15.857 248514 DEBUG nova.compute.manager [req-a42a6d54-c951-4c73-b740-9c483ed0dc8e req-c16d3bb3-2c00-445f-895c-aecb8fa1ddee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] No waiting events found dispatching network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:23:15 compute-0 nova_compute[248510]: 2025-12-13 09:23:15.858 248514 WARNING nova.compute.manager [req-a42a6d54-c951-4c73-b740-9c483ed0dc8e req-c16d3bb3-2c00-445f-895c-aecb8fa1ddee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received unexpected event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f for instance with vm_state active and task_state deleting.
Dec 13 09:23:15 compute-0 nova_compute[248510]: 2025-12-13 09:23:15.877 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:15 compute-0 nova_compute[248510]: 2025-12-13 09:23:15.877 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:15 compute-0 nova_compute[248510]: 2025-12-13 09:23:15.976 248514 DEBUG oslo_concurrency.processutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:23:15 compute-0 podman[408328]: 2025-12-13 09:23:15.985461652 +0000 UTC m=+0.073017850 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 09:23:15 compute-0 podman[408329]: 2025-12-13 09:23:15.998195739 +0000 UTC m=+0.072745333 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:23:16 compute-0 podman[408327]: 2025-12-13 09:23:16.030207857 +0000 UTC m=+0.117916129 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 09:23:16 compute-0 ceph-mon[76537]: pgmap v3664: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 1.1 MiB/s wr, 52 op/s
Dec 13 09:23:16 compute-0 nova_compute[248510]: 2025-12-13 09:23:16.230 248514 DEBUG nova.network.neutron [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updated VIF entry in instance network info cache for port f4743e20-bc79-4ca1-87e3-c94183b9a23f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:23:16 compute-0 nova_compute[248510]: 2025-12-13 09:23:16.231 248514 DEBUG nova.network.neutron [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updating instance_info_cache with network_info: [{"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:23:16 compute-0 nova_compute[248510]: 2025-12-13 09:23:16.254 248514 DEBUG oslo_concurrency.lockutils [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:23:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:23:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2135396273' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:23:16 compute-0 nova_compute[248510]: 2025-12-13 09:23:16.538 248514 DEBUG oslo_concurrency.processutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:23:16 compute-0 nova_compute[248510]: 2025-12-13 09:23:16.545 248514 DEBUG nova.compute.provider_tree [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:23:16 compute-0 nova_compute[248510]: 2025-12-13 09:23:16.565 248514 DEBUG nova.scheduler.client.report [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:23:16 compute-0 nova_compute[248510]: 2025-12-13 09:23:16.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:17 compute-0 nova_compute[248510]: 2025-12-13 09:23:17.020 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3665: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 29 op/s
Dec 13 09:23:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2135396273' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:23:17 compute-0 nova_compute[248510]: 2025-12-13 09:23:17.379 248514 INFO nova.scheduler.client.report [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 7f12ff6b-a944-402c-9e58-ee4338d7eca4
Dec 13 09:23:17 compute-0 nova_compute[248510]: 2025-12-13 09:23:17.742 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:17 compute-0 nova_compute[248510]: 2025-12-13 09:23:17.978 248514 DEBUG nova.compute.manager [req-b5dc4951-b301-4325-8cee-9b13f7f41584 req-54f81004-7aa3-44ee-b611-37b9517de27c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-vif-deleted-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:23:17 compute-0 nova_compute[248510]: 2025-12-13 09:23:17.979 248514 INFO nova.compute.manager [req-b5dc4951-b301-4325-8cee-9b13f7f41584 req-54f81004-7aa3-44ee-b611-37b9517de27c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Neutron deleted interface f4743e20-bc79-4ca1-87e3-c94183b9a23f; detaching it from the instance and deleting it from the info cache
Dec 13 09:23:17 compute-0 nova_compute[248510]: 2025-12-13 09:23:17.979 248514 DEBUG nova.network.neutron [req-b5dc4951-b301-4325-8cee-9b13f7f41584 req-54f81004-7aa3-44ee-b611-37b9517de27c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Dec 13 09:23:17 compute-0 nova_compute[248510]: 2025-12-13 09:23:17.981 248514 DEBUG nova.compute.manager [req-b5dc4951-b301-4325-8cee-9b13f7f41584 req-54f81004-7aa3-44ee-b611-37b9517de27c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Detach interface failed, port_id=f4743e20-bc79-4ca1-87e3-c94183b9a23f, reason: Instance 7f12ff6b-a944-402c-9e58-ee4338d7eca4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:23:18 compute-0 ceph-mon[76537]: pgmap v3665: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 29 op/s
Dec 13 09:23:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.091 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.092 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.093 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.094 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.094 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.096 248514 INFO nova.compute.manager [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Terminating instance
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.098 248514 DEBUG nova.compute.manager [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:23:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3666: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 29 op/s
Dec 13 09:23:19 compute-0 kernel: tap2777ba55-72 (unregistering): left promiscuous mode
Dec 13 09:23:19 compute-0 NetworkManager[50376]: <info>  [1765617799.1471] device (tap2777ba55-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:23:19 compute-0 ovn_controller[148476]: 2025-12-13T09:23:19Z|01639|binding|INFO|Releasing lport 2777ba55-72e3-4334-96ae-48077ed6a8d5 from this chassis (sb_readonly=0)
Dec 13 09:23:19 compute-0 ovn_controller[148476]: 2025-12-13T09:23:19Z|01640|binding|INFO|Setting lport 2777ba55-72e3-4334-96ae-48077ed6a8d5 down in Southbound
Dec 13 09:23:19 compute-0 ovn_controller[148476]: 2025-12-13T09:23:19Z|01641|binding|INFO|Removing iface tap2777ba55-72 ovn-installed in OVS
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.157 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.166 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:17:51 10.100.0.14 2001:db8::f816:3eff:fe4e:1751'], port_security=['fa:16:3e:4e:17:51 10.100.0.14 2001:db8::f816:3eff:fe4e:1751'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe4e:1751/64', 'neutron:device_id': '1437fc03-8f31-440f-8928-2fe388a22bbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd4a89b2-79c1-411e-b3a6-9349b772360d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=681ece70-9463-4a56-a445-6b0b679ff248, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2777ba55-72e3-4334-96ae-48077ed6a8d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.167 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2777ba55-72e3-4334-96ae-48077ed6a8d5 in datapath e4b9bd04-ada7-4867-9918-3cd5d21d273d unbound from our chassis
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.169 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4b9bd04-ada7-4867-9918-3cd5d21d273d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.170 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c58bfc89-8f6e-4051-9558-0fd1a1bfd6b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.171 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d namespace which is not needed anymore
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.195 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:19 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000097.scope: Deactivated successfully.
Dec 13 09:23:19 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000097.scope: Consumed 16.905s CPU time.
Dec 13 09:23:19 compute-0 systemd-machined[210538]: Machine qemu-182-instance-00000097 terminated.
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.344 248514 INFO nova.virt.libvirt.driver [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Instance destroyed successfully.
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.346 248514 DEBUG nova.objects.instance [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 1437fc03-8f31-440f-8928-2fe388a22bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:23:19 compute-0 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [NOTICE]   (407112) : haproxy version is 2.8.14-c23fe91
Dec 13 09:23:19 compute-0 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [NOTICE]   (407112) : path to executable is /usr/sbin/haproxy
Dec 13 09:23:19 compute-0 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [WARNING]  (407112) : Exiting Master process...
Dec 13 09:23:19 compute-0 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [WARNING]  (407112) : Exiting Master process...
Dec 13 09:23:19 compute-0 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [ALERT]    (407112) : Current worker (407114) exited with code 143 (Terminated)
Dec 13 09:23:19 compute-0 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [WARNING]  (407112) : All workers exited. Exiting... (0)
Dec 13 09:23:19 compute-0 systemd[1]: libpod-5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8.scope: Deactivated successfully.
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.366 248514 DEBUG nova.virt.libvirt.vif [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:21:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1012009154',display_name='tempest-TestGettingAddress-server-1012009154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1012009154',id=151,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPtRBq+IYkmbvmIJh/QIOTOUFFBD8bcvC1qbRw2zjdn4nFUb754ilWe6cE2YmyI9CIp/UsSOGtzaJ1FH6v8ozWcGolUeM9n52RqDlU5UB2iwMcylMROMdWHIoAhEPM94oA==',key_name='tempest-TestGettingAddress-343990024',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:21:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-4xgqh80f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:21:54Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=1437fc03-8f31-440f-8928-2fe388a22bbe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:23:19 compute-0 podman[408436]: 2025-12-13 09:23:19.367430701 +0000 UTC m=+0.072831376 container died 5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.367 248514 DEBUG nova.network.os_vif_util [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.368 248514 DEBUG nova.network.os_vif_util [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.369 248514 DEBUG os_vif [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.373 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.373 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2777ba55-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.376 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.379 248514 INFO os_vif [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72')
Dec 13 09:23:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd99150eceb1e5ac1a3f0f2d3168cf6b269a5c94e89b47366e7abdf1ad0507ca-merged.mount: Deactivated successfully.
Dec 13 09:23:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8-userdata-shm.mount: Deactivated successfully.
Dec 13 09:23:19 compute-0 podman[408436]: 2025-12-13 09:23:19.428037791 +0000 UTC m=+0.133438416 container cleanup 5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:23:19 compute-0 systemd[1]: libpod-conmon-5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8.scope: Deactivated successfully.
Dec 13 09:23:19 compute-0 podman[408497]: 2025-12-13 09:23:19.535258622 +0000 UTC m=+0.064012886 container remove 5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.542 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eee7399f-882b-41e9-8f34-cc28d1cce3b3]: (4, ('Sat Dec 13 09:23:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d (5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8)\n5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8\nSat Dec 13 09:23:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d (5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8)\n5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.546 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab2db02-4a7a-4adb-a653-f1beb5bae508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.548 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4b9bd04-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:19 compute-0 kernel: tape4b9bd04-a0: left promiscuous mode
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a23a3a08-6749-48e0-91cb-f1d7adfb4590]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.588 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[674fd9e6-5787-4a88-9829-873637021180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.589 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc383750-bc66-45a3-a760-72844afaf973]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.608 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c68055-a588-4924-b736-1742f24cbbfb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1029125, 'reachable_time': 17057, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408513, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:19 compute-0 systemd[1]: run-netns-ovnmeta\x2de4b9bd04\x2dada7\x2d4867\x2d9918\x2d3cd5d21d273d.mount: Deactivated successfully.
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.612 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 09:23:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.613 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[363c6e2a-7e13-4aac-849a-27768c598cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.686 248514 INFO nova.virt.libvirt.driver [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Deleting instance files /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe_del
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.687 248514 INFO nova.virt.libvirt.driver [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Deletion of /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe_del complete
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.761 248514 INFO nova.compute.manager [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Took 0.66 seconds to destroy the instance on the hypervisor.
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.762 248514 DEBUG oslo.service.loopingcall [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.762 248514 DEBUG nova.compute.manager [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.763 248514 DEBUG nova.network.neutron [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:23:19 compute-0 nova_compute[248510]: 2025-12-13 09:23:19.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.075 248514 DEBUG nova.compute.manager [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-changed-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.075 248514 DEBUG nova.compute.manager [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Refreshing instance network info cache due to event network-changed-2777ba55-72e3-4334-96ae-48077ed6a8d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.076 248514 DEBUG oslo_concurrency.lockutils [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.076 248514 DEBUG oslo_concurrency.lockutils [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.076 248514 DEBUG nova.network.neutron [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Refreshing network info cache for port 2777ba55-72e3-4334-96ae-48077ed6a8d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.157 248514 DEBUG nova.compute.manager [req-5f6a5681-1087-4fad-b0ea-00885036706a req-f2fa81e4-a06f-4d7c-bd4f-14749e77a6f9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-vif-unplugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.158 248514 DEBUG oslo_concurrency.lockutils [req-5f6a5681-1087-4fad-b0ea-00885036706a req-f2fa81e4-a06f-4d7c-bd4f-14749e77a6f9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.158 248514 DEBUG oslo_concurrency.lockutils [req-5f6a5681-1087-4fad-b0ea-00885036706a req-f2fa81e4-a06f-4d7c-bd4f-14749e77a6f9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.159 248514 DEBUG oslo_concurrency.lockutils [req-5f6a5681-1087-4fad-b0ea-00885036706a req-f2fa81e4-a06f-4d7c-bd4f-14749e77a6f9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.159 248514 DEBUG nova.compute.manager [req-5f6a5681-1087-4fad-b0ea-00885036706a req-f2fa81e4-a06f-4d7c-bd4f-14749e77a6f9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] No waiting events found dispatching network-vif-unplugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.160 248514 DEBUG nova.compute.manager [req-5f6a5681-1087-4fad-b0ea-00885036706a req-f2fa81e4-a06f-4d7c-bd4f-14749e77a6f9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-vif-unplugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:23:20 compute-0 ceph-mon[76537]: pgmap v3666: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 29 op/s
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.565 248514 DEBUG nova.network.neutron [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.584 248514 INFO nova.compute.manager [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Took 0.82 seconds to deallocate network for instance.
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.628 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.628 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:20 compute-0 nova_compute[248510]: 2025-12-13 09:23:20.671 248514 DEBUG oslo_concurrency.processutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3667: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 KiB/s wr, 29 op/s
Dec 13 09:23:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:23:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3610069559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:23:21 compute-0 nova_compute[248510]: 2025-12-13 09:23:21.292 248514 DEBUG oslo_concurrency.processutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:23:21 compute-0 nova_compute[248510]: 2025-12-13 09:23:21.301 248514 DEBUG nova.compute.provider_tree [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:23:21 compute-0 nova_compute[248510]: 2025-12-13 09:23:21.319 248514 DEBUG nova.scheduler.client.report [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:23:21 compute-0 nova_compute[248510]: 2025-12-13 09:23:21.344 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:21 compute-0 nova_compute[248510]: 2025-12-13 09:23:21.369 248514 INFO nova.scheduler.client.report [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 1437fc03-8f31-440f-8928-2fe388a22bbe
Dec 13 09:23:21 compute-0 nova_compute[248510]: 2025-12-13 09:23:21.438 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:21 compute-0 ceph-mon[76537]: pgmap v3667: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 KiB/s wr, 29 op/s
Dec 13 09:23:21 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3610069559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007748083575532455 of space, bias 1.0, pg target 0.23244250726597362 quantized to 32 (current 32)
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696737595831594 of space, bias 1.0, pg target 0.20090212787494782 quantized to 32 (current 32)
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.723939957605983e-07 of space, bias 4.0, pg target 0.0006868727949127179 quantized to 16 (current 32)
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:23:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:23:22 compute-0 nova_compute[248510]: 2025-12-13 09:23:22.172 248514 DEBUG nova.compute.manager [req-010775ff-71af-4ac8-8d5e-d9f134885f02 req-316e30ae-a828-4a32-9c24-98d9587b7fc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-vif-deleted-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:23:22 compute-0 nova_compute[248510]: 2025-12-13 09:23:22.245 248514 DEBUG nova.compute.manager [req-2779d8b0-6941-453d-bf58-32647385d28b req-16c20014-a79f-44eb-859e-6eb4d5efd8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:23:22 compute-0 nova_compute[248510]: 2025-12-13 09:23:22.246 248514 DEBUG oslo_concurrency.lockutils [req-2779d8b0-6941-453d-bf58-32647385d28b req-16c20014-a79f-44eb-859e-6eb4d5efd8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:22 compute-0 nova_compute[248510]: 2025-12-13 09:23:22.246 248514 DEBUG oslo_concurrency.lockutils [req-2779d8b0-6941-453d-bf58-32647385d28b req-16c20014-a79f-44eb-859e-6eb4d5efd8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:22 compute-0 nova_compute[248510]: 2025-12-13 09:23:22.247 248514 DEBUG oslo_concurrency.lockutils [req-2779d8b0-6941-453d-bf58-32647385d28b req-16c20014-a79f-44eb-859e-6eb4d5efd8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:22 compute-0 nova_compute[248510]: 2025-12-13 09:23:22.247 248514 DEBUG nova.compute.manager [req-2779d8b0-6941-453d-bf58-32647385d28b req-16c20014-a79f-44eb-859e-6eb4d5efd8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] No waiting events found dispatching network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:23:22 compute-0 nova_compute[248510]: 2025-12-13 09:23:22.247 248514 WARNING nova.compute.manager [req-2779d8b0-6941-453d-bf58-32647385d28b req-16c20014-a79f-44eb-859e-6eb4d5efd8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received unexpected event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 for instance with vm_state deleted and task_state None.
Dec 13 09:23:22 compute-0 nova_compute[248510]: 2025-12-13 09:23:22.489 248514 DEBUG nova.network.neutron [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updated VIF entry in instance network info cache for port 2777ba55-72e3-4334-96ae-48077ed6a8d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:23:22 compute-0 nova_compute[248510]: 2025-12-13 09:23:22.490 248514 DEBUG nova.network.neutron [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updating instance_info_cache with network_info: [{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:23:22 compute-0 nova_compute[248510]: 2025-12-13 09:23:22.513 248514 DEBUG oslo_concurrency.lockutils [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:23:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3668: 321 pgs: 321 active+clean; 89 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.4 KiB/s wr, 31 op/s
Dec 13 09:23:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:23:24 compute-0 ceph-mon[76537]: pgmap v3668: 321 pgs: 321 active+clean; 89 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.4 KiB/s wr, 31 op/s
Dec 13 09:23:24 compute-0 nova_compute[248510]: 2025-12-13 09:23:24.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:24 compute-0 nova_compute[248510]: 2025-12-13 09:23:24.893 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3669: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 4.6 KiB/s wr, 45 op/s
Dec 13 09:23:25 compute-0 ceph-mon[76537]: pgmap v3669: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 4.6 KiB/s wr, 45 op/s
Dec 13 09:23:26 compute-0 nova_compute[248510]: 2025-12-13 09:23:26.971 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617791.9701722, 7f12ff6b-a944-402c-9e58-ee4338d7eca4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:23:26 compute-0 nova_compute[248510]: 2025-12-13 09:23:26.972 248514 INFO nova.compute.manager [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] VM Stopped (Lifecycle Event)
Dec 13 09:23:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3670: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Dec 13 09:23:27 compute-0 nova_compute[248510]: 2025-12-13 09:23:27.227 248514 DEBUG nova.compute.manager [None req-28b321c2-fd0b-4d99-97fe-f0f5c10b1cd0 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:23:27 compute-0 nova_compute[248510]: 2025-12-13 09:23:27.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:23:28 compute-0 ceph-mon[76537]: pgmap v3670: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Dec 13 09:23:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:23:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3671: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Dec 13 09:23:29 compute-0 nova_compute[248510]: 2025-12-13 09:23:29.378 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:29 compute-0 nova_compute[248510]: 2025-12-13 09:23:29.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:29 compute-0 nova_compute[248510]: 2025-12-13 09:23:29.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:29 compute-0 nova_compute[248510]: 2025-12-13 09:23:29.895 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:30 compute-0 ceph-mon[76537]: pgmap v3671: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Dec 13 09:23:30 compute-0 nova_compute[248510]: 2025-12-13 09:23:30.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:23:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3672: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 27 op/s
Dec 13 09:23:32 compute-0 ceph-mon[76537]: pgmap v3672: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 27 op/s
Dec 13 09:23:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3673: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:23:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:23:34 compute-0 ceph-mon[76537]: pgmap v3673: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 09:23:34 compute-0 nova_compute[248510]: 2025-12-13 09:23:34.342 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617799.3405201, 1437fc03-8f31-440f-8928-2fe388a22bbe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:23:34 compute-0 nova_compute[248510]: 2025-12-13 09:23:34.343 248514 INFO nova.compute.manager [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] VM Stopped (Lifecycle Event)
Dec 13 09:23:34 compute-0 nova_compute[248510]: 2025-12-13 09:23:34.377 248514 DEBUG nova.compute.manager [None req-ea8bc6c1-f54d-4455-af97-c4ac5d635111 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:23:34 compute-0 nova_compute[248510]: 2025-12-13 09:23:34.379 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:34 compute-0 nova_compute[248510]: 2025-12-13 09:23:34.897 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3674: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Dec 13 09:23:36 compute-0 ceph-mon[76537]: pgmap v3674: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Dec 13 09:23:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3675: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:37 compute-0 ceph-mon[76537]: pgmap v3675: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:37 compute-0 nova_compute[248510]: 2025-12-13 09:23:37.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:23:37 compute-0 nova_compute[248510]: 2025-12-13 09:23:37.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:23:37 compute-0 nova_compute[248510]: 2025-12-13 09:23:37.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:23:37 compute-0 sudo[408537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:23:37 compute-0 sudo[408537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:23:37 compute-0 sudo[408537]: pam_unix(sudo:session): session closed for user root
Dec 13 09:23:37 compute-0 nova_compute[248510]: 2025-12-13 09:23:37.876 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:23:37 compute-0 sudo[408562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:23:37 compute-0 sudo[408562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:23:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:23:38 compute-0 sudo[408562]: pam_unix(sudo:session): session closed for user root
Dec 13 09:23:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:23:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:23:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:23:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:23:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:23:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:23:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:23:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:23:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:23:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:23:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:23:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:23:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:23:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:23:38 compute-0 sudo[408619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:23:38 compute-0 sudo[408619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:23:38 compute-0 sudo[408619]: pam_unix(sudo:session): session closed for user root
Dec 13 09:23:38 compute-0 sudo[408644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:23:38 compute-0 sudo[408644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:23:38 compute-0 nova_compute[248510]: 2025-12-13 09:23:38.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:23:39 compute-0 podman[408682]: 2025-12-13 09:23:39.070862715 +0000 UTC m=+0.059712228 container create f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 09:23:39 compute-0 systemd[1]: Started libpod-conmon-f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1.scope.
Dec 13 09:23:39 compute-0 podman[408682]: 2025-12-13 09:23:39.039637362 +0000 UTC m=+0.028486935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:23:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3676: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:23:39 compute-0 podman[408682]: 2025-12-13 09:23:39.174443782 +0000 UTC m=+0.163293275 container init f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 09:23:39 compute-0 podman[408682]: 2025-12-13 09:23:39.182539455 +0000 UTC m=+0.171388928 container start f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_zhukovsky, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 09:23:39 compute-0 podman[408682]: 2025-12-13 09:23:39.186422202 +0000 UTC m=+0.175271785 container attach f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 09:23:39 compute-0 beautiful_zhukovsky[408698]: 167 167
Dec 13 09:23:39 compute-0 systemd[1]: libpod-f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1.scope: Deactivated successfully.
Dec 13 09:23:39 compute-0 conmon[408698]: conmon f1b698a6877809036522 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1.scope/container/memory.events
Dec 13 09:23:39 compute-0 podman[408682]: 2025-12-13 09:23:39.190778321 +0000 UTC m=+0.179627784 container died f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:23:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-80bc05bd92f119a25415ebb602c748c2aa02b5b99bc62b0dea26955bda07548e-merged.mount: Deactivated successfully.
Dec 13 09:23:39 compute-0 podman[408682]: 2025-12-13 09:23:39.225324427 +0000 UTC m=+0.214173900 container remove f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_zhukovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 09:23:39 compute-0 systemd[1]: libpod-conmon-f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1.scope: Deactivated successfully.
Dec 13 09:23:39 compute-0 nova_compute[248510]: 2025-12-13 09:23:39.381 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:39 compute-0 podman[408721]: 2025-12-13 09:23:39.414225741 +0000 UTC m=+0.053579743 container create 75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Dec 13 09:23:39 compute-0 systemd[1]: Started libpod-conmon-75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694.scope.
Dec 13 09:23:39 compute-0 podman[408721]: 2025-12-13 09:23:39.3918424 +0000 UTC m=+0.031196492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:23:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1c99ba7960d6fd4f288179a158c40c0ac013319bb872026c4fc7a3d7e348fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1c99ba7960d6fd4f288179a158c40c0ac013319bb872026c4fc7a3d7e348fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1c99ba7960d6fd4f288179a158c40c0ac013319bb872026c4fc7a3d7e348fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1c99ba7960d6fd4f288179a158c40c0ac013319bb872026c4fc7a3d7e348fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1c99ba7960d6fd4f288179a158c40c0ac013319bb872026c4fc7a3d7e348fe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:39 compute-0 podman[408721]: 2025-12-13 09:23:39.518147796 +0000 UTC m=+0.157501848 container init 75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 09:23:39 compute-0 podman[408721]: 2025-12-13 09:23:39.527151172 +0000 UTC m=+0.166505184 container start 75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:23:39 compute-0 podman[408721]: 2025-12-13 09:23:39.53146452 +0000 UTC m=+0.170818522 container attach 75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:23:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:23:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:23:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:23:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:23:39 compute-0 ceph-mon[76537]: pgmap v3676: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:39 compute-0 nova_compute[248510]: 2025-12-13 09:23:39.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:23:39 compute-0 nova_compute[248510]: 2025-12-13 09:23:39.900 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:39 compute-0 nova_compute[248510]: 2025-12-13 09:23:39.959 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:39 compute-0 nova_compute[248510]: 2025-12-13 09:23:39.960 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:39 compute-0 nova_compute[248510]: 2025-12-13 09:23:39.960 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:39 compute-0 nova_compute[248510]: 2025-12-13 09:23:39.960 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:23:39 compute-0 nova_compute[248510]: 2025-12-13 09:23:39.961 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:23:40 compute-0 nice_mccarthy[408738]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:23:40 compute-0 nice_mccarthy[408738]: --> All data devices are unavailable
Dec 13 09:23:40 compute-0 systemd[1]: libpod-75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694.scope: Deactivated successfully.
Dec 13 09:23:40 compute-0 podman[408721]: 2025-12-13 09:23:40.143725908 +0000 UTC m=+0.783079920 container died 75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:23:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:23:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:23:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:23:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:23:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:23:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b1c99ba7960d6fd4f288179a158c40c0ac013319bb872026c4fc7a3d7e348fe-merged.mount: Deactivated successfully.
Dec 13 09:23:40 compute-0 podman[408721]: 2025-12-13 09:23:40.242030842 +0000 UTC m=+0.881384844 container remove 75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mccarthy, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:23:40 compute-0 systemd[1]: libpod-conmon-75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694.scope: Deactivated successfully.
Dec 13 09:23:40 compute-0 sudo[408644]: pam_unix(sudo:session): session closed for user root
Dec 13 09:23:40 compute-0 sudo[408790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:23:40 compute-0 sudo[408790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:23:40 compute-0 sudo[408790]: pam_unix(sudo:session): session closed for user root
Dec 13 09:23:40 compute-0 sudo[408815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:23:40 compute-0 sudo[408815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:23:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:23:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3377932943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:23:40 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3377932943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:23:40 compute-0 nova_compute[248510]: 2025-12-13 09:23:40.638 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:23:40 compute-0 podman[408855]: 2025-12-13 09:23:40.793558747 +0000 UTC m=+0.057800700 container create ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 09:23:40 compute-0 systemd[1]: Started libpod-conmon-ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4.scope.
Dec 13 09:23:40 compute-0 podman[408855]: 2025-12-13 09:23:40.768194491 +0000 UTC m=+0.032436464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:23:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:23:40 compute-0 podman[408855]: 2025-12-13 09:23:40.8938207 +0000 UTC m=+0.158062673 container init ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hamilton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:23:40 compute-0 nova_compute[248510]: 2025-12-13 09:23:40.897 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:23:40 compute-0 nova_compute[248510]: 2025-12-13 09:23:40.898 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3444MB free_disk=59.98738182429224GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:23:40 compute-0 nova_compute[248510]: 2025-12-13 09:23:40.899 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:40 compute-0 nova_compute[248510]: 2025-12-13 09:23:40.899 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:40 compute-0 podman[408855]: 2025-12-13 09:23:40.902743634 +0000 UTC m=+0.166985587 container start ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hamilton, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 09:23:40 compute-0 podman[408855]: 2025-12-13 09:23:40.906704603 +0000 UTC m=+0.170946556 container attach ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hamilton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:23:40 compute-0 quizzical_hamilton[408871]: 167 167
Dec 13 09:23:40 compute-0 systemd[1]: libpod-ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4.scope: Deactivated successfully.
Dec 13 09:23:40 compute-0 podman[408855]: 2025-12-13 09:23:40.909481303 +0000 UTC m=+0.173723256 container died ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hamilton, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d45d5d7b0cfda718cbb2a3d12f28251ce67c4d59ead35f12647cc755ef082fbc-merged.mount: Deactivated successfully.
Dec 13 09:23:40 compute-0 podman[408855]: 2025-12-13 09:23:40.955803774 +0000 UTC m=+0.220045727 container remove ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hamilton, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:23:40 compute-0 systemd[1]: libpod-conmon-ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4.scope: Deactivated successfully.
Dec 13 09:23:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3677: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:41 compute-0 podman[408896]: 2025-12-13 09:23:41.153165641 +0000 UTC m=+0.051160023 container create e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:23:41 compute-0 systemd[1]: Started libpod-conmon-e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1.scope.
Dec 13 09:23:41 compute-0 nova_compute[248510]: 2025-12-13 09:23:41.203 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:23:41 compute-0 nova_compute[248510]: 2025-12-13 09:23:41.204 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:23:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:23:41 compute-0 podman[408896]: 2025-12-13 09:23:41.133369745 +0000 UTC m=+0.031364157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d32c3b279e82e4400e4388a7eb05493171966370ed1df77eef17f1cb18648ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d32c3b279e82e4400e4388a7eb05493171966370ed1df77eef17f1cb18648ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d32c3b279e82e4400e4388a7eb05493171966370ed1df77eef17f1cb18648ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d32c3b279e82e4400e4388a7eb05493171966370ed1df77eef17f1cb18648ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:41 compute-0 podman[408896]: 2025-12-13 09:23:41.245179578 +0000 UTC m=+0.143173970 container init e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:23:41 compute-0 podman[408896]: 2025-12-13 09:23:41.2564359 +0000 UTC m=+0.154430282 container start e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 09:23:41 compute-0 podman[408896]: 2025-12-13 09:23:41.260151983 +0000 UTC m=+0.158146365 container attach e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:23:41 compute-0 nova_compute[248510]: 2025-12-13 09:23:41.297 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:23:41 compute-0 sad_cartwright[408913]: {
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:     "0": [
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:         {
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "devices": [
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "/dev/loop3"
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             ],
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_name": "ceph_lv0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_size": "21470642176",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "name": "ceph_lv0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "tags": {
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.cluster_name": "ceph",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.crush_device_class": "",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.encrypted": "0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.objectstore": "bluestore",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.osd_id": "0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.type": "block",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.vdo": "0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.with_tpm": "0"
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             },
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "type": "block",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "vg_name": "ceph_vg0"
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:         }
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:     ],
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:     "1": [
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:         {
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "devices": [
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "/dev/loop4"
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             ],
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_name": "ceph_lv1",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_size": "21470642176",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "name": "ceph_lv1",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "tags": {
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.cluster_name": "ceph",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.crush_device_class": "",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.encrypted": "0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.objectstore": "bluestore",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.osd_id": "1",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.type": "block",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.vdo": "0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.with_tpm": "0"
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             },
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "type": "block",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "vg_name": "ceph_vg1"
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:         }
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:     ],
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:     "2": [
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:         {
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "devices": [
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "/dev/loop5"
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             ],
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_name": "ceph_lv2",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_size": "21470642176",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "name": "ceph_lv2",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "tags": {
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.cluster_name": "ceph",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.crush_device_class": "",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.encrypted": "0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.objectstore": "bluestore",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.osd_id": "2",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.type": "block",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.vdo": "0",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:                 "ceph.with_tpm": "0"
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             },
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "type": "block",
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:             "vg_name": "ceph_vg2"
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:         }
Dec 13 09:23:41 compute-0 sad_cartwright[408913]:     ]
Dec 13 09:23:41 compute-0 sad_cartwright[408913]: }
Dec 13 09:23:41 compute-0 systemd[1]: libpod-e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1.scope: Deactivated successfully.
Dec 13 09:23:41 compute-0 ceph-mon[76537]: pgmap v3677: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:41 compute-0 podman[408942]: 2025-12-13 09:23:41.670565581 +0000 UTC m=+0.029663955 container died e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d32c3b279e82e4400e4388a7eb05493171966370ed1df77eef17f1cb18648ff-merged.mount: Deactivated successfully.
Dec 13 09:23:41 compute-0 podman[408942]: 2025-12-13 09:23:41.734279048 +0000 UTC m=+0.093377372 container remove e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 09:23:41 compute-0 systemd[1]: libpod-conmon-e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1.scope: Deactivated successfully.
Dec 13 09:23:41 compute-0 sudo[408815]: pam_unix(sudo:session): session closed for user root
Dec 13 09:23:41 compute-0 sudo[408957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:23:41 compute-0 sudo[408957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:23:41 compute-0 sudo[408957]: pam_unix(sudo:session): session closed for user root
Dec 13 09:23:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:23:41 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480714251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:23:41 compute-0 nova_compute[248510]: 2025-12-13 09:23:41.928 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:23:41 compute-0 nova_compute[248510]: 2025-12-13 09:23:41.935 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:23:41 compute-0 sudo[408982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:23:41 compute-0 sudo[408982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:23:41 compute-0 nova_compute[248510]: 2025-12-13 09:23:41.961 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:23:41 compute-0 nova_compute[248510]: 2025-12-13 09:23:41.994 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:23:41 compute-0 nova_compute[248510]: 2025-12-13 09:23:41.994 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:42 compute-0 podman[409021]: 2025-12-13 09:23:42.253701758 +0000 UTC m=+0.053570884 container create 73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:23:42 compute-0 systemd[1]: Started libpod-conmon-73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841.scope.
Dec 13 09:23:42 compute-0 podman[409021]: 2025-12-13 09:23:42.22585424 +0000 UTC m=+0.025723356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:23:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:23:42 compute-0 podman[409021]: 2025-12-13 09:23:42.36388218 +0000 UTC m=+0.163751326 container init 73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pare, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:23:42 compute-0 podman[409021]: 2025-12-13 09:23:42.373177893 +0000 UTC m=+0.173047039 container start 73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 09:23:42 compute-0 podman[409021]: 2025-12-13 09:23:42.377396749 +0000 UTC m=+0.177265885 container attach 73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:23:42 compute-0 dreamy_pare[409038]: 167 167
Dec 13 09:23:42 compute-0 systemd[1]: libpod-73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841.scope: Deactivated successfully.
Dec 13 09:23:42 compute-0 podman[409021]: 2025-12-13 09:23:42.380513677 +0000 UTC m=+0.180382783 container died 73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 09:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-22666dd1c6be481a20543995b1e8b1e2d5533e10914a1cc52e74a11fa645ab6a-merged.mount: Deactivated successfully.
Dec 13 09:23:42 compute-0 podman[409021]: 2025-12-13 09:23:42.427769891 +0000 UTC m=+0.227638987 container remove 73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pare, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 09:23:42 compute-0 systemd[1]: libpod-conmon-73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841.scope: Deactivated successfully.
Dec 13 09:23:42 compute-0 podman[409063]: 2025-12-13 09:23:42.61159792 +0000 UTC m=+0.056042636 container create 98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 09:23:42 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/480714251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:23:42 compute-0 systemd[1]: Started libpod-conmon-98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762.scope.
Dec 13 09:23:42 compute-0 podman[409063]: 2025-12-13 09:23:42.586585983 +0000 UTC m=+0.031030689 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:23:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0592f82bf25a3f8e2d3c5ab4152fb3782103c74e4dca33a2418004e36d6d0540/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0592f82bf25a3f8e2d3c5ab4152fb3782103c74e4dca33a2418004e36d6d0540/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0592f82bf25a3f8e2d3c5ab4152fb3782103c74e4dca33a2418004e36d6d0540/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0592f82bf25a3f8e2d3c5ab4152fb3782103c74e4dca33a2418004e36d6d0540/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:23:42 compute-0 podman[409063]: 2025-12-13 09:23:42.72449743 +0000 UTC m=+0.168942156 container init 98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:23:42 compute-0 podman[409063]: 2025-12-13 09:23:42.733219809 +0000 UTC m=+0.177664515 container start 98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:23:42 compute-0 podman[409063]: 2025-12-13 09:23:42.736834799 +0000 UTC m=+0.181279635 container attach 98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_shamir, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 09:23:42 compute-0 nova_compute[248510]: 2025-12-13 09:23:42.994 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:23:42 compute-0 nova_compute[248510]: 2025-12-13 09:23:42.996 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:23:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3678: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:23:43 compute-0 lvm[409155]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:23:43 compute-0 lvm[409155]: VG ceph_vg0 finished
Dec 13 09:23:43 compute-0 lvm[409156]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:23:43 compute-0 lvm[409156]: VG ceph_vg1 finished
Dec 13 09:23:43 compute-0 lvm[409158]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:23:43 compute-0 lvm[409158]: VG ceph_vg2 finished
Dec 13 09:23:43 compute-0 lvm[409160]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:23:43 compute-0 lvm[409160]: VG ceph_vg2 finished
Dec 13 09:23:43 compute-0 charming_shamir[409077]: {}
Dec 13 09:23:43 compute-0 ceph-mon[76537]: pgmap v3678: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:43 compute-0 systemd[1]: libpod-98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762.scope: Deactivated successfully.
Dec 13 09:23:43 compute-0 podman[409063]: 2025-12-13 09:23:43.660234915 +0000 UTC m=+1.104679611 container died 98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_shamir, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:23:43 compute-0 systemd[1]: libpod-98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762.scope: Consumed 1.498s CPU time.
Dec 13 09:23:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0592f82bf25a3f8e2d3c5ab4152fb3782103c74e4dca33a2418004e36d6d0540-merged.mount: Deactivated successfully.
Dec 13 09:23:43 compute-0 podman[409063]: 2025-12-13 09:23:43.725912502 +0000 UTC m=+1.170357198 container remove 98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:23:43 compute-0 systemd[1]: libpod-conmon-98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762.scope: Deactivated successfully.
Dec 13 09:23:43 compute-0 sudo[408982]: pam_unix(sudo:session): session closed for user root
Dec 13 09:23:43 compute-0 nova_compute[248510]: 2025-12-13 09:23:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:23:43 compute-0 nova_compute[248510]: 2025-12-13 09:23:43.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:23:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:23:43 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:23:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:23:43 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:23:43 compute-0 sudo[409175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:23:43 compute-0 sudo[409175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:23:43 compute-0 sudo[409175]: pam_unix(sudo:session): session closed for user root
Dec 13 09:23:44 compute-0 nova_compute[248510]: 2025-12-13 09:23:44.392 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:44.619 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:23:44 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:44.620 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:23:44 compute-0 nova_compute[248510]: 2025-12-13 09:23:44.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:44 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:23:44 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:23:44 compute-0 nova_compute[248510]: 2025-12-13 09:23:44.902 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3679: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:45 compute-0 ceph-mon[76537]: pgmap v3679: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:46 compute-0 podman[409202]: 2025-12-13 09:23:46.991121331 +0000 UTC m=+0.067347680 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 13 09:23:46 compute-0 podman[409201]: 2025-12-13 09:23:46.991151631 +0000 UTC m=+0.076371455 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 13 09:23:47 compute-0 podman[409200]: 2025-12-13 09:23:47.022298232 +0000 UTC m=+0.110765817 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 09:23:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3680: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:48 compute-0 ceph-mon[76537]: pgmap v3680: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:23:48 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:48.624 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:23:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3681: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:49 compute-0 nova_compute[248510]: 2025-12-13 09:23:49.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:49 compute-0 nova_compute[248510]: 2025-12-13 09:23:49.954 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:50 compute-0 ceph-mon[76537]: pgmap v3681: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3682: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:51 compute-0 nova_compute[248510]: 2025-12-13 09:23:51.390 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:51 compute-0 nova_compute[248510]: 2025-12-13 09:23:51.391 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:51 compute-0 nova_compute[248510]: 2025-12-13 09:23:51.412 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:23:51 compute-0 nova_compute[248510]: 2025-12-13 09:23:51.514 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:51 compute-0 nova_compute[248510]: 2025-12-13 09:23:51.514 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:51 compute-0 nova_compute[248510]: 2025-12-13 09:23:51.525 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:23:51 compute-0 nova_compute[248510]: 2025-12-13 09:23:51.526 248514 INFO nova.compute.claims [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:23:51 compute-0 nova_compute[248510]: 2025-12-13 09:23:51.789 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:23:52 compute-0 ceph-mon[76537]: pgmap v3682: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:23:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720196126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.368 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.378 248514 DEBUG nova.compute.provider_tree [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.400 248514 DEBUG nova.scheduler.client.report [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.425 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.426 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.478 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.478 248514 DEBUG nova.network.neutron [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.512 248514 INFO nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.529 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.619 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.621 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.622 248514 INFO nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Creating image(s)
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.653 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.685 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.713 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.717 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.807 248514 DEBUG nova.policy [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '895926e785ed4e44b6b55b32f5fa263c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.814 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.816 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.817 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.818 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.855 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:23:52 compute-0 nova_compute[248510]: 2025-12-13 09:23:52.859 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 df092140-61e7-46dc-a59e-317f6b309e77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:23:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3683: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1720196126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:23:53 compute-0 nova_compute[248510]: 2025-12-13 09:23:53.259 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 df092140-61e7-46dc-a59e-317f6b309e77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:23:53 compute-0 nova_compute[248510]: 2025-12-13 09:23:53.329 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] resizing rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:23:53 compute-0 nova_compute[248510]: 2025-12-13 09:23:53.415 248514 DEBUG nova.objects.instance [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'migration_context' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:23:53 compute-0 nova_compute[248510]: 2025-12-13 09:23:53.441 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:23:53 compute-0 nova_compute[248510]: 2025-12-13 09:23:53.442 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Ensure instance console log exists: /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:23:53 compute-0 nova_compute[248510]: 2025-12-13 09:23:53.443 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:53 compute-0 nova_compute[248510]: 2025-12-13 09:23:53.444 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:53 compute-0 nova_compute[248510]: 2025-12-13 09:23:53.444 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:23:53 compute-0 nova_compute[248510]: 2025-12-13 09:23:53.890 248514 DEBUG nova.network.neutron [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Successfully created port: 1d60d1ee-c619-439a-a2d3-0ab5e5872411 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 09:23:54 compute-0 ceph-mon[76537]: pgmap v3683: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:23:54 compute-0 nova_compute[248510]: 2025-12-13 09:23:54.401 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:54 compute-0 nova_compute[248510]: 2025-12-13 09:23:54.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:23:54 compute-0 nova_compute[248510]: 2025-12-13 09:23:54.957 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3684: 321 pgs: 321 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Dec 13 09:23:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:55.459 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:23:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:55.460 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:23:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:23:55.460 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:23:56 compute-0 ceph-mon[76537]: pgmap v3684: 321 pgs: 321 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Dec 13 09:23:56 compute-0 nova_compute[248510]: 2025-12-13 09:23:56.779 248514 DEBUG nova.network.neutron [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Successfully updated port: 1d60d1ee-c619-439a-a2d3-0ab5e5872411 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 09:23:56 compute-0 nova_compute[248510]: 2025-12-13 09:23:56.811 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:23:56 compute-0 nova_compute[248510]: 2025-12-13 09:23:56.811 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquired lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:23:56 compute-0 nova_compute[248510]: 2025-12-13 09:23:56.812 248514 DEBUG nova.network.neutron [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:23:56 compute-0 nova_compute[248510]: 2025-12-13 09:23:56.926 248514 DEBUG nova.compute.manager [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-changed-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:23:56 compute-0 nova_compute[248510]: 2025-12-13 09:23:56.927 248514 DEBUG nova.compute.manager [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Refreshing instance network info cache due to event network-changed-1d60d1ee-c619-439a-a2d3-0ab5e5872411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 09:23:56 compute-0 nova_compute[248510]: 2025-12-13 09:23:56.927 248514 DEBUG oslo_concurrency.lockutils [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:23:56 compute-0 nova_compute[248510]: 2025-12-13 09:23:56.992 248514 DEBUG nova.network.neutron [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:23:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3685: 321 pgs: 321 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Dec 13 09:23:58 compute-0 ceph-mon[76537]: pgmap v3685: 321 pgs: 321 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Dec 13 09:23:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.831 248514 DEBUG nova.network.neutron [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updating instance_info_cache with network_info: [{"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.865 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Releasing lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.865 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance network_info: |[{"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.867 248514 DEBUG oslo_concurrency.lockutils [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.867 248514 DEBUG nova.network.neutron [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Refreshing network info cache for port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.872 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Start _get_guest_xml network_info=[{"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.880 248514 WARNING nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.885 248514 DEBUG nova.virt.libvirt.host [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.886 248514 DEBUG nova.virt.libvirt.host [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.894 248514 DEBUG nova.virt.libvirt.host [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.894 248514 DEBUG nova.virt.libvirt.host [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.895 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.895 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.896 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.896 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.896 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.896 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.897 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.897 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.897 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.897 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.898 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.898 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:23:58 compute-0 nova_compute[248510]: 2025-12-13 09:23:58.901 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:23:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3686: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:23:59 compute-0 nova_compute[248510]: 2025-12-13 09:23:59.457 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:23:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:23:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/9542623' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:23:59 compute-0 nova_compute[248510]: 2025-12-13 09:23:59.523 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:23:59 compute-0 nova_compute[248510]: 2025-12-13 09:23:59.549 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:23:59 compute-0 nova_compute[248510]: 2025-12-13 09:23:59.556 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:23:59 compute-0 nova_compute[248510]: 2025-12-13 09:23:59.959 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:24:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563599860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.135 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.137 248514 DEBUG nova.virt.libvirt.vif [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-175634348',display_name='tempest-TestServerAdvancedOps-server-175634348',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-175634348',id=153,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7e519f414149a7bdf699bbd9b4a3e9',ramdisk_id='',reservation_id='r-e7ub8c4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1044369445',owner_user_name='tempest-TestServerAdvancedOps-1044369445-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:23:52Z,user_data=None,user_id='895926e785ed4e44b6b55b32f5fa263c',uuid=df092140-61e7-46dc-a59e-317f6b309e77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.137 248514 DEBUG nova.network.os_vif_util [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converting VIF {"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.138 248514 DEBUG nova.network.os_vif_util [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.139 248514 DEBUG nova.objects.instance [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.193 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:24:00 compute-0 nova_compute[248510]:   <uuid>df092140-61e7-46dc-a59e-317f6b309e77</uuid>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   <name>instance-00000099</name>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <nova:name>tempest-TestServerAdvancedOps-server-175634348</nova:name>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:23:58</nova:creationTime>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:24:00 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:24:00 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:24:00 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:24:00 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:24:00 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:24:00 compute-0 nova_compute[248510]:         <nova:user uuid="895926e785ed4e44b6b55b32f5fa263c">tempest-TestServerAdvancedOps-1044369445-project-member</nova:user>
Dec 13 09:24:00 compute-0 nova_compute[248510]:         <nova:project uuid="ca7e519f414149a7bdf699bbd9b4a3e9">tempest-TestServerAdvancedOps-1044369445</nova:project>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <nova:ports>
Dec 13 09:24:00 compute-0 nova_compute[248510]:         <nova:port uuid="1d60d1ee-c619-439a-a2d3-0ab5e5872411">
Dec 13 09:24:00 compute-0 nova_compute[248510]:           <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:         </nova:port>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       </nova:ports>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <system>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <entry name="serial">df092140-61e7-46dc-a59e-317f6b309e77</entry>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <entry name="uuid">df092140-61e7-46dc-a59e-317f6b309e77</entry>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     </system>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   <os>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   </os>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   <features>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   </features>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/df092140-61e7-46dc-a59e-317f6b309e77_disk">
Dec 13 09:24:00 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       </source>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:24:00 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/df092140-61e7-46dc-a59e-317f6b309e77_disk.config">
Dec 13 09:24:00 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       </source>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:24:00 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <interface type="ethernet">
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <mac address="fa:16:3e:77:21:8c"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <mtu size="1442"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <target dev="tap1d60d1ee-c6"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     </interface>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/console.log" append="off"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <video>
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     </video>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:24:00 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:24:00 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:24:00 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:24:00 compute-0 nova_compute[248510]: </domain>
Dec 13 09:24:00 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.194 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Preparing to wait for external event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.195 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.195 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.195 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.196 248514 DEBUG nova.virt.libvirt.vif [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-175634348',display_name='tempest-TestServerAdvancedOps-server-175634348',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-175634348',id=153,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7e519f414149a7bdf699bbd9b4a3e9',ramdisk_id='',reservation_id='r-e7ub8c4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1044369445',owner_user_name='tempest-TestServerAdvancedOps-1044369445-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:23:52Z,user_data=None,user_id='895926e785ed4e44b6b55b32f5fa263c',uuid=df092140-61e7-46dc-a59e-317f6b309e77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.196 248514 DEBUG nova.network.os_vif_util [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converting VIF {"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.197 248514 DEBUG nova.network.os_vif_util [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.197 248514 DEBUG os_vif [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.198 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.199 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.203 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.204 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d60d1ee-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.205 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d60d1ee-c6, col_values=(('external_ids', {'iface-id': '1d60d1ee-c619-439a-a2d3-0ab5e5872411', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:21:8c', 'vm-uuid': 'df092140-61e7-46dc-a59e-317f6b309e77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.206 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:00 compute-0 NetworkManager[50376]: <info>  [1765617840.2085] manager: (tap1d60d1ee-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/681)
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.219 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.221 248514 INFO os_vif [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6')
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.283 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.283 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.284 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] No VIF found with MAC fa:16:3e:77:21:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.284 248514 INFO nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Using config drive
Dec 13 09:24:00 compute-0 ceph-mon[76537]: pgmap v3686: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 09:24:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/9542623' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:24:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1563599860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.313 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.926 248514 DEBUG nova.network.neutron [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updated VIF entry in instance network info cache for port 1d60d1ee-c619-439a-a2d3-0ab5e5872411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.927 248514 DEBUG nova.network.neutron [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updating instance_info_cache with network_info: [{"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:24:00 compute-0 nova_compute[248510]: 2025-12-13 09:24:00.995 248514 DEBUG oslo_concurrency.lockutils [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:24:01 compute-0 nova_compute[248510]: 2025-12-13 09:24:01.064 248514 INFO nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Creating config drive at /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/disk.config
Dec 13 09:24:01 compute-0 nova_compute[248510]: 2025-12-13 09:24:01.069 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdjchlz_z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:24:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3687: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 09:24:01 compute-0 nova_compute[248510]: 2025-12-13 09:24:01.235 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdjchlz_z" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:24:01 compute-0 nova_compute[248510]: 2025-12-13 09:24:01.265 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:24:01 compute-0 nova_compute[248510]: 2025-12-13 09:24:01.270 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/disk.config df092140-61e7-46dc-a59e-317f6b309e77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:24:01 compute-0 nova_compute[248510]: 2025-12-13 09:24:01.440 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/disk.config df092140-61e7-46dc-a59e-317f6b309e77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:24:01 compute-0 nova_compute[248510]: 2025-12-13 09:24:01.441 248514 INFO nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Deleting local config drive /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/disk.config because it was imported into RBD.
Dec 13 09:24:01 compute-0 kernel: tap1d60d1ee-c6: entered promiscuous mode
Dec 13 09:24:01 compute-0 NetworkManager[50376]: <info>  [1765617841.5106] manager: (tap1d60d1ee-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/682)
Dec 13 09:24:01 compute-0 ovn_controller[148476]: 2025-12-13T09:24:01Z|01642|binding|INFO|Claiming lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 for this chassis.
Dec 13 09:24:01 compute-0 ovn_controller[148476]: 2025-12-13T09:24:01Z|01643|binding|INFO|1d60d1ee-c619-439a-a2d3-0ab5e5872411: Claiming fa:16:3e:77:21:8c 10.100.0.2
Dec 13 09:24:01 compute-0 nova_compute[248510]: 2025-12-13 09:24:01.510 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:01.521 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:21:8c 10.100.0.2'], port_security=['fa:16:3e:77:21:8c 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'df092140-61e7-46dc-a59e-317f6b309e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db32bc67-75e0-4356-ad47-39387ebf5d89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b021c13b-f68e-4f70-afcb-8e734f176d38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f99370b0-f9e5-4b19-b3ce-10f6a23d7122, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1d60d1ee-c619-439a-a2d3-0ab5e5872411) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:24:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:01.522 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 in datapath db32bc67-75e0-4356-ad47-39387ebf5d89 bound to our chassis
Dec 13 09:24:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:01.523 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db32bc67-75e0-4356-ad47-39387ebf5d89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 09:24:01 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:01.524 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[877ae67d-99ab-4bd3-8bec-fef9fc573feb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:24:01 compute-0 systemd-machined[210538]: New machine qemu-184-instance-00000099.
Dec 13 09:24:01 compute-0 systemd-udevd[409588]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:24:01 compute-0 nova_compute[248510]: 2025-12-13 09:24:01.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:01 compute-0 ovn_controller[148476]: 2025-12-13T09:24:01Z|01644|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 ovn-installed in OVS
Dec 13 09:24:01 compute-0 ovn_controller[148476]: 2025-12-13T09:24:01Z|01645|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 up in Southbound
Dec 13 09:24:01 compute-0 nova_compute[248510]: 2025-12-13 09:24:01.553 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:01 compute-0 systemd[1]: Started Virtual Machine qemu-184-instance-00000099.
Dec 13 09:24:01 compute-0 NetworkManager[50376]: <info>  [1765617841.5633] device (tap1d60d1ee-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:24:01 compute-0 NetworkManager[50376]: <info>  [1765617841.5643] device (tap1d60d1ee-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.061 248514 DEBUG nova.compute.manager [req-87ca5fe7-4f97-4cdc-abcb-066387290052 req-b3f8e49d-f512-4563-b506-8e048dc89dca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.063 248514 DEBUG oslo_concurrency.lockutils [req-87ca5fe7-4f97-4cdc-abcb-066387290052 req-b3f8e49d-f512-4563-b506-8e048dc89dca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.063 248514 DEBUG oslo_concurrency.lockutils [req-87ca5fe7-4f97-4cdc-abcb-066387290052 req-b3f8e49d-f512-4563-b506-8e048dc89dca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.063 248514 DEBUG oslo_concurrency.lockutils [req-87ca5fe7-4f97-4cdc-abcb-066387290052 req-b3f8e49d-f512-4563-b506-8e048dc89dca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.063 248514 DEBUG nova.compute.manager [req-87ca5fe7-4f97-4cdc-abcb-066387290052 req-b3f8e49d-f512-4563-b506-8e048dc89dca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Processing event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 09:24:02 compute-0 ceph-mon[76537]: pgmap v3687: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.723 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617842.7225373, df092140-61e7-46dc-a59e-317f6b309e77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.724 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Started (Lifecycle Event)
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.727 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.731 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.735 248514 INFO nova.virt.libvirt.driver [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance spawned successfully.
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.735 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.750 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.755 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.770 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.770 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.771 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.771 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.772 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.772 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.779 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.780 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617842.7238724, df092140-61e7-46dc-a59e-317f6b309e77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.780 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Paused (Lifecycle Event)
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.819 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.824 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617842.7296407, df092140-61e7-46dc-a59e-317f6b309e77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.824 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Resumed (Lifecycle Event)
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.854 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.861 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.865 248514 INFO nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Took 10.25 seconds to spawn the instance on the hypervisor.
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.866 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.900 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:24:02 compute-0 nova_compute[248510]: 2025-12-13 09:24:02.948 248514 INFO nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Took 11.47 seconds to build instance.
Dec 13 09:24:03 compute-0 nova_compute[248510]: 2025-12-13 09:24:03.002 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3688: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 09:24:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:24:04 compute-0 nova_compute[248510]: 2025-12-13 09:24:04.173 248514 DEBUG nova.compute.manager [req-648adc3f-3f70-4dbe-b6f0-d83f74c0b2ef req-586fdc99-bb3f-42b7-b603-4b2dc0c90aac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:04 compute-0 nova_compute[248510]: 2025-12-13 09:24:04.174 248514 DEBUG oslo_concurrency.lockutils [req-648adc3f-3f70-4dbe-b6f0-d83f74c0b2ef req-586fdc99-bb3f-42b7-b603-4b2dc0c90aac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:04 compute-0 nova_compute[248510]: 2025-12-13 09:24:04.174 248514 DEBUG oslo_concurrency.lockutils [req-648adc3f-3f70-4dbe-b6f0-d83f74c0b2ef req-586fdc99-bb3f-42b7-b603-4b2dc0c90aac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:04 compute-0 nova_compute[248510]: 2025-12-13 09:24:04.174 248514 DEBUG oslo_concurrency.lockutils [req-648adc3f-3f70-4dbe-b6f0-d83f74c0b2ef req-586fdc99-bb3f-42b7-b603-4b2dc0c90aac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:04 compute-0 nova_compute[248510]: 2025-12-13 09:24:04.174 248514 DEBUG nova.compute.manager [req-648adc3f-3f70-4dbe-b6f0-d83f74c0b2ef req-586fdc99-bb3f-42b7-b603-4b2dc0c90aac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:24:04 compute-0 nova_compute[248510]: 2025-12-13 09:24:04.174 248514 WARNING nova.compute.manager [req-648adc3f-3f70-4dbe-b6f0-d83f74c0b2ef req-586fdc99-bb3f-42b7-b603-4b2dc0c90aac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state active and task_state None.
Dec 13 09:24:04 compute-0 ceph-mon[76537]: pgmap v3688: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 09:24:04 compute-0 nova_compute[248510]: 2025-12-13 09:24:04.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3689: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 13 09:24:05 compute-0 nova_compute[248510]: 2025-12-13 09:24:05.207 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:05 compute-0 ceph-mon[76537]: pgmap v3689: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 13 09:24:06 compute-0 nova_compute[248510]: 2025-12-13 09:24:06.244 248514 DEBUG nova.objects.instance [None req-16ec372b-22e5-4dc7-805f-14a2d81a49e3 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:24:06 compute-0 nova_compute[248510]: 2025-12-13 09:24:06.275 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617846.2748225, df092140-61e7-46dc-a59e-317f6b309e77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:24:06 compute-0 nova_compute[248510]: 2025-12-13 09:24:06.275 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Paused (Lifecycle Event)
Dec 13 09:24:06 compute-0 nova_compute[248510]: 2025-12-13 09:24:06.478 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:06 compute-0 nova_compute[248510]: 2025-12-13 09:24:06.484 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:24:06 compute-0 nova_compute[248510]: 2025-12-13 09:24:06.585 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] During sync_power_state the instance has a pending task (suspending). Skip.
Dec 13 09:24:06 compute-0 kernel: tap1d60d1ee-c6 (unregistering): left promiscuous mode
Dec 13 09:24:06 compute-0 NetworkManager[50376]: <info>  [1765617846.9833] device (tap1d60d1ee-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:24:06 compute-0 ovn_controller[148476]: 2025-12-13T09:24:06Z|01646|binding|INFO|Releasing lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 from this chassis (sb_readonly=0)
Dec 13 09:24:06 compute-0 ovn_controller[148476]: 2025-12-13T09:24:06Z|01647|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 down in Southbound
Dec 13 09:24:06 compute-0 ovn_controller[148476]: 2025-12-13T09:24:06Z|01648|binding|INFO|Removing iface tap1d60d1ee-c6 ovn-installed in OVS
Dec 13 09:24:06 compute-0 nova_compute[248510]: 2025-12-13 09:24:06.992 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:07 compute-0 nova_compute[248510]: 2025-12-13 09:24:07.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:07.046 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:21:8c 10.100.0.2'], port_security=['fa:16:3e:77:21:8c 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'df092140-61e7-46dc-a59e-317f6b309e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db32bc67-75e0-4356-ad47-39387ebf5d89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b021c13b-f68e-4f70-afcb-8e734f176d38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f99370b0-f9e5-4b19-b3ce-10f6a23d7122, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1d60d1ee-c619-439a-a2d3-0ab5e5872411) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:24:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:07.047 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 in datapath db32bc67-75e0-4356-ad47-39387ebf5d89 unbound from our chassis
Dec 13 09:24:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:07.048 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db32bc67-75e0-4356-ad47-39387ebf5d89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 09:24:07 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:07.050 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[93ad7dc5-ebf0-4446-bdf9-9322585453fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:24:07 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000099.scope: Deactivated successfully.
Dec 13 09:24:07 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000099.scope: Consumed 4.876s CPU time.
Dec 13 09:24:07 compute-0 systemd-machined[210538]: Machine qemu-184-instance-00000099 terminated.
Dec 13 09:24:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3690: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 749 KiB/s wr, 68 op/s
Dec 13 09:24:07 compute-0 nova_compute[248510]: 2025-12-13 09:24:07.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:07 compute-0 nova_compute[248510]: 2025-12-13 09:24:07.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:07 compute-0 nova_compute[248510]: 2025-12-13 09:24:07.187 248514 DEBUG nova.compute.manager [None req-16ec372b-22e5-4dc7-805f-14a2d81a49e3 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:07 compute-0 nova_compute[248510]: 2025-12-13 09:24:07.275 248514 DEBUG nova.compute.manager [req-43b016a4-02de-4686-8d51-1bc008f1e630 req-c4835f33-0db5-4bf6-b4a1-5e15a6c0d0b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:07 compute-0 nova_compute[248510]: 2025-12-13 09:24:07.275 248514 DEBUG oslo_concurrency.lockutils [req-43b016a4-02de-4686-8d51-1bc008f1e630 req-c4835f33-0db5-4bf6-b4a1-5e15a6c0d0b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:07 compute-0 nova_compute[248510]: 2025-12-13 09:24:07.275 248514 DEBUG oslo_concurrency.lockutils [req-43b016a4-02de-4686-8d51-1bc008f1e630 req-c4835f33-0db5-4bf6-b4a1-5e15a6c0d0b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:07 compute-0 nova_compute[248510]: 2025-12-13 09:24:07.276 248514 DEBUG oslo_concurrency.lockutils [req-43b016a4-02de-4686-8d51-1bc008f1e630 req-c4835f33-0db5-4bf6-b4a1-5e15a6c0d0b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:07 compute-0 nova_compute[248510]: 2025-12-13 09:24:07.276 248514 DEBUG nova.compute.manager [req-43b016a4-02de-4686-8d51-1bc008f1e630 req-c4835f33-0db5-4bf6-b4a1-5e15a6c0d0b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:24:07 compute-0 nova_compute[248510]: 2025-12-13 09:24:07.276 248514 WARNING nova.compute.manager [req-43b016a4-02de-4686-8d51-1bc008f1e630 req-c4835f33-0db5-4bf6-b4a1-5e15a6c0d0b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state suspended and task_state None.
Dec 13 09:24:08 compute-0 ceph-mon[76537]: pgmap v3690: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 749 KiB/s wr, 68 op/s
Dec 13 09:24:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:24:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3691: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 749 KiB/s wr, 135 op/s
Dec 13 09:24:09 compute-0 nova_compute[248510]: 2025-12-13 09:24:09.243 248514 INFO nova.compute.manager [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Resuming
Dec 13 09:24:09 compute-0 nova_compute[248510]: 2025-12-13 09:24:09.244 248514 DEBUG nova.objects.instance [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'flavor' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:24:09 compute-0 nova_compute[248510]: 2025-12-13 09:24:09.287 248514 DEBUG oslo_concurrency.lockutils [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:24:09 compute-0 nova_compute[248510]: 2025-12-13 09:24:09.287 248514 DEBUG oslo_concurrency.lockutils [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquired lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:24:09 compute-0 nova_compute[248510]: 2025-12-13 09:24:09.288 248514 DEBUG nova.network.neutron [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:24:09 compute-0 nova_compute[248510]: 2025-12-13 09:24:09.361 248514 DEBUG nova.compute.manager [req-bc6ba099-8d25-408e-8c9d-6737a00340d6 req-6bfec2ab-9227-4d38-a051-ecd5a640487f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:09 compute-0 nova_compute[248510]: 2025-12-13 09:24:09.362 248514 DEBUG oslo_concurrency.lockutils [req-bc6ba099-8d25-408e-8c9d-6737a00340d6 req-6bfec2ab-9227-4d38-a051-ecd5a640487f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:09 compute-0 nova_compute[248510]: 2025-12-13 09:24:09.363 248514 DEBUG oslo_concurrency.lockutils [req-bc6ba099-8d25-408e-8c9d-6737a00340d6 req-6bfec2ab-9227-4d38-a051-ecd5a640487f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:09 compute-0 nova_compute[248510]: 2025-12-13 09:24:09.363 248514 DEBUG oslo_concurrency.lockutils [req-bc6ba099-8d25-408e-8c9d-6737a00340d6 req-6bfec2ab-9227-4d38-a051-ecd5a640487f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:09 compute-0 nova_compute[248510]: 2025-12-13 09:24:09.363 248514 DEBUG nova.compute.manager [req-bc6ba099-8d25-408e-8c9d-6737a00340d6 req-6bfec2ab-9227-4d38-a051-ecd5a640487f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:24:09 compute-0 nova_compute[248510]: 2025-12-13 09:24:09.364 248514 WARNING nova.compute.manager [req-bc6ba099-8d25-408e-8c9d-6737a00340d6 req-6bfec2ab-9227-4d38-a051-ecd5a640487f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state suspended and task_state resuming.
Dec 13 09:24:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:24:09
Dec 13 09:24:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:24:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:24:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'vms', 'volumes', '.rgw.root']
Dec 13 09:24:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:24:09 compute-0 nova_compute[248510]: 2025-12-13 09:24:09.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:24:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:24:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:24:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:24:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:24:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:24:10 compute-0 nova_compute[248510]: 2025-12-13 09:24:10.209 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:10 compute-0 ceph-mon[76537]: pgmap v3691: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 749 KiB/s wr, 135 op/s
Dec 13 09:24:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:24:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:24:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:24:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:24:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:24:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:24:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:24:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:24:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:24:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:24:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3692: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 133 op/s
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.554 248514 DEBUG nova.network.neutron [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updating instance_info_cache with network_info: [{"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.575 248514 DEBUG oslo_concurrency.lockutils [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Releasing lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.585 248514 DEBUG nova.virt.libvirt.vif [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-175634348',display_name='tempest-TestServerAdvancedOps-server-175634348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-175634348',id=153,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:24:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca7e519f414149a7bdf699bbd9b4a3e9',ramdisk_id='',reservation_id='r-e7ub8c4u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1044369445',owner_user_name='tempest-TestServerAdvancedOps-1044369445-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:24:07Z,user_data=None,user_id='895926e785ed4e44b6b55b32f5fa263c',uuid=df092140-61e7-46dc-a59e-317f6b309e77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.586 248514 DEBUG nova.network.os_vif_util [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converting VIF {"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.588 248514 DEBUG nova.network.os_vif_util [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.589 248514 DEBUG os_vif [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.590 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.591 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.592 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.598 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d60d1ee-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.598 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d60d1ee-c6, col_values=(('external_ids', {'iface-id': '1d60d1ee-c619-439a-a2d3-0ab5e5872411', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:21:8c', 'vm-uuid': 'df092140-61e7-46dc-a59e-317f6b309e77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.598 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.599 248514 INFO os_vif [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6')
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.638 248514 DEBUG nova.objects.instance [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'numa_topology' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:24:11 compute-0 kernel: tap1d60d1ee-c6: entered promiscuous mode
Dec 13 09:24:11 compute-0 NetworkManager[50376]: <info>  [1765617851.7382] manager: (tap1d60d1ee-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/683)
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.795 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:11 compute-0 ovn_controller[148476]: 2025-12-13T09:24:11Z|01649|binding|INFO|Claiming lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 for this chassis.
Dec 13 09:24:11 compute-0 ovn_controller[148476]: 2025-12-13T09:24:11Z|01650|binding|INFO|1d60d1ee-c619-439a-a2d3-0ab5e5872411: Claiming fa:16:3e:77:21:8c 10.100.0.2
Dec 13 09:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:11.803 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:21:8c 10.100.0.2'], port_security=['fa:16:3e:77:21:8c 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'df092140-61e7-46dc-a59e-317f6b309e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db32bc67-75e0-4356-ad47-39387ebf5d89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b021c13b-f68e-4f70-afcb-8e734f176d38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f99370b0-f9e5-4b19-b3ce-10f6a23d7122, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1d60d1ee-c619-439a-a2d3-0ab5e5872411) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:11.804 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 in datapath db32bc67-75e0-4356-ad47-39387ebf5d89 bound to our chassis
Dec 13 09:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:11.805 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db32bc67-75e0-4356-ad47-39387ebf5d89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 09:24:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:11.806 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7327aa-53b1-4888-b09a-6eccf65ab3c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:24:11 compute-0 nova_compute[248510]: 2025-12-13 09:24:11.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:11 compute-0 ovn_controller[148476]: 2025-12-13T09:24:11Z|01651|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 ovn-installed in OVS
Dec 13 09:24:11 compute-0 ovn_controller[148476]: 2025-12-13T09:24:11Z|01652|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 up in Southbound
Dec 13 09:24:11 compute-0 systemd-udevd[409672]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:24:11 compute-0 systemd-machined[210538]: New machine qemu-185-instance-00000099.
Dec 13 09:24:11 compute-0 NetworkManager[50376]: <info>  [1765617851.8372] device (tap1d60d1ee-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:24:11 compute-0 NetworkManager[50376]: <info>  [1765617851.8384] device (tap1d60d1ee-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:24:11 compute-0 systemd[1]: Started Virtual Machine qemu-185-instance-00000099.
Dec 13 09:24:12 compute-0 nova_compute[248510]: 2025-12-13 09:24:12.035 248514 DEBUG nova.compute.manager [req-e511b1a6-b444-4c78-b81f-03dd2c2f6076 req-372bcbff-34e8-4091-9a21-7404c830f785 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:12 compute-0 nova_compute[248510]: 2025-12-13 09:24:12.036 248514 DEBUG oslo_concurrency.lockutils [req-e511b1a6-b444-4c78-b81f-03dd2c2f6076 req-372bcbff-34e8-4091-9a21-7404c830f785 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:12 compute-0 nova_compute[248510]: 2025-12-13 09:24:12.036 248514 DEBUG oslo_concurrency.lockutils [req-e511b1a6-b444-4c78-b81f-03dd2c2f6076 req-372bcbff-34e8-4091-9a21-7404c830f785 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:12 compute-0 nova_compute[248510]: 2025-12-13 09:24:12.036 248514 DEBUG oslo_concurrency.lockutils [req-e511b1a6-b444-4c78-b81f-03dd2c2f6076 req-372bcbff-34e8-4091-9a21-7404c830f785 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:12 compute-0 nova_compute[248510]: 2025-12-13 09:24:12.036 248514 DEBUG nova.compute.manager [req-e511b1a6-b444-4c78-b81f-03dd2c2f6076 req-372bcbff-34e8-4091-9a21-7404c830f785 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:24:12 compute-0 nova_compute[248510]: 2025-12-13 09:24:12.036 248514 WARNING nova.compute.manager [req-e511b1a6-b444-4c78-b81f-03dd2c2f6076 req-372bcbff-34e8-4091-9a21-7404c830f785 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state suspended and task_state resuming.
Dec 13 09:24:12 compute-0 ceph-mon[76537]: pgmap v3692: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 133 op/s
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.051 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for df092140-61e7-46dc-a59e-317f6b309e77 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.052 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617853.050872, df092140-61e7-46dc-a59e-317f6b309e77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.052 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Started (Lifecycle Event)
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.078 248514 DEBUG nova.compute.manager [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.078 248514 DEBUG nova.objects.instance [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:24:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3693: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 131 op/s
Dec 13 09:24:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.523 248514 INFO nova.virt.libvirt.driver [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance running successfully.
Dec 13 09:24:13 compute-0 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.527 248514 DEBUG nova.virt.libvirt.guest [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.528 248514 DEBUG nova.compute.manager [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.540 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.543 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.808 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617853.0567622, df092140-61e7-46dc-a59e-317f6b309e77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.809 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Resumed (Lifecycle Event)
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.914 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:13 compute-0 nova_compute[248510]: 2025-12-13 09:24:13.919 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:24:14 compute-0 nova_compute[248510]: 2025-12-13 09:24:14.175 248514 DEBUG nova.compute.manager [req-6e1400e1-c8e2-4693-b25d-b51cb8f9a8db req-7fff9f1d-8b3d-48c5-9269-fe11345ca317 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:14 compute-0 nova_compute[248510]: 2025-12-13 09:24:14.176 248514 DEBUG oslo_concurrency.lockutils [req-6e1400e1-c8e2-4693-b25d-b51cb8f9a8db req-7fff9f1d-8b3d-48c5-9269-fe11345ca317 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:14 compute-0 nova_compute[248510]: 2025-12-13 09:24:14.176 248514 DEBUG oslo_concurrency.lockutils [req-6e1400e1-c8e2-4693-b25d-b51cb8f9a8db req-7fff9f1d-8b3d-48c5-9269-fe11345ca317 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:14 compute-0 nova_compute[248510]: 2025-12-13 09:24:14.177 248514 DEBUG oslo_concurrency.lockutils [req-6e1400e1-c8e2-4693-b25d-b51cb8f9a8db req-7fff9f1d-8b3d-48c5-9269-fe11345ca317 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:14 compute-0 nova_compute[248510]: 2025-12-13 09:24:14.177 248514 DEBUG nova.compute.manager [req-6e1400e1-c8e2-4693-b25d-b51cb8f9a8db req-7fff9f1d-8b3d-48c5-9269-fe11345ca317 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:24:14 compute-0 nova_compute[248510]: 2025-12-13 09:24:14.178 248514 WARNING nova.compute.manager [req-6e1400e1-c8e2-4693-b25d-b51cb8f9a8db req-7fff9f1d-8b3d-48c5-9269-fe11345ca317 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state active and task_state None.
Dec 13 09:24:14 compute-0 ceph-mon[76537]: pgmap v3693: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 131 op/s
Dec 13 09:24:15 compute-0 nova_compute[248510]: 2025-12-13 09:24:15.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:24:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3855798119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:24:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:24:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3855798119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:24:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3694: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 132 op/s
Dec 13 09:24:15 compute-0 nova_compute[248510]: 2025-12-13 09:24:15.253 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3855798119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:24:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3855798119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:24:16 compute-0 ceph-mon[76537]: pgmap v3694: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 132 op/s
Dec 13 09:24:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3695: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 0 B/s wr, 71 op/s
Dec 13 09:24:17 compute-0 podman[409726]: 2025-12-13 09:24:17.985618203 +0000 UTC m=+0.066674242 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Dec 13 09:24:18 compute-0 podman[409725]: 2025-12-13 09:24:18.018366604 +0000 UTC m=+0.103835384 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 09:24:18 compute-0 podman[409724]: 2025-12-13 09:24:18.052548161 +0000 UTC m=+0.134030291 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 13 09:24:18 compute-0 nova_compute[248510]: 2025-12-13 09:24:18.357 248514 DEBUG nova.objects.instance [None req-862950b7-a47d-42d3-9b87-195003c43e44 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:24:18 compute-0 nova_compute[248510]: 2025-12-13 09:24:18.381 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617858.3806083, df092140-61e7-46dc-a59e-317f6b309e77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:24:18 compute-0 nova_compute[248510]: 2025-12-13 09:24:18.381 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Paused (Lifecycle Event)
Dec 13 09:24:18 compute-0 ceph-mon[76537]: pgmap v3695: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 0 B/s wr, 71 op/s
Dec 13 09:24:18 compute-0 nova_compute[248510]: 2025-12-13 09:24:18.403 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:18 compute-0 nova_compute[248510]: 2025-12-13 09:24:18.408 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:24:18 compute-0 nova_compute[248510]: 2025-12-13 09:24:18.432 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] During sync_power_state the instance has a pending task (suspending). Skip.
Dec 13 09:24:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.481454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617858481565, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1209, "num_deletes": 250, "total_data_size": 1868254, "memory_usage": 1897128, "flush_reason": "Manual Compaction"}
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617858491714, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1107138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72871, "largest_seqno": 74079, "table_properties": {"data_size": 1102717, "index_size": 1880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11750, "raw_average_key_size": 20, "raw_value_size": 1093121, "raw_average_value_size": 1924, "num_data_blocks": 85, "num_entries": 568, "num_filter_entries": 568, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617743, "oldest_key_time": 1765617743, "file_creation_time": 1765617858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 10308 microseconds, and 4189 cpu microseconds.
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.491777) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1107138 bytes OK
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.491810) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.493699) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.493713) EVENT_LOG_v1 {"time_micros": 1765617858493709, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.493737) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1862757, prev total WAL file size 1862757, number of live WAL files 2.
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.494524) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303132' seq:72057594037927935, type:22 .. '6D6772737461740033323633' seq:0, type:0; will stop at (end)
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1081KB)], [173(11MB)]
Dec 13 09:24:18 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617858494591, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 12861913, "oldest_snapshot_seqno": -1}
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9184 keys, 10116410 bytes, temperature: kUnknown
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617859006211, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10116410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10060722, "index_size": 31639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241136, "raw_average_key_size": 26, "raw_value_size": 9902867, "raw_average_value_size": 1078, "num_data_blocks": 1215, "num_entries": 9184, "num_filter_entries": 9184, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:24:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3696: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 0 B/s wr, 71 op/s
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.006641) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10116410 bytes
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.244916) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.1 rd, 19.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.2 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(20.8) write-amplify(9.1) OK, records in: 9644, records dropped: 460 output_compression: NoCompression
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.244969) EVENT_LOG_v1 {"time_micros": 1765617859244948, "job": 108, "event": "compaction_finished", "compaction_time_micros": 511745, "compaction_time_cpu_micros": 27454, "output_level": 6, "num_output_files": 1, "total_output_size": 10116410, "num_input_records": 9644, "num_output_records": 9184, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617859245603, "job": 108, "event": "table_file_deletion", "file_number": 175}
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617859249638, "job": 108, "event": "table_file_deletion", "file_number": 173}
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.494445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.249809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.249826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.249830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.249835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:24:19 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.249840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:24:19 compute-0 kernel: tap1d60d1ee-c6 (unregistering): left promiscuous mode
Dec 13 09:24:19 compute-0 NetworkManager[50376]: <info>  [1765617859.2884] device (tap1d60d1ee-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:24:19 compute-0 ovn_controller[148476]: 2025-12-13T09:24:19Z|01653|binding|INFO|Releasing lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 from this chassis (sb_readonly=0)
Dec 13 09:24:19 compute-0 ovn_controller[148476]: 2025-12-13T09:24:19Z|01654|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 down in Southbound
Dec 13 09:24:19 compute-0 ovn_controller[148476]: 2025-12-13T09:24:19Z|01655|binding|INFO|Removing iface tap1d60d1ee-c6 ovn-installed in OVS
Dec 13 09:24:19 compute-0 nova_compute[248510]: 2025-12-13 09:24:19.297 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:19 compute-0 nova_compute[248510]: 2025-12-13 09:24:19.300 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:19 compute-0 nova_compute[248510]: 2025-12-13 09:24:19.328 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:19 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000099.scope: Deactivated successfully.
Dec 13 09:24:19 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000099.scope: Consumed 6.637s CPU time.
Dec 13 09:24:19 compute-0 systemd-machined[210538]: Machine qemu-185-instance-00000099 terminated.
Dec 13 09:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:19.410 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:21:8c 10.100.0.2'], port_security=['fa:16:3e:77:21:8c 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'df092140-61e7-46dc-a59e-317f6b309e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db32bc67-75e0-4356-ad47-39387ebf5d89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b021c13b-f68e-4f70-afcb-8e734f176d38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f99370b0-f9e5-4b19-b3ce-10f6a23d7122, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1d60d1ee-c619-439a-a2d3-0ab5e5872411) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:19.411 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 in datapath db32bc67-75e0-4356-ad47-39387ebf5d89 unbound from our chassis
Dec 13 09:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:19.412 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db32bc67-75e0-4356-ad47-39387ebf5d89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 09:24:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:19.413 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[214367a9-6b07-4de3-a3ae-c38061125c74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:24:19 compute-0 nova_compute[248510]: 2025-12-13 09:24:19.496 248514 DEBUG nova.compute.manager [None req-862950b7-a47d-42d3-9b87-195003c43e44 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:19 compute-0 ceph-mon[76537]: pgmap v3696: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 0 B/s wr, 71 op/s
Dec 13 09:24:19 compute-0 nova_compute[248510]: 2025-12-13 09:24:19.996 248514 DEBUG nova.compute.manager [req-4f391dd7-762e-4e2f-b13c-43363825afd1 req-e2c64a8c-a8e0-4b93-a905-136938af881f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:19 compute-0 nova_compute[248510]: 2025-12-13 09:24:19.997 248514 DEBUG oslo_concurrency.lockutils [req-4f391dd7-762e-4e2f-b13c-43363825afd1 req-e2c64a8c-a8e0-4b93-a905-136938af881f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:19 compute-0 nova_compute[248510]: 2025-12-13 09:24:19.997 248514 DEBUG oslo_concurrency.lockutils [req-4f391dd7-762e-4e2f-b13c-43363825afd1 req-e2c64a8c-a8e0-4b93-a905-136938af881f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:19 compute-0 nova_compute[248510]: 2025-12-13 09:24:19.997 248514 DEBUG oslo_concurrency.lockutils [req-4f391dd7-762e-4e2f-b13c-43363825afd1 req-e2c64a8c-a8e0-4b93-a905-136938af881f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:19 compute-0 nova_compute[248510]: 2025-12-13 09:24:19.998 248514 DEBUG nova.compute.manager [req-4f391dd7-762e-4e2f-b13c-43363825afd1 req-e2c64a8c-a8e0-4b93-a905-136938af881f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:24:19 compute-0 nova_compute[248510]: 2025-12-13 09:24:19.998 248514 WARNING nova.compute.manager [req-4f391dd7-762e-4e2f-b13c-43363825afd1 req-e2c64a8c-a8e0-4b93-a905-136938af881f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state suspended and task_state None.
Dec 13 09:24:20 compute-0 nova_compute[248510]: 2025-12-13 09:24:20.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:20 compute-0 nova_compute[248510]: 2025-12-13 09:24:20.255 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:20 compute-0 nova_compute[248510]: 2025-12-13 09:24:20.631 248514 INFO nova.compute.manager [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Resuming
Dec 13 09:24:20 compute-0 nova_compute[248510]: 2025-12-13 09:24:20.632 248514 DEBUG nova.objects.instance [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'flavor' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:24:20 compute-0 nova_compute[248510]: 2025-12-13 09:24:20.820 248514 DEBUG oslo_concurrency.lockutils [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:24:20 compute-0 nova_compute[248510]: 2025-12-13 09:24:20.821 248514 DEBUG oslo_concurrency.lockutils [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquired lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:24:20 compute-0 nova_compute[248510]: 2025-12-13 09:24:20.821 248514 DEBUG nova.network.neutron [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3697: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036334424789835097 of space, bias 1.0, pg target 0.1090032743695053 quantized to 32 (current 32)
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696740079843619 of space, bias 1.0, pg target 0.20090220239530857 quantized to 32 (current 32)
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.719592936563563e-07 of space, bias 4.0, pg target 0.0006863511523876276 quantized to 16 (current 32)
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:24:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:24:22 compute-0 nova_compute[248510]: 2025-12-13 09:24:22.083 248514 DEBUG nova.compute.manager [req-fa4d9b73-ca16-48e5-b28c-67b94f5a3d5f req-9fc832c0-4098-4a36-ad9b-e73ce13473d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:22 compute-0 nova_compute[248510]: 2025-12-13 09:24:22.084 248514 DEBUG oslo_concurrency.lockutils [req-fa4d9b73-ca16-48e5-b28c-67b94f5a3d5f req-9fc832c0-4098-4a36-ad9b-e73ce13473d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:22 compute-0 nova_compute[248510]: 2025-12-13 09:24:22.084 248514 DEBUG oslo_concurrency.lockutils [req-fa4d9b73-ca16-48e5-b28c-67b94f5a3d5f req-9fc832c0-4098-4a36-ad9b-e73ce13473d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:22 compute-0 nova_compute[248510]: 2025-12-13 09:24:22.084 248514 DEBUG oslo_concurrency.lockutils [req-fa4d9b73-ca16-48e5-b28c-67b94f5a3d5f req-9fc832c0-4098-4a36-ad9b-e73ce13473d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:22 compute-0 nova_compute[248510]: 2025-12-13 09:24:22.084 248514 DEBUG nova.compute.manager [req-fa4d9b73-ca16-48e5-b28c-67b94f5a3d5f req-9fc832c0-4098-4a36-ad9b-e73ce13473d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:24:22 compute-0 nova_compute[248510]: 2025-12-13 09:24:22.085 248514 WARNING nova.compute.manager [req-fa4d9b73-ca16-48e5-b28c-67b94f5a3d5f req-9fc832c0-4098-4a36-ad9b-e73ce13473d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state suspended and task_state resuming.
Dec 13 09:24:22 compute-0 ceph-mon[76537]: pgmap v3697: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.154 248514 DEBUG nova.network.neutron [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updating instance_info_cache with network_info: [{"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.175 248514 DEBUG oslo_concurrency.lockutils [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Releasing lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:24:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3698: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.181 248514 DEBUG nova.virt.libvirt.vif [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-175634348',display_name='tempest-TestServerAdvancedOps-server-175634348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-175634348',id=153,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:24:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca7e519f414149a7bdf699bbd9b4a3e9',ramdisk_id='',reservation_id='r-e7ub8c4u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1044369445',owner_user_name='tempest-TestServerAdvancedOps-1044369445-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:24:19Z,user_data=None,user_id='895926e785ed4e44b6b55b32f5fa263c',uuid=df092140-61e7-46dc-a59e-317f6b309e77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.181 248514 DEBUG nova.network.os_vif_util [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converting VIF {"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.182 248514 DEBUG nova.network.os_vif_util [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.183 248514 DEBUG os_vif [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.183 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.183 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.184 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.187 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.187 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d60d1ee-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.187 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d60d1ee-c6, col_values=(('external_ids', {'iface-id': '1d60d1ee-c619-439a-a2d3-0ab5e5872411', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:21:8c', 'vm-uuid': 'df092140-61e7-46dc-a59e-317f6b309e77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.188 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.188 248514 INFO os_vif [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6')
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.210 248514 DEBUG nova.objects.instance [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'numa_topology' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:24:23 compute-0 kernel: tap1d60d1ee-c6: entered promiscuous mode
Dec 13 09:24:23 compute-0 ovn_controller[148476]: 2025-12-13T09:24:23Z|01656|binding|INFO|Claiming lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 for this chassis.
Dec 13 09:24:23 compute-0 ovn_controller[148476]: 2025-12-13T09:24:23Z|01657|binding|INFO|1d60d1ee-c619-439a-a2d3-0ab5e5872411: Claiming fa:16:3e:77:21:8c 10.100.0.2
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:23 compute-0 NetworkManager[50376]: <info>  [1765617863.2847] manager: (tap1d60d1ee-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/684)
Dec 13 09:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:23.291 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:21:8c 10.100.0.2'], port_security=['fa:16:3e:77:21:8c 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'df092140-61e7-46dc-a59e-317f6b309e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db32bc67-75e0-4356-ad47-39387ebf5d89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'b021c13b-f68e-4f70-afcb-8e734f176d38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f99370b0-f9e5-4b19-b3ce-10f6a23d7122, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1d60d1ee-c619-439a-a2d3-0ab5e5872411) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:23.292 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 in datapath db32bc67-75e0-4356-ad47-39387ebf5d89 bound to our chassis
Dec 13 09:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:23.293 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db32bc67-75e0-4356-ad47-39387ebf5d89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 09:24:23 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:23.294 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[97d3e248-e51d-49be-ae2f-f7504651f73f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.297 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:23 compute-0 ovn_controller[148476]: 2025-12-13T09:24:23Z|01658|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 ovn-installed in OVS
Dec 13 09:24:23 compute-0 ovn_controller[148476]: 2025-12-13T09:24:23Z|01659|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 up in Southbound
Dec 13 09:24:23 compute-0 nova_compute[248510]: 2025-12-13 09:24:23.299 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:23 compute-0 systemd-udevd[409823]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 09:24:23 compute-0 systemd-machined[210538]: New machine qemu-186-instance-00000099.
Dec 13 09:24:23 compute-0 NetworkManager[50376]: <info>  [1765617863.3308] device (tap1d60d1ee-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 09:24:23 compute-0 NetworkManager[50376]: <info>  [1765617863.3315] device (tap1d60d1ee-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 09:24:23 compute-0 systemd[1]: Started Virtual Machine qemu-186-instance-00000099.
Dec 13 09:24:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.061 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for df092140-61e7-46dc-a59e-317f6b309e77 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.063 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617864.0612276, df092140-61e7-46dc-a59e-317f6b309e77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.064 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Started (Lifecycle Event)
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.082 248514 DEBUG nova.compute.manager [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.083 248514 DEBUG nova.objects.instance [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.096 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.101 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.108 248514 INFO nova.virt.libvirt.driver [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance running successfully.
Dec 13 09:24:24 compute-0 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.110 248514 DEBUG nova.virt.libvirt.guest [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.111 248514 DEBUG nova.compute.manager [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.153 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] During sync_power_state the instance has a pending task (resuming). Skip.
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.154 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617864.0641851, df092140-61e7-46dc-a59e-317f6b309e77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.154 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Resumed (Lifecycle Event)
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.182 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.186 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.211 248514 DEBUG nova.compute.manager [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.212 248514 DEBUG oslo_concurrency.lockutils [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.212 248514 DEBUG oslo_concurrency.lockutils [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.212 248514 DEBUG oslo_concurrency.lockutils [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.212 248514 DEBUG nova.compute.manager [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.213 248514 WARNING nova.compute.manager [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state active and task_state None.
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.213 248514 DEBUG nova.compute.manager [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.213 248514 DEBUG oslo_concurrency.lockutils [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.214 248514 DEBUG oslo_concurrency.lockutils [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.214 248514 DEBUG oslo_concurrency.lockutils [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.214 248514 DEBUG nova.compute.manager [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:24:24 compute-0 nova_compute[248510]: 2025-12-13 09:24:24.214 248514 WARNING nova.compute.manager [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state active and task_state None.
Dec 13 09:24:24 compute-0 ceph-mon[76537]: pgmap v3698: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3699: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 7 op/s
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.257 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.275 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.276 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.277 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.277 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.277 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.279 248514 INFO nova.compute.manager [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Terminating instance
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.280 248514 DEBUG nova.compute.manager [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:24:25 compute-0 kernel: tap1d60d1ee-c6 (unregistering): left promiscuous mode
Dec 13 09:24:25 compute-0 NetworkManager[50376]: <info>  [1765617865.3221] device (tap1d60d1ee-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 09:24:25 compute-0 ovn_controller[148476]: 2025-12-13T09:24:25Z|01660|binding|INFO|Releasing lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 from this chassis (sb_readonly=0)
Dec 13 09:24:25 compute-0 ovn_controller[148476]: 2025-12-13T09:24:25Z|01661|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 down in Southbound
Dec 13 09:24:25 compute-0 ovn_controller[148476]: 2025-12-13T09:24:25Z|01662|binding|INFO|Removing iface tap1d60d1ee-c6 ovn-installed in OVS
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:25.339 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:21:8c 10.100.0.2'], port_security=['fa:16:3e:77:21:8c 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'df092140-61e7-46dc-a59e-317f6b309e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db32bc67-75e0-4356-ad47-39387ebf5d89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b021c13b-f68e-4f70-afcb-8e734f176d38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f99370b0-f9e5-4b19-b3ce-10f6a23d7122, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1d60d1ee-c619-439a-a2d3-0ab5e5872411) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:25.340 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 in datapath db32bc67-75e0-4356-ad47-39387ebf5d89 unbound from our chassis
Dec 13 09:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:25.342 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db32bc67-75e0-4356-ad47-39387ebf5d89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 09:24:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:25.343 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee87b47-2afe-4b16-9fcc-a1c9d303a70d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.348 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:25 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Deactivated successfully.
Dec 13 09:24:25 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Consumed 1.887s CPU time.
Dec 13 09:24:25 compute-0 systemd-machined[210538]: Machine qemu-186-instance-00000099 terminated.
Dec 13 09:24:25 compute-0 NetworkManager[50376]: <info>  [1765617865.5020] manager: (tap1d60d1ee-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/685)
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.517 248514 INFO nova.virt.libvirt.driver [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance destroyed successfully.
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.519 248514 DEBUG nova.objects.instance [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'resources' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.573 248514 DEBUG nova.virt.libvirt.vif [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-175634348',display_name='tempest-TestServerAdvancedOps-server-175634348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-175634348',id=153,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:24:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7e519f414149a7bdf699bbd9b4a3e9',ramdisk_id='',reservation_id='r-e7ub8c4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-1044369445',owner_user_name='tempest-TestServerAdvancedOps-1044369445-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:24:24Z,user_data=None,user_id='895926e785ed4e44b6b55b32f5fa263c',uuid=df092140-61e7-46dc-a59e-317f6b309e77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.573 248514 DEBUG nova.network.os_vif_util [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converting VIF {"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.574 248514 DEBUG nova.network.os_vif_util [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.574 248514 DEBUG os_vif [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.576 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d60d1ee-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.623 248514 INFO os_vif [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6')
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.959 248514 INFO nova.virt.libvirt.driver [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Deleting instance files /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77_del
Dec 13 09:24:25 compute-0 nova_compute[248510]: 2025-12-13 09:24:25.961 248514 INFO nova.virt.libvirt.driver [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Deletion of /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77_del complete
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.033 248514 INFO nova.compute.manager [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Took 0.75 seconds to destroy the instance on the hypervisor.
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.034 248514 DEBUG oslo.service.loopingcall [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.034 248514 DEBUG nova.compute.manager [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.034 248514 DEBUG nova.network.neutron [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.313 248514 DEBUG nova.compute.manager [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.313 248514 DEBUG oslo_concurrency.lockutils [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.314 248514 DEBUG oslo_concurrency.lockutils [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.315 248514 DEBUG oslo_concurrency.lockutils [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.315 248514 DEBUG nova.compute.manager [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.316 248514 DEBUG nova.compute.manager [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.316 248514 DEBUG nova.compute.manager [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.317 248514 DEBUG oslo_concurrency.lockutils [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.318 248514 DEBUG oslo_concurrency.lockutils [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.318 248514 DEBUG oslo_concurrency.lockutils [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.319 248514 DEBUG nova.compute.manager [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.319 248514 WARNING nova.compute.manager [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state active and task_state deleting.
Dec 13 09:24:26 compute-0 ceph-mon[76537]: pgmap v3699: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 7 op/s
Dec 13 09:24:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:26.926 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:26 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:26.928 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:24:26 compute-0 nova_compute[248510]: 2025-12-13 09:24:26.963 248514 DEBUG nova.network.neutron [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:24:27 compute-0 nova_compute[248510]: 2025-12-13 09:24:27.083 248514 DEBUG nova.compute.manager [req-489b79f1-75a6-4766-b700-ed8a8cf21932 req-e0ceab04-3097-4fa5-8777-05703033fe0a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-deleted-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 09:24:27 compute-0 nova_compute[248510]: 2025-12-13 09:24:27.084 248514 INFO nova.compute.manager [req-489b79f1-75a6-4766-b700-ed8a8cf21932 req-e0ceab04-3097-4fa5-8777-05703033fe0a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Neutron deleted interface 1d60d1ee-c619-439a-a2d3-0ab5e5872411; detaching it from the instance and deleting it from the info cache
Dec 13 09:24:27 compute-0 nova_compute[248510]: 2025-12-13 09:24:27.084 248514 DEBUG nova.network.neutron [req-489b79f1-75a6-4766-b700-ed8a8cf21932 req-e0ceab04-3097-4fa5-8777-05703033fe0a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:24:27 compute-0 nova_compute[248510]: 2025-12-13 09:24:27.172 248514 INFO nova.compute.manager [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Took 1.14 seconds to deallocate network for instance.
Dec 13 09:24:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3700: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 2 op/s
Dec 13 09:24:27 compute-0 nova_compute[248510]: 2025-12-13 09:24:27.193 248514 DEBUG nova.compute.manager [req-489b79f1-75a6-4766-b700-ed8a8cf21932 req-e0ceab04-3097-4fa5-8777-05703033fe0a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Detach interface failed, port_id=1d60d1ee-c619-439a-a2d3-0ab5e5872411, reason: Instance df092140-61e7-46dc-a59e-317f6b309e77 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 09:24:27 compute-0 nova_compute[248510]: 2025-12-13 09:24:27.464 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:27 compute-0 nova_compute[248510]: 2025-12-13 09:24:27.464 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:27 compute-0 nova_compute[248510]: 2025-12-13 09:24:27.523 248514 DEBUG oslo_concurrency.processutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:24:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:24:28 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481576824' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:24:28 compute-0 nova_compute[248510]: 2025-12-13 09:24:28.111 248514 DEBUG oslo_concurrency.processutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:24:28 compute-0 nova_compute[248510]: 2025-12-13 09:24:28.121 248514 DEBUG nova.compute.provider_tree [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:24:28 compute-0 nova_compute[248510]: 2025-12-13 09:24:28.212 248514 DEBUG nova.scheduler.client.report [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:24:28 compute-0 nova_compute[248510]: 2025-12-13 09:24:28.266 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:28 compute-0 nova_compute[248510]: 2025-12-13 09:24:28.404 248514 INFO nova.scheduler.client.report [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Deleted allocations for instance df092140-61e7-46dc-a59e-317f6b309e77
Dec 13 09:24:28 compute-0 ceph-mon[76537]: pgmap v3700: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 2 op/s
Dec 13 09:24:28 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1481576824' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:24:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:24:28 compute-0 nova_compute[248510]: 2025-12-13 09:24:28.775 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3701: 321 pgs: 321 active+clean; 47 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Dec 13 09:24:29 compute-0 nova_compute[248510]: 2025-12-13 09:24:29.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:24:30 compute-0 nova_compute[248510]: 2025-12-13 09:24:30.010 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:30 compute-0 ceph-mon[76537]: pgmap v3701: 321 pgs: 321 active+clean; 47 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Dec 13 09:24:30 compute-0 nova_compute[248510]: 2025-12-13 09:24:30.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3702: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 13 09:24:31 compute-0 nova_compute[248510]: 2025-12-13 09:24:31.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:24:31 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:31.931 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:24:32 compute-0 nova_compute[248510]: 2025-12-13 09:24:32.031 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:32 compute-0 ceph-mon[76537]: pgmap v3702: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 13 09:24:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3703: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 13 09:24:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:24:33 compute-0 ceph-mon[76537]: pgmap v3703: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 13 09:24:35 compute-0 nova_compute[248510]: 2025-12-13 09:24:35.012 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3704: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 13 09:24:35 compute-0 nova_compute[248510]: 2025-12-13 09:24:35.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:36 compute-0 ceph-mon[76537]: pgmap v3704: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 13 09:24:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3705: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 13 09:24:38 compute-0 ceph-mon[76537]: pgmap v3705: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 13 09:24:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:24:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3706: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 13 09:24:39 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 09:24:39 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 09:24:39 compute-0 nova_compute[248510]: 2025-12-13 09:24:39.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:24:39 compute-0 nova_compute[248510]: 2025-12-13 09:24:39.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:24:39 compute-0 nova_compute[248510]: 2025-12-13 09:24:39.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:24:39 compute-0 nova_compute[248510]: 2025-12-13 09:24:39.793 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:24:39 compute-0 nova_compute[248510]: 2025-12-13 09:24:39.794 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:24:40 compute-0 nova_compute[248510]: 2025-12-13 09:24:40.015 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:24:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:24:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:24:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:24:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:24:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:24:40 compute-0 ceph-mon[76537]: pgmap v3706: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 13 09:24:40 compute-0 nova_compute[248510]: 2025-12-13 09:24:40.514 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617865.5128546, df092140-61e7-46dc-a59e-317f6b309e77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:24:40 compute-0 nova_compute[248510]: 2025-12-13 09:24:40.515 248514 INFO nova.compute.manager [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Stopped (Lifecycle Event)
Dec 13 09:24:40 compute-0 nova_compute[248510]: 2025-12-13 09:24:40.539 248514 DEBUG nova.compute.manager [None req-2a383a6a-7510-4cc7-87eb-8be714742e60 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:24:40 compute-0 nova_compute[248510]: 2025-12-13 09:24:40.676 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:40 compute-0 nova_compute[248510]: 2025-12-13 09:24:40.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:24:40 compute-0 nova_compute[248510]: 2025-12-13 09:24:40.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:24:40 compute-0 nova_compute[248510]: 2025-12-13 09:24:40.813 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:40 compute-0 nova_compute[248510]: 2025-12-13 09:24:40.813 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:40 compute-0 nova_compute[248510]: 2025-12-13 09:24:40.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:40 compute-0 nova_compute[248510]: 2025-12-13 09:24:40.814 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:24:40 compute-0 nova_compute[248510]: 2025-12-13 09:24:40.814 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:24:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3707: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 8 op/s
Dec 13 09:24:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:24:41 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3194415615' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:24:41 compute-0 nova_compute[248510]: 2025-12-13 09:24:41.465 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:24:41 compute-0 nova_compute[248510]: 2025-12-13 09:24:41.709 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:24:41 compute-0 nova_compute[248510]: 2025-12-13 09:24:41.711 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3388MB free_disk=59.98738075513393GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:24:41 compute-0 nova_compute[248510]: 2025-12-13 09:24:41.711 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:41 compute-0 nova_compute[248510]: 2025-12-13 09:24:41.712 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:41 compute-0 nova_compute[248510]: 2025-12-13 09:24:41.795 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:24:41 compute-0 nova_compute[248510]: 2025-12-13 09:24:41.796 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:24:41 compute-0 nova_compute[248510]: 2025-12-13 09:24:41.818 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:24:42 compute-0 ceph-mon[76537]: pgmap v3707: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 8 op/s
Dec 13 09:24:42 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3194415615' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:24:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:24:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524129072' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:24:42 compute-0 nova_compute[248510]: 2025-12-13 09:24:42.422 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:24:42 compute-0 nova_compute[248510]: 2025-12-13 09:24:42.430 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:24:42 compute-0 nova_compute[248510]: 2025-12-13 09:24:42.589 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:24:42 compute-0 nova_compute[248510]: 2025-12-13 09:24:42.713 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:24:42 compute-0 nova_compute[248510]: 2025-12-13 09:24:42.713 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3708: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2524129072' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:24:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:24:43 compute-0 nova_compute[248510]: 2025-12-13 09:24:43.714 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:24:43 compute-0 nova_compute[248510]: 2025-12-13 09:24:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:24:43 compute-0 nova_compute[248510]: 2025-12-13 09:24:43.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:24:44 compute-0 sudo[409979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:24:44 compute-0 sudo[409979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:24:44 compute-0 sudo[409979]: pam_unix(sudo:session): session closed for user root
Dec 13 09:24:44 compute-0 sudo[410004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:24:44 compute-0 sudo[410004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:24:44 compute-0 ceph-mon[76537]: pgmap v3708: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:44 compute-0 sudo[410004]: pam_unix(sudo:session): session closed for user root
Dec 13 09:24:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:24:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:24:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:24:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:24:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:24:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:24:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:24:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:24:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:24:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:24:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:24:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:24:44 compute-0 sudo[410060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:24:44 compute-0 sudo[410060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:24:44 compute-0 sudo[410060]: pam_unix(sudo:session): session closed for user root
Dec 13 09:24:44 compute-0 sudo[410085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:24:44 compute-0 sudo[410085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:24:45 compute-0 nova_compute[248510]: 2025-12-13 09:24:45.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3709: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:45 compute-0 podman[410122]: 2025-12-13 09:24:45.231466745 +0000 UTC m=+0.055403130 container create 5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_khayyam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:24:45 compute-0 systemd[1]: Started libpod-conmon-5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc.scope.
Dec 13 09:24:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:24:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:24:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:24:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:24:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:24:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:24:45 compute-0 podman[410122]: 2025-12-13 09:24:45.203666678 +0000 UTC m=+0.027603113 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:24:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:24:45 compute-0 podman[410122]: 2025-12-13 09:24:45.345899133 +0000 UTC m=+0.169835538 container init 5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_khayyam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 09:24:45 compute-0 podman[410122]: 2025-12-13 09:24:45.357545065 +0000 UTC m=+0.181481450 container start 5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 09:24:45 compute-0 podman[410122]: 2025-12-13 09:24:45.361055693 +0000 UTC m=+0.184992078 container attach 5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:24:45 compute-0 inspiring_khayyam[410139]: 167 167
Dec 13 09:24:45 compute-0 systemd[1]: libpod-5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc.scope: Deactivated successfully.
Dec 13 09:24:45 compute-0 conmon[410139]: conmon 5d383b80a8d19b304a45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc.scope/container/memory.events
Dec 13 09:24:45 compute-0 podman[410122]: 2025-12-13 09:24:45.3657156 +0000 UTC m=+0.189652015 container died 5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:24:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-85b4e18c13b58dd0822820b68d4b62e66a89d07e60589abb2d2760fa31f89d49-merged.mount: Deactivated successfully.
Dec 13 09:24:45 compute-0 podman[410122]: 2025-12-13 09:24:45.410292357 +0000 UTC m=+0.234228742 container remove 5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 09:24:45 compute-0 systemd[1]: libpod-conmon-5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc.scope: Deactivated successfully.
Dec 13 09:24:45 compute-0 podman[410164]: 2025-12-13 09:24:45.599442209 +0000 UTC m=+0.056515798 container create 154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 09:24:45 compute-0 systemd[1]: Started libpod-conmon-154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60.scope.
Dec 13 09:24:45 compute-0 podman[410164]: 2025-12-13 09:24:45.572640147 +0000 UTC m=+0.029713826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:24:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d138ff6afd1715da7945481a53d713afab0115e4a581a7e4863cbbb0ffb55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d138ff6afd1715da7945481a53d713afab0115e4a581a7e4863cbbb0ffb55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d138ff6afd1715da7945481a53d713afab0115e4a581a7e4863cbbb0ffb55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d138ff6afd1715da7945481a53d713afab0115e4a581a7e4863cbbb0ffb55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d138ff6afd1715da7945481a53d713afab0115e4a581a7e4863cbbb0ffb55/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:45 compute-0 nova_compute[248510]: 2025-12-13 09:24:45.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:45 compute-0 podman[410164]: 2025-12-13 09:24:45.69005851 +0000 UTC m=+0.147132189 container init 154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:24:45 compute-0 podman[410164]: 2025-12-13 09:24:45.710248286 +0000 UTC m=+0.167321915 container start 154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:24:45 compute-0 podman[410164]: 2025-12-13 09:24:45.714822861 +0000 UTC m=+0.171896490 container attach 154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 13 09:24:46 compute-0 nervous_morse[410180]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:24:46 compute-0 nervous_morse[410180]: --> All data devices are unavailable
Dec 13 09:24:46 compute-0 ceph-mon[76537]: pgmap v3709: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:46 compute-0 systemd[1]: libpod-154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60.scope: Deactivated successfully.
Dec 13 09:24:46 compute-0 podman[410164]: 2025-12-13 09:24:46.341715175 +0000 UTC m=+0.798788774 container died 154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-739d138ff6afd1715da7945481a53d713afab0115e4a581a7e4863cbbb0ffb55-merged.mount: Deactivated successfully.
Dec 13 09:24:46 compute-0 podman[410164]: 2025-12-13 09:24:46.402806767 +0000 UTC m=+0.859880396 container remove 154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 09:24:46 compute-0 systemd[1]: libpod-conmon-154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60.scope: Deactivated successfully.
Dec 13 09:24:46 compute-0 sudo[410085]: pam_unix(sudo:session): session closed for user root
Dec 13 09:24:46 compute-0 sudo[410211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:24:46 compute-0 sudo[410211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:24:46 compute-0 sudo[410211]: pam_unix(sudo:session): session closed for user root
Dec 13 09:24:46 compute-0 sudo[410236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:24:46 compute-0 sudo[410236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:24:47 compute-0 podman[410274]: 2025-12-13 09:24:47.042479271 +0000 UTC m=+0.053357758 container create 6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:24:47 compute-0 systemd[1]: Started libpod-conmon-6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75.scope.
Dec 13 09:24:47 compute-0 podman[410274]: 2025-12-13 09:24:47.020056919 +0000 UTC m=+0.030935396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:24:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:24:47 compute-0 podman[410274]: 2025-12-13 09:24:47.150026677 +0000 UTC m=+0.160905164 container init 6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:24:47 compute-0 podman[410274]: 2025-12-13 09:24:47.156754386 +0000 UTC m=+0.167632853 container start 6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:24:47 compute-0 podman[410274]: 2025-12-13 09:24:47.161230278 +0000 UTC m=+0.172108845 container attach 6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:24:47 compute-0 affectionate_lovelace[410290]: 167 167
Dec 13 09:24:47 compute-0 systemd[1]: libpod-6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75.scope: Deactivated successfully.
Dec 13 09:24:47 compute-0 podman[410274]: 2025-12-13 09:24:47.164400948 +0000 UTC m=+0.175279415 container died 6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 09:24:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3710: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f34934a0d2432ae41ee7e621fc0a5729f14e91d2f04788359b98dca23e3f9586-merged.mount: Deactivated successfully.
Dec 13 09:24:47 compute-0 podman[410274]: 2025-12-13 09:24:47.249326636 +0000 UTC m=+0.260205093 container remove 6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 09:24:47 compute-0 systemd[1]: libpod-conmon-6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75.scope: Deactivated successfully.
Dec 13 09:24:47 compute-0 podman[410313]: 2025-12-13 09:24:47.46654816 +0000 UTC m=+0.054944378 container create c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 09:24:47 compute-0 systemd[1]: Started libpod-conmon-c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1.scope.
Dec 13 09:24:47 compute-0 podman[410313]: 2025-12-13 09:24:47.442338874 +0000 UTC m=+0.030735082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:24:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28474c748e1730dec57d03fd857c8ffd0b80f89264f122f17c99fb6da15dc93c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28474c748e1730dec57d03fd857c8ffd0b80f89264f122f17c99fb6da15dc93c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28474c748e1730dec57d03fd857c8ffd0b80f89264f122f17c99fb6da15dc93c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28474c748e1730dec57d03fd857c8ffd0b80f89264f122f17c99fb6da15dc93c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:47 compute-0 podman[410313]: 2025-12-13 09:24:47.581297297 +0000 UTC m=+0.169693505 container init c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 09:24:47 compute-0 podman[410313]: 2025-12-13 09:24:47.593812201 +0000 UTC m=+0.182208379 container start c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:24:47 compute-0 podman[410313]: 2025-12-13 09:24:47.59737789 +0000 UTC m=+0.185774288 container attach c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 09:24:47 compute-0 jovial_poincare[410330]: {
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:     "0": [
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:         {
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "devices": [
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "/dev/loop3"
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             ],
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_name": "ceph_lv0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_size": "21470642176",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "name": "ceph_lv0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "tags": {
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.cluster_name": "ceph",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.crush_device_class": "",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.encrypted": "0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.objectstore": "bluestore",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.osd_id": "0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.type": "block",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.vdo": "0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.with_tpm": "0"
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             },
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "type": "block",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "vg_name": "ceph_vg0"
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:         }
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:     ],
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:     "1": [
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:         {
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "devices": [
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "/dev/loop4"
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             ],
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_name": "ceph_lv1",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_size": "21470642176",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "name": "ceph_lv1",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "tags": {
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.cluster_name": "ceph",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.crush_device_class": "",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.encrypted": "0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.objectstore": "bluestore",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.osd_id": "1",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.type": "block",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.vdo": "0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.with_tpm": "0"
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             },
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "type": "block",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "vg_name": "ceph_vg1"
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:         }
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:     ],
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:     "2": [
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:         {
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "devices": [
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "/dev/loop5"
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             ],
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_name": "ceph_lv2",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_size": "21470642176",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "name": "ceph_lv2",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "tags": {
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.cluster_name": "ceph",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.crush_device_class": "",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.encrypted": "0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.objectstore": "bluestore",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.osd_id": "2",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.type": "block",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.vdo": "0",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:                 "ceph.with_tpm": "0"
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             },
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "type": "block",
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:             "vg_name": "ceph_vg2"
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:         }
Dec 13 09:24:47 compute-0 jovial_poincare[410330]:     ]
Dec 13 09:24:47 compute-0 jovial_poincare[410330]: }
Dec 13 09:24:47 compute-0 systemd[1]: libpod-c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1.scope: Deactivated successfully.
Dec 13 09:24:47 compute-0 podman[410313]: 2025-12-13 09:24:47.939146317 +0000 UTC m=+0.527542525 container died c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 09:24:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-28474c748e1730dec57d03fd857c8ffd0b80f89264f122f17c99fb6da15dc93c-merged.mount: Deactivated successfully.
Dec 13 09:24:48 compute-0 podman[410313]: 2025-12-13 09:24:48.004434214 +0000 UTC m=+0.592830402 container remove c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:24:48 compute-0 systemd[1]: libpod-conmon-c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1.scope: Deactivated successfully.
Dec 13 09:24:48 compute-0 sudo[410236]: pam_unix(sudo:session): session closed for user root
Dec 13 09:24:48 compute-0 podman[410352]: 2025-12-13 09:24:48.13993318 +0000 UTC m=+0.089156386 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:24:48 compute-0 podman[410354]: 2025-12-13 09:24:48.161691036 +0000 UTC m=+0.098535901 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:24:48 compute-0 sudo[410376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:24:48 compute-0 sudo[410376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:24:48 compute-0 sudo[410376]: pam_unix(sudo:session): session closed for user root
Dec 13 09:24:48 compute-0 sudo[410426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:24:48 compute-0 sudo[410426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:24:48 compute-0 podman[410413]: 2025-12-13 09:24:48.284146195 +0000 UTC m=+0.109383533 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:24:48 compute-0 ceph-mon[76537]: pgmap v3710: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:24:48 compute-0 podman[410479]: 2025-12-13 09:24:48.589754826 +0000 UTC m=+0.045894902 container create c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 09:24:48 compute-0 systemd[1]: Started libpod-conmon-c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843.scope.
Dec 13 09:24:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:24:48 compute-0 podman[410479]: 2025-12-13 09:24:48.57197065 +0000 UTC m=+0.028110746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:24:48 compute-0 podman[410479]: 2025-12-13 09:24:48.682031979 +0000 UTC m=+0.138172065 container init c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhabha, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 09:24:48 compute-0 podman[410479]: 2025-12-13 09:24:48.694380658 +0000 UTC m=+0.150520775 container start c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhabha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 09:24:48 compute-0 podman[410479]: 2025-12-13 09:24:48.698737558 +0000 UTC m=+0.154877644 container attach c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:24:48 compute-0 frosty_bhabha[410495]: 167 167
Dec 13 09:24:48 compute-0 systemd[1]: libpod-c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843.scope: Deactivated successfully.
Dec 13 09:24:48 compute-0 podman[410479]: 2025-12-13 09:24:48.700682586 +0000 UTC m=+0.156822662 container died c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 09:24:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f54cd06cce4bd26b185bb60227e0cde33d272b1afa3ec1970596a2fe03e9026b-merged.mount: Deactivated successfully.
Dec 13 09:24:48 compute-0 podman[410479]: 2025-12-13 09:24:48.750237289 +0000 UTC m=+0.206377365 container remove c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhabha, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 09:24:48 compute-0 systemd[1]: libpod-conmon-c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843.scope: Deactivated successfully.
Dec 13 09:24:48 compute-0 podman[410518]: 2025-12-13 09:24:48.976255154 +0000 UTC m=+0.062535728 container create af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:24:49 compute-0 systemd[1]: Started libpod-conmon-af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a.scope.
Dec 13 09:24:49 compute-0 podman[410518]: 2025-12-13 09:24:48.951774031 +0000 UTC m=+0.038054605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:24:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd38c0494318b62a04576405d015538a186f3ca34251e322b2c1b928232562e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd38c0494318b62a04576405d015538a186f3ca34251e322b2c1b928232562e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd38c0494318b62a04576405d015538a186f3ca34251e322b2c1b928232562e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd38c0494318b62a04576405d015538a186f3ca34251e322b2c1b928232562e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:24:49 compute-0 podman[410518]: 2025-12-13 09:24:49.089954764 +0000 UTC m=+0.176235378 container init af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jemison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:24:49 compute-0 podman[410518]: 2025-12-13 09:24:49.102124209 +0000 UTC m=+0.188404763 container start af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jemison, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 09:24:49 compute-0 podman[410518]: 2025-12-13 09:24:49.105603147 +0000 UTC m=+0.191883781 container attach af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:24:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3711: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:49 compute-0 lvm[410612]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:24:49 compute-0 lvm[410612]: VG ceph_vg0 finished
Dec 13 09:24:49 compute-0 lvm[410613]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:24:49 compute-0 lvm[410613]: VG ceph_vg1 finished
Dec 13 09:24:49 compute-0 lvm[410615]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:24:49 compute-0 lvm[410615]: VG ceph_vg2 finished
Dec 13 09:24:50 compute-0 goofy_jemison[410534]: {}
Dec 13 09:24:50 compute-0 systemd[1]: libpod-af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a.scope: Deactivated successfully.
Dec 13 09:24:50 compute-0 systemd[1]: libpod-af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a.scope: Consumed 1.614s CPU time.
Dec 13 09:24:50 compute-0 podman[410518]: 2025-12-13 09:24:50.059390355 +0000 UTC m=+1.145670909 container died af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 09:24:50 compute-0 nova_compute[248510]: 2025-12-13 09:24:50.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cd38c0494318b62a04576405d015538a186f3ca34251e322b2c1b928232562e-merged.mount: Deactivated successfully.
Dec 13 09:24:50 compute-0 podman[410518]: 2025-12-13 09:24:50.11145812 +0000 UTC m=+1.197738684 container remove af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jemison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:24:50 compute-0 systemd[1]: libpod-conmon-af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a.scope: Deactivated successfully.
Dec 13 09:24:50 compute-0 sudo[410426]: pam_unix(sudo:session): session closed for user root
Dec 13 09:24:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:24:50 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:24:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:24:50 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:24:50 compute-0 sudo[410629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:24:50 compute-0 sudo[410629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:24:50 compute-0 sudo[410629]: pam_unix(sudo:session): session closed for user root
Dec 13 09:24:50 compute-0 ceph-mon[76537]: pgmap v3711: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:50 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:24:50 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:24:50 compute-0 nova_compute[248510]: 2025-12-13 09:24:50.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3712: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:52 compute-0 ceph-mon[76537]: pgmap v3712: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3713: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:24:54 compute-0 ceph-mon[76537]: pgmap v3713: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:55 compute-0 nova_compute[248510]: 2025-12-13 09:24:55.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3714: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:55.460 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:24:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:55.461 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:24:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:24:55.461 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:24:55 compute-0 nova_compute[248510]: 2025-12-13 09:24:55.684 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:24:56 compute-0 ceph-mon[76537]: pgmap v3714: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:56 compute-0 nova_compute[248510]: 2025-12-13 09:24:56.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:24:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3715: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:58 compute-0 ceph-mon[76537]: pgmap v3715: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:24:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:24:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3716: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:00 compute-0 nova_compute[248510]: 2025-12-13 09:25:00.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:00 compute-0 ceph-mon[76537]: pgmap v3716: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:00 compute-0 nova_compute[248510]: 2025-12-13 09:25:00.709 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3717: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:02 compute-0 ceph-mon[76537]: pgmap v3717: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3718: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:25:04 compute-0 ceph-mon[76537]: pgmap v3718: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:05 compute-0 nova_compute[248510]: 2025-12-13 09:25:05.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3719: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:05 compute-0 nova_compute[248510]: 2025-12-13 09:25:05.712 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:06 compute-0 ceph-mon[76537]: pgmap v3719: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3720: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:08 compute-0 ceph-mon[76537]: pgmap v3720: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:25:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3721: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:25:09
Dec 13 09:25:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:25:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:25:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Dec 13 09:25:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:25:10 compute-0 nova_compute[248510]: 2025-12-13 09:25:10.075 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:25:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:25:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:25:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:25:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:25:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:25:10 compute-0 ceph-mon[76537]: pgmap v3721: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:10 compute-0 nova_compute[248510]: 2025-12-13 09:25:10.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:25:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:25:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:25:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:25:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:25:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:25:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:25:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:25:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:25:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:25:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3722: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:12 compute-0 ceph-mon[76537]: pgmap v3722: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3723: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:25:14 compute-0 ceph-mon[76537]: pgmap v3723: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:15 compute-0 nova_compute[248510]: 2025-12-13 09:25:15.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:25:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1468542223' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:25:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:25:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1468542223' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:25:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3724: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1468542223' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:25:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1468542223' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:25:15 compute-0 nova_compute[248510]: 2025-12-13 09:25:15.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:16 compute-0 ceph-mon[76537]: pgmap v3724: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3725: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:25:18 compute-0 ceph-mon[76537]: pgmap v3725: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:18 compute-0 podman[410657]: 2025-12-13 09:25:18.985412023 +0000 UTC m=+0.063290528 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:25:18 compute-0 podman[410656]: 2025-12-13 09:25:18.990304105 +0000 UTC m=+0.072123569 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=multipathd, container_name=multipathd)
Dec 13 09:25:19 compute-0 podman[410655]: 2025-12-13 09:25:19.074371262 +0000 UTC m=+0.156494283 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:25:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3726: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:19 compute-0 ceph-mon[76537]: pgmap v3726: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:20 compute-0 nova_compute[248510]: 2025-12-13 09:25:20.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:20 compute-0 nova_compute[248510]: 2025-12-13 09:25:20.666 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "958d2cbf-5c68-4aa8-88b5-117c871f8959" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:25:20 compute-0 nova_compute[248510]: 2025-12-13 09:25:20.667 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "958d2cbf-5c68-4aa8-88b5-117c871f8959" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:25:20 compute-0 nova_compute[248510]: 2025-12-13 09:25:20.684 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 09:25:20 compute-0 nova_compute[248510]: 2025-12-13 09:25:20.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:20 compute-0 nova_compute[248510]: 2025-12-13 09:25:20.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:25:20 compute-0 nova_compute[248510]: 2025-12-13 09:25:20.776 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:25:20 compute-0 nova_compute[248510]: 2025-12-13 09:25:20.777 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:25:20 compute-0 nova_compute[248510]: 2025-12-13 09:25:20.791 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 09:25:20 compute-0 nova_compute[248510]: 2025-12-13 09:25:20.792 248514 INFO nova.compute.claims [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Claim successful on node compute-0.ctlplane.example.com
Dec 13 09:25:21 compute-0 nova_compute[248510]: 2025-12-13 09:25:21.186 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3727: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.501117963890875e-05 of space, bias 1.0, pg target 0.004503353891672625 quantized to 32 (current 32)
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696679221549024 of space, bias 1.0, pg target 0.20090037664647073 quantized to 32 (current 32)
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.715245915521144e-07 of space, bias 4.0, pg target 0.0006858295098625373 quantized to 16 (current 32)
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:25:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:25:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:25:21 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653473999' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:25:21 compute-0 nova_compute[248510]: 2025-12-13 09:25:21.781 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:25:21 compute-0 nova_compute[248510]: 2025-12-13 09:25:21.789 248514 DEBUG nova.compute.provider_tree [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:25:21 compute-0 nova_compute[248510]: 2025-12-13 09:25:21.970 248514 DEBUG nova.scheduler.client.report [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:25:22 compute-0 ceph-mon[76537]: pgmap v3727: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/653473999' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:25:22 compute-0 ovn_controller[148476]: 2025-12-13T09:25:22Z|01663|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec 13 09:25:22 compute-0 nova_compute[248510]: 2025-12-13 09:25:22.537 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:25:22 compute-0 nova_compute[248510]: 2025-12-13 09:25:22.538 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 09:25:22 compute-0 nova_compute[248510]: 2025-12-13 09:25:22.683 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 09:25:22 compute-0 nova_compute[248510]: 2025-12-13 09:25:22.684 248514 DEBUG nova.network.neutron [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 09:25:22 compute-0 nova_compute[248510]: 2025-12-13 09:25:22.708 248514 INFO nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 09:25:22 compute-0 nova_compute[248510]: 2025-12-13 09:25:22.740 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.065 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.067 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.067 248514 INFO nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Creating image(s)
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.094 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.120 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.144 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.149 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:25:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3728: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.238 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.239 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.240 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.240 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.268 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.273 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.329 248514 DEBUG nova.network.neutron [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.330 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 09:25:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.655 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.724 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] resizing rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.806 248514 DEBUG nova.objects.instance [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lazy-loading 'migration_context' on Instance uuid 958d2cbf-5c68-4aa8-88b5-117c871f8959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.984 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.985 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Ensure instance console log exists: /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.985 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.986 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.986 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.987 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.992 248514 WARNING nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.997 248514 DEBUG nova.virt.libvirt.host [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 09:25:23 compute-0 nova_compute[248510]: 2025-12-13 09:25:23.998 248514 DEBUG nova.virt.libvirt.host [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.000 248514 DEBUG nova.virt.libvirt.host [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.001 248514 DEBUG nova.virt.libvirt.host [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.001 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.001 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.002 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.002 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.002 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.002 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.002 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.002 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.003 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.003 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.003 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.003 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.006 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:25:24 compute-0 ceph-mon[76537]: pgmap v3728: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:25:24 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734257284' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.580 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.609 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:25:24 compute-0 nova_compute[248510]: 2025-12-13 09:25:24.614 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:25:25 compute-0 nova_compute[248510]: 2025-12-13 09:25:25.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 09:25:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280297851' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:25:25 compute-0 nova_compute[248510]: 2025-12-13 09:25:25.189 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:25:25 compute-0 nova_compute[248510]: 2025-12-13 09:25:25.191 248514 DEBUG nova.objects.instance [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 958d2cbf-5c68-4aa8-88b5-117c871f8959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:25:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3729: 321 pgs: 321 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Dec 13 09:25:25 compute-0 nova_compute[248510]: 2025-12-13 09:25:25.304 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] End _get_guest_xml xml=<domain type="kvm">
Dec 13 09:25:25 compute-0 nova_compute[248510]:   <uuid>958d2cbf-5c68-4aa8-88b5-117c871f8959</uuid>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   <name>instance-0000009a</name>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   <memory>131072</memory>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   <vcpu>1</vcpu>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   <metadata>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <nova:name>tempest-AggregatesAdminTestJSON-server-1018707817</nova:name>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <nova:creationTime>2025-12-13 09:25:23</nova:creationTime>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <nova:flavor name="m1.nano">
Dec 13 09:25:25 compute-0 nova_compute[248510]:         <nova:memory>128</nova:memory>
Dec 13 09:25:25 compute-0 nova_compute[248510]:         <nova:disk>1</nova:disk>
Dec 13 09:25:25 compute-0 nova_compute[248510]:         <nova:swap>0</nova:swap>
Dec 13 09:25:25 compute-0 nova_compute[248510]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 09:25:25 compute-0 nova_compute[248510]:         <nova:vcpus>1</nova:vcpus>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       </nova:flavor>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <nova:owner>
Dec 13 09:25:25 compute-0 nova_compute[248510]:         <nova:user uuid="075d71b5d40a478690478a7e4dcb68bc">tempest-AggregatesAdminTestJSON-1874549376-project-member</nova:user>
Dec 13 09:25:25 compute-0 nova_compute[248510]:         <nova:project uuid="77bc8076d5d1433c8b0b9fc3dc0286e2">tempest-AggregatesAdminTestJSON-1874549376</nova:project>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       </nova:owner>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <nova:ports/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     </nova:instance>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   </metadata>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   <sysinfo type="smbios">
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <system>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <entry name="manufacturer">RDO</entry>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <entry name="product">OpenStack Compute</entry>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <entry name="serial">958d2cbf-5c68-4aa8-88b5-117c871f8959</entry>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <entry name="uuid">958d2cbf-5c68-4aa8-88b5-117c871f8959</entry>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <entry name="family">Virtual Machine</entry>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     </system>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   </sysinfo>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   <os>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <boot dev="hd"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <smbios mode="sysinfo"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   </os>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   <features>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <acpi/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <apic/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <vmcoreinfo/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   </features>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   <clock offset="utc">
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <timer name="hpet" present="no"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   </clock>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   <cpu mode="host-model" match="exact">
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   </cpu>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   <devices>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <disk type="network" device="disk">
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/958d2cbf-5c68-4aa8-88b5-117c871f8959_disk">
Dec 13 09:25:25 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       </source>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:25:25 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <target dev="vda" bus="virtio"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <disk type="network" device="cdrom">
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <driver type="raw" cache="none"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <source protocol="rbd" name="vms/958d2cbf-5c68-4aa8-88b5-117c871f8959_disk.config">
Dec 13 09:25:25 compute-0 nova_compute[248510]:         <host name="192.168.122.100" port="6789"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       </source>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <auth username="openstack">
Dec 13 09:25:25 compute-0 nova_compute[248510]:         <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       </auth>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <target dev="sda" bus="sata"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     </disk>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <serial type="pty">
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <log file="/var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/console.log" append="off"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     </serial>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <video>
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <model type="virtio"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     </video>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <input type="tablet" bus="usb"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <rng model="virtio">
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <backend model="random">/dev/urandom</backend>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     </rng>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <controller type="usb" index="0"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     <memballoon model="virtio">
Dec 13 09:25:25 compute-0 nova_compute[248510]:       <stats period="10"/>
Dec 13 09:25:25 compute-0 nova_compute[248510]:     </memballoon>
Dec 13 09:25:25 compute-0 nova_compute[248510]:   </devices>
Dec 13 09:25:25 compute-0 nova_compute[248510]: </domain>
Dec 13 09:25:25 compute-0 nova_compute[248510]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 09:25:25 compute-0 nova_compute[248510]: 2025-12-13 09:25:25.580 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:25:25 compute-0 nova_compute[248510]: 2025-12-13 09:25:25.581 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 09:25:25 compute-0 nova_compute[248510]: 2025-12-13 09:25:25.582 248514 INFO nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Using config drive
Dec 13 09:25:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/734257284' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:25:25 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3280297851' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 09:25:25 compute-0 nova_compute[248510]: 2025-12-13 09:25:25.685 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:25:25 compute-0 nova_compute[248510]: 2025-12-13 09:25:25.797 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:25 compute-0 nova_compute[248510]: 2025-12-13 09:25:25.950 248514 INFO nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Creating config drive at /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/disk.config
Dec 13 09:25:25 compute-0 nova_compute[248510]: 2025-12-13 09:25:25.959 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp402zzc4j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:25:26 compute-0 nova_compute[248510]: 2025-12-13 09:25:26.119 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp402zzc4j" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:25:26 compute-0 nova_compute[248510]: 2025-12-13 09:25:26.155 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 09:25:26 compute-0 nova_compute[248510]: 2025-12-13 09:25:26.161 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/disk.config 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:25:26 compute-0 nova_compute[248510]: 2025-12-13 09:25:26.469 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/disk.config 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:25:26 compute-0 nova_compute[248510]: 2025-12-13 09:25:26.471 248514 INFO nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Deleting local config drive /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/disk.config because it was imported into RBD.
Dec 13 09:25:26 compute-0 systemd-machined[210538]: New machine qemu-187-instance-0000009a.
Dec 13 09:25:26 compute-0 systemd[1]: Started Virtual Machine qemu-187-instance-0000009a.
Dec 13 09:25:26 compute-0 ceph-mon[76537]: pgmap v3729: 321 pgs: 321 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Dec 13 09:25:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3730: 321 pgs: 321 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Dec 13 09:25:27 compute-0 ceph-mon[76537]: pgmap v3730: 321 pgs: 321 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.640 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617927.6396694, 958d2cbf-5c68-4aa8-88b5-117c871f8959 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.641 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] VM Resumed (Lifecycle Event)
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.645 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.646 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.651 248514 INFO nova.virt.libvirt.driver [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Instance spawned successfully.
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.652 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.670 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.677 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.683 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.684 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.684 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.685 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.685 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.686 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.712 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.713 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617927.6415715, 958d2cbf-5c68-4aa8-88b5-117c871f8959 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.713 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] VM Started (Lifecycle Event)
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.877 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.882 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.907 248514 INFO nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Took 4.84 seconds to spawn the instance on the hypervisor.
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.908 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:25:27 compute-0 nova_compute[248510]: 2025-12-13 09:25:27.909 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 09:25:28 compute-0 nova_compute[248510]: 2025-12-13 09:25:28.047 248514 INFO nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Took 7.32 seconds to build instance.
Dec 13 09:25:28 compute-0 nova_compute[248510]: 2025-12-13 09:25:28.089 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "958d2cbf-5c68-4aa8-88b5-117c871f8959" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:25:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:25:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3731: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 806 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Dec 13 09:25:29 compute-0 nova_compute[248510]: 2025-12-13 09:25:29.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:25:30 compute-0 nova_compute[248510]: 2025-12-13 09:25:30.084 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:30 compute-0 ceph-mon[76537]: pgmap v3731: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 806 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Dec 13 09:25:30 compute-0 nova_compute[248510]: 2025-12-13 09:25:30.433 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "958d2cbf-5c68-4aa8-88b5-117c871f8959" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:25:30 compute-0 nova_compute[248510]: 2025-12-13 09:25:30.433 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "958d2cbf-5c68-4aa8-88b5-117c871f8959" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:25:30 compute-0 nova_compute[248510]: 2025-12-13 09:25:30.434 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "958d2cbf-5c68-4aa8-88b5-117c871f8959-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:25:30 compute-0 nova_compute[248510]: 2025-12-13 09:25:30.434 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "958d2cbf-5c68-4aa8-88b5-117c871f8959-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:25:30 compute-0 nova_compute[248510]: 2025-12-13 09:25:30.434 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "958d2cbf-5c68-4aa8-88b5-117c871f8959-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:25:30 compute-0 nova_compute[248510]: 2025-12-13 09:25:30.435 248514 INFO nova.compute.manager [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Terminating instance
Dec 13 09:25:30 compute-0 nova_compute[248510]: 2025-12-13 09:25:30.436 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "refresh_cache-958d2cbf-5c68-4aa8-88b5-117c871f8959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 09:25:30 compute-0 nova_compute[248510]: 2025-12-13 09:25:30.436 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquired lock "refresh_cache-958d2cbf-5c68-4aa8-88b5-117c871f8959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 09:25:30 compute-0 nova_compute[248510]: 2025-12-13 09:25:30.436 248514 DEBUG nova.network.neutron [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 09:25:30 compute-0 nova_compute[248510]: 2025-12-13 09:25:30.799 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:30 compute-0 nova_compute[248510]: 2025-12-13 09:25:30.967 248514 DEBUG nova.network.neutron [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:25:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3732: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Dec 13 09:25:31 compute-0 nova_compute[248510]: 2025-12-13 09:25:31.296 248514 DEBUG nova.network.neutron [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:25:31 compute-0 nova_compute[248510]: 2025-12-13 09:25:31.313 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Releasing lock "refresh_cache-958d2cbf-5c68-4aa8-88b5-117c871f8959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 09:25:31 compute-0 nova_compute[248510]: 2025-12-13 09:25:31.314 248514 DEBUG nova.compute.manager [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 09:25:31 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Dec 13 09:25:31 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Consumed 4.447s CPU time.
Dec 13 09:25:31 compute-0 systemd-machined[210538]: Machine qemu-187-instance-0000009a terminated.
Dec 13 09:25:31 compute-0 nova_compute[248510]: 2025-12-13 09:25:31.748 248514 INFO nova.virt.libvirt.driver [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Instance destroyed successfully.
Dec 13 09:25:31 compute-0 nova_compute[248510]: 2025-12-13 09:25:31.749 248514 DEBUG nova.objects.instance [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lazy-loading 'resources' on Instance uuid 958d2cbf-5c68-4aa8-88b5-117c871f8959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 09:25:31 compute-0 nova_compute[248510]: 2025-12-13 09:25:31.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:25:31 compute-0 ceph-mon[76537]: pgmap v3732: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Dec 13 09:25:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3733: 321 pgs: 321 active+clean; 85 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Dec 13 09:25:33 compute-0 nova_compute[248510]: 2025-12-13 09:25:33.282 248514 INFO nova.virt.libvirt.driver [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Deleting instance files /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959_del
Dec 13 09:25:33 compute-0 nova_compute[248510]: 2025-12-13 09:25:33.283 248514 INFO nova.virt.libvirt.driver [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Deletion of /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959_del complete
Dec 13 09:25:33 compute-0 nova_compute[248510]: 2025-12-13 09:25:33.340 248514 INFO nova.compute.manager [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Took 2.03 seconds to destroy the instance on the hypervisor.
Dec 13 09:25:33 compute-0 nova_compute[248510]: 2025-12-13 09:25:33.341 248514 DEBUG oslo.service.loopingcall [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 09:25:33 compute-0 nova_compute[248510]: 2025-12-13 09:25:33.341 248514 DEBUG nova.compute.manager [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 09:25:33 compute-0 nova_compute[248510]: 2025-12-13 09:25:33.341 248514 DEBUG nova.network.neutron [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 09:25:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:25:33 compute-0 nova_compute[248510]: 2025-12-13 09:25:33.955 248514 DEBUG nova.network.neutron [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 09:25:33 compute-0 nova_compute[248510]: 2025-12-13 09:25:33.972 248514 DEBUG nova.network.neutron [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 09:25:34 compute-0 nova_compute[248510]: 2025-12-13 09:25:34.005 248514 INFO nova.compute.manager [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Took 0.66 seconds to deallocate network for instance.
Dec 13 09:25:34 compute-0 nova_compute[248510]: 2025-12-13 09:25:34.066 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:25:34 compute-0 nova_compute[248510]: 2025-12-13 09:25:34.067 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:25:34 compute-0 ceph-mon[76537]: pgmap v3733: 321 pgs: 321 active+clean; 85 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Dec 13 09:25:34 compute-0 nova_compute[248510]: 2025-12-13 09:25:34.352 248514 DEBUG oslo_concurrency.processutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:25:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:25:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3803142436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:25:34 compute-0 nova_compute[248510]: 2025-12-13 09:25:34.972 248514 DEBUG oslo_concurrency.processutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:25:34 compute-0 nova_compute[248510]: 2025-12-13 09:25:34.981 248514 DEBUG nova.compute.provider_tree [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:25:35 compute-0 nova_compute[248510]: 2025-12-13 09:25:35.007 248514 DEBUG nova.scheduler.client.report [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:25:35 compute-0 nova_compute[248510]: 2025-12-13 09:25:35.035 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:25:35 compute-0 nova_compute[248510]: 2025-12-13 09:25:35.064 248514 INFO nova.scheduler.client.report [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Deleted allocations for instance 958d2cbf-5c68-4aa8-88b5-117c871f8959
Dec 13 09:25:35 compute-0 nova_compute[248510]: 2025-12-13 09:25:35.085 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:35 compute-0 nova_compute[248510]: 2025-12-13 09:25:35.161 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "958d2cbf-5c68-4aa8-88b5-117c871f8959" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:25:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3734: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Dec 13 09:25:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3803142436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:25:35 compute-0 nova_compute[248510]: 2025-12-13 09:25:35.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:36 compute-0 ceph-mon[76537]: pgmap v3734: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Dec 13 09:25:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3735: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 489 KiB/s wr, 113 op/s
Dec 13 09:25:37 compute-0 ceph-mon[76537]: pgmap v3735: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 489 KiB/s wr, 113 op/s
Dec 13 09:25:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:25:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3736: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 489 KiB/s wr, 114 op/s
Dec 13 09:25:39 compute-0 ceph-mon[76537]: pgmap v3736: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 489 KiB/s wr, 114 op/s
Dec 13 09:25:39 compute-0 nova_compute[248510]: 2025-12-13 09:25:39.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:25:39 compute-0 nova_compute[248510]: 2025-12-13 09:25:39.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:25:39 compute-0 nova_compute[248510]: 2025-12-13 09:25:39.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:25:39 compute-0 nova_compute[248510]: 2025-12-13 09:25:39.808 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:25:39 compute-0 sshd-session[411122]: Invalid user cosmos from 80.94.92.165 port 36744
Dec 13 09:25:40 compute-0 nova_compute[248510]: 2025-12-13 09:25:40.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:40 compute-0 sshd-session[411122]: Connection closed by invalid user cosmos 80.94.92.165 port 36744 [preauth]
Dec 13 09:25:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:25:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:25:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:25:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:25:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:25:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:25:40 compute-0 nova_compute[248510]: 2025-12-13 09:25:40.806 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3737: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 13 KiB/s wr, 65 op/s
Dec 13 09:25:41 compute-0 nova_compute[248510]: 2025-12-13 09:25:41.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:25:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:25:42.133 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:25:42 compute-0 nova_compute[248510]: 2025-12-13 09:25:42.134 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:42 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:25:42.135 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:25:42 compute-0 ceph-mon[76537]: pgmap v3737: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 13 KiB/s wr, 65 op/s
Dec 13 09:25:42 compute-0 nova_compute[248510]: 2025-12-13 09:25:42.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:25:42 compute-0 nova_compute[248510]: 2025-12-13 09:25:42.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:25:42 compute-0 nova_compute[248510]: 2025-12-13 09:25:42.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:25:42 compute-0 nova_compute[248510]: 2025-12-13 09:25:42.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:25:42 compute-0 nova_compute[248510]: 2025-12-13 09:25:42.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:25:42 compute-0 nova_compute[248510]: 2025-12-13 09:25:42.799 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:25:42 compute-0 nova_compute[248510]: 2025-12-13 09:25:42.799 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:25:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3738: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Dec 13 09:25:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:25:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2783843062' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:25:43 compute-0 nova_compute[248510]: 2025-12-13 09:25:43.375 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:25:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2783843062' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:25:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:25:43 compute-0 nova_compute[248510]: 2025-12-13 09:25:43.559 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:25:43 compute-0 nova_compute[248510]: 2025-12-13 09:25:43.560 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3437MB free_disk=59.98737745825201GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:25:43 compute-0 nova_compute[248510]: 2025-12-13 09:25:43.561 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:25:43 compute-0 nova_compute[248510]: 2025-12-13 09:25:43.561 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:25:43 compute-0 nova_compute[248510]: 2025-12-13 09:25:43.641 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:25:43 compute-0 nova_compute[248510]: 2025-12-13 09:25:43.642 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:25:43 compute-0 nova_compute[248510]: 2025-12-13 09:25:43.659 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:25:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:25:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2025571377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:25:44 compute-0 nova_compute[248510]: 2025-12-13 09:25:44.234 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:25:44 compute-0 nova_compute[248510]: 2025-12-13 09:25:44.244 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:25:44 compute-0 nova_compute[248510]: 2025-12-13 09:25:44.283 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:25:44 compute-0 nova_compute[248510]: 2025-12-13 09:25:44.331 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:25:44 compute-0 nova_compute[248510]: 2025-12-13 09:25:44.331 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:25:44 compute-0 ceph-mon[76537]: pgmap v3738: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Dec 13 09:25:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2025571377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:25:45 compute-0 nova_compute[248510]: 2025-12-13 09:25:45.090 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3739: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 938 B/s wr, 22 op/s
Dec 13 09:25:45 compute-0 nova_compute[248510]: 2025-12-13 09:25:45.333 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:25:45 compute-0 nova_compute[248510]: 2025-12-13 09:25:45.334 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:25:45 compute-0 nova_compute[248510]: 2025-12-13 09:25:45.334 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:25:45 compute-0 nova_compute[248510]: 2025-12-13 09:25:45.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:46 compute-0 ceph-mon[76537]: pgmap v3739: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 938 B/s wr, 22 op/s
Dec 13 09:25:46 compute-0 nova_compute[248510]: 2025-12-13 09:25:46.745 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617931.7435424, 958d2cbf-5c68-4aa8-88b5-117c871f8959 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 09:25:46 compute-0 nova_compute[248510]: 2025-12-13 09:25:46.745 248514 INFO nova.compute.manager [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] VM Stopped (Lifecycle Event)
Dec 13 09:25:46 compute-0 nova_compute[248510]: 2025-12-13 09:25:46.777 248514 DEBUG nova.compute.manager [None req-3b6d0744-5dbf-4d2f-a044-329a0a30429c - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 09:25:47 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:25:47.137 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:25:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3740: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec 13 09:25:47 compute-0 ceph-mon[76537]: pgmap v3740: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec 13 09:25:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:25:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3741: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec 13 09:25:50 compute-0 podman[411170]: 2025-12-13 09:25:49.999558974 +0000 UTC m=+0.081754131 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 09:25:50 compute-0 podman[411171]: 2025-12-13 09:25:50.019116934 +0000 UTC m=+0.092485870 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 09:25:50 compute-0 podman[411169]: 2025-12-13 09:25:50.033014672 +0000 UTC m=+0.111452845 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:25:50 compute-0 nova_compute[248510]: 2025-12-13 09:25:50.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:50 compute-0 ceph-mon[76537]: pgmap v3741: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec 13 09:25:50 compute-0 sudo[411231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:25:50 compute-0 sudo[411231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:25:50 compute-0 sudo[411231]: pam_unix(sudo:session): session closed for user root
Dec 13 09:25:50 compute-0 sudo[411256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:25:50 compute-0 sudo[411256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:25:50 compute-0 nova_compute[248510]: 2025-12-13 09:25:50.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:51 compute-0 sudo[411256]: pam_unix(sudo:session): session closed for user root
Dec 13 09:25:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:25:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:25:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:25:51 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:25:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:25:51 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:25:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:25:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:25:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:25:51 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:25:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:25:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:25:51 compute-0 sudo[411313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:25:51 compute-0 sudo[411313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:25:51 compute-0 sudo[411313]: pam_unix(sudo:session): session closed for user root
Dec 13 09:25:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3742: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:51 compute-0 sudo[411338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:25:51 compute-0 sudo[411338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:25:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:25:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:25:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:25:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:25:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:25:51 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:25:51 compute-0 podman[411376]: 2025-12-13 09:25:51.535650369 +0000 UTC m=+0.045338408 container create fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mayer, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 09:25:51 compute-0 systemd[1]: Started libpod-conmon-fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173.scope.
Dec 13 09:25:51 compute-0 podman[411376]: 2025-12-13 09:25:51.516517729 +0000 UTC m=+0.026205788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:25:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:25:51 compute-0 podman[411376]: 2025-12-13 09:25:51.64063384 +0000 UTC m=+0.150321899 container init fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mayer, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 09:25:51 compute-0 podman[411376]: 2025-12-13 09:25:51.649267427 +0000 UTC m=+0.158955466 container start fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mayer, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:25:51 compute-0 podman[411376]: 2025-12-13 09:25:51.652711273 +0000 UTC m=+0.162399332 container attach fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mayer, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 09:25:51 compute-0 thirsty_mayer[411392]: 167 167
Dec 13 09:25:51 compute-0 systemd[1]: libpod-fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173.scope: Deactivated successfully.
Dec 13 09:25:51 compute-0 podman[411376]: 2025-12-13 09:25:51.655914983 +0000 UTC m=+0.165603052 container died fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mayer, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 09:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2dbb4d6ccdceca01990aa13f198529ad3cb278d150389150f4bbfabc60f837f-merged.mount: Deactivated successfully.
Dec 13 09:25:51 compute-0 podman[411376]: 2025-12-13 09:25:51.701527447 +0000 UTC m=+0.211215486 container remove fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:25:51 compute-0 systemd[1]: libpod-conmon-fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173.scope: Deactivated successfully.
Dec 13 09:25:51 compute-0 podman[411416]: 2025-12-13 09:25:51.940637039 +0000 UTC m=+0.059186214 container create 50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:25:51 compute-0 systemd[1]: Started libpod-conmon-50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831.scope.
Dec 13 09:25:52 compute-0 podman[411416]: 2025-12-13 09:25:51.917848918 +0000 UTC m=+0.036398153 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:25:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d4e2c183cabcdf76539998cce31943987075f3140f1cea395657f6adbcf154/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d4e2c183cabcdf76539998cce31943987075f3140f1cea395657f6adbcf154/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d4e2c183cabcdf76539998cce31943987075f3140f1cea395657f6adbcf154/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d4e2c183cabcdf76539998cce31943987075f3140f1cea395657f6adbcf154/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d4e2c183cabcdf76539998cce31943987075f3140f1cea395657f6adbcf154/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:52 compute-0 podman[411416]: 2025-12-13 09:25:52.045115088 +0000 UTC m=+0.163664293 container init 50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 09:25:52 compute-0 podman[411416]: 2025-12-13 09:25:52.055233832 +0000 UTC m=+0.173783007 container start 50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shaw, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:25:52 compute-0 podman[411416]: 2025-12-13 09:25:52.059271653 +0000 UTC m=+0.177820828 container attach 50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 09:25:52 compute-0 ceph-mon[76537]: pgmap v3742: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:52 compute-0 crazy_shaw[411433]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:25:52 compute-0 crazy_shaw[411433]: --> All data devices are unavailable
Dec 13 09:25:52 compute-0 systemd[1]: libpod-50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831.scope: Deactivated successfully.
Dec 13 09:25:52 compute-0 podman[411416]: 2025-12-13 09:25:52.65635456 +0000 UTC m=+0.774903735 container died 50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shaw, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4d4e2c183cabcdf76539998cce31943987075f3140f1cea395657f6adbcf154-merged.mount: Deactivated successfully.
Dec 13 09:25:52 compute-0 podman[411416]: 2025-12-13 09:25:52.716655802 +0000 UTC m=+0.835204977 container remove 50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shaw, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 09:25:52 compute-0 systemd[1]: libpod-conmon-50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831.scope: Deactivated successfully.
Dec 13 09:25:52 compute-0 sudo[411338]: pam_unix(sudo:session): session closed for user root
Dec 13 09:25:52 compute-0 sudo[411466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:25:52 compute-0 sudo[411466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:25:52 compute-0 sudo[411466]: pam_unix(sudo:session): session closed for user root
Dec 13 09:25:52 compute-0 sudo[411491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:25:52 compute-0 sudo[411491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:25:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3743: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:53 compute-0 podman[411527]: 2025-12-13 09:25:53.263996292 +0000 UTC m=+0.064144829 container create e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 09:25:53 compute-0 systemd[1]: Started libpod-conmon-e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7.scope.
Dec 13 09:25:53 compute-0 podman[411527]: 2025-12-13 09:25:53.237371585 +0000 UTC m=+0.037520182 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:25:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:25:53 compute-0 podman[411527]: 2025-12-13 09:25:53.360780398 +0000 UTC m=+0.160928935 container init e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_murdock, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:25:53 compute-0 podman[411527]: 2025-12-13 09:25:53.367672981 +0000 UTC m=+0.167821498 container start e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 09:25:53 compute-0 podman[411527]: 2025-12-13 09:25:53.371321032 +0000 UTC m=+0.171469549 container attach e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_murdock, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 09:25:53 compute-0 lucid_murdock[411543]: 167 167
Dec 13 09:25:53 compute-0 systemd[1]: libpod-e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7.scope: Deactivated successfully.
Dec 13 09:25:53 compute-0 podman[411527]: 2025-12-13 09:25:53.376698227 +0000 UTC m=+0.176846734 container died e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 09:25:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3478e4047d20cbb5e9647f8a84f895551596915a0f5511e3b56ad7a4174eb38-merged.mount: Deactivated successfully.
Dec 13 09:25:53 compute-0 podman[411527]: 2025-12-13 09:25:53.415557811 +0000 UTC m=+0.215706318 container remove e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:25:53 compute-0 systemd[1]: libpod-conmon-e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7.scope: Deactivated successfully.
Dec 13 09:25:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:25:53 compute-0 podman[411565]: 2025-12-13 09:25:53.60422225 +0000 UTC m=+0.058093687 container create 97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:25:53 compute-0 systemd[1]: Started libpod-conmon-97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799.scope.
Dec 13 09:25:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:25:53 compute-0 podman[411565]: 2025-12-13 09:25:53.574991428 +0000 UTC m=+0.028862955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dcb850714c89b750f59c68808535eb555e87be946ba6d98186091955920a419/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dcb850714c89b750f59c68808535eb555e87be946ba6d98186091955920a419/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dcb850714c89b750f59c68808535eb555e87be946ba6d98186091955920a419/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dcb850714c89b750f59c68808535eb555e87be946ba6d98186091955920a419/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:53 compute-0 podman[411565]: 2025-12-13 09:25:53.692259797 +0000 UTC m=+0.146131244 container init 97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 09:25:53 compute-0 podman[411565]: 2025-12-13 09:25:53.704223187 +0000 UTC m=+0.158094634 container start 97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:25:53 compute-0 podman[411565]: 2025-12-13 09:25:53.708517265 +0000 UTC m=+0.162388692 container attach 97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 09:25:54 compute-0 brave_villani[411581]: {
Dec 13 09:25:54 compute-0 brave_villani[411581]:     "0": [
Dec 13 09:25:54 compute-0 brave_villani[411581]:         {
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "devices": [
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "/dev/loop3"
Dec 13 09:25:54 compute-0 brave_villani[411581]:             ],
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_name": "ceph_lv0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_size": "21470642176",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "name": "ceph_lv0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "tags": {
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.cluster_name": "ceph",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.crush_device_class": "",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.encrypted": "0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.objectstore": "bluestore",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.osd_id": "0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.type": "block",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.vdo": "0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.with_tpm": "0"
Dec 13 09:25:54 compute-0 brave_villani[411581]:             },
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "type": "block",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "vg_name": "ceph_vg0"
Dec 13 09:25:54 compute-0 brave_villani[411581]:         }
Dec 13 09:25:54 compute-0 brave_villani[411581]:     ],
Dec 13 09:25:54 compute-0 brave_villani[411581]:     "1": [
Dec 13 09:25:54 compute-0 brave_villani[411581]:         {
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "devices": [
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "/dev/loop4"
Dec 13 09:25:54 compute-0 brave_villani[411581]:             ],
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_name": "ceph_lv1",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_size": "21470642176",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "name": "ceph_lv1",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "tags": {
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.cluster_name": "ceph",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.crush_device_class": "",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.encrypted": "0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.objectstore": "bluestore",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.osd_id": "1",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.type": "block",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.vdo": "0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.with_tpm": "0"
Dec 13 09:25:54 compute-0 brave_villani[411581]:             },
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "type": "block",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "vg_name": "ceph_vg1"
Dec 13 09:25:54 compute-0 brave_villani[411581]:         }
Dec 13 09:25:54 compute-0 brave_villani[411581]:     ],
Dec 13 09:25:54 compute-0 brave_villani[411581]:     "2": [
Dec 13 09:25:54 compute-0 brave_villani[411581]:         {
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "devices": [
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "/dev/loop5"
Dec 13 09:25:54 compute-0 brave_villani[411581]:             ],
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_name": "ceph_lv2",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_size": "21470642176",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "name": "ceph_lv2",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "tags": {
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.cluster_name": "ceph",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.crush_device_class": "",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.encrypted": "0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.objectstore": "bluestore",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.osd_id": "2",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.type": "block",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.vdo": "0",
Dec 13 09:25:54 compute-0 brave_villani[411581]:                 "ceph.with_tpm": "0"
Dec 13 09:25:54 compute-0 brave_villani[411581]:             },
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "type": "block",
Dec 13 09:25:54 compute-0 brave_villani[411581]:             "vg_name": "ceph_vg2"
Dec 13 09:25:54 compute-0 brave_villani[411581]:         }
Dec 13 09:25:54 compute-0 brave_villani[411581]:     ]
Dec 13 09:25:54 compute-0 brave_villani[411581]: }
Dec 13 09:25:54 compute-0 systemd[1]: libpod-97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799.scope: Deactivated successfully.
Dec 13 09:25:54 compute-0 podman[411565]: 2025-12-13 09:25:54.082670023 +0000 UTC m=+0.536541450 container died 97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:25:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dcb850714c89b750f59c68808535eb555e87be946ba6d98186091955920a419-merged.mount: Deactivated successfully.
Dec 13 09:25:54 compute-0 podman[411565]: 2025-12-13 09:25:54.138833391 +0000 UTC m=+0.592704828 container remove 97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:25:54 compute-0 systemd[1]: libpod-conmon-97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799.scope: Deactivated successfully.
Dec 13 09:25:54 compute-0 sudo[411491]: pam_unix(sudo:session): session closed for user root
Dec 13 09:25:54 compute-0 sudo[411604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:25:54 compute-0 sudo[411604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:25:54 compute-0 sudo[411604]: pam_unix(sudo:session): session closed for user root
Dec 13 09:25:54 compute-0 ceph-mon[76537]: pgmap v3743: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:54 compute-0 sudo[411629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:25:54 compute-0 sudo[411629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:25:54 compute-0 podman[411667]: 2025-12-13 09:25:54.735689663 +0000 UTC m=+0.073350900 container create 7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:25:54 compute-0 systemd[1]: Started libpod-conmon-7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e.scope.
Dec 13 09:25:54 compute-0 podman[411667]: 2025-12-13 09:25:54.706210424 +0000 UTC m=+0.043871721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:25:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:25:54 compute-0 podman[411667]: 2025-12-13 09:25:54.834615732 +0000 UTC m=+0.172277029 container init 7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wright, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:25:54 compute-0 podman[411667]: 2025-12-13 09:25:54.849049274 +0000 UTC m=+0.186710501 container start 7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wright, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:25:54 compute-0 podman[411667]: 2025-12-13 09:25:54.852676715 +0000 UTC m=+0.190337932 container attach 7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wright, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:25:54 compute-0 happy_wright[411683]: 167 167
Dec 13 09:25:54 compute-0 systemd[1]: libpod-7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e.scope: Deactivated successfully.
Dec 13 09:25:54 compute-0 podman[411667]: 2025-12-13 09:25:54.858799439 +0000 UTC m=+0.196460656 container died 7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:25:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-88956d8d9841c168bed1cca6c6a029cf1365c8060bebdd223dcab78c95697cae-merged.mount: Deactivated successfully.
Dec 13 09:25:54 compute-0 podman[411667]: 2025-12-13 09:25:54.901452138 +0000 UTC m=+0.239113345 container remove 7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:25:54 compute-0 systemd[1]: libpod-conmon-7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e.scope: Deactivated successfully.
Dec 13 09:25:55 compute-0 podman[411706]: 2025-12-13 09:25:55.103477942 +0000 UTC m=+0.044970898 container create 78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gagarin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 09:25:55 compute-0 nova_compute[248510]: 2025-12-13 09:25:55.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:55 compute-0 systemd[1]: Started libpod-conmon-78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a.scope.
Dec 13 09:25:55 compute-0 podman[411706]: 2025-12-13 09:25:55.084430505 +0000 UTC m=+0.025923461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:25:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:25:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f2b9d9fdf6f6b19ab9b9f85cacd3e87aedf3d3c0b61d508bec58892e1f505f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f2b9d9fdf6f6b19ab9b9f85cacd3e87aedf3d3c0b61d508bec58892e1f505f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f2b9d9fdf6f6b19ab9b9f85cacd3e87aedf3d3c0b61d508bec58892e1f505f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f2b9d9fdf6f6b19ab9b9f85cacd3e87aedf3d3c0b61d508bec58892e1f505f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:25:55 compute-0 podman[411706]: 2025-12-13 09:25:55.216761292 +0000 UTC m=+0.158254278 container init 78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gagarin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:25:55 compute-0 podman[411706]: 2025-12-13 09:25:55.224341872 +0000 UTC m=+0.165834828 container start 78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:25:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3744: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:55 compute-0 podman[411706]: 2025-12-13 09:25:55.231772458 +0000 UTC m=+0.173265414 container attach 78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:25:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:25:55.461 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:25:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:25:55.463 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:25:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:25:55.463 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:25:55 compute-0 nova_compute[248510]: 2025-12-13 09:25:55.814 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:25:56 compute-0 lvm[411801]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:25:56 compute-0 lvm[411799]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:25:56 compute-0 lvm[411799]: VG ceph_vg0 finished
Dec 13 09:25:56 compute-0 lvm[411801]: VG ceph_vg1 finished
Dec 13 09:25:56 compute-0 lvm[411803]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:25:56 compute-0 lvm[411803]: VG ceph_vg2 finished
Dec 13 09:25:56 compute-0 sharp_gagarin[411722]: {}
Dec 13 09:25:56 compute-0 systemd[1]: libpod-78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a.scope: Deactivated successfully.
Dec 13 09:25:56 compute-0 systemd[1]: libpod-78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a.scope: Consumed 1.584s CPU time.
Dec 13 09:25:56 compute-0 podman[411706]: 2025-12-13 09:25:56.142207489 +0000 UTC m=+1.083700445 container died 78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:25:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-27f2b9d9fdf6f6b19ab9b9f85cacd3e87aedf3d3c0b61d508bec58892e1f505f-merged.mount: Deactivated successfully.
Dec 13 09:25:56 compute-0 podman[411706]: 2025-12-13 09:25:56.212100021 +0000 UTC m=+1.153592977 container remove 78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gagarin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 09:25:56 compute-0 systemd[1]: libpod-conmon-78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a.scope: Deactivated successfully.
Dec 13 09:25:56 compute-0 sudo[411629]: pam_unix(sudo:session): session closed for user root
Dec 13 09:25:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:25:56 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:25:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:25:56 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:25:56 compute-0 ceph-mon[76537]: pgmap v3744: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:56 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:25:56 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:25:56 compute-0 sudo[411817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:25:56 compute-0 sudo[411817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:25:56 compute-0 sudo[411817]: pam_unix(sudo:session): session closed for user root
Dec 13 09:25:56 compute-0 nova_compute[248510]: 2025-12-13 09:25:56.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:25:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3745: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:58 compute-0 ceph-mon[76537]: pgmap v3745: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:25:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:25:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3746: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:00 compute-0 nova_compute[248510]: 2025-12-13 09:26:00.143 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:00 compute-0 ceph-mon[76537]: pgmap v3746: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:00 compute-0 nova_compute[248510]: 2025-12-13 09:26:00.818 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3747: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.339437) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617961339472, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1092, "num_deletes": 251, "total_data_size": 1682058, "memory_usage": 1701088, "flush_reason": "Manual Compaction"}
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617961353852, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 1641299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74080, "largest_seqno": 75171, "table_properties": {"data_size": 1635982, "index_size": 2776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11377, "raw_average_key_size": 19, "raw_value_size": 1625349, "raw_average_value_size": 2821, "num_data_blocks": 124, "num_entries": 576, "num_filter_entries": 576, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617859, "oldest_key_time": 1765617859, "file_creation_time": 1765617961, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 14456 microseconds, and 6463 cpu microseconds.
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.353890) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 1641299 bytes OK
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.353911) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.355764) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.355777) EVENT_LOG_v1 {"time_micros": 1765617961355773, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.355797) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 1676971, prev total WAL file size 1676971, number of live WAL files 2.
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.356542) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(1602KB)], [176(9879KB)]
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617961356570, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 11757709, "oldest_snapshot_seqno": -1}
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9246 keys, 9916046 bytes, temperature: kUnknown
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617961419851, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 9916046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9860075, "index_size": 31730, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 243086, "raw_average_key_size": 26, "raw_value_size": 9701178, "raw_average_value_size": 1049, "num_data_blocks": 1213, "num_entries": 9246, "num_filter_entries": 9246, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617961, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.420130) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9916046 bytes
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.421434) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.5 rd, 156.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.6 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(13.2) write-amplify(6.0) OK, records in: 9760, records dropped: 514 output_compression: NoCompression
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.421449) EVENT_LOG_v1 {"time_micros": 1765617961421442, "job": 110, "event": "compaction_finished", "compaction_time_micros": 63367, "compaction_time_cpu_micros": 27207, "output_level": 6, "num_output_files": 1, "total_output_size": 9916046, "num_input_records": 9760, "num_output_records": 9246, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617961421785, "job": 110, "event": "table_file_deletion", "file_number": 178}
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617961423387, "job": 110, "event": "table_file_deletion", "file_number": 176}
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.356492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.423441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.423446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.423448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.423451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:26:01 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.423453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:26:02 compute-0 ceph-mon[76537]: pgmap v3747: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3748: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:26:04 compute-0 ceph-mon[76537]: pgmap v3748: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:04 compute-0 nova_compute[248510]: 2025-12-13 09:26:04.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:26:04 compute-0 nova_compute[248510]: 2025-12-13 09:26:04.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 09:26:05 compute-0 nova_compute[248510]: 2025-12-13 09:26:05.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3749: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:05 compute-0 nova_compute[248510]: 2025-12-13 09:26:05.821 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:06 compute-0 ceph-mon[76537]: pgmap v3749: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3750: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:07 compute-0 nova_compute[248510]: 2025-12-13 09:26:07.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:26:08 compute-0 ceph-mon[76537]: pgmap v3750: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:26:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3751: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:26:09
Dec 13 09:26:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:26:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:26:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images']
Dec 13 09:26:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:26:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:26:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:26:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:26:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:26:10 compute-0 nova_compute[248510]: 2025-12-13 09:26:10.195 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:26:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:26:10 compute-0 ceph-mon[76537]: pgmap v3751: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:10 compute-0 nova_compute[248510]: 2025-12-13 09:26:10.824 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:26:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:26:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:26:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:26:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:26:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:26:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:26:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:26:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:26:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:26:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3752: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:12 compute-0 ceph-mon[76537]: pgmap v3752: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3753: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:26:14 compute-0 ceph-mon[76537]: pgmap v3753: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:26:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3580943426' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:26:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:26:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3580943426' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:26:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3754: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:15 compute-0 nova_compute[248510]: 2025-12-13 09:26:15.240 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3580943426' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:26:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3580943426' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:26:15 compute-0 nova_compute[248510]: 2025-12-13 09:26:15.827 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:16 compute-0 ceph-mon[76537]: pgmap v3754: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3755: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:18 compute-0 ceph-mon[76537]: pgmap v3755: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:26:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3756: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:19 compute-0 ceph-mon[76537]: pgmap v3756: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:20 compute-0 nova_compute[248510]: 2025-12-13 09:26:20.241 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:20 compute-0 nova_compute[248510]: 2025-12-13 09:26:20.829 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:20 compute-0 podman[411844]: 2025-12-13 09:26:20.980021831 +0000 UTC m=+0.061861462 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 09:26:20 compute-0 podman[411843]: 2025-12-13 09:26:20.990412262 +0000 UTC m=+0.074912990 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:26:21 compute-0 podman[411842]: 2025-12-13 09:26:21.048520278 +0000 UTC m=+0.132893512 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3757: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5066138404945055e-05 of space, bias 1.0, pg target 0.004519841521483516 quantized to 32 (current 32)
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696679221549024 of space, bias 1.0, pg target 0.20090037664647073 quantized to 32 (current 32)
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.715245915521144e-07 of space, bias 4.0, pg target 0.0006858295098625373 quantized to 16 (current 32)
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:26:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:26:22 compute-0 ceph-mon[76537]: pgmap v3757: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3758: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:26:24 compute-0 ceph-mon[76537]: pgmap v3758: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3759: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:25 compute-0 nova_compute[248510]: 2025-12-13 09:26:25.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:25 compute-0 nova_compute[248510]: 2025-12-13 09:26:25.831 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:26 compute-0 ceph-mon[76537]: pgmap v3759: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3760: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:28 compute-0 ceph-mon[76537]: pgmap v3760: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:26:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3761: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:29 compute-0 nova_compute[248510]: 2025-12-13 09:26:29.794 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:26:30 compute-0 nova_compute[248510]: 2025-12-13 09:26:30.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:30 compute-0 ceph-mon[76537]: pgmap v3761: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:30 compute-0 nova_compute[248510]: 2025-12-13 09:26:30.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3762: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:31 compute-0 ceph-mon[76537]: pgmap v3762: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:32 compute-0 nova_compute[248510]: 2025-12-13 09:26:32.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:26:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3763: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:26:34 compute-0 ceph-mon[76537]: pgmap v3763: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:35 compute-0 nova_compute[248510]: 2025-12-13 09:26:35.249 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3764: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:35 compute-0 nova_compute[248510]: 2025-12-13 09:26:35.837 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:36 compute-0 ceph-mon[76537]: pgmap v3764: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3765: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:38 compute-0 ceph-mon[76537]: pgmap v3765: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:26:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3766: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:26:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:26:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:26:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:26:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:26:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:26:40 compute-0 nova_compute[248510]: 2025-12-13 09:26:40.297 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:40 compute-0 ceph-mon[76537]: pgmap v3766: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:40 compute-0 nova_compute[248510]: 2025-12-13 09:26:40.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:26:40 compute-0 nova_compute[248510]: 2025-12-13 09:26:40.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:26:40 compute-0 nova_compute[248510]: 2025-12-13 09:26:40.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:26:40 compute-0 nova_compute[248510]: 2025-12-13 09:26:40.814 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:26:40 compute-0 nova_compute[248510]: 2025-12-13 09:26:40.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3767: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:41 compute-0 nova_compute[248510]: 2025-12-13 09:26:41.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:26:41 compute-0 nova_compute[248510]: 2025-12-13 09:26:41.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:26:41 compute-0 nova_compute[248510]: 2025-12-13 09:26:41.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 09:26:41 compute-0 nova_compute[248510]: 2025-12-13 09:26:41.809 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 09:26:42 compute-0 ceph-mon[76537]: pgmap v3767: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:42 compute-0 nova_compute[248510]: 2025-12-13 09:26:42.810 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:26:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3768: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 13 09:26:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Dec 13 09:26:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Dec 13 09:26:43 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Dec 13 09:26:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:26:44 compute-0 ceph-mon[76537]: pgmap v3768: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 13 09:26:44 compute-0 ceph-mon[76537]: osdmap e313: 3 total, 3 up, 3 in
Dec 13 09:26:44 compute-0 nova_compute[248510]: 2025-12-13 09:26:44.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:26:44 compute-0 nova_compute[248510]: 2025-12-13 09:26:44.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:26:44 compute-0 nova_compute[248510]: 2025-12-13 09:26:44.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:26:44 compute-0 nova_compute[248510]: 2025-12-13 09:26:44.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:26:44 compute-0 nova_compute[248510]: 2025-12-13 09:26:44.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:26:44 compute-0 nova_compute[248510]: 2025-12-13 09:26:44.804 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:26:44 compute-0 nova_compute[248510]: 2025-12-13 09:26:44.804 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:26:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3770: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 614 B/s wr, 12 op/s
Dec 13 09:26:45 compute-0 nova_compute[248510]: 2025-12-13 09:26:45.298 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:26:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664761174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:26:45 compute-0 nova_compute[248510]: 2025-12-13 09:26:45.358 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:26:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/664761174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:26:45 compute-0 nova_compute[248510]: 2025-12-13 09:26:45.547 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:26:45 compute-0 nova_compute[248510]: 2025-12-13 09:26:45.548 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3475MB free_disk=59.98737745825201GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:26:45 compute-0 nova_compute[248510]: 2025-12-13 09:26:45.549 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:26:45 compute-0 nova_compute[248510]: 2025-12-13 09:26:45.549 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:26:45 compute-0 nova_compute[248510]: 2025-12-13 09:26:45.618 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:26:45 compute-0 nova_compute[248510]: 2025-12-13 09:26:45.619 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:26:45 compute-0 nova_compute[248510]: 2025-12-13 09:26:45.642 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:26:45 compute-0 nova_compute[248510]: 2025-12-13 09:26:45.841 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:26:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1736735145' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:26:46 compute-0 nova_compute[248510]: 2025-12-13 09:26:46.194 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:26:46 compute-0 nova_compute[248510]: 2025-12-13 09:26:46.201 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:26:46 compute-0 nova_compute[248510]: 2025-12-13 09:26:46.224 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:26:46 compute-0 nova_compute[248510]: 2025-12-13 09:26:46.228 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:26:46 compute-0 nova_compute[248510]: 2025-12-13 09:26:46.228 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:26:46 compute-0 ceph-mon[76537]: pgmap v3770: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 614 B/s wr, 12 op/s
Dec 13 09:26:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1736735145' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:26:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3771: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 614 B/s wr, 12 op/s
Dec 13 09:26:47 compute-0 ceph-mon[76537]: pgmap v3771: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 614 B/s wr, 12 op/s
Dec 13 09:26:48 compute-0 nova_compute[248510]: 2025-12-13 09:26:48.230 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:26:48 compute-0 nova_compute[248510]: 2025-12-13 09:26:48.231 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:26:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:26:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3772: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 09:26:50 compute-0 nova_compute[248510]: 2025-12-13 09:26:50.300 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:50 compute-0 ceph-mon[76537]: pgmap v3772: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 09:26:50 compute-0 nova_compute[248510]: 2025-12-13 09:26:50.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3773: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 09:26:52 compute-0 podman[411950]: 2025-12-13 09:26:52.01598819 +0000 UTC m=+0.085853783 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:26:52 compute-0 podman[411949]: 2025-12-13 09:26:52.018915954 +0000 UTC m=+0.091977007 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:26:52 compute-0 podman[411948]: 2025-12-13 09:26:52.026946115 +0000 UTC m=+0.103644849 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:26:52 compute-0 ceph-mon[76537]: pgmap v3773: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 09:26:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3774: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 09:26:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:26:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Dec 13 09:26:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Dec 13 09:26:54 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Dec 13 09:26:54 compute-0 ceph-mon[76537]: pgmap v3774: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 09:26:54 compute-0 ceph-mon[76537]: osdmap e314: 3 total, 3 up, 3 in
Dec 13 09:26:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3776: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 818 B/s wr, 12 op/s
Dec 13 09:26:55 compute-0 nova_compute[248510]: 2025-12-13 09:26:55.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:26:55.463 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:26:55.463 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:26:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:26:55.463 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:26:55 compute-0 nova_compute[248510]: 2025-12-13 09:26:55.848 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:26:56 compute-0 ceph-mon[76537]: pgmap v3776: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 818 B/s wr, 12 op/s
Dec 13 09:26:56 compute-0 sudo[412009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:26:56 compute-0 sudo[412009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:26:56 compute-0 sudo[412009]: pam_unix(sudo:session): session closed for user root
Dec 13 09:26:56 compute-0 sudo[412034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 09:26:56 compute-0 sudo[412034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:26:56 compute-0 sudo[412034]: pam_unix(sudo:session): session closed for user root
Dec 13 09:26:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:26:56 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:26:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:26:56 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:26:56 compute-0 sudo[412080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:26:56 compute-0 sudo[412080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:26:56 compute-0 sudo[412080]: pam_unix(sudo:session): session closed for user root
Dec 13 09:26:56 compute-0 sudo[412105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:26:57 compute-0 sudo[412105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:26:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3777: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 818 B/s wr, 12 op/s
Dec 13 09:26:57 compute-0 sudo[412105]: pam_unix(sudo:session): session closed for user root
Dec 13 09:26:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:26:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:26:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:26:57 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:26:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:26:57 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:26:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:26:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:26:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:26:57 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:26:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:26:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:26:57 compute-0 sudo[412160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:26:57 compute-0 sudo[412160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:26:57 compute-0 sudo[412160]: pam_unix(sudo:session): session closed for user root
Dec 13 09:26:57 compute-0 sudo[412185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:26:57 compute-0 sudo[412185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:26:57 compute-0 nova_compute[248510]: 2025-12-13 09:26:57.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:26:57 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:26:57 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:26:57 compute-0 ceph-mon[76537]: pgmap v3777: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 818 B/s wr, 12 op/s
Dec 13 09:26:57 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:26:57 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:26:57 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:26:57 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:26:57 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:26:57 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:26:58 compute-0 podman[412223]: 2025-12-13 09:26:58.015424175 +0000 UTC m=+0.045039810 container create 59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec 13 09:26:58 compute-0 systemd[1]: Started libpod-conmon-59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381.scope.
Dec 13 09:26:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:26:58 compute-0 podman[412223]: 2025-12-13 09:26:57.996922661 +0000 UTC m=+0.026538326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:26:58 compute-0 podman[412223]: 2025-12-13 09:26:58.10660191 +0000 UTC m=+0.136217625 container init 59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:26:58 compute-0 podman[412223]: 2025-12-13 09:26:58.120223772 +0000 UTC m=+0.149839407 container start 59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:26:58 compute-0 podman[412223]: 2025-12-13 09:26:58.125121885 +0000 UTC m=+0.154737540 container attach 59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 09:26:58 compute-0 interesting_shirley[412240]: 167 167
Dec 13 09:26:58 compute-0 systemd[1]: libpod-59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381.scope: Deactivated successfully.
Dec 13 09:26:58 compute-0 podman[412223]: 2025-12-13 09:26:58.131995837 +0000 UTC m=+0.161611472 container died 59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:26:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca4842a20d35c399b7b92c15707803d81c57314ea9f219116971d5e02296bab0-merged.mount: Deactivated successfully.
Dec 13 09:26:58 compute-0 podman[412223]: 2025-12-13 09:26:58.187315514 +0000 UTC m=+0.216931149 container remove 59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 09:26:58 compute-0 systemd[1]: libpod-conmon-59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381.scope: Deactivated successfully.
Dec 13 09:26:58 compute-0 podman[412266]: 2025-12-13 09:26:58.405725868 +0000 UTC m=+0.059471761 container create 08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:26:58 compute-0 systemd[1]: Started libpod-conmon-08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568.scope.
Dec 13 09:26:58 compute-0 podman[412266]: 2025-12-13 09:26:58.377167253 +0000 UTC m=+0.030913226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:26:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0337e6097758149f8499fa2827bcbba50d5d02c87a0c2fa70f5f9f0c49943c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0337e6097758149f8499fa2827bcbba50d5d02c87a0c2fa70f5f9f0c49943c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0337e6097758149f8499fa2827bcbba50d5d02c87a0c2fa70f5f9f0c49943c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0337e6097758149f8499fa2827bcbba50d5d02c87a0c2fa70f5f9f0c49943c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0337e6097758149f8499fa2827bcbba50d5d02c87a0c2fa70f5f9f0c49943c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:26:58 compute-0 podman[412266]: 2025-12-13 09:26:58.51271812 +0000 UTC m=+0.166464033 container init 08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:26:58 compute-0 podman[412266]: 2025-12-13 09:26:58.520307441 +0000 UTC m=+0.174053324 container start 08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 09:26:58 compute-0 podman[412266]: 2025-12-13 09:26:58.524749862 +0000 UTC m=+0.178495735 container attach 08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 09:26:58 compute-0 strange_lamarr[412283]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:26:58 compute-0 strange_lamarr[412283]: --> All data devices are unavailable
Dec 13 09:26:59 compute-0 systemd[1]: libpod-08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568.scope: Deactivated successfully.
Dec 13 09:26:59 compute-0 podman[412266]: 2025-12-13 09:26:59.002025636 +0000 UTC m=+0.655771539 container died 08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 09:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0337e6097758149f8499fa2827bcbba50d5d02c87a0c2fa70f5f9f0c49943c0-merged.mount: Deactivated successfully.
Dec 13 09:26:59 compute-0 podman[412266]: 2025-12-13 09:26:59.051825184 +0000 UTC m=+0.705571067 container remove 08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:26:59 compute-0 systemd[1]: libpod-conmon-08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568.scope: Deactivated successfully.
Dec 13 09:26:59 compute-0 sudo[412185]: pam_unix(sudo:session): session closed for user root
Dec 13 09:26:59 compute-0 sudo[412314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:26:59 compute-0 sudo[412314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:26:59 compute-0 sudo[412314]: pam_unix(sudo:session): session closed for user root
Dec 13 09:26:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:26:59 compute-0 sudo[412339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:26:59 compute-0 sudo[412339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:26:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3778: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:26:59 compute-0 podman[412376]: 2025-12-13 09:26:59.559230353 +0000 UTC m=+0.045559673 container create b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:26:59 compute-0 systemd[1]: Started libpod-conmon-b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657.scope.
Dec 13 09:26:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:26:59 compute-0 podman[412376]: 2025-12-13 09:26:59.538714469 +0000 UTC m=+0.025043819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:26:59 compute-0 podman[412376]: 2025-12-13 09:26:59.651185048 +0000 UTC m=+0.137514468 container init b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 09:26:59 compute-0 podman[412376]: 2025-12-13 09:26:59.659082006 +0000 UTC m=+0.145411326 container start b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:26:59 compute-0 podman[412376]: 2025-12-13 09:26:59.663182289 +0000 UTC m=+0.149511609 container attach b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hopper, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:26:59 compute-0 systemd[1]: libpod-b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657.scope: Deactivated successfully.
Dec 13 09:26:59 compute-0 upbeat_hopper[412392]: 167 167
Dec 13 09:26:59 compute-0 conmon[412392]: conmon b00ca2e1f6c8c764efd9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657.scope/container/memory.events
Dec 13 09:26:59 compute-0 podman[412376]: 2025-12-13 09:26:59.666353989 +0000 UTC m=+0.152683319 container died b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hopper, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d7c5ce483511536244761534d6305b73ff3e277e3480001313346be9a51530c-merged.mount: Deactivated successfully.
Dec 13 09:26:59 compute-0 podman[412376]: 2025-12-13 09:26:59.706375242 +0000 UTC m=+0.192704562 container remove b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hopper, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 09:26:59 compute-0 systemd[1]: libpod-conmon-b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657.scope: Deactivated successfully.
Dec 13 09:26:59 compute-0 podman[412416]: 2025-12-13 09:26:59.857494709 +0000 UTC m=+0.038232539 container create d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mendel, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Dec 13 09:26:59 compute-0 systemd[1]: Started libpod-conmon-d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3.scope.
Dec 13 09:26:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9cb296990ef03f60ee99ba5352894ae6bd1138cc7167c157cb0f814544cf76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9cb296990ef03f60ee99ba5352894ae6bd1138cc7167c157cb0f814544cf76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9cb296990ef03f60ee99ba5352894ae6bd1138cc7167c157cb0f814544cf76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9cb296990ef03f60ee99ba5352894ae6bd1138cc7167c157cb0f814544cf76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:26:59 compute-0 podman[412416]: 2025-12-13 09:26:59.840692488 +0000 UTC m=+0.021430338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:26:59 compute-0 podman[412416]: 2025-12-13 09:26:59.939879734 +0000 UTC m=+0.120617594 container init d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 09:26:59 compute-0 podman[412416]: 2025-12-13 09:26:59.947028603 +0000 UTC m=+0.127766433 container start d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mendel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Dec 13 09:26:59 compute-0 podman[412416]: 2025-12-13 09:26:59.95047333 +0000 UTC m=+0.131211190 container attach d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 09:27:00 compute-0 romantic_mendel[412433]: {
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:     "0": [
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:         {
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "devices": [
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "/dev/loop3"
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             ],
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_name": "ceph_lv0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_size": "21470642176",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "name": "ceph_lv0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "tags": {
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.cluster_name": "ceph",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.crush_device_class": "",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.encrypted": "0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.objectstore": "bluestore",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.osd_id": "0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.type": "block",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.vdo": "0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.with_tpm": "0"
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             },
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "type": "block",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "vg_name": "ceph_vg0"
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:         }
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:     ],
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:     "1": [
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:         {
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "devices": [
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "/dev/loop4"
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             ],
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_name": "ceph_lv1",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_size": "21470642176",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "name": "ceph_lv1",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "tags": {
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.cluster_name": "ceph",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.crush_device_class": "",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.encrypted": "0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.objectstore": "bluestore",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.osd_id": "1",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.type": "block",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.vdo": "0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.with_tpm": "0"
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             },
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "type": "block",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "vg_name": "ceph_vg1"
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:         }
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:     ],
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:     "2": [
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:         {
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "devices": [
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "/dev/loop5"
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             ],
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_name": "ceph_lv2",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_size": "21470642176",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "name": "ceph_lv2",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "tags": {
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.cluster_name": "ceph",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.crush_device_class": "",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.encrypted": "0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.objectstore": "bluestore",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.osd_id": "2",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.type": "block",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.vdo": "0",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:                 "ceph.with_tpm": "0"
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             },
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "type": "block",
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:             "vg_name": "ceph_vg2"
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:         }
Dec 13 09:27:00 compute-0 romantic_mendel[412433]:     ]
Dec 13 09:27:00 compute-0 romantic_mendel[412433]: }
Dec 13 09:27:00 compute-0 systemd[1]: libpod-d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3.scope: Deactivated successfully.
Dec 13 09:27:00 compute-0 podman[412416]: 2025-12-13 09:27:00.271134598 +0000 UTC m=+0.451872438 container died d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mendel, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 09:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-de9cb296990ef03f60ee99ba5352894ae6bd1138cc7167c157cb0f814544cf76-merged.mount: Deactivated successfully.
Dec 13 09:27:00 compute-0 nova_compute[248510]: 2025-12-13 09:27:00.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:00 compute-0 podman[412416]: 2025-12-13 09:27:00.314343311 +0000 UTC m=+0.495081171 container remove d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:27:00 compute-0 ceph-mon[76537]: pgmap v3778: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:00 compute-0 systemd[1]: libpod-conmon-d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3.scope: Deactivated successfully.
Dec 13 09:27:00 compute-0 sudo[412339]: pam_unix(sudo:session): session closed for user root
Dec 13 09:27:00 compute-0 sudo[412452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:27:00 compute-0 sudo[412452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:27:00 compute-0 sudo[412452]: pam_unix(sudo:session): session closed for user root
Dec 13 09:27:00 compute-0 sudo[412477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:27:00 compute-0 sudo[412477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:27:00 compute-0 podman[412514]: 2025-12-13 09:27:00.768588057 +0000 UTC m=+0.039596353 container create 869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:27:00 compute-0 systemd[1]: Started libpod-conmon-869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6.scope.
Dec 13 09:27:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:27:00 compute-0 podman[412514]: 2025-12-13 09:27:00.749775356 +0000 UTC m=+0.020783672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:27:00 compute-0 nova_compute[248510]: 2025-12-13 09:27:00.851 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:00 compute-0 podman[412514]: 2025-12-13 09:27:00.853988198 +0000 UTC m=+0.124996494 container init 869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 09:27:00 compute-0 podman[412514]: 2025-12-13 09:27:00.861437235 +0000 UTC m=+0.132445531 container start 869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 09:27:00 compute-0 podman[412514]: 2025-12-13 09:27:00.865971899 +0000 UTC m=+0.136980215 container attach 869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 09:27:00 compute-0 suspicious_aryabhata[412530]: 167 167
Dec 13 09:27:00 compute-0 systemd[1]: libpod-869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6.scope: Deactivated successfully.
Dec 13 09:27:00 compute-0 podman[412514]: 2025-12-13 09:27:00.867157808 +0000 UTC m=+0.138166134 container died 869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 09:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb3aa2a8cb09b9115ada6fec002b12b9fd14bebaa2c0a158be65852e71c4ab69-merged.mount: Deactivated successfully.
Dec 13 09:27:00 compute-0 podman[412514]: 2025-12-13 09:27:00.911953401 +0000 UTC m=+0.182961697 container remove 869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:27:00 compute-0 systemd[1]: libpod-conmon-869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6.scope: Deactivated successfully.
Dec 13 09:27:01 compute-0 podman[412553]: 2025-12-13 09:27:01.081192724 +0000 UTC m=+0.044209329 container create 5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:27:01 compute-0 systemd[1]: Started libpod-conmon-5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d.scope.
Dec 13 09:27:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:27:01 compute-0 podman[412553]: 2025-12-13 09:27:01.060799413 +0000 UTC m=+0.023816038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6414c72e85f9049109e7658633dbe482a643e52ed6488f961e53b436b6056/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6414c72e85f9049109e7658633dbe482a643e52ed6488f961e53b436b6056/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6414c72e85f9049109e7658633dbe482a643e52ed6488f961e53b436b6056/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6414c72e85f9049109e7658633dbe482a643e52ed6488f961e53b436b6056/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:27:01 compute-0 podman[412553]: 2025-12-13 09:27:01.177780695 +0000 UTC m=+0.140797300 container init 5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 09:27:01 compute-0 podman[412553]: 2025-12-13 09:27:01.188856013 +0000 UTC m=+0.151872618 container start 5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_johnson, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 09:27:01 compute-0 podman[412553]: 2025-12-13 09:27:01.192954416 +0000 UTC m=+0.155971041 container attach 5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_johnson, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:27:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3779: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:01 compute-0 lvm[412647]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:27:01 compute-0 lvm[412648]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:27:01 compute-0 lvm[412648]: VG ceph_vg1 finished
Dec 13 09:27:01 compute-0 lvm[412647]: VG ceph_vg0 finished
Dec 13 09:27:01 compute-0 lvm[412650]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:27:01 compute-0 lvm[412650]: VG ceph_vg2 finished
Dec 13 09:27:02 compute-0 stupefied_johnson[412569]: {}
Dec 13 09:27:02 compute-0 systemd[1]: libpod-5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d.scope: Deactivated successfully.
Dec 13 09:27:02 compute-0 systemd[1]: libpod-5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d.scope: Consumed 1.528s CPU time.
Dec 13 09:27:02 compute-0 podman[412553]: 2025-12-13 09:27:02.123905252 +0000 UTC m=+1.086921857 container died 5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 09:27:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-30e6414c72e85f9049109e7658633dbe482a643e52ed6488f961e53b436b6056-merged.mount: Deactivated successfully.
Dec 13 09:27:02 compute-0 podman[412553]: 2025-12-13 09:27:02.175803793 +0000 UTC m=+1.138820398 container remove 5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_johnson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:27:02 compute-0 systemd[1]: libpod-conmon-5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d.scope: Deactivated successfully.
Dec 13 09:27:02 compute-0 sudo[412477]: pam_unix(sudo:session): session closed for user root
Dec 13 09:27:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:27:02 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:27:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:27:02 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:27:02 compute-0 sudo[412667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:27:02 compute-0 sudo[412667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:27:02 compute-0 sudo[412667]: pam_unix(sudo:session): session closed for user root
Dec 13 09:27:02 compute-0 ceph-mon[76537]: pgmap v3779: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:27:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:27:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3780: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:27:04 compute-0 ceph-mon[76537]: pgmap v3780: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3781: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:05 compute-0 nova_compute[248510]: 2025-12-13 09:27:05.306 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:05 compute-0 nova_compute[248510]: 2025-12-13 09:27:05.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:06 compute-0 ceph-mon[76537]: pgmap v3781: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3782: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:08 compute-0 ceph-mon[76537]: pgmap v3782: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.201976) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618029202056, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 814, "num_deletes": 255, "total_data_size": 1129572, "memory_usage": 1157352, "flush_reason": "Manual Compaction"}
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618029214402, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 1099593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75172, "largest_seqno": 75985, "table_properties": {"data_size": 1095425, "index_size": 1883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9176, "raw_average_key_size": 19, "raw_value_size": 1086943, "raw_average_value_size": 2278, "num_data_blocks": 84, "num_entries": 477, "num_filter_entries": 477, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617962, "oldest_key_time": 1765617962, "file_creation_time": 1765618029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 12553 microseconds, and 7369 cpu microseconds.
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.214528) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 1099593 bytes OK
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.214556) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.216563) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.216617) EVENT_LOG_v1 {"time_micros": 1765618029216609, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.216645) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 1125491, prev total WAL file size 1125491, number of live WAL files 2.
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.217584) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323636' seq:72057594037927935, type:22 .. '6C6F676D0033353137' seq:0, type:0; will stop at (end)
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(1073KB)], [179(9683KB)]
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618029217655, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 11015639, "oldest_snapshot_seqno": -1}
Dec 13 09:27:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3783: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 9198 keys, 10907740 bytes, temperature: kUnknown
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618029318870, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 10907740, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10850418, "index_size": 33209, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23045, "raw_key_size": 243018, "raw_average_key_size": 26, "raw_value_size": 10690644, "raw_average_value_size": 1162, "num_data_blocks": 1275, "num_entries": 9198, "num_filter_entries": 9198, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.319411) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 10907740 bytes
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.321495) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.7 rd, 107.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.5 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(19.9) write-amplify(9.9) OK, records in: 9723, records dropped: 525 output_compression: NoCompression
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.321543) EVENT_LOG_v1 {"time_micros": 1765618029321523, "job": 112, "event": "compaction_finished", "compaction_time_micros": 101333, "compaction_time_cpu_micros": 47172, "output_level": 6, "num_output_files": 1, "total_output_size": 10907740, "num_input_records": 9723, "num_output_records": 9198, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618029321966, "job": 112, "event": "table_file_deletion", "file_number": 181}
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618029323824, "job": 112, "event": "table_file_deletion", "file_number": 179}
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.217451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.323907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.323916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.323920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.323924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:27:09 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.323928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:27:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:27:09
Dec 13 09:27:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:27:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:27:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'backups', 'volumes']
Dec 13 09:27:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:27:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:27:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:27:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:27:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:27:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:27:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:27:10 compute-0 ceph-mon[76537]: pgmap v3783: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:10 compute-0 nova_compute[248510]: 2025-12-13 09:27:10.309 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:10 compute-0 nova_compute[248510]: 2025-12-13 09:27:10.858 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:27:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:27:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:27:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:27:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:27:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:27:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:27:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:27:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:27:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:27:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3784: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:12 compute-0 ceph-mon[76537]: pgmap v3784: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3785: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:27:14 compute-0 ceph-mon[76537]: pgmap v3785: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:27:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3442094109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:27:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:27:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3442094109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:27:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3786: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:15 compute-0 nova_compute[248510]: 2025-12-13 09:27:15.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3442094109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:27:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3442094109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:27:15 compute-0 nova_compute[248510]: 2025-12-13 09:27:15.859 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:16 compute-0 ceph-mon[76537]: pgmap v3786: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3787: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:18 compute-0 ceph-mon[76537]: pgmap v3787: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:27:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3788: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:20 compute-0 nova_compute[248510]: 2025-12-13 09:27:20.312 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:20 compute-0 ceph-mon[76537]: pgmap v3788: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:20 compute-0 nova_compute[248510]: 2025-12-13 09:27:20.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:27:20 compute-0 nova_compute[248510]: 2025-12-13 09:27:20.862 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3789: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.514783135039224e-05 of space, bias 1.0, pg target 0.004544349405117672 quantized to 32 (current 32)
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669348866335464 of space, bias 1.0, pg target 0.2008046599006392 quantized to 32 (current 32)
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.717419426042354e-07 of space, bias 4.0, pg target 0.0006860903311250824 quantized to 16 (current 32)
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:27:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:27:22 compute-0 ceph-mon[76537]: pgmap v3789: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:23 compute-0 podman[412694]: 2025-12-13 09:27:23.003521008 +0000 UTC m=+0.072391095 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 09:27:23 compute-0 podman[412693]: 2025-12-13 09:27:23.011865787 +0000 UTC m=+0.087817392 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:27:23 compute-0 podman[412692]: 2025-12-13 09:27:23.060592549 +0000 UTC m=+0.141585680 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:27:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3790: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:27:24 compute-0 ceph-mon[76537]: pgmap v3790: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3791: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:25 compute-0 nova_compute[248510]: 2025-12-13 09:27:25.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:25 compute-0 nova_compute[248510]: 2025-12-13 09:27:25.866 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:26 compute-0 ceph-mon[76537]: pgmap v3791: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:26 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 13 09:27:26 compute-0 systemd[1]: virtsecretd.service: Consumed 1.352s CPU time.
Dec 13 09:27:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3792: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:28 compute-0 ceph-mon[76537]: pgmap v3792: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:27:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3793: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:29 compute-0 nova_compute[248510]: 2025-12-13 09:27:29.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:27:30 compute-0 nova_compute[248510]: 2025-12-13 09:27:30.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:30 compute-0 ceph-mon[76537]: pgmap v3793: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:30 compute-0 nova_compute[248510]: 2025-12-13 09:27:30.869 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3794: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:32 compute-0 ceph-mon[76537]: pgmap v3794: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:32 compute-0 nova_compute[248510]: 2025-12-13 09:27:32.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:27:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3795: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:27:34 compute-0 ceph-mon[76537]: pgmap v3795: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3796: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:35 compute-0 nova_compute[248510]: 2025-12-13 09:27:35.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:35 compute-0 nova_compute[248510]: 2025-12-13 09:27:35.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:36 compute-0 ceph-mon[76537]: pgmap v3796: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3797: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:38 compute-0 ceph-mon[76537]: pgmap v3797: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:27:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3798: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:27:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:27:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:27:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:27:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:27:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:27:40 compute-0 nova_compute[248510]: 2025-12-13 09:27:40.319 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:40 compute-0 ceph-mon[76537]: pgmap v3798: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:40 compute-0 nova_compute[248510]: 2025-12-13 09:27:40.873 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3799: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:42 compute-0 ceph-mon[76537]: pgmap v3799: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:42 compute-0 nova_compute[248510]: 2025-12-13 09:27:42.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:27:42 compute-0 nova_compute[248510]: 2025-12-13 09:27:42.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:27:42 compute-0 nova_compute[248510]: 2025-12-13 09:27:42.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:27:42 compute-0 nova_compute[248510]: 2025-12-13 09:27:42.789 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:27:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3800: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:43 compute-0 ceph-mon[76537]: pgmap v3800: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:43 compute-0 nova_compute[248510]: 2025-12-13 09:27:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:27:43 compute-0 nova_compute[248510]: 2025-12-13 09:27:43.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:27:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:27:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3801: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:45 compute-0 nova_compute[248510]: 2025-12-13 09:27:45.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:45 compute-0 nova_compute[248510]: 2025-12-13 09:27:45.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:27:45 compute-0 nova_compute[248510]: 2025-12-13 09:27:45.875 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:46 compute-0 ceph-mon[76537]: pgmap v3801: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:46 compute-0 nova_compute[248510]: 2025-12-13 09:27:46.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:27:46 compute-0 nova_compute[248510]: 2025-12-13 09:27:46.882 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:27:46 compute-0 nova_compute[248510]: 2025-12-13 09:27:46.883 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:27:46 compute-0 nova_compute[248510]: 2025-12-13 09:27:46.883 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:27:46 compute-0 nova_compute[248510]: 2025-12-13 09:27:46.883 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:27:46 compute-0 nova_compute[248510]: 2025-12-13 09:27:46.884 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:27:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3802: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:27:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/204429211' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:27:47 compute-0 nova_compute[248510]: 2025-12-13 09:27:47.554 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:27:47 compute-0 nova_compute[248510]: 2025-12-13 09:27:47.775 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:27:47 compute-0 nova_compute[248510]: 2025-12-13 09:27:47.777 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3510MB free_disk=59.987372557632625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:27:47 compute-0 nova_compute[248510]: 2025-12-13 09:27:47.777 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:27:47 compute-0 nova_compute[248510]: 2025-12-13 09:27:47.777 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:27:47 compute-0 nova_compute[248510]: 2025-12-13 09:27:47.902 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:27:47 compute-0 nova_compute[248510]: 2025-12-13 09:27:47.903 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:27:47 compute-0 nova_compute[248510]: 2025-12-13 09:27:47.925 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 09:27:47 compute-0 nova_compute[248510]: 2025-12-13 09:27:47.951 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 09:27:47 compute-0 nova_compute[248510]: 2025-12-13 09:27:47.951 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 09:27:47 compute-0 nova_compute[248510]: 2025-12-13 09:27:47.969 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 09:27:47 compute-0 nova_compute[248510]: 2025-12-13 09:27:47.995 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 09:27:48 compute-0 nova_compute[248510]: 2025-12-13 09:27:48.011 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:27:48 compute-0 ceph-mon[76537]: pgmap v3802: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/204429211' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:27:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:27:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591096340' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:27:48 compute-0 nova_compute[248510]: 2025-12-13 09:27:48.618 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:27:48 compute-0 nova_compute[248510]: 2025-12-13 09:27:48.628 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:27:48 compute-0 nova_compute[248510]: 2025-12-13 09:27:48.665 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:27:48 compute-0 nova_compute[248510]: 2025-12-13 09:27:48.668 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:27:48 compute-0 nova_compute[248510]: 2025-12-13 09:27:48.669 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:27:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:27:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3803: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1591096340' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:27:50 compute-0 nova_compute[248510]: 2025-12-13 09:27:50.324 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:50 compute-0 ceph-mon[76537]: pgmap v3803: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:50 compute-0 nova_compute[248510]: 2025-12-13 09:27:50.669 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:27:50 compute-0 nova_compute[248510]: 2025-12-13 09:27:50.670 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:27:50 compute-0 nova_compute[248510]: 2025-12-13 09:27:50.878 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3804: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:52 compute-0 ceph-mon[76537]: pgmap v3804: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3805: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:54 compute-0 podman[412801]: 2025-12-13 09:27:54.020965339 +0000 UTC m=+0.100554941 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251202)
Dec 13 09:27:54 compute-0 podman[412802]: 2025-12-13 09:27:54.026686103 +0000 UTC m=+0.090944781 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:27:54 compute-0 podman[412800]: 2025-12-13 09:27:54.072917842 +0000 UTC m=+0.150013302 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 13 09:27:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:27:54 compute-0 ceph-mon[76537]: pgmap v3805: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3806: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:55 compute-0 nova_compute[248510]: 2025-12-13 09:27:55.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:27:55.464 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:27:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:27:55.464 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:27:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:27:55.464 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:27:55 compute-0 nova_compute[248510]: 2025-12-13 09:27:55.884 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:27:56 compute-0 ceph-mon[76537]: pgmap v3806: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3807: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:57 compute-0 ceph-mon[76537]: pgmap v3807: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:27:57 compute-0 nova_compute[248510]: 2025-12-13 09:27:57.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:27:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:27:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3808: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:00 compute-0 nova_compute[248510]: 2025-12-13 09:28:00.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:00 compute-0 ceph-mon[76537]: pgmap v3808: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:00 compute-0 nova_compute[248510]: 2025-12-13 09:28:00.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3809: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:02 compute-0 sudo[412863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:28:02 compute-0 sudo[412863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:28:02 compute-0 sudo[412863]: pam_unix(sudo:session): session closed for user root
Dec 13 09:28:02 compute-0 ceph-mon[76537]: pgmap v3809: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:02 compute-0 sudo[412888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:28:02 compute-0 sudo[412888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:28:03 compute-0 sudo[412888]: pam_unix(sudo:session): session closed for user root
Dec 13 09:28:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:28:03 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:28:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:28:03 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:28:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:28:03 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:28:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:28:03 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:28:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:28:03 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:28:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:28:03 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:28:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3810: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:03 compute-0 sudo[412945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:28:03 compute-0 sudo[412945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:28:03 compute-0 sudo[412945]: pam_unix(sudo:session): session closed for user root
Dec 13 09:28:03 compute-0 sudo[412970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:28:03 compute-0 sudo[412970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:28:03 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:28:03 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:28:03 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:28:03 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:28:03 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:28:03 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:28:03 compute-0 podman[413006]: 2025-12-13 09:28:03.774781808 +0000 UTC m=+0.104955982 container create 7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 09:28:03 compute-0 podman[413006]: 2025-12-13 09:28:03.693329557 +0000 UTC m=+0.023503751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:28:03 compute-0 systemd[1]: Started libpod-conmon-7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa.scope.
Dec 13 09:28:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:28:03 compute-0 podman[413006]: 2025-12-13 09:28:03.944278847 +0000 UTC m=+0.274453021 container init 7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 09:28:03 compute-0 podman[413006]: 2025-12-13 09:28:03.959102529 +0000 UTC m=+0.289276703 container start 7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:28:03 compute-0 podman[413006]: 2025-12-13 09:28:03.963018197 +0000 UTC m=+0.293192391 container attach 7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Dec 13 09:28:03 compute-0 sleepy_sanderson[413022]: 167 167
Dec 13 09:28:03 compute-0 systemd[1]: libpod-7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa.scope: Deactivated successfully.
Dec 13 09:28:03 compute-0 conmon[413022]: conmon 7f6aea2e27b4ef5414ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa.scope/container/memory.events
Dec 13 09:28:03 compute-0 podman[413006]: 2025-12-13 09:28:03.970455243 +0000 UTC m=+0.300629447 container died 7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 09:28:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0fc4b8c7570c1dc097dde53aeccc838b0e35efc3d0a01fc6ff905421f55ef9b-merged.mount: Deactivated successfully.
Dec 13 09:28:04 compute-0 podman[413006]: 2025-12-13 09:28:04.02295889 +0000 UTC m=+0.353133064 container remove 7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:28:04 compute-0 systemd[1]: libpod-conmon-7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa.scope: Deactivated successfully.
Dec 13 09:28:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:28:04 compute-0 podman[413047]: 2025-12-13 09:28:04.289388507 +0000 UTC m=+0.075982055 container create 1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 09:28:04 compute-0 systemd[1]: Started libpod-conmon-1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3.scope.
Dec 13 09:28:04 compute-0 podman[413047]: 2025-12-13 09:28:04.260131704 +0000 UTC m=+0.046725322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:28:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3feeb214adeed1d61fdc947c6108ffe7c691c0b5cf8f755b7742ae229589ad7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3feeb214adeed1d61fdc947c6108ffe7c691c0b5cf8f755b7742ae229589ad7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3feeb214adeed1d61fdc947c6108ffe7c691c0b5cf8f755b7742ae229589ad7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3feeb214adeed1d61fdc947c6108ffe7c691c0b5cf8f755b7742ae229589ad7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3feeb214adeed1d61fdc947c6108ffe7c691c0b5cf8f755b7742ae229589ad7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:04 compute-0 podman[413047]: 2025-12-13 09:28:04.39482261 +0000 UTC m=+0.181416168 container init 1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 09:28:04 compute-0 podman[413047]: 2025-12-13 09:28:04.402244276 +0000 UTC m=+0.188837794 container start 1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 09:28:04 compute-0 podman[413047]: 2025-12-13 09:28:04.40757657 +0000 UTC m=+0.194170098 container attach 1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 09:28:04 compute-0 ceph-mon[76537]: pgmap v3810: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:04 compute-0 clever_kalam[413064]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:28:04 compute-0 clever_kalam[413064]: --> All data devices are unavailable
Dec 13 09:28:04 compute-0 systemd[1]: libpod-1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3.scope: Deactivated successfully.
Dec 13 09:28:04 compute-0 podman[413047]: 2025-12-13 09:28:04.977916756 +0000 UTC m=+0.764510294 container died 1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 09:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3feeb214adeed1d61fdc947c6108ffe7c691c0b5cf8f755b7742ae229589ad7-merged.mount: Deactivated successfully.
Dec 13 09:28:05 compute-0 podman[413047]: 2025-12-13 09:28:05.024293449 +0000 UTC m=+0.810886967 container remove 1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:28:05 compute-0 systemd[1]: libpod-conmon-1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3.scope: Deactivated successfully.
Dec 13 09:28:05 compute-0 sudo[412970]: pam_unix(sudo:session): session closed for user root
Dec 13 09:28:05 compute-0 sudo[413097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:28:05 compute-0 sudo[413097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:28:05 compute-0 sudo[413097]: pam_unix(sudo:session): session closed for user root
Dec 13 09:28:05 compute-0 sudo[413122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:28:05 compute-0 sudo[413122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:28:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3811: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:05 compute-0 nova_compute[248510]: 2025-12-13 09:28:05.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:05 compute-0 ceph-mon[76537]: pgmap v3811: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:05 compute-0 podman[413158]: 2025-12-13 09:28:05.575056315 +0000 UTC m=+0.049256946 container create a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 09:28:05 compute-0 systemd[1]: Started libpod-conmon-a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19.scope.
Dec 13 09:28:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:28:05 compute-0 podman[413158]: 2025-12-13 09:28:05.554859929 +0000 UTC m=+0.029060600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:28:05 compute-0 podman[413158]: 2025-12-13 09:28:05.664642201 +0000 UTC m=+0.138842832 container init a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:28:05 compute-0 podman[413158]: 2025-12-13 09:28:05.677302668 +0000 UTC m=+0.151503299 container start a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:28:05 compute-0 podman[413158]: 2025-12-13 09:28:05.680810806 +0000 UTC m=+0.155011437 container attach a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 09:28:05 compute-0 flamboyant_ellis[413174]: 167 167
Dec 13 09:28:05 compute-0 systemd[1]: libpod-a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19.scope: Deactivated successfully.
Dec 13 09:28:05 compute-0 podman[413158]: 2025-12-13 09:28:05.683573565 +0000 UTC m=+0.157774206 container died a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_ellis, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 09:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dfc63aa3119a456a913d1e634939e2330edb69b4b96ed03d9d66692ef29822e-merged.mount: Deactivated successfully.
Dec 13 09:28:05 compute-0 podman[413158]: 2025-12-13 09:28:05.728802649 +0000 UTC m=+0.203003270 container remove a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_ellis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 09:28:05 compute-0 systemd[1]: libpod-conmon-a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19.scope: Deactivated successfully.
Dec 13 09:28:05 compute-0 nova_compute[248510]: 2025-12-13 09:28:05.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:05 compute-0 podman[413196]: 2025-12-13 09:28:05.95902763 +0000 UTC m=+0.066893088 container create 27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_yalow, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 09:28:06 compute-0 systemd[1]: Started libpod-conmon-27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9.scope.
Dec 13 09:28:06 compute-0 podman[413196]: 2025-12-13 09:28:05.92673875 +0000 UTC m=+0.034604278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:28:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fd5159299597621813c04cfe54451520492df7d94a31b7ed895445a775cc41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fd5159299597621813c04cfe54451520492df7d94a31b7ed895445a775cc41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fd5159299597621813c04cfe54451520492df7d94a31b7ed895445a775cc41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fd5159299597621813c04cfe54451520492df7d94a31b7ed895445a775cc41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:06 compute-0 podman[413196]: 2025-12-13 09:28:06.07195116 +0000 UTC m=+0.179816648 container init 27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:28:06 compute-0 podman[413196]: 2025-12-13 09:28:06.079178412 +0000 UTC m=+0.187043860 container start 27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:28:06 compute-0 podman[413196]: 2025-12-13 09:28:06.082648799 +0000 UTC m=+0.190514237 container attach 27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:28:06 compute-0 charming_yalow[413213]: {
Dec 13 09:28:06 compute-0 charming_yalow[413213]:     "0": [
Dec 13 09:28:06 compute-0 charming_yalow[413213]:         {
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "devices": [
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "/dev/loop3"
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             ],
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_name": "ceph_lv0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_size": "21470642176",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "name": "ceph_lv0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "tags": {
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.cluster_name": "ceph",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.crush_device_class": "",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.encrypted": "0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.objectstore": "bluestore",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.osd_id": "0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.type": "block",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.vdo": "0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.with_tpm": "0"
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             },
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "type": "block",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "vg_name": "ceph_vg0"
Dec 13 09:28:06 compute-0 charming_yalow[413213]:         }
Dec 13 09:28:06 compute-0 charming_yalow[413213]:     ],
Dec 13 09:28:06 compute-0 charming_yalow[413213]:     "1": [
Dec 13 09:28:06 compute-0 charming_yalow[413213]:         {
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "devices": [
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "/dev/loop4"
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             ],
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_name": "ceph_lv1",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_size": "21470642176",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "name": "ceph_lv1",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "tags": {
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.cluster_name": "ceph",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.crush_device_class": "",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.encrypted": "0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.objectstore": "bluestore",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.osd_id": "1",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.type": "block",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.vdo": "0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.with_tpm": "0"
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             },
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "type": "block",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "vg_name": "ceph_vg1"
Dec 13 09:28:06 compute-0 charming_yalow[413213]:         }
Dec 13 09:28:06 compute-0 charming_yalow[413213]:     ],
Dec 13 09:28:06 compute-0 charming_yalow[413213]:     "2": [
Dec 13 09:28:06 compute-0 charming_yalow[413213]:         {
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "devices": [
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "/dev/loop5"
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             ],
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_name": "ceph_lv2",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_size": "21470642176",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "name": "ceph_lv2",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "tags": {
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.cluster_name": "ceph",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.crush_device_class": "",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.encrypted": "0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.objectstore": "bluestore",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.osd_id": "2",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.type": "block",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.vdo": "0",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:                 "ceph.with_tpm": "0"
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             },
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "type": "block",
Dec 13 09:28:06 compute-0 charming_yalow[413213]:             "vg_name": "ceph_vg2"
Dec 13 09:28:06 compute-0 charming_yalow[413213]:         }
Dec 13 09:28:06 compute-0 charming_yalow[413213]:     ]
Dec 13 09:28:06 compute-0 charming_yalow[413213]: }
Dec 13 09:28:06 compute-0 systemd[1]: libpod-27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9.scope: Deactivated successfully.
Dec 13 09:28:06 compute-0 podman[413196]: 2025-12-13 09:28:06.422755234 +0000 UTC m=+0.530620722 container died 27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:28:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-07fd5159299597621813c04cfe54451520492df7d94a31b7ed895445a775cc41-merged.mount: Deactivated successfully.
Dec 13 09:28:06 compute-0 podman[413196]: 2025-12-13 09:28:06.480519862 +0000 UTC m=+0.588385310 container remove 27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 09:28:06 compute-0 systemd[1]: libpod-conmon-27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9.scope: Deactivated successfully.
Dec 13 09:28:06 compute-0 sudo[413122]: pam_unix(sudo:session): session closed for user root
Dec 13 09:28:06 compute-0 sudo[413234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:28:06 compute-0 sudo[413234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:28:06 compute-0 sudo[413234]: pam_unix(sudo:session): session closed for user root
Dec 13 09:28:06 compute-0 sudo[413259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:28:06 compute-0 sudo[413259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:28:07 compute-0 podman[413296]: 2025-12-13 09:28:07.015187955 +0000 UTC m=+0.032572538 container create aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:28:07 compute-0 systemd[1]: Started libpod-conmon-aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb.scope.
Dec 13 09:28:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:28:07 compute-0 podman[413296]: 2025-12-13 09:28:07.001595204 +0000 UTC m=+0.018979807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:28:07 compute-0 podman[413296]: 2025-12-13 09:28:07.103533929 +0000 UTC m=+0.120918532 container init aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:28:07 compute-0 podman[413296]: 2025-12-13 09:28:07.110060963 +0000 UTC m=+0.127445576 container start aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:28:07 compute-0 podman[413296]: 2025-12-13 09:28:07.114421642 +0000 UTC m=+0.131806245 container attach aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 13 09:28:07 compute-0 amazing_euler[413313]: 167 167
Dec 13 09:28:07 compute-0 systemd[1]: libpod-aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb.scope: Deactivated successfully.
Dec 13 09:28:07 compute-0 podman[413296]: 2025-12-13 09:28:07.115752715 +0000 UTC m=+0.133137328 container died aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 09:28:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a267a8ec68d3da8edfb8dd5113e980739e67b2c19638abf10bb51a5fe15197a0-merged.mount: Deactivated successfully.
Dec 13 09:28:07 compute-0 podman[413296]: 2025-12-13 09:28:07.16339967 +0000 UTC m=+0.180784273 container remove aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 09:28:07 compute-0 systemd[1]: libpod-conmon-aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb.scope: Deactivated successfully.
Dec 13 09:28:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3812: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:07 compute-0 podman[413336]: 2025-12-13 09:28:07.38760858 +0000 UTC m=+0.074315334 container create 1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_keller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:28:07 compute-0 systemd[1]: Started libpod-conmon-1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb.scope.
Dec 13 09:28:07 compute-0 podman[413336]: 2025-12-13 09:28:07.35650488 +0000 UTC m=+0.043211704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:28:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e97f8470747ad17c0f2fce9cc65e513c97a123ae5f8f51c0d36b54ff9dd153/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e97f8470747ad17c0f2fce9cc65e513c97a123ae5f8f51c0d36b54ff9dd153/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e97f8470747ad17c0f2fce9cc65e513c97a123ae5f8f51c0d36b54ff9dd153/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e97f8470747ad17c0f2fce9cc65e513c97a123ae5f8f51c0d36b54ff9dd153/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:28:07 compute-0 podman[413336]: 2025-12-13 09:28:07.489214077 +0000 UTC m=+0.175920821 container init 1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_keller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 09:28:07 compute-0 podman[413336]: 2025-12-13 09:28:07.495999077 +0000 UTC m=+0.182705841 container start 1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 09:28:07 compute-0 podman[413336]: 2025-12-13 09:28:07.50090091 +0000 UTC m=+0.187607664 container attach 1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_keller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:28:08 compute-0 lvm[413433]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:28:08 compute-0 lvm[413432]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:28:08 compute-0 lvm[413432]: VG ceph_vg1 finished
Dec 13 09:28:08 compute-0 lvm[413433]: VG ceph_vg0 finished
Dec 13 09:28:08 compute-0 lvm[413435]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:28:08 compute-0 lvm[413435]: VG ceph_vg2 finished
Dec 13 09:28:08 compute-0 lvm[413437]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:28:08 compute-0 lvm[413437]: VG ceph_vg2 finished
Dec 13 09:28:08 compute-0 lvm[413439]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:28:08 compute-0 lvm[413439]: VG ceph_vg2 finished
Dec 13 09:28:08 compute-0 relaxed_keller[413353]: {}
Dec 13 09:28:08 compute-0 ceph-mon[76537]: pgmap v3812: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:08 compute-0 systemd[1]: libpod-1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb.scope: Deactivated successfully.
Dec 13 09:28:08 compute-0 systemd[1]: libpod-1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb.scope: Consumed 1.491s CPU time.
Dec 13 09:28:08 compute-0 podman[413336]: 2025-12-13 09:28:08.409873554 +0000 UTC m=+1.096580308 container died 1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_keller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 09:28:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-63e97f8470747ad17c0f2fce9cc65e513c97a123ae5f8f51c0d36b54ff9dd153-merged.mount: Deactivated successfully.
Dec 13 09:28:08 compute-0 podman[413336]: 2025-12-13 09:28:08.464295708 +0000 UTC m=+1.151002432 container remove 1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_keller, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 09:28:08 compute-0 systemd[1]: libpod-conmon-1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb.scope: Deactivated successfully.
Dec 13 09:28:08 compute-0 sudo[413259]: pam_unix(sudo:session): session closed for user root
Dec 13 09:28:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:28:08 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:28:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:28:08 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:28:08 compute-0 sudo[413450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:28:08 compute-0 sudo[413450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:28:08 compute-0 sudo[413450]: pam_unix(sudo:session): session closed for user root
Dec 13 09:28:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:28:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3813: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:28:09
Dec 13 09:28:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:28:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:28:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.control', '.rgw.root', 'vms']
Dec 13 09:28:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:28:09 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:28:09 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:28:09 compute-0 ceph-mon[76537]: pgmap v3813: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:28:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:28:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:28:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:28:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:28:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:28:10 compute-0 nova_compute[248510]: 2025-12-13 09:28:10.334 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:10 compute-0 nova_compute[248510]: 2025-12-13 09:28:10.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:28:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:28:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:28:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:28:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:28:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:28:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:28:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:28:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:28:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:28:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3814: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:12 compute-0 ceph-mon[76537]: pgmap v3814: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3815: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:28:14 compute-0 ceph-mon[76537]: pgmap v3815: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:28:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2617986221' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:28:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:28:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2617986221' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:28:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3816: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:15 compute-0 nova_compute[248510]: 2025-12-13 09:28:15.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2617986221' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:28:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2617986221' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:28:15 compute-0 nova_compute[248510]: 2025-12-13 09:28:15.982 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:16 compute-0 ceph-mon[76537]: pgmap v3816: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3817: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:18 compute-0 ceph-mon[76537]: pgmap v3817: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:28:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3818: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:20 compute-0 nova_compute[248510]: 2025-12-13 09:28:20.338 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:20 compute-0 ceph-mon[76537]: pgmap v3818: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:21 compute-0 nova_compute[248510]: 2025-12-13 09:28:21.040 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3819: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.514783135039224e-05 of space, bias 1.0, pg target 0.004544349405117672 quantized to 32 (current 32)
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669348866335464 of space, bias 1.0, pg target 0.2008046599006392 quantized to 32 (current 32)
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.717419426042354e-07 of space, bias 4.0, pg target 0.0006860903311250824 quantized to 16 (current 32)
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:28:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:28:22 compute-0 ceph-mon[76537]: pgmap v3819: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3820: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:28:24 compute-0 ceph-mon[76537]: pgmap v3820: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:24 compute-0 podman[413476]: 2025-12-13 09:28:24.987038978 +0000 UTC m=+0.067451522 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Dec 13 09:28:24 compute-0 podman[413477]: 2025-12-13 09:28:24.996102995 +0000 UTC m=+0.070482178 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:28:25 compute-0 podman[413475]: 2025-12-13 09:28:25.020043925 +0000 UTC m=+0.102917971 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 09:28:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3821: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:25 compute-0 nova_compute[248510]: 2025-12-13 09:28:25.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:26 compute-0 nova_compute[248510]: 2025-12-13 09:28:26.043 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:26 compute-0 ceph-mon[76537]: pgmap v3821: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3822: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:28 compute-0 ceph-mon[76537]: pgmap v3822: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:28:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3823: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:29 compute-0 nova_compute[248510]: 2025-12-13 09:28:29.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:28:30 compute-0 nova_compute[248510]: 2025-12-13 09:28:30.342 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:30 compute-0 ceph-mon[76537]: pgmap v3823: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:31 compute-0 nova_compute[248510]: 2025-12-13 09:28:31.046 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3824: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:32 compute-0 ceph-mon[76537]: pgmap v3824: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3825: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:28:34 compute-0 ceph-mon[76537]: pgmap v3825: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:34 compute-0 nova_compute[248510]: 2025-12-13 09:28:34.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:28:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3826: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:35 compute-0 nova_compute[248510]: 2025-12-13 09:28:35.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:36 compute-0 nova_compute[248510]: 2025-12-13 09:28:36.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:36 compute-0 ceph-mon[76537]: pgmap v3826: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3827: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:38 compute-0 ceph-mon[76537]: pgmap v3827: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:28:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3828: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:39 compute-0 ceph-mon[76537]: pgmap v3828: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:39 compute-0 sshd-session[413536]: Invalid user near from 80.94.92.165 port 39400
Dec 13 09:28:40 compute-0 sshd-session[413536]: Connection closed by invalid user near 80.94.92.165 port 39400 [preauth]
Dec 13 09:28:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:28:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:28:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:28:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:28:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:28:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:28:40 compute-0 nova_compute[248510]: 2025-12-13 09:28:40.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:41 compute-0 nova_compute[248510]: 2025-12-13 09:28:41.052 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3829: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:42 compute-0 ceph-mon[76537]: pgmap v3829: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:28:42 compute-0 nova_compute[248510]: 2025-12-13 09:28:42.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:28:42 compute-0 nova_compute[248510]: 2025-12-13 09:28:42.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:28:42 compute-0 nova_compute[248510]: 2025-12-13 09:28:42.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:28:42 compute-0 nova_compute[248510]: 2025-12-13 09:28:42.792 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:28:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3830: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1 op/s
Dec 13 09:28:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:28:44 compute-0 ceph-mon[76537]: pgmap v3830: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1 op/s
Dec 13 09:28:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3831: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 09:28:45 compute-0 nova_compute[248510]: 2025-12-13 09:28:45.350 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:45 compute-0 nova_compute[248510]: 2025-12-13 09:28:45.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:28:45 compute-0 nova_compute[248510]: 2025-12-13 09:28:45.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:28:46 compute-0 nova_compute[248510]: 2025-12-13 09:28:46.054 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:46 compute-0 ceph-mon[76537]: pgmap v3831: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 09:28:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3832: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 09:28:47 compute-0 nova_compute[248510]: 2025-12-13 09:28:47.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:28:47 compute-0 nova_compute[248510]: 2025-12-13 09:28:47.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:28:47 compute-0 nova_compute[248510]: 2025-12-13 09:28:47.838 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:28:47 compute-0 nova_compute[248510]: 2025-12-13 09:28:47.838 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:28:47 compute-0 nova_compute[248510]: 2025-12-13 09:28:47.839 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:28:47 compute-0 nova_compute[248510]: 2025-12-13 09:28:47.839 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:28:47 compute-0 nova_compute[248510]: 2025-12-13 09:28:47.839 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:28:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:28:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559057708' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:28:48 compute-0 nova_compute[248510]: 2025-12-13 09:28:48.408 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:28:48 compute-0 ceph-mon[76537]: pgmap v3832: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 09:28:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1559057708' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:28:48 compute-0 nova_compute[248510]: 2025-12-13 09:28:48.634 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:28:48 compute-0 nova_compute[248510]: 2025-12-13 09:28:48.635 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3466MB free_disk=59.987372557632625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:28:48 compute-0 nova_compute[248510]: 2025-12-13 09:28:48.635 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:28:48 compute-0 nova_compute[248510]: 2025-12-13 09:28:48.636 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:28:48 compute-0 nova_compute[248510]: 2025-12-13 09:28:48.720 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:28:48 compute-0 nova_compute[248510]: 2025-12-13 09:28:48.721 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:28:48 compute-0 nova_compute[248510]: 2025-12-13 09:28:48.891 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:28:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:28:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3833: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 09:28:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:28:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1708456428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:28:49 compute-0 nova_compute[248510]: 2025-12-13 09:28:49.555 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:28:49 compute-0 nova_compute[248510]: 2025-12-13 09:28:49.564 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:28:49 compute-0 nova_compute[248510]: 2025-12-13 09:28:49.640 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:28:49 compute-0 nova_compute[248510]: 2025-12-13 09:28:49.643 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:28:49 compute-0 nova_compute[248510]: 2025-12-13 09:28:49.643 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:28:50 compute-0 nova_compute[248510]: 2025-12-13 09:28:50.351 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:50 compute-0 ceph-mon[76537]: pgmap v3833: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 09:28:50 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1708456428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:28:51 compute-0 nova_compute[248510]: 2025-12-13 09:28:51.057 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3834: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 09:28:51 compute-0 nova_compute[248510]: 2025-12-13 09:28:51.643 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:28:51 compute-0 nova_compute[248510]: 2025-12-13 09:28:51.644 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:28:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Dec 13 09:28:52 compute-0 ceph-mon[76537]: pgmap v3834: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 09:28:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Dec 13 09:28:52 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Dec 13 09:28:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3836: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 614 B/s wr, 12 op/s
Dec 13 09:28:53 compute-0 ceph-mon[76537]: osdmap e315: 3 total, 3 up, 3 in
Dec 13 09:28:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:28:54 compute-0 ceph-mon[76537]: pgmap v3836: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 614 B/s wr, 12 op/s
Dec 13 09:28:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3837: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 09:28:55 compute-0 nova_compute[248510]: 2025-12-13 09:28:55.354 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:28:55.464 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:28:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:28:55.465 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:28:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:28:55.465 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:28:55 compute-0 podman[413583]: 2025-12-13 09:28:55.98151458 +0000 UTC m=+0.063151864 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 13 09:28:55 compute-0 podman[413584]: 2025-12-13 09:28:55.991392477 +0000 UTC m=+0.065328508 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 09:28:56 compute-0 podman[413582]: 2025-12-13 09:28:56.018859476 +0000 UTC m=+0.102864970 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:28:56 compute-0 nova_compute[248510]: 2025-12-13 09:28:56.059 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:28:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Dec 13 09:28:56 compute-0 ceph-mon[76537]: pgmap v3837: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 09:28:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Dec 13 09:28:56 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Dec 13 09:28:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3839: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 13 09:28:57 compute-0 ceph-mon[76537]: osdmap e316: 3 total, 3 up, 3 in
Dec 13 09:28:58 compute-0 ceph-mon[76537]: pgmap v3839: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 13 09:28:58 compute-0 nova_compute[248510]: 2025-12-13 09:28:58.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:28:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:28:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Dec 13 09:28:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Dec 13 09:28:59 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Dec 13 09:28:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3841: 321 pgs: 321 active+clean; 8.5 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 KiB/s wr, 39 op/s
Dec 13 09:29:00 compute-0 ceph-mon[76537]: osdmap e317: 3 total, 3 up, 3 in
Dec 13 09:29:00 compute-0 ceph-mon[76537]: pgmap v3841: 321 pgs: 321 active+clean; 8.5 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 KiB/s wr, 39 op/s
Dec 13 09:29:00 compute-0 nova_compute[248510]: 2025-12-13 09:29:00.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:01 compute-0 nova_compute[248510]: 2025-12-13 09:29:01.061 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3842: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 55 op/s
Dec 13 09:29:02 compute-0 ceph-mon[76537]: pgmap v3842: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 55 op/s
Dec 13 09:29:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3843: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 13 09:29:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:29:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Dec 13 09:29:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Dec 13 09:29:04 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Dec 13 09:29:04 compute-0 ceph-mon[76537]: pgmap v3843: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 13 09:29:04 compute-0 ceph-mon[76537]: osdmap e318: 3 total, 3 up, 3 in
Dec 13 09:29:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3845: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 13 09:29:05 compute-0 nova_compute[248510]: 2025-12-13 09:29:05.358 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:06 compute-0 nova_compute[248510]: 2025-12-13 09:29:06.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:06 compute-0 ceph-mon[76537]: pgmap v3845: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 13 09:29:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3846: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Dec 13 09:29:08 compute-0 ceph-mon[76537]: pgmap v3846: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Dec 13 09:29:08 compute-0 sudo[413647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:29:08 compute-0 sudo[413647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:29:08 compute-0 sudo[413647]: pam_unix(sudo:session): session closed for user root
Dec 13 09:29:08 compute-0 sudo[413672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:29:08 compute-0 sudo[413672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:29:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:29:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3847: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Dec 13 09:29:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Dec 13 09:29:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Dec 13 09:29:09 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Dec 13 09:29:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:29:09
Dec 13 09:29:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:29:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:29:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr']
Dec 13 09:29:09 compute-0 sudo[413672]: pam_unix(sudo:session): session closed for user root
Dec 13 09:29:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:29:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 09:29:09 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 09:29:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:29:09 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:29:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:29:09 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:29:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:29:09 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:29:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:29:09 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:29:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:29:09 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:29:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:29:09 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:29:09 compute-0 sudo[413729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:29:09 compute-0 sudo[413729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:29:09 compute-0 sudo[413729]: pam_unix(sudo:session): session closed for user root
Dec 13 09:29:09 compute-0 sudo[413754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:29:09 compute-0 sudo[413754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:29:10 compute-0 podman[413790]: 2025-12-13 09:29:10.118726362 +0000 UTC m=+0.109962047 container create 48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 09:29:10 compute-0 podman[413790]: 2025-12-13 09:29:10.044314997 +0000 UTC m=+0.035550722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:29:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:29:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:29:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:29:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:29:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:29:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:29:10 compute-0 systemd[1]: Started libpod-conmon-48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88.scope.
Dec 13 09:29:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:29:10 compute-0 podman[413790]: 2025-12-13 09:29:10.296285853 +0000 UTC m=+0.287521558 container init 48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 09:29:10 compute-0 podman[413790]: 2025-12-13 09:29:10.309470314 +0000 UTC m=+0.300705999 container start 48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hawking, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:29:10 compute-0 podman[413790]: 2025-12-13 09:29:10.313655729 +0000 UTC m=+0.304891414 container attach 48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 09:29:10 compute-0 pedantic_hawking[413806]: 167 167
Dec 13 09:29:10 compute-0 systemd[1]: libpod-48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88.scope: Deactivated successfully.
Dec 13 09:29:10 compute-0 podman[413790]: 2025-12-13 09:29:10.317957896 +0000 UTC m=+0.309193551 container died 48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-155228ff6f7e6894ef4bce14f7a2eb572d8c5cd730f506c86523edbb5b479de4-merged.mount: Deactivated successfully.
Dec 13 09:29:10 compute-0 nova_compute[248510]: 2025-12-13 09:29:10.361 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:10 compute-0 podman[413790]: 2025-12-13 09:29:10.36635729 +0000 UTC m=+0.357592935 container remove 48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hawking, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:29:10 compute-0 systemd[1]: libpod-conmon-48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88.scope: Deactivated successfully.
Dec 13 09:29:10 compute-0 ceph-mon[76537]: pgmap v3847: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Dec 13 09:29:10 compute-0 ceph-mon[76537]: osdmap e319: 3 total, 3 up, 3 in
Dec 13 09:29:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 09:29:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:29:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:29:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:29:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:29:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:29:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:29:10 compute-0 podman[413830]: 2025-12-13 09:29:10.655327103 +0000 UTC m=+0.111402123 container create 601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_boyd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 09:29:10 compute-0 podman[413830]: 2025-12-13 09:29:10.588630011 +0000 UTC m=+0.044705051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:29:10 compute-0 systemd[1]: Started libpod-conmon-601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d.scope.
Dec 13 09:29:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5e56add483aaddf47e502c479092a6172d09ae90702ee25e65eadfdf924bbc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5e56add483aaddf47e502c479092a6172d09ae90702ee25e65eadfdf924bbc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5e56add483aaddf47e502c479092a6172d09ae90702ee25e65eadfdf924bbc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5e56add483aaddf47e502c479092a6172d09ae90702ee25e65eadfdf924bbc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5e56add483aaddf47e502c479092a6172d09ae90702ee25e65eadfdf924bbc5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:10 compute-0 podman[413830]: 2025-12-13 09:29:10.747127784 +0000 UTC m=+0.203202824 container init 601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 09:29:10 compute-0 podman[413830]: 2025-12-13 09:29:10.760734675 +0000 UTC m=+0.216809735 container start 601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_boyd, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:29:10 compute-0 podman[413830]: 2025-12-13 09:29:10.76530327 +0000 UTC m=+0.221378310 container attach 601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_boyd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:29:11 compute-0 nova_compute[248510]: 2025-12-13 09:29:11.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:29:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:29:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:29:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:29:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:29:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:29:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:29:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:29:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:29:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:29:11 compute-0 pedantic_boyd[413846]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:29:11 compute-0 pedantic_boyd[413846]: --> All data devices are unavailable
Dec 13 09:29:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3849: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 383 B/s rd, 383 B/s wr, 1 op/s
Dec 13 09:29:11 compute-0 systemd[1]: libpod-601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d.scope: Deactivated successfully.
Dec 13 09:29:11 compute-0 podman[413830]: 2025-12-13 09:29:11.364230853 +0000 UTC m=+0.820305913 container died 601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_boyd, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 09:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5e56add483aaddf47e502c479092a6172d09ae90702ee25e65eadfdf924bbc5-merged.mount: Deactivated successfully.
Dec 13 09:29:11 compute-0 podman[413830]: 2025-12-13 09:29:11.425671583 +0000 UTC m=+0.881746633 container remove 601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_boyd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:29:11 compute-0 systemd[1]: libpod-conmon-601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d.scope: Deactivated successfully.
Dec 13 09:29:11 compute-0 sudo[413754]: pam_unix(sudo:session): session closed for user root
Dec 13 09:29:11 compute-0 sudo[413879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:29:11 compute-0 sudo[413879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:29:11 compute-0 sudo[413879]: pam_unix(sudo:session): session closed for user root
Dec 13 09:29:11 compute-0 sudo[413904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:29:11 compute-0 sudo[413904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:29:11 compute-0 podman[413941]: 2025-12-13 09:29:11.962357356 +0000 UTC m=+0.059057241 container create 43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 13 09:29:12 compute-0 systemd[1]: Started libpod-conmon-43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0.scope.
Dec 13 09:29:12 compute-0 podman[413941]: 2025-12-13 09:29:11.933990475 +0000 UTC m=+0.030690380 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:29:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:29:12 compute-0 podman[413941]: 2025-12-13 09:29:12.051673915 +0000 UTC m=+0.148373780 container init 43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lamarr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 09:29:12 compute-0 podman[413941]: 2025-12-13 09:29:12.05905882 +0000 UTC m=+0.155758675 container start 43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 09:29:12 compute-0 podman[413941]: 2025-12-13 09:29:12.063185434 +0000 UTC m=+0.159885309 container attach 43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 09:29:12 compute-0 dreamy_lamarr[413958]: 167 167
Dec 13 09:29:12 compute-0 systemd[1]: libpod-43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0.scope: Deactivated successfully.
Dec 13 09:29:12 compute-0 podman[413941]: 2025-12-13 09:29:12.067271956 +0000 UTC m=+0.163971801 container died 43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 09:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6b2d06491f59edc243fa816f39e64fc58a12cf0070e148bc6134a90b0886b3c-merged.mount: Deactivated successfully.
Dec 13 09:29:12 compute-0 podman[413941]: 2025-12-13 09:29:12.118739656 +0000 UTC m=+0.215439511 container remove 43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:29:12 compute-0 systemd[1]: libpod-conmon-43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0.scope: Deactivated successfully.
Dec 13 09:29:12 compute-0 podman[413981]: 2025-12-13 09:29:12.301264991 +0000 UTC m=+0.057623836 container create 5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_almeida, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 09:29:12 compute-0 systemd[1]: Started libpod-conmon-5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b.scope.
Dec 13 09:29:12 compute-0 podman[413981]: 2025-12-13 09:29:12.27970184 +0000 UTC m=+0.036060455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:29:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d342465ad965cc551a6b6c18ab2a54e640fad1596afbc610afd225439d879d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d342465ad965cc551a6b6c18ab2a54e640fad1596afbc610afd225439d879d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d342465ad965cc551a6b6c18ab2a54e640fad1596afbc610afd225439d879d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d342465ad965cc551a6b6c18ab2a54e640fad1596afbc610afd225439d879d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:12 compute-0 podman[413981]: 2025-12-13 09:29:12.407258208 +0000 UTC m=+0.163616823 container init 5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 09:29:12 compute-0 podman[413981]: 2025-12-13 09:29:12.422807187 +0000 UTC m=+0.179165782 container start 5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_almeida, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 09:29:12 compute-0 podman[413981]: 2025-12-13 09:29:12.427045624 +0000 UTC m=+0.183404279 container attach 5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_almeida, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:29:12 compute-0 ceph-mon[76537]: pgmap v3849: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 383 B/s rd, 383 B/s wr, 1 op/s
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]: {
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:     "0": [
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:         {
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "devices": [
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "/dev/loop3"
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             ],
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_name": "ceph_lv0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_size": "21470642176",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "name": "ceph_lv0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "tags": {
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.cluster_name": "ceph",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.crush_device_class": "",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.encrypted": "0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.objectstore": "bluestore",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.osd_id": "0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.type": "block",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.vdo": "0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.with_tpm": "0"
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             },
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "type": "block",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "vg_name": "ceph_vg0"
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:         }
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:     ],
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:     "1": [
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:         {
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "devices": [
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "/dev/loop4"
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             ],
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_name": "ceph_lv1",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_size": "21470642176",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "name": "ceph_lv1",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "tags": {
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.cluster_name": "ceph",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.crush_device_class": "",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.encrypted": "0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.objectstore": "bluestore",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.osd_id": "1",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.type": "block",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.vdo": "0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.with_tpm": "0"
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             },
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "type": "block",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "vg_name": "ceph_vg1"
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:         }
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:     ],
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:     "2": [
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:         {
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "devices": [
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "/dev/loop5"
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             ],
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_name": "ceph_lv2",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_size": "21470642176",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "name": "ceph_lv2",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "tags": {
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.cluster_name": "ceph",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.crush_device_class": "",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.encrypted": "0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.objectstore": "bluestore",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.osd_id": "2",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.type": "block",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.vdo": "0",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:                 "ceph.with_tpm": "0"
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             },
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "type": "block",
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:             "vg_name": "ceph_vg2"
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:         }
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]:     ]
Dec 13 09:29:12 compute-0 relaxed_almeida[413998]: }
Dec 13 09:29:12 compute-0 systemd[1]: libpod-5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b.scope: Deactivated successfully.
Dec 13 09:29:12 compute-0 conmon[413998]: conmon 5b8b8b7e80b51eb8f2ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b.scope/container/memory.events
Dec 13 09:29:12 compute-0 podman[413981]: 2025-12-13 09:29:12.796366271 +0000 UTC m=+0.552724876 container died 5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 09:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3d342465ad965cc551a6b6c18ab2a54e640fad1596afbc610afd225439d879d-merged.mount: Deactivated successfully.
Dec 13 09:29:12 compute-0 podman[413981]: 2025-12-13 09:29:12.848422076 +0000 UTC m=+0.604780681 container remove 5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_almeida, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 09:29:12 compute-0 systemd[1]: libpod-conmon-5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b.scope: Deactivated successfully.
Dec 13 09:29:12 compute-0 sudo[413904]: pam_unix(sudo:session): session closed for user root
Dec 13 09:29:13 compute-0 sudo[414019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:29:13 compute-0 sudo[414019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:29:13 compute-0 sudo[414019]: pam_unix(sudo:session): session closed for user root
Dec 13 09:29:13 compute-0 sudo[414044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:29:13 compute-0 sudo[414044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:29:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3850: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 1011 B/s wr, 13 op/s
Dec 13 09:29:13 compute-0 podman[414081]: 2025-12-13 09:29:13.4078967 +0000 UTC m=+0.044951167 container create 903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:29:13 compute-0 systemd[1]: Started libpod-conmon-903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723.scope.
Dec 13 09:29:13 compute-0 podman[414081]: 2025-12-13 09:29:13.389982921 +0000 UTC m=+0.027037408 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:29:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:29:13 compute-0 podman[414081]: 2025-12-13 09:29:13.504482332 +0000 UTC m=+0.141536829 container init 903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 09:29:13 compute-0 podman[414081]: 2025-12-13 09:29:13.513383655 +0000 UTC m=+0.150438122 container start 903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 09:29:13 compute-0 podman[414081]: 2025-12-13 09:29:13.517292783 +0000 UTC m=+0.154347250 container attach 903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 09:29:13 compute-0 stoic_varahamihira[414095]: 167 167
Dec 13 09:29:13 compute-0 systemd[1]: libpod-903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723.scope: Deactivated successfully.
Dec 13 09:29:13 compute-0 podman[414081]: 2025-12-13 09:29:13.520831621 +0000 UTC m=+0.157886088 container died 903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2868d44251965fd730497215f4f3ab0f2705bacf1df0835a51387a550204da82-merged.mount: Deactivated successfully.
Dec 13 09:29:13 compute-0 podman[414081]: 2025-12-13 09:29:13.565734857 +0000 UTC m=+0.202789364 container remove 903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 09:29:13 compute-0 systemd[1]: libpod-conmon-903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723.scope: Deactivated successfully.
Dec 13 09:29:13 compute-0 podman[414121]: 2025-12-13 09:29:13.78767913 +0000 UTC m=+0.069225286 container create 6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 09:29:13 compute-0 systemd[1]: Started libpod-conmon-6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24.scope.
Dec 13 09:29:13 compute-0 podman[414121]: 2025-12-13 09:29:13.754025157 +0000 UTC m=+0.035571403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:29:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4380abdf5dd2a6ed6903be09a23264dd6f21ce36678bf06cb607345d9c442552/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4380abdf5dd2a6ed6903be09a23264dd6f21ce36678bf06cb607345d9c442552/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4380abdf5dd2a6ed6903be09a23264dd6f21ce36678bf06cb607345d9c442552/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4380abdf5dd2a6ed6903be09a23264dd6f21ce36678bf06cb607345d9c442552/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:29:13 compute-0 podman[414121]: 2025-12-13 09:29:13.909949596 +0000 UTC m=+0.191495782 container init 6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:29:13 compute-0 podman[414121]: 2025-12-13 09:29:13.919161076 +0000 UTC m=+0.200707232 container start 6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 09:29:13 compute-0 podman[414121]: 2025-12-13 09:29:13.923129976 +0000 UTC m=+0.204676162 container attach 6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:29:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:29:14 compute-0 ceph-mon[76537]: pgmap v3850: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 1011 B/s wr, 13 op/s
Dec 13 09:29:14 compute-0 lvm[414215]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:29:14 compute-0 lvm[414216]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:29:14 compute-0 lvm[414216]: VG ceph_vg1 finished
Dec 13 09:29:14 compute-0 lvm[414215]: VG ceph_vg0 finished
Dec 13 09:29:14 compute-0 lvm[414218]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:29:14 compute-0 lvm[414218]: VG ceph_vg2 finished
Dec 13 09:29:14 compute-0 silly_visvesvaraya[414137]: {}
Dec 13 09:29:14 compute-0 systemd[1]: libpod-6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24.scope: Deactivated successfully.
Dec 13 09:29:14 compute-0 podman[414121]: 2025-12-13 09:29:14.879262263 +0000 UTC m=+1.160808409 container died 6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:29:14 compute-0 systemd[1]: libpod-6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24.scope: Consumed 1.565s CPU time.
Dec 13 09:29:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4380abdf5dd2a6ed6903be09a23264dd6f21ce36678bf06cb607345d9c442552-merged.mount: Deactivated successfully.
Dec 13 09:29:14 compute-0 podman[414121]: 2025-12-13 09:29:14.941254757 +0000 UTC m=+1.222800903 container remove 6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 09:29:14 compute-0 systemd[1]: libpod-conmon-6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24.scope: Deactivated successfully.
Dec 13 09:29:14 compute-0 sudo[414044]: pam_unix(sudo:session): session closed for user root
Dec 13 09:29:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:29:15 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:29:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:29:15 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:29:15 compute-0 sudo[414234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:29:15 compute-0 sudo[414234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:29:15 compute-0 sudo[414234]: pam_unix(sudo:session): session closed for user root
Dec 13 09:29:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:29:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4288270553' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:29:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:29:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4288270553' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:29:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3851: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 13 09:29:15 compute-0 nova_compute[248510]: 2025-12-13 09:29:15.363 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:16 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:29:16 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:29:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/4288270553' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:29:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/4288270553' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:29:16 compute-0 ceph-mon[76537]: pgmap v3851: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 13 09:29:16 compute-0 nova_compute[248510]: 2025-12-13 09:29:16.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3852: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 13 09:29:18 compute-0 ceph-mon[76537]: pgmap v3852: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 13 09:29:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:29:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3853: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 13 09:29:19 compute-0 ceph-mon[76537]: pgmap v3853: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 13 09:29:20 compute-0 nova_compute[248510]: 2025-12-13 09:29:20.364 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:21 compute-0 nova_compute[248510]: 2025-12-13 09:29:21.070 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3854: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5195834882760674e-05 of space, bias 1.0, pg target 0.004558750464828202 quantized to 32 (current 32)
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.151001443556761e-06 of space, bias 1.0, pg target 0.0012453004330670282 quantized to 32 (current 32)
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.642278062309101e-07 of space, bias 4.0, pg target 0.0006770733674770921 quantized to 16 (current 32)
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:29:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:29:22 compute-0 ceph-mon[76537]: pgmap v3854: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Dec 13 09:29:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3855: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Dec 13 09:29:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:29:24 compute-0 ceph-mon[76537]: pgmap v3855: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Dec 13 09:29:24 compute-0 nova_compute[248510]: 2025-12-13 09:29:24.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:29:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3856: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 2 op/s
Dec 13 09:29:25 compute-0 nova_compute[248510]: 2025-12-13 09:29:25.366 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:25 compute-0 ceph-mon[76537]: pgmap v3856: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 2 op/s
Dec 13 09:29:26 compute-0 nova_compute[248510]: 2025-12-13 09:29:26.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:27 compute-0 podman[414261]: 2025-12-13 09:29:27.024147227 +0000 UTC m=+0.101049044 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 09:29:27 compute-0 podman[414259]: 2025-12-13 09:29:27.056826946 +0000 UTC m=+0.140037691 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:29:27 compute-0 podman[414260]: 2025-12-13 09:29:27.057257037 +0000 UTC m=+0.140416041 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Dec 13 09:29:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3857: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:28 compute-0 ceph-mon[76537]: pgmap v3857: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:29:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3858: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:30 compute-0 nova_compute[248510]: 2025-12-13 09:29:30.369 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:30 compute-0 ceph-mon[76537]: pgmap v3858: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:30 compute-0 nova_compute[248510]: 2025-12-13 09:29:30.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:29:31 compute-0 nova_compute[248510]: 2025-12-13 09:29:31.076 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3859: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:32 compute-0 ceph-mon[76537]: pgmap v3859: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3860: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:29:34 compute-0 ceph-mon[76537]: pgmap v3860: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3861: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:35 compute-0 nova_compute[248510]: 2025-12-13 09:29:35.371 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:35 compute-0 nova_compute[248510]: 2025-12-13 09:29:35.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:29:36 compute-0 nova_compute[248510]: 2025-12-13 09:29:36.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:36 compute-0 ceph-mon[76537]: pgmap v3861: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3862: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:37 compute-0 ceph-mon[76537]: pgmap v3862: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:29:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3863: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:39 compute-0 ceph-mon[76537]: pgmap v3863: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:29:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:29:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:29:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:29:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:29:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:29:40 compute-0 nova_compute[248510]: 2025-12-13 09:29:40.380 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:41 compute-0 nova_compute[248510]: 2025-12-13 09:29:41.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3864: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:42 compute-0 ceph-mon[76537]: pgmap v3864: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3865: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:43 compute-0 ceph-mon[76537]: pgmap v3865: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.656934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618183657176, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1518, "num_deletes": 253, "total_data_size": 2528944, "memory_usage": 2562600, "flush_reason": "Manual Compaction"}
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618183675975, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2472697, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75986, "largest_seqno": 77503, "table_properties": {"data_size": 2465528, "index_size": 4239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14823, "raw_average_key_size": 20, "raw_value_size": 2451142, "raw_average_value_size": 3330, "num_data_blocks": 189, "num_entries": 736, "num_filter_entries": 736, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618030, "oldest_key_time": 1765618030, "file_creation_time": 1765618183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 19106 microseconds, and 6674 cpu microseconds.
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.676035) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2472697 bytes OK
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.676107) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.678259) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.678330) EVENT_LOG_v1 {"time_micros": 1765618183678317, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.678366) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2522301, prev total WAL file size 2522301, number of live WAL files 2.
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.679917) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2414KB)], [182(10MB)]
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618183680010, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 13380437, "oldest_snapshot_seqno": -1}
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9412 keys, 11572786 bytes, temperature: kUnknown
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618183755893, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 11572786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11513228, "index_size": 34933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23557, "raw_key_size": 248118, "raw_average_key_size": 26, "raw_value_size": 11348815, "raw_average_value_size": 1205, "num_data_blocks": 1343, "num_entries": 9412, "num_filter_entries": 9412, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.756255) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 11572786 bytes
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.757720) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.1 rd, 152.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 10.4 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(10.1) write-amplify(4.7) OK, records in: 9934, records dropped: 522 output_compression: NoCompression
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.757740) EVENT_LOG_v1 {"time_micros": 1765618183757730, "job": 114, "event": "compaction_finished", "compaction_time_micros": 75969, "compaction_time_cpu_micros": 34249, "output_level": 6, "num_output_files": 1, "total_output_size": 11572786, "num_input_records": 9934, "num_output_records": 9412, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618183758332, "job": 114, "event": "table_file_deletion", "file_number": 184}
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618183760248, "job": 114, "event": "table_file_deletion", "file_number": 182}
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.679775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.760403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.760412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.760414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.760416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:29:43 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.760419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:29:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:29:44 compute-0 nova_compute[248510]: 2025-12-13 09:29:44.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:29:44 compute-0 nova_compute[248510]: 2025-12-13 09:29:44.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:29:44 compute-0 nova_compute[248510]: 2025-12-13 09:29:44.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:29:44 compute-0 nova_compute[248510]: 2025-12-13 09:29:44.788 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:29:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3866: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:45 compute-0 nova_compute[248510]: 2025-12-13 09:29:45.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:46 compute-0 nova_compute[248510]: 2025-12-13 09:29:46.083 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:46 compute-0 ceph-mon[76537]: pgmap v3866: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:46 compute-0 nova_compute[248510]: 2025-12-13 09:29:46.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:29:46 compute-0 nova_compute[248510]: 2025-12-13 09:29:46.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:29:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3867: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:48 compute-0 ceph-mon[76537]: pgmap v3867: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:48 compute-0 nova_compute[248510]: 2025-12-13 09:29:48.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:29:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:29:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3868: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:49 compute-0 nova_compute[248510]: 2025-12-13 09:29:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:29:49 compute-0 nova_compute[248510]: 2025-12-13 09:29:49.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:29:49 compute-0 nova_compute[248510]: 2025-12-13 09:29:49.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:29:49 compute-0 nova_compute[248510]: 2025-12-13 09:29:49.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:29:49 compute-0 nova_compute[248510]: 2025-12-13 09:29:49.804 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:29:49 compute-0 nova_compute[248510]: 2025-12-13 09:29:49.804 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:29:50 compute-0 nova_compute[248510]: 2025-12-13 09:29:50.377 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:29:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726806841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:29:50 compute-0 nova_compute[248510]: 2025-12-13 09:29:50.434 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:29:50 compute-0 ceph-mon[76537]: pgmap v3868: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:50 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/726806841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:29:50 compute-0 nova_compute[248510]: 2025-12-13 09:29:50.592 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:29:50 compute-0 nova_compute[248510]: 2025-12-13 09:29:50.593 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3495MB free_disk=59.987369677983224GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:29:50 compute-0 nova_compute[248510]: 2025-12-13 09:29:50.594 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:29:50 compute-0 nova_compute[248510]: 2025-12-13 09:29:50.594 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:29:50 compute-0 nova_compute[248510]: 2025-12-13 09:29:50.932 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:29:50 compute-0 nova_compute[248510]: 2025-12-13 09:29:50.932 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:29:50 compute-0 nova_compute[248510]: 2025-12-13 09:29:50.966 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:29:51 compute-0 nova_compute[248510]: 2025-12-13 09:29:51.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3869: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:29:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985874201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:29:51 compute-0 nova_compute[248510]: 2025-12-13 09:29:51.614 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:29:51 compute-0 nova_compute[248510]: 2025-12-13 09:29:51.620 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:29:51 compute-0 nova_compute[248510]: 2025-12-13 09:29:51.805 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:29:51 compute-0 nova_compute[248510]: 2025-12-13 09:29:51.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:29:51 compute-0 nova_compute[248510]: 2025-12-13 09:29:51.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:29:52 compute-0 ceph-mon[76537]: pgmap v3869: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2985874201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:29:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3870: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:53 compute-0 nova_compute[248510]: 2025-12-13 09:29:53.809 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:29:53 compute-0 nova_compute[248510]: 2025-12-13 09:29:53.810 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:29:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:29:54 compute-0 ceph-mon[76537]: pgmap v3870: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3871: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:55 compute-0 nova_compute[248510]: 2025-12-13 09:29:55.380 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:29:55.466 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:29:55.466 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:29:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:29:55.467 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:29:55 compute-0 ceph-mon[76537]: pgmap v3871: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:56 compute-0 nova_compute[248510]: 2025-12-13 09:29:56.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:29:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3872: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:57 compute-0 podman[414370]: 2025-12-13 09:29:57.98087071 +0000 UTC m=+0.057863831 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 09:29:57 compute-0 podman[414366]: 2025-12-13 09:29:57.993329962 +0000 UTC m=+0.076895238 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:29:58 compute-0 podman[414365]: 2025-12-13 09:29:58.004648086 +0000 UTC m=+0.094223763 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 09:29:58 compute-0 ceph-mon[76537]: pgmap v3872: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:58 compute-0 nova_compute[248510]: 2025-12-13 09:29:58.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:29:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:29:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3873: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:29:59 compute-0 ceph-mon[76537]: pgmap v3873: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:00 compute-0 nova_compute[248510]: 2025-12-13 09:30:00.382 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:01 compute-0 nova_compute[248510]: 2025-12-13 09:30:01.091 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3874: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:02 compute-0 ceph-mon[76537]: pgmap v3874: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3875: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:03 compute-0 ceph-mon[76537]: pgmap v3875: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:30:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3876: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:05 compute-0 nova_compute[248510]: 2025-12-13 09:30:05.386 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:05 compute-0 ceph-mon[76537]: pgmap v3876: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:06 compute-0 nova_compute[248510]: 2025-12-13 09:30:06.094 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3877: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:08 compute-0 ceph-mon[76537]: pgmap v3877: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:30:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3878: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:30:09
Dec 13 09:30:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:30:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:30:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'vms']
Dec 13 09:30:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:30:09 compute-0 ceph-mon[76537]: pgmap v3878: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:30:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:30:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:30:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:30:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:30:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:30:10 compute-0 nova_compute[248510]: 2025-12-13 09:30:10.387 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:11 compute-0 nova_compute[248510]: 2025-12-13 09:30:11.096 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:30:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:30:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:30:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:30:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:30:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:30:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:30:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:30:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:30:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:30:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3879: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:12 compute-0 ceph-mon[76537]: pgmap v3879: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3880: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:30:14 compute-0 ceph-mon[76537]: pgmap v3880: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:30:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3977364793' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:30:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:30:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3977364793' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:30:15 compute-0 sudo[414431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:30:15 compute-0 sudo[414431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:30:15 compute-0 sudo[414431]: pam_unix(sudo:session): session closed for user root
Dec 13 09:30:15 compute-0 sudo[414456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 09:30:15 compute-0 sudo[414456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:30:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3881: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:15 compute-0 nova_compute[248510]: 2025-12-13 09:30:15.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3977364793' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:30:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3977364793' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:30:15 compute-0 podman[414524]: 2025-12-13 09:30:15.691625632 +0000 UTC m=+0.067375190 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:30:15 compute-0 podman[414524]: 2025-12-13 09:30:15.784545401 +0000 UTC m=+0.160294969 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:30:16 compute-0 nova_compute[248510]: 2025-12-13 09:30:16.098 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:16 compute-0 ceph-mon[76537]: pgmap v3881: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:16 compute-0 sudo[414456]: pam_unix(sudo:session): session closed for user root
Dec 13 09:30:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:30:16 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:30:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:30:16 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:30:16 compute-0 sudo[414710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:30:16 compute-0 sudo[414710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:30:16 compute-0 sudo[414710]: pam_unix(sudo:session): session closed for user root
Dec 13 09:30:17 compute-0 sudo[414735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:30:17 compute-0 sudo[414735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:30:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3882: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:17 compute-0 sudo[414735]: pam_unix(sudo:session): session closed for user root
Dec 13 09:30:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:30:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:30:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:30:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:30:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:30:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:30:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:30:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:30:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:30:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:30:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:30:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:30:17 compute-0 sudo[414790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:30:17 compute-0 sudo[414790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:30:17 compute-0 sudo[414790]: pam_unix(sudo:session): session closed for user root
Dec 13 09:30:17 compute-0 sudo[414815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:30:17 compute-0 sudo[414815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:30:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:30:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:30:17 compute-0 ceph-mon[76537]: pgmap v3882: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:30:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:30:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:30:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:30:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:30:17 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:30:18 compute-0 podman[414852]: 2025-12-13 09:30:17.981259865 +0000 UTC m=+0.035791678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:30:18 compute-0 podman[414852]: 2025-12-13 09:30:18.09875537 +0000 UTC m=+0.153287183 container create b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 09:30:18 compute-0 systemd[1]: Started libpod-conmon-b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9.scope.
Dec 13 09:30:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:30:18 compute-0 podman[414852]: 2025-12-13 09:30:18.199007783 +0000 UTC m=+0.253539626 container init b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:30:18 compute-0 podman[414852]: 2025-12-13 09:30:18.209391184 +0000 UTC m=+0.263922967 container start b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:30:18 compute-0 podman[414852]: 2025-12-13 09:30:18.212664126 +0000 UTC m=+0.267195959 container attach b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elbakyan, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:30:18 compute-0 condescending_elbakyan[414868]: 167 167
Dec 13 09:30:18 compute-0 systemd[1]: libpod-b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9.scope: Deactivated successfully.
Dec 13 09:30:18 compute-0 podman[414852]: 2025-12-13 09:30:18.220120042 +0000 UTC m=+0.274651825 container died b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elbakyan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 09:30:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-55edef3dc718a03f03a0bc075238af2d4ce87ff8dfa81c21f44c97f062108788-merged.mount: Deactivated successfully.
Dec 13 09:30:18 compute-0 podman[414852]: 2025-12-13 09:30:18.270108396 +0000 UTC m=+0.324640169 container remove b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elbakyan, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 09:30:18 compute-0 systemd[1]: libpod-conmon-b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9.scope: Deactivated successfully.
Dec 13 09:30:18 compute-0 podman[414890]: 2025-12-13 09:30:18.428386133 +0000 UTC m=+0.024589497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:30:18 compute-0 podman[414890]: 2025-12-13 09:30:18.577724127 +0000 UTC m=+0.173927471 container create dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:30:18 compute-0 systemd[1]: Started libpod-conmon-dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51.scope.
Dec 13 09:30:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cd021074f428ca1805eab8be73fa4ca0637e9dc17115347701e8f462c8ff8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cd021074f428ca1805eab8be73fa4ca0637e9dc17115347701e8f462c8ff8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cd021074f428ca1805eab8be73fa4ca0637e9dc17115347701e8f462c8ff8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cd021074f428ca1805eab8be73fa4ca0637e9dc17115347701e8f462c8ff8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cd021074f428ca1805eab8be73fa4ca0637e9dc17115347701e8f462c8ff8d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:19 compute-0 podman[414890]: 2025-12-13 09:30:19.068826557 +0000 UTC m=+0.665029931 container init dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 09:30:19 compute-0 podman[414890]: 2025-12-13 09:30:19.078268154 +0000 UTC m=+0.674471498 container start dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:30:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:30:19 compute-0 podman[414890]: 2025-12-13 09:30:19.316134146 +0000 UTC m=+0.912337490 container attach dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:30:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3883: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:19 compute-0 modest_meninsky[414907]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:30:19 compute-0 modest_meninsky[414907]: --> All data devices are unavailable
Dec 13 09:30:19 compute-0 systemd[1]: libpod-dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51.scope: Deactivated successfully.
Dec 13 09:30:19 compute-0 podman[414890]: 2025-12-13 09:30:19.63902712 +0000 UTC m=+1.235230514 container died dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:30:19 compute-0 ceph-mon[76537]: pgmap v3883: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:20 compute-0 nova_compute[248510]: 2025-12-13 09:30:20.406 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-35cd021074f428ca1805eab8be73fa4ca0637e9dc17115347701e8f462c8ff8d-merged.mount: Deactivated successfully.
Dec 13 09:30:21 compute-0 nova_compute[248510]: 2025-12-13 09:30:21.101 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:21 compute-0 podman[414890]: 2025-12-13 09:30:21.175208236 +0000 UTC m=+2.771411580 container remove dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:30:21 compute-0 sudo[414815]: pam_unix(sudo:session): session closed for user root
Dec 13 09:30:21 compute-0 systemd[1]: libpod-conmon-dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51.scope: Deactivated successfully.
Dec 13 09:30:21 compute-0 sudo[414938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:30:21 compute-0 sudo[414938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:30:21 compute-0 sudo[414938]: pam_unix(sudo:session): session closed for user root
Dec 13 09:30:21 compute-0 sudo[414963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3884: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:21 compute-0 sudo[414963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5195834882760674e-05 of space, bias 1.0, pg target 0.004558750464828202 quantized to 32 (current 32)
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.151001443556761e-06 of space, bias 1.0, pg target 0.0012453004330670282 quantized to 32 (current 32)
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.642278062309101e-07 of space, bias 4.0, pg target 0.0006770733674770921 quantized to 16 (current 32)
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:30:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:30:21 compute-0 podman[415000]: 2025-12-13 09:30:21.671685402 +0000 UTC m=+0.028775952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:30:21 compute-0 ceph-mon[76537]: pgmap v3884: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:22 compute-0 podman[415000]: 2025-12-13 09:30:22.018629809 +0000 UTC m=+0.375720299 container create 5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_galileo, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:30:22 compute-0 systemd[1]: Started libpod-conmon-5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a.scope.
Dec 13 09:30:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:30:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3885: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:30:24 compute-0 podman[415000]: 2025-12-13 09:30:24.564814772 +0000 UTC m=+2.921905242 container init 5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_galileo, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:30:24 compute-0 podman[415000]: 2025-12-13 09:30:24.578325581 +0000 UTC m=+2.935416071 container start 5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 09:30:24 compute-0 zen_galileo[415016]: 167 167
Dec 13 09:30:24 compute-0 systemd[1]: libpod-5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a.scope: Deactivated successfully.
Dec 13 09:30:25 compute-0 podman[415000]: 2025-12-13 09:30:25.354563919 +0000 UTC m=+3.711654389 container attach 5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_galileo, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:30:25 compute-0 podman[415000]: 2025-12-13 09:30:25.355446461 +0000 UTC m=+3.712536921 container died 5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 09:30:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3886: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:25 compute-0 nova_compute[248510]: 2025-12-13 09:30:25.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:25 compute-0 ceph-mon[76537]: pgmap v3885: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:26 compute-0 nova_compute[248510]: 2025-12-13 09:30:26.103 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-aacc17f652493c1933223ea4888585b4294ab071b8d8c18e6646088b6aa7729a-merged.mount: Deactivated successfully.
Dec 13 09:30:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3887: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:27 compute-0 ceph-mon[76537]: pgmap v3886: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:28 compute-0 podman[415000]: 2025-12-13 09:30:28.468262728 +0000 UTC m=+6.825353168 container remove 5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:30:28 compute-0 systemd[1]: libpod-conmon-5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a.scope: Deactivated successfully.
Dec 13 09:30:28 compute-0 podman[415038]: 2025-12-13 09:30:28.605694843 +0000 UTC m=+0.070932669 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Dec 13 09:30:28 compute-0 podman[415037]: 2025-12-13 09:30:28.607109789 +0000 UTC m=+0.073360220 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:30:28 compute-0 podman[415035]: 2025-12-13 09:30:28.669363389 +0000 UTC m=+0.135807425 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 09:30:28 compute-0 podman[415098]: 2025-12-13 09:30:28.665298727 +0000 UTC m=+0.030581267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:30:29 compute-0 podman[415098]: 2025-12-13 09:30:29.075447468 +0000 UTC m=+0.440729978 container create 981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:30:29 compute-0 ceph-mon[76537]: pgmap v3887: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3888: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:29 compute-0 systemd[1]: Started libpod-conmon-981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262.scope.
Dec 13 09:30:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f336b9d5b8d2576e96b02920bddfa33fc7d663aa52862868f56bb1d75b71517/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f336b9d5b8d2576e96b02920bddfa33fc7d663aa52862868f56bb1d75b71517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f336b9d5b8d2576e96b02920bddfa33fc7d663aa52862868f56bb1d75b71517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f336b9d5b8d2576e96b02920bddfa33fc7d663aa52862868f56bb1d75b71517/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:30:30 compute-0 podman[415098]: 2025-12-13 09:30:30.261383427 +0000 UTC m=+1.626666017 container init 981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Dec 13 09:30:30 compute-0 podman[415098]: 2025-12-13 09:30:30.27109464 +0000 UTC m=+1.636377170 container start 981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:30:30 compute-0 nova_compute[248510]: 2025-12-13 09:30:30.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:30 compute-0 ceph-mon[76537]: pgmap v3888: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]: {
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:     "0": [
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:         {
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "devices": [
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "/dev/loop3"
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             ],
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_name": "ceph_lv0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_size": "21470642176",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "name": "ceph_lv0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "tags": {
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.cluster_name": "ceph",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.crush_device_class": "",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.encrypted": "0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.objectstore": "bluestore",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.osd_id": "0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.type": "block",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.vdo": "0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.with_tpm": "0"
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             },
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "type": "block",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "vg_name": "ceph_vg0"
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:         }
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:     ],
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:     "1": [
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:         {
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "devices": [
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "/dev/loop4"
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             ],
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_name": "ceph_lv1",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_size": "21470642176",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "name": "ceph_lv1",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "tags": {
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.cluster_name": "ceph",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.crush_device_class": "",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.encrypted": "0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.objectstore": "bluestore",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.osd_id": "1",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.type": "block",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.vdo": "0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.with_tpm": "0"
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             },
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "type": "block",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "vg_name": "ceph_vg1"
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:         }
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:     ],
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:     "2": [
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:         {
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "devices": [
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "/dev/loop5"
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             ],
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_name": "ceph_lv2",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_size": "21470642176",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "name": "ceph_lv2",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "tags": {
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.cluster_name": "ceph",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.crush_device_class": "",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.encrypted": "0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.objectstore": "bluestore",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.osd_id": "2",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.type": "block",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.vdo": "0",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:                 "ceph.with_tpm": "0"
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             },
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "type": "block",
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:             "vg_name": "ceph_vg2"
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:         }
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]:     ]
Dec 13 09:30:30 compute-0 laughing_mclaren[415124]: }
Dec 13 09:30:30 compute-0 systemd[1]: libpod-981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262.scope: Deactivated successfully.
Dec 13 09:30:30 compute-0 podman[415098]: 2025-12-13 09:30:30.955031905 +0000 UTC m=+2.320314505 container attach 981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:30:30 compute-0 podman[415098]: 2025-12-13 09:30:30.956763668 +0000 UTC m=+2.322046208 container died 981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 09:30:31 compute-0 nova_compute[248510]: 2025-12-13 09:30:31.107 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3889: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:31 compute-0 nova_compute[248510]: 2025-12-13 09:30:31.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:30:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3890: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:33 compute-0 ceph-mon[76537]: pgmap v3889: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f336b9d5b8d2576e96b02920bddfa33fc7d663aa52862868f56bb1d75b71517-merged.mount: Deactivated successfully.
Dec 13 09:30:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:30:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3891: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:35 compute-0 nova_compute[248510]: 2025-12-13 09:30:35.412 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:35 compute-0 nova_compute[248510]: 2025-12-13 09:30:35.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:30:36 compute-0 nova_compute[248510]: 2025-12-13 09:30:36.111 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:37 compute-0 ceph-mon[76537]: pgmap v3890: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3892: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:38 compute-0 podman[415098]: 2025-12-13 09:30:38.143430645 +0000 UTC m=+9.508713195 container remove 981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:30:38 compute-0 systemd[1]: libpod-conmon-981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262.scope: Deactivated successfully.
Dec 13 09:30:38 compute-0 sudo[414963]: pam_unix(sudo:session): session closed for user root
Dec 13 09:30:38 compute-0 sudo[415146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:30:38 compute-0 sudo[415146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:30:38 compute-0 sudo[415146]: pam_unix(sudo:session): session closed for user root
Dec 13 09:30:38 compute-0 sudo[415171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:30:38 compute-0 sudo[415171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:30:38 compute-0 podman[415209]: 2025-12-13 09:30:38.689854732 +0000 UTC m=+0.027684165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:30:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3893: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:39 compute-0 podman[415209]: 2025-12-13 09:30:39.628951082 +0000 UTC m=+0.966780505 container create 726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_northcutt, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 09:30:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:30:39 compute-0 ceph-mon[76537]: pgmap v3891: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:39 compute-0 ceph-mon[76537]: pgmap v3892: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:40 compute-0 systemd[1]: Started libpod-conmon-726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba.scope.
Dec 13 09:30:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:30:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:30:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:30:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:30:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:30:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:30:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:30:40 compute-0 nova_compute[248510]: 2025-12-13 09:30:40.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:40 compute-0 podman[415209]: 2025-12-13 09:30:40.538457221 +0000 UTC m=+1.876286634 container init 726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_northcutt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Dec 13 09:30:40 compute-0 podman[415209]: 2025-12-13 09:30:40.551097118 +0000 UTC m=+1.888926501 container start 726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:30:40 compute-0 determined_northcutt[415225]: 167 167
Dec 13 09:30:40 compute-0 systemd[1]: libpod-726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba.scope: Deactivated successfully.
Dec 13 09:30:41 compute-0 podman[415209]: 2025-12-13 09:30:41.041671785 +0000 UTC m=+2.379501208 container attach 726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 09:30:41 compute-0 podman[415209]: 2025-12-13 09:30:41.043332607 +0000 UTC m=+2.381162040 container died 726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 09:30:41 compute-0 nova_compute[248510]: 2025-12-13 09:30:41.114 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:41 compute-0 ceph-mon[76537]: pgmap v3893: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ef48f9b5d1e57e657cc2714904214f1eb81723ad1fef815b1d978848832244d-merged.mount: Deactivated successfully.
Dec 13 09:30:41 compute-0 nova_compute[248510]: 2025-12-13 09:30:41.378 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:30:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3894: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:42 compute-0 podman[415209]: 2025-12-13 09:30:42.469185797 +0000 UTC m=+3.807015210 container remove 726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Dec 13 09:30:42 compute-0 systemd[1]: libpod-conmon-726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba.scope: Deactivated successfully.
Dec 13 09:30:42 compute-0 ceph-mon[76537]: pgmap v3894: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:42 compute-0 podman[415250]: 2025-12-13 09:30:42.727664076 +0000 UTC m=+0.049369548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:30:42 compute-0 podman[415250]: 2025-12-13 09:30:42.953798275 +0000 UTC m=+0.275503717 container create b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Dec 13 09:30:43 compute-0 systemd[1]: Started libpod-conmon-b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc.scope.
Dec 13 09:30:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699ad42857095961b9a6674e09eff0491055f00513c559e14cdaacb34675f4d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699ad42857095961b9a6674e09eff0491055f00513c559e14cdaacb34675f4d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699ad42857095961b9a6674e09eff0491055f00513c559e14cdaacb34675f4d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699ad42857095961b9a6674e09eff0491055f00513c559e14cdaacb34675f4d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:30:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3895: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:43 compute-0 podman[415250]: 2025-12-13 09:30:43.490248792 +0000 UTC m=+0.811954304 container init b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 09:30:43 compute-0 podman[415250]: 2025-12-13 09:30:43.496671303 +0000 UTC m=+0.818376735 container start b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:30:43 compute-0 podman[415250]: 2025-12-13 09:30:43.78617087 +0000 UTC m=+1.107876322 container attach b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 09:30:44 compute-0 lvm[415344]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:30:44 compute-0 lvm[415344]: VG ceph_vg1 finished
Dec 13 09:30:44 compute-0 lvm[415345]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:30:44 compute-0 lvm[415345]: VG ceph_vg0 finished
Dec 13 09:30:44 compute-0 lvm[415347]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:30:44 compute-0 lvm[415347]: VG ceph_vg2 finished
Dec 13 09:30:44 compute-0 agitated_mendeleev[415266]: {}
Dec 13 09:30:44 compute-0 systemd[1]: libpod-b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc.scope: Deactivated successfully.
Dec 13 09:30:44 compute-0 systemd[1]: libpod-b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc.scope: Consumed 1.503s CPU time.
Dec 13 09:30:44 compute-0 podman[415350]: 2025-12-13 09:30:44.431170378 +0000 UTC m=+0.034183928 container died b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:30:44 compute-0 ceph-mon[76537]: pgmap v3895: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:44 compute-0 nova_compute[248510]: 2025-12-13 09:30:44.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:30:44 compute-0 nova_compute[248510]: 2025-12-13 09:30:44.777 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:30:44 compute-0 nova_compute[248510]: 2025-12-13 09:30:44.777 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:30:44 compute-0 nova_compute[248510]: 2025-12-13 09:30:44.802 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:30:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-699ad42857095961b9a6674e09eff0491055f00513c559e14cdaacb34675f4d9-merged.mount: Deactivated successfully.
Dec 13 09:30:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3896: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:45 compute-0 nova_compute[248510]: 2025-12-13 09:30:45.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:45 compute-0 podman[415350]: 2025-12-13 09:30:45.434105559 +0000 UTC m=+1.037119109 container remove b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:30:45 compute-0 systemd[1]: libpod-conmon-b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc.scope: Deactivated successfully.
Dec 13 09:30:45 compute-0 sudo[415171]: pam_unix(sudo:session): session closed for user root
Dec 13 09:30:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:30:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:30:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:30:45 compute-0 ceph-mon[76537]: pgmap v3896: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:30:45 compute-0 sudo[415366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:30:45 compute-0 sudo[415366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:30:45 compute-0 sudo[415366]: pam_unix(sudo:session): session closed for user root
Dec 13 09:30:46 compute-0 nova_compute[248510]: 2025-12-13 09:30:46.116 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:46 compute-0 nova_compute[248510]: 2025-12-13 09:30:46.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:30:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:30:46 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:30:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3897: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:48 compute-0 ceph-mon[76537]: pgmap v3897: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:48 compute-0 nova_compute[248510]: 2025-12-13 09:30:48.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:30:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:30:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 17K writes, 77K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
                                           Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1351 writes, 6107 keys, 1351 commit groups, 1.0 writes per commit group, ingest: 9.18 MB, 0.02 MB/s
                                           Interval WAL: 1351 writes, 1351 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     25.0      4.00              0.36        57    0.070       0      0       0.0       0.0
                                             L6      1/0   11.04 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.0     77.7     66.0      7.62              1.63        56    0.136    398K    29K       0.0       0.0
                                            Sum      1/0   11.04 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.0     51.0     51.9     11.62              1.99       113    0.103    398K    29K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.3     44.9     45.1      1.33              0.20        10    0.133     48K   2535       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     77.7     66.0      7.62              1.63        56    0.136    398K    29K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     25.0      3.99              0.36        56    0.071       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.097, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.59 GB write, 0.08 MB/s write, 0.58 GB read, 0.08 MB/s read, 11.6 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 68.32 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000898 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4211,65.50 MB,21.545%) FilterBlock(114,1.08 MB,0.354039%) IndexBlock(114,1.75 MB,0.576064%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 09:30:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3898: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:49 compute-0 nova_compute[248510]: 2025-12-13 09:30:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:30:49 compute-0 nova_compute[248510]: 2025-12-13 09:30:49.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:30:49 compute-0 nova_compute[248510]: 2025-12-13 09:30:49.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:30:49 compute-0 nova_compute[248510]: 2025-12-13 09:30:49.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:30:49 compute-0 nova_compute[248510]: 2025-12-13 09:30:49.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:30:49 compute-0 nova_compute[248510]: 2025-12-13 09:30:49.810 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:30:49 compute-0 nova_compute[248510]: 2025-12-13 09:30:49.810 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:30:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:30:50 compute-0 ceph-mon[76537]: pgmap v3898: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:30:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2487398291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:30:50 compute-0 nova_compute[248510]: 2025-12-13 09:30:50.421 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:50 compute-0 nova_compute[248510]: 2025-12-13 09:30:50.438 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:30:50 compute-0 nova_compute[248510]: 2025-12-13 09:30:50.610 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:30:50 compute-0 nova_compute[248510]: 2025-12-13 09:30:50.611 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3469MB free_disk=59.987369677983224GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:30:50 compute-0 nova_compute[248510]: 2025-12-13 09:30:50.612 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:30:50 compute-0 nova_compute[248510]: 2025-12-13 09:30:50.612 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:30:50 compute-0 nova_compute[248510]: 2025-12-13 09:30:50.681 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:30:50 compute-0 nova_compute[248510]: 2025-12-13 09:30:50.682 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:30:50 compute-0 nova_compute[248510]: 2025-12-13 09:30:50.759 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:30:51 compute-0 nova_compute[248510]: 2025-12-13 09:30:51.118 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:30:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372129658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:30:51 compute-0 nova_compute[248510]: 2025-12-13 09:30:51.332 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:30:51 compute-0 nova_compute[248510]: 2025-12-13 09:30:51.338 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:30:51 compute-0 nova_compute[248510]: 2025-12-13 09:30:51.362 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:30:51 compute-0 nova_compute[248510]: 2025-12-13 09:30:51.363 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:30:51 compute-0 nova_compute[248510]: 2025-12-13 09:30:51.363 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:30:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3899: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2487398291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:30:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1372129658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:30:53 compute-0 ceph-mon[76537]: pgmap v3899: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3900: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:54 compute-0 ceph-mon[76537]: pgmap v3900: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:30:55 compute-0 nova_compute[248510]: 2025-12-13 09:30:55.363 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:30:55 compute-0 nova_compute[248510]: 2025-12-13 09:30:55.364 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:30:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3901: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:55 compute-0 nova_compute[248510]: 2025-12-13 09:30:55.423 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:30:55.466 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:30:55.467 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:30:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:30:55.467 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:30:56 compute-0 ceph-mon[76537]: pgmap v3901: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:56 compute-0 nova_compute[248510]: 2025-12-13 09:30:56.122 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:30:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3902: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:58 compute-0 ceph-mon[76537]: pgmap v3902: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:30:58 compute-0 podman[415436]: 2025-12-13 09:30:58.993901019 +0000 UTC m=+0.078880459 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:30:59 compute-0 podman[415437]: 2025-12-13 09:30:59.016139016 +0000 UTC m=+0.096593782 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:30:59 compute-0 podman[415435]: 2025-12-13 09:30:59.047849201 +0000 UTC m=+0.134759269 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 09:30:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Dec 13 09:30:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Dec 13 09:30:59 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Dec 13 09:30:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3904: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 102 B/s wr, 0 op/s
Dec 13 09:31:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:31:00 compute-0 ceph-mon[76537]: osdmap e320: 3 total, 3 up, 3 in
Dec 13 09:31:00 compute-0 ceph-mon[76537]: pgmap v3904: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 102 B/s wr, 0 op/s
Dec 13 09:31:00 compute-0 nova_compute[248510]: 2025-12-13 09:31:00.424 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:00 compute-0 nova_compute[248510]: 2025-12-13 09:31:00.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:01 compute-0 nova_compute[248510]: 2025-12-13 09:31:01.124 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Dec 13 09:31:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Dec 13 09:31:01 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Dec 13 09:31:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3906: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 383 B/s rd, 255 B/s wr, 1 op/s
Dec 13 09:31:02 compute-0 ceph-mon[76537]: osdmap e321: 3 total, 3 up, 3 in
Dec 13 09:31:02 compute-0 ceph-mon[76537]: pgmap v3906: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 383 B/s rd, 255 B/s wr, 1 op/s
Dec 13 09:31:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3907: 321 pgs: 321 active+clean; 16 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.0 MiB/s wr, 24 op/s
Dec 13 09:31:04 compute-0 ceph-mon[76537]: pgmap v3907: 321 pgs: 321 active+clean; 16 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.0 MiB/s wr, 24 op/s
Dec 13 09:31:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:31:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3908: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 13 09:31:05 compute-0 nova_compute[248510]: 2025-12-13 09:31:05.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:06 compute-0 nova_compute[248510]: 2025-12-13 09:31:06.127 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:06 compute-0 ceph-mon[76537]: pgmap v3908: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 13 09:31:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3909: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 5.0 MiB/s wr, 46 op/s
Dec 13 09:31:08 compute-0 ceph-mon[76537]: pgmap v3909: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 5.0 MiB/s wr, 46 op/s
Dec 13 09:31:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3910: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 37 op/s
Dec 13 09:31:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:31:09
Dec 13 09:31:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:31:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:31:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.log']
Dec 13 09:31:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:31:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:31:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:31:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:31:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:31:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:31:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:31:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:31:10 compute-0 nova_compute[248510]: 2025-12-13 09:31:10.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:10 compute-0 ceph-mon[76537]: pgmap v3910: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 37 op/s
Dec 13 09:31:11 compute-0 nova_compute[248510]: 2025-12-13 09:31:11.129 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:31:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:31:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:31:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:31:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:31:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:31:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:31:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:31:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:31:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:31:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3911: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 37 op/s
Dec 13 09:31:11 compute-0 ceph-mon[76537]: pgmap v3911: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 37 op/s
Dec 13 09:31:12 compute-0 nova_compute[248510]: 2025-12-13 09:31:12.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:12 compute-0 nova_compute[248510]: 2025-12-13 09:31:12.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 09:31:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3912: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Dec 13 09:31:13 compute-0 nova_compute[248510]: 2025-12-13 09:31:13.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:31:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:31:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2635968025' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:31:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:31:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2635968025' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:31:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3913: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 2.1 MiB/s wr, 15 op/s
Dec 13 09:31:15 compute-0 nova_compute[248510]: 2025-12-13 09:31:15.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:15 compute-0 ceph-mon[76537]: pgmap v3912: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Dec 13 09:31:16 compute-0 nova_compute[248510]: 2025-12-13 09:31:16.131 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2635968025' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:31:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2635968025' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:31:17 compute-0 ceph-mon[76537]: pgmap v3913: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 2.1 MiB/s wr, 15 op/s
Dec 13 09:31:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3914: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:18 compute-0 ceph-mon[76537]: pgmap v3914: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3915: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:31:20 compute-0 nova_compute[248510]: 2025-12-13 09:31:20.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:20 compute-0 ceph-mon[76537]: pgmap v3915: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:21 compute-0 nova_compute[248510]: 2025-12-13 09:31:21.134 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3916: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5217104235718227e-05 of space, bias 1.0, pg target 0.004565131270715468 quantized to 32 (current 32)
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006700045834095627 of space, bias 1.0, pg target 0.20100137502286883 quantized to 32 (current 32)
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64414107132728e-07 of space, bias 4.0, pg target 0.0006772969285592736 quantized to 16 (current 32)
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:31:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:31:22 compute-0 ceph-mon[76537]: pgmap v3916: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3917: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:24 compute-0 ceph-mon[76537]: pgmap v3917: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:31:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:31:25.305 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:31:25 compute-0 nova_compute[248510]: 2025-12-13 09:31:25.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:25 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:31:25.305 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:31:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3918: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:25 compute-0 nova_compute[248510]: 2025-12-13 09:31:25.477 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:25 compute-0 ceph-mon[76537]: pgmap v3918: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:26 compute-0 nova_compute[248510]: 2025-12-13 09:31:26.137 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3919: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:28 compute-0 ceph-mon[76537]: pgmap v3919: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:28 compute-0 nova_compute[248510]: 2025-12-13 09:31:28.843 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3920: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:29 compute-0 podman[415498]: 2025-12-13 09:31:29.977119976 +0000 UTC m=+0.060522818 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:31:29 compute-0 podman[415499]: 2025-12-13 09:31:29.998033821 +0000 UTC m=+0.070009146 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:31:30 compute-0 podman[415497]: 2025-12-13 09:31:30.012059132 +0000 UTC m=+0.100026118 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 09:31:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:31:30 compute-0 nova_compute[248510]: 2025-12-13 09:31:30.479 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:30 compute-0 ceph-mon[76537]: pgmap v3920: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:31 compute-0 nova_compute[248510]: 2025-12-13 09:31:31.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3921: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:31:32.308 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:31:32 compute-0 ceph-mon[76537]: pgmap v3921: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3922: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:33 compute-0 nova_compute[248510]: 2025-12-13 09:31:33.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:34 compute-0 ceph-mon[76537]: pgmap v3922: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:31:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3923: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:35 compute-0 nova_compute[248510]: 2025-12-13 09:31:35.481 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:35 compute-0 ceph-mon[76537]: pgmap v3923: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:35 compute-0 nova_compute[248510]: 2025-12-13 09:31:35.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:36 compute-0 nova_compute[248510]: 2025-12-13 09:31:36.142 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3924: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:38 compute-0 sshd-session[415561]: Invalid user avalanche from 80.94.92.165 port 42038
Dec 13 09:31:38 compute-0 sshd-session[415561]: Connection closed by invalid user avalanche 80.94.92.165 port 42038 [preauth]
Dec 13 09:31:38 compute-0 ceph-mon[76537]: pgmap v3924: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3925: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 170 B/s wr, 5 op/s
Dec 13 09:31:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Dec 13 09:31:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:31:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:31:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:31:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:31:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:31:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:31:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Dec 13 09:31:40 compute-0 ceph-mon[76537]: pgmap v3925: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 170 B/s wr, 5 op/s
Dec 13 09:31:40 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Dec 13 09:31:40 compute-0 nova_compute[248510]: 2025-12-13 09:31:40.484 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:41 compute-0 nova_compute[248510]: 2025-12-13 09:31:41.145 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3927: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 204 B/s wr, 6 op/s
Dec 13 09:31:41 compute-0 ceph-mon[76537]: osdmap e322: 3 total, 3 up, 3 in
Dec 13 09:31:41 compute-0 nova_compute[248510]: 2025-12-13 09:31:41.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:41 compute-0 nova_compute[248510]: 2025-12-13 09:31:41.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 09:31:41 compute-0 nova_compute[248510]: 2025-12-13 09:31:41.794 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 09:31:42 compute-0 ceph-mon[76537]: pgmap v3927: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 204 B/s wr, 6 op/s
Dec 13 09:31:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3928: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 613 B/s wr, 18 op/s
Dec 13 09:31:43 compute-0 ceph-mon[76537]: pgmap v3928: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 613 B/s wr, 18 op/s
Dec 13 09:31:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:31:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3929: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 09:31:45 compute-0 nova_compute[248510]: 2025-12-13 09:31:45.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:45 compute-0 nova_compute[248510]: 2025-12-13 09:31:45.794 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:45 compute-0 nova_compute[248510]: 2025-12-13 09:31:45.795 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:31:45 compute-0 nova_compute[248510]: 2025-12-13 09:31:45.795 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:31:45 compute-0 nova_compute[248510]: 2025-12-13 09:31:45.816 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:31:46 compute-0 sudo[415563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:31:46 compute-0 sudo[415563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:31:46 compute-0 sudo[415563]: pam_unix(sudo:session): session closed for user root
Dec 13 09:31:46 compute-0 sudo[415588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:31:46 compute-0 sudo[415588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:31:46 compute-0 nova_compute[248510]: 2025-12-13 09:31:46.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:46 compute-0 ceph-mon[76537]: pgmap v3929: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 09:31:46 compute-0 sudo[415588]: pam_unix(sudo:session): session closed for user root
Dec 13 09:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:31:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:31:46 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:31:46 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:31:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:31:46 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:31:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:31:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:31:46 compute-0 nova_compute[248510]: 2025-12-13 09:31:46.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:46 compute-0 sudo[415644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:31:46 compute-0 sudo[415644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:31:46 compute-0 sudo[415644]: pam_unix(sudo:session): session closed for user root
Dec 13 09:31:46 compute-0 sudo[415669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:31:46 compute-0 sudo[415669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:31:47 compute-0 podman[415706]: 2025-12-13 09:31:47.169981524 +0000 UTC m=+0.057128673 container create 00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:31:47 compute-0 systemd[1]: Started libpod-conmon-00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e.scope.
Dec 13 09:31:47 compute-0 podman[415706]: 2025-12-13 09:31:47.144758112 +0000 UTC m=+0.031905341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:31:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:31:47 compute-0 podman[415706]: 2025-12-13 09:31:47.279281974 +0000 UTC m=+0.166429153 container init 00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_faraday, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:31:47 compute-0 podman[415706]: 2025-12-13 09:31:47.293160482 +0000 UTC m=+0.180307671 container start 00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 09:31:47 compute-0 podman[415706]: 2025-12-13 09:31:47.297721236 +0000 UTC m=+0.184868415 container attach 00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 09:31:47 compute-0 competent_faraday[415722]: 167 167
Dec 13 09:31:47 compute-0 systemd[1]: libpod-00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e.scope: Deactivated successfully.
Dec 13 09:31:47 compute-0 conmon[415722]: conmon 00285e0473e7f64ac435 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e.scope/container/memory.events
Dec 13 09:31:47 compute-0 podman[415706]: 2025-12-13 09:31:47.304132907 +0000 UTC m=+0.191280096 container died 00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 09:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ace8d9907bf831be6c850b46cdb4bdf83267df6422453da9b873941a3bc26c10-merged.mount: Deactivated successfully.
Dec 13 09:31:47 compute-0 podman[415706]: 2025-12-13 09:31:47.358699165 +0000 UTC m=+0.245846314 container remove 00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:31:47 compute-0 systemd[1]: libpod-conmon-00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e.scope: Deactivated successfully.
Dec 13 09:31:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3930: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 09:31:47 compute-0 podman[415745]: 2025-12-13 09:31:47.518709606 +0000 UTC m=+0.043818340 container create 9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:31:47 compute-0 systemd[1]: Started libpod-conmon-9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df.scope.
Dec 13 09:31:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:31:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:31:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:31:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:31:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:31:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:31:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a042a4874be703f5990d90214da5fdf1e931dde095a29c2204f33f4aa8abfdd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a042a4874be703f5990d90214da5fdf1e931dde095a29c2204f33f4aa8abfdd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a042a4874be703f5990d90214da5fdf1e931dde095a29c2204f33f4aa8abfdd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a042a4874be703f5990d90214da5fdf1e931dde095a29c2204f33f4aa8abfdd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a042a4874be703f5990d90214da5fdf1e931dde095a29c2204f33f4aa8abfdd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:47 compute-0 podman[415745]: 2025-12-13 09:31:47.501964976 +0000 UTC m=+0.027073730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:31:47 compute-0 podman[415745]: 2025-12-13 09:31:47.603932002 +0000 UTC m=+0.129040746 container init 9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 09:31:47 compute-0 podman[415745]: 2025-12-13 09:31:47.619832691 +0000 UTC m=+0.144941435 container start 9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 13 09:31:47 compute-0 podman[415745]: 2025-12-13 09:31:47.625239656 +0000 UTC m=+0.150348390 container attach 9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:31:48 compute-0 eager_leavitt[415762]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:31:48 compute-0 eager_leavitt[415762]: --> All data devices are unavailable
Dec 13 09:31:48 compute-0 systemd[1]: libpod-9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df.scope: Deactivated successfully.
Dec 13 09:31:48 compute-0 podman[415745]: 2025-12-13 09:31:48.105555566 +0000 UTC m=+0.630664300 container died 9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 09:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a042a4874be703f5990d90214da5fdf1e931dde095a29c2204f33f4aa8abfdd4-merged.mount: Deactivated successfully.
Dec 13 09:31:48 compute-0 podman[415745]: 2025-12-13 09:31:48.177051478 +0000 UTC m=+0.702160222 container remove 9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 09:31:48 compute-0 systemd[1]: libpod-conmon-9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df.scope: Deactivated successfully.
Dec 13 09:31:48 compute-0 sudo[415669]: pam_unix(sudo:session): session closed for user root
Dec 13 09:31:48 compute-0 sudo[415797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:31:48 compute-0 sudo[415797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:31:48 compute-0 sudo[415797]: pam_unix(sudo:session): session closed for user root
Dec 13 09:31:48 compute-0 sudo[415822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:31:48 compute-0 sudo[415822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:31:48 compute-0 ceph-mon[76537]: pgmap v3930: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 09:31:48 compute-0 podman[415861]: 2025-12-13 09:31:48.748886723 +0000 UTC m=+0.068123169 container create b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:31:48 compute-0 systemd[1]: Started libpod-conmon-b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0.scope.
Dec 13 09:31:48 compute-0 podman[415861]: 2025-12-13 09:31:48.726934812 +0000 UTC m=+0.046171298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:31:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:31:48 compute-0 podman[415861]: 2025-12-13 09:31:48.849766151 +0000 UTC m=+0.169002607 container init b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:31:48 compute-0 podman[415861]: 2025-12-13 09:31:48.856602723 +0000 UTC m=+0.175839169 container start b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:31:48 compute-0 podman[415861]: 2025-12-13 09:31:48.860010478 +0000 UTC m=+0.179246924 container attach b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_blackwell, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:31:48 compute-0 interesting_blackwell[415877]: 167 167
Dec 13 09:31:48 compute-0 systemd[1]: libpod-b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0.scope: Deactivated successfully.
Dec 13 09:31:48 compute-0 conmon[415877]: conmon b9dada57b6b656f871f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0.scope/container/memory.events
Dec 13 09:31:48 compute-0 podman[415861]: 2025-12-13 09:31:48.866282555 +0000 UTC m=+0.185519051 container died b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_blackwell, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1f2793a815c206edc09f6814ea15fac1b1ee91c5ef2115d38b9714c8b867430-merged.mount: Deactivated successfully.
Dec 13 09:31:48 compute-0 podman[415861]: 2025-12-13 09:31:48.917698274 +0000 UTC m=+0.236934720 container remove b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_blackwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 09:31:48 compute-0 systemd[1]: libpod-conmon-b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0.scope: Deactivated successfully.
Dec 13 09:31:49 compute-0 podman[415900]: 2025-12-13 09:31:49.105791799 +0000 UTC m=+0.048139948 container create 7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Dec 13 09:31:49 compute-0 systemd[1]: Started libpod-conmon-7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072.scope.
Dec 13 09:31:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acc9db54e4e948ac316dbc8bbfadb1e6f25a59a5afb4c593014e464c87c532/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acc9db54e4e948ac316dbc8bbfadb1e6f25a59a5afb4c593014e464c87c532/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acc9db54e4e948ac316dbc8bbfadb1e6f25a59a5afb4c593014e464c87c532/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acc9db54e4e948ac316dbc8bbfadb1e6f25a59a5afb4c593014e464c87c532/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:49 compute-0 podman[415900]: 2025-12-13 09:31:49.084660219 +0000 UTC m=+0.027008388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:31:49 compute-0 podman[415900]: 2025-12-13 09:31:49.184984944 +0000 UTC m=+0.127333093 container init 7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_newton, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:31:49 compute-0 podman[415900]: 2025-12-13 09:31:49.192038431 +0000 UTC m=+0.134386600 container start 7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_newton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 09:31:49 compute-0 podman[415900]: 2025-12-13 09:31:49.196323678 +0000 UTC m=+0.138671837 container attach 7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 09:31:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3931: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Dec 13 09:31:49 compute-0 goofy_newton[415917]: {
Dec 13 09:31:49 compute-0 goofy_newton[415917]:     "0": [
Dec 13 09:31:49 compute-0 goofy_newton[415917]:         {
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "devices": [
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "/dev/loop3"
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             ],
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_name": "ceph_lv0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_size": "21470642176",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "name": "ceph_lv0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "tags": {
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.cluster_name": "ceph",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.crush_device_class": "",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.encrypted": "0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.objectstore": "bluestore",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.osd_id": "0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.type": "block",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.vdo": "0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.with_tpm": "0"
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             },
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "type": "block",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "vg_name": "ceph_vg0"
Dec 13 09:31:49 compute-0 goofy_newton[415917]:         }
Dec 13 09:31:49 compute-0 goofy_newton[415917]:     ],
Dec 13 09:31:49 compute-0 goofy_newton[415917]:     "1": [
Dec 13 09:31:49 compute-0 goofy_newton[415917]:         {
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "devices": [
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "/dev/loop4"
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             ],
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_name": "ceph_lv1",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_size": "21470642176",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "name": "ceph_lv1",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "tags": {
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.cluster_name": "ceph",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.crush_device_class": "",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.encrypted": "0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.objectstore": "bluestore",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.osd_id": "1",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.type": "block",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.vdo": "0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.with_tpm": "0"
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             },
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "type": "block",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "vg_name": "ceph_vg1"
Dec 13 09:31:49 compute-0 goofy_newton[415917]:         }
Dec 13 09:31:49 compute-0 goofy_newton[415917]:     ],
Dec 13 09:31:49 compute-0 goofy_newton[415917]:     "2": [
Dec 13 09:31:49 compute-0 goofy_newton[415917]:         {
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "devices": [
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "/dev/loop5"
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             ],
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_name": "ceph_lv2",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_size": "21470642176",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "name": "ceph_lv2",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "tags": {
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.cluster_name": "ceph",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.crush_device_class": "",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.encrypted": "0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.objectstore": "bluestore",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.osd_id": "2",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.type": "block",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.vdo": "0",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:                 "ceph.with_tpm": "0"
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             },
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "type": "block",
Dec 13 09:31:49 compute-0 goofy_newton[415917]:             "vg_name": "ceph_vg2"
Dec 13 09:31:49 compute-0 goofy_newton[415917]:         }
Dec 13 09:31:49 compute-0 goofy_newton[415917]:     ]
Dec 13 09:31:49 compute-0 goofy_newton[415917]: }
Dec 13 09:31:49 compute-0 systemd[1]: libpod-7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072.scope: Deactivated successfully.
Dec 13 09:31:49 compute-0 podman[415900]: 2025-12-13 09:31:49.520755011 +0000 UTC m=+0.463103180 container died 7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_newton, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-63acc9db54e4e948ac316dbc8bbfadb1e6f25a59a5afb4c593014e464c87c532-merged.mount: Deactivated successfully.
Dec 13 09:31:49 compute-0 podman[415900]: 2025-12-13 09:31:49.564337423 +0000 UTC m=+0.506685572 container remove 7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 09:31:49 compute-0 systemd[1]: libpod-conmon-7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072.scope: Deactivated successfully.
Dec 13 09:31:49 compute-0 ceph-mon[76537]: pgmap v3931: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Dec 13 09:31:49 compute-0 sudo[415822]: pam_unix(sudo:session): session closed for user root
Dec 13 09:31:49 compute-0 sudo[415938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:31:49 compute-0 sudo[415938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:31:49 compute-0 sudo[415938]: pam_unix(sudo:session): session closed for user root
Dec 13 09:31:49 compute-0 sudo[415963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:31:49 compute-0 nova_compute[248510]: 2025-12-13 09:31:49.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:49 compute-0 sudo[415963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:31:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:31:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Dec 13 09:31:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Dec 13 09:31:50 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Dec 13 09:31:50 compute-0 podman[416001]: 2025-12-13 09:31:50.052969061 +0000 UTC m=+0.046717792 container create c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_margulis, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 09:31:50 compute-0 systemd[1]: Started libpod-conmon-c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7.scope.
Dec 13 09:31:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:31:50 compute-0 podman[416001]: 2025-12-13 09:31:50.032497848 +0000 UTC m=+0.026246599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:31:50 compute-0 podman[416001]: 2025-12-13 09:31:50.139653244 +0000 UTC m=+0.133401995 container init c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_margulis, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 09:31:50 compute-0 podman[416001]: 2025-12-13 09:31:50.147824339 +0000 UTC m=+0.141573070 container start c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_margulis, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:31:50 compute-0 podman[416001]: 2025-12-13 09:31:50.151834419 +0000 UTC m=+0.145583170 container attach c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_margulis, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:31:50 compute-0 zen_margulis[416017]: 167 167
Dec 13 09:31:50 compute-0 systemd[1]: libpod-c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7.scope: Deactivated successfully.
Dec 13 09:31:50 compute-0 podman[416001]: 2025-12-13 09:31:50.154997448 +0000 UTC m=+0.148746179 container died c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_margulis, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 09:31:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-929246e319ae01d6df15c0d2f2c3e9327add32e62d1956fd8de0bef96078b390-merged.mount: Deactivated successfully.
Dec 13 09:31:50 compute-0 podman[416001]: 2025-12-13 09:31:50.196337325 +0000 UTC m=+0.190086056 container remove c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:31:50 compute-0 systemd[1]: libpod-conmon-c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7.scope: Deactivated successfully.
Dec 13 09:31:50 compute-0 podman[416041]: 2025-12-13 09:31:50.363172627 +0000 UTC m=+0.048566559 container create 40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_williamson, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:31:50 compute-0 systemd[1]: Started libpod-conmon-40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2.scope.
Dec 13 09:31:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f3ca0cda28b7438adfe67d61272f287b24db14568d45578723af6a0f63a6f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f3ca0cda28b7438adfe67d61272f287b24db14568d45578723af6a0f63a6f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f3ca0cda28b7438adfe67d61272f287b24db14568d45578723af6a0f63a6f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f3ca0cda28b7438adfe67d61272f287b24db14568d45578723af6a0f63a6f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:31:50 compute-0 podman[416041]: 2025-12-13 09:31:50.344694334 +0000 UTC m=+0.030088286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:31:50 compute-0 podman[416041]: 2025-12-13 09:31:50.441784657 +0000 UTC m=+0.127178609 container init 40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_williamson, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:31:50 compute-0 podman[416041]: 2025-12-13 09:31:50.449619964 +0000 UTC m=+0.135013896 container start 40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_williamson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:31:50 compute-0 podman[416041]: 2025-12-13 09:31:50.452954927 +0000 UTC m=+0.138348879 container attach 40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_williamson, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 09:31:50 compute-0 nova_compute[248510]: 2025-12-13 09:31:50.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:50 compute-0 nova_compute[248510]: 2025-12-13 09:31:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:50 compute-0 nova_compute[248510]: 2025-12-13 09:31:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:50 compute-0 nova_compute[248510]: 2025-12-13 09:31:50.802 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:31:50 compute-0 nova_compute[248510]: 2025-12-13 09:31:50.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:31:50 compute-0 nova_compute[248510]: 2025-12-13 09:31:50.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:31:50 compute-0 nova_compute[248510]: 2025-12-13 09:31:50.804 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:31:50 compute-0 nova_compute[248510]: 2025-12-13 09:31:50.805 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:31:51 compute-0 nova_compute[248510]: 2025-12-13 09:31:51.150 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:51 compute-0 lvm[416156]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:31:51 compute-0 lvm[416157]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:31:51 compute-0 lvm[416157]: VG ceph_vg1 finished
Dec 13 09:31:51 compute-0 lvm[416156]: VG ceph_vg0 finished
Dec 13 09:31:51 compute-0 lvm[416159]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:31:51 compute-0 lvm[416159]: VG ceph_vg2 finished
Dec 13 09:31:51 compute-0 pedantic_williamson[416058]: {}
Dec 13 09:31:51 compute-0 systemd[1]: libpod-40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2.scope: Deactivated successfully.
Dec 13 09:31:51 compute-0 podman[416041]: 2025-12-13 09:31:51.348447475 +0000 UTC m=+1.033841427 container died 40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_williamson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 09:31:51 compute-0 systemd[1]: libpod-40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2.scope: Consumed 1.511s CPU time.
Dec 13 09:31:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:31:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474496401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:31:51 compute-0 nova_compute[248510]: 2025-12-13 09:31:51.384 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:31:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3933: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Dec 13 09:31:51 compute-0 ceph-mon[76537]: osdmap e323: 3 total, 3 up, 3 in
Dec 13 09:31:51 compute-0 nova_compute[248510]: 2025-12-13 09:31:51.541 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:31:51 compute-0 nova_compute[248510]: 2025-12-13 09:31:51.542 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3394MB free_disk=59.98736613895744GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:31:51 compute-0 nova_compute[248510]: 2025-12-13 09:31:51.542 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:31:51 compute-0 nova_compute[248510]: 2025-12-13 09:31:51.542 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:31:51 compute-0 nova_compute[248510]: 2025-12-13 09:31:51.613 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:31:51 compute-0 nova_compute[248510]: 2025-12-13 09:31:51.614 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:31:51 compute-0 nova_compute[248510]: 2025-12-13 09:31:51.636 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-66f3ca0cda28b7438adfe67d61272f287b24db14568d45578723af6a0f63a6f2-merged.mount: Deactivated successfully.
Dec 13 09:31:51 compute-0 podman[416041]: 2025-12-13 09:31:51.935984782 +0000 UTC m=+1.621378714 container remove 40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 09:31:51 compute-0 systemd[1]: libpod-conmon-40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2.scope: Deactivated successfully.
Dec 13 09:31:52 compute-0 sudo[415963]: pam_unix(sudo:session): session closed for user root
Dec 13 09:31:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:31:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:31:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:31:52 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:31:52 compute-0 systemd[1]: Starting dnf makecache...
Dec 13 09:31:52 compute-0 sudo[416197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:31:52 compute-0 sudo[416197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:31:52 compute-0 sudo[416197]: pam_unix(sudo:session): session closed for user root
Dec 13 09:31:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:31:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7845145' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:31:52 compute-0 nova_compute[248510]: 2025-12-13 09:31:52.342 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:31:52 compute-0 nova_compute[248510]: 2025-12-13 09:31:52.357 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:31:52 compute-0 nova_compute[248510]: 2025-12-13 09:31:52.377 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:31:52 compute-0 nova_compute[248510]: 2025-12-13 09:31:52.379 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:31:52 compute-0 nova_compute[248510]: 2025-12-13 09:31:52.379 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:31:52 compute-0 dnf[416221]: Metadata cache refreshed recently.
Dec 13 09:31:52 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 13 09:31:52 compute-0 systemd[1]: Finished dnf makecache.
Dec 13 09:31:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2474496401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:31:52 compute-0 ceph-mon[76537]: pgmap v3933: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Dec 13 09:31:52 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:31:52 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:31:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/7845145' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:31:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3934: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 818 B/s wr, 6 op/s
Dec 13 09:31:54 compute-0 ceph-mon[76537]: pgmap v3934: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 818 B/s wr, 6 op/s
Dec 13 09:31:54 compute-0 nova_compute[248510]: 2025-12-13 09:31:54.833 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:54 compute-0 nova_compute[248510]: 2025-12-13 09:31:54.856 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:31:54 compute-0 nova_compute[248510]: 2025-12-13 09:31:54.856 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:31:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:31:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3935: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:31:55.468 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:31:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:31:55.468 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:31:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:31:55.468 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:31:55 compute-0 nova_compute[248510]: 2025-12-13 09:31:55.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:55 compute-0 ceph-mon[76537]: pgmap v3935: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:56 compute-0 nova_compute[248510]: 2025-12-13 09:31:56.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:31:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3936: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:58 compute-0 ceph-mon[76537]: pgmap v3936: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:31:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3937: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:32:00 compute-0 nova_compute[248510]: 2025-12-13 09:32:00.492 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:00 compute-0 ceph-mon[76537]: pgmap v3937: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:00 compute-0 podman[416228]: 2025-12-13 09:32:00.990587122 +0000 UTC m=+0.074073788 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 09:32:01 compute-0 podman[416227]: 2025-12-13 09:32:01.002525121 +0000 UTC m=+0.084897479 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Dec 13 09:32:01 compute-0 podman[416226]: 2025-12-13 09:32:01.031932428 +0000 UTC m=+0.114928612 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 13 09:32:01 compute-0 nova_compute[248510]: 2025-12-13 09:32:01.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3938: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:02 compute-0 ceph-mon[76537]: pgmap v3938: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:02 compute-0 nova_compute[248510]: 2025-12-13 09:32:02.797 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:32:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3939: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:32:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 47K writes, 187K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1267 writes, 4441 keys, 1267 commit groups, 1.0 writes per commit group, ingest: 3.93 MB, 0.01 MB/s
                                           Interval WAL: 1267 writes, 514 syncs, 2.46 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:32:03 compute-0 ceph-osd[87041]: bluestore.MempoolThread fragmentation_score=0.004811 took=0.000062s
Dec 13 09:32:04 compute-0 ceph-mon[76537]: pgmap v3939: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:32:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3940: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:05 compute-0 nova_compute[248510]: 2025-12-13 09:32:05.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:06 compute-0 nova_compute[248510]: 2025-12-13 09:32:06.200 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:06 compute-0 ceph-mon[76537]: pgmap v3940: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3941: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:08 compute-0 ceph-mon[76537]: pgmap v3941: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3942: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:32:09
Dec 13 09:32:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:32:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:32:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'vms', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'cephfs.cephfs.meta']
Dec 13 09:32:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:32:09 compute-0 ceph-mon[76537]: pgmap v3942: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:32:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:32:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:32:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:32:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:32:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:32:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:32:10 compute-0 nova_compute[248510]: 2025-12-13 09:32:10.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:32:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7201.8 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 997 writes, 3370 keys, 997 commit groups, 1.0 writes per commit group, ingest: 2.81 MB, 0.00 MB/s
                                           Interval WAL: 997 writes, 422 syncs, 2.36 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:32:11 compute-0 ceph-osd[88086]: bluestore.MempoolThread fragmentation_score=0.004687 took=0.000062s
Dec 13 09:32:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:32:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:32:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:32:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:32:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:32:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:32:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:32:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:32:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:32:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:32:11 compute-0 nova_compute[248510]: 2025-12-13 09:32:11.202 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3943: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:12 compute-0 ceph-mon[76537]: pgmap v3943: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3944: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:14 compute-0 ceph-mon[76537]: pgmap v3944: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:32:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:32:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345319997' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:32:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:32:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345319997' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:32:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3945: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:15 compute-0 nova_compute[248510]: 2025-12-13 09:32:15.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/345319997' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:32:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/345319997' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:32:16 compute-0 nova_compute[248510]: 2025-12-13 09:32:16.204 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:16 compute-0 ceph-mon[76537]: pgmap v3945: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3946: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:18 compute-0 ceph-mon[76537]: pgmap v3946: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3947: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:32:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 153K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 912 writes, 2666 keys, 912 commit groups, 1.0 writes per commit group, ingest: 1.84 MB, 0.00 MB/s
                                           Interval WAL: 912 writes, 399 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:32:19 compute-0 ceph-osd[89221]: bluestore.MempoolThread fragmentation_score=0.004241 took=0.000088s
Dec 13 09:32:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:32:20 compute-0 nova_compute[248510]: 2025-12-13 09:32:20.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:20 compute-0 ceph-mon[76537]: pgmap v3947: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:21 compute-0 nova_compute[248510]: 2025-12-13 09:32:21.206 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3948: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:32:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:32:21 compute-0 ceph-mon[76537]: pgmap v3948: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3949: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:23 compute-0 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 09:32:23 compute-0 ceph-mon[76537]: pgmap v3949: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:32:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3950: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:25 compute-0 nova_compute[248510]: 2025-12-13 09:32:25.500 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:26 compute-0 nova_compute[248510]: 2025-12-13 09:32:26.209 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:27 compute-0 ceph-mon[76537]: pgmap v3950: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3951: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:28 compute-0 ceph-mon[76537]: pgmap v3951: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3952: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:32:30 compute-0 nova_compute[248510]: 2025-12-13 09:32:30.503 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:30 compute-0 ceph-mon[76537]: pgmap v3952: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:31 compute-0 nova_compute[248510]: 2025-12-13 09:32:31.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3953: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:31 compute-0 podman[416288]: 2025-12-13 09:32:31.989984698 +0000 UTC m=+0.059811773 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:32:31 compute-0 podman[416287]: 2025-12-13 09:32:31.989814704 +0000 UTC m=+0.069492056 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 09:32:32 compute-0 podman[416286]: 2025-12-13 09:32:32.02150547 +0000 UTC m=+0.098234268 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:32:32 compute-0 ceph-mon[76537]: pgmap v3953: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3954: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:34 compute-0 ceph-mon[76537]: pgmap v3954: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:32:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3955: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:35 compute-0 nova_compute[248510]: 2025-12-13 09:32:35.504 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:35 compute-0 nova_compute[248510]: 2025-12-13 09:32:35.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:32:36 compute-0 nova_compute[248510]: 2025-12-13 09:32:36.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:36 compute-0 ceph-mon[76537]: pgmap v3955: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3956: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:37 compute-0 nova_compute[248510]: 2025-12-13 09:32:37.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:32:38 compute-0 ceph-mon[76537]: pgmap v3956: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3957: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:32:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:32:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:32:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:32:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:32:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:32:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:32:40 compute-0 nova_compute[248510]: 2025-12-13 09:32:40.506 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:40 compute-0 ceph-mon[76537]: pgmap v3957: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:41 compute-0 nova_compute[248510]: 2025-12-13 09:32:41.217 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3958: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:42 compute-0 ceph-mon[76537]: pgmap v3958: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3959: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:44 compute-0 ceph-mon[76537]: pgmap v3959: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.043035) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618365043084, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1675, "num_deletes": 252, "total_data_size": 2861938, "memory_usage": 2902568, "flush_reason": "Manual Compaction"}
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618365059281, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 1659236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77504, "largest_seqno": 79178, "table_properties": {"data_size": 1653574, "index_size": 2800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14877, "raw_average_key_size": 20, "raw_value_size": 1640868, "raw_average_value_size": 2298, "num_data_blocks": 128, "num_entries": 714, "num_filter_entries": 714, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618185, "oldest_key_time": 1765618185, "file_creation_time": 1765618365, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 16386 microseconds, and 6589 cpu microseconds.
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.059407) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 1659236 bytes OK
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.059449) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.062502) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.062537) EVENT_LOG_v1 {"time_micros": 1765618365062527, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.062571) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 2854713, prev total WAL file size 2854713, number of live WAL files 2.
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.064631) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323632' seq:72057594037927935, type:22 .. '6D6772737461740033353134' seq:0, type:0; will stop at (end)
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(1620KB)], [185(11MB)]
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618365064697, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 13232022, "oldest_snapshot_seqno": -1}
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 9685 keys, 10887440 bytes, temperature: kUnknown
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618365181629, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 10887440, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10828288, "index_size": 33823, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24261, "raw_key_size": 253927, "raw_average_key_size": 26, "raw_value_size": 10661319, "raw_average_value_size": 1100, "num_data_blocks": 1303, "num_entries": 9685, "num_filter_entries": 9685, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618365, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.182009) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 10887440 bytes
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.184058) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.1 rd, 93.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 11.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(14.5) write-amplify(6.6) OK, records in: 10126, records dropped: 441 output_compression: NoCompression
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.184100) EVENT_LOG_v1 {"time_micros": 1765618365184086, "job": 116, "event": "compaction_finished", "compaction_time_micros": 117045, "compaction_time_cpu_micros": 44341, "output_level": 6, "num_output_files": 1, "total_output_size": 10887440, "num_input_records": 10126, "num_output_records": 9685, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618365184525, "job": 116, "event": "table_file_deletion", "file_number": 187}
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618365186799, "job": 116, "event": "table_file_deletion", "file_number": 185}
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.064460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.186939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.186946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.186948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.186950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:32:45 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.186952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:32:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3960: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:45 compute-0 nova_compute[248510]: 2025-12-13 09:32:45.508 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:46 compute-0 ceph-mon[76537]: pgmap v3960: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:46 compute-0 nova_compute[248510]: 2025-12-13 09:32:46.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:46 compute-0 nova_compute[248510]: 2025-12-13 09:32:46.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:32:46 compute-0 nova_compute[248510]: 2025-12-13 09:32:46.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:32:46 compute-0 nova_compute[248510]: 2025-12-13 09:32:46.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:32:46 compute-0 nova_compute[248510]: 2025-12-13 09:32:46.799 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:32:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3961: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:48 compute-0 ceph-mon[76537]: pgmap v3961: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:48 compute-0 nova_compute[248510]: 2025-12-13 09:32:48.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:32:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3962: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:49 compute-0 nova_compute[248510]: 2025-12-13 09:32:49.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:32:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:32:50 compute-0 nova_compute[248510]: 2025-12-13 09:32:50.510 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:50 compute-0 ceph-mon[76537]: pgmap v3962: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:50 compute-0 nova_compute[248510]: 2025-12-13 09:32:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:32:50 compute-0 nova_compute[248510]: 2025-12-13 09:32:50.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:32:50 compute-0 nova_compute[248510]: 2025-12-13 09:32:50.818 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:32:50 compute-0 nova_compute[248510]: 2025-12-13 09:32:50.818 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:32:50 compute-0 nova_compute[248510]: 2025-12-13 09:32:50.818 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:32:50 compute-0 nova_compute[248510]: 2025-12-13 09:32:50.818 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:32:50 compute-0 nova_compute[248510]: 2025-12-13 09:32:50.819 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.228 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:32:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410393467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.370 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:32:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3963: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/410393467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.571 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.572 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3501MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.572 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.572 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.745 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.745 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.764 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.795 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.795 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.811 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.833 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 09:32:51 compute-0 nova_compute[248510]: 2025-12-13 09:32:51.858 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:32:52 compute-0 sudo[416395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:32:52 compute-0 sudo[416395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:32:52 compute-0 sudo[416395]: pam_unix(sudo:session): session closed for user root
Dec 13 09:32:52 compute-0 sudo[416420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:32:52 compute-0 sudo[416420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:32:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:32:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3770924869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:32:52 compute-0 nova_compute[248510]: 2025-12-13 09:32:52.420 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:32:52 compute-0 nova_compute[248510]: 2025-12-13 09:32:52.428 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:32:52 compute-0 nova_compute[248510]: 2025-12-13 09:32:52.452 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:32:52 compute-0 nova_compute[248510]: 2025-12-13 09:32:52.455 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:32:52 compute-0 nova_compute[248510]: 2025-12-13 09:32:52.455 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:32:52 compute-0 ceph-mon[76537]: pgmap v3963: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3770924869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:32:52 compute-0 sudo[416420]: pam_unix(sudo:session): session closed for user root
Dec 13 09:32:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:32:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:32:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:32:53 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:32:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:32:53 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:32:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:32:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:32:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:32:53 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:32:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:32:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:32:53 compute-0 sudo[416478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:32:53 compute-0 sudo[416478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:32:53 compute-0 sudo[416478]: pam_unix(sudo:session): session closed for user root
Dec 13 09:32:53 compute-0 sudo[416503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:32:53 compute-0 sudo[416503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:32:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3964: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:53 compute-0 podman[416540]: 2025-12-13 09:32:53.53971193 +0000 UTC m=+0.060392598 container create 4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 09:32:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:32:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:32:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:32:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:32:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:32:53 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:32:53 compute-0 systemd[1]: Started libpod-conmon-4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed.scope.
Dec 13 09:32:53 compute-0 podman[416540]: 2025-12-13 09:32:53.517384049 +0000 UTC m=+0.038064727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:32:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:32:53 compute-0 podman[416540]: 2025-12-13 09:32:53.646557103 +0000 UTC m=+0.167237781 container init 4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:32:53 compute-0 podman[416540]: 2025-12-13 09:32:53.658155665 +0000 UTC m=+0.178836343 container start 4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 09:32:53 compute-0 podman[416540]: 2025-12-13 09:32:53.662195716 +0000 UTC m=+0.182876494 container attach 4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:32:53 compute-0 naughty_golick[416556]: 167 167
Dec 13 09:32:53 compute-0 systemd[1]: libpod-4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed.scope: Deactivated successfully.
Dec 13 09:32:53 compute-0 conmon[416556]: conmon 4dd7eae55a5a336ddd31 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed.scope/container/memory.events
Dec 13 09:32:53 compute-0 podman[416540]: 2025-12-13 09:32:53.671327065 +0000 UTC m=+0.192007733 container died 4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:32:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-244a785459fddbaf6c6e792d9287624937b7f89cb27e1db2e4f98102ad6d8960-merged.mount: Deactivated successfully.
Dec 13 09:32:53 compute-0 podman[416540]: 2025-12-13 09:32:53.740437011 +0000 UTC m=+0.261117679 container remove 4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Dec 13 09:32:53 compute-0 systemd[1]: libpod-conmon-4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed.scope: Deactivated successfully.
Dec 13 09:32:54 compute-0 podman[416579]: 2025-12-13 09:32:54.049153724 +0000 UTC m=+0.111608924 container create 594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feynman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:32:54 compute-0 podman[416579]: 2025-12-13 09:32:53.984363697 +0000 UTC m=+0.046818907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:32:54 compute-0 systemd[1]: Started libpod-conmon-594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf.scope.
Dec 13 09:32:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2a9b6536adb5f72d0aeceaa8e0e245edd98fa7512c61d9af238d8c52819af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2a9b6536adb5f72d0aeceaa8e0e245edd98fa7512c61d9af238d8c52819af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2a9b6536adb5f72d0aeceaa8e0e245edd98fa7512c61d9af238d8c52819af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2a9b6536adb5f72d0aeceaa8e0e245edd98fa7512c61d9af238d8c52819af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2a9b6536adb5f72d0aeceaa8e0e245edd98fa7512c61d9af238d8c52819af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:54 compute-0 podman[416579]: 2025-12-13 09:32:54.175418734 +0000 UTC m=+0.237874044 container init 594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:32:54 compute-0 podman[416579]: 2025-12-13 09:32:54.190755119 +0000 UTC m=+0.253210349 container start 594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:32:54 compute-0 podman[416579]: 2025-12-13 09:32:54.195712264 +0000 UTC m=+0.258167564 container attach 594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:32:54 compute-0 ceph-mon[76537]: pgmap v3964: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:54 compute-0 priceless_feynman[416595]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:32:54 compute-0 priceless_feynman[416595]: --> All data devices are unavailable
Dec 13 09:32:54 compute-0 systemd[1]: libpod-594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf.scope: Deactivated successfully.
Dec 13 09:32:54 compute-0 podman[416579]: 2025-12-13 09:32:54.853245997 +0000 UTC m=+0.915701307 container died 594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:32:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5e2a9b6536adb5f72d0aeceaa8e0e245edd98fa7512c61d9af238d8c52819af-merged.mount: Deactivated successfully.
Dec 13 09:32:54 compute-0 podman[416579]: 2025-12-13 09:32:54.917770808 +0000 UTC m=+0.980226028 container remove 594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feynman, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:32:54 compute-0 systemd[1]: libpod-conmon-594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf.scope: Deactivated successfully.
Dec 13 09:32:54 compute-0 sudo[416503]: pam_unix(sudo:session): session closed for user root
Dec 13 09:32:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:32:55 compute-0 sudo[416628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:32:55 compute-0 sudo[416628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:32:55 compute-0 sudo[416628]: pam_unix(sudo:session): session closed for user root
Dec 13 09:32:55 compute-0 sudo[416653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:32:55 compute-0 sudo[416653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:32:55 compute-0 nova_compute[248510]: 2025-12-13 09:32:55.455 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:32:55 compute-0 nova_compute[248510]: 2025-12-13 09:32:55.456 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:32:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:32:55.469 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:32:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:32:55.470 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:32:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:32:55.470 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:32:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3965: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:55 compute-0 nova_compute[248510]: 2025-12-13 09:32:55.512 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:55 compute-0 podman[416690]: 2025-12-13 09:32:55.558241143 +0000 UTC m=+0.072671796 container create 457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 09:32:55 compute-0 systemd[1]: Started libpod-conmon-457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c.scope.
Dec 13 09:32:55 compute-0 podman[416690]: 2025-12-13 09:32:55.530159098 +0000 UTC m=+0.044589791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:32:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:32:55 compute-0 podman[416690]: 2025-12-13 09:32:55.680126314 +0000 UTC m=+0.194556927 container init 457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 09:32:55 compute-0 podman[416690]: 2025-12-13 09:32:55.691737826 +0000 UTC m=+0.206168469 container start 457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:32:55 compute-0 podman[416690]: 2025-12-13 09:32:55.696345801 +0000 UTC m=+0.210776414 container attach 457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:32:55 compute-0 upbeat_allen[416706]: 167 167
Dec 13 09:32:55 compute-0 systemd[1]: libpod-457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c.scope: Deactivated successfully.
Dec 13 09:32:55 compute-0 conmon[416706]: conmon 457e8ecf178508449e6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c.scope/container/memory.events
Dec 13 09:32:55 compute-0 podman[416690]: 2025-12-13 09:32:55.702491276 +0000 UTC m=+0.216921899 container died 457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-efd0b6dc5f0ca08176122418a644ee45986ce092a135e4ab7265eeb5eac6dc37-merged.mount: Deactivated successfully.
Dec 13 09:32:55 compute-0 podman[416690]: 2025-12-13 09:32:55.759450256 +0000 UTC m=+0.273880879 container remove 457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 09:32:55 compute-0 systemd[1]: libpod-conmon-457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c.scope: Deactivated successfully.
Dec 13 09:32:55 compute-0 podman[416730]: 2025-12-13 09:32:55.97858661 +0000 UTC m=+0.057392253 container create d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 09:32:56 compute-0 systemd[1]: Started libpod-conmon-d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5.scope.
Dec 13 09:32:56 compute-0 podman[416730]: 2025-12-13 09:32:55.951458108 +0000 UTC m=+0.030263841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:32:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26c111444c79507091e1585730c11be0969d63e4f17deaf91b587deb98d17c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26c111444c79507091e1585730c11be0969d63e4f17deaf91b587deb98d17c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26c111444c79507091e1585730c11be0969d63e4f17deaf91b587deb98d17c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26c111444c79507091e1585730c11be0969d63e4f17deaf91b587deb98d17c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:56 compute-0 podman[416730]: 2025-12-13 09:32:56.096141692 +0000 UTC m=+0.174947375 container init d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 09:32:56 compute-0 podman[416730]: 2025-12-13 09:32:56.109889547 +0000 UTC m=+0.188695180 container start d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 09:32:56 compute-0 podman[416730]: 2025-12-13 09:32:56.113666402 +0000 UTC m=+0.192472135 container attach d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:32:56 compute-0 nova_compute[248510]: 2025-12-13 09:32:56.231 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:32:56 compute-0 cranky_yonath[416746]: {
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:     "0": [
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:         {
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "devices": [
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "/dev/loop3"
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             ],
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_name": "ceph_lv0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_size": "21470642176",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "name": "ceph_lv0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "tags": {
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.cluster_name": "ceph",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.crush_device_class": "",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.encrypted": "0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.objectstore": "bluestore",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.osd_id": "0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.type": "block",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.vdo": "0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.with_tpm": "0"
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             },
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "type": "block",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "vg_name": "ceph_vg0"
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:         }
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:     ],
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:     "1": [
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:         {
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "devices": [
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "/dev/loop4"
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             ],
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_name": "ceph_lv1",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_size": "21470642176",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "name": "ceph_lv1",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "tags": {
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.cluster_name": "ceph",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.crush_device_class": "",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.encrypted": "0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.objectstore": "bluestore",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.osd_id": "1",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.type": "block",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.vdo": "0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.with_tpm": "0"
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             },
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "type": "block",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "vg_name": "ceph_vg1"
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:         }
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:     ],
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:     "2": [
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:         {
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "devices": [
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "/dev/loop5"
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             ],
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_name": "ceph_lv2",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_size": "21470642176",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "name": "ceph_lv2",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "tags": {
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.cluster_name": "ceph",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.crush_device_class": "",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.encrypted": "0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.objectstore": "bluestore",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.osd_id": "2",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.type": "block",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.vdo": "0",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:                 "ceph.with_tpm": "0"
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             },
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "type": "block",
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:             "vg_name": "ceph_vg2"
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:         }
Dec 13 09:32:56 compute-0 cranky_yonath[416746]:     ]
Dec 13 09:32:56 compute-0 cranky_yonath[416746]: }
Dec 13 09:32:56 compute-0 systemd[1]: libpod-d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5.scope: Deactivated successfully.
Dec 13 09:32:56 compute-0 podman[416730]: 2025-12-13 09:32:56.457931958 +0000 UTC m=+0.536737621 container died d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 09:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a26c111444c79507091e1585730c11be0969d63e4f17deaf91b587deb98d17c7-merged.mount: Deactivated successfully.
Dec 13 09:32:56 compute-0 podman[416730]: 2025-12-13 09:32:56.519119855 +0000 UTC m=+0.597925538 container remove d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 09:32:56 compute-0 systemd[1]: libpod-conmon-d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5.scope: Deactivated successfully.
Dec 13 09:32:56 compute-0 sudo[416653]: pam_unix(sudo:session): session closed for user root
Dec 13 09:32:56 compute-0 ceph-mon[76537]: pgmap v3965: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:56 compute-0 sudo[416768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:32:56 compute-0 sudo[416768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:32:56 compute-0 sudo[416768]: pam_unix(sudo:session): session closed for user root
Dec 13 09:32:56 compute-0 sudo[416793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:32:56 compute-0 sudo[416793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:32:57 compute-0 podman[416830]: 2025-12-13 09:32:57.067808404 +0000 UTC m=+0.044598100 container create 803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:32:57 compute-0 systemd[1]: Started libpod-conmon-803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840.scope.
Dec 13 09:32:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:32:57 compute-0 podman[416830]: 2025-12-13 09:32:57.046708445 +0000 UTC m=+0.023498171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:32:57 compute-0 podman[416830]: 2025-12-13 09:32:57.149463445 +0000 UTC m=+0.126253171 container init 803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:32:57 compute-0 podman[416830]: 2025-12-13 09:32:57.155473546 +0000 UTC m=+0.132263242 container start 803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 09:32:57 compute-0 podman[416830]: 2025-12-13 09:32:57.161016265 +0000 UTC m=+0.137806001 container attach 803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:32:57 compute-0 nice_gould[416846]: 167 167
Dec 13 09:32:57 compute-0 systemd[1]: libpod-803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840.scope: Deactivated successfully.
Dec 13 09:32:57 compute-0 podman[416830]: 2025-12-13 09:32:57.163660082 +0000 UTC m=+0.140449788 container died 803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 09:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8b01fc53cd650760661d9fbf214dcdd1fbeb5f63d5c573759802bb15da13fd2-merged.mount: Deactivated successfully.
Dec 13 09:32:57 compute-0 podman[416830]: 2025-12-13 09:32:57.211607826 +0000 UTC m=+0.188397512 container remove 803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 09:32:57 compute-0 systemd[1]: libpod-conmon-803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840.scope: Deactivated successfully.
Dec 13 09:32:57 compute-0 podman[416870]: 2025-12-13 09:32:57.408221084 +0000 UTC m=+0.054831198 container create d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 09:32:57 compute-0 podman[416870]: 2025-12-13 09:32:57.380464897 +0000 UTC m=+0.027075071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:32:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3966: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:57 compute-0 systemd[1]: Started libpod-conmon-d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce.scope.
Dec 13 09:32:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d15583f473e8384c04ca7564974d8cfeb8e909fc3a6697b5730ec40aed63fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d15583f473e8384c04ca7564974d8cfeb8e909fc3a6697b5730ec40aed63fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d15583f473e8384c04ca7564974d8cfeb8e909fc3a6697b5730ec40aed63fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d15583f473e8384c04ca7564974d8cfeb8e909fc3a6697b5730ec40aed63fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:32:57 compute-0 podman[416870]: 2025-12-13 09:32:57.709164452 +0000 UTC m=+0.355774546 container init d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 09:32:57 compute-0 podman[416870]: 2025-12-13 09:32:57.723489672 +0000 UTC m=+0.370099756 container start d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_cori, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:32:57 compute-0 podman[416870]: 2025-12-13 09:32:57.796675639 +0000 UTC m=+0.443285813 container attach d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:32:57 compute-0 ceph-mon[76537]: pgmap v3966: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:58 compute-0 lvm[416968]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:32:58 compute-0 lvm[416968]: VG ceph_vg2 finished
Dec 13 09:32:58 compute-0 lvm[416967]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:32:58 compute-0 lvm[416967]: VG ceph_vg1 finished
Dec 13 09:32:58 compute-0 lvm[416966]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:32:58 compute-0 lvm[416966]: VG ceph_vg0 finished
Dec 13 09:32:58 compute-0 nice_cori[416887]: {}
Dec 13 09:32:58 compute-0 systemd[1]: libpod-d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce.scope: Deactivated successfully.
Dec 13 09:32:58 compute-0 podman[416870]: 2025-12-13 09:32:58.656450132 +0000 UTC m=+1.303060226 container died d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 09:32:58 compute-0 systemd[1]: libpod-d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce.scope: Consumed 1.519s CPU time.
Dec 13 09:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2d15583f473e8384c04ca7564974d8cfeb8e909fc3a6697b5730ec40aed63fd-merged.mount: Deactivated successfully.
Dec 13 09:32:58 compute-0 podman[416870]: 2025-12-13 09:32:58.711879924 +0000 UTC m=+1.358490048 container remove d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_cori, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 09:32:58 compute-0 systemd[1]: libpod-conmon-d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce.scope: Deactivated successfully.
Dec 13 09:32:58 compute-0 sudo[416793]: pam_unix(sudo:session): session closed for user root
Dec 13 09:32:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:32:58 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:32:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:32:58 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:32:58 compute-0 sudo[416984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:32:58 compute-0 sudo[416984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:32:58 compute-0 sudo[416984]: pam_unix(sudo:session): session closed for user root
Dec 13 09:32:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3967: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:32:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:32:59 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:32:59 compute-0 ceph-mon[76537]: pgmap v3967: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:33:00 compute-0 nova_compute[248510]: 2025-12-13 09:33:00.516 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:01 compute-0 nova_compute[248510]: 2025-12-13 09:33:01.235 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3968: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:02 compute-0 ceph-mon[76537]: pgmap v3968: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:02 compute-0 nova_compute[248510]: 2025-12-13 09:33:02.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:33:03 compute-0 podman[417011]: 2025-12-13 09:33:03.034064591 +0000 UTC m=+0.113507911 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:33:03 compute-0 podman[417010]: 2025-12-13 09:33:03.043053857 +0000 UTC m=+0.122049826 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Dec 13 09:33:03 compute-0 podman[417009]: 2025-12-13 09:33:03.056502215 +0000 UTC m=+0.135100734 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:33:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3969: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:04 compute-0 ceph-mon[76537]: pgmap v3969: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:33:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3970: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:05 compute-0 nova_compute[248510]: 2025-12-13 09:33:05.518 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:06 compute-0 nova_compute[248510]: 2025-12-13 09:33:06.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:06 compute-0 ceph-mon[76537]: pgmap v3970: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3971: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:08 compute-0 ceph-mon[76537]: pgmap v3971: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3972: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:33:09
Dec 13 09:33:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:33:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:33:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data']
Dec 13 09:33:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:33:09 compute-0 ceph-mon[76537]: pgmap v3972: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:33:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:33:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:33:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:33:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:33:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:33:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:33:10 compute-0 nova_compute[248510]: 2025-12-13 09:33:10.520 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:33:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:33:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:33:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:33:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:33:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:33:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:33:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:33:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:33:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:33:11 compute-0 nova_compute[248510]: 2025-12-13 09:33:11.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3973: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:12 compute-0 ceph-mon[76537]: pgmap v3973: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3974: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:14 compute-0 ceph-mon[76537]: pgmap v3974: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:33:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:33:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/793515846' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:33:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:33:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/793515846' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:33:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3975: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:15 compute-0 nova_compute[248510]: 2025-12-13 09:33:15.522 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/793515846' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:33:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/793515846' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:33:16 compute-0 nova_compute[248510]: 2025-12-13 09:33:16.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:16 compute-0 ceph-mon[76537]: pgmap v3975: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3976: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:18 compute-0 ceph-mon[76537]: pgmap v3976: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3977: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:19 compute-0 ceph-mon[76537]: pgmap v3977: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:33:20 compute-0 nova_compute[248510]: 2025-12-13 09:33:20.524 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:21 compute-0 nova_compute[248510]: 2025-12-13 09:33:21.249 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3978: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:33:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:33:22 compute-0 ceph-mon[76537]: pgmap v3978: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3979: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:24 compute-0 ceph-mon[76537]: pgmap v3979: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:33:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3980: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:25 compute-0 nova_compute[248510]: 2025-12-13 09:33:25.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:26 compute-0 nova_compute[248510]: 2025-12-13 09:33:26.252 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:26 compute-0 ceph-mon[76537]: pgmap v3980: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3981: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:28 compute-0 ceph-mon[76537]: pgmap v3981: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3982: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:33:30 compute-0 nova_compute[248510]: 2025-12-13 09:33:30.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:30 compute-0 ceph-mon[76537]: pgmap v3982: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:31 compute-0 nova_compute[248510]: 2025-12-13 09:33:31.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3983: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:32 compute-0 ceph-mon[76537]: pgmap v3983: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.588553) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618412588839, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 643, "num_deletes": 251, "total_data_size": 782998, "memory_usage": 793992, "flush_reason": "Manual Compaction"}
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618412598162, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 769668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79179, "largest_seqno": 79821, "table_properties": {"data_size": 766242, "index_size": 1333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7876, "raw_average_key_size": 19, "raw_value_size": 759362, "raw_average_value_size": 1865, "num_data_blocks": 59, "num_entries": 407, "num_filter_entries": 407, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618366, "oldest_key_time": 1765618366, "file_creation_time": 1765618412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 9462 microseconds, and 4175 cpu microseconds.
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.598213) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 769668 bytes OK
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.598237) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.599773) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.599789) EVENT_LOG_v1 {"time_micros": 1765618412599784, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.599815) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 779566, prev total WAL file size 779566, number of live WAL files 2.
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.600497) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(751KB)], [188(10MB)]
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618412600565, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 11657108, "oldest_snapshot_seqno": -1}
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 9579 keys, 9860761 bytes, temperature: kUnknown
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618412687549, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 9860761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9803198, "index_size": 32490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24005, "raw_key_size": 252400, "raw_average_key_size": 26, "raw_value_size": 9638911, "raw_average_value_size": 1006, "num_data_blocks": 1239, "num_entries": 9579, "num_filter_entries": 9579, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.687912) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 9860761 bytes
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.689697) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.9 rd, 113.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.4 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(28.0) write-amplify(12.8) OK, records in: 10092, records dropped: 513 output_compression: NoCompression
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.689729) EVENT_LOG_v1 {"time_micros": 1765618412689714, "job": 118, "event": "compaction_finished", "compaction_time_micros": 87090, "compaction_time_cpu_micros": 49372, "output_level": 6, "num_output_files": 1, "total_output_size": 9860761, "num_input_records": 10092, "num_output_records": 9579, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618412690158, "job": 118, "event": "table_file_deletion", "file_number": 190}
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618412694501, "job": 118, "event": "table_file_deletion", "file_number": 188}
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.600338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.694567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.694576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.694581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.694585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:33:32 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.694589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:33:32 compute-0 nova_compute[248510]: 2025-12-13 09:33:32.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:33:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3984: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:34 compute-0 podman[417071]: 2025-12-13 09:33:34.015296765 +0000 UTC m=+0.082262397 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 13 09:33:34 compute-0 podman[417070]: 2025-12-13 09:33:34.025025069 +0000 UTC m=+0.095334865 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Dec 13 09:33:34 compute-0 podman[417069]: 2025-12-13 09:33:34.047508394 +0000 UTC m=+0.126407706 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:33:34 compute-0 ceph-mon[76537]: pgmap v3984: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:33:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3985: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:35 compute-0 nova_compute[248510]: 2025-12-13 09:33:35.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:35 compute-0 ceph-mon[76537]: pgmap v3985: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:36 compute-0 nova_compute[248510]: 2025-12-13 09:33:36.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3986: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:37 compute-0 nova_compute[248510]: 2025-12-13 09:33:37.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:33:38 compute-0 ceph-mon[76537]: pgmap v3986: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:38 compute-0 nova_compute[248510]: 2025-12-13 09:33:38.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:33:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3987: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:39 compute-0 ceph-mon[76537]: pgmap v3987: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:33:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:33:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:33:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:33:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:33:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:33:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:33:40 compute-0 nova_compute[248510]: 2025-12-13 09:33:40.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:41 compute-0 nova_compute[248510]: 2025-12-13 09:33:41.308 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3988: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:41 compute-0 ceph-mon[76537]: pgmap v3988: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3989: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:44 compute-0 ceph-mon[76537]: pgmap v3989: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:33:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3990: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:45 compute-0 nova_compute[248510]: 2025-12-13 09:33:45.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:46 compute-0 nova_compute[248510]: 2025-12-13 09:33:46.338 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:46 compute-0 ceph-mon[76537]: pgmap v3990: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3991: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:48 compute-0 ceph-mon[76537]: pgmap v3991: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:48 compute-0 nova_compute[248510]: 2025-12-13 09:33:48.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:33:48 compute-0 nova_compute[248510]: 2025-12-13 09:33:48.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:33:48 compute-0 nova_compute[248510]: 2025-12-13 09:33:48.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:33:48 compute-0 nova_compute[248510]: 2025-12-13 09:33:48.796 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:33:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3992: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:33:50 compute-0 nova_compute[248510]: 2025-12-13 09:33:50.538 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:50 compute-0 ceph-mon[76537]: pgmap v3992: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:50 compute-0 nova_compute[248510]: 2025-12-13 09:33:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:33:50 compute-0 nova_compute[248510]: 2025-12-13 09:33:50.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:33:50 compute-0 nova_compute[248510]: 2025-12-13 09:33:50.827 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:33:50 compute-0 nova_compute[248510]: 2025-12-13 09:33:50.828 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:33:50 compute-0 nova_compute[248510]: 2025-12-13 09:33:50.828 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:33:50 compute-0 nova_compute[248510]: 2025-12-13 09:33:50.828 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:33:50 compute-0 nova_compute[248510]: 2025-12-13 09:33:50.829 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:33:51 compute-0 nova_compute[248510]: 2025-12-13 09:33:51.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:33:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4244520359' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:33:51 compute-0 nova_compute[248510]: 2025-12-13 09:33:51.433 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:33:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3993: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4244520359' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:33:51 compute-0 nova_compute[248510]: 2025-12-13 09:33:51.637 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:33:51 compute-0 nova_compute[248510]: 2025-12-13 09:33:51.638 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3508MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:33:51 compute-0 nova_compute[248510]: 2025-12-13 09:33:51.638 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:33:51 compute-0 nova_compute[248510]: 2025-12-13 09:33:51.638 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:33:51 compute-0 nova_compute[248510]: 2025-12-13 09:33:51.948 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:33:51 compute-0 nova_compute[248510]: 2025-12-13 09:33:51.948 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:33:51 compute-0 nova_compute[248510]: 2025-12-13 09:33:51.981 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:33:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:33:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3546281179' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:33:52 compute-0 nova_compute[248510]: 2025-12-13 09:33:52.557 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:33:52 compute-0 nova_compute[248510]: 2025-12-13 09:33:52.568 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:33:52 compute-0 ceph-mon[76537]: pgmap v3993: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3546281179' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:33:52 compute-0 nova_compute[248510]: 2025-12-13 09:33:52.677 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:33:52 compute-0 nova_compute[248510]: 2025-12-13 09:33:52.678 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:33:52 compute-0 nova_compute[248510]: 2025-12-13 09:33:52.679 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:33:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3994: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:53 compute-0 ceph-mon[76537]: pgmap v3994: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:53 compute-0 nova_compute[248510]: 2025-12-13 09:33:53.679 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:33:53 compute-0 nova_compute[248510]: 2025-12-13 09:33:53.680 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:33:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:33:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:33:55.470 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:33:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:33:55.471 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:33:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:33:55.471 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:33:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3995: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:55 compute-0 nova_compute[248510]: 2025-12-13 09:33:55.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:55 compute-0 nova_compute[248510]: 2025-12-13 09:33:55.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:33:55 compute-0 nova_compute[248510]: 2025-12-13 09:33:55.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:33:56 compute-0 nova_compute[248510]: 2025-12-13 09:33:56.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:33:56 compute-0 ceph-mon[76537]: pgmap v3995: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3996: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:58 compute-0 ceph-mon[76537]: pgmap v3996: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:33:58 compute-0 sudo[417177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:33:58 compute-0 sudo[417177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:33:58 compute-0 sudo[417177]: pam_unix(sudo:session): session closed for user root
Dec 13 09:33:59 compute-0 sudo[417202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:33:59 compute-0 sudo[417202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:33:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3997: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Dec 13 09:33:59 compute-0 sudo[417202]: pam_unix(sudo:session): session closed for user root
Dec 13 09:33:59 compute-0 sudo[417258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:33:59 compute-0 sudo[417258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:33:59 compute-0 sudo[417258]: pam_unix(sudo:session): session closed for user root
Dec 13 09:33:59 compute-0 sudo[417283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Dec 13 09:33:59 compute-0 sudo[417283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:34:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:34:00 compute-0 sudo[417283]: pam_unix(sudo:session): session closed for user root
Dec 13 09:34:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:34:00 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:34:00 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:00 compute-0 sudo[417325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:34:00 compute-0 sudo[417325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:34:00 compute-0 sudo[417325]: pam_unix(sudo:session): session closed for user root
Dec 13 09:34:00 compute-0 sudo[417350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- inventory --format=json-pretty --filter-for-batch
Dec 13 09:34:00 compute-0 sudo[417350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:34:00 compute-0 nova_compute[248510]: 2025-12-13 09:34:00.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:00 compute-0 ceph-mon[76537]: pgmap v3997: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Dec 13 09:34:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:00 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:00 compute-0 podman[417388]: 2025-12-13 09:34:00.691176851 +0000 UTC m=+0.049785011 container create c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_torvalds, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 09:34:00 compute-0 systemd[1]: Started libpod-conmon-c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655.scope.
Dec 13 09:34:00 compute-0 podman[417388]: 2025-12-13 09:34:00.669380564 +0000 UTC m=+0.027988774 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:34:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:34:00 compute-0 podman[417388]: 2025-12-13 09:34:00.785847989 +0000 UTC m=+0.144456199 container init c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_torvalds, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:34:00 compute-0 podman[417388]: 2025-12-13 09:34:00.79466266 +0000 UTC m=+0.153270830 container start c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_torvalds, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:34:00 compute-0 podman[417388]: 2025-12-13 09:34:00.799466091 +0000 UTC m=+0.158074301 container attach c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Dec 13 09:34:00 compute-0 mystifying_torvalds[417405]: 167 167
Dec 13 09:34:00 compute-0 systemd[1]: libpod-c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655.scope: Deactivated successfully.
Dec 13 09:34:00 compute-0 podman[417388]: 2025-12-13 09:34:00.803134753 +0000 UTC m=+0.161742913 container died c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_torvalds, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:34:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b035f882825022b984752c0b91c75130ce05187a3051f641c66cf4309c813f8-merged.mount: Deactivated successfully.
Dec 13 09:34:00 compute-0 podman[417388]: 2025-12-13 09:34:00.852519843 +0000 UTC m=+0.211128003 container remove c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_torvalds, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:34:00 compute-0 systemd[1]: libpod-conmon-c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655.scope: Deactivated successfully.
Dec 13 09:34:01 compute-0 podman[417429]: 2025-12-13 09:34:01.044170746 +0000 UTC m=+0.045050592 container create 555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_villani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:34:01 compute-0 systemd[1]: Started libpod-conmon-555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1.scope.
Dec 13 09:34:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22397d0f9473bbc42ae649e94ba3f6aac8d2a5e26e4471c87cabdf764b838021/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22397d0f9473bbc42ae649e94ba3f6aac8d2a5e26e4471c87cabdf764b838021/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22397d0f9473bbc42ae649e94ba3f6aac8d2a5e26e4471c87cabdf764b838021/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22397d0f9473bbc42ae649e94ba3f6aac8d2a5e26e4471c87cabdf764b838021/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:01 compute-0 podman[417429]: 2025-12-13 09:34:01.025426106 +0000 UTC m=+0.026305992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:34:01 compute-0 podman[417429]: 2025-12-13 09:34:01.134084174 +0000 UTC m=+0.134964080 container init 555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_villani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 09:34:01 compute-0 podman[417429]: 2025-12-13 09:34:01.144080745 +0000 UTC m=+0.144960611 container start 555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_villani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec 13 09:34:01 compute-0 podman[417429]: 2025-12-13 09:34:01.147688866 +0000 UTC m=+0.148568782 container attach 555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_villani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:34:01 compute-0 nova_compute[248510]: 2025-12-13 09:34:01.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3998: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Dec 13 09:34:01 compute-0 youthful_villani[417446]: [
Dec 13 09:34:01 compute-0 youthful_villani[417446]:     {
Dec 13 09:34:01 compute-0 youthful_villani[417446]:         "available": false,
Dec 13 09:34:01 compute-0 youthful_villani[417446]:         "being_replaced": false,
Dec 13 09:34:01 compute-0 youthful_villani[417446]:         "ceph_device_lvm": false,
Dec 13 09:34:01 compute-0 youthful_villani[417446]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:         "lsm_data": {},
Dec 13 09:34:01 compute-0 youthful_villani[417446]:         "lvs": [],
Dec 13 09:34:01 compute-0 youthful_villani[417446]:         "path": "/dev/sr0",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:         "rejected_reasons": [
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "Has a FileSystem",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "Insufficient space (<5GB)"
Dec 13 09:34:01 compute-0 youthful_villani[417446]:         ],
Dec 13 09:34:01 compute-0 youthful_villani[417446]:         "sys_api": {
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "actuators": null,
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "device_nodes": [
Dec 13 09:34:01 compute-0 youthful_villani[417446]:                 "sr0"
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             ],
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "devname": "sr0",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "human_readable_size": "482.00 KB",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "id_bus": "ata",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "model": "QEMU DVD-ROM",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "nr_requests": "2",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "parent": "/dev/sr0",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "partitions": {},
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "path": "/dev/sr0",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "removable": "1",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "rev": "2.5+",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "ro": "0",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "rotational": "1",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "sas_address": "",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "sas_device_handle": "",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "scheduler_mode": "mq-deadline",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "sectors": 0,
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "sectorsize": "2048",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "size": 493568.0,
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "support_discard": "2048",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "type": "disk",
Dec 13 09:34:01 compute-0 youthful_villani[417446]:             "vendor": "QEMU"
Dec 13 09:34:01 compute-0 youthful_villani[417446]:         }
Dec 13 09:34:01 compute-0 youthful_villani[417446]:     }
Dec 13 09:34:01 compute-0 youthful_villani[417446]: ]
Dec 13 09:34:01 compute-0 systemd[1]: libpod-555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1.scope: Deactivated successfully.
Dec 13 09:34:01 compute-0 podman[417429]: 2025-12-13 09:34:01.769906403 +0000 UTC m=+0.770786269 container died 555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_villani, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:34:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-22397d0f9473bbc42ae649e94ba3f6aac8d2a5e26e4471c87cabdf764b838021-merged.mount: Deactivated successfully.
Dec 13 09:34:01 compute-0 podman[417429]: 2025-12-13 09:34:01.841513211 +0000 UTC m=+0.842393067 container remove 555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 13 09:34:01 compute-0 systemd[1]: libpod-conmon-555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1.scope: Deactivated successfully.
Dec 13 09:34:01 compute-0 sudo[417350]: pam_unix(sudo:session): session closed for user root
Dec 13 09:34:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:34:01 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:34:01 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:34:01 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:34:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:34:01 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:34:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:34:01 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:34:01 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:34:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:34:01 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:34:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:34:01 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:34:01 compute-0 sudo[418192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:34:01 compute-0 sudo[418192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:34:01 compute-0 sudo[418192]: pam_unix(sudo:session): session closed for user root
Dec 13 09:34:02 compute-0 sudo[418217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:34:02 compute-0 sudo[418217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:34:02 compute-0 sshd-session[418242]: Accepted publickey for zuul from 192.168.122.30 port 42660 ssh2: ECDSA SHA256:LSt7YF7gQCPgR0tIb34gZL5shqra8qE8VX5d32veH0w
Dec 13 09:34:02 compute-0 systemd-logind[787]: New session 55 of user zuul.
Dec 13 09:34:02 compute-0 systemd[1]: Started Session 55 of User zuul.
Dec 13 09:34:02 compute-0 sshd-session[418242]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 09:34:02 compute-0 podman[418258]: 2025-12-13 09:34:02.416224514 +0000 UTC m=+0.039901014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:34:02 compute-0 podman[418258]: 2025-12-13 09:34:02.637103961 +0000 UTC m=+0.260780451 container create 7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 09:34:02 compute-0 ceph-mon[76537]: pgmap v3998: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Dec 13 09:34:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:34:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:34:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:34:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:34:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:34:02 compute-0 systemd[1]: Started libpod-conmon-7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3.scope.
Dec 13 09:34:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:34:02 compute-0 podman[418258]: 2025-12-13 09:34:02.847390792 +0000 UTC m=+0.471067282 container init 7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_galois, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 09:34:02 compute-0 podman[418258]: 2025-12-13 09:34:02.86084009 +0000 UTC m=+0.484516580 container start 7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 09:34:02 compute-0 quizzical_galois[418322]: 167 167
Dec 13 09:34:02 compute-0 systemd[1]: libpod-7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3.scope: Deactivated successfully.
Dec 13 09:34:03 compute-0 podman[418258]: 2025-12-13 09:34:03.266008765 +0000 UTC m=+0.889685245 container attach 7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_galois, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 09:34:03 compute-0 podman[418258]: 2025-12-13 09:34:03.267441891 +0000 UTC m=+0.891118351 container died 7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 09:34:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3999: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Dec 13 09:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-de67c133a911fbf56954307f821a15b9caf014fb0193efe35a2357ae1a32f4f8-merged.mount: Deactivated successfully.
Dec 13 09:34:04 compute-0 ceph-mon[76537]: pgmap v3999: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Dec 13 09:34:04 compute-0 podman[418258]: 2025-12-13 09:34:04.421859773 +0000 UTC m=+2.045536253 container remove 7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 09:34:04 compute-0 systemd[1]: libpod-conmon-7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3.scope: Deactivated successfully.
Dec 13 09:34:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:34:04.511 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:34:04 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:34:04.513 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:34:04 compute-0 nova_compute[248510]: 2025-12-13 09:34:04.512 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:04 compute-0 podman[418342]: 2025-12-13 09:34:04.609179537 +0000 UTC m=+0.084714378 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:34:04 compute-0 podman[418343]: 2025-12-13 09:34:04.613147477 +0000 UTC m=+0.083919549 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 09:34:04 compute-0 podman[418377]: 2025-12-13 09:34:04.641050108 +0000 UTC m=+0.051706180 container create d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_driscoll, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:34:04 compute-0 podman[418341]: 2025-12-13 09:34:04.64752279 +0000 UTC m=+0.116508787 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:34:04 compute-0 systemd[1]: Started libpod-conmon-d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8.scope.
Dec 13 09:34:04 compute-0 podman[418377]: 2025-12-13 09:34:04.618535812 +0000 UTC m=+0.029191914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:34:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1987d1eedece97744a888d12f4b8335bfdfddc58d0a36a6d806decf7a8ee3654/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1987d1eedece97744a888d12f4b8335bfdfddc58d0a36a6d806decf7a8ee3654/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1987d1eedece97744a888d12f4b8335bfdfddc58d0a36a6d806decf7a8ee3654/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1987d1eedece97744a888d12f4b8335bfdfddc58d0a36a6d806decf7a8ee3654/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1987d1eedece97744a888d12f4b8335bfdfddc58d0a36a6d806decf7a8ee3654/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:04 compute-0 podman[418377]: 2025-12-13 09:34:04.754002024 +0000 UTC m=+0.164658116 container init d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_driscoll, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:34:04 compute-0 podman[418377]: 2025-12-13 09:34:04.764052877 +0000 UTC m=+0.174708949 container start d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:34:04 compute-0 podman[418377]: 2025-12-13 09:34:04.768140079 +0000 UTC m=+0.178796151 container attach d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_driscoll, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:34:04 compute-0 nova_compute[248510]: 2025-12-13 09:34:04.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:34:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:34:05 compute-0 keen_driscoll[418427]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:34:05 compute-0 keen_driscoll[418427]: --> All data devices are unavailable
Dec 13 09:34:05 compute-0 systemd[1]: libpod-d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8.scope: Deactivated successfully.
Dec 13 09:34:05 compute-0 podman[418377]: 2025-12-13 09:34:05.300686624 +0000 UTC m=+0.711342696 container died d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-1987d1eedece97744a888d12f4b8335bfdfddc58d0a36a6d806decf7a8ee3654-merged.mount: Deactivated successfully.
Dec 13 09:34:05 compute-0 podman[418377]: 2025-12-13 09:34:05.346612487 +0000 UTC m=+0.757268559 container remove d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_driscoll, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:34:05 compute-0 systemd[1]: libpod-conmon-d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8.scope: Deactivated successfully.
Dec 13 09:34:05 compute-0 sudo[418217]: pam_unix(sudo:session): session closed for user root
Dec 13 09:34:05 compute-0 sudo[418618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:34:05 compute-0 sudo[418618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:34:05 compute-0 sudo[418618]: pam_unix(sudo:session): session closed for user root
Dec 13 09:34:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4000: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 13 09:34:05 compute-0 nova_compute[248510]: 2025-12-13 09:34:05.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:05 compute-0 sudo[418664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:34:05 compute-0 sudo[418664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:34:05 compute-0 podman[418702]: 2025-12-13 09:34:05.893122332 +0000 UTC m=+0.049017492 container create 12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:34:05 compute-0 systemd[1]: Started libpod-conmon-12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200.scope.
Dec 13 09:34:05 compute-0 podman[418702]: 2025-12-13 09:34:05.872702429 +0000 UTC m=+0.028597609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:34:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:34:05 compute-0 podman[418702]: 2025-12-13 09:34:05.986191929 +0000 UTC m=+0.142087119 container init 12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 09:34:05 compute-0 podman[418702]: 2025-12-13 09:34:05.993518363 +0000 UTC m=+0.149413553 container start 12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Dec 13 09:34:05 compute-0 podman[418702]: 2025-12-13 09:34:05.997815871 +0000 UTC m=+0.153711061 container attach 12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 09:34:06 compute-0 funny_gates[418718]: 167 167
Dec 13 09:34:06 compute-0 systemd[1]: libpod-12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200.scope: Deactivated successfully.
Dec 13 09:34:06 compute-0 podman[418702]: 2025-12-13 09:34:06.001708919 +0000 UTC m=+0.157604079 container died 12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8325494977d5b59c8fa5c4cfd63d4975cc9e9ad1e819a2977c78b13ac812761-merged.mount: Deactivated successfully.
Dec 13 09:34:06 compute-0 podman[418702]: 2025-12-13 09:34:06.055656044 +0000 UTC m=+0.211551204 container remove 12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:34:06 compute-0 systemd[1]: libpod-conmon-12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200.scope: Deactivated successfully.
Dec 13 09:34:06 compute-0 podman[418742]: 2025-12-13 09:34:06.268284704 +0000 UTC m=+0.052467619 container create 3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 09:34:06 compute-0 systemd[1]: Started libpod-conmon-3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663.scope.
Dec 13 09:34:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:34:06 compute-0 podman[418742]: 2025-12-13 09:34:06.245400169 +0000 UTC m=+0.029583134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55fe42313bc0677862f764fb537da779b5eb9550a692c8cb467ae8a3e600cbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55fe42313bc0677862f764fb537da779b5eb9550a692c8cb467ae8a3e600cbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55fe42313bc0677862f764fb537da779b5eb9550a692c8cb467ae8a3e600cbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55fe42313bc0677862f764fb537da779b5eb9550a692c8cb467ae8a3e600cbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:06 compute-0 podman[418742]: 2025-12-13 09:34:06.353669458 +0000 UTC m=+0.137852363 container init 3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 09:34:06 compute-0 nova_compute[248510]: 2025-12-13 09:34:06.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:06 compute-0 podman[418742]: 2025-12-13 09:34:06.362133471 +0000 UTC m=+0.146316376 container start 3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:34:06 compute-0 podman[418742]: 2025-12-13 09:34:06.366734916 +0000 UTC m=+0.150917821 container attach 3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 09:34:06 compute-0 ceph-mon[76537]: pgmap v4000: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 13 09:34:06 compute-0 bold_wright[418758]: {
Dec 13 09:34:06 compute-0 bold_wright[418758]:     "0": [
Dec 13 09:34:06 compute-0 bold_wright[418758]:         {
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "devices": [
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "/dev/loop3"
Dec 13 09:34:06 compute-0 bold_wright[418758]:             ],
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_name": "ceph_lv0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_size": "21470642176",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "name": "ceph_lv0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "tags": {
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.cluster_name": "ceph",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.crush_device_class": "",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.encrypted": "0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.objectstore": "bluestore",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.osd_id": "0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.type": "block",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.vdo": "0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.with_tpm": "0"
Dec 13 09:34:06 compute-0 bold_wright[418758]:             },
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "type": "block",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "vg_name": "ceph_vg0"
Dec 13 09:34:06 compute-0 bold_wright[418758]:         }
Dec 13 09:34:06 compute-0 bold_wright[418758]:     ],
Dec 13 09:34:06 compute-0 bold_wright[418758]:     "1": [
Dec 13 09:34:06 compute-0 bold_wright[418758]:         {
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "devices": [
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "/dev/loop4"
Dec 13 09:34:06 compute-0 bold_wright[418758]:             ],
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_name": "ceph_lv1",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_size": "21470642176",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "name": "ceph_lv1",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "tags": {
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.cluster_name": "ceph",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.crush_device_class": "",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.encrypted": "0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.objectstore": "bluestore",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.osd_id": "1",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.type": "block",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.vdo": "0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.with_tpm": "0"
Dec 13 09:34:06 compute-0 bold_wright[418758]:             },
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "type": "block",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "vg_name": "ceph_vg1"
Dec 13 09:34:06 compute-0 bold_wright[418758]:         }
Dec 13 09:34:06 compute-0 bold_wright[418758]:     ],
Dec 13 09:34:06 compute-0 bold_wright[418758]:     "2": [
Dec 13 09:34:06 compute-0 bold_wright[418758]:         {
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "devices": [
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "/dev/loop5"
Dec 13 09:34:06 compute-0 bold_wright[418758]:             ],
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_name": "ceph_lv2",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_size": "21470642176",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "name": "ceph_lv2",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "tags": {
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.cluster_name": "ceph",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.crush_device_class": "",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.encrypted": "0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.objectstore": "bluestore",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.osd_id": "2",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.type": "block",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.vdo": "0",
Dec 13 09:34:06 compute-0 bold_wright[418758]:                 "ceph.with_tpm": "0"
Dec 13 09:34:06 compute-0 bold_wright[418758]:             },
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "type": "block",
Dec 13 09:34:06 compute-0 bold_wright[418758]:             "vg_name": "ceph_vg2"
Dec 13 09:34:06 compute-0 bold_wright[418758]:         }
Dec 13 09:34:06 compute-0 bold_wright[418758]:     ]
Dec 13 09:34:06 compute-0 bold_wright[418758]: }
Dec 13 09:34:06 compute-0 systemd[1]: libpod-3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663.scope: Deactivated successfully.
Dec 13 09:34:06 compute-0 podman[418742]: 2025-12-13 09:34:06.725349043 +0000 UTC m=+0.509531948 container died 3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 09:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a55fe42313bc0677862f764fb537da779b5eb9550a692c8cb467ae8a3e600cbc-merged.mount: Deactivated successfully.
Dec 13 09:34:06 compute-0 podman[418742]: 2025-12-13 09:34:06.776946318 +0000 UTC m=+0.561129233 container remove 3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 09:34:06 compute-0 systemd[1]: libpod-conmon-3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663.scope: Deactivated successfully.
Dec 13 09:34:06 compute-0 sudo[418664]: pam_unix(sudo:session): session closed for user root
Dec 13 09:34:06 compute-0 sudo[418780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:34:06 compute-0 sudo[418780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:34:06 compute-0 sudo[418780]: pam_unix(sudo:session): session closed for user root
Dec 13 09:34:06 compute-0 sudo[418805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:34:06 compute-0 sudo[418805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:34:07 compute-0 podman[418842]: 2025-12-13 09:34:07.306527058 +0000 UTC m=+0.046740454 container create 24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Dec 13 09:34:07 compute-0 systemd[1]: Started libpod-conmon-24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b.scope.
Dec 13 09:34:07 compute-0 podman[418842]: 2025-12-13 09:34:07.285237134 +0000 UTC m=+0.025450580 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:34:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:34:07 compute-0 podman[418842]: 2025-12-13 09:34:07.410396557 +0000 UTC m=+0.150609973 container init 24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_carver, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:34:07 compute-0 podman[418842]: 2025-12-13 09:34:07.422153912 +0000 UTC m=+0.162367308 container start 24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 09:34:07 compute-0 podman[418842]: 2025-12-13 09:34:07.425827665 +0000 UTC m=+0.166041061 container attach 24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:34:07 compute-0 cranky_carver[418858]: 167 167
Dec 13 09:34:07 compute-0 systemd[1]: libpod-24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b.scope: Deactivated successfully.
Dec 13 09:34:07 compute-0 conmon[418858]: conmon 24f4a84d5faff117b719 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b.scope/container/memory.events
Dec 13 09:34:07 compute-0 podman[418842]: 2025-12-13 09:34:07.433757764 +0000 UTC m=+0.173971180 container died 24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 09:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dd966305c25bf1fdcf1d73fb9508f94632d5fcfb49a12ca81b6b007b105c731-merged.mount: Deactivated successfully.
Dec 13 09:34:07 compute-0 podman[418842]: 2025-12-13 09:34:07.490908749 +0000 UTC m=+0.231122155 container remove 24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_carver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 09:34:07 compute-0 systemd[1]: libpod-conmon-24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b.scope: Deactivated successfully.
Dec 13 09:34:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4001: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 13 09:34:07 compute-0 podman[418882]: 2025-12-13 09:34:07.700930194 +0000 UTC m=+0.046294084 container create d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_moser, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 13 09:34:07 compute-0 systemd[1]: Started libpod-conmon-d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6.scope.
Dec 13 09:34:07 compute-0 podman[418882]: 2025-12-13 09:34:07.681402053 +0000 UTC m=+0.026765993 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:34:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6780dd2cb495a69b72f5c644dceffba76a7d12a008931b11fb2b7e1f83e7c07d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6780dd2cb495a69b72f5c644dceffba76a7d12a008931b11fb2b7e1f83e7c07d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6780dd2cb495a69b72f5c644dceffba76a7d12a008931b11fb2b7e1f83e7c07d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6780dd2cb495a69b72f5c644dceffba76a7d12a008931b11fb2b7e1f83e7c07d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:34:07 compute-0 podman[418882]: 2025-12-13 09:34:07.803196672 +0000 UTC m=+0.148560652 container init d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 09:34:07 compute-0 podman[418882]: 2025-12-13 09:34:07.810791033 +0000 UTC m=+0.156154963 container start d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_moser, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 09:34:07 compute-0 podman[418882]: 2025-12-13 09:34:07.814795533 +0000 UTC m=+0.160159443 container attach d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_moser, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:34:08 compute-0 sshd-session[418260]: Connection closed by 192.168.122.30 port 42660
Dec 13 09:34:08 compute-0 sshd-session[418242]: pam_unix(sshd:session): session closed for user zuul
Dec 13 09:34:08 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Dec 13 09:34:08 compute-0 systemd-logind[787]: Session 55 logged out. Waiting for processes to exit.
Dec 13 09:34:08 compute-0 systemd-logind[787]: Removed session 55.
Dec 13 09:34:08 compute-0 lvm[419002]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:34:08 compute-0 lvm[419002]: VG ceph_vg1 finished
Dec 13 09:34:08 compute-0 lvm[419001]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:34:08 compute-0 lvm[419001]: VG ceph_vg0 finished
Dec 13 09:34:08 compute-0 lvm[419004]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:34:08 compute-0 lvm[419004]: VG ceph_vg2 finished
Dec 13 09:34:08 compute-0 ceph-mon[76537]: pgmap v4001: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 13 09:34:08 compute-0 magical_moser[418899]: {}
Dec 13 09:34:08 compute-0 systemd[1]: libpod-d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6.scope: Deactivated successfully.
Dec 13 09:34:08 compute-0 systemd[1]: libpod-d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6.scope: Consumed 1.501s CPU time.
Dec 13 09:34:08 compute-0 podman[418882]: 2025-12-13 09:34:08.690943707 +0000 UTC m=+1.036307607 container died d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 09:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6780dd2cb495a69b72f5c644dceffba76a7d12a008931b11fb2b7e1f83e7c07d-merged.mount: Deactivated successfully.
Dec 13 09:34:09 compute-0 podman[418882]: 2025-12-13 09:34:09.044308082 +0000 UTC m=+1.389672012 container remove d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_moser, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 09:34:09 compute-0 systemd[1]: libpod-conmon-d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6.scope: Deactivated successfully.
Dec 13 09:34:09 compute-0 sudo[418805]: pam_unix(sudo:session): session closed for user root
Dec 13 09:34:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:34:09 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:34:09 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:09 compute-0 sudo[419019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:34:09 compute-0 sudo[419019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:34:09 compute-0 sudo[419019]: pam_unix(sudo:session): session closed for user root
Dec 13 09:34:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4002: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 09:34:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:34:09
Dec 13 09:34:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:34:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:34:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images']
Dec 13 09:34:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:34:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:34:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:34:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:34:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:34:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:34:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:34:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:34:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:10 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:34:10 compute-0 ceph-mon[76537]: pgmap v4002: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 09:34:10 compute-0 nova_compute[248510]: 2025-12-13 09:34:10.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:34:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:34:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:34:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:34:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:34:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:34:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:34:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:34:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:34:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:34:11 compute-0 nova_compute[248510]: 2025-12-13 09:34:11.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:34:11.515 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:34:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4003: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 63 op/s
Dec 13 09:34:12 compute-0 ceph-mon[76537]: pgmap v4003: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 63 op/s
Dec 13 09:34:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4004: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Dec 13 09:34:14 compute-0 ceph-mon[76537]: pgmap v4004: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Dec 13 09:34:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:34:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:34:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2564468377' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:34:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:34:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2564468377' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:34:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2564468377' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:34:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2564468377' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:34:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4005: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Dec 13 09:34:15 compute-0 nova_compute[248510]: 2025-12-13 09:34:15.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:16 compute-0 nova_compute[248510]: 2025-12-13 09:34:16.357 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:16 compute-0 ceph-mon[76537]: pgmap v4005: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Dec 13 09:34:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4006: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 13 09:34:18 compute-0 ceph-mon[76537]: pgmap v4006: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 13 09:34:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4007: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 13 09:34:19 compute-0 ceph-mon[76537]: pgmap v4007: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 13 09:34:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:34:20 compute-0 nova_compute[248510]: 2025-12-13 09:34:20.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:21 compute-0 nova_compute[248510]: 2025-12-13 09:34:21.359 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4008: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:34:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:34:22 compute-0 ceph-mon[76537]: pgmap v4008: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Dec 13 09:34:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4009: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:23 compute-0 ceph-mon[76537]: pgmap v4009: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:34:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4010: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:25 compute-0 nova_compute[248510]: 2025-12-13 09:34:25.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:26 compute-0 nova_compute[248510]: 2025-12-13 09:34:26.391 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:26 compute-0 ceph-mon[76537]: pgmap v4010: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4011: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:28 compute-0 ceph-mon[76537]: pgmap v4011: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4012: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:34:30 compute-0 ceph-mon[76537]: pgmap v4012: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:30 compute-0 nova_compute[248510]: 2025-12-13 09:34:30.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:31 compute-0 nova_compute[248510]: 2025-12-13 09:34:31.440 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4013: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:32 compute-0 ceph-mon[76537]: pgmap v4013: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4014: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:33 compute-0 ceph-mon[76537]: pgmap v4014: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:34 compute-0 sshd-session[419044]: Invalid user zilliqa from 80.94.92.165 port 44666
Dec 13 09:34:34 compute-0 sshd-session[419044]: Connection closed by invalid user zilliqa 80.94.92.165 port 44666 [preauth]
Dec 13 09:34:35 compute-0 podman[419047]: 2025-12-13 09:34:35.025698096 +0000 UTC m=+0.098270529 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible)
Dec 13 09:34:35 compute-0 podman[419048]: 2025-12-13 09:34:35.025755057 +0000 UTC m=+0.094110994 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 13 09:34:35 compute-0 podman[419046]: 2025-12-13 09:34:35.06291242 +0000 UTC m=+0.135112944 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 09:34:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:34:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4015: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:35 compute-0 nova_compute[248510]: 2025-12-13 09:34:35.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:36 compute-0 ceph-mon[76537]: pgmap v4015: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:36 compute-0 nova_compute[248510]: 2025-12-13 09:34:36.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4016: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:38 compute-0 ceph-mon[76537]: pgmap v4016: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:38 compute-0 nova_compute[248510]: 2025-12-13 09:34:38.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:34:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4017: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:34:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:34:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:34:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:34:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:34:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:34:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:34:40 compute-0 ceph-mon[76537]: pgmap v4017: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:40 compute-0 nova_compute[248510]: 2025-12-13 09:34:40.554 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:40 compute-0 nova_compute[248510]: 2025-12-13 09:34:40.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:34:41 compute-0 nova_compute[248510]: 2025-12-13 09:34:41.482 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4018: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:42 compute-0 ceph-mon[76537]: pgmap v4018: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4019: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:44 compute-0 ceph-mon[76537]: pgmap v4019: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:34:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4020: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:45 compute-0 nova_compute[248510]: 2025-12-13 09:34:45.555 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:46 compute-0 nova_compute[248510]: 2025-12-13 09:34:46.486 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:47 compute-0 ceph-mon[76537]: pgmap v4020: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4021: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:49 compute-0 ceph-mon[76537]: pgmap v4021: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:34:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4022: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Dec 13 09:34:49 compute-0 nova_compute[248510]: 2025-12-13 09:34:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:34:49 compute-0 nova_compute[248510]: 2025-12-13 09:34:49.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:34:49 compute-0 nova_compute[248510]: 2025-12-13 09:34:49.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:34:50 compute-0 nova_compute[248510]: 2025-12-13 09:34:50.399 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:34:50 compute-0 nova_compute[248510]: 2025-12-13 09:34:50.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:50 compute-0 nova_compute[248510]: 2025-12-13 09:34:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:34:50 compute-0 nova_compute[248510]: 2025-12-13 09:34:50.885 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:34:50 compute-0 nova_compute[248510]: 2025-12-13 09:34:50.886 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:34:50 compute-0 nova_compute[248510]: 2025-12-13 09:34:50.886 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:34:50 compute-0 nova_compute[248510]: 2025-12-13 09:34:50.886 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:34:50 compute-0 nova_compute[248510]: 2025-12-13 09:34:50.887 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:34:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:34:51 compute-0 nova_compute[248510]: 2025-12-13 09:34:51.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4023: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 09:34:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:34:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1835873693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:34:51 compute-0 nova_compute[248510]: 2025-12-13 09:34:51.703 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.816s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:34:51 compute-0 nova_compute[248510]: 2025-12-13 09:34:51.992 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:34:51 compute-0 nova_compute[248510]: 2025-12-13 09:34:51.995 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3487MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:34:51 compute-0 nova_compute[248510]: 2025-12-13 09:34:51.996 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:34:51 compute-0 nova_compute[248510]: 2025-12-13 09:34:51.996 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:34:52 compute-0 nova_compute[248510]: 2025-12-13 09:34:52.115 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:34:52 compute-0 nova_compute[248510]: 2025-12-13 09:34:52.116 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:34:52 compute-0 nova_compute[248510]: 2025-12-13 09:34:52.140 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:34:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4024: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:34:55.471 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:34:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:34:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:34:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:34:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4025: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:34:55 compute-0 nova_compute[248510]: 2025-12-13 09:34:55.580 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:34:56 compute-0 nova_compute[248510]: 2025-12-13 09:34:56.492 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:34:56 compute-0 ceph-mon[76537]: pgmap v4022: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Dec 13 09:34:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4026: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:34:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:34:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026049802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:34:57 compute-0 nova_compute[248510]: 2025-12-13 09:34:57.767 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:34:57 compute-0 nova_compute[248510]: 2025-12-13 09:34:57.776 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:34:58 compute-0 nova_compute[248510]: 2025-12-13 09:34:58.487 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:34:58 compute-0 nova_compute[248510]: 2025-12-13 09:34:58.490 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:34:58 compute-0 nova_compute[248510]: 2025-12-13 09:34:58.491 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:34:58 compute-0 ceph-mon[76537]: pgmap v4023: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 09:34:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1835873693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:34:58 compute-0 ceph-mon[76537]: pgmap v4024: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:34:58 compute-0 ceph-mon[76537]: pgmap v4025: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:34:58 compute-0 ceph-mon[76537]: pgmap v4026: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:34:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3026049802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:34:59 compute-0 nova_compute[248510]: 2025-12-13 09:34:59.492 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:34:59 compute-0 nova_compute[248510]: 2025-12-13 09:34:59.493 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:34:59 compute-0 nova_compute[248510]: 2025-12-13 09:34:59.493 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:34:59 compute-0 nova_compute[248510]: 2025-12-13 09:34:59.494 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:34:59 compute-0 nova_compute[248510]: 2025-12-13 09:34:59.494 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:34:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4027: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:35:00 compute-0 nova_compute[248510]: 2025-12-13 09:35:00.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:35:01 compute-0 nova_compute[248510]: 2025-12-13 09:35:01.495 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4028: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:35:01 compute-0 ceph-mon[76537]: pgmap v4027: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:35:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4029: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 09:35:04 compute-0 ceph-mon[76537]: pgmap v4028: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:35:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4030: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:35:05 compute-0 nova_compute[248510]: 2025-12-13 09:35:05.620 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:05 compute-0 podman[419152]: 2025-12-13 09:35:05.976033715 +0000 UTC m=+0.063525766 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 09:35:06 compute-0 podman[419153]: 2025-12-13 09:35:06.014914041 +0000 UTC m=+0.088627606 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 13 09:35:06 compute-0 podman[419151]: 2025-12-13 09:35:06.014945902 +0000 UTC m=+0.096604467 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:35:06 compute-0 nova_compute[248510]: 2025-12-13 09:35:06.497 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:35:06 compute-0 nova_compute[248510]: 2025-12-13 09:35:06.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:35:06 compute-0 ceph-mon[76537]: pgmap v4029: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 09:35:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4031: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 09:35:09 compute-0 ceph-mon[76537]: pgmap v4030: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:35:09 compute-0 sudo[419213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:35:09 compute-0 sudo[419213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:35:09 compute-0 sudo[419213]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:09 compute-0 sudo[419238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:35:09 compute-0 sudo[419238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:35:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:35:09
Dec 13 09:35:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:35:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:35:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', '.mgr']
Dec 13 09:35:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:35:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4032: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 6 op/s
Dec 13 09:35:10 compute-0 sudo[419238]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:35:10 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:35:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:35:10 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:35:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:35:10 compute-0 ceph-mon[76537]: pgmap v4031: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 09:35:10 compute-0 ceph-mon[76537]: pgmap v4032: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 6 op/s
Dec 13 09:35:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:35:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:35:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:35:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:35:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:35:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:35:10 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:35:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:35:10 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:35:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:35:10 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:35:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:35:10 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:35:10 compute-0 sudo[419294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:35:10 compute-0 sudo[419294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:35:10 compute-0 sudo[419294]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:10 compute-0 nova_compute[248510]: 2025-12-13 09:35:10.623 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:10 compute-0 sudo[419319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:35:10 compute-0 sudo[419319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:35:11 compute-0 podman[419356]: 2025-12-13 09:35:11.014117962 +0000 UTC m=+0.038350674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:35:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:35:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:35:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:35:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:35:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:35:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:35:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:35:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:35:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:35:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:35:11 compute-0 nova_compute[248510]: 2025-12-13 09:35:11.527 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4033: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 09:35:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:35:12 compute-0 podman[419356]: 2025-12-13 09:35:12.014909896 +0000 UTC m=+1.039142548 container create e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:35:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:35:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:35:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:35:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:35:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:35:12 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:35:13 compute-0 systemd[1]: Started libpod-conmon-e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c.scope.
Dec 13 09:35:13 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:35:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4034: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 09:35:14 compute-0 podman[419356]: 2025-12-13 09:35:14.05578149 +0000 UTC m=+3.080014212 container init e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:35:14 compute-0 podman[419356]: 2025-12-13 09:35:14.068473469 +0000 UTC m=+3.092706141 container start e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curie, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 09:35:14 compute-0 condescending_curie[419373]: 167 167
Dec 13 09:35:14 compute-0 systemd[1]: libpod-e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c.scope: Deactivated successfully.
Dec 13 09:35:14 compute-0 conmon[419373]: conmon e52d8576cf4d35e992e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c.scope/container/memory.events
Dec 13 09:35:14 compute-0 podman[419356]: 2025-12-13 09:35:14.591954685 +0000 UTC m=+3.616187377 container attach e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 09:35:14 compute-0 podman[419356]: 2025-12-13 09:35:14.592671173 +0000 UTC m=+3.616903885 container died e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 09:35:14 compute-0 ceph-mon[76537]: pgmap v4033: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 09:35:14 compute-0 ceph-mon[76537]: pgmap v4034: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 09:35:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4035: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 09:35:15 compute-0 nova_compute[248510]: 2025-12-13 09:35:15.626 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:35:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2986954795' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:35:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:35:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2986954795' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4112d5cfafc40d6df4d8a69dcfd7c066158305bedc1433fc296039a83f055f3b-merged.mount: Deactivated successfully.
Dec 13 09:35:16 compute-0 nova_compute[248510]: 2025-12-13 09:35:16.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:16 compute-0 ceph-mon[76537]: pgmap v4035: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 09:35:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2986954795' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:35:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/2986954795' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:35:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:35:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4036: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 09:35:17 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Dec 13 09:35:17 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:17.591026) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:35:17 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Dec 13 09:35:17 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618517591152, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1032, "num_deletes": 262, "total_data_size": 1579712, "memory_usage": 1609664, "flush_reason": "Manual Compaction"}
Dec 13 09:35:17 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618518262164, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 1530896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79822, "largest_seqno": 80853, "table_properties": {"data_size": 1525789, "index_size": 2566, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10911, "raw_average_key_size": 19, "raw_value_size": 1515525, "raw_average_value_size": 2687, "num_data_blocks": 115, "num_entries": 564, "num_filter_entries": 564, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618413, "oldest_key_time": 1765618413, "file_creation_time": 1765618517, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 671218 microseconds, and 8955 cpu microseconds.
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.262240) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 1530896 bytes OK
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.262271) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.482286) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.482328) EVENT_LOG_v1 {"time_micros": 1765618518482316, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.482359) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 1574770, prev total WAL file size 1603089, number of live WAL files 2.
Dec 13 09:35:18 compute-0 podman[419356]: 2025-12-13 09:35:18.482346979 +0000 UTC m=+7.506579611 container remove e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curie, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.483932) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353136' seq:72057594037927935, type:22 .. '6C6F676D0033373734' seq:0, type:0; will stop at (end)
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(1495KB)], [191(9629KB)]
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618518483977, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 11391657, "oldest_snapshot_seqno": -1}
Dec 13 09:35:18 compute-0 systemd[1]: libpod-conmon-e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c.scope: Deactivated successfully.
Dec 13 09:35:18 compute-0 podman[419398]: 2025-12-13 09:35:18.681033199 +0000 UTC m=+0.038921179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 9607 keys, 11277108 bytes, temperature: kUnknown
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618518956184, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 11277108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11217374, "index_size": 34597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 253963, "raw_average_key_size": 26, "raw_value_size": 11050657, "raw_average_value_size": 1150, "num_data_blocks": 1328, "num_entries": 9607, "num_filter_entries": 9607, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618518, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:35:18 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:35:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4037: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.956563) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 11277108 bytes
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.100981) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 24.1 rd, 23.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(14.8) write-amplify(7.4) OK, records in: 10143, records dropped: 536 output_compression: NoCompression
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.101047) EVENT_LOG_v1 {"time_micros": 1765618520101018, "job": 120, "event": "compaction_finished", "compaction_time_micros": 472315, "compaction_time_cpu_micros": 50245, "output_level": 6, "num_output_files": 1, "total_output_size": 11277108, "num_input_records": 10143, "num_output_records": 9607, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618520101918, "job": 120, "event": "table_file_deletion", "file_number": 193}
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618520105346, "job": 120, "event": "table_file_deletion", "file_number": 191}
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.483859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.105399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.105407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.105410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.105414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:35:20 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.105417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:35:20 compute-0 podman[419398]: 2025-12-13 09:35:20.330301559 +0000 UTC m=+1.688189459 container create f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 09:35:20 compute-0 ceph-mon[76537]: pgmap v4036: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 09:35:20 compute-0 nova_compute[248510]: 2025-12-13 09:35:20.629 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:21 compute-0 systemd[1]: Started libpod-conmon-f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9.scope.
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4038: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:35:21 compute-0 nova_compute[248510]: 2025-12-13 09:35:21.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6474256d1f73c1e9489f5a91a0d9ff588d68c13d01980a636fe9605d753af308/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6474256d1f73c1e9489f5a91a0d9ff588d68c13d01980a636fe9605d753af308/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6474256d1f73c1e9489f5a91a0d9ff588d68c13d01980a636fe9605d753af308/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6474256d1f73c1e9489f5a91a0d9ff588d68c13d01980a636fe9605d753af308/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6474256d1f73c1e9489f5a91a0d9ff588d68c13d01980a636fe9605d753af308/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:35:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:35:22 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:35:22 compute-0 podman[419398]: 2025-12-13 09:35:22.453968622 +0000 UTC m=+3.811856612 container init f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:35:22 compute-0 podman[419398]: 2025-12-13 09:35:22.469234315 +0000 UTC m=+3.827122225 container start f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:35:23 compute-0 romantic_banzai[419414]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:35:23 compute-0 romantic_banzai[419414]: --> All data devices are unavailable
Dec 13 09:35:23 compute-0 systemd[1]: libpod-f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9.scope: Deactivated successfully.
Dec 13 09:35:23 compute-0 podman[419398]: 2025-12-13 09:35:23.466884031 +0000 UTC m=+4.824771951 container attach f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:35:23 compute-0 podman[419398]: 2025-12-13 09:35:23.470865921 +0000 UTC m=+4.828753841 container died f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 09:35:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4039: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 09:35:23 compute-0 ceph-mon[76537]: pgmap v4037: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 09:35:23 compute-0 ceph-mon[76537]: pgmap v4038: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 09:35:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4040: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Dec 13 09:35:25 compute-0 nova_compute[248510]: 2025-12-13 09:35:25.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6474256d1f73c1e9489f5a91a0d9ff588d68c13d01980a636fe9605d753af308-merged.mount: Deactivated successfully.
Dec 13 09:35:25 compute-0 ceph-mon[76537]: pgmap v4039: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 09:35:26 compute-0 nova_compute[248510]: 2025-12-13 09:35:26.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4041: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:35:28 compute-0 ceph-mon[76537]: pgmap v4040: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Dec 13 09:35:28 compute-0 podman[419398]: 2025-12-13 09:35:28.560516733 +0000 UTC m=+9.918404673 container remove f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:35:28 compute-0 systemd[1]: libpod-conmon-f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9.scope: Deactivated successfully.
Dec 13 09:35:28 compute-0 sudo[419319]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:28 compute-0 sudo[419447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:35:28 compute-0 sudo[419447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:35:28 compute-0 sudo[419447]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:28 compute-0 sudo[419472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:35:28 compute-0 sudo[419472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:35:29 compute-0 podman[419510]: 2025-12-13 09:35:29.099650523 +0000 UTC m=+0.028724743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:35:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4042: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:29 compute-0 podman[419510]: 2025-12-13 09:35:29.647465631 +0000 UTC m=+0.576539741 container create 7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_archimedes, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 09:35:29 compute-0 ceph-mon[76537]: pgmap v4041: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:30 compute-0 systemd[1]: Started libpod-conmon-7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5.scope.
Dec 13 09:35:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:35:30 compute-0 nova_compute[248510]: 2025-12-13 09:35:30.633 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:30 compute-0 podman[419510]: 2025-12-13 09:35:30.758290008 +0000 UTC m=+1.687364148 container init 7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:35:30 compute-0 podman[419510]: 2025-12-13 09:35:30.770794113 +0000 UTC m=+1.699868263 container start 7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 09:35:30 compute-0 busy_archimedes[419526]: 167 167
Dec 13 09:35:30 compute-0 systemd[1]: libpod-7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5.scope: Deactivated successfully.
Dec 13 09:35:31 compute-0 ceph-mon[76537]: pgmap v4042: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:31 compute-0 podman[419510]: 2025-12-13 09:35:31.122341081 +0000 UTC m=+2.051415211 container attach 7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_archimedes, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 09:35:31 compute-0 podman[419510]: 2025-12-13 09:35:31.123539591 +0000 UTC m=+2.052613711 container died 7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 09:35:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4043: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:31 compute-0 nova_compute[248510]: 2025-12-13 09:35:31.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:32 compute-0 ceph-mon[76537]: pgmap v4043: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:35:32.843 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:35:32 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:35:32.844 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:35:32 compute-0 sshd-session[419543]: Accepted publickey for zuul from 192.168.122.30 port 52730 ssh2: ECDSA SHA256:LSt7YF7gQCPgR0tIb34gZL5shqra8qE8VX5d32veH0w
Dec 13 09:35:32 compute-0 nova_compute[248510]: 2025-12-13 09:35:32.878 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:32 compute-0 systemd-logind[787]: New session 56 of user zuul.
Dec 13 09:35:32 compute-0 systemd[1]: Started Session 56 of User zuul.
Dec 13 09:35:32 compute-0 sshd-session[419543]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 09:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-303ed7bbf5942a236c8dba9fd284ae12c57d67151f2c87b3a862a40856a559ab-merged.mount: Deactivated successfully.
Dec 13 09:35:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:35:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4044: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:33 compute-0 sudo[419616]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain iscsid.service
Dec 13 09:35:33 compute-0 sudo[419616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:33 compute-0 sudo[419616]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:33 compute-0 sudo[419641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_nova_compute.service
Dec 13 09:35:33 compute-0 sudo[419641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:33 compute-0 sudo[419641]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:34 compute-0 sudo[419666]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_controller.service
Dec 13 09:35:34 compute-0 sudo[419666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:34 compute-0 podman[419510]: 2025-12-13 09:35:34.029479662 +0000 UTC m=+4.958553822 container remove 7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 09:35:34 compute-0 sudo[419666]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:34 compute-0 systemd[1]: libpod-conmon-7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5.scope: Deactivated successfully.
Dec 13 09:35:34 compute-0 sudo[419693]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_metadata_agent.service
Dec 13 09:35:34 compute-0 sudo[419693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:34 compute-0 sudo[419693]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:34 compute-0 podman[419723]: 2025-12-13 09:35:34.285799659 +0000 UTC m=+0.038041976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:35:34 compute-0 nova_compute[248510]: 2025-12-13 09:35:34.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:35:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4045: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:35 compute-0 nova_compute[248510]: 2025-12-13 09:35:35.635 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:35 compute-0 podman[419723]: 2025-12-13 09:35:35.692926766 +0000 UTC m=+1.445169033 container create 555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:35:36 compute-0 systemd[1]: Started libpod-conmon-555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a.scope.
Dec 13 09:35:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe11723b3397c4765311c6c611e42d1705d8a7b23f406004479bf7c9b73793e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe11723b3397c4765311c6c611e42d1705d8a7b23f406004479bf7c9b73793e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe11723b3397c4765311c6c611e42d1705d8a7b23f406004479bf7c9b73793e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe11723b3397c4765311c6c611e42d1705d8a7b23f406004479bf7c9b73793e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:36 compute-0 nova_compute[248510]: 2025-12-13 09:35:36.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:36 compute-0 ceph-mon[76537]: pgmap v4044: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:37 compute-0 podman[419723]: 2025-12-13 09:35:37.053601869 +0000 UTC m=+2.805844116 container init 555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_colden, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:35:37 compute-0 podman[419723]: 2025-12-13 09:35:37.072201907 +0000 UTC m=+2.824444174 container start 555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_colden, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 09:35:37 compute-0 mystifying_colden[419739]: {
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:     "0": [
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:         {
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "devices": [
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "/dev/loop3"
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             ],
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_name": "ceph_lv0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_size": "21470642176",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "name": "ceph_lv0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "tags": {
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.cluster_name": "ceph",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.crush_device_class": "",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.encrypted": "0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.objectstore": "bluestore",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.osd_id": "0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.type": "block",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.vdo": "0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.with_tpm": "0"
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             },
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "type": "block",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "vg_name": "ceph_vg0"
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:         }
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:     ],
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:     "1": [
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:         {
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "devices": [
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "/dev/loop4"
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             ],
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_name": "ceph_lv1",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_size": "21470642176",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "name": "ceph_lv1",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "tags": {
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.cluster_name": "ceph",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.crush_device_class": "",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.encrypted": "0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.objectstore": "bluestore",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.osd_id": "1",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.type": "block",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.vdo": "0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.with_tpm": "0"
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             },
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "type": "block",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "vg_name": "ceph_vg1"
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:         }
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:     ],
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:     "2": [
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:         {
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "devices": [
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "/dev/loop5"
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             ],
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_name": "ceph_lv2",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_size": "21470642176",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "name": "ceph_lv2",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "tags": {
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.cluster_name": "ceph",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.crush_device_class": "",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.encrypted": "0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.objectstore": "bluestore",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.osd_id": "2",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.type": "block",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.vdo": "0",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:                 "ceph.with_tpm": "0"
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             },
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "type": "block",
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:             "vg_name": "ceph_vg2"
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:         }
Dec 13 09:35:37 compute-0 mystifying_colden[419739]:     ]
Dec 13 09:35:37 compute-0 mystifying_colden[419739]: }
Dec 13 09:35:37 compute-0 systemd[1]: libpod-555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a.scope: Deactivated successfully.
Dec 13 09:35:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4046: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:37 compute-0 podman[419723]: 2025-12-13 09:35:37.615964653 +0000 UTC m=+3.368206930 container attach 555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_colden, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:35:37 compute-0 podman[419723]: 2025-12-13 09:35:37.618182009 +0000 UTC m=+3.370424276 container died 555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_colden, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 09:35:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:35:38 compute-0 ceph-mon[76537]: pgmap v4045: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:38 compute-0 ceph-mon[76537]: pgmap v4046: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fe11723b3397c4765311c6c611e42d1705d8a7b23f406004479bf7c9b73793e-merged.mount: Deactivated successfully.
Dec 13 09:35:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4047: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:39 compute-0 podman[419723]: 2025-12-13 09:35:39.730936119 +0000 UTC m=+5.483178356 container remove 555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:35:39 compute-0 systemd[1]: libpod-conmon-555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a.scope: Deactivated successfully.
Dec 13 09:35:39 compute-0 nova_compute[248510]: 2025-12-13 09:35:39.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:35:39 compute-0 sudo[419472]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:39 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:35:39.847 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:35:39 compute-0 podman[419744]: 2025-12-13 09:35:39.855056816 +0000 UTC m=+3.507835568 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 09:35:39 compute-0 podman[419742]: 2025-12-13 09:35:39.855586679 +0000 UTC m=+3.516977897 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 09:35:39 compute-0 sudo[419797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:35:39 compute-0 sudo[419797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:35:39 compute-0 sudo[419797]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:39 compute-0 podman[419741]: 2025-12-13 09:35:39.912513819 +0000 UTC m=+3.568856570 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 13 09:35:39 compute-0 sudo[419848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:35:39 compute-0 sudo[419848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:35:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:35:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:35:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:35:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:35:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:35:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:35:40 compute-0 podman[419885]: 2025-12-13 09:35:40.27768963 +0000 UTC m=+0.035082022 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:35:40 compute-0 sshd-session[419899]: Accepted publickey for zuul from 192.168.122.30 port 40156 ssh2: ECDSA SHA256:LSt7YF7gQCPgR0tIb34gZL5shqra8qE8VX5d32veH0w
Dec 13 09:35:40 compute-0 systemd-logind[787]: New session 57 of user zuul.
Dec 13 09:35:40 compute-0 systemd[1]: Started Session 57 of User zuul.
Dec 13 09:35:40 compute-0 sshd-session[419899]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 09:35:40 compute-0 podman[419885]: 2025-12-13 09:35:40.599440651 +0000 UTC m=+0.356832943 container create 07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:35:40 compute-0 nova_compute[248510]: 2025-12-13 09:35:40.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:40 compute-0 sudo[419972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Dec 13 09:35:40 compute-0 sudo[419972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:40 compute-0 sudo[419972]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:40 compute-0 ceph-mon[76537]: pgmap v4047: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:40 compute-0 systemd[1]: Started libpod-conmon-07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68.scope.
Dec 13 09:35:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:35:40 compute-0 sudo[420004]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Dec 13 09:35:40 compute-0 sudo[420004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:40 compute-0 podman[419885]: 2025-12-13 09:35:40.918229916 +0000 UTC m=+0.675622228 container init 07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:35:40 compute-0 podman[419885]: 2025-12-13 09:35:40.92953951 +0000 UTC m=+0.686931802 container start 07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 09:35:40 compute-0 jovial_golick[419979]: 167 167
Dec 13 09:35:40 compute-0 systemd[1]: libpod-07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68.scope: Deactivated successfully.
Dec 13 09:35:41 compute-0 podman[419885]: 2025-12-13 09:35:41.172101572 +0000 UTC m=+0.929493864 container attach 07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:35:41 compute-0 podman[419885]: 2025-12-13 09:35:41.173980979 +0000 UTC m=+0.931373271 container died 07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:35:41 compute-0 groupadd[420006]: group added to /etc/group: name=podman, GID=42479
Dec 13 09:35:41 compute-0 groupadd[420006]: group added to /etc/gshadow: name=podman
Dec 13 09:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f17ec75e62fc75ac6ad749da7c4b61bc858fd5c433d0dd71280f4b60d9e9ed2-merged.mount: Deactivated successfully.
Dec 13 09:35:41 compute-0 groupadd[420006]: new group: name=podman, GID=42479
Dec 13 09:35:41 compute-0 sudo[420004]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:41 compute-0 sudo[420025]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Dec 13 09:35:41 compute-0 sudo[420025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4048: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:41 compute-0 nova_compute[248510]: 2025-12-13 09:35:41.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:42 compute-0 usermod[420027]: add 'zuul' to group 'podman'
Dec 13 09:35:42 compute-0 usermod[420027]: add 'zuul' to shadow group 'podman'
Dec 13 09:35:42 compute-0 podman[419885]: 2025-12-13 09:35:42.348048285 +0000 UTC m=+2.105440577 container remove 07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:35:42 compute-0 systemd[1]: libpod-conmon-07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68.scope: Deactivated successfully.
Dec 13 09:35:42 compute-0 podman[420035]: 2025-12-13 09:35:42.525524142 +0000 UTC m=+0.037584425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:35:42 compute-0 ceph-mon[76537]: pgmap v4048: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:42 compute-0 nova_compute[248510]: 2025-12-13 09:35:42.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:35:43 compute-0 podman[420035]: 2025-12-13 09:35:43.000766317 +0000 UTC m=+0.512826580 container create 74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_williams, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:35:43 compute-0 sudo[420025]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:43 compute-0 sudo[420055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Dec 13 09:35:43 compute-0 sudo[420055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:43 compute-0 sudo[420055]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:43 compute-0 sudo[420058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Dec 13 09:35:43 compute-0 sudo[420058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:43 compute-0 sudo[420058]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:43 compute-0 sudo[420061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Dec 13 09:35:43 compute-0 sudo[420061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:43 compute-0 sudo[420061]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:43 compute-0 sudo[420064]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Dec 13 09:35:43 compute-0 sudo[420064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:43 compute-0 systemd[1]: Started libpod-conmon-74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e.scope.
Dec 13 09:35:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:35:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5815bf93afab729f4194569f345475b8b7dacaf90a945873686b6b289c3c07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5815bf93afab729f4194569f345475b8b7dacaf90a945873686b6b289c3c07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5815bf93afab729f4194569f345475b8b7dacaf90a945873686b6b289c3c07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5815bf93afab729f4194569f345475b8b7dacaf90a945873686b6b289c3c07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:35:43 compute-0 sudo[420064]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:43 compute-0 sudo[420072]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Dec 13 09:35:43 compute-0 sudo[420072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4049: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:35:43 compute-0 podman[420035]: 2025-12-13 09:35:43.914683679 +0000 UTC m=+1.426743952 container init 74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_williams, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec 13 09:35:43 compute-0 podman[420035]: 2025-12-13 09:35:43.922927777 +0000 UTC m=+1.434988040 container start 74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_williams, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:35:44 compute-0 sudo[420072]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:44 compute-0 sudo[420077]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Dec 13 09:35:44 compute-0 sudo[420077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:44 compute-0 systemd[1]: Reloading.
Dec 13 09:35:44 compute-0 systemd-rc-local-generator[420111]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 09:35:44 compute-0 systemd-sysv-generator[420114]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 09:35:45 compute-0 flamboyant_williams[420069]: {}
Dec 13 09:35:45 compute-0 podman[420035]: 2025-12-13 09:35:45.134760091 +0000 UTC m=+2.646820344 container attach 74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 09:35:45 compute-0 podman[420035]: 2025-12-13 09:35:45.159771769 +0000 UTC m=+2.671832062 container died 74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_williams, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:35:45 compute-0 ceph-mon[76537]: pgmap v4049: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4050: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:45 compute-0 sudo[420077]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:45 compute-0 systemd[1]: libpod-74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e.scope: Deactivated successfully.
Dec 13 09:35:45 compute-0 systemd[1]: libpod-74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e.scope: Consumed 1.409s CPU time.
Dec 13 09:35:45 compute-0 sudo[420199]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Dec 13 09:35:45 compute-0 nova_compute[248510]: 2025-12-13 09:35:45.638 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:45 compute-0 sudo[420199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:45 compute-0 lvm[420202]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:35:45 compute-0 lvm[420202]: VG ceph_vg0 finished
Dec 13 09:35:45 compute-0 lvm[420207]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:35:45 compute-0 lvm[420207]: VG ceph_vg1 finished
Dec 13 09:35:45 compute-0 lvm[420208]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:35:45 compute-0 lvm[420208]: VG ceph_vg2 finished
Dec 13 09:35:45 compute-0 sudo[420199]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b5815bf93afab729f4194569f345475b8b7dacaf90a945873686b6b289c3c07-merged.mount: Deactivated successfully.
Dec 13 09:35:45 compute-0 sudo[420209]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Dec 13 09:35:45 compute-0 sudo[420209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:45 compute-0 systemd[1]: Reloading.
Dec 13 09:35:46 compute-0 systemd-rc-local-generator[420236]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 09:35:46 compute-0 systemd-sysv-generator[420240]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 09:35:46 compute-0 podman[420035]: 2025-12-13 09:35:46.277341985 +0000 UTC m=+3.789402248 container remove 74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_williams, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:35:46 compute-0 systemd[1]: libpod-conmon-74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e.scope: Deactivated successfully.
Dec 13 09:35:46 compute-0 systemd[1]: Starting Podman API Socket...
Dec 13 09:35:46 compute-0 systemd[1]: Listening on Podman API Socket.
Dec 13 09:35:46 compute-0 sudo[419848]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:46 compute-0 sudo[420209]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:35:46 compute-0 sudo[420246]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Dec 13 09:35:46 compute-0 sudo[420246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:46 compute-0 sudo[420246]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:46 compute-0 sudo[420249]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Dec 13 09:35:46 compute-0 sudo[420249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:46 compute-0 sudo[420249]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:46 compute-0 sudo[420252]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Dec 13 09:35:46 compute-0 sudo[420252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:46 compute-0 sudo[420252]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:46 compute-0 sudo[420255]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Dec 13 09:35:46 compute-0 sudo[420255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:46 compute-0 sudo[420255]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:46 compute-0 sudo[420258]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Dec 13 09:35:46 compute-0 sudo[420258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:46 compute-0 sudo[420258]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:46 compute-0 sudo[420261]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Dec 13 09:35:46 compute-0 dbus-broker-launch[768]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Dec 13 09:35:46 compute-0 sudo[420261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:46 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Dec 13 09:35:46 compute-0 systemd[1]: Closed Podman API Socket.
Dec 13 09:35:46 compute-0 systemd[1]: Stopping Podman API Socket...
Dec 13 09:35:46 compute-0 systemd[1]: Starting Podman API Socket...
Dec 13 09:35:46 compute-0 systemd[1]: Listening on Podman API Socket.
Dec 13 09:35:46 compute-0 sudo[420261]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:46 compute-0 sudo[419978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Dec 13 09:35:46 compute-0 sudo[419978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:35:46 compute-0 sudo[419978]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:46 compute-0 nova_compute[248510]: 2025-12-13 09:35:46.664 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:46 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:35:46 compute-0 sshd-session[420267]: Accepted publickey for zuul from 192.168.122.30 port 46334 ssh2: ECDSA SHA256:LSt7YF7gQCPgR0tIb34gZL5shqra8qE8VX5d32veH0w
Dec 13 09:35:46 compute-0 systemd-logind[787]: New session 58 of user zuul.
Dec 13 09:35:46 compute-0 systemd[1]: Started Session 58 of User zuul.
Dec 13 09:35:46 compute-0 sshd-session[420267]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 09:35:46 compute-0 systemd[1]: Starting Podman API Service...
Dec 13 09:35:46 compute-0 systemd[1]: Started Podman API Service.
Dec 13 09:35:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:35:46 compute-0 podman[420271]: time="2025-12-13T09:35:46Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec 13 09:35:46 compute-0 podman[420271]: time="2025-12-13T09:35:46Z" level=info msg="Setting parallel job count to 25"
Dec 13 09:35:46 compute-0 podman[420271]: time="2025-12-13T09:35:46Z" level=info msg="Using sqlite as database backend"
Dec 13 09:35:46 compute-0 podman[420271]: time="2025-12-13T09:35:46Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec 13 09:35:46 compute-0 podman[420271]: time="2025-12-13T09:35:46Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec 13 09:35:46 compute-0 podman[420271]: time="2025-12-13T09:35:46Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec 13 09:35:46 compute-0 podman[420271]: @ - - [13/Dec/2025:09:35:46 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Dec 13 09:35:46 compute-0 podman[420271]: @ - - [13/Dec/2025:09:35:46 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 25040 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Dec 13 09:35:47 compute-0 ceph-mon[76537]: pgmap v4050: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4051: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:47 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:35:47 compute-0 sudo[420284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:35:47 compute-0 sudo[420284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:35:47 compute-0 sudo[420284]: pam_unix(sudo:session): session closed for user root
Dec 13 09:35:48 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:35:48 compute-0 ceph-mon[76537]: pgmap v4051: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:48 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:35:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:35:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4052: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:50 compute-0 ceph-mon[76537]: pgmap v4052: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:50 compute-0 nova_compute[248510]: 2025-12-13 09:35:50.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:50 compute-0 nova_compute[248510]: 2025-12-13 09:35:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:35:50 compute-0 nova_compute[248510]: 2025-12-13 09:35:50.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:35:50 compute-0 nova_compute[248510]: 2025-12-13 09:35:50.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:35:50 compute-0 nova_compute[248510]: 2025-12-13 09:35:50.808 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:35:50 compute-0 nova_compute[248510]: 2025-12-13 09:35:50.809 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:35:50 compute-0 nova_compute[248510]: 2025-12-13 09:35:50.859 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:35:50 compute-0 nova_compute[248510]: 2025-12-13 09:35:50.859 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:35:50 compute-0 nova_compute[248510]: 2025-12-13 09:35:50.859 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:35:50 compute-0 nova_compute[248510]: 2025-12-13 09:35:50.860 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:35:50 compute-0 nova_compute[248510]: 2025-12-13 09:35:50.860 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:35:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:35:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350389499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:35:51 compute-0 nova_compute[248510]: 2025-12-13 09:35:51.483 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:35:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4053: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:51 compute-0 nova_compute[248510]: 2025-12-13 09:35:51.667 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:51 compute-0 nova_compute[248510]: 2025-12-13 09:35:51.676 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:35:51 compute-0 nova_compute[248510]: 2025-12-13 09:35:51.677 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3453MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:35:51 compute-0 nova_compute[248510]: 2025-12-13 09:35:51.677 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:35:51 compute-0 nova_compute[248510]: 2025-12-13 09:35:51.677 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:35:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3350389499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:35:51 compute-0 nova_compute[248510]: 2025-12-13 09:35:51.804 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:35:51 compute-0 nova_compute[248510]: 2025-12-13 09:35:51.804 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:35:52 compute-0 nova_compute[248510]: 2025-12-13 09:35:52.003 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:35:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:35:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2736780615' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:35:52 compute-0 nova_compute[248510]: 2025-12-13 09:35:52.582 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:35:52 compute-0 nova_compute[248510]: 2025-12-13 09:35:52.589 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:35:52 compute-0 nova_compute[248510]: 2025-12-13 09:35:52.642 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:35:52 compute-0 nova_compute[248510]: 2025-12-13 09:35:52.644 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:35:52 compute-0 nova_compute[248510]: 2025-12-13 09:35:52.644 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:35:53 compute-0 ceph-mon[76537]: pgmap v4053: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2736780615' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:35:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4054: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:53 compute-0 nova_compute[248510]: 2025-12-13 09:35:53.607 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:35:53 compute-0 nova_compute[248510]: 2025-12-13 09:35:53.608 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:35:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:35:54 compute-0 ceph-mon[76537]: pgmap v4054: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:35:55.471 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:35:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:35:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:35:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:35:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:35:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4055: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:55 compute-0 nova_compute[248510]: 2025-12-13 09:35:55.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:55 compute-0 nova_compute[248510]: 2025-12-13 09:35:55.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:35:55 compute-0 ceph-mon[76537]: pgmap v4055: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:56 compute-0 nova_compute[248510]: 2025-12-13 09:35:56.671 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:35:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4056: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:57 compute-0 nova_compute[248510]: 2025-12-13 09:35:57.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:35:57 compute-0 nova_compute[248510]: 2025-12-13 09:35:57.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:35:58 compute-0 ceph-mon[76537]: pgmap v4056: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:35:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4057: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:35:59 compute-0 ceph-mon[76537]: pgmap v4057: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:00 compute-0 nova_compute[248510]: 2025-12-13 09:36:00.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4058: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:01 compute-0 nova_compute[248510]: 2025-12-13 09:36:01.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:01 compute-0 ceph-mon[76537]: pgmap v4058: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:01 compute-0 podman[420271]: time="2025-12-13T09:36:01Z" level=info msg="Received shutdown.Stop(), terminating!" PID=420271
Dec 13 09:36:01 compute-0 systemd[1]: podman.service: Deactivated successfully.
Dec 13 09:36:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4059: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:03 compute-0 ceph-mon[76537]: pgmap v4059: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:36:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4060: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:05 compute-0 nova_compute[248510]: 2025-12-13 09:36:05.705 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:06 compute-0 ceph-mon[76537]: pgmap v4060: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:06 compute-0 nova_compute[248510]: 2025-12-13 09:36:06.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:06 compute-0 nova_compute[248510]: 2025-12-13 09:36:06.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:36:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4061: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:08 compute-0 ceph-mon[76537]: pgmap v4061: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:36:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:36:09
Dec 13 09:36:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:36:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:36:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.mgr', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'vms']
Dec 13 09:36:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:36:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4062: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:09 compute-0 podman[420355]: 2025-12-13 09:36:09.986331974 +0000 UTC m=+0.075994650 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:36:10 compute-0 podman[420354]: 2025-12-13 09:36:10.015274931 +0000 UTC m=+0.104494966 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:36:10 compute-0 podman[420393]: 2025-12-13 09:36:10.109567599 +0000 UTC m=+0.094657488 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:36:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:36:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:36:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:36:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:36:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:36:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:36:10 compute-0 ceph-mon[76537]: pgmap v4062: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:10 compute-0 nova_compute[248510]: 2025-12-13 09:36:10.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:36:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:36:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:36:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:36:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:36:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:36:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:36:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:36:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:36:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:36:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4063: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:36:11.679 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 09:36:11 compute-0 nova_compute[248510]: 2025-12-13 09:36:11.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:11 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:36:11.681 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 09:36:11 compute-0 sudo[420420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Dec 13 09:36:11 compute-0 sudo[420420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:36:11 compute-0 sudo[420420]: pam_unix(sudo:session): session closed for user root
Dec 13 09:36:11 compute-0 sudo[420445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Dec 13 09:36:11 compute-0 sudo[420445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:36:11 compute-0 sudo[420445]: pam_unix(sudo:session): session closed for user root
Dec 13 09:36:12 compute-0 sshd-session[419546]: Connection closed by 192.168.122.30 port 52730
Dec 13 09:36:12 compute-0 sshd-session[419543]: pam_unix(sshd:session): session closed for user zuul
Dec 13 09:36:12 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Dec 13 09:36:12 compute-0 systemd-logind[787]: Session 56 logged out. Waiting for processes to exit.
Dec 13 09:36:12 compute-0 systemd-logind[787]: Removed session 56.
Dec 13 09:36:12 compute-0 sshd-session[419902]: Connection closed by 192.168.122.30 port 40156
Dec 13 09:36:12 compute-0 sshd-session[419899]: pam_unix(sshd:session): session closed for user zuul
Dec 13 09:36:12 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Dec 13 09:36:12 compute-0 systemd[1]: session-57.scope: Consumed 1.395s CPU time.
Dec 13 09:36:12 compute-0 systemd-logind[787]: Session 57 logged out. Waiting for processes to exit.
Dec 13 09:36:12 compute-0 systemd-logind[787]: Removed session 57.
Dec 13 09:36:12 compute-0 ceph-mon[76537]: pgmap v4063: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:13 compute-0 sshd-session[420270]: Connection closed by 192.168.122.30 port 46334
Dec 13 09:36:13 compute-0 sshd-session[420267]: pam_unix(sshd:session): session closed for user zuul
Dec 13 09:36:13 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Dec 13 09:36:13 compute-0 systemd-logind[787]: Session 58 logged out. Waiting for processes to exit.
Dec 13 09:36:13 compute-0 systemd-logind[787]: Removed session 58.
Dec 13 09:36:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4064: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:36:14 compute-0 ceph-mon[76537]: pgmap v4064: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:36:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/770163602' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:36:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:36:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/770163602' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:36:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/770163602' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:36:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/770163602' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:36:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4065: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:15 compute-0 nova_compute[248510]: 2025-12-13 09:36:15.711 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:16 compute-0 ceph-mon[76537]: pgmap v4065: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:16 compute-0 nova_compute[248510]: 2025-12-13 09:36:16.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4066: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:18 compute-0 ceph-mon[76537]: pgmap v4066: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:36:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4067: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:19 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:36:19.685 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 09:36:19 compute-0 nova_compute[248510]: 2025-12-13 09:36:19.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:36:19 compute-0 nova_compute[248510]: 2025-12-13 09:36:19.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 09:36:20 compute-0 ceph-mon[76537]: pgmap v4067: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:20 compute-0 nova_compute[248510]: 2025-12-13 09:36:20.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4068: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:21 compute-0 nova_compute[248510]: 2025-12-13 09:36:21.685 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:21 compute-0 ceph-mon[76537]: pgmap v4068: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:36:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:36:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4069: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:36:25 compute-0 ceph-mon[76537]: pgmap v4069: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4070: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:25 compute-0 nova_compute[248510]: 2025-12-13 09:36:25.716 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:25 compute-0 nova_compute[248510]: 2025-12-13 09:36:25.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:36:26 compute-0 ceph-mon[76537]: pgmap v4070: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:26 compute-0 nova_compute[248510]: 2025-12-13 09:36:26.688 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4071: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:27 compute-0 ceph-mon[76537]: pgmap v4071: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:36:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4072: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:29 compute-0 ceph-mon[76537]: pgmap v4072: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:30 compute-0 nova_compute[248510]: 2025-12-13 09:36:30.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4073: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:31 compute-0 nova_compute[248510]: 2025-12-13 09:36:31.691 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4074: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:36:33 compute-0 ceph-mon[76537]: pgmap v4073: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:35 compute-0 ceph-mon[76537]: pgmap v4074: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4075: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:35 compute-0 nova_compute[248510]: 2025-12-13 09:36:35.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:36 compute-0 ceph-mon[76537]: pgmap v4075: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:36 compute-0 nova_compute[248510]: 2025-12-13 09:36:36.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4076: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:38 compute-0 ceph-mon[76537]: pgmap v4076: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:36:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4077: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:39 compute-0 ceph-mon[76537]: pgmap v4077: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:39 compute-0 nova_compute[248510]: 2025-12-13 09:36:39.788 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:36:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:36:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:36:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:36:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:36:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:36:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:36:40 compute-0 nova_compute[248510]: 2025-12-13 09:36:40.721 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:40 compute-0 podman[420472]: 2025-12-13 09:36:40.995087136 +0000 UTC m=+0.072549472 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 13 09:36:41 compute-0 podman[420471]: 2025-12-13 09:36:41.001814566 +0000 UTC m=+0.084304789 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 13 09:36:41 compute-0 podman[420470]: 2025-12-13 09:36:41.057538155 +0000 UTC m=+0.135601016 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:36:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4078: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:41 compute-0 nova_compute[248510]: 2025-12-13 09:36:41.696 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:41 compute-0 ceph-mon[76537]: pgmap v4078: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4079: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:36:44 compute-0 ceph-mon[76537]: pgmap v4079: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:44 compute-0 nova_compute[248510]: 2025-12-13 09:36:44.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:36:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4080: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:45 compute-0 nova_compute[248510]: 2025-12-13 09:36:45.761 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:45 compute-0 ceph-mon[76537]: pgmap v4080: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:46 compute-0 nova_compute[248510]: 2025-12-13 09:36:46.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4081: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:47 compute-0 sudo[420534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:36:47 compute-0 sudo[420534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:36:47 compute-0 sudo[420534]: pam_unix(sudo:session): session closed for user root
Dec 13 09:36:47 compute-0 ceph-mon[76537]: pgmap v4081: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:47 compute-0 sudo[420559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:36:47 compute-0 sudo[420559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:36:48 compute-0 sudo[420559]: pam_unix(sudo:session): session closed for user root
Dec 13 09:36:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:36:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:36:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:36:48 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:36:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:36:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:36:49 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:36:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:36:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:36:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:36:49 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:36:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:36:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:36:49 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:36:49 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:36:49 compute-0 sudo[420615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:36:49 compute-0 sudo[420615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:36:49 compute-0 sudo[420615]: pam_unix(sudo:session): session closed for user root
Dec 13 09:36:49 compute-0 sudo[420640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:36:49 compute-0 sudo[420640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:36:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4082: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:49 compute-0 podman[420677]: 2025-12-13 09:36:49.668759539 +0000 UTC m=+0.022110777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:36:50 compute-0 podman[420677]: 2025-12-13 09:36:50.164868778 +0000 UTC m=+0.518220016 container create 5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 09:36:50 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:36:50 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:36:50 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:36:50 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:36:50 compute-0 ceph-mon[76537]: pgmap v4082: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:50 compute-0 systemd[1]: Started libpod-conmon-5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63.scope.
Dec 13 09:36:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:36:50 compute-0 nova_compute[248510]: 2025-12-13 09:36:50.763 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:50 compute-0 nova_compute[248510]: 2025-12-13 09:36:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:36:50 compute-0 nova_compute[248510]: 2025-12-13 09:36:50.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:36:50 compute-0 nova_compute[248510]: 2025-12-13 09:36:50.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:36:50 compute-0 nova_compute[248510]: 2025-12-13 09:36:50.793 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:36:51 compute-0 podman[420677]: 2025-12-13 09:36:51.049288219 +0000 UTC m=+1.402639447 container init 5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hamilton, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 09:36:51 compute-0 podman[420677]: 2025-12-13 09:36:51.057269779 +0000 UTC m=+1.410620997 container start 5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 09:36:51 compute-0 xenodochial_hamilton[420693]: 167 167
Dec 13 09:36:51 compute-0 systemd[1]: libpod-5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63.scope: Deactivated successfully.
Dec 13 09:36:51 compute-0 podman[420677]: 2025-12-13 09:36:51.291593174 +0000 UTC m=+1.644944452 container attach 5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hamilton, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:36:51 compute-0 podman[420677]: 2025-12-13 09:36:51.293115762 +0000 UTC m=+1.646466980 container died 5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:36:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4083: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:51 compute-0 nova_compute[248510]: 2025-12-13 09:36:51.700 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:51 compute-0 nova_compute[248510]: 2025-12-13 09:36:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:36:51 compute-0 nova_compute[248510]: 2025-12-13 09:36:51.815 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:36:51 compute-0 nova_compute[248510]: 2025-12-13 09:36:51.816 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:36:51 compute-0 nova_compute[248510]: 2025-12-13 09:36:51.817 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:36:51 compute-0 nova_compute[248510]: 2025-12-13 09:36:51.817 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:36:51 compute-0 nova_compute[248510]: 2025-12-13 09:36:51.818 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:36:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ae9d182c58776983a0943aef46782920f653b5ec12d1addd446f51b8bb3878b-merged.mount: Deactivated successfully.
Dec 13 09:36:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:36:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840352053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:36:52 compute-0 nova_compute[248510]: 2025-12-13 09:36:52.507 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:36:52 compute-0 nova_compute[248510]: 2025-12-13 09:36:52.678 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:36:52 compute-0 nova_compute[248510]: 2025-12-13 09:36:52.680 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3459MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:36:52 compute-0 nova_compute[248510]: 2025-12-13 09:36:52.680 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:36:52 compute-0 nova_compute[248510]: 2025-12-13 09:36:52.680 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:36:53 compute-0 ceph-mon[76537]: pgmap v4083: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4084: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:36:54 compute-0 podman[420677]: 2025-12-13 09:36:54.468743946 +0000 UTC m=+4.822095154 container remove 5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:36:54 compute-0 systemd[1]: libpod-conmon-5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63.scope: Deactivated successfully.
Dec 13 09:36:54 compute-0 nova_compute[248510]: 2025-12-13 09:36:54.570 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:36:54 compute-0 nova_compute[248510]: 2025-12-13 09:36:54.571 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:36:54 compute-0 nova_compute[248510]: 2025-12-13 09:36:54.600 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:36:54 compute-0 podman[420740]: 2025-12-13 09:36:54.646807098 +0000 UTC m=+0.034049426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:36:54 compute-0 podman[420740]: 2025-12-13 09:36:54.871809449 +0000 UTC m=+0.259051807 container create 00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 09:36:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:36:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3246048544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:36:55 compute-0 nova_compute[248510]: 2025-12-13 09:36:55.141 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:36:55 compute-0 nova_compute[248510]: 2025-12-13 09:36:55.147 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:36:55 compute-0 nova_compute[248510]: 2025-12-13 09:36:55.191 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:36:55 compute-0 nova_compute[248510]: 2025-12-13 09:36:55.193 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:36:55 compute-0 nova_compute[248510]: 2025-12-13 09:36:55.193 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:36:55 compute-0 nova_compute[248510]: 2025-12-13 09:36:55.193 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:36:55 compute-0 nova_compute[248510]: 2025-12-13 09:36:55.194 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 09:36:55 compute-0 nova_compute[248510]: 2025-12-13 09:36:55.223 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 09:36:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:36:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:36:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:36:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:36:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:36:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:36:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/840352053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:36:55 compute-0 ceph-mon[76537]: pgmap v4084: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:55 compute-0 systemd[1]: Started libpod-conmon-00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f.scope.
Dec 13 09:36:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd630312589d63a904072563c83ba91da0b9184f94f498291a3764f1b0ec04b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd630312589d63a904072563c83ba91da0b9184f94f498291a3764f1b0ec04b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd630312589d63a904072563c83ba91da0b9184f94f498291a3764f1b0ec04b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd630312589d63a904072563c83ba91da0b9184f94f498291a3764f1b0ec04b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd630312589d63a904072563c83ba91da0b9184f94f498291a3764f1b0ec04b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:36:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4085: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:55 compute-0 nova_compute[248510]: 2025-12-13 09:36:55.765 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:55 compute-0 podman[420740]: 2025-12-13 09:36:55.769932545 +0000 UTC m=+1.157174903 container init 00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:36:55 compute-0 podman[420740]: 2025-12-13 09:36:55.780750806 +0000 UTC m=+1.167993124 container start 00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 09:36:55 compute-0 podman[420740]: 2025-12-13 09:36:55.856046367 +0000 UTC m=+1.243288675 container attach 00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 09:36:56 compute-0 nova_compute[248510]: 2025-12-13 09:36:56.223 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:36:56 compute-0 nova_compute[248510]: 2025-12-13 09:36:56.224 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:36:56 compute-0 quizzical_wright[420778]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:36:56 compute-0 quizzical_wright[420778]: --> All data devices are unavailable
Dec 13 09:36:56 compute-0 systemd[1]: libpod-00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f.scope: Deactivated successfully.
Dec 13 09:36:56 compute-0 podman[420740]: 2025-12-13 09:36:56.308969472 +0000 UTC m=+1.696211800 container died 00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 09:36:56 compute-0 nova_compute[248510]: 2025-12-13 09:36:56.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:36:56 compute-0 nova_compute[248510]: 2025-12-13 09:36:56.775 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:36:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3246048544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:36:57 compute-0 ceph-mon[76537]: pgmap v4085: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4086: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-edd630312589d63a904072563c83ba91da0b9184f94f498291a3764f1b0ec04b-merged.mount: Deactivated successfully.
Dec 13 09:36:58 compute-0 nova_compute[248510]: 2025-12-13 09:36:58.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:36:58 compute-0 nova_compute[248510]: 2025-12-13 09:36:58.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:36:59 compute-0 ceph-mon[76537]: pgmap v4086: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:36:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4087: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:00 compute-0 podman[420740]: 2025-12-13 09:37:00.517426287 +0000 UTC m=+5.904668605 container remove 00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:37:00 compute-0 sudo[420640]: pam_unix(sudo:session): session closed for user root
Dec 13 09:37:00 compute-0 systemd[1]: libpod-conmon-00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f.scope: Deactivated successfully.
Dec 13 09:37:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:37:00 compute-0 sudo[420810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:37:00 compute-0 sudo[420810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:37:00 compute-0 sudo[420810]: pam_unix(sudo:session): session closed for user root
Dec 13 09:37:00 compute-0 sudo[420835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:37:00 compute-0 sudo[420835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:37:00 compute-0 nova_compute[248510]: 2025-12-13 09:37:00.767 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:01 compute-0 ceph-mon[76537]: pgmap v4087: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:01 compute-0 podman[420873]: 2025-12-13 09:37:01.059255814 +0000 UTC m=+0.050588011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:37:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4088: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:01 compute-0 nova_compute[248510]: 2025-12-13 09:37:01.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:01 compute-0 podman[420873]: 2025-12-13 09:37:01.721285729 +0000 UTC m=+0.712617936 container create 1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 09:37:02 compute-0 systemd[1]: Started libpod-conmon-1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0.scope.
Dec 13 09:37:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:37:02 compute-0 podman[420873]: 2025-12-13 09:37:02.700706757 +0000 UTC m=+1.692038944 container init 1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_euler, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:37:02 compute-0 podman[420873]: 2025-12-13 09:37:02.714855082 +0000 UTC m=+1.706187289 container start 1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 09:37:02 compute-0 modest_euler[420889]: 167 167
Dec 13 09:37:02 compute-0 systemd[1]: libpod-1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0.scope: Deactivated successfully.
Dec 13 09:37:02 compute-0 ceph-mon[76537]: pgmap v4088: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:03 compute-0 podman[420873]: 2025-12-13 09:37:03.139717392 +0000 UTC m=+2.131049609 container attach 1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_euler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 09:37:03 compute-0 podman[420873]: 2025-12-13 09:37:03.141483997 +0000 UTC m=+2.132816184 container died 1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_euler, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 09:37:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4089: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-59b11ec34ec2eb0a043c10a6c1ae272e0e33499e46977869a251437bf28e51a0-merged.mount: Deactivated successfully.
Dec 13 09:37:04 compute-0 ceph-mon[76537]: pgmap v4089: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:05 compute-0 podman[420873]: 2025-12-13 09:37:05.489657388 +0000 UTC m=+4.480989575 container remove 1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_euler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:37:05 compute-0 systemd[1]: libpod-conmon-1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0.scope: Deactivated successfully.
Dec 13 09:37:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4090: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:37:05 compute-0 podman[420913]: 2025-12-13 09:37:05.643896692 +0000 UTC m=+0.024957758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:37:05 compute-0 nova_compute[248510]: 2025-12-13 09:37:05.770 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:06 compute-0 podman[420913]: 2025-12-13 09:37:06.348062057 +0000 UTC m=+0.729123133 container create 4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 09:37:06 compute-0 systemd[1]: Started libpod-conmon-4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8.scope.
Dec 13 09:37:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3efa91cf9d364bc306fd53b35b000a5f1327e01f23d37f6954e84d19f2bd0156/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3efa91cf9d364bc306fd53b35b000a5f1327e01f23d37f6954e84d19f2bd0156/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3efa91cf9d364bc306fd53b35b000a5f1327e01f23d37f6954e84d19f2bd0156/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3efa91cf9d364bc306fd53b35b000a5f1327e01f23d37f6954e84d19f2bd0156/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:37:06 compute-0 nova_compute[248510]: 2025-12-13 09:37:06.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:06 compute-0 nova_compute[248510]: 2025-12-13 09:37:06.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:37:06 compute-0 ceph-mon[76537]: pgmap v4090: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:06 compute-0 podman[420913]: 2025-12-13 09:37:06.935871589 +0000 UTC m=+1.316932675 container init 4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 09:37:06 compute-0 podman[420913]: 2025-12-13 09:37:06.945664885 +0000 UTC m=+1.326725941 container start 4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:37:07 compute-0 determined_kare[420930]: {
Dec 13 09:37:07 compute-0 determined_kare[420930]:     "0": [
Dec 13 09:37:07 compute-0 determined_kare[420930]:         {
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "devices": [
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "/dev/loop3"
Dec 13 09:37:07 compute-0 determined_kare[420930]:             ],
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_name": "ceph_lv0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_size": "21470642176",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "name": "ceph_lv0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "tags": {
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.cluster_name": "ceph",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.crush_device_class": "",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.encrypted": "0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.objectstore": "bluestore",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.osd_id": "0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.type": "block",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.vdo": "0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.with_tpm": "0"
Dec 13 09:37:07 compute-0 determined_kare[420930]:             },
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "type": "block",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "vg_name": "ceph_vg0"
Dec 13 09:37:07 compute-0 determined_kare[420930]:         }
Dec 13 09:37:07 compute-0 determined_kare[420930]:     ],
Dec 13 09:37:07 compute-0 determined_kare[420930]:     "1": [
Dec 13 09:37:07 compute-0 determined_kare[420930]:         {
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "devices": [
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "/dev/loop4"
Dec 13 09:37:07 compute-0 determined_kare[420930]:             ],
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_name": "ceph_lv1",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_size": "21470642176",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "name": "ceph_lv1",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "tags": {
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.cluster_name": "ceph",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.crush_device_class": "",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.encrypted": "0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.objectstore": "bluestore",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.osd_id": "1",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.type": "block",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.vdo": "0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.with_tpm": "0"
Dec 13 09:37:07 compute-0 determined_kare[420930]:             },
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "type": "block",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "vg_name": "ceph_vg1"
Dec 13 09:37:07 compute-0 determined_kare[420930]:         }
Dec 13 09:37:07 compute-0 determined_kare[420930]:     ],
Dec 13 09:37:07 compute-0 determined_kare[420930]:     "2": [
Dec 13 09:37:07 compute-0 determined_kare[420930]:         {
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "devices": [
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "/dev/loop5"
Dec 13 09:37:07 compute-0 determined_kare[420930]:             ],
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_name": "ceph_lv2",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_size": "21470642176",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "name": "ceph_lv2",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "tags": {
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.cluster_name": "ceph",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.crush_device_class": "",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.encrypted": "0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.objectstore": "bluestore",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.osd_id": "2",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.type": "block",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.vdo": "0",
Dec 13 09:37:07 compute-0 determined_kare[420930]:                 "ceph.with_tpm": "0"
Dec 13 09:37:07 compute-0 determined_kare[420930]:             },
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "type": "block",
Dec 13 09:37:07 compute-0 determined_kare[420930]:             "vg_name": "ceph_vg2"
Dec 13 09:37:07 compute-0 determined_kare[420930]:         }
Dec 13 09:37:07 compute-0 determined_kare[420930]:     ]
Dec 13 09:37:07 compute-0 determined_kare[420930]: }
Dec 13 09:37:07 compute-0 systemd[1]: libpod-4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8.scope: Deactivated successfully.
Dec 13 09:37:07 compute-0 podman[420913]: 2025-12-13 09:37:07.453345835 +0000 UTC m=+1.834406931 container attach 4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kare, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 09:37:07 compute-0 podman[420913]: 2025-12-13 09:37:07.45473325 +0000 UTC m=+1.835794316 container died 4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kare, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:37:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4091: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:08 compute-0 ceph-mon[76537]: pgmap v4091: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-3efa91cf9d364bc306fd53b35b000a5f1327e01f23d37f6954e84d19f2bd0156-merged.mount: Deactivated successfully.
Dec 13 09:37:09 compute-0 podman[420913]: 2025-12-13 09:37:09.058760193 +0000 UTC m=+3.439821259 container remove 4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 09:37:09 compute-0 sudo[420835]: pam_unix(sudo:session): session closed for user root
Dec 13 09:37:09 compute-0 systemd[1]: libpod-conmon-4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8.scope: Deactivated successfully.
Dec 13 09:37:09 compute-0 sudo[420952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:37:09 compute-0 sudo[420952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:37:09 compute-0 sudo[420952]: pam_unix(sudo:session): session closed for user root
Dec 13 09:37:09 compute-0 sudo[420977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:37:09 compute-0 sudo[420977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:37:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:37:09
Dec 13 09:37:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:37:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:37:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'volumes', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta']
Dec 13 09:37:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:37:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4092: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:09 compute-0 podman[421013]: 2025-12-13 09:37:09.549908017 +0000 UTC m=+0.023066930 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:37:09 compute-0 podman[421013]: 2025-12-13 09:37:09.696751365 +0000 UTC m=+0.169910248 container create 81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:37:10 compute-0 systemd[1]: Started libpod-conmon-81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7.scope.
Dec 13 09:37:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:37:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:37:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:37:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:37:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:37:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:37:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:37:10 compute-0 ceph-mon[76537]: pgmap v4092: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:10 compute-0 podman[421013]: 2025-12-13 09:37:10.241133127 +0000 UTC m=+0.714292060 container init 81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:37:10 compute-0 podman[421013]: 2025-12-13 09:37:10.252727308 +0000 UTC m=+0.725886231 container start 81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_noyce, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 09:37:10 compute-0 focused_noyce[421030]: 167 167
Dec 13 09:37:10 compute-0 systemd[1]: libpod-81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7.scope: Deactivated successfully.
Dec 13 09:37:10 compute-0 podman[421013]: 2025-12-13 09:37:10.734917828 +0000 UTC m=+1.208076731 container attach 81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_noyce, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:37:10 compute-0 podman[421013]: 2025-12-13 09:37:10.737180695 +0000 UTC m=+1.210339668 container died 81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 09:37:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:37:10 compute-0 nova_compute[248510]: 2025-12-13 09:37:10.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-40660fe34a546203efd4000d6d2cfdf16e469071e2dfe88ba0c785fa1de35cbe-merged.mount: Deactivated successfully.
Dec 13 09:37:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:37:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:37:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:37:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:37:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:37:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:37:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:37:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:37:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:37:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:37:11 compute-0 podman[421013]: 2025-12-13 09:37:11.305724514 +0000 UTC m=+1.778883427 container remove 81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 09:37:11 compute-0 systemd[1]: libpod-conmon-81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7.scope: Deactivated successfully.
Dec 13 09:37:11 compute-0 podman[421051]: 2025-12-13 09:37:11.405014257 +0000 UTC m=+0.263912259 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 13 09:37:11 compute-0 podman[421050]: 2025-12-13 09:37:11.411470159 +0000 UTC m=+0.275981692 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible)
Dec 13 09:37:11 compute-0 podman[421049]: 2025-12-13 09:37:11.443406201 +0000 UTC m=+0.308720814 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:37:11 compute-0 podman[421115]: 2025-12-13 09:37:11.527292268 +0000 UTC m=+0.030270661 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:37:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4093: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:11 compute-0 nova_compute[248510]: 2025-12-13 09:37:11.712 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:12 compute-0 podman[421115]: 2025-12-13 09:37:12.143498772 +0000 UTC m=+0.646477135 container create 00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_banach, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:37:12 compute-0 ceph-mon[76537]: pgmap v4093: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:12 compute-0 systemd[1]: Started libpod-conmon-00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d.scope.
Dec 13 09:37:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee970df4c9b30006f1ee9cfa83ed6e578119e09ed35a9fe197b1559b3d79f659/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee970df4c9b30006f1ee9cfa83ed6e578119e09ed35a9fe197b1559b3d79f659/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee970df4c9b30006f1ee9cfa83ed6e578119e09ed35a9fe197b1559b3d79f659/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee970df4c9b30006f1ee9cfa83ed6e578119e09ed35a9fe197b1559b3d79f659/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:37:13 compute-0 podman[421115]: 2025-12-13 09:37:13.118317954 +0000 UTC m=+1.621296357 container init 00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 09:37:13 compute-0 podman[421115]: 2025-12-13 09:37:13.132027789 +0000 UTC m=+1.635006182 container start 00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_banach, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 09:37:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4094: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:13 compute-0 podman[421115]: 2025-12-13 09:37:13.891432061 +0000 UTC m=+2.394410504 container attach 00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:37:13 compute-0 lvm[421210]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:37:13 compute-0 lvm[421211]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:37:13 compute-0 lvm[421210]: VG ceph_vg0 finished
Dec 13 09:37:13 compute-0 lvm[421211]: VG ceph_vg1 finished
Dec 13 09:37:13 compute-0 lvm[421213]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:37:13 compute-0 lvm[421213]: VG ceph_vg2 finished
Dec 13 09:37:14 compute-0 lvm[421214]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:37:14 compute-0 lvm[421214]: VG ceph_vg2 finished
Dec 13 09:37:14 compute-0 jovial_banach[421132]: {}
Dec 13 09:37:14 compute-0 lvm[421217]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:37:14 compute-0 lvm[421217]: VG ceph_vg2 finished
Dec 13 09:37:14 compute-0 systemd[1]: libpod-00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d.scope: Deactivated successfully.
Dec 13 09:37:14 compute-0 podman[421115]: 2025-12-13 09:37:14.110987675 +0000 UTC m=+2.613966028 container died 00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_banach, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 09:37:14 compute-0 systemd[1]: libpod-00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d.scope: Consumed 1.618s CPU time.
Dec 13 09:37:15 compute-0 ceph-mon[76537]: pgmap v4094: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:37:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1998377532' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:37:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:37:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1998377532' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:37:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee970df4c9b30006f1ee9cfa83ed6e578119e09ed35a9fe197b1559b3d79f659-merged.mount: Deactivated successfully.
Dec 13 09:37:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4095: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:15 compute-0 podman[421115]: 2025-12-13 09:37:15.715264624 +0000 UTC m=+4.218242987 container remove 00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_banach, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec 13 09:37:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:37:15 compute-0 nova_compute[248510]: 2025-12-13 09:37:15.773 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:15 compute-0 sudo[420977]: pam_unix(sudo:session): session closed for user root
Dec 13 09:37:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:37:15 compute-0 systemd[1]: libpod-conmon-00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d.scope: Deactivated successfully.
Dec 13 09:37:15 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:37:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:37:15 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:37:16 compute-0 sudo[421229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:37:16 compute-0 sudo[421229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:37:16 compute-0 sudo[421229]: pam_unix(sudo:session): session closed for user root
Dec 13 09:37:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1998377532' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:37:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1998377532' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:37:16 compute-0 ceph-mon[76537]: pgmap v4095: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:16 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:37:16 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:37:16 compute-0 nova_compute[248510]: 2025-12-13 09:37:16.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4096: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:18 compute-0 ceph-mon[76537]: pgmap v4096: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4097: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:20 compute-0 ceph-mon[76537]: pgmap v4097: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:37:20 compute-0 nova_compute[248510]: 2025-12-13 09:37:20.775 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4098: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:21 compute-0 nova_compute[248510]: 2025-12-13 09:37:21.781 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:37:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:37:21 compute-0 ceph-mon[76537]: pgmap v4098: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4099: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:23 compute-0 ceph-mon[76537]: pgmap v4099: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:24 compute-0 sshd-session[421254]: Invalid user tron from 80.94.92.165 port 47330
Dec 13 09:37:25 compute-0 sshd-session[421254]: Connection closed by invalid user tron 80.94.92.165 port 47330 [preauth]
Dec 13 09:37:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4100: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:25 compute-0 ceph-mon[76537]: pgmap v4100: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:37:25 compute-0 nova_compute[248510]: 2025-12-13 09:37:25.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:26 compute-0 nova_compute[248510]: 2025-12-13 09:37:26.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4101: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:27 compute-0 ceph-mon[76537]: pgmap v4101: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4102: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:29 compute-0 ceph-mon[76537]: pgmap v4102: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:37:30 compute-0 nova_compute[248510]: 2025-12-13 09:37:30.779 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4103: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:31 compute-0 nova_compute[248510]: 2025-12-13 09:37:31.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:31 compute-0 ceph-mon[76537]: pgmap v4103: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4104: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:33 compute-0 ceph-mon[76537]: pgmap v4104: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4105: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:35 compute-0 nova_compute[248510]: 2025-12-13 09:37:35.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:37:35 compute-0 nova_compute[248510]: 2025-12-13 09:37:35.781 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:37:35 compute-0 ceph-mon[76537]: pgmap v4105: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:36 compute-0 nova_compute[248510]: 2025-12-13 09:37:36.789 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4106: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.701333) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618657701391, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1288, "num_deletes": 251, "total_data_size": 2169617, "memory_usage": 2204736, "flush_reason": "Manual Compaction"}
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618657719331, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 2120720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80854, "largest_seqno": 82141, "table_properties": {"data_size": 2114457, "index_size": 3525, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12736, "raw_average_key_size": 19, "raw_value_size": 2102129, "raw_average_value_size": 3269, "num_data_blocks": 158, "num_entries": 643, "num_filter_entries": 643, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618518, "oldest_key_time": 1765618518, "file_creation_time": 1765618657, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 18093 microseconds, and 6566 cpu microseconds.
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.719419) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 2120720 bytes OK
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.719462) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.722746) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.722781) EVENT_LOG_v1 {"time_micros": 1765618657722770, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.722814) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 2163830, prev total WAL file size 2163830, number of live WAL files 2.
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.724169) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(2071KB)], [194(10MB)]
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618657724244, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 13397828, "oldest_snapshot_seqno": -1}
Dec 13 09:37:37 compute-0 ceph-mon[76537]: pgmap v4106: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 9736 keys, 11547202 bytes, temperature: kUnknown
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618657838444, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 11547202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11486370, "index_size": 35375, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24389, "raw_key_size": 257272, "raw_average_key_size": 26, "raw_value_size": 11316988, "raw_average_value_size": 1162, "num_data_blocks": 1355, "num_entries": 9736, "num_filter_entries": 9736, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618657, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.838888) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 11547202 bytes
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.841473) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.2 rd, 101.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.8 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 10250, records dropped: 514 output_compression: NoCompression
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.841497) EVENT_LOG_v1 {"time_micros": 1765618657841485, "job": 122, "event": "compaction_finished", "compaction_time_micros": 114333, "compaction_time_cpu_micros": 38319, "output_level": 6, "num_output_files": 1, "total_output_size": 11547202, "num_input_records": 10250, "num_output_records": 9736, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618657842032, "job": 122, "event": "table_file_deletion", "file_number": 196}
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618657844462, "job": 122, "event": "table_file_deletion", "file_number": 194}
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.724044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.844545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.844550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.844552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.844554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:37:37 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.844555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:37:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4107: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:39 compute-0 ceph-mon[76537]: pgmap v4107: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:37:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:37:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:37:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:37:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:37:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:37:40 compute-0 nova_compute[248510]: 2025-12-13 09:37:40.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:37:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4108: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:41 compute-0 nova_compute[248510]: 2025-12-13 09:37:41.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:37:41 compute-0 nova_compute[248510]: 2025-12-13 09:37:41.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:41 compute-0 podman[421258]: 2025-12-13 09:37:41.994616295 +0000 UTC m=+0.074215555 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 09:37:42 compute-0 podman[421259]: 2025-12-13 09:37:42.017745716 +0000 UTC m=+0.085304194 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 09:37:42 compute-0 podman[421257]: 2025-12-13 09:37:42.053288108 +0000 UTC m=+0.135828982 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 09:37:42 compute-0 ceph-mon[76537]: pgmap v4108: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4109: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:44 compute-0 ceph-mon[76537]: pgmap v4109: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4110: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:45 compute-0 nova_compute[248510]: 2025-12-13 09:37:45.787 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:37:45 compute-0 ceph-mon[76537]: pgmap v4110: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:46 compute-0 nova_compute[248510]: 2025-12-13 09:37:46.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:37:46 compute-0 nova_compute[248510]: 2025-12-13 09:37:46.796 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4111: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:48 compute-0 ceph-mon[76537]: pgmap v4111: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4112: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:50 compute-0 ceph-mon[76537]: pgmap v4112: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:50 compute-0 nova_compute[248510]: 2025-12-13 09:37:50.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:37:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4113: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:51 compute-0 nova_compute[248510]: 2025-12-13 09:37:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:37:51 compute-0 nova_compute[248510]: 2025-12-13 09:37:51.798 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:51 compute-0 nova_compute[248510]: 2025-12-13 09:37:51.806 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:37:51 compute-0 nova_compute[248510]: 2025-12-13 09:37:51.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:37:51 compute-0 nova_compute[248510]: 2025-12-13 09:37:51.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:37:51 compute-0 nova_compute[248510]: 2025-12-13 09:37:51.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:37:51 compute-0 nova_compute[248510]: 2025-12-13 09:37:51.808 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:37:52 compute-0 ceph-mon[76537]: pgmap v4113: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:37:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1677801049' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:37:52 compute-0 nova_compute[248510]: 2025-12-13 09:37:52.407 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:37:52 compute-0 nova_compute[248510]: 2025-12-13 09:37:52.607 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:37:52 compute-0 nova_compute[248510]: 2025-12-13 09:37:52.609 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3509MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:37:52 compute-0 nova_compute[248510]: 2025-12-13 09:37:52.609 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:37:52 compute-0 nova_compute[248510]: 2025-12-13 09:37:52.609 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:37:52 compute-0 nova_compute[248510]: 2025-12-13 09:37:52.876 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:37:52 compute-0 nova_compute[248510]: 2025-12-13 09:37:52.876 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:37:52 compute-0 nova_compute[248510]: 2025-12-13 09:37:52.908 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 09:37:52 compute-0 nova_compute[248510]: 2025-12-13 09:37:52.943 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 09:37:52 compute-0 nova_compute[248510]: 2025-12-13 09:37:52.944 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 09:37:52 compute-0 nova_compute[248510]: 2025-12-13 09:37:52.980 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 09:37:53 compute-0 nova_compute[248510]: 2025-12-13 09:37:53.027 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 09:37:53 compute-0 nova_compute[248510]: 2025-12-13 09:37:53.049 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:37:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4114: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:37:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/699968712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:37:53 compute-0 nova_compute[248510]: 2025-12-13 09:37:53.759 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:37:53 compute-0 nova_compute[248510]: 2025-12-13 09:37:53.765 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:37:53 compute-0 nova_compute[248510]: 2025-12-13 09:37:53.800 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:37:53 compute-0 nova_compute[248510]: 2025-12-13 09:37:53.802 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:37:53 compute-0 nova_compute[248510]: 2025-12-13 09:37:53.802 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:37:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1677801049' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:37:54 compute-0 nova_compute[248510]: 2025-12-13 09:37:54.803 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:37:54 compute-0 nova_compute[248510]: 2025-12-13 09:37:54.804 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:37:54 compute-0 nova_compute[248510]: 2025-12-13 09:37:54.804 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:37:54 compute-0 nova_compute[248510]: 2025-12-13 09:37:54.872 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:37:54 compute-0 nova_compute[248510]: 2025-12-13 09:37:54.873 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:37:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:37:55.473 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:37:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:37:55.473 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:37:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:37:55.473 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:37:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4115: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:55 compute-0 nova_compute[248510]: 2025-12-13 09:37:55.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:37:55 compute-0 nova_compute[248510]: 2025-12-13 09:37:55.810 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:37:56 compute-0 ceph-mon[76537]: pgmap v4114: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/699968712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:37:56 compute-0 nova_compute[248510]: 2025-12-13 09:37:56.801 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:37:57 compute-0 ceph-mon[76537]: pgmap v4115: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4116: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:58 compute-0 nova_compute[248510]: 2025-12-13 09:37:58.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:37:58 compute-0 nova_compute[248510]: 2025-12-13 09:37:58.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:37:58 compute-0 nova_compute[248510]: 2025-12-13 09:37:58.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:37:58 compute-0 ceph-mon[76537]: pgmap v4116: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:37:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4117: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:00 compute-0 nova_compute[248510]: 2025-12-13 09:38:00.812 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:38:01 compute-0 ceph-mon[76537]: pgmap v4117: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4118: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:01 compute-0 nova_compute[248510]: 2025-12-13 09:38:01.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4119: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:03 compute-0 ceph-mon[76537]: pgmap v4118: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:04 compute-0 ceph-mon[76537]: pgmap v4119: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4120: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:05 compute-0 nova_compute[248510]: 2025-12-13 09:38:05.814 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:38:06 compute-0 ceph-mon[76537]: pgmap v4120: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:06 compute-0 nova_compute[248510]: 2025-12-13 09:38:06.807 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4121: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:07 compute-0 nova_compute[248510]: 2025-12-13 09:38:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:38:08 compute-0 ceph-mon[76537]: pgmap v4121: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:38:09
Dec 13 09:38:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:38:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:38:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'backups', '.mgr', 'volumes', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.control']
Dec 13 09:38:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:38:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4122: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:38:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:38:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:38:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:38:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:38:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:38:10 compute-0 nova_compute[248510]: 2025-12-13 09:38:10.817 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:38:11 compute-0 ceph-mon[76537]: pgmap v4122: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:38:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:38:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:38:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:38:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:38:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:38:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:38:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:38:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:38:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:38:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4123: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:11 compute-0 nova_compute[248510]: 2025-12-13 09:38:11.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:12 compute-0 podman[421365]: 2025-12-13 09:38:12.983293297 +0000 UTC m=+0.059376862 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 13 09:38:12 compute-0 podman[421364]: 2025-12-13 09:38:12.997108034 +0000 UTC m=+0.075534088 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 09:38:13 compute-0 ceph-mon[76537]: pgmap v4123: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:13 compute-0 podman[421363]: 2025-12-13 09:38:13.042043613 +0000 UTC m=+0.129942395 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller)
Dec 13 09:38:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4124: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:14 compute-0 ceph-mon[76537]: pgmap v4124: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4125: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:15 compute-0 nova_compute[248510]: 2025-12-13 09:38:15.819 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:16 compute-0 sudo[421427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:38:16 compute-0 sudo[421427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:38:16 compute-0 sudo[421427]: pam_unix(sudo:session): session closed for user root
Dec 13 09:38:16 compute-0 sudo[421452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 09:38:16 compute-0 sudo[421452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:38:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:38:16 compute-0 ceph-mon[76537]: pgmap v4125: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:16 compute-0 sudo[421452]: pam_unix(sudo:session): session closed for user root
Dec 13 09:38:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:38:16 compute-0 nova_compute[248510]: 2025-12-13 09:38:16.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:38:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:38:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4126: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:17 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:38:17 compute-0 sudo[421497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:38:17 compute-0 sudo[421497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:38:17 compute-0 sudo[421497]: pam_unix(sudo:session): session closed for user root
Dec 13 09:38:17 compute-0 sudo[421522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:38:17 compute-0 sudo[421522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:38:18 compute-0 sudo[421522]: pam_unix(sudo:session): session closed for user root
Dec 13 09:38:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:38:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:38:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:38:18 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:38:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:38:18 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:38:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:38:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:38:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:38:19 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:38:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:38:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:38:19 compute-0 sudo[421578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:38:19 compute-0 sudo[421578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:38:19 compute-0 sudo[421578]: pam_unix(sudo:session): session closed for user root
Dec 13 09:38:19 compute-0 sudo[421603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:38:19 compute-0 sudo[421603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:38:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:38:19 compute-0 ceph-mon[76537]: pgmap v4126: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:38:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:38:19 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:38:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4127: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:19 compute-0 podman[421639]: 2025-12-13 09:38:19.780974165 +0000 UTC m=+0.029045161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:38:20 compute-0 podman[421639]: 2025-12-13 09:38:20.152120704 +0000 UTC m=+0.400191700 container create b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:38:20 compute-0 systemd[1]: Started libpod-conmon-b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41.scope.
Dec 13 09:38:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:38:20 compute-0 nova_compute[248510]: 2025-12-13 09:38:20.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:21 compute-0 podman[421639]: 2025-12-13 09:38:21.101636261 +0000 UTC m=+1.349707247 container init b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:38:21 compute-0 podman[421639]: 2025-12-13 09:38:21.110361081 +0000 UTC m=+1.358432057 container start b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 09:38:21 compute-0 nice_noyce[421655]: 167 167
Dec 13 09:38:21 compute-0 systemd[1]: libpod-b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41.scope: Deactivated successfully.
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4128: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:38:21 compute-0 podman[421639]: 2025-12-13 09:38:21.702691076 +0000 UTC m=+1.950762162 container attach b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 09:38:21 compute-0 podman[421639]: 2025-12-13 09:38:21.703746193 +0000 UTC m=+1.951817209 container died b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 09:38:21 compute-0 nova_compute[248510]: 2025-12-13 09:38:21.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:21 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:38:21 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:38:21 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:38:21 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:38:21 compute-0 ceph-mon[76537]: pgmap v4127: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:38:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:38:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce5a59a6ad9ca0f7f4511989f9f842a1635c25a5c917bdc579940ced1048b717-merged.mount: Deactivated successfully.
Dec 13 09:38:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4129: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:23 compute-0 ceph-mon[76537]: pgmap v4128: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4130: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:25 compute-0 podman[421639]: 2025-12-13 09:38:25.759269173 +0000 UTC m=+6.007340159 container remove b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 09:38:25 compute-0 systemd[1]: libpod-conmon-b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41.scope: Deactivated successfully.
Dec 13 09:38:25 compute-0 nova_compute[248510]: 2025-12-13 09:38:25.824 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:26 compute-0 podman[421680]: 2025-12-13 09:38:25.913354463 +0000 UTC m=+0.024544427 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:38:26 compute-0 ceph-mon[76537]: pgmap v4129: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:26 compute-0 nova_compute[248510]: 2025-12-13 09:38:26.817 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4131: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:27 compute-0 podman[421680]: 2025-12-13 09:38:27.667399684 +0000 UTC m=+1.778589608 container create 5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:38:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:38:28 compute-0 ceph-mon[76537]: pgmap v4130: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:28 compute-0 systemd[1]: Started libpod-conmon-5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3.scope.
Dec 13 09:38:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e26ed56c2a18797fd2d42416f9f8f3a54edc542fa730f57d9eaf5078156df1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e26ed56c2a18797fd2d42416f9f8f3a54edc542fa730f57d9eaf5078156df1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e26ed56c2a18797fd2d42416f9f8f3a54edc542fa730f57d9eaf5078156df1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e26ed56c2a18797fd2d42416f9f8f3a54edc542fa730f57d9eaf5078156df1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e26ed56c2a18797fd2d42416f9f8f3a54edc542fa730f57d9eaf5078156df1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:28 compute-0 podman[421680]: 2025-12-13 09:38:28.361057545 +0000 UTC m=+2.472247489 container init 5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:38:28 compute-0 podman[421680]: 2025-12-13 09:38:28.376427211 +0000 UTC m=+2.487617135 container start 5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:38:28 compute-0 podman[421680]: 2025-12-13 09:38:28.717947108 +0000 UTC m=+2.829137132 container attach 5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 09:38:28 compute-0 stoic_yalow[421697]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:38:28 compute-0 stoic_yalow[421697]: --> All data devices are unavailable
Dec 13 09:38:28 compute-0 systemd[1]: libpod-5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3.scope: Deactivated successfully.
Dec 13 09:38:28 compute-0 podman[421680]: 2025-12-13 09:38:28.906500514 +0000 UTC m=+3.017690468 container died 5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 09:38:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2e26ed56c2a18797fd2d42416f9f8f3a54edc542fa730f57d9eaf5078156df1-merged.mount: Deactivated successfully.
Dec 13 09:38:29 compute-0 podman[421680]: 2025-12-13 09:38:29.397252579 +0000 UTC m=+3.508442543 container remove 5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:38:29 compute-0 systemd[1]: libpod-conmon-5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3.scope: Deactivated successfully.
Dec 13 09:38:29 compute-0 ceph-mon[76537]: pgmap v4131: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:29 compute-0 sudo[421603]: pam_unix(sudo:session): session closed for user root
Dec 13 09:38:29 compute-0 sudo[421731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:38:29 compute-0 sudo[421731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:38:29 compute-0 sudo[421731]: pam_unix(sudo:session): session closed for user root
Dec 13 09:38:29 compute-0 sudo[421756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:38:29 compute-0 sudo[421756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:38:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4132: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:29 compute-0 podman[421794]: 2025-12-13 09:38:29.863102489 +0000 UTC m=+0.022871236 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:38:29 compute-0 podman[421794]: 2025-12-13 09:38:29.993658347 +0000 UTC m=+0.153427074 container create 5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hypatia, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:38:30 compute-0 systemd[1]: Started libpod-conmon-5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912.scope.
Dec 13 09:38:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:38:30 compute-0 podman[421794]: 2025-12-13 09:38:30.463143878 +0000 UTC m=+0.622912625 container init 5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hypatia, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:38:30 compute-0 podman[421794]: 2025-12-13 09:38:30.474576265 +0000 UTC m=+0.634345002 container start 5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hypatia, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 09:38:30 compute-0 funny_hypatia[421810]: 167 167
Dec 13 09:38:30 compute-0 systemd[1]: libpod-5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912.scope: Deactivated successfully.
Dec 13 09:38:30 compute-0 conmon[421810]: conmon 5a17aed4bed951f115bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912.scope/container/memory.events
Dec 13 09:38:30 compute-0 nova_compute[248510]: 2025-12-13 09:38:30.825 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:30 compute-0 podman[421794]: 2025-12-13 09:38:30.969822162 +0000 UTC m=+1.129590979 container attach 5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hypatia, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:38:30 compute-0 podman[421794]: 2025-12-13 09:38:30.971763161 +0000 UTC m=+1.131531918 container died 5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hypatia, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 09:38:31 compute-0 ceph-mon[76537]: pgmap v4132: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4133: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee80ad20d2672d8503d677f91f38ddfc9a50fac2f0462784a3ad70e6275c9a79-merged.mount: Deactivated successfully.
Dec 13 09:38:31 compute-0 nova_compute[248510]: 2025-12-13 09:38:31.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:32 compute-0 podman[421794]: 2025-12-13 09:38:32.0559855 +0000 UTC m=+2.215754227 container remove 5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:38:32 compute-0 systemd[1]: libpod-conmon-5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912.scope: Deactivated successfully.
Dec 13 09:38:32 compute-0 podman[421834]: 2025-12-13 09:38:32.267292317 +0000 UTC m=+0.059907615 container create 720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 09:38:32 compute-0 podman[421834]: 2025-12-13 09:38:32.232614216 +0000 UTC m=+0.025229514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:38:32 compute-0 systemd[1]: Started libpod-conmon-720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7.scope.
Dec 13 09:38:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bdc50c036390b1cb4df7b6aca946c0a4be4965d76679b122f7a9512faa608cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bdc50c036390b1cb4df7b6aca946c0a4be4965d76679b122f7a9512faa608cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bdc50c036390b1cb4df7b6aca946c0a4be4965d76679b122f7a9512faa608cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bdc50c036390b1cb4df7b6aca946c0a4be4965d76679b122f7a9512faa608cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:38:32 compute-0 podman[421834]: 2025-12-13 09:38:32.763616072 +0000 UTC m=+0.556231470 container init 720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:38:32 compute-0 ceph-mon[76537]: pgmap v4133: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:32 compute-0 podman[421834]: 2025-12-13 09:38:32.776105996 +0000 UTC m=+0.568721284 container start 720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:38:32 compute-0 podman[421834]: 2025-12-13 09:38:32.781921412 +0000 UTC m=+0.574536730 container attach 720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_davinci, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:38:33 compute-0 happy_davinci[421850]: {
Dec 13 09:38:33 compute-0 happy_davinci[421850]:     "0": [
Dec 13 09:38:33 compute-0 happy_davinci[421850]:         {
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "devices": [
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "/dev/loop3"
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             ],
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_name": "ceph_lv0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_size": "21470642176",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "name": "ceph_lv0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "tags": {
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.cluster_name": "ceph",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.crush_device_class": "",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.encrypted": "0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.objectstore": "bluestore",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.osd_id": "0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.type": "block",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.vdo": "0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.with_tpm": "0"
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             },
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "type": "block",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "vg_name": "ceph_vg0"
Dec 13 09:38:33 compute-0 happy_davinci[421850]:         }
Dec 13 09:38:33 compute-0 happy_davinci[421850]:     ],
Dec 13 09:38:33 compute-0 happy_davinci[421850]:     "1": [
Dec 13 09:38:33 compute-0 happy_davinci[421850]:         {
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "devices": [
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "/dev/loop4"
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             ],
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_name": "ceph_lv1",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_size": "21470642176",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "name": "ceph_lv1",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "tags": {
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.cluster_name": "ceph",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.crush_device_class": "",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.encrypted": "0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.objectstore": "bluestore",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.osd_id": "1",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.type": "block",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.vdo": "0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.with_tpm": "0"
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             },
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "type": "block",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "vg_name": "ceph_vg1"
Dec 13 09:38:33 compute-0 happy_davinci[421850]:         }
Dec 13 09:38:33 compute-0 happy_davinci[421850]:     ],
Dec 13 09:38:33 compute-0 happy_davinci[421850]:     "2": [
Dec 13 09:38:33 compute-0 happy_davinci[421850]:         {
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "devices": [
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "/dev/loop5"
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             ],
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_name": "ceph_lv2",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_size": "21470642176",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "name": "ceph_lv2",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "tags": {
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.cluster_name": "ceph",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.crush_device_class": "",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.encrypted": "0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.objectstore": "bluestore",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.osd_id": "2",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.type": "block",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.vdo": "0",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:                 "ceph.with_tpm": "0"
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             },
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "type": "block",
Dec 13 09:38:33 compute-0 happy_davinci[421850]:             "vg_name": "ceph_vg2"
Dec 13 09:38:33 compute-0 happy_davinci[421850]:         }
Dec 13 09:38:33 compute-0 happy_davinci[421850]:     ]
Dec 13 09:38:33 compute-0 happy_davinci[421850]: }
Dec 13 09:38:33 compute-0 systemd[1]: libpod-720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7.scope: Deactivated successfully.
Dec 13 09:38:33 compute-0 podman[421834]: 2025-12-13 09:38:33.132455676 +0000 UTC m=+0.925070964 container died 720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_davinci, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 09:38:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bdc50c036390b1cb4df7b6aca946c0a4be4965d76679b122f7a9512faa608cf-merged.mount: Deactivated successfully.
Dec 13 09:38:33 compute-0 podman[421834]: 2025-12-13 09:38:33.338273454 +0000 UTC m=+1.130888742 container remove 720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_davinci, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:38:33 compute-0 systemd[1]: libpod-conmon-720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7.scope: Deactivated successfully.
Dec 13 09:38:33 compute-0 sudo[421756]: pam_unix(sudo:session): session closed for user root
Dec 13 09:38:33 compute-0 sudo[421873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:38:33 compute-0 sudo[421873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:38:33 compute-0 sudo[421873]: pam_unix(sudo:session): session closed for user root
Dec 13 09:38:33 compute-0 sudo[421898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:38:33 compute-0 sudo[421898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:38:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4134: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:33 compute-0 podman[421934]: 2025-12-13 09:38:33.858772066 +0000 UTC m=+0.027317317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:38:33 compute-0 podman[421934]: 2025-12-13 09:38:33.959736772 +0000 UTC m=+0.128281983 container create 1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_borg, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:38:34 compute-0 ceph-mon[76537]: pgmap v4134: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:34 compute-0 systemd[1]: Started libpod-conmon-1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5.scope.
Dec 13 09:38:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:38:35 compute-0 podman[421934]: 2025-12-13 09:38:35.076350894 +0000 UTC m=+1.244896195 container init 1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_borg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 09:38:35 compute-0 podman[421934]: 2025-12-13 09:38:35.085687129 +0000 UTC m=+1.254232380 container start 1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_borg, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:38:35 compute-0 practical_borg[421950]: 167 167
Dec 13 09:38:35 compute-0 systemd[1]: libpod-1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5.scope: Deactivated successfully.
Dec 13 09:38:35 compute-0 podman[421934]: 2025-12-13 09:38:35.350682395 +0000 UTC m=+1.519227696 container attach 1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_borg, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:38:35 compute-0 podman[421934]: 2025-12-13 09:38:35.351893665 +0000 UTC m=+1.520438906 container died 1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_borg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:38:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4135: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:35 compute-0 nova_compute[248510]: 2025-12-13 09:38:35.827 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd7f260fec64a94d7ae2dcd02714098879ae068000f4e109dcc21b9b3efa6877-merged.mount: Deactivated successfully.
Dec 13 09:38:36 compute-0 ceph-mon[76537]: pgmap v4135: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:36 compute-0 nova_compute[248510]: 2025-12-13 09:38:36.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:37 compute-0 podman[421934]: 2025-12-13 09:38:37.062741252 +0000 UTC m=+3.231286463 container remove 1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_borg, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:38:37 compute-0 systemd[1]: libpod-conmon-1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5.scope: Deactivated successfully.
Dec 13 09:38:37 compute-0 podman[421975]: 2025-12-13 09:38:37.243724917 +0000 UTC m=+0.032453616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:38:37 compute-0 podman[421975]: 2025-12-13 09:38:37.376515152 +0000 UTC m=+0.165243751 container create 34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_davinci, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 09:38:37 compute-0 systemd[1]: Started libpod-conmon-34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b.scope.
Dec 13 09:38:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c72a9b67fa717e223ed297f87380d83730ef315fe795a0bbc41c51f4ff1ceb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c72a9b67fa717e223ed297f87380d83730ef315fe795a0bbc41c51f4ff1ceb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c72a9b67fa717e223ed297f87380d83730ef315fe795a0bbc41c51f4ff1ceb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c72a9b67fa717e223ed297f87380d83730ef315fe795a0bbc41c51f4ff1ceb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:38:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4136: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:37 compute-0 podman[421975]: 2025-12-13 09:38:37.80053947 +0000 UTC m=+0.589268089 container init 34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:38:37 compute-0 podman[421975]: 2025-12-13 09:38:37.807879495 +0000 UTC m=+0.596608094 container start 34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:38:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:38:38 compute-0 podman[421975]: 2025-12-13 09:38:38.327560416 +0000 UTC m=+1.116289035 container attach 34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_davinci, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 09:38:38 compute-0 ceph-mon[76537]: pgmap v4136: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:38 compute-0 lvm[422069]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:38:38 compute-0 lvm[422070]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:38:38 compute-0 lvm[422069]: VG ceph_vg0 finished
Dec 13 09:38:38 compute-0 lvm[422070]: VG ceph_vg1 finished
Dec 13 09:38:38 compute-0 lvm[422072]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:38:38 compute-0 lvm[422072]: VG ceph_vg2 finished
Dec 13 09:38:38 compute-0 charming_davinci[421991]: {}
Dec 13 09:38:38 compute-0 systemd[1]: libpod-34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b.scope: Deactivated successfully.
Dec 13 09:38:38 compute-0 podman[421975]: 2025-12-13 09:38:38.749926744 +0000 UTC m=+1.538655343 container died 34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 09:38:38 compute-0 systemd[1]: libpod-34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b.scope: Consumed 1.459s CPU time.
Dec 13 09:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c72a9b67fa717e223ed297f87380d83730ef315fe795a0bbc41c51f4ff1ceb3-merged.mount: Deactivated successfully.
Dec 13 09:38:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4137: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:39 compute-0 podman[421975]: 2025-12-13 09:38:39.857852368 +0000 UTC m=+2.646580987 container remove 34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:38:39 compute-0 systemd[1]: libpod-conmon-34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b.scope: Deactivated successfully.
Dec 13 09:38:39 compute-0 ceph-mon[76537]: pgmap v4137: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:39 compute-0 sudo[421898]: pam_unix(sudo:session): session closed for user root
Dec 13 09:38:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:38:39 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:38:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:38:39 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:38:40 compute-0 sudo[422090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:38:40 compute-0 sudo[422090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:38:40 compute-0 sudo[422090]: pam_unix(sudo:session): session closed for user root
Dec 13 09:38:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:38:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:38:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:38:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:38:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:38:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:38:40 compute-0 nova_compute[248510]: 2025-12-13 09:38:40.829 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:38:40 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:38:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4138: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:41 compute-0 nova_compute[248510]: 2025-12-13 09:38:41.824 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:41 compute-0 ceph-mon[76537]: pgmap v4138: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:42 compute-0 nova_compute[248510]: 2025-12-13 09:38:42.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:38:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:38:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4139: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:43 compute-0 ceph-mon[76537]: pgmap v4139: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:43 compute-0 podman[422118]: 2025-12-13 09:38:43.988482865 +0000 UTC m=+0.062832009 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:38:44 compute-0 podman[422117]: 2025-12-13 09:38:44.021410032 +0000 UTC m=+0.102918126 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3)
Dec 13 09:38:44 compute-0 podman[422116]: 2025-12-13 09:38:44.027905625 +0000 UTC m=+0.109759848 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 09:38:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4140: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:45 compute-0 ceph-mon[76537]: pgmap v4140: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:45 compute-0 nova_compute[248510]: 2025-12-13 09:38:45.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:46 compute-0 nova_compute[248510]: 2025-12-13 09:38:46.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:38:46 compute-0 nova_compute[248510]: 2025-12-13 09:38:46.827 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4141: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:47 compute-0 ceph-mon[76537]: pgmap v4141: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:38:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4142: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:49 compute-0 ceph-mon[76537]: pgmap v4142: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:50 compute-0 nova_compute[248510]: 2025-12-13 09:38:50.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4143: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:51 compute-0 ceph-mon[76537]: pgmap v4143: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:51 compute-0 nova_compute[248510]: 2025-12-13 09:38:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:38:51 compute-0 nova_compute[248510]: 2025-12-13 09:38:51.830 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:51 compute-0 nova_compute[248510]: 2025-12-13 09:38:51.974 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:38:51 compute-0 nova_compute[248510]: 2025-12-13 09:38:51.975 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:38:51 compute-0 nova_compute[248510]: 2025-12-13 09:38:51.975 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:38:51 compute-0 nova_compute[248510]: 2025-12-13 09:38:51.975 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:38:51 compute-0 nova_compute[248510]: 2025-12-13 09:38:51.975 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:38:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:38:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187745040' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:38:52 compute-0 nova_compute[248510]: 2025-12-13 09:38:52.516 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:38:52 compute-0 nova_compute[248510]: 2025-12-13 09:38:52.713 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:38:52 compute-0 nova_compute[248510]: 2025-12-13 09:38:52.714 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3486MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:38:52 compute-0 nova_compute[248510]: 2025-12-13 09:38:52.714 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:38:52 compute-0 nova_compute[248510]: 2025-12-13 09:38:52.715 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:38:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:38:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/187745040' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:38:53 compute-0 nova_compute[248510]: 2025-12-13 09:38:53.570 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:38:53 compute-0 nova_compute[248510]: 2025-12-13 09:38:53.571 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:38:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4144: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:53 compute-0 nova_compute[248510]: 2025-12-13 09:38:53.748 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:38:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:38:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3312532257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:38:54 compute-0 ceph-mon[76537]: pgmap v4144: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:54 compute-0 nova_compute[248510]: 2025-12-13 09:38:54.302 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:38:54 compute-0 nova_compute[248510]: 2025-12-13 09:38:54.309 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:38:54 compute-0 nova_compute[248510]: 2025-12-13 09:38:54.401 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:38:54 compute-0 nova_compute[248510]: 2025-12-13 09:38:54.403 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:38:54 compute-0 nova_compute[248510]: 2025-12-13 09:38:54.403 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:38:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3312532257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:38:55 compute-0 nova_compute[248510]: 2025-12-13 09:38:55.404 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:38:55 compute-0 nova_compute[248510]: 2025-12-13 09:38:55.405 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:38:55 compute-0 nova_compute[248510]: 2025-12-13 09:38:55.405 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:38:55 compute-0 nova_compute[248510]: 2025-12-13 09:38:55.424 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:38:55 compute-0 nova_compute[248510]: 2025-12-13 09:38:55.425 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:38:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:38:55.474 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:38:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:38:55.474 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:38:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:38:55.475 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:38:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4145: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:55 compute-0 nova_compute[248510]: 2025-12-13 09:38:55.836 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:56 compute-0 ceph-mon[76537]: pgmap v4145: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:56 compute-0 nova_compute[248510]: 2025-12-13 09:38:56.833 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:38:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4146: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:57 compute-0 nova_compute[248510]: 2025-12-13 09:38:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:38:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:38:59 compute-0 ceph-mon[76537]: pgmap v4146: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4147: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:38:59 compute-0 nova_compute[248510]: 2025-12-13 09:38:59.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:38:59 compute-0 nova_compute[248510]: 2025-12-13 09:38:59.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:39:00 compute-0 ceph-mon[76537]: pgmap v4147: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:00 compute-0 nova_compute[248510]: 2025-12-13 09:39:00.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:39:00 compute-0 nova_compute[248510]: 2025-12-13 09:39:00.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4148: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:01 compute-0 nova_compute[248510]: 2025-12-13 09:39:01.836 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:02 compute-0 ceph-mon[76537]: pgmap v4148: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:39:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4149: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:04 compute-0 ceph-mon[76537]: pgmap v4149: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4150: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:05 compute-0 nova_compute[248510]: 2025-12-13 09:39:05.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:06 compute-0 ceph-mon[76537]: pgmap v4150: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:06 compute-0 nova_compute[248510]: 2025-12-13 09:39:06.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4151: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:39:08 compute-0 ceph-mon[76537]: pgmap v4151: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:08 compute-0 nova_compute[248510]: 2025-12-13 09:39:08.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:39:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:39:09
Dec 13 09:39:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:39:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:39:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.control', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes']
Dec 13 09:39:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:39:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4152: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:39:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:39:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:39:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:39:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:39:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:39:10 compute-0 nova_compute[248510]: 2025-12-13 09:39:10.843 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:11 compute-0 ceph-mon[76537]: pgmap v4152: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:39:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:39:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:39:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:39:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:39:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:39:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:39:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:39:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:39:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:39:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4153: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:11 compute-0 nova_compute[248510]: 2025-12-13 09:39:11.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:12 compute-0 ceph-mon[76537]: pgmap v4153: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4154: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:13 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:39:14 compute-0 podman[422224]: 2025-12-13 09:39:14.99853844 +0000 UTC m=+0.082904923 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 13 09:39:15 compute-0 podman[422225]: 2025-12-13 09:39:15.009682539 +0000 UTC m=+0.082198265 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:39:15 compute-0 podman[422223]: 2025-12-13 09:39:15.055008098 +0000 UTC m=+0.134426257 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec 13 09:39:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4155: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:15 compute-0 nova_compute[248510]: 2025-12-13 09:39:15.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:16 compute-0 ceph-mon[76537]: pgmap v4154: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:16 compute-0 nova_compute[248510]: 2025-12-13 09:39:16.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4156: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:39:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3553484223' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:39:17 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:39:17 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3553484223' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:39:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:39:19 compute-0 ceph-mon[76537]: pgmap v4155: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:19 compute-0 ceph-mon[76537]: pgmap v4156: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 09:39:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4157: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 13 09:39:20 compute-0 nova_compute[248510]: 2025-12-13 09:39:20.847 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4158: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 13 09:39:21 compute-0 nova_compute[248510]: 2025-12-13 09:39:21.848 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:39:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:39:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3553484223' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:39:22 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3553484223' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:39:23 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Dec 13 09:39:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4159: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 511 B/s wr, 7 op/s
Dec 13 09:39:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Dec 13 09:39:24 compute-0 ceph-mon[76537]: pgmap v4157: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 13 09:39:24 compute-0 ceph-mon[76537]: pgmap v4158: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 13 09:39:24 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Dec 13 09:39:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4161: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Dec 13 09:39:25 compute-0 nova_compute[248510]: 2025-12-13 09:39:25.851 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:26 compute-0 ceph-mon[76537]: pgmap v4159: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 511 B/s wr, 7 op/s
Dec 13 09:39:26 compute-0 nova_compute[248510]: 2025-12-13 09:39:26.851 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:27 compute-0 ceph-mon[76537]: osdmap e324: 3 total, 3 up, 3 in
Dec 13 09:39:27 compute-0 ceph-mon[76537]: pgmap v4161: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Dec 13 09:39:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4162: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Dec 13 09:39:28 compute-0 ceph-mon[76537]: pgmap v4162: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Dec 13 09:39:28 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:39:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4163: 321 pgs: 321 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1023 B/s wr, 21 op/s
Dec 13 09:39:30 compute-0 ceph-mon[76537]: pgmap v4163: 321 pgs: 321 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1023 B/s wr, 21 op/s
Dec 13 09:39:30 compute-0 nova_compute[248510]: 2025-12-13 09:39:30.852 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4164: 321 pgs: 321 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1023 B/s wr, 21 op/s
Dec 13 09:39:31 compute-0 nova_compute[248510]: 2025-12-13 09:39:31.854 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:32 compute-0 ceph-mon[76537]: pgmap v4164: 321 pgs: 321 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1023 B/s wr, 21 op/s
Dec 13 09:39:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4165: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Dec 13 09:39:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:39:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Dec 13 09:39:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Dec 13 09:39:34 compute-0 ceph-mon[76537]: pgmap v4165: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Dec 13 09:39:34 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Dec 13 09:39:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4167: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Dec 13 09:39:35 compute-0 nova_compute[248510]: 2025-12-13 09:39:35.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:39:35 compute-0 nova_compute[248510]: 2025-12-13 09:39:35.853 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Dec 13 09:39:36 compute-0 ceph-mon[76537]: osdmap e325: 3 total, 3 up, 3 in
Dec 13 09:39:36 compute-0 ceph-mon[76537]: pgmap v4167: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Dec 13 09:39:36 compute-0 nova_compute[248510]: 2025-12-13 09:39:36.856 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Dec 13 09:39:37 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Dec 13 09:39:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4169: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.1 KiB/s wr, 13 op/s
Dec 13 09:39:38 compute-0 ceph-mon[76537]: osdmap e326: 3 total, 3 up, 3 in
Dec 13 09:39:38 compute-0 ceph-mon[76537]: pgmap v4169: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.1 KiB/s wr, 13 op/s
Dec 13 09:39:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:39:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4170: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.4 KiB/s wr, 20 op/s
Dec 13 09:39:40 compute-0 sudo[422287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:39:40 compute-0 sudo[422287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:39:40 compute-0 sudo[422287]: pam_unix(sudo:session): session closed for user root
Dec 13 09:39:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:39:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:39:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:39:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:39:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:39:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:39:40 compute-0 sudo[422312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:39:40 compute-0 sudo[422312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:39:40 compute-0 nova_compute[248510]: 2025-12-13 09:39:40.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:40 compute-0 sudo[422312]: pam_unix(sudo:session): session closed for user root
Dec 13 09:39:40 compute-0 ceph-mon[76537]: pgmap v4170: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.4 KiB/s wr, 20 op/s
Dec 13 09:39:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4171: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 895 B/s wr, 15 op/s
Dec 13 09:39:41 compute-0 nova_compute[248510]: 2025-12-13 09:39:41.858 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 09:39:42 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 09:39:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:39:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:39:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:39:42 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:39:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:39:43 compute-0 ceph-mon[76537]: pgmap v4171: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 895 B/s wr, 15 op/s
Dec 13 09:39:43 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 09:39:43 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:39:43 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:39:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4172: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 573 B/s wr, 8 op/s
Dec 13 09:39:43 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:39:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:39:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:39:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:39:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:39:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:39:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:39:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:39:44 compute-0 sudo[422367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:39:44 compute-0 sudo[422367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:39:44 compute-0 sudo[422367]: pam_unix(sudo:session): session closed for user root
Dec 13 09:39:44 compute-0 sudo[422392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:39:44 compute-0 sudo[422392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:39:44 compute-0 podman[422429]: 2025-12-13 09:39:44.584050001 +0000 UTC m=+0.027109062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:39:44 compute-0 nova_compute[248510]: 2025-12-13 09:39:44.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:39:45 compute-0 podman[422429]: 2025-12-13 09:39:45.033382466 +0000 UTC m=+0.476441507 container create 4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 09:39:45 compute-0 ceph-mon[76537]: pgmap v4172: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 573 B/s wr, 8 op/s
Dec 13 09:39:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:39:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:39:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:39:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:39:45 compute-0 systemd[1]: Started libpod-conmon-4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c.scope.
Dec 13 09:39:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:39:45 compute-0 podman[422429]: 2025-12-13 09:39:45.252598611 +0000 UTC m=+0.695657712 container init 4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:39:45 compute-0 podman[422448]: 2025-12-13 09:39:45.252545419 +0000 UTC m=+0.159790094 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 09:39:45 compute-0 podman[422447]: 2025-12-13 09:39:45.254280513 +0000 UTC m=+0.169116378 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 13 09:39:45 compute-0 podman[422429]: 2025-12-13 09:39:45.264263994 +0000 UTC m=+0.707322995 container start 4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 09:39:45 compute-0 sad_wu[422459]: 167 167
Dec 13 09:39:45 compute-0 systemd[1]: libpod-4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c.scope: Deactivated successfully.
Dec 13 09:39:45 compute-0 podman[422429]: 2025-12-13 09:39:45.292212416 +0000 UTC m=+0.735271447 container attach 4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:39:45 compute-0 podman[422429]: 2025-12-13 09:39:45.295013646 +0000 UTC m=+0.738072657 container died 4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:39:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4173: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 14 op/s
Dec 13 09:39:45 compute-0 nova_compute[248510]: 2025-12-13 09:39:45.856 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8eebeefe9736551223a5499ba0eeabf30e3d250e06b566294030cd69ac51be07-merged.mount: Deactivated successfully.
Dec 13 09:39:46 compute-0 ceph-mon[76537]: pgmap v4173: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 14 op/s
Dec 13 09:39:46 compute-0 nova_compute[248510]: 2025-12-13 09:39:46.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:47 compute-0 podman[422429]: 2025-12-13 09:39:47.198833517 +0000 UTC m=+2.641892528 container remove 4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 09:39:47 compute-0 systemd[1]: libpod-conmon-4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c.scope: Deactivated successfully.
Dec 13 09:39:47 compute-0 podman[422443]: 2025-12-13 09:39:47.299735531 +0000 UTC m=+2.214511515 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 13 09:39:47 compute-0 podman[422535]: 2025-12-13 09:39:47.385192687 +0000 UTC m=+0.032076156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:39:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4174: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 481 B/s wr, 13 op/s
Dec 13 09:39:47 compute-0 nova_compute[248510]: 2025-12-13 09:39:47.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:39:47 compute-0 podman[422535]: 2025-12-13 09:39:47.857464188 +0000 UTC m=+0.504347637 container create 0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_clarke, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 09:39:47 compute-0 ceph-mon[76537]: pgmap v4174: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 481 B/s wr, 13 op/s
Dec 13 09:39:48 compute-0 systemd[1]: Started libpod-conmon-0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a.scope.
Dec 13 09:39:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b93c0ed958b2f29505e8e5d5cb8f1de2d0d1eebdb728c51a12866ccd972be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b93c0ed958b2f29505e8e5d5cb8f1de2d0d1eebdb728c51a12866ccd972be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b93c0ed958b2f29505e8e5d5cb8f1de2d0d1eebdb728c51a12866ccd972be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b93c0ed958b2f29505e8e5d5cb8f1de2d0d1eebdb728c51a12866ccd972be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b93c0ed958b2f29505e8e5d5cb8f1de2d0d1eebdb728c51a12866ccd972be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:48 compute-0 podman[422535]: 2025-12-13 09:39:48.629419665 +0000 UTC m=+1.276303154 container init 0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_clarke, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 09:39:48 compute-0 podman[422535]: 2025-12-13 09:39:48.638346799 +0000 UTC m=+1.285230248 container start 0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_clarke, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 09:39:48 compute-0 podman[422535]: 2025-12-13 09:39:48.723879027 +0000 UTC m=+1.370762476 container attach 0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_clarke, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 09:39:49 compute-0 frosty_clarke[422552]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:39:49 compute-0 frosty_clarke[422552]: --> All data devices are unavailable
Dec 13 09:39:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:39:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Dec 13 09:39:49 compute-0 systemd[1]: libpod-0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a.scope: Deactivated successfully.
Dec 13 09:39:49 compute-0 podman[422535]: 2025-12-13 09:39:49.1735414 +0000 UTC m=+1.820424849 container died 0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_clarke, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:39:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Dec 13 09:39:49 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Dec 13 09:39:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f19b93c0ed958b2f29505e8e5d5cb8f1de2d0d1eebdb728c51a12866ccd972be-merged.mount: Deactivated successfully.
Dec 13 09:39:49 compute-0 podman[422535]: 2025-12-13 09:39:49.599990009 +0000 UTC m=+2.246873458 container remove 0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:39:49 compute-0 systemd[1]: libpod-conmon-0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a.scope: Deactivated successfully.
Dec 13 09:39:49 compute-0 sudo[422392]: pam_unix(sudo:session): session closed for user root
Dec 13 09:39:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4176: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 716 B/s wr, 12 op/s
Dec 13 09:39:49 compute-0 sudo[422588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:39:49 compute-0 sudo[422588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:39:49 compute-0 sudo[422588]: pam_unix(sudo:session): session closed for user root
Dec 13 09:39:49 compute-0 sudo[422613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:39:49 compute-0 sudo[422613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:39:50 compute-0 podman[422650]: 2025-12-13 09:39:50.072922647 +0000 UTC m=+0.023528392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:39:50 compute-0 podman[422650]: 2025-12-13 09:39:50.209344343 +0000 UTC m=+0.159950078 container create 9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:39:50 compute-0 ceph-mon[76537]: osdmap e327: 3 total, 3 up, 3 in
Dec 13 09:39:50 compute-0 ceph-mon[76537]: pgmap v4176: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 716 B/s wr, 12 op/s
Dec 13 09:39:50 compute-0 systemd[1]: Started libpod-conmon-9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6.scope.
Dec 13 09:39:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:39:50 compute-0 nova_compute[248510]: 2025-12-13 09:39:50.859 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:51 compute-0 podman[422650]: 2025-12-13 09:39:51.085290202 +0000 UTC m=+1.035895957 container init 9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 09:39:51 compute-0 podman[422650]: 2025-12-13 09:39:51.094543314 +0000 UTC m=+1.045149039 container start 9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 09:39:51 compute-0 unruffled_sammet[422666]: 167 167
Dec 13 09:39:51 compute-0 systemd[1]: libpod-9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6.scope: Deactivated successfully.
Dec 13 09:39:51 compute-0 podman[422650]: 2025-12-13 09:39:51.37825437 +0000 UTC m=+1.328860205 container attach 9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:39:51 compute-0 podman[422650]: 2025-12-13 09:39:51.378815994 +0000 UTC m=+1.329421759 container died 9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 09:39:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4177: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 716 B/s wr, 12 op/s
Dec 13 09:39:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fc9d003412f46eb2dac1ddf4a6616c3c95e34818f92328f3dbacd555b9d00dd-merged.mount: Deactivated successfully.
Dec 13 09:39:51 compute-0 podman[422650]: 2025-12-13 09:39:51.793941098 +0000 UTC m=+1.744546823 container remove 9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:39:51 compute-0 systemd[1]: libpod-conmon-9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6.scope: Deactivated successfully.
Dec 13 09:39:51 compute-0 nova_compute[248510]: 2025-12-13 09:39:51.863 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:52 compute-0 podman[422690]: 2025-12-13 09:39:51.945393012 +0000 UTC m=+0.027578784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:39:52 compute-0 podman[422690]: 2025-12-13 09:39:52.126398478 +0000 UTC m=+0.208584230 container create 85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 09:39:52 compute-0 ceph-mon[76537]: pgmap v4177: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 716 B/s wr, 12 op/s
Dec 13 09:39:52 compute-0 systemd[1]: Started libpod-conmon-85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d.scope.
Dec 13 09:39:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a40757cd05582fd64dd02ff31b2d93cb5fbe275e2167cf1496c565b109acc44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a40757cd05582fd64dd02ff31b2d93cb5fbe275e2167cf1496c565b109acc44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a40757cd05582fd64dd02ff31b2d93cb5fbe275e2167cf1496c565b109acc44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a40757cd05582fd64dd02ff31b2d93cb5fbe275e2167cf1496c565b109acc44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:52 compute-0 podman[422690]: 2025-12-13 09:39:52.403251351 +0000 UTC m=+0.485437133 container init 85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_davinci, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 09:39:52 compute-0 podman[422690]: 2025-12-13 09:39:52.416796701 +0000 UTC m=+0.498982463 container start 85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 13 09:39:52 compute-0 podman[422690]: 2025-12-13 09:39:52.425034568 +0000 UTC m=+0.507220340 container attach 85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_davinci, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]: {
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:     "0": [
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:         {
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "devices": [
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "/dev/loop3"
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             ],
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_name": "ceph_lv0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_size": "21470642176",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "name": "ceph_lv0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "tags": {
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.cluster_name": "ceph",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.crush_device_class": "",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.encrypted": "0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.objectstore": "bluestore",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.osd_id": "0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.type": "block",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.vdo": "0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.with_tpm": "0"
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             },
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "type": "block",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "vg_name": "ceph_vg0"
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:         }
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:     ],
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:     "1": [
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:         {
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "devices": [
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "/dev/loop4"
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             ],
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_name": "ceph_lv1",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_size": "21470642176",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "name": "ceph_lv1",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "tags": {
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.cluster_name": "ceph",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.crush_device_class": "",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.encrypted": "0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.objectstore": "bluestore",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.osd_id": "1",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.type": "block",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.vdo": "0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.with_tpm": "0"
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             },
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "type": "block",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "vg_name": "ceph_vg1"
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:         }
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:     ],
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:     "2": [
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:         {
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "devices": [
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "/dev/loop5"
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             ],
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_name": "ceph_lv2",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_size": "21470642176",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "name": "ceph_lv2",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "tags": {
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.cluster_name": "ceph",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.crush_device_class": "",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.encrypted": "0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.objectstore": "bluestore",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.osd_id": "2",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.type": "block",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.vdo": "0",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:                 "ceph.with_tpm": "0"
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             },
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "type": "block",
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:             "vg_name": "ceph_vg2"
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:         }
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]:     ]
Dec 13 09:39:52 compute-0 hopeful_davinci[422706]: }
Dec 13 09:39:52 compute-0 systemd[1]: libpod-85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d.scope: Deactivated successfully.
Dec 13 09:39:52 compute-0 podman[422690]: 2025-12-13 09:39:52.767361045 +0000 UTC m=+0.849546827 container died 85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:39:52 compute-0 nova_compute[248510]: 2025-12-13 09:39:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:39:52 compute-0 nova_compute[248510]: 2025-12-13 09:39:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:39:52 compute-0 nova_compute[248510]: 2025-12-13 09:39:52.806 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:39:52 compute-0 nova_compute[248510]: 2025-12-13 09:39:52.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:39:52 compute-0 nova_compute[248510]: 2025-12-13 09:39:52.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:39:52 compute-0 nova_compute[248510]: 2025-12-13 09:39:52.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:39:52 compute-0 nova_compute[248510]: 2025-12-13 09:39:52.808 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:39:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a40757cd05582fd64dd02ff31b2d93cb5fbe275e2167cf1496c565b109acc44-merged.mount: Deactivated successfully.
Dec 13 09:39:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Dec 13 09:39:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Dec 13 09:39:53 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Dec 13 09:39:53 compute-0 podman[422690]: 2025-12-13 09:39:53.259552235 +0000 UTC m=+1.341737987 container remove 85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 09:39:53 compute-0 sudo[422613]: pam_unix(sudo:session): session closed for user root
Dec 13 09:39:53 compute-0 systemd[1]: libpod-conmon-85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d.scope: Deactivated successfully.
Dec 13 09:39:53 compute-0 sudo[422748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:39:53 compute-0 sudo[422748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:39:53 compute-0 sudo[422748]: pam_unix(sudo:session): session closed for user root
Dec 13 09:39:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:39:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2342472209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:39:53 compute-0 sudo[422773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:39:53 compute-0 sudo[422773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:39:53 compute-0 nova_compute[248510]: 2025-12-13 09:39:53.484 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:39:53 compute-0 nova_compute[248510]: 2025-12-13 09:39:53.667 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:39:53 compute-0 nova_compute[248510]: 2025-12-13 09:39:53.668 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3442MB free_disk=59.987360855564475GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:39:53 compute-0 nova_compute[248510]: 2025-12-13 09:39:53.669 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:39:53 compute-0 nova_compute[248510]: 2025-12-13 09:39:53.669 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:39:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4179: 321 pgs: 321 active+clean; 21 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.6 MiB/s wr, 13 op/s
Dec 13 09:39:53 compute-0 nova_compute[248510]: 2025-12-13 09:39:53.746 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:39:53 compute-0 nova_compute[248510]: 2025-12-13 09:39:53.747 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:39:53 compute-0 nova_compute[248510]: 2025-12-13 09:39:53.771 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:39:53 compute-0 podman[422813]: 2025-12-13 09:39:53.761578283 +0000 UTC m=+0.021278885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:39:54 compute-0 podman[422813]: 2025-12-13 09:39:54.10347024 +0000 UTC m=+0.363170842 container create 05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:39:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:39:54 compute-0 systemd[1]: Started libpod-conmon-05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc.scope.
Dec 13 09:39:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:39:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:39:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192107434' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:39:54 compute-0 nova_compute[248510]: 2025-12-13 09:39:54.707 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.936s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:39:54 compute-0 nova_compute[248510]: 2025-12-13 09:39:54.714 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:39:54 compute-0 nova_compute[248510]: 2025-12-13 09:39:54.764 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:39:54 compute-0 nova_compute[248510]: 2025-12-13 09:39:54.766 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:39:54 compute-0 nova_compute[248510]: 2025-12-13 09:39:54.767 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:39:54 compute-0 podman[422813]: 2025-12-13 09:39:54.857814655 +0000 UTC m=+1.117515257 container init 05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 09:39:54 compute-0 podman[422813]: 2025-12-13 09:39:54.869114939 +0000 UTC m=+1.128815511 container start 05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:39:54 compute-0 romantic_mccarthy[422850]: 167 167
Dec 13 09:39:54 compute-0 systemd[1]: libpod-05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc.scope: Deactivated successfully.
Dec 13 09:39:55 compute-0 ceph-mon[76537]: osdmap e328: 3 total, 3 up, 3 in
Dec 13 09:39:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2342472209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:39:55 compute-0 ceph-mon[76537]: pgmap v4179: 321 pgs: 321 active+clean; 21 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.6 MiB/s wr, 13 op/s
Dec 13 09:39:55 compute-0 podman[422813]: 2025-12-13 09:39:55.253492012 +0000 UTC m=+1.513192584 container attach 05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 13 09:39:55 compute-0 podman[422813]: 2025-12-13 09:39:55.254328653 +0000 UTC m=+1.514029245 container died 05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:39:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:39:55.474 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:39:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:39:55.475 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:39:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:39:55.476 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:39:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4180: 321 pgs: 321 active+clean; 21 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Dec 13 09:39:55 compute-0 nova_compute[248510]: 2025-12-13 09:39:55.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:56 compute-0 nova_compute[248510]: 2025-12-13 09:39:56.867 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-441f40f441a3302c8d591bb719402c5386e3dc8aeee74400bcd48ef8d60def11-merged.mount: Deactivated successfully.
Dec 13 09:39:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/192107434' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:39:57 compute-0 ceph-mon[76537]: pgmap v4180: 321 pgs: 321 active+clean; 21 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Dec 13 09:39:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4181: 321 pgs: 321 active+clean; 21 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 2.5 MiB/s wr, 10 op/s
Dec 13 09:39:57 compute-0 podman[422813]: 2025-12-13 09:39:57.782445843 +0000 UTC m=+4.042146455 container remove 05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 09:39:57 compute-0 systemd[1]: libpod-conmon-05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc.scope: Deactivated successfully.
Dec 13 09:39:58 compute-0 podman[422875]: 2025-12-13 09:39:57.965439529 +0000 UTC m=+0.027102392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:39:58 compute-0 podman[422875]: 2025-12-13 09:39:58.305303684 +0000 UTC m=+0.366966497 container create 81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 09:39:58 compute-0 ceph-mon[76537]: pgmap v4181: 321 pgs: 321 active+clean; 21 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 2.5 MiB/s wr, 10 op/s
Dec 13 09:39:58 compute-0 systemd[1]: Started libpod-conmon-81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208.scope.
Dec 13 09:39:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b5f21673c537729ab808fa0a22d4a6cf3da350725b7f55647914abe3ae9853/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b5f21673c537729ab808fa0a22d4a6cf3da350725b7f55647914abe3ae9853/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b5f21673c537729ab808fa0a22d4a6cf3da350725b7f55647914abe3ae9853/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b5f21673c537729ab808fa0a22d4a6cf3da350725b7f55647914abe3ae9853/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:39:58 compute-0 nova_compute[248510]: 2025-12-13 09:39:58.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:39:58 compute-0 nova_compute[248510]: 2025-12-13 09:39:58.769 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:39:58 compute-0 nova_compute[248510]: 2025-12-13 09:39:58.769 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:39:58 compute-0 nova_compute[248510]: 2025-12-13 09:39:58.795 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:39:58 compute-0 nova_compute[248510]: 2025-12-13 09:39:58.795 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:39:58 compute-0 podman[422875]: 2025-12-13 09:39:58.80681298 +0000 UTC m=+0.868475803 container init 81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 09:39:58 compute-0 podman[422875]: 2025-12-13 09:39:58.817468507 +0000 UTC m=+0.879131310 container start 81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:39:58 compute-0 podman[422875]: 2025-12-13 09:39:58.948844407 +0000 UTC m=+1.010507230 container attach 81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 09:39:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:39:59 compute-0 lvm[422969]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:39:59 compute-0 lvm[422969]: VG ceph_vg0 finished
Dec 13 09:39:59 compute-0 lvm[422970]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:39:59 compute-0 lvm[422970]: VG ceph_vg1 finished
Dec 13 09:39:59 compute-0 lvm[422972]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:39:59 compute-0 lvm[422972]: VG ceph_vg2 finished
Dec 13 09:39:59 compute-0 inspiring_mendeleev[422891]: {}
Dec 13 09:39:59 compute-0 systemd[1]: libpod-81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208.scope: Deactivated successfully.
Dec 13 09:39:59 compute-0 systemd[1]: libpod-81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208.scope: Consumed 1.309s CPU time.
Dec 13 09:39:59 compute-0 podman[422875]: 2025-12-13 09:39:59.616379711 +0000 UTC m=+1.678042514 container died 81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 09:39:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4182: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Dec 13 09:39:59 compute-0 ceph-mon[76537]: pgmap v4182: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Dec 13 09:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-11b5f21673c537729ab808fa0a22d4a6cf3da350725b7f55647914abe3ae9853-merged.mount: Deactivated successfully.
Dec 13 09:40:00 compute-0 podman[422875]: 2025-12-13 09:40:00.662129932 +0000 UTC m=+2.723792775 container remove 81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 09:40:00 compute-0 systemd[1]: libpod-conmon-81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208.scope: Deactivated successfully.
Dec 13 09:40:00 compute-0 sudo[422773]: pam_unix(sudo:session): session closed for user root
Dec 13 09:40:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:40:00 compute-0 nova_compute[248510]: 2025-12-13 09:40:00.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:40:00 compute-0 nova_compute[248510]: 2025-12-13 09:40:00.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:40:00 compute-0 nova_compute[248510]: 2025-12-13 09:40:00.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:40:00 compute-0 nova_compute[248510]: 2025-12-13 09:40:00.863 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:01 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:40:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:40:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4183: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Dec 13 09:40:01 compute-0 nova_compute[248510]: 2025-12-13 09:40:01.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:02 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:40:02 compute-0 sudo[422988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:40:02 compute-0 sudo[422988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:40:02 compute-0 sudo[422988]: pam_unix(sudo:session): session closed for user root
Dec 13 09:40:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:40:02 compute-0 ceph-mon[76537]: pgmap v4183: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Dec 13 09:40:02 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:40:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4184: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 783 KiB/s wr, 8 op/s
Dec 13 09:40:04 compute-0 ceph-mon[76537]: pgmap v4184: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 783 KiB/s wr, 8 op/s
Dec 13 09:40:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:40:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4185: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 426 B/s wr, 3 op/s
Dec 13 09:40:05 compute-0 ceph-mon[76537]: pgmap v4185: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 426 B/s wr, 3 op/s
Dec 13 09:40:05 compute-0 nova_compute[248510]: 2025-12-13 09:40:05.865 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:06 compute-0 nova_compute[248510]: 2025-12-13 09:40:06.873 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4186: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Dec 13 09:40:08 compute-0 ceph-mon[76537]: pgmap v4186: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Dec 13 09:40:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:40:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:40:09
Dec 13 09:40:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:40:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:40:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'images', 'vms', 'backups', 'default.rgw.control', '.rgw.root', '.mgr', 'default.rgw.log']
Dec 13 09:40:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:40:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4187: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Dec 13 09:40:10 compute-0 ceph-mon[76537]: pgmap v4187: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Dec 13 09:40:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:40:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:40:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:40:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:40:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:40:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:40:10 compute-0 nova_compute[248510]: 2025-12-13 09:40:10.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:40:10 compute-0 nova_compute[248510]: 2025-12-13 09:40:10.866 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:40:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:40:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:40:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:40:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:40:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:40:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:40:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:40:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:40:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:40:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4188: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:11 compute-0 nova_compute[248510]: 2025-12-13 09:40:11.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:13 compute-0 ceph-mon[76537]: pgmap v4188: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4189: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:14 compute-0 ceph-mon[76537]: pgmap v4189: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:40:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:40:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1500980230' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:40:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:40:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1500980230' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:40:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1500980230' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:40:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1500980230' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:40:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4190: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:15 compute-0 nova_compute[248510]: 2025-12-13 09:40:15.868 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:15 compute-0 podman[423014]: 2025-12-13 09:40:15.988482215 +0000 UTC m=+0.075342313 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 09:40:16 compute-0 podman[423013]: 2025-12-13 09:40:16.049513998 +0000 UTC m=+0.136540490 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:40:16 compute-0 ceph-mon[76537]: pgmap v4190: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:16 compute-0 nova_compute[248510]: 2025-12-13 09:40:16.879 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:17 compute-0 sshd-session[423053]: Invalid user xrp from 80.94.92.165 port 49990
Dec 13 09:40:17 compute-0 sshd-session[423053]: Connection closed by invalid user xrp 80.94.92.165 port 49990 [preauth]
Dec 13 09:40:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4191: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:17 compute-0 podman[423055]: 2025-12-13 09:40:17.987657613 +0000 UTC m=+0.071082065 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 13 09:40:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:40:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4192: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:19 compute-0 ceph-mon[76537]: pgmap v4191: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:20 compute-0 nova_compute[248510]: 2025-12-13 09:40:20.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:21 compute-0 ceph-mon[76537]: pgmap v4192: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4193: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:21 compute-0 nova_compute[248510]: 2025-12-13 09:40:21.882 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.537814584026472e-05 of space, bias 1.0, pg target 0.004613443752079416 quantized to 32 (current 32)
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003367012783039856 of space, bias 1.0, pg target 0.10101038349119568 quantized to 32 (current 32)
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.966907383726934e-07 of space, bias 4.0, pg target 0.0007160288860472322 quantized to 16 (current 32)
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:40:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:40:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4194: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:23 compute-0 ceph-mon[76537]: pgmap v4193: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:40:25 compute-0 ceph-mon[76537]: pgmap v4194: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4195: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:25 compute-0 nova_compute[248510]: 2025-12-13 09:40:25.872 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:26 compute-0 nova_compute[248510]: 2025-12-13 09:40:26.885 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4196: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:27 compute-0 ceph-mon[76537]: pgmap v4195: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:40:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4197: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:29 compute-0 ceph-mon[76537]: pgmap v4196: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:30 compute-0 nova_compute[248510]: 2025-12-13 09:40:30.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4198: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:31 compute-0 nova_compute[248510]: 2025-12-13 09:40:31.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:32 compute-0 ceph-mon[76537]: pgmap v4197: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4199: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:33 compute-0 ceph-mon[76537]: pgmap v4198: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:40:35 compute-0 ceph-mon[76537]: pgmap v4199: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4200: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:35 compute-0 nova_compute[248510]: 2025-12-13 09:40:35.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:36 compute-0 nova_compute[248510]: 2025-12-13 09:40:36.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:37 compute-0 ceph-mon[76537]: pgmap v4200: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4201: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:40:39 compute-0 ceph-mon[76537]: pgmap v4201: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4202: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:40:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:40:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:40:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:40:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:40:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:40:40 compute-0 nova_compute[248510]: 2025-12-13 09:40:40.877 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:41 compute-0 ceph-mon[76537]: pgmap v4202: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4203: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:41 compute-0 nova_compute[248510]: 2025-12-13 09:40:41.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:42 compute-0 ceph-mon[76537]: pgmap v4203: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4204: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:40:45 compute-0 ceph-mon[76537]: pgmap v4204: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4205: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:45 compute-0 nova_compute[248510]: 2025-12-13 09:40:45.880 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:46 compute-0 nova_compute[248510]: 2025-12-13 09:40:46.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:40:46 compute-0 nova_compute[248510]: 2025-12-13 09:40:46.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:46 compute-0 podman[423076]: 2025-12-13 09:40:46.960891248 +0000 UTC m=+0.053409878 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:40:46 compute-0 podman[423075]: 2025-12-13 09:40:46.989101738 +0000 UTC m=+0.083164666 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 13 09:40:47 compute-0 ceph-mon[76537]: pgmap v4205: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4206: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:47 compute-0 nova_compute[248510]: 2025-12-13 09:40:47.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:40:49 compute-0 podman[423117]: 2025-12-13 09:40:49.004792624 +0000 UTC m=+0.094671123 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:40:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:40:49 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Cumulative writes: 18K writes, 83K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1274 writes, 5603 keys, 1274 commit groups, 1.0 writes per commit group, ingest: 8.86 MB, 0.01 MB/s
                                           Interval WAL: 1273 writes, 1273 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     22.4      4.71              0.38        61    0.077       0      0       0.0       0.0
                                             L6      1/0   11.01 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2     76.0     64.8      8.41              1.81        60    0.140    439K    31K       0.0       0.0
                                            Sum      1/0   11.01 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     48.7     49.6     13.13              2.20       121    0.108    439K    31K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   8.2     31.5     31.4      1.51              0.21         8    0.188     40K   2004       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     76.0     64.8      8.41              1.81        60    0.140    439K    31K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     22.4      4.71              0.38        60    0.078       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.103, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.64 GB write, 0.08 MB/s write, 0.62 GB read, 0.08 MB/s read, 13.1 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 73.37 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000626 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4520,70.28 MB,23.1198%) FilterBlock(122,1.18 MB,0.387006%) IndexBlock(122,1.91 MB,0.628587%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 09:40:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:40:49 compute-0 ceph-mon[76537]: pgmap v4206: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4207: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:50 compute-0 nova_compute[248510]: 2025-12-13 09:40:50.895 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:51 compute-0 ceph-mon[76537]: pgmap v4207: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4208: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:51 compute-0 nova_compute[248510]: 2025-12-13 09:40:51.900 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:52 compute-0 nova_compute[248510]: 2025-12-13 09:40:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:40:53 compute-0 ceph-mon[76537]: pgmap v4208: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4209: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:40:54 compute-0 nova_compute[248510]: 2025-12-13 09:40:54.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:40:54 compute-0 ceph-mon[76537]: pgmap v4209: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:54 compute-0 nova_compute[248510]: 2025-12-13 09:40:54.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:40:54 compute-0 nova_compute[248510]: 2025-12-13 09:40:54.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:40:54 compute-0 nova_compute[248510]: 2025-12-13 09:40:54.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:40:54 compute-0 nova_compute[248510]: 2025-12-13 09:40:54.810 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:40:54 compute-0 nova_compute[248510]: 2025-12-13 09:40:54.810 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:40:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:40:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3423949893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:40:55 compute-0 nova_compute[248510]: 2025-12-13 09:40:55.385 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:40:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:40:55.475 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:40:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:40:55.475 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:40:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:40:55.476 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:40:55 compute-0 nova_compute[248510]: 2025-12-13 09:40:55.578 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:40:55 compute-0 nova_compute[248510]: 2025-12-13 09:40:55.579 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3487MB free_disk=59.98735874146223GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:40:55 compute-0 nova_compute[248510]: 2025-12-13 09:40:55.580 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:40:55 compute-0 nova_compute[248510]: 2025-12-13 09:40:55.580 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:40:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4210: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:55 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3423949893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:40:55 compute-0 nova_compute[248510]: 2025-12-13 09:40:55.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:56 compute-0 nova_compute[248510]: 2025-12-13 09:40:56.156 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:40:56 compute-0 nova_compute[248510]: 2025-12-13 09:40:56.156 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:40:56 compute-0 nova_compute[248510]: 2025-12-13 09:40:56.175 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:40:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:40:56 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1690645903' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:40:56 compute-0 nova_compute[248510]: 2025-12-13 09:40:56.725 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:40:56 compute-0 nova_compute[248510]: 2025-12-13 09:40:56.734 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:40:56 compute-0 nova_compute[248510]: 2025-12-13 09:40:56.753 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:40:56 compute-0 nova_compute[248510]: 2025-12-13 09:40:56.755 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:40:56 compute-0 nova_compute[248510]: 2025-12-13 09:40:56.756 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:40:56 compute-0 nova_compute[248510]: 2025-12-13 09:40:56.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:40:56 compute-0 ceph-mon[76537]: pgmap v4210: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1690645903' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:40:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4211: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:58 compute-0 nova_compute[248510]: 2025-12-13 09:40:58.757 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:40:58 compute-0 nova_compute[248510]: 2025-12-13 09:40:58.757 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:40:58 compute-0 nova_compute[248510]: 2025-12-13 09:40:58.758 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:40:58 compute-0 nova_compute[248510]: 2025-12-13 09:40:58.788 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:40:58 compute-0 ceph-mon[76537]: pgmap v4211: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:40:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4212: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:40:59 compute-0 nova_compute[248510]: 2025-12-13 09:40:59.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:40:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Dec 13 09:40:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Dec 13 09:40:59 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Dec 13 09:41:00 compute-0 nova_compute[248510]: 2025-12-13 09:41:00.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:00 compute-0 ceph-mon[76537]: pgmap v4212: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:00 compute-0 ceph-mon[76537]: osdmap e329: 3 total, 3 up, 3 in
Dec 13 09:41:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4214: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:01 compute-0 nova_compute[248510]: 2025-12-13 09:41:01.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:41:01 compute-0 nova_compute[248510]: 2025-12-13 09:41:01.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:02 compute-0 sudo[423181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:41:02 compute-0 sudo[423181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:41:02 compute-0 sudo[423181]: pam_unix(sudo:session): session closed for user root
Dec 13 09:41:02 compute-0 sudo[423206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 09:41:02 compute-0 sudo[423206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:41:02 compute-0 nova_compute[248510]: 2025-12-13 09:41:02.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:41:02 compute-0 nova_compute[248510]: 2025-12-13 09:41:02.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:41:02 compute-0 podman[423275]: 2025-12-13 09:41:02.957967621 +0000 UTC m=+0.138507292 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:41:03 compute-0 ceph-mon[76537]: pgmap v4214: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:03 compute-0 podman[423275]: 2025-12-13 09:41:03.089713053 +0000 UTC m=+0.270252714 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 09:41:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4215: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 09:41:04 compute-0 sudo[423206]: pam_unix(sudo:session): session closed for user root
Dec 13 09:41:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:41:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:41:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:41:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:41:04 compute-0 sudo[423463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:41:04 compute-0 sudo[423463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:41:04 compute-0 sudo[423463]: pam_unix(sudo:session): session closed for user root
Dec 13 09:41:04 compute-0 sudo[423488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:41:04 compute-0 sudo[423488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:41:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:41:04 compute-0 sudo[423488]: pam_unix(sudo:session): session closed for user root
Dec 13 09:41:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:41:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:41:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:41:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:41:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:41:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:41:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:41:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:41:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:41:04 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:41:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:41:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:41:04 compute-0 sudo[423544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:41:04 compute-0 sudo[423544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:41:04 compute-0 sudo[423544]: pam_unix(sudo:session): session closed for user root
Dec 13 09:41:04 compute-0 sudo[423569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:41:04 compute-0 sudo[423569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:41:05 compute-0 ceph-mon[76537]: pgmap v4215: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 09:41:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:41:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:41:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:41:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:41:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:41:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:41:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:41:05 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:41:05 compute-0 podman[423606]: 2025-12-13 09:41:05.194785688 +0000 UTC m=+0.042551688 container create 6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bohr, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:41:05 compute-0 systemd[1]: Started libpod-conmon-6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d.scope.
Dec 13 09:41:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:41:05 compute-0 podman[423606]: 2025-12-13 09:41:05.175210412 +0000 UTC m=+0.022976442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:41:05 compute-0 podman[423606]: 2025-12-13 09:41:05.27257565 +0000 UTC m=+0.120341690 container init 6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 09:41:05 compute-0 podman[423606]: 2025-12-13 09:41:05.277728308 +0000 UTC m=+0.125494318 container start 6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:41:05 compute-0 peaceful_bohr[423622]: 167 167
Dec 13 09:41:05 compute-0 systemd[1]: libpod-6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d.scope: Deactivated successfully.
Dec 13 09:41:05 compute-0 podman[423606]: 2025-12-13 09:41:05.294775711 +0000 UTC m=+0.142541751 container attach 6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 09:41:05 compute-0 podman[423606]: 2025-12-13 09:41:05.295878009 +0000 UTC m=+0.143644009 container died 6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bohr, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:41:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e8d855cdb43757368984e7241ff5fb45ade656de37a7d84a177ee0a7c2026ad-merged.mount: Deactivated successfully.
Dec 13 09:41:05 compute-0 podman[423606]: 2025-12-13 09:41:05.341218995 +0000 UTC m=+0.188984995 container remove 6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:41:05 compute-0 systemd[1]: libpod-conmon-6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d.scope: Deactivated successfully.
Dec 13 09:41:05 compute-0 podman[423646]: 2025-12-13 09:41:05.500514351 +0000 UTC m=+0.044782063 container create 427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:41:05 compute-0 systemd[1]: Started libpod-conmon-427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8.scope.
Dec 13 09:41:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:41:05 compute-0 podman[423646]: 2025-12-13 09:41:05.483054678 +0000 UTC m=+0.027322410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97670aec2a46754cc310c07b6fed053849783622524d83394648b6bc4c2ac4b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97670aec2a46754cc310c07b6fed053849783622524d83394648b6bc4c2ac4b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97670aec2a46754cc310c07b6fed053849783622524d83394648b6bc4c2ac4b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97670aec2a46754cc310c07b6fed053849783622524d83394648b6bc4c2ac4b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97670aec2a46754cc310c07b6fed053849783622524d83394648b6bc4c2ac4b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:05 compute-0 podman[423646]: 2025-12-13 09:41:05.618061271 +0000 UTC m=+0.162329073 container init 427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ganguly, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 09:41:05 compute-0 podman[423646]: 2025-12-13 09:41:05.63370355 +0000 UTC m=+0.177971262 container start 427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:41:05 compute-0 podman[423646]: 2025-12-13 09:41:05.63813356 +0000 UTC m=+0.182401312 container attach 427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ganguly, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 13 09:41:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4216: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 09:41:05 compute-0 nova_compute[248510]: 2025-12-13 09:41:05.901 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:06 compute-0 wonderful_ganguly[423663]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:41:06 compute-0 wonderful_ganguly[423663]: --> All data devices are unavailable
Dec 13 09:41:06 compute-0 systemd[1]: libpod-427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8.scope: Deactivated successfully.
Dec 13 09:41:06 compute-0 podman[423646]: 2025-12-13 09:41:06.137692098 +0000 UTC m=+0.681959810 container died 427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:41:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-97670aec2a46754cc310c07b6fed053849783622524d83394648b6bc4c2ac4b3-merged.mount: Deactivated successfully.
Dec 13 09:41:06 compute-0 podman[423646]: 2025-12-13 09:41:06.18005677 +0000 UTC m=+0.724324482 container remove 427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ganguly, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:41:06 compute-0 systemd[1]: libpod-conmon-427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8.scope: Deactivated successfully.
Dec 13 09:41:06 compute-0 sudo[423569]: pam_unix(sudo:session): session closed for user root
Dec 13 09:41:06 compute-0 sudo[423695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:41:06 compute-0 sudo[423695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:41:06 compute-0 sudo[423695]: pam_unix(sudo:session): session closed for user root
Dec 13 09:41:06 compute-0 sudo[423720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:41:06 compute-0 sudo[423720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:41:06 compute-0 podman[423757]: 2025-12-13 09:41:06.651040048 +0000 UTC m=+0.023851203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:41:06 compute-0 nova_compute[248510]: 2025-12-13 09:41:06.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:07 compute-0 podman[423757]: 2025-12-13 09:41:07.328556106 +0000 UTC m=+0.701367261 container create dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 09:41:07 compute-0 ceph-mon[76537]: pgmap v4216: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 09:41:07 compute-0 systemd[1]: Started libpod-conmon-dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf.scope.
Dec 13 09:41:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:41:07 compute-0 podman[423757]: 2025-12-13 09:41:07.626630449 +0000 UTC m=+0.999441594 container init dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcclintock, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 09:41:07 compute-0 podman[423757]: 2025-12-13 09:41:07.634429323 +0000 UTC m=+1.007240468 container start dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 09:41:07 compute-0 podman[423757]: 2025-12-13 09:41:07.637803777 +0000 UTC m=+1.010614942 container attach dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcclintock, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:41:07 compute-0 compassionate_mcclintock[423773]: 167 167
Dec 13 09:41:07 compute-0 systemd[1]: libpod-dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf.scope: Deactivated successfully.
Dec 13 09:41:07 compute-0 podman[423757]: 2025-12-13 09:41:07.640701869 +0000 UTC m=+1.013513014 container died dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 09:41:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a53f8d82d53188a9ed09f214196a0d81c8f5f7710d4508ad905fd8bee27c2ebb-merged.mount: Deactivated successfully.
Dec 13 09:41:07 compute-0 podman[423757]: 2025-12-13 09:41:07.684038015 +0000 UTC m=+1.056849200 container remove dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcclintock, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 09:41:07 compute-0 systemd[1]: libpod-conmon-dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf.scope: Deactivated successfully.
Dec 13 09:41:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4217: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 09:41:07 compute-0 podman[423796]: 2025-12-13 09:41:07.906539382 +0000 UTC m=+0.061510769 container create 90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:41:07 compute-0 systemd[1]: Started libpod-conmon-90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9.scope.
Dec 13 09:41:07 compute-0 podman[423796]: 2025-12-13 09:41:07.873610224 +0000 UTC m=+0.028581701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:41:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:41:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a46c8f940ba887e54058fc54e5a4bd84e676c080ad41133446882d0a0dbe103/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a46c8f940ba887e54058fc54e5a4bd84e676c080ad41133446882d0a0dbe103/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a46c8f940ba887e54058fc54e5a4bd84e676c080ad41133446882d0a0dbe103/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a46c8f940ba887e54058fc54e5a4bd84e676c080ad41133446882d0a0dbe103/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:08 compute-0 podman[423796]: 2025-12-13 09:41:08.002764162 +0000 UTC m=+0.157735559 container init 90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_panini, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 09:41:08 compute-0 podman[423796]: 2025-12-13 09:41:08.010794171 +0000 UTC m=+0.165765558 container start 90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 09:41:08 compute-0 podman[423796]: 2025-12-13 09:41:08.015059027 +0000 UTC m=+0.170030454 container attach 90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_panini, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 09:41:08 compute-0 eager_panini[423812]: {
Dec 13 09:41:08 compute-0 eager_panini[423812]:     "0": [
Dec 13 09:41:08 compute-0 eager_panini[423812]:         {
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "devices": [
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "/dev/loop3"
Dec 13 09:41:08 compute-0 eager_panini[423812]:             ],
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_name": "ceph_lv0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_size": "21470642176",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "name": "ceph_lv0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "tags": {
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.cluster_name": "ceph",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.crush_device_class": "",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.encrypted": "0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.objectstore": "bluestore",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.osd_id": "0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.type": "block",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.vdo": "0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.with_tpm": "0"
Dec 13 09:41:08 compute-0 eager_panini[423812]:             },
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "type": "block",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "vg_name": "ceph_vg0"
Dec 13 09:41:08 compute-0 eager_panini[423812]:         }
Dec 13 09:41:08 compute-0 eager_panini[423812]:     ],
Dec 13 09:41:08 compute-0 eager_panini[423812]:     "1": [
Dec 13 09:41:08 compute-0 eager_panini[423812]:         {
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "devices": [
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "/dev/loop4"
Dec 13 09:41:08 compute-0 eager_panini[423812]:             ],
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_name": "ceph_lv1",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_size": "21470642176",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "name": "ceph_lv1",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "tags": {
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.cluster_name": "ceph",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.crush_device_class": "",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.encrypted": "0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.objectstore": "bluestore",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.osd_id": "1",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.type": "block",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.vdo": "0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.with_tpm": "0"
Dec 13 09:41:08 compute-0 eager_panini[423812]:             },
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "type": "block",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "vg_name": "ceph_vg1"
Dec 13 09:41:08 compute-0 eager_panini[423812]:         }
Dec 13 09:41:08 compute-0 eager_panini[423812]:     ],
Dec 13 09:41:08 compute-0 eager_panini[423812]:     "2": [
Dec 13 09:41:08 compute-0 eager_panini[423812]:         {
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "devices": [
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "/dev/loop5"
Dec 13 09:41:08 compute-0 eager_panini[423812]:             ],
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_name": "ceph_lv2",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_size": "21470642176",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "name": "ceph_lv2",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "tags": {
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.cluster_name": "ceph",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.crush_device_class": "",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.encrypted": "0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.objectstore": "bluestore",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.osd_id": "2",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.type": "block",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.vdo": "0",
Dec 13 09:41:08 compute-0 eager_panini[423812]:                 "ceph.with_tpm": "0"
Dec 13 09:41:08 compute-0 eager_panini[423812]:             },
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "type": "block",
Dec 13 09:41:08 compute-0 eager_panini[423812]:             "vg_name": "ceph_vg2"
Dec 13 09:41:08 compute-0 eager_panini[423812]:         }
Dec 13 09:41:08 compute-0 eager_panini[423812]:     ]
Dec 13 09:41:08 compute-0 eager_panini[423812]: }
Dec 13 09:41:08 compute-0 systemd[1]: libpod-90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9.scope: Deactivated successfully.
Dec 13 09:41:08 compute-0 podman[423796]: 2025-12-13 09:41:08.352120358 +0000 UTC m=+0.507091765 container died 90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_panini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 09:41:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a46c8f940ba887e54058fc54e5a4bd84e676c080ad41133446882d0a0dbe103-merged.mount: Deactivated successfully.
Dec 13 09:41:08 compute-0 podman[423796]: 2025-12-13 09:41:08.395241379 +0000 UTC m=+0.550212766 container remove 90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_panini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:41:08 compute-0 systemd[1]: libpod-conmon-90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9.scope: Deactivated successfully.
Dec 13 09:41:08 compute-0 sudo[423720]: pam_unix(sudo:session): session closed for user root
Dec 13 09:41:08 compute-0 sudo[423832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:41:08 compute-0 sudo[423832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:41:08 compute-0 sudo[423832]: pam_unix(sudo:session): session closed for user root
Dec 13 09:41:08 compute-0 sudo[423857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:41:08 compute-0 sudo[423857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:41:08 compute-0 podman[423893]: 2025-12-13 09:41:08.914451035 +0000 UTC m=+0.045519081 container create ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_banach, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:41:08 compute-0 systemd[1]: Started libpod-conmon-ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7.scope.
Dec 13 09:41:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:41:08 compute-0 podman[423893]: 2025-12-13 09:41:08.892528011 +0000 UTC m=+0.023596107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:41:09 compute-0 podman[423893]: 2025-12-13 09:41:09.004339738 +0000 UTC m=+0.135407804 container init ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_banach, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 09:41:09 compute-0 podman[423893]: 2025-12-13 09:41:09.014224733 +0000 UTC m=+0.145292769 container start ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 09:41:09 compute-0 podman[423893]: 2025-12-13 09:41:09.01810828 +0000 UTC m=+0.149176316 container attach ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_banach, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:41:09 compute-0 systemd[1]: libpod-ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7.scope: Deactivated successfully.
Dec 13 09:41:09 compute-0 lucid_banach[423910]: 167 167
Dec 13 09:41:09 compute-0 conmon[423910]: conmon ac60e454ea985ea618c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7.scope/container/memory.events
Dec 13 09:41:09 compute-0 podman[423893]: 2025-12-13 09:41:09.022255813 +0000 UTC m=+0.153323879 container died ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_banach, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:41:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-86d898c0a3e0b0bc9e2e787d710a8a834a90a35b4c5ab7d287abac840c8c5402-merged.mount: Deactivated successfully.
Dec 13 09:41:09 compute-0 podman[423893]: 2025-12-13 09:41:09.074573672 +0000 UTC m=+0.205641708 container remove ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_banach, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:41:09 compute-0 systemd[1]: libpod-conmon-ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7.scope: Deactivated successfully.
Dec 13 09:41:09 compute-0 podman[423932]: 2025-12-13 09:41:09.275350829 +0000 UTC m=+0.048552697 container create f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:41:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:41:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Dec 13 09:41:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Dec 13 09:41:09 compute-0 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Dec 13 09:41:09 compute-0 systemd[1]: Started libpod-conmon-f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236.scope.
Dec 13 09:41:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:41:09 compute-0 podman[423932]: 2025-12-13 09:41:09.255485226 +0000 UTC m=+0.028687074 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a01ac19fe98a9964a012998fc998858200d53990ed0269c60f876104e9fc36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a01ac19fe98a9964a012998fc998858200d53990ed0269c60f876104e9fc36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a01ac19fe98a9964a012998fc998858200d53990ed0269c60f876104e9fc36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a01ac19fe98a9964a012998fc998858200d53990ed0269c60f876104e9fc36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:41:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:41:09
Dec 13 09:41:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:41:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:41:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr', '.rgw.root', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups']
Dec 13 09:41:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:41:09 compute-0 podman[423932]: 2025-12-13 09:41:09.66468412 +0000 UTC m=+0.437885988 container init f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nobel, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:41:09 compute-0 podman[423932]: 2025-12-13 09:41:09.672175666 +0000 UTC m=+0.445377514 container start f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nobel, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:41:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4219: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 09:41:09 compute-0 ceph-mon[76537]: pgmap v4217: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 09:41:09 compute-0 ceph-mon[76537]: osdmap e330: 3 total, 3 up, 3 in
Dec 13 09:41:09 compute-0 podman[423932]: 2025-12-13 09:41:09.923341574 +0000 UTC m=+0.696543452 container attach f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:41:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:41:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:41:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:41:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:41:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:41:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:41:10 compute-0 lvm[424031]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:41:10 compute-0 lvm[424031]: VG ceph_vg2 finished
Dec 13 09:41:10 compute-0 lvm[424026]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:41:10 compute-0 lvm[424026]: VG ceph_vg0 finished
Dec 13 09:41:10 compute-0 lvm[424028]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:41:10 compute-0 lvm[424028]: VG ceph_vg1 finished
Dec 13 09:41:10 compute-0 elastic_nobel[423949]: {}
Dec 13 09:41:10 compute-0 systemd[1]: libpod-f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236.scope: Deactivated successfully.
Dec 13 09:41:10 compute-0 systemd[1]: libpod-f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236.scope: Consumed 1.465s CPU time.
Dec 13 09:41:10 compute-0 podman[423932]: 2025-12-13 09:41:10.607256221 +0000 UTC m=+1.380458109 container died f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nobel, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 09:41:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2a01ac19fe98a9964a012998fc998858200d53990ed0269c60f876104e9fc36-merged.mount: Deactivated successfully.
Dec 13 09:41:10 compute-0 podman[423932]: 2025-12-13 09:41:10.649185212 +0000 UTC m=+1.422387060 container remove f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 09:41:10 compute-0 systemd[1]: libpod-conmon-f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236.scope: Deactivated successfully.
Dec 13 09:41:10 compute-0 sudo[423857]: pam_unix(sudo:session): session closed for user root
Dec 13 09:41:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:41:10 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:41:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:41:10 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:41:10 compute-0 sudo[424046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:41:10 compute-0 sudo[424046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:41:10 compute-0 sudo[424046]: pam_unix(sudo:session): session closed for user root
Dec 13 09:41:10 compute-0 nova_compute[248510]: 2025-12-13 09:41:10.905 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:41:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:41:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:41:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:41:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:41:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:41:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:41:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:41:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:41:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:41:11 compute-0 ceph-mon[76537]: pgmap v4219: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 09:41:11 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:41:11 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:41:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4220: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 09:41:11 compute-0 nova_compute[248510]: 2025-12-13 09:41:11.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:41:11 compute-0 nova_compute[248510]: 2025-12-13 09:41:11.913 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:12 compute-0 ceph-mon[76537]: pgmap v4220: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 09:41:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4221: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Dec 13 09:41:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:41:14 compute-0 ceph-mon[76537]: pgmap v4221: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Dec 13 09:41:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:41:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045476058' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:41:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:41:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045476058' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:41:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4222: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3045476058' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:41:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3045476058' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:41:15 compute-0 nova_compute[248510]: 2025-12-13 09:41:15.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:16 compute-0 ceph-mon[76537]: pgmap v4222: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:16 compute-0 nova_compute[248510]: 2025-12-13 09:41:16.957 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4223: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:18 compute-0 podman[424072]: 2025-12-13 09:41:17.999587379 +0000 UTC m=+0.088033597 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:41:18 compute-0 podman[424071]: 2025-12-13 09:41:18.037283525 +0000 UTC m=+0.126258737 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:41:18 compute-0 ceph-mon[76537]: pgmap v4223: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:41:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4224: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:20 compute-0 podman[424116]: 2025-12-13 09:41:20.001796599 +0000 UTC m=+0.090490138 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 09:41:20 compute-0 nova_compute[248510]: 2025-12-13 09:41:20.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:41:20 compute-0 nova_compute[248510]: 2025-12-13 09:41:20.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 09:41:20 compute-0 nova_compute[248510]: 2025-12-13 09:41:20.908 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:21 compute-0 ceph-mon[76537]: pgmap v4224: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4225: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5435278116822236e-05 of space, bias 1.0, pg target 0.004630583435046671 quantized to 32 (current 32)
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.826108195861351e-06 of space, bias 1.0, pg target 0.0011478324587584053 quantized to 32 (current 32)
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.968770392745114e-07 of space, bias 4.0, pg target 0.0007162524471294137 quantized to 16 (current 32)
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:41:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:41:21 compute-0 nova_compute[248510]: 2025-12-13 09:41:21.960 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:23 compute-0 ceph-mon[76537]: pgmap v4225: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4226: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:41:25 compute-0 ceph-mon[76537]: pgmap v4226: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4227: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:25 compute-0 nova_compute[248510]: 2025-12-13 09:41:25.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:26 compute-0 ceph-mon[76537]: pgmap v4227: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:27 compute-0 nova_compute[248510]: 2025-12-13 09:41:27.001 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4228: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:28 compute-0 nova_compute[248510]: 2025-12-13 09:41:28.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:41:28 compute-0 ceph-mon[76537]: pgmap v4228: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.590691) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618889590774, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2077, "num_deletes": 253, "total_data_size": 3669057, "memory_usage": 3714720, "flush_reason": "Manual Compaction"}
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618889608723, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 2126493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82142, "largest_seqno": 84218, "table_properties": {"data_size": 2119686, "index_size": 3620, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17702, "raw_average_key_size": 20, "raw_value_size": 2104405, "raw_average_value_size": 2493, "num_data_blocks": 165, "num_entries": 844, "num_filter_entries": 844, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618657, "oldest_key_time": 1765618657, "file_creation_time": 1765618889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 18088 microseconds, and 5881 cpu microseconds.
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.608790) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 2126493 bytes OK
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.608819) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.611568) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.611593) EVENT_LOG_v1 {"time_micros": 1765618889611585, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.611621) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3660301, prev total WAL file size 3660301, number of live WAL files 2.
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.613313) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353133' seq:72057594037927935, type:22 .. '6D6772737461740033373634' seq:0, type:0; will stop at (end)
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(2076KB)], [197(11MB)]
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618889613381, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 13673695, "oldest_snapshot_seqno": -1}
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 10164 keys, 11645808 bytes, temperature: kUnknown
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618889707819, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 11645808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11583645, "index_size": 35618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 266365, "raw_average_key_size": 26, "raw_value_size": 11408266, "raw_average_value_size": 1122, "num_data_blocks": 1372, "num_entries": 10164, "num_filter_entries": 10164, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.708295) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 11645808 bytes
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.711605) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.6 rd, 123.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.0 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(11.9) write-amplify(5.5) OK, records in: 10580, records dropped: 416 output_compression: NoCompression
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.711636) EVENT_LOG_v1 {"time_micros": 1765618889711621, "job": 124, "event": "compaction_finished", "compaction_time_micros": 94578, "compaction_time_cpu_micros": 30022, "output_level": 6, "num_output_files": 1, "total_output_size": 11645808, "num_input_records": 10580, "num_output_records": 10164, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618889712598, "job": 124, "event": "table_file_deletion", "file_number": 199}
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618889716818, "job": 124, "event": "table_file_deletion", "file_number": 197}
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.613121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.717015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.717026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.717028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.717030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:41:29 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.717033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:41:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4229: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:30 compute-0 nova_compute[248510]: 2025-12-13 09:41:30.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:31 compute-0 ceph-mon[76537]: pgmap v4229: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.610111) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618891610160, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 270, "num_deletes": 251, "total_data_size": 42860, "memory_usage": 49048, "flush_reason": "Manual Compaction"}
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618891613106, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 42752, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84219, "largest_seqno": 84488, "table_properties": {"data_size": 40888, "index_size": 92, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4821, "raw_average_key_size": 18, "raw_value_size": 37294, "raw_average_value_size": 141, "num_data_blocks": 4, "num_entries": 264, "num_filter_entries": 264, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618890, "oldest_key_time": 1765618890, "file_creation_time": 1765618891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 3034 microseconds, and 882 cpu microseconds.
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.613144) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 42752 bytes OK
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.613161) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.614419) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.614432) EVENT_LOG_v1 {"time_micros": 1765618891614428, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.614451) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 40791, prev total WAL file size 40791, number of live WAL files 2.
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.614840) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(41KB)], [200(11MB)]
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618891614929, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 11688560, "oldest_snapshot_seqno": -1}
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 9919 keys, 9856043 bytes, temperature: kUnknown
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618891698380, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 9856043, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9797209, "index_size": 32929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24837, "raw_key_size": 261967, "raw_average_key_size": 26, "raw_value_size": 9627727, "raw_average_value_size": 970, "num_data_blocks": 1250, "num_entries": 9919, "num_filter_entries": 9919, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.698688) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 9856043 bytes
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.699834) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.9 rd, 118.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 11.1 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(503.9) write-amplify(230.5) OK, records in: 10428, records dropped: 509 output_compression: NoCompression
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.699852) EVENT_LOG_v1 {"time_micros": 1765618891699844, "job": 126, "event": "compaction_finished", "compaction_time_micros": 83553, "compaction_time_cpu_micros": 30026, "output_level": 6, "num_output_files": 1, "total_output_size": 9856043, "num_input_records": 10428, "num_output_records": 9919, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618891699983, "job": 126, "event": "table_file_deletion", "file_number": 202}
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618891702295, "job": 126, "event": "table_file_deletion", "file_number": 200}
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.614707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.702327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.702332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.702333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.702335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:41:31 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.702337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:41:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4230: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:32 compute-0 nova_compute[248510]: 2025-12-13 09:41:32.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:33 compute-0 ceph-mon[76537]: pgmap v4230: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4231: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:41:35 compute-0 ceph-mon[76537]: pgmap v4231: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4232: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:35 compute-0 nova_compute[248510]: 2025-12-13 09:41:35.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:36 compute-0 nova_compute[248510]: 2025-12-13 09:41:36.781 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:41:37 compute-0 nova_compute[248510]: 2025-12-13 09:41:37.008 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:37 compute-0 ceph-mon[76537]: pgmap v4232: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4233: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:39 compute-0 ceph-mon[76537]: pgmap v4233: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:41:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4234: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:41:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:41:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:41:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:41:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:41:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:41:40 compute-0 nova_compute[248510]: 2025-12-13 09:41:40.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:41 compute-0 ceph-mon[76537]: pgmap v4234: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4235: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:42 compute-0 nova_compute[248510]: 2025-12-13 09:41:42.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:43 compute-0 ceph-mon[76537]: pgmap v4235: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4236: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:41:45 compute-0 ceph-mon[76537]: pgmap v4236: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4237: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:45 compute-0 nova_compute[248510]: 2025-12-13 09:41:45.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:47 compute-0 nova_compute[248510]: 2025-12-13 09:41:47.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:47 compute-0 ceph-mon[76537]: pgmap v4237: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4238: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:47 compute-0 nova_compute[248510]: 2025-12-13 09:41:47.825 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:41:48 compute-0 nova_compute[248510]: 2025-12-13 09:41:48.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:41:48 compute-0 podman[424138]: 2025-12-13 09:41:48.97019027 +0000 UTC m=+0.055876909 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:41:49 compute-0 podman[424137]: 2025-12-13 09:41:49.001970559 +0000 UTC m=+0.090406696 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Dec 13 09:41:49 compute-0 ceph-mon[76537]: pgmap v4238: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:41:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4239: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:50 compute-0 nova_compute[248510]: 2025-12-13 09:41:50.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:50 compute-0 podman[424181]: 2025-12-13 09:41:50.96833744 +0000 UTC m=+0.058812002 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Dec 13 09:41:51 compute-0 ceph-mon[76537]: pgmap v4239: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4240: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:52 compute-0 nova_compute[248510]: 2025-12-13 09:41:52.017 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:52 compute-0 nova_compute[248510]: 2025-12-13 09:41:52.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:41:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4241: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:53 compute-0 ceph-mon[76537]: pgmap v4240: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:41:54 compute-0 ceph-mon[76537]: pgmap v4241: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:41:55.476 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:41:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:41:55.477 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:41:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:41:55.477 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:41:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4242: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:55 compute-0 nova_compute[248510]: 2025-12-13 09:41:55.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:41:55 compute-0 nova_compute[248510]: 2025-12-13 09:41:55.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:56 compute-0 nova_compute[248510]: 2025-12-13 09:41:56.503 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:41:56 compute-0 nova_compute[248510]: 2025-12-13 09:41:56.503 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:41:56 compute-0 nova_compute[248510]: 2025-12-13 09:41:56.504 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:41:56 compute-0 nova_compute[248510]: 2025-12-13 09:41:56.504 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:41:56 compute-0 nova_compute[248510]: 2025-12-13 09:41:56.504 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:41:56 compute-0 ceph-mon[76537]: pgmap v4242: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.019 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:41:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:41:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082699147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.062 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.217 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.218 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3514MB free_disk=59.987355314195156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.219 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.219 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.326 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.327 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.360 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:41:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4243: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:41:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640706481' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:41:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2082699147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:41:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2640706481' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.954 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.962 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.994 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.997 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:41:57 compute-0 nova_compute[248510]: 2025-12-13 09:41:57.998 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:41:59 compute-0 ceph-mon[76537]: pgmap v4243: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:41:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4244: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:41:59 compute-0 nova_compute[248510]: 2025-12-13 09:41:59.998 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:41:59 compute-0 nova_compute[248510]: 2025-12-13 09:41:59.999 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:42:00 compute-0 nova_compute[248510]: 2025-12-13 09:41:59.999 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:42:00 compute-0 nova_compute[248510]: 2025-12-13 09:42:00.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:01 compute-0 ceph-mon[76537]: pgmap v4244: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:01 compute-0 nova_compute[248510]: 2025-12-13 09:42:01.540 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:42:01 compute-0 nova_compute[248510]: 2025-12-13 09:42:01.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:42:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4245: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:02 compute-0 nova_compute[248510]: 2025-12-13 09:42:02.022 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:03 compute-0 ceph-mon[76537]: pgmap v4245: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:03 compute-0 nova_compute[248510]: 2025-12-13 09:42:03.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:42:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4246: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:03 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.74 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 496 writes, 1133 keys, 496 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s
                                           Interval WAL: 496 writes, 229 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:42:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:42:04 compute-0 nova_compute[248510]: 2025-12-13 09:42:04.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:42:04 compute-0 nova_compute[248510]: 2025-12-13 09:42:04.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:42:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4247: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:05 compute-0 nova_compute[248510]: 2025-12-13 09:42:05.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:05 compute-0 ceph-mon[76537]: pgmap v4246: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:07 compute-0 nova_compute[248510]: 2025-12-13 09:42:07.070 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:07 compute-0 ceph-mon[76537]: pgmap v4247: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:07 compute-0 nova_compute[248510]: 2025-12-13 09:42:07.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:42:07 compute-0 nova_compute[248510]: 2025-12-13 09:42:07.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 09:42:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4248: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:08 compute-0 nova_compute[248510]: 2025-12-13 09:42:08.261 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 09:42:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:42:09
Dec 13 09:42:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:42:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:42:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.meta']
Dec 13 09:42:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:42:09 compute-0 nova_compute[248510]: 2025-12-13 09:42:09.714 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:42:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4249: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:42:09 compute-0 ceph-mon[76537]: pgmap v4248: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:42:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:42:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:42:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:42:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:42:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:42:10 compute-0 sudo[424246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:42:10 compute-0 sudo[424246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:42:10 compute-0 sudo[424246]: pam_unix(sudo:session): session closed for user root
Dec 13 09:42:10 compute-0 nova_compute[248510]: 2025-12-13 09:42:10.932 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:11 compute-0 sudo[424271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:42:11 compute-0 sudo[424271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:42:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:11 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7801.8 total, 600.0 interval
                                           Cumulative writes: 49K writes, 190K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.72 writes per sync, written: 0.18 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 525 writes, 1278 keys, 525 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
                                           Interval WAL: 525 writes, 234 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:42:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:42:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:42:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:42:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:42:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:42:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:42:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:42:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:42:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:42:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:42:11 compute-0 sudo[424271]: pam_unix(sudo:session): session closed for user root
Dec 13 09:42:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:42:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:42:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:42:11 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:42:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:42:11 compute-0 ceph-mon[76537]: pgmap v4249: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4250: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:12 compute-0 nova_compute[248510]: 2025-12-13 09:42:12.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:12 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:42:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:42:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:42:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:42:12 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:42:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:42:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:42:12 compute-0 sudo[424329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:42:12 compute-0 sudo[424329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:42:12 compute-0 sudo[424329]: pam_unix(sudo:session): session closed for user root
Dec 13 09:42:12 compute-0 sudo[424354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:42:12 compute-0 sudo[424354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:42:12 compute-0 podman[424392]: 2025-12-13 09:42:12.579869011 +0000 UTC m=+0.040587929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:42:12 compute-0 nova_compute[248510]: 2025-12-13 09:42:12.801 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:42:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4251: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:42:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:42:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:42:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:42:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:42:13 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:42:13 compute-0 podman[424392]: 2025-12-13 09:42:13.988277492 +0000 UTC m=+1.448996420 container create a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclaren, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:42:14 compute-0 systemd[1]: Started libpod-conmon-a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a.scope.
Dec 13 09:42:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:42:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:42:14 compute-0 podman[424392]: 2025-12-13 09:42:14.809786367 +0000 UTC m=+2.270505315 container init a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclaren, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:42:14 compute-0 podman[424392]: 2025-12-13 09:42:14.822336878 +0000 UTC m=+2.283055826 container start a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:42:14 compute-0 systemd[1]: libpod-a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a.scope: Deactivated successfully.
Dec 13 09:42:14 compute-0 quirky_mclaren[424409]: 167 167
Dec 13 09:42:14 compute-0 conmon[424409]: conmon a5de0316ede0394b02bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a.scope/container/memory.events
Dec 13 09:42:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:42:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1186689776' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:42:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:42:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1186689776' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:42:15 compute-0 podman[424392]: 2025-12-13 09:42:15.420872795 +0000 UTC m=+2.881591713 container attach a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:42:15 compute-0 podman[424392]: 2025-12-13 09:42:15.422533906 +0000 UTC m=+2.883252844 container died a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclaren, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 09:42:15 compute-0 ceph-mon[76537]: pgmap v4250: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:15 compute-0 ceph-mon[76537]: pgmap v4251: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4252: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:15 compute-0 nova_compute[248510]: 2025-12-13 09:42:15.934 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:16 compute-0 sshd-session[424426]: Accepted publickey for zuul from 192.168.122.10 port 58402 ssh2: ECDSA SHA256:LSt7YF7gQCPgR0tIb34gZL5shqra8qE8VX5d32veH0w
Dec 13 09:42:16 compute-0 systemd-logind[787]: New session 59 of user zuul.
Dec 13 09:42:16 compute-0 systemd[1]: Started Session 59 of User zuul.
Dec 13 09:42:16 compute-0 sshd-session[424426]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 09:42:16 compute-0 sudo[424430]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 13 09:42:16 compute-0 sudo[424430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:42:17 compute-0 nova_compute[248510]: 2025-12-13 09:42:17.077 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7717a5d7a392ea09ba24430527defec3cca24a5a21236568ae7d3edd14675062-merged.mount: Deactivated successfully.
Dec 13 09:42:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1186689776' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:42:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1186689776' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:42:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4253: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:19 compute-0 ceph-mon[76537]: pgmap v4252: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:19 compute-0 podman[424392]: 2025-12-13 09:42:19.137038666 +0000 UTC m=+6.597757604 container remove a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 09:42:19 compute-0 systemd[1]: libpod-conmon-a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a.scope: Deactivated successfully.
Dec 13 09:42:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:19 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 154K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 443 writes, 954 keys, 443 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                           Interval WAL: 443 writes, 192 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:42:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4254: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:19 compute-0 podman[424502]: 2025-12-13 09:42:19.701565186 +0000 UTC m=+0.389933735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:42:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:42:20 compute-0 podman[424502]: 2025-12-13 09:42:20.280350272 +0000 UTC m=+0.968718731 container create 6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_elgamal, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 09:42:20 compute-0 podman[424498]: 2025-12-13 09:42:20.413114169 +0000 UTC m=+1.116534072 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 09:42:20 compute-0 ceph-mon[76537]: pgmap v4253: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:20 compute-0 podman[424494]: 2025-12-13 09:42:20.428778579 +0000 UTC m=+1.140011296 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Dec 13 09:42:20 compute-0 systemd[1]: Started libpod-conmon-6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac.scope.
Dec 13 09:42:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf2b9857f9081ec73c67c9c8a295dd3ad8b8531624a1a3de561bc3330f00e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf2b9857f9081ec73c67c9c8a295dd3ad8b8531624a1a3de561bc3330f00e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf2b9857f9081ec73c67c9c8a295dd3ad8b8531624a1a3de561bc3330f00e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf2b9857f9081ec73c67c9c8a295dd3ad8b8531624a1a3de561bc3330f00e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf2b9857f9081ec73c67c9c8a295dd3ad8b8531624a1a3de561bc3330f00e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:20 compute-0 podman[424502]: 2025-12-13 09:42:20.72675129 +0000 UTC m=+1.415119769 container init 6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 09:42:20 compute-0 podman[424502]: 2025-12-13 09:42:20.735720562 +0000 UTC m=+1.424089011 container start 6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:42:20 compute-0 podman[424502]: 2025-12-13 09:42:20.922968633 +0000 UTC m=+1.611337092 container attach 6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 09:42:20 compute-0 nova_compute[248510]: 2025-12-13 09:42:20.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:21 compute-0 xenodochial_elgamal[424572]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:42:21 compute-0 xenodochial_elgamal[424572]: --> All data devices are unavailable
Dec 13 09:42:21 compute-0 systemd[1]: libpod-6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac.scope: Deactivated successfully.
Dec 13 09:42:21 compute-0 podman[424502]: 2025-12-13 09:42:21.329239354 +0000 UTC m=+2.017607823 container died 6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4255: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:42:21 compute-0 ceph-mon[76537]: pgmap v4254: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5435278116822236e-05 of space, bias 1.0, pg target 0.004630583435046671 quantized to 32 (current 32)
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.826108195861351e-06 of space, bias 1.0, pg target 0.0011478324587584053 quantized to 32 (current 32)
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.968770392745114e-07 of space, bias 4.0, pg target 0.0007162524471294137 quantized to 16 (current 32)
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:42:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:42:22 compute-0 nova_compute[248510]: 2025-12-13 09:42:22.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dcf2b9857f9081ec73c67c9c8a295dd3ad8b8531624a1a3de561bc3330f00e4-merged.mount: Deactivated successfully.
Dec 13 09:42:22 compute-0 podman[424502]: 2025-12-13 09:42:22.387629723 +0000 UTC m=+3.075998182 container remove 6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_elgamal, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 09:42:22 compute-0 systemd[1]: libpod-conmon-6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac.scope: Deactivated successfully.
Dec 13 09:42:22 compute-0 sudo[424354]: pam_unix(sudo:session): session closed for user root
Dec 13 09:42:22 compute-0 podman[424621]: 2025-12-13 09:42:22.492505377 +0000 UTC m=+1.117887547 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:42:22 compute-0 sudo[424700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:42:22 compute-0 sudo[424700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:42:22 compute-0 sudo[424700]: pam_unix(sudo:session): session closed for user root
Dec 13 09:42:22 compute-0 sudo[424727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:42:22 compute-0 sudo[424727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:42:22 compute-0 podman[424782]: 2025-12-13 09:42:22.905012833 +0000 UTC m=+0.041123732 container create c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 09:42:22 compute-0 systemd[1]: Started libpod-conmon-c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7.scope.
Dec 13 09:42:22 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23186 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:22 compute-0 podman[424782]: 2025-12-13 09:42:22.885658993 +0000 UTC m=+0.021769912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:42:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:42:23 compute-0 podman[424782]: 2025-12-13 09:42:23.005181161 +0000 UTC m=+0.141292090 container init c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:42:23 compute-0 podman[424782]: 2025-12-13 09:42:23.019890737 +0000 UTC m=+0.156001676 container start c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:42:23 compute-0 podman[424782]: 2025-12-13 09:42:23.02525854 +0000 UTC m=+0.161369479 container attach c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:42:23 compute-0 hopeful_wescoff[424799]: 167 167
Dec 13 09:42:23 compute-0 systemd[1]: libpod-c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7.scope: Deactivated successfully.
Dec 13 09:42:23 compute-0 conmon[424799]: conmon c2d6df58c0bdda285d0f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7.scope/container/memory.events
Dec 13 09:42:23 compute-0 podman[424782]: 2025-12-13 09:42:23.029825513 +0000 UTC m=+0.165936432 container died c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 09:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-51da354d36afb062bd30b45df001a9991ac848ad7297a6be9c9ca1edc4b67018-merged.mount: Deactivated successfully.
Dec 13 09:42:23 compute-0 podman[424782]: 2025-12-13 09:42:23.078425101 +0000 UTC m=+0.214536000 container remove c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 09:42:23 compute-0 systemd[1]: libpod-conmon-c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7.scope: Deactivated successfully.
Dec 13 09:42:23 compute-0 podman[424831]: 2025-12-13 09:42:23.28015971 +0000 UTC m=+0.054839372 container create e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jackson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:42:23 compute-0 systemd[1]: Started libpod-conmon-e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a.scope.
Dec 13 09:42:23 compute-0 podman[424831]: 2025-12-13 09:42:23.259426545 +0000 UTC m=+0.034106207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:42:23 compute-0 ceph-mon[76537]: pgmap v4255: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d294c4bc6fba48d05e4f2e8fef8346340749eb76655dbb461a4d7ab09fe4d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d294c4bc6fba48d05e4f2e8fef8346340749eb76655dbb461a4d7ab09fe4d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d294c4bc6fba48d05e4f2e8fef8346340749eb76655dbb461a4d7ab09fe4d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d294c4bc6fba48d05e4f2e8fef8346340749eb76655dbb461a4d7ab09fe4d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:23 compute-0 podman[424831]: 2025-12-13 09:42:23.389872165 +0000 UTC m=+0.164551857 container init e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jackson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:42:23 compute-0 podman[424831]: 2025-12-13 09:42:23.401980666 +0000 UTC m=+0.176660328 container start e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jackson, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 09:42:23 compute-0 podman[424831]: 2025-12-13 09:42:23.40776028 +0000 UTC m=+0.182439962 container attach e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:42:23 compute-0 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 09:42:23 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23188 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:23 compute-0 lucid_jackson[424867]: {
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:     "0": [
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:         {
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "devices": [
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "/dev/loop3"
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             ],
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_name": "ceph_lv0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_size": "21470642176",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "name": "ceph_lv0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "tags": {
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.cluster_name": "ceph",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.crush_device_class": "",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.encrypted": "0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.objectstore": "bluestore",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.osd_id": "0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.type": "block",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.vdo": "0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.with_tpm": "0"
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             },
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "type": "block",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "vg_name": "ceph_vg0"
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:         }
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:     ],
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:     "1": [
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:         {
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "devices": [
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "/dev/loop4"
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             ],
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_name": "ceph_lv1",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_size": "21470642176",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "name": "ceph_lv1",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "tags": {
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.cluster_name": "ceph",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.crush_device_class": "",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.encrypted": "0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.objectstore": "bluestore",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.osd_id": "1",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.type": "block",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.vdo": "0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.with_tpm": "0"
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             },
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "type": "block",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "vg_name": "ceph_vg1"
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:         }
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:     ],
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:     "2": [
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:         {
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "devices": [
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "/dev/loop5"
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             ],
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_name": "ceph_lv2",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_size": "21470642176",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "name": "ceph_lv2",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "tags": {
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.cluster_name": "ceph",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.crush_device_class": "",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.encrypted": "0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.objectstore": "bluestore",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.osd_id": "2",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.type": "block",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.vdo": "0",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:                 "ceph.with_tpm": "0"
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             },
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "type": "block",
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:             "vg_name": "ceph_vg2"
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:         }
Dec 13 09:42:23 compute-0 lucid_jackson[424867]:     ]
Dec 13 09:42:23 compute-0 lucid_jackson[424867]: }
Dec 13 09:42:23 compute-0 systemd[1]: libpod-e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a.scope: Deactivated successfully.
Dec 13 09:42:23 compute-0 podman[424831]: 2025-12-13 09:42:23.772717684 +0000 UTC m=+0.547397346 container died e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:42:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4256: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-42d294c4bc6fba48d05e4f2e8fef8346340749eb76655dbb461a4d7ab09fe4d7-merged.mount: Deactivated successfully.
Dec 13 09:42:23 compute-0 podman[424831]: 2025-12-13 09:42:23.837119264 +0000 UTC m=+0.611798936 container remove e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:42:23 compute-0 systemd[1]: libpod-conmon-e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a.scope: Deactivated successfully.
Dec 13 09:42:23 compute-0 sudo[424727]: pam_unix(sudo:session): session closed for user root
Dec 13 09:42:23 compute-0 sudo[424912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:42:23 compute-0 sudo[424912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:42:23 compute-0 sudo[424912]: pam_unix(sudo:session): session closed for user root
Dec 13 09:42:24 compute-0 sudo[424937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:42:24 compute-0 sudo[424937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:42:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 13 09:42:24 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3746652238' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 09:42:24 compute-0 ceph-mon[76537]: from='client.23186 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:24 compute-0 ceph-mon[76537]: from='client.23188 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:24 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3746652238' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 09:42:24 compute-0 podman[424976]: 2025-12-13 09:42:24.366841421 +0000 UTC m=+0.051862549 container create d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ramanujan, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:42:24 compute-0 systemd[1]: Started libpod-conmon-d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9.scope.
Dec 13 09:42:24 compute-0 podman[424976]: 2025-12-13 09:42:24.343332757 +0000 UTC m=+0.028353895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:42:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:42:24 compute-0 podman[424976]: 2025-12-13 09:42:24.464400384 +0000 UTC m=+0.149421532 container init d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ramanujan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:42:24 compute-0 podman[424976]: 2025-12-13 09:42:24.472363162 +0000 UTC m=+0.157384290 container start d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:42:24 compute-0 podman[424976]: 2025-12-13 09:42:24.476361921 +0000 UTC m=+0.161383069 container attach d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ramanujan, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:42:24 compute-0 serene_ramanujan[424995]: 167 167
Dec 13 09:42:24 compute-0 systemd[1]: libpod-d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9.scope: Deactivated successfully.
Dec 13 09:42:24 compute-0 podman[424976]: 2025-12-13 09:42:24.479326215 +0000 UTC m=+0.164347343 container died d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 09:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc4e8b187745e4bdcd783c35ca6d150b224c1e75ce71063a62fca6065cba4fd5-merged.mount: Deactivated successfully.
Dec 13 09:42:24 compute-0 podman[424976]: 2025-12-13 09:42:24.520224141 +0000 UTC m=+0.205245269 container remove d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ramanujan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:42:24 compute-0 systemd[1]: libpod-conmon-d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9.scope: Deactivated successfully.
Dec 13 09:42:24 compute-0 podman[425038]: 2025-12-13 09:42:24.761799911 +0000 UTC m=+0.049816408 container create ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 09:42:24 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:42:24 compute-0 systemd[1]: Started libpod-conmon-ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e.scope.
Dec 13 09:42:24 compute-0 podman[425038]: 2025-12-13 09:42:24.739983889 +0000 UTC m=+0.028000406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:42:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278968dbeb62ffdd9ec2b44afff4d1bbd226bdf941769c108a5220c8ba212936/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278968dbeb62ffdd9ec2b44afff4d1bbd226bdf941769c108a5220c8ba212936/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278968dbeb62ffdd9ec2b44afff4d1bbd226bdf941769c108a5220c8ba212936/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278968dbeb62ffdd9ec2b44afff4d1bbd226bdf941769c108a5220c8ba212936/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:42:24 compute-0 podman[425038]: 2025-12-13 09:42:24.871401224 +0000 UTC m=+0.159417751 container init ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 09:42:24 compute-0 podman[425038]: 2025-12-13 09:42:24.880174141 +0000 UTC m=+0.168190638 container start ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 09:42:24 compute-0 podman[425038]: 2025-12-13 09:42:24.884030457 +0000 UTC m=+0.172046984 container attach ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:42:25 compute-0 ceph-mon[76537]: pgmap v4256: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:25 compute-0 lvm[425140]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:42:25 compute-0 lvm[425140]: VG ceph_vg0 finished
Dec 13 09:42:25 compute-0 lvm[425139]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:42:25 compute-0 lvm[425139]: VG ceph_vg1 finished
Dec 13 09:42:25 compute-0 lvm[425142]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:42:25 compute-0 lvm[425142]: VG ceph_vg2 finished
Dec 13 09:42:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4257: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:25 compute-0 bold_wilson[425055]: {}
Dec 13 09:42:25 compute-0 systemd[1]: libpod-ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e.scope: Deactivated successfully.
Dec 13 09:42:25 compute-0 systemd[1]: libpod-ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e.scope: Consumed 1.574s CPU time.
Dec 13 09:42:25 compute-0 podman[425038]: 2025-12-13 09:42:25.877679737 +0000 UTC m=+1.165696474 container died ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 09:42:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-278968dbeb62ffdd9ec2b44afff4d1bbd226bdf941769c108a5220c8ba212936-merged.mount: Deactivated successfully.
Dec 13 09:42:25 compute-0 nova_compute[248510]: 2025-12-13 09:42:25.938 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:25 compute-0 podman[425038]: 2025-12-13 09:42:25.97199693 +0000 UTC m=+1.260013427 container remove ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:42:25 compute-0 systemd[1]: libpod-conmon-ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e.scope: Deactivated successfully.
Dec 13 09:42:26 compute-0 sudo[424937]: pam_unix(sudo:session): session closed for user root
Dec 13 09:42:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:42:26 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:42:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:42:26 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:42:26 compute-0 sudo[425157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:42:26 compute-0 sudo[425157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:42:26 compute-0 sudo[425157]: pam_unix(sudo:session): session closed for user root
Dec 13 09:42:27 compute-0 ceph-mon[76537]: pgmap v4257: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:27 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:42:27 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:42:27 compute-0 nova_compute[248510]: 2025-12-13 09:42:27.083 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4258: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:28 compute-0 ovs-vsctl[425211]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 13 09:42:29 compute-0 ceph-mon[76537]: pgmap v4258: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:29 compute-0 virtqemud[248808]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 13 09:42:29 compute-0 virtqemud[248808]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 13 09:42:29 compute-0 virtqemud[248808]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 13 09:42:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4259: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:29 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:42:29 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: cache status {prefix=cache status} (starting...)
Dec 13 09:42:30 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: client ls {prefix=client ls} (starting...)
Dec 13 09:42:30 compute-0 lvm[425545]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:42:30 compute-0 lvm[425545]: VG ceph_vg1 finished
Dec 13 09:42:30 compute-0 lvm[425568]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:42:30 compute-0 lvm[425568]: VG ceph_vg0 finished
Dec 13 09:42:30 compute-0 lvm[425577]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:42:30 compute-0 lvm[425577]: VG ceph_vg2 finished
Dec 13 09:42:30 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23192 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:30 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: damage ls {prefix=damage ls} (starting...)
Dec 13 09:42:30 compute-0 nova_compute[248510]: 2025-12-13 09:42:30.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:30 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump loads {prefix=dump loads} (starting...)
Dec 13 09:42:31 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23194 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:31 compute-0 ceph-mon[76537]: pgmap v4259: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:31 compute-0 ceph-mon[76537]: from='client.23192 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:31 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 13 09:42:31 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 13 09:42:31 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 13 09:42:31 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23196 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:31 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 13 09:42:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec 13 09:42:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596733442' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 13 09:42:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4260: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:31 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 13 09:42:31 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 13 09:42:32 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23200 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:32 compute-0 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: 2025-12-13T09:42:32.037+0000 7f1f704e3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 09:42:32 compute-0 ceph-mgr[76830]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 09:42:32 compute-0 nova_compute[248510]: 2025-12-13 09:42:32.085 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:42:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3143079818' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:42:32 compute-0 ceph-mon[76537]: from='client.23194 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:32 compute-0 ceph-mon[76537]: from='client.23196 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/596733442' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 13 09:42:32 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: ops {prefix=ops} (starting...)
Dec 13 09:42:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec 13 09:42:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3759029824' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 13 09:42:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 13 09:42:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2971111549' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 13 09:42:32 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: session ls {prefix=session ls} (starting...)
Dec 13 09:42:33 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: status {prefix=status} (starting...)
Dec 13 09:42:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 13 09:42:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82207280' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 09:42:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 13 09:42:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3438341048' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 13 09:42:33 compute-0 ceph-mon[76537]: pgmap v4260: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:33 compute-0 ceph-mon[76537]: from='client.23200 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3143079818' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:42:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3759029824' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 13 09:42:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2971111549' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 13 09:42:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4261: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:34 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23214 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 13 09:42:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/216254374' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 09:42:34 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23216 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/82207280' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 09:42:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3438341048' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 13 09:42:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/216254374' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 09:42:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 13 09:42:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629908501' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 09:42:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:42:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec 13 09:42:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2820940915' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 13 09:42:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 09:42:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684076411' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 09:42:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 13 09:42:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2750013254' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 13 09:42:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 13 09:42:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3832629167' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 09:42:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4262: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:35 compute-0 nova_compute[248510]: 2025-12-13 09:42:35.942 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:36 compute-0 ceph-mon[76537]: pgmap v4261: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:36 compute-0 ceph-mon[76537]: from='client.23214 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:36 compute-0 ceph-mon[76537]: from='client.23216 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/629908501' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 09:42:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2820940915' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 13 09:42:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1684076411' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 09:42:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2750013254' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 13 09:42:36 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23228 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:36 compute-0 ceph-mgr[76830]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 13 09:42:36 compute-0 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: 2025-12-13T09:42:36.063+0000 7f1f704e3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 13 09:42:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 13 09:42:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3980690016' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 09:42:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 13 09:42:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/175755030' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 13 09:42:37 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23234 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:37 compute-0 nova_compute[248510]: 2025-12-13 09:42:37.087 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:37 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3832629167' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 09:42:37 compute-0 ceph-mon[76537]: pgmap v4262: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:37 compute-0 ceph-mon[76537]: from='client.23228 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:37 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3980690016' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 09:42:37 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/175755030' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 49430528 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:09:59.443393+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 49422336 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:00.443504+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 49422336 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3254905 data_alloc: 218103808 data_used: 4010677
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:01.443680+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 49414144 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:02.443912+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 49414144 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:03.444140+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 49414144 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:04.444312+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 49414144 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:05.444470+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 49414144 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e8400 session 0x562b54407500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3254905 data_alloc: 218103808 data_used: 4010677
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b552fc1c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b54d6b880
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:06.444674+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc0c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc0c00 session 0x562b552fd340
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 61.573890686s of 61.637538910s, submitted: 40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e8400 session 0x562b553461c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b555c9c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b54d2c000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b566bf180
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b553adc00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b553adc00 session 0x562b54406fc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:07.444860+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:08.445052+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbfb000/0x0/0x4ffc00000, data 0x1d5298f/0x1f11000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:09.445322+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:10.445515+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbfb000/0x0/0x4ffc00000, data 0x1d5298f/0x1f11000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3300675 data_alloc: 218103808 data_used: 4010677
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:11.445688+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:12.445874+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:13.446140+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56672256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:14.446356+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56672256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:15.446548+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbfb000/0x0/0x4ffc00000, data 0x1d5298f/0x1f11000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56672256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3300675 data_alloc: 218103808 data_used: 4010677
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:16.446734+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56672256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:17.446993+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287752192 unmapped: 56664064 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:18.447162+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287752192 unmapped: 56664064 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:19.447385+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287752192 unmapped: 56664064 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.575314522s of 13.690111160s, submitted: 13
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e8400 session 0x562b54950e00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:20.447553+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 56508416 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3302715 data_alloc: 218103808 data_used: 4011205
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:21.447733+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbd7000/0x0/0x4ffc00000, data 0x1d7698f/0x1f35000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 56508416 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:22.447890+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:23.448022+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:24.448170+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:25.448351+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbd7000/0x0/0x4ffc00000, data 0x1d7698f/0x1f35000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbd7000/0x0/0x4ffc00000, data 0x1d7698f/0x1f35000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339323 data_alloc: 218103808 data_used: 10128069
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:26.448607+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:27.448806+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:28.449028+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:29.449218+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:30.449401+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:31.449568+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339323 data_alloc: 218103808 data_used: 10128069
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbd7000/0x0/0x4ffc00000, data 0x1d7698f/0x1f35000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.049160004s of 12.053641319s, submitted: 1
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:32.449721+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 54878208 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:33.449842+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289603584 unmapped: 54812672 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:34.450016+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec773000/0x0/0x4ffc00000, data 0x21da98f/0x2399000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:35.450266+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:36.450439+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386233 data_alloc: 218103808 data_used: 10975941
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec74f000/0x0/0x4ffc00000, data 0x21fd98f/0x23bc000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:37.450611+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:38.450802+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:39.450953+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec74f000/0x0/0x4ffc00000, data 0x21fd98f/0x23bc000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:40.451086+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:41.451305+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384497 data_alloc: 218103808 data_used: 10975941
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:42.451555+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:43.451746+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec74d000/0x0/0x4ffc00000, data 0x220098f/0x23bf000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:44.451895+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:45.452052+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:46.452264+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384753 data_alloc: 218103808 data_used: 10984133
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:47.452389+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:48.452515+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec74d000/0x0/0x4ffc00000, data 0x220098f/0x23bf000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:49.452756+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec74d000/0x0/0x4ffc00000, data 0x220098f/0x23bf000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:50.452892+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b54e0da40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b55367a40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a1000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a1000 session 0x562b53181c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:51.453104+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384753 data_alloc: 218103808 data_used: 10984133
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d2000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d2000 session 0x562b53219c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.035808563s of 19.402549744s, submitted: 44
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e8400 session 0x562b555c8fc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a1000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a1000 session 0x562b51d10fc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289660928 unmapped: 54755328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b5819ee00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b550fce00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b52946800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b52946800 session 0x562b55367500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:52.453316+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289660928 unmapped: 54755328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:53.453539+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289660928 unmapped: 54755328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:54.453705+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289660928 unmapped: 54755328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:55.453873+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289669120 unmapped: 54747136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec327000/0x0/0x4ffc00000, data 0x2624a01/0x27e5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:56.454119+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414760 data_alloc: 218103808 data_used: 10984133
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289669120 unmapped: 54747136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec327000/0x0/0x4ffc00000, data 0x2624a01/0x27e5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:57.454338+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289669120 unmapped: 54747136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:58.454529+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289669120 unmapped: 54747136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:59.454712+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289677312 unmapped: 54738944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:00.454835+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e8400 session 0x562b5375fc00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289677312 unmapped: 54738944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:01.455027+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a1000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a1000 session 0x562b555c8000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414760 data_alloc: 218103808 data_used: 10984133
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec327000/0x0/0x4ffc00000, data 0x2624a01/0x27e5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289677312 unmapped: 54738944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:02.455204+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289677312 unmapped: 54738944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b55366a80
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.518766403s of 11.703509331s, submitted: 25
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:03.455378+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b5250a1c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec327000/0x0/0x4ffc00000, data 0x2624a01/0x27e5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289677312 unmapped: 54738944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:04.455512+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289677312 unmapped: 54738944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:05.455684+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289685504 unmapped: 54730752 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:06.455894+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415626 data_alloc: 218103808 data_used: 10984133
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289685504 unmapped: 54730752 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:07.456035+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec326000/0x0/0x4ffc00000, data 0x2624a11/0x27e6000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:08.456202+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:09.456358+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:10.456521+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec326000/0x0/0x4ffc00000, data 0x2624a11/0x27e6000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:11.456644+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440970 data_alloc: 234881024 data_used: 15234757
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:12.456772+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:13.456969+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec326000/0x0/0x4ffc00000, data 0x2624a11/0x27e6000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:14.457183+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:15.457348+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:16.457534+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440970 data_alloc: 234881024 data_used: 15234757
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:17.457699+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.469920158s of 14.109190941s, submitted: 2
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 293150720 unmapped: 51265536 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebf42000/0x0/0x4ffc00000, data 0x2a00a11/0x2bc2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [0,0,0,0,0,1,1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:18.457865+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 293773312 unmapped: 50642944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:19.458000+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 293773312 unmapped: 50642944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:20.458161+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebcb9000/0x0/0x4ffc00000, data 0x2c83a11/0x2e45000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 293773312 unmapped: 50642944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:21.458291+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480778 data_alloc: 234881024 data_used: 15280837
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:22.458440+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:23.458572+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:24.458722+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebcab000/0x0/0x4ffc00000, data 0x2c97a11/0x2e59000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:25.458885+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:26.459127+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3488080 data_alloc: 234881024 data_used: 15280837
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:27.459338+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:28.459485+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebcab000/0x0/0x4ffc00000, data 0x2c97a11/0x2e59000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:29.460923+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:30.461146+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:31.461336+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489120 data_alloc: 234881024 data_used: 15358661
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:32.461488+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.193249702s of 15.741126060s, submitted: 73
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53219dc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b54381180
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:33.461624+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebcab000/0x0/0x4ffc00000, data 0x2c97a11/0x2e59000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [0,0,0,0,1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294174720 unmapped: 50241536 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:34.461808+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294174720 unmapped: 50241536 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:35.461940+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5297a1c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:36.462131+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396833 data_alloc: 218103808 data_used: 11061957
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:37.462296+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec746000/0x0/0x4ffc00000, data 0x220298f/0x23c1000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:38.462490+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:39.462775+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b5511f340
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b566be1c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:40.462916+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:41.463350+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396833 data_alloc: 218103808 data_used: 11061957
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:42.463577+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec746000/0x0/0x4ffc00000, data 0x220298f/0x23c1000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:43.463708+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.748074532s of 10.684754372s, submitted: 21
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:44.463883+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:45.464058+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e8400 session 0x562b55366fc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:46.464374+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:47.464571+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:48.464852+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:49.465035+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.2 total, 600.0 interval
                                           Cumulative writes: 35K writes, 143K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.83 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2470 writes, 10K keys, 2470 commit groups, 1.0 writes per commit group, ingest: 13.34 MB, 0.02 MB/s
                                           Interval WAL: 2470 writes, 940 syncs, 2.63 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.55 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.035       0      0       0.0       0.0
                                            Sum      1/0    1.55 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.035       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.035       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f23a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f23a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f23a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:50.465149+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:51.465264+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:52.465383+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:53.465520+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57171968 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:54.466887+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57171968 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:55.467021+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57171968 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:56.467195+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57171968 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:57.467330+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57171968 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:58.467523+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:59.467641+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:00.467822+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:01.467978+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:02.468147+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:03.468315+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:04.468490+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:05.468654+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:06.468866+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:07.469056+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:08.469288+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:09.469451+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:10.469608+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:11.469763+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:12.469888+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:13.470007+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:14.470187+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:15.470324+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57147392 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:16.470484+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57147392 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:17.470685+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57139200 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:18.470857+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57139200 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:19.471022+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57139200 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:20.471184+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57139200 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:21.471319+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57139200 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:22.471463+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57139200 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:23.471614+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:24.471827+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:25.472130+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:26.472364+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:27.472507+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:28.472742+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:29.472958+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:30.473146+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:31.473322+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 57122816 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:32.473525+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 57122816 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:33.473664+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 57122816 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b552c7c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b550fcfc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b547ff6c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b54380c40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a1000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 49.722755432s of 49.969383240s, submitted: 8
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a1000 session 0x562b53218e00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54d6b880
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b553461c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b54d2c000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b54406fc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:34.473841+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 56811520 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:35.474044+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 56811520 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:36.474378+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 56811520 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320829 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:37.474578+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 56811520 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:38.474835+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 56811520 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:39.474990+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56803328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:40.475189+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56803328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:41.475391+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56803328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320829 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:42.475596+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:43.475792+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:44.476014+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:45.476192+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:46.476459+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b531808c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320829 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b550fda40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:47.476607+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56786944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b55346fc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.317261696s of 14.469831467s, submitted: 12
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b51d11340
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:48.476970+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56778752 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:49.477130+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56778752 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc7f000/0x0/0x4ffc00000, data 0x1cce98f/0x1e8d000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:50.477298+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:51.477500+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361887 data_alloc: 218103808 data_used: 10613331
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:52.477672+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:53.477803+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:54.477972+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc7f000/0x0/0x4ffc00000, data 0x1cce98f/0x1e8d000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:55.478200+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:56.478439+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361887 data_alloc: 218103808 data_used: 10613331
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:57.478615+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:58.478822+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:59.478956+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc7f000/0x0/0x4ffc00000, data 0x1cce98f/0x1e8d000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:00.479196+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.383279800s of 12.385676384s, submitted: 1
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:01.479368+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 47841280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3430299 data_alloc: 218103808 data_used: 10621523
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec089000/0x0/0x4ffc00000, data 0x28c498f/0x2a83000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:02.479659+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 48250880 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:03.479853+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 48250880 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:04.480043+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 48250880 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:05.480145+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfca000/0x0/0x4ffc00000, data 0x298198f/0x2b40000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [0,0,3,0,0,0,0,0,7,1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:06.480387+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445819 data_alloc: 218103808 data_used: 11854419
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:07.480612+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:08.480769+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:09.480955+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:10.481151+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:11.481296+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445819 data_alloc: 218103808 data_used: 11854419
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:12.481464+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:13.481641+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:14.481777+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:15.481962+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b547ffc00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494fc00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494fc00 session 0x562b54d2c1c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54d6a380
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b5511ec40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:16.482165+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.426490784s of 15.836899757s, submitted: 131
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 47407104 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b552c7180
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b5250bc00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b57d3e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b57d3e000 session 0x562b555c8c40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3483357 data_alloc: 218103808 data_used: 11854419
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54406e00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b552fc540
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:17.482302+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:18.482466+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:19.482656+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:20.482839+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb918000/0x0/0x4ffc00000, data 0x303499f/0x31f4000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:21.482986+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3483357 data_alloc: 218103808 data_used: 11854419
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:22.483190+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:23.483336+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b5250aa80
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:24.483522+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb918000/0x0/0x4ffc00000, data 0x303499f/0x31f4000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297033728 unmapped: 47382528 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b552fce00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:25.483693+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297033728 unmapped: 47382528 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb918000/0x0/0x4ffc00000, data 0x303499f/0x31f4000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980c00 session 0x562b566bea80
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b51d101c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:26.515583+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 47374336 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3485119 data_alloc: 218103808 data_used: 11854419
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:27.515735+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 47374336 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:28.516000+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 47374336 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.240238190s of 12.527070999s, submitted: 14
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:29.516180+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297066496 unmapped: 47349760 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb917000/0x0/0x4ffc00000, data 0x30349af/0x31f5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb917000/0x0/0x4ffc00000, data 0x30349af/0x31f5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:30.516373+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 46686208 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:31.516530+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 46686208 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518799 data_alloc: 234881024 data_used: 17691219
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:32.516674+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 46686208 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:33.516805+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297738240 unmapped: 46678016 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:34.516950+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297754624 unmapped: 46661632 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:35.517095+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb917000/0x0/0x4ffc00000, data 0x30349af/0x31f5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:36.517267+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518671 data_alloc: 234881024 data_used: 17687123
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:37.517397+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:38.517553+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb917000/0x0/0x4ffc00000, data 0x30349af/0x31f5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:39.517782+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.727138519s of 10.814496040s, submitted: 92
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:40.517963+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297926656 unmapped: 46489600 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:41.518117+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557167 data_alloc: 234881024 data_used: 18235987
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:42.518294+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:43.518403+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:44.518616+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:45.518816+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:46.519869+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557167 data_alloc: 234881024 data_used: 18235987
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:47.520062+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:48.520381+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:49.520532+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:50.520684+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:51.520841+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557295 data_alloc: 234881024 data_used: 18240083
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:52.521001+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:53.521151+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297984000 unmapped: 46432256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:54.521311+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297984000 unmapped: 46432256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:55.521435+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297984000 unmapped: 46432256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b52afdc00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b54951a40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:56.521611+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.452621460s of 16.708354950s, submitted: 31
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b54e0d180
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449686 data_alloc: 218103808 data_used: 11854419
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:57.521764+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:58.521906+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:59.522088+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfc0000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:00.522281+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b55347500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b50f49c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:01.522441+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b55367880
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfc0000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:02.522627+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:03.522774+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:04.522964+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:05.523152+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:06.523376+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:07.523561+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:08.523710+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:09.523911+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:10.524059+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:11.524245+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:12.524430+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:13.524691+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:14.524861+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:15.525124+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:16.525345+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:17.525546+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:18.525696+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:19.525879+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:20.526114+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:21.526335+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:22.526513+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:23.526707+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:24.526888+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:25.527201+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:26.527422+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:27.527599+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:28.527759+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:29.527927+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:30.528169+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:31.529210+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:32.529384+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:33.529557+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:34.529731+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:35.529964+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:36.530411+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:37.530627+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:38.530838+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:39.531054+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:40.531297+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:41.531563+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:42.531747+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:43.532018+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:44.532198+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:45.532451+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:46.532708+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:47.532991+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:48.533165+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:49.533379+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:50.533552+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:51.533732+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:52.533905+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:53.534118+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:54.534322+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:55.534486+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:56.534676+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 60.426128387s of 60.458377838s, submitted: 21
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 293036032 unmapped: 51380224 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b54db96c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b5498ee00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b5511efc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54407dc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:57.534821+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b552c6c40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323283 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:58.535031+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:59.535232+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:00.535446+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:01.535618+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:02.535781+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323283 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:03.535934+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:04.536119+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:05.536296+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:06.536489+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:07.536725+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323283 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:08.536908+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:09.537161+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b550fd340
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:10.537304+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:11.537498+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:12.537689+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323283 data_alloc: 218103808 data_used: 4014675
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b59f63dc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:13.537873+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5497fc00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5497fc00 session 0x562b5819ec40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.514007568s of 16.712116241s, submitted: 33
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5250b500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:14.538028+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:15.538161+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:16.538328+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf24000/0x0/0x4ffc00000, data 0x1a299e1/0x1be8000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:17.538480+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3347951 data_alloc: 218103808 data_used: 7718483
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:18.538642+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf24000/0x0/0x4ffc00000, data 0x1a299e1/0x1be8000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:19.538800+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:20.539022+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:21.539150+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:22.539314+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3347951 data_alloc: 218103808 data_used: 7718483
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf24000/0x0/0x4ffc00000, data 0x1a299e1/0x1be8000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:23.539487+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:24.539676+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:25.539812+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.294775009s of 12.303226471s, submitted: 2
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 45989888 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:26.539982+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297967616 unmapped: 46448640 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:27.540136+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x25a09e1/0x275f000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3425143 data_alloc: 218103808 data_used: 7842387
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 47120384 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:28.540280+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:29.540447+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:30.540620+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec383000/0x0/0x4ffc00000, data 0x25ca9e1/0x2789000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:31.540746+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:32.540925+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429921 data_alloc: 218103808 data_used: 7842387
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec383000/0x0/0x4ffc00000, data 0x25ca9e1/0x2789000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:33.541061+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:34.541246+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:35.541441+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:36.541659+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:37.541786+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428473 data_alloc: 218103808 data_used: 7846483
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:38.541925+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec380000/0x0/0x4ffc00000, data 0x25cd9e1/0x278c000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:39.542040+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec380000/0x0/0x4ffc00000, data 0x25cd9e1/0x278c000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:40.542185+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:41.542361+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:42.542496+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428473 data_alloc: 218103808 data_used: 7846483
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 47087616 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:43.542647+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 47087616 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:44.542759+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec380000/0x0/0x4ffc00000, data 0x25cd9e1/0x278c000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 47087616 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:45.542874+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.759756088s of 19.683139801s, submitted: 100
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b54db8700
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b52afcfc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc1400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc1400 session 0x562b59f62540
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d2c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d2c00 session 0x562b54d2c380
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5498e700
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:46.543062+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:47.544631+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464111 data_alloc: 218103808 data_used: 7846483
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:48.545267+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:49.545445+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda9000/0x0/0x4ffc00000, data 0x2ba49e1/0x2d63000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:50.545581+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:51.545709+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:52.546161+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464111 data_alloc: 218103808 data_used: 7846483
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:53.546331+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda9000/0x0/0x4ffc00000, data 0x2ba49e1/0x2d63000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b543aca80
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:54.546471+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc1400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc1400 session 0x562b54db9340
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:55.546983+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:56.547244+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda9000/0x0/0x4ffc00000, data 0x2ba49e1/0x2d63000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b566be000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:57.548159+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5ad35400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5ad35400 session 0x562b54d2d500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464111 data_alloc: 218103808 data_used: 7846483
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:58.548326+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:59.548437+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298393600 unmapped: 49700864 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.676366806s of 13.795249939s, submitted: 7
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:00.548949+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:01.549254+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:02.549540+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda7000/0x0/0x4ffc00000, data 0x2ba59e1/0x2d64000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500135 data_alloc: 234881024 data_used: 13920851
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:03.549828+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda7000/0x0/0x4ffc00000, data 0x2ba59e1/0x2d64000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda7000/0x0/0x4ffc00000, data 0x2ba59e1/0x2d64000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:04.549983+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:05.550241+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:06.550434+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:07.550564+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500135 data_alloc: 234881024 data_used: 13920851
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:08.550712+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:09.550897+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda7000/0x0/0x4ffc00000, data 0x2ba59e1/0x2d64000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.781335831s of 10.787143707s, submitted: 3
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:10.551122+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 45432832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:11.551303+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 48185344 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:12.551533+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547725 data_alloc: 234881024 data_used: 14633555
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:13.551696+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:14.551921+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:15.552156+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:16.552370+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:17.552467+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547725 data_alloc: 234881024 data_used: 14633555
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:18.552604+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:19.552749+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:20.552896+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:21.553106+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.882456779s of 11.166961670s, submitted: 31
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:22.553301+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6e6, meta 0x110a291a), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549005 data_alloc: 234881024 data_used: 14719571
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:23.553448+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:24.553653+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:25.553830+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6e6, meta 0x110a291a), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:26.554020+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:27.554140+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547045 data_alloc: 234881024 data_used: 14719571
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:28.554327+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b543801c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b555c8380
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc1400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:29.554480+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 48136192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:30.554682+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 48136192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc1400 session 0x562b54db96c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:31.554798+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fa000/0x0/0x4ffc00000, data 0x31539e1/0x3312000, compress 0x0/0x0/0x0, omap 0x4d6e6, meta 0x110a291a), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:32.554961+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435449 data_alloc: 218103808 data_used: 7846483
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:33.555116+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:34.555214+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:35.555342+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.982597351s of 14.264299393s, submitted: 9
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b532196c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b54407880
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:36.555481+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2c3000/0x0/0x4ffc00000, data 0x168b97f/0x1849000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:37.555596+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b555c88c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:38.555746+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:39.555868+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:40.556122+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:41.556270+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:42.556413+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:43.556556+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:44.556727+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:45.556891+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:46.557214+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:47.557340+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:48.557500+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:49.557647+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:50.557795+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:51.557940+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:52.558365+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:53.558576+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:54.558740+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:55.558897+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:56.559172+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:57.559443+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:58.559699+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:59.559901+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:00.560198+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:01.560410+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:02.562562+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:03.562777+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: mgrc ms_handle_reset ms_handle_reset con 0x562b588df800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 09:42:37 compute-0 ceph-osd[89221]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: get_auth_request con 0x562b54980c00 auth_method 0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: mgrc handle_mgr_configure stats_period=5
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:04.562932+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:05.563208+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:06.563452+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:07.563718+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:08.563931+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:09.564184+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:10.564475+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:11.564629+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:12.564818+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:13.565142+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 50905088 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:14.565333+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 50905088 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.654140472s of 39.253135681s, submitted: 37
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:15.565502+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5511f500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 50937856 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b55347c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b550fcc40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc1400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc1400 session 0x562b543ac8c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54d2da40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:16.565765+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 50937856 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:17.565959+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 50937856 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3335641 data_alloc: 218103808 data_used: 4018673
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:18.566118+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 50937856 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:19.566309+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:20.566514+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b51d108c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:21.566670+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:22.566794+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3335897 data_alloc: 218103808 data_used: 4050417
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:23.566942+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:24.567157+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:25.567349+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:26.567552+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:27.567737+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354329 data_alloc: 218103808 data_used: 7196145
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:28.567944+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:29.568159+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:30.568305+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:31.568463+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:32.568666+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354329 data_alloc: 218103808 data_used: 7196145
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:33.568895+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.388601303s of 18.978036880s, submitted: 4
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 50544640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:34.569027+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:35.569129+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:36.569306+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:37.569502+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375597 data_alloc: 218103808 data_used: 7561713
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:38.569642+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:39.569779+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:40.569916+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299286528 unmapped: 48807936 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:41.570182+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299286528 unmapped: 48807936 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:42.570451+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299286528 unmapped: 48807936 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375613 data_alloc: 218103808 data_used: 7561713
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:43.570720+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299294720 unmapped: 48799744 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:44.570876+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299294720 unmapped: 48799744 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:45.571186+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 48791552 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:46.571387+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 48791552 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:47.571580+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 48791552 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375613 data_alloc: 218103808 data_used: 7561713
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:48.571761+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 48791552 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:49.571961+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.605512619s of 15.843142509s, submitted: 25
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b51d116c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298852352 unmapped: 49242112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d9c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d9c00 session 0x562b51d11340
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555de800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555de800 session 0x562b5297a000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53180000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b552c7180
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:50.572141+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:51.572278+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:52.572446+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c6000/0x0/0x4ffc00000, data 0x1f879e1/0x2146000, compress 0x0/0x0/0x0, omap 0x4dae8, meta 0x110a2518), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407519 data_alloc: 218103808 data_used: 7561713
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:53.572674+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:54.572888+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:55.573031+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:56.573261+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c6000/0x0/0x4ffc00000, data 0x1f879e1/0x2146000, compress 0x0/0x0/0x0, omap 0x4dae8, meta 0x110a2518), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:57.573469+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c6000/0x0/0x4ffc00000, data 0x1f879e1/0x2146000, compress 0x0/0x0/0x0, omap 0x4dae8, meta 0x110a2518), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407519 data_alloc: 218103808 data_used: 7561713
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:58.573659+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d9c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d9c00 session 0x562b54e0c380
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298885120 unmapped: 49209344 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5ad35800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:59.573794+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298893312 unmapped: 49201152 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:00.573939+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:01.574092+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:02.574234+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436133 data_alloc: 218103808 data_used: 11955185
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:03.574392+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:04.574572+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:05.574877+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:06.575177+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:07.575317+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436133 data_alloc: 218103808 data_used: 11955185
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:08.575467+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:09.575609+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:10.575761+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.689737320s of 20.918289185s, submitted: 50
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 46456832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:11.575899+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 46456832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:12.576172+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301916160 unmapped: 46178304 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496455 data_alloc: 218103808 data_used: 12848113
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:13.576304+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec174000/0x0/0x4ffc00000, data 0x27d8a04/0x2998000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:14.576514+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec174000/0x0/0x4ffc00000, data 0x27d8a04/0x2998000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:15.576694+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:16.576948+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:17.577194+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec174000/0x0/0x4ffc00000, data 0x27d8a04/0x2998000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497911 data_alloc: 218103808 data_used: 12860401
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:18.577323+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec174000/0x0/0x4ffc00000, data 0x27d8a04/0x2998000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:19.577494+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:20.577718+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:21.577890+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5523bc00 session 0x562b552c6a80
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b52941000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:22.578040+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec153000/0x0/0x4ffc00000, data 0x27f9a04/0x29b9000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497663 data_alloc: 218103808 data_used: 12934129
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:23.578189+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.069880486s of 13.417132378s, submitted: 83
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b54950fc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5ad35800 session 0x562b55346000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301940736 unmapped: 46153728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:24.578342+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec153000/0x0/0x4ffc00000, data 0x27f9a04/0x29b9000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b555c8c40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:25.578493+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:26.578725+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:27.578938+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:28.579110+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384144 data_alloc: 218103808 data_used: 7561713
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:29.579258+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:30.579419+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecaeb000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:31.579591+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b53218000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b552fce00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:32.579746+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecaeb000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [0,0,1,2])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b53234e00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:33.579935+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:34.580175+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:35.580358+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:36.580588+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:37.580744+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:38.580931+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:39.582941+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:40.583089+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:41.583275+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:42.583449+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:43.583612+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:44.583783+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:45.583966+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:46.584181+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 48521216 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:47.584352+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 48521216 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:48.584508+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 48521216 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:49.584726+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:50.584969+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:51.585172+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:52.585328+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:53.585479+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:54.585634+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:55.585804+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:56.586025+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:57.586216+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:58.586370+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:59.586681+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:00.586871+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:01.587130+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:02.588408+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:03.589661+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:04.590280+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:05.590518+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:06.591695+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:07.592453+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:08.593207+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:09.593631+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:10.593876+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:11.594115+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 48488448 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:12.594625+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 48488448 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:13.594938+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 48488448 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:14.595476+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 48488448 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 50.231575012s of 50.706127167s, submitted: 63
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:15.595694+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 48455680 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:16.595982+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 48455680 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:17.596199+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 48455680 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:18.596456+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 48455680 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:19.596778+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 48447488 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:20.596979+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 48447488 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:21.597160+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 48447488 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:22.597390+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 48447488 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d9c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:23.597565+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 48447488 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [0,0,0,0,0,0,0,0,4])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:24.597712+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 44638208 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d9c00 session 0x562b5819ec40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53181dc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b53180700
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:25.597881+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b566bea80
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b532341c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 48308224 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:26.598162+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 48308224 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:27.598447+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299794432 unmapped: 48300032 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:28.598645+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299794432 unmapped: 48300032 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369086 data_alloc: 218103808 data_used: 4026732
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7c000/0x0/0x4ffc00000, data 0x1ad297f/0x1c90000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:29.598834+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299794432 unmapped: 48300032 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:30.599016+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299794432 unmapped: 48300032 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7c000/0x0/0x4ffc00000, data 0x1ad297f/0x1c90000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:31.599227+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.732717514s of 16.663993835s, submitted: 37
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b552fc700
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:32.599363+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:33.599507+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395959 data_alloc: 218103808 data_used: 8353644
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:34.599663+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:35.599818+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:36.600010+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:37.600223+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:38.601314+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395959 data_alloc: 218103808 data_used: 8353644
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:39.601516+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:40.601667+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:41.601905+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:42.602109+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:43.602279+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.296160698s of 12.306211472s, submitted: 4
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411073 data_alloc: 218103808 data_used: 8374124
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec87f000/0x0/0x4ffc00000, data 0x20ce9a2/0x228d000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:44.602520+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 45539328 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:45.602716+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:46.602957+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:47.603208+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec41c000/0x0/0x4ffc00000, data 0x25299a2/0x26e8000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:48.603607+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465651 data_alloc: 218103808 data_used: 8661868
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:49.603751+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:50.603908+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:51.604097+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:52.604779+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec405000/0x0/0x4ffc00000, data 0x25489a2/0x2707000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:53.604928+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460459 data_alloc: 218103808 data_used: 8661868
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:54.605062+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec405000/0x0/0x4ffc00000, data 0x25489a2/0x2707000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec405000/0x0/0x4ffc00000, data 0x25489a2/0x2707000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:55.605290+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:56.605540+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.590172768s of 12.890510559s, submitted: 92
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 46014464 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3fa000/0x0/0x4ffc00000, data 0x25539a2/0x2712000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:57.605766+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 46014464 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3f9000/0x0/0x4ffc00000, data 0x25549a2/0x2713000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:58.605998+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 46014464 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460587 data_alloc: 218103808 data_used: 8661868
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:59.606300+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 46006272 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3f9000/0x0/0x4ffc00000, data 0x25549a2/0x2713000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:00.606519+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 46006272 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b53219a40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b54406fc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b52afdc00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b54951500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555cec00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3f9000/0x0/0x4ffc00000, data 0x25549a2/0x2713000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [0,0,0,0,0,0,4])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:01.606660+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555cec00 session 0x562b555c96c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b555c9880
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b59f62540
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b547ff340
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b5511ec40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:02.606885+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:03.607228+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532989 data_alloc: 218103808 data_used: 8661868
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:04.607407+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:05.607592+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:06.607786+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:07.607928+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 52477952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494fc00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494fc00 session 0x562b55347500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:08.608087+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 52477952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532989 data_alloc: 218103808 data_used: 8661868
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:09.608229+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b5297a1c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 52469760 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:10.608393+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b54d6bdc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 52469760 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.584549904s of 14.069593430s, submitted: 17
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b53219500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528d9800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:11.608518+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 52461568 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:12.608686+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:13.608830+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580038 data_alloc: 218103808 data_used: 13113962
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:14.608967+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:15.609140+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:16.609388+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:17.609537+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:18.609690+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580542 data_alloc: 218103808 data_used: 13113962
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:19.609855+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb80f000/0x0/0x4ffc00000, data 0x313c9b2/0x32fc000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:20.610047+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:21.610267+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb80f000/0x0/0x4ffc00000, data 0x313c9b2/0x32fc000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:22.610436+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.964795113s of 11.985386848s, submitted: 9
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 52191232 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:23.610584+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 51101696 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3610428 data_alloc: 234881024 data_used: 14183018
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:24.610772+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:25.610943+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:26.611176+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:27.611374+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb37a000/0x0/0x4ffc00000, data 0x35d29b2/0x3792000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:28.611558+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617380 data_alloc: 234881024 data_used: 14252650
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:29.611706+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb359000/0x0/0x4ffc00000, data 0x35f39b2/0x37b3000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:30.611920+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:31.612194+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:32.612382+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:33.612626+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b552fce00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.867029190s of 11.092142105s, submitted: 63
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528d9800 session 0x562b566bf880
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3612536 data_alloc: 234881024 data_used: 14252650
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b555c8e00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:34.612819+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 52658176 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3f8000/0x0/0x4ffc00000, data 0x25559a2/0x2714000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:35.613139+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 52658176 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:36.613397+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 52658176 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:37.613579+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b51d101c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b5250a700
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 52658176 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b553476c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:38.613731+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:39.613870+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:40.613998+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:41.614148+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:42.614278+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:43.614435+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:44.614587+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:45.614735+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:46.614878+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:47.615043+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:48.615264+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:49.615468+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:50.615701+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:51.615902+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:52.616059+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:53.616304+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:54.616482+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:55.616628+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:56.616837+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:57.617032+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:58.617198+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:59.617339+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:00.617493+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:01.617608+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:02.617737+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:03.617982+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 52625408 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:04.618202+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 52625408 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:05.618401+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:06.618698+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:07.618937+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:08.619185+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:09.619453+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:10.619657+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:11.619853+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:12.620030+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:13.620235+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 52609024 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:14.620405+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b53218a80
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b552c7c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b552c6e00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b54d2c1c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 41.048336029s of 41.130916595s, submitted: 44
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 51535872 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b5375ee00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b552c6c40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53234e00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b55346700
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b550fcc40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:15.620605+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:16.620833+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:17.620988+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:18.621196+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371796 data_alloc: 218103808 data_used: 4010551
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:19.621360+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:20.621511+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:21.621651+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:22.621813+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:23.622029+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b543801c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371796 data_alloc: 218103808 data_used: 4010551
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d7000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a0c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:24.622159+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:25.622283+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:26.622512+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:27.622672+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:28.622859+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371928 data_alloc: 218103808 data_used: 4010551
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:29.623097+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:30.623272+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:31.623430+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:32.623645+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:33.623798+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371928 data_alloc: 218103808 data_used: 4010551
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:34.623999+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:35.624251+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.683448792s of 20.806020737s, submitted: 41
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:36.624490+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 51322880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:37.624709+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306692096 unmapped: 49274880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:38.624897+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 52928512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396090 data_alloc: 218103808 data_used: 4084279
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:39.625143+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:40.625324+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:41.625552+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eccef000/0x0/0x4ffc00000, data 0x1c5c9f1/0x1e1c000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:42.625736+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eccef000/0x0/0x4ffc00000, data 0x1c5c9f1/0x1e1c000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:43.625902+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3404906 data_alloc: 218103808 data_used: 4227639
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:44.626165+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:45.626382+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:46.626620+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:47.626769+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:48.626969+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402178 data_alloc: 218103808 data_used: 4231735
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:49.627200+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.2 total, 600.0 interval
                                           Cumulative writes: 37K writes, 150K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.82 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1838 writes, 7373 keys, 1838 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s
                                           Interval WAL: 1838 writes, 718 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:50.627397+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:51.627522+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:52.627771+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:53.627919+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402178 data_alloc: 218103808 data_used: 4231735
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:54.628204+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:55.628388+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:56.628608+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:57.628788+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:58.629186+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402178 data_alloc: 218103808 data_used: 4231735
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:59.629355+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:00.629511+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:01.629670+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:02.629842+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:03.630024+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402178 data_alloc: 218103808 data_used: 4231735
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:04.630150+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:05.630293+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:06.630488+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.124912262s of 30.743280411s, submitted: 64
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 51372032 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:07.630672+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54a8e800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54a8e800 session 0x562b566bea80
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b52afc540
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b59f62a80
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b59f621c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 51372032 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:08.630820+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b54381a40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5499b400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5499b400 session 0x562b5250afc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b59f63880
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b53181c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b552fcfc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x225aa2a/0x241c000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442101 data_alloc: 218103808 data_used: 4231735
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:09.630994+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:10.631138+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:11.631360+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:12.631656+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:13.631824+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b566bee00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441205 data_alloc: 218103808 data_used: 4231735
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:14.632011+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x225aa63/0x241c000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5489d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5489d800 session 0x562b51d116c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b555c9dc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:15.632136+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b528556c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:16.632359+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6ef000/0x0/0x4ffc00000, data 0x225aa73/0x241d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:17.632541+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:18.632742+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6ef000/0x0/0x4ffc00000, data 0x225aa73/0x241d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479447 data_alloc: 218103808 data_used: 10342983
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:19.632910+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:20.633092+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:21.633244+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:22.633454+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6ef000/0x0/0x4ffc00000, data 0x225aa73/0x241d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:23.633618+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6ef000/0x0/0x4ffc00000, data 0x225aa73/0x241d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479447 data_alloc: 218103808 data_used: 10342983
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:24.633789+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:25.633928+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.493576050s of 19.711193085s, submitted: 33
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:26.634153+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:27.634319+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305594368 unmapped: 50372608 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:28.634444+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 47267840 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511373 data_alloc: 218103808 data_used: 10801735
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:29.634639+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec2ed000/0x0/0x4ffc00000, data 0x265ca73/0x281f000, compress 0x0/0x0/0x0, omap 0x4e414, meta 0x110a1bec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:30.634887+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:31.635190+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:32.635402+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:33.635567+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec2c9000/0x0/0x4ffc00000, data 0x2680a73/0x2843000, compress 0x0/0x0/0x0, omap 0x4e414, meta 0x110a1bec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:34.635720+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517921 data_alloc: 218103808 data_used: 11022919
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:35.635888+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:36.636274+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:37.636438+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:38.636575+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec2c7000/0x0/0x4ffc00000, data 0x2682a73/0x2845000, compress 0x0/0x0/0x0, omap 0x4e414, meta 0x110a1bec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 47226880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:39.636779+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515601 data_alloc: 218103808 data_used: 11035207
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.758734703s of 13.324839592s, submitted: 61
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 47226880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:40.636939+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 47226880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:41.637205+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b54380c40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b53180000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e5800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e5800 session 0x562b566bf340
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:42.637439+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:43.637624+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eca9e000/0x0/0x4ffc00000, data 0x1c839f1/0x1e43000, compress 0x0/0x0/0x0, omap 0x4e44f, meta 0x110a1bb1), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:44.637854+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410514 data_alloc: 218103808 data_used: 4231735
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eca9e000/0x0/0x4ffc00000, data 0x1c839f1/0x1e43000, compress 0x0/0x0/0x0, omap 0x4e44f, meta 0x110a1bb1), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:45.638025+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:46.638569+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eccc2000/0x0/0x4ffc00000, data 0x1c8a9f1/0x1e4a000, compress 0x0/0x0/0x0, omap 0x4e44f, meta 0x110a1bb1), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:47.639021+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:48.639201+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d7000 session 0x562b5511f500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a0c00 session 0x562b54db8000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:49.639335+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53234c40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:50.639524+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:51.639710+0000)
Dec 13 09:42:37 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23238 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:52.640402+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:53.640925+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:54.641292+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:55.641500+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:56.641794+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:57.642111+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:58.642541+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:59.642706+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:00.642863+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:01.643182+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:02.643407+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:03.643601+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:04.643834+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:05.644020+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:06.644399+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:07.644624+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:08.644808+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:09.645003+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:10.645143+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:11.645296+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:12.645437+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:13.645617+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:14.645796+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:15.645962+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:16.646269+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:17.647214+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:18.647481+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:19.647758+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:20.648180+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:21.649160+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:22.649361+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.192054749s of 43.299339294s, submitted: 52
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b5250a700
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b54950fc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b55346540
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b5819e000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a0c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a0c00 session 0x562b51d108c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:23.649671+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:24.649880+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386400 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0fe000/0x0/0x4ffc00000, data 0x185097f/0x1a0e000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:25.650038+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0fe000/0x0/0x4ffc00000, data 0x185097f/0x1a0e000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:26.650492+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:27.650824+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:28.651279+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d7000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d7000 session 0x562b552fc8c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:29.651516+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385968 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b552fd6c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:30.651696+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b555c8700
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b54d2c700
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0fe000/0x0/0x4ffc00000, data 0x185097f/0x1a0e000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:31.651881+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 49995776 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a0c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:32.652004+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 49995776 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.878920555s of 10.003678322s, submitted: 50
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:33.652192+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:34.652375+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397315 data_alloc: 218103808 data_used: 4811123
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:35.652549+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:36.652789+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b55367500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a0c00 session 0x562b55366a80
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:37.652983+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:38.653211+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:39.653426+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397183 data_alloc: 218103808 data_used: 4811123
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:40.653661+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:41.653791+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:42.654025+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d7000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5cc12000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:43.654187+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:44.654371+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397443 data_alloc: 218103808 data_used: 4811139
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:45.654559+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:46.654850+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:47.654991+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:48.655145+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d7000 session 0x562b55366e00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.139425278s of 16.248653412s, submitted: 60
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5cc12000 session 0x562b552fd180
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:49.655317+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397311 data_alloc: 218103808 data_used: 4811139
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:50.655704+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:51.655963+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:52.656156+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:53.656330+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:54.656499+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397443 data_alloc: 218103808 data_used: 4811139
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b555c8c40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b53219500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:55.656687+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306036736 unmapped: 49930240 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b555c9500
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:56.657146+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:57.657413+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:58.657634+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:59.657948+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:00.658263+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:01.658480+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:02.658632+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:03.658854+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:04.659035+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:05.659314+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:06.659516+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:07.659749+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:08.659979+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:09.660221+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:10.660427+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:11.660575+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:12.660789+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:13.661149+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:14.661318+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:15.661490+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306069504 unmapped: 49897472 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:16.661773+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306069504 unmapped: 49897472 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:17.661979+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306069504 unmapped: 49897472 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:18.662171+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306069504 unmapped: 49897472 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:19.662338+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:20.662499+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:21.662724+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:22.662889+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:23.663105+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:24.663233+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:25.663421+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:26.663717+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:27.663922+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:28.664221+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:29.664396+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:30.664535+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:31.664628+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:32.664741+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:33.664879+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:34.665012+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:35.665139+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:36.665300+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:37.665440+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:38.665577+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:39.665706+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306102272 unmapped: 49864704 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:40.665875+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306102272 unmapped: 49864704 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:41.666049+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306102272 unmapped: 49864704 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:42.666269+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306102272 unmapped: 49864704 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:43.666476+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306110464 unmapped: 49856512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:44.666643+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306110464 unmapped: 49856512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:45.666850+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306110464 unmapped: 49856512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:46.667039+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306110464 unmapped: 49856512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:47.667167+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:48.667330+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:49.667480+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:50.668539+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:51.668684+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:52.668874+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a0c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.132514954s of 64.175086975s, submitted: 21
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a0c00 session 0x562b54380c40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:53.669025+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b55346a80
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b552c6540
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b53234e00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5cc12000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5cc12000 session 0x562b54e0d880
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555dc000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555dc000 session 0x562b550fddc0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:54.669256+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433682 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:55.669559+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5819f880
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b54407880
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b532196c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:56.670179+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5cc12000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b52946400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:57.670350+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:58.670520+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec90f000/0x0/0x4ffc00000, data 0x203e98f/0x21fd000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306143232 unmapped: 53493760 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:59.670765+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306143232 unmapped: 53493760 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491164 data_alloc: 218103808 data_used: 12978547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:00.670935+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306143232 unmapped: 53493760 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5cc12000 session 0x562b51d101c0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b52946400 session 0x562b53235c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec90f000/0x0/0x4ffc00000, data 0x203e98f/0x21fd000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:01.671158+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b52946400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306143232 unmapped: 53493760 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:02.671318+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b52946400 session 0x562b52afc540
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:03.671452+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:04.671629+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:05.671732+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:06.671983+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:07.672150+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:08.672283+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:09.672459+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:10.672590+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:11.672723+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:12.672867+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:13.673038+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:14.673163+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:15.673307+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:16.673598+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:17.673775+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:18.673962+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:19.674180+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:20.674335+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:21.674495+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:22.674633+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:23.674777+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:24.674944+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:25.675154+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:26.675379+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:27.675580+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:28.676353+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:29.676595+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:30.676786+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:31.676963+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:32.677148+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:33.677306+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:34.677484+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:35.677666+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:36.677910+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:37.678143+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:38.678371+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:39.678630+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:40.678868+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:41.679134+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:42.679336+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:43.679482+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:44.679716+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:45.679880+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:46.680128+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:47.680320+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:48.680487+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:49.680687+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:50.680835+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 58138624 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:51.680972+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 58138624 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:52.681130+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 58138624 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:53.681391+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:54.681597+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:55.681834+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:56.682145+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:57.682305+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:58.682554+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:59.683178+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:00.683760+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:01.684036+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:02.685128+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:03.685467+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:04.685905+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:05.686352+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:06.686752+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:07.687053+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:08.687247+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:09.687477+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:10.687655+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:11.687872+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:12.688185+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 79.888259888s of 80.062774658s, submitted: 22
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 58089472 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 313 ms_handle_reset con 0x562b528e1800 session 0x562b54e0c380
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:13.688342+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ed2e2000/0x0/0x4ffc00000, data 0x166956f/0x1828000, compress 0x0/0x0/0x0, omap 0x4ecc7, meta 0x110a1339), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 58089472 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:14.688706+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ed2e2000/0x0/0x4ffc00000, data 0x166956f/0x1828000, compress 0x0/0x0/0x0, omap 0x4ed3d, meta 0x110a12c3), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385265 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:15.688946+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:16.689185+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:17.689358+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ed2e2000/0x0/0x4ffc00000, data 0x166956f/0x1828000, compress 0x0/0x0/0x0, omap 0x4ed3d, meta 0x110a12c3), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:18.689537+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:19.689748+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385265 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:20.689897+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ed2e2000/0x0/0x4ffc00000, data 0x166956f/0x1828000, compress 0x0/0x0/0x0, omap 0x4ed3d, meta 0x110a12c3), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:21.690175+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:22.690368+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 58064896 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:23.690539+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 58064896 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:24.690768+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.884174347s of 11.914915085s, submitted: 19
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:25.690993+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:26.691224+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:27.691861+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:28.692018+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:29.692247+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:30.692374+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 58048512 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:31.692525+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 58048512 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:32.692759+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:33.693467+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 58048512 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:34.693840+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:35.694183+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:36.694469+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:37.694907+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:38.695225+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:39.695652+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:40.696013+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:41.696337+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:42.696592+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:43.696817+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 58032128 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:44.697060+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 58032128 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:45.697298+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 58032128 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:46.697586+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 58032128 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:47.697808+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:48.698152+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:49.698479+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:50.698731+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:51.698927+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:52.699213+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:53.699498+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:54.699707+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:55.699901+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:56.700348+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:57.700562+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:58.700884+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:59.701236+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:00.701509+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:01.701782+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:02.702007+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:03.702219+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 57999360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:04.702465+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 57999360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:05.702701+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 57999360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:06.702974+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301645824 unmapped: 57991168 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:07.703196+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:08.703441+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:09.703657+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:10.703831+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:11.704042+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:12.704297+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:13.704510+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:14.704701+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:15.704943+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:16.705241+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:17.705549+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:18.705729+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:19.705956+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 57966592 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:20.706270+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:21.706579+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:22.706863+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:23.707217+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:24.707562+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:25.707869+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:26.708200+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:27.708521+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:28.708813+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:29.709059+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:30.709357+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:31.709639+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:32.709941+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:33.710246+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:34.710449+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:35.710635+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:36.710935+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:37.711163+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:38.711910+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 57942016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:39.712276+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 57942016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:40.712561+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 57942016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:41.712806+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 57933824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:42.713364+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 57933824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:43.713629+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301711360 unmapped: 57925632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:44.713837+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301711360 unmapped: 57925632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:45.714050+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301711360 unmapped: 57925632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:46.714484+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:47.714700+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:48.714894+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:49.715109+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:50.715250+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:51.715438+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:52.715585+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 57901056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:53.715755+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 57901056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:54.716294+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:55.716512+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:56.716816+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:57.717133+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:58.717388+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:59.717658+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:00.717863+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:01.718046+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:02.718312+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:03.718529+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:04.718801+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:05.719140+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:06.719427+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:07.719595+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301760512 unmapped: 57876480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:08.719887+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301760512 unmapped: 57876480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:09.720190+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301760512 unmapped: 57876480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:10.720577+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 57868288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:11.720765+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 106.316764832s of 106.328948975s, submitted: 13
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 ms_handle_reset con 0x562b5296d800 session 0x562b54380700
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 ms_handle_reset con 0x562b538a8400 session 0x562b54380540
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:12.720984+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2e1000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4eece, meta 0x110a1132), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:13.721159+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:14.721345+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:15.721641+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387959 data_alloc: 234881024 data_used: 11555187
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:16.721913+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2e1000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4eece, meta 0x110a1132), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:17.722105+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 51855360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:18.722265+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 51855360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:19.722500+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2e1000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4eece, meta 0x110a1132), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 51855360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:20.722673+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 51855360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387959 data_alloc: 234881024 data_used: 11555187
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:21.722863+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5cc12000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.003993034s of 10.013246536s, submitted: 5
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 51838976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed2e1000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ef44, meta 0x110a10bc), peers [0,1] op hist [0,0,1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 315 ms_handle_reset con 0x562b5cc12000 session 0x562b550fdc00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:22.723048+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 56983552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:23.723270+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 56983552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:24.723433+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 56983552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:25.724147+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 315 heartbeat osd_stat(store_statfs(0x4edf4c000/0x0/0x4ffc00000, data 0x9fcbde/0xbbe000, compress 0x0/0x0/0x0, omap 0x4f41b, meta 0x110a0be5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 56975360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3303249 data_alloc: 218103808 data_used: 151939
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:26.724310+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 316 ms_handle_reset con 0x562b528e1800 session 0x562b54407c00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:27.724573+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ee74a000/0x0/0x4ffc00000, data 0x1fe79b/0x3bf000, compress 0x0/0x0/0x0, omap 0x4fd71, meta 0x110a028f), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:28.724905+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ee74a000/0x0/0x4ffc00000, data 0x1fe79b/0x3bf000, compress 0x0/0x0/0x0, omap 0x4fd71, meta 0x110a028f), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:29.725229+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:30.725613+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262502 data_alloc: 218103808 data_used: 151923
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:31.725896+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b52946400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 316 ms_handle_reset con 0x562b52946400 session 0x562b5250aa80
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.022649765s of 10.216675758s, submitted: 92
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:32.726216+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ee748000/0x0/0x4ffc00000, data 0x200236/0x3c2000, compress 0x0/0x0/0x0, omap 0x4fec5, meta 0x110a013b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:33.726377+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:34.726549+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ee748000/0x0/0x4ffc00000, data 0x200236/0x3c2000, compress 0x0/0x0/0x0, omap 0x4fec5, meta 0x110a013b), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 317 ms_handle_reset con 0x562b5296d800 session 0x562b566bee00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 317 handle_osd_map epochs [318,318], i have 318, src has [1,318]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:35.726694+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3267970 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:36.726938+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:37.727196+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 318 heartbeat osd_stat(store_statfs(0x4ee745000/0x0/0x4ffc00000, data 0x201cb5/0x3c5000, compress 0x0/0x0/0x0, omap 0x503f7, meta 0x1109fc09), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:38.727357+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 318 heartbeat osd_stat(store_statfs(0x4ee745000/0x0/0x4ffc00000, data 0x201cb5/0x3c5000, compress 0x0/0x0/0x0, omap 0x503f7, meta 0x1109fc09), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 318 ms_handle_reset con 0x562b538a8400 session 0x562b59f63180
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:39.727520+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:40.727711+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:41.727891+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:42.728088+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:43.728630+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:44.729058+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:45.729245+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:46.729425+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:47.729559+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:48.729801+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:49.730023+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:50.730187+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:51.730333+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:52.730480+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:53.730672+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:54.730865+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:55.731058+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:56.731283+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:57.731689+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:58.731857+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:59.732019+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:00.732209+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:01.732443+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:02.732604+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:03.732770+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:04.732913+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:05.733105+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:06.733325+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:07.733497+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:08.733651+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:09.733910+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:10.734171+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:11.734384+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:12.734547+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:13.734721+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:14.734874+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:15.735039+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:16.735246+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:17.735404+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:18.735606+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:19.735746+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:20.735850+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:21.735951+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:22.736105+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:23.736294+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:24.736423+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:25.736648+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:26.736807+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:27.736959+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:28.737159+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:29.737348+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:30.737491+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:31.737680+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:32.737919+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:33.738136+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:34.738332+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:35.738484+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:36.738713+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:37.738879+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:38.740599+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:39.740787+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:40.740927+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:41.741064+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:42.741245+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:43.741393+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:44.741581+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:45.741780+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:46.742006+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:47.742128+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:48.742262+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:49.742443+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:50.742615+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:51.742757+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:52.742911+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:53.743047+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:54.743249+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:55.743416+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:56.743620+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:57.743829+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:58.744233+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 56852480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:59.744394+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 56852480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:00.744518+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 56852480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:01.744685+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 56852480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:02.744881+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:03.745174+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:04.745343+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:05.745497+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:06.745748+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:07.745928+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:08.746147+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:09.746306+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:10.746429+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:11.746582+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:12.746736+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:13.746925+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:14.747118+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:15.747283+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 56827904 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:16.747503+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 56827904 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:17.747604+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 56819712 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:18.747839+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56811520 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:19.747989+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56803328 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:20.748113+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56803328 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:21.748273+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56803328 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:22.748438+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56803328 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:23.748574+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 56795136 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:24.748726+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 56786944 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:25.748857+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 56786944 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:26.749032+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 56786944 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:27.749801+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 56778752 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc0800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 116.213493347s of 116.241378784s, submitted: 33
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:28.749960+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 56778752 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 320 ms_handle_reset con 0x562b54dc0800 session 0x562b59fc2000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:29.750141+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 56762368 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a1000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 320 heartbeat osd_stat(store_statfs(0x4edf3d000/0x0/0x4ffc00000, data 0x20550e/0x3cd000, compress 0x0/0x0/0x0, omap 0x5097d, meta 0x1109f683), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:30.750304+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 65150976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 ms_handle_reset con 0x562b548a1000 session 0x562b59fc2380
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:31.750456+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 65126400 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:32.750642+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 65126400 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:33.750797+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 65126400 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:34.750947+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:35.751114+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:36.751329+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:37.751466+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:38.751792+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:39.752006+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:40.752228+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:41.752374+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:42.752545+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:43.752704+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:44.752835+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:45.753015+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-12-13T09:30:46.753297+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _finish_auth 0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:46.754630+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:47.753627+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:48.753872+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:49.754064+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:50.754266+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:51.754527+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:52.754708+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:53.754900+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:54.755136+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:55.755280+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:56.755478+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:57.755633+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:58.755803+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:59.756047+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:00.756267+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:01.756429+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:02.756597+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:03.756797+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 65077248 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:04.756951+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 65077248 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:05.757137+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 65077248 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:06.757345+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5497f000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 65069056 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324731 data_alloc: 218103808 data_used: 156000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:07.758260+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 65060864 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:08.758416+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 65060864 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:09.758592+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 65060864 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.045394897s of 42.178756714s, submitted: 18
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 322 ms_handle_reset con 0x562b5497f000 session 0x562b54d6a540
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:10.758715+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:11.758877+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326012 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:12.759063+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 322 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0xa08b9c/0xbd2000, compress 0x0/0x0/0x0, omap 0x510d2, meta 0x1109ef2e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:13.759332+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 322 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0xa08b9c/0xbd2000, compress 0x0/0x0/0x0, omap 0x510d2, meta 0x1109ef2e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:14.759516+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:15.759690+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:16.759929+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 322 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0xa08b9c/0xbd2000, compress 0x0/0x0/0x0, omap 0x510d2, meta 0x1109ef2e), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326012 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:17.760124+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:18.760291+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:19.760481+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 65003520 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.889834404s of 10.003436089s, submitted: 19
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:20.760612+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:21.760845+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:22.761013+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:23.761187+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:24.761386+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:25.761575+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:26.761875+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:27.762008+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:28.762243+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:29.762451+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:30.762640+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64962560 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:31.762861+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:32.763036+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:33.763226+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:34.763444+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:35.763672+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:36.763967+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:37.764174+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:38.764340+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:39.764538+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:40.764708+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:41.764913+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:42.765146+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:43.765324+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:44.765523+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:45.765746+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:46.765927+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:47.766100+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:48.766291+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 153K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 912 writes, 2666 keys, 912 commit groups, 1.0 writes per commit group, ingest: 1.84 MB, 0.00 MB/s
                                           Interval WAL: 912 writes, 399 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread fragmentation_score=0.004241 took=0.000088s
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:49.766456+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:50.766620+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:51.766765+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 65175552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:52.766900+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 65175552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:53.767044+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 65167360 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:54.767224+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 65167360 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:55.767405+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 65159168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 65159168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:57.275958+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 65159168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:58.276174+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 65159168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:59.276347+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 65150976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:00.276514+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 65150976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:01.276749+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:02.277011+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:03.277191+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:04.277461+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:05.277644+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:06.277769+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:07.277947+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:08.278154+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:09.278293+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:10.278485+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:11.278638+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:12.278818+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:13.278990+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:14.279280+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:15.279460+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:16.279630+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:17.279881+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:18.280050+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:19.280251+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:20.280426+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:21.280694+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:22.280874+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:23.281120+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:24.281290+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:25.281485+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:26.281632+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:27.281869+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:28.282035+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:29.282591+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:30.283109+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:31.283485+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 65069056 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:32.284562+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 65069056 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:33.284961+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 65069056 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:34.285776+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:35.286517+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:36.287225+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:37.287567+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:38.287769+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:39.287930+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:40.288099+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:41.288329+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:42.288727+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:43.289018+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:44.289332+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:45.289525+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:46.290143+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:47.290313+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:48.290481+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:49.290808+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:50.291027+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:51.291305+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:52.291529+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:53.291680+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:54.291968+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:55.292195+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:56.292621+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:57.292864+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:58.293042+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:59.293209+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:00.293686+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:01.293922+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:02.294300+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:03.294603+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 65003520 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:04.294951+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64995328 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:05.295372+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64995328 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 13 09:42:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2038311636' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:06.295676+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:07.295915+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:08.296151+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:09.296423+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:10.296760+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:11.297001+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:12.297183+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:13.297452+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:14.297768+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:15.297981+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:16.298241+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:17.298506+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:18.298664+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:19.298848+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64962560 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:20.299046+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64962560 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:21.299284+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:22.299724+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 ms_handle_reset con 0x562b52941000 session 0x562b54406e00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:23.300011+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:24.300201+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:25.300398+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:26.300574+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 126.705642700s of 126.712867737s, submitted: 15
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:27.300754+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64897024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:28.300892+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64897024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:29.301145+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64897024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:30.301319+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64897024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:31.301541+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64888832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:32.301719+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64888832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:33.301954+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64888832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:34.302287+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 64839680 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:35.302423+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:36.302565+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:37.302769+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:38.302938+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:39.303160+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:40.303307+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:41.303634+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:42.303953+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:43.304237+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:44.304411+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:45.304791+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:46.305010+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:47.305332+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:48.305575+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:49.305817+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:50.306040+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:51.306207+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:52.306386+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:53.306700+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:54.306937+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:55.307162+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:56.307359+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:57.307596+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63750144 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:58.307761+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63750144 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:59.307982+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63750144 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:00.308169+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:01.308363+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:02.308567+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:03.308784+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:04.308930+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:05.309114+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:06.309326+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:07.309506+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:08.309660+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:09.309826+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:10.310005+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:11.310168+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:12.310332+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:13.310502+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:14.310705+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:15.310905+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 63717376 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:16.311122+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 63717376 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:17.311349+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 49.510643005s of 50.603378296s, submitted: 108
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328152 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 63717376 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:18.311540+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:19.311735+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:20.311935+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:21.312135+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [0,0,0,0,0,1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:22.312328+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:23.312529+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:24.312715+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:25.312898+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:26.313047+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:27.313305+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 2.568279505s of 10.172575951s, submitted: 10
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328152 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:28.313463+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:29.313637+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:30.313822+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:31.313993+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:32.314148+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:33.314327+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:34.314503+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:35.314684+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:36.314872+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:37.315056+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63692800 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.948706627s of 10.349843025s, submitted: 6
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:38.315250+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63684608 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:39.315434+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63684608 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:40.315647+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63684608 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:41.315804+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63684608 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:42.315945+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328152 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:43.316148+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:44.316368+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:45.316571+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:46.316734+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:47.316985+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63668224 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:48.317198+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63660032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:49.317386+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63660032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:50.317593+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63660032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:51.317775+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63651840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:52.317916+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63651840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:53.318162+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63651840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:54.318428+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63651840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:55.318654+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63643648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:56.318928+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63643648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:57.319127+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63643648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:58.319336+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63643648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:59.319529+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:00.319741+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:01.320581+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:02.320710+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:03.320869+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:04.321022+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:05.321161+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:06.321309+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:07.321469+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:08.321612+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:09.321772+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:10.321976+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:11.322175+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63610880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:12.322413+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63610880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:13.322598+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:14.322735+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:15.322921+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:16.323089+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:17.323325+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:18.323577+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:19.323913+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63586304 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:20.324159+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63586304 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:21.324303+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:22.324587+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:23.324825+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:24.324994+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:25.325223+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:26.325421+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:27.325621+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 63569920 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:28.325830+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 63569920 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:29.325982+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 63569920 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:30.326162+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:31.326329+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:32.326504+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:33.326776+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:34.327015+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:35.327242+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:36.327434+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:37.327667+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:38.327813+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:39.327947+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:40.328166+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:41.328340+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:42.328557+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:43.328768+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:44.328955+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 63545344 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:45.329136+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 63545344 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:46.329385+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 63537152 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:47.329707+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:48.329904+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:49.330198+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:50.330365+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:51.330565+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:52.330746+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:53.330985+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:54.331311+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:55.331499+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:56.331667+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:57.331879+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:58.332123+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:59.332290+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:00.332494+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:01.332678+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:02.332838+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:03.333021+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:04.333201+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:05.333421+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:06.333593+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 63488000 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:07.333918+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:08.334183+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:09.334362+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:10.334620+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:11.334833+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:12.335033+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:13.335238+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:14.335446+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:15.335715+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:16.336015+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:17.336294+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 63463424 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:18.336463+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:19.336621+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:20.336795+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:21.337012+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:22.337175+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:23.337397+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:24.337572+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:25.337864+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:26.338010+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:27.338189+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:28.338375+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:29.338581+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:30.338764+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:31.338955+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:32.342249+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:33.342479+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:34.342664+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:35.342789+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:36.342948+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:37.343174+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:38.347866+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:39.348036+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 63406080 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:40.348324+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 63406080 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:41.348508+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 63406080 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:42.348657+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:43.348781+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:44.348964+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:45.349144+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:46.349348+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:47.349589+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:48.349789+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:49.349982+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:50.350152+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:51.350326+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:52.350482+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:53.350650+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:54.350810+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:55.351119+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:56.351304+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:57.351520+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:58.351731+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:59.351915+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:00.352163+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:01.352354+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:02.352505+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:03.352645+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 63340544 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:04.352796+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 63340544 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:05.353003+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 63340544 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:06.353150+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:07.353621+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:08.353766+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:09.353915+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:10.354127+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:11.354341+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:12.354543+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:13.354753+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:14.354906+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:15.355138+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:16.355344+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:17.355588+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:18.355737+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:19.355898+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:20.356046+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:21.356441+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304726016 unmapped: 63307776 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:22.356647+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:23.356828+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:24.356974+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:25.357817+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:26.358008+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:27.358270+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:28.358437+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:29.358589+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:30.358811+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:31.359110+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:32.359256+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 63275008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:33.359518+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 63275008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:34.359719+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 63275008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:35.359903+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 63275008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:36.360057+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:37.360278+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:38.360414+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:39.360617+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:40.360777+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:41.360954+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 63258624 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:42.361135+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 63258624 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:43.361342+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:44.361498+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:45.361637+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:46.361804+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:47.362017+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:48.362123+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:49.362246+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:50.362386+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:51.362563+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:52.362848+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:53.363041+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:54.363283+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:55.363481+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:56.363646+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:57.363852+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:58.364003+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:59.364198+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304807936 unmapped: 63225856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:00.364337+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:01.364510+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:02.364696+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:03.364844+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:04.365003+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:05.365203+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:06.365325+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:07.365492+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:08.365618+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:09.365777+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:10.365928+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:11.366115+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:12.366261+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:13.366487+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:14.366653+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:15.366960+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304840704 unmapped: 63193088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:16.367147+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304840704 unmapped: 63193088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:17.367391+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304840704 unmapped: 63193088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:18.367570+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304840704 unmapped: 63193088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:19.367748+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304848896 unmapped: 63184896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:20.367935+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304848896 unmapped: 63184896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:21.368158+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304857088 unmapped: 63176704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:22.368331+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304857088 unmapped: 63176704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:23.368562+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:24.368715+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:25.368905+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:26.369121+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:27.369366+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:28.369537+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:29.369748+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 63160320 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:30.369917+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 63160320 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:31.370061+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304898048 unmapped: 63135744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:32.370300+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304898048 unmapped: 63135744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:33.370489+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304898048 unmapped: 63135744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:34.370652+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304898048 unmapped: 63135744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:35.370842+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 63127552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:36.371035+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 63127552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:37.371323+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 63127552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:38.371477+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 63127552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:39.371644+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304914432 unmapped: 63119360 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:40.371859+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304914432 unmapped: 63119360 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:41.372041+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:42.372295+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:43.372487+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:44.372643+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:45.372831+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:46.373023+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:47.373248+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:48.373458+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:49.373635+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b52941000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304930816 unmapped: 63102976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:50.373853+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304930816 unmapped: 63102976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:51.374146+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 63094784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:52.374294+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 63094784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:53.374478+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 63094784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:54.374653+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 63094784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:55.374844+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 254.228393555s of 257.374877930s, submitted: 8
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 63070208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:56.375306+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 324 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0xa0c20b/0xbd8000, compress 0x0/0x0/0x0, omap 0x516f8, meta 0x1109e908), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 63070208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:57.375536+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 63070208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:58.375750+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 324 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0xa0c20b/0xbd8000, compress 0x0/0x0/0x0, omap 0x5176e, meta 0x1109e892), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331560 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 63062016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:59.375990+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 324 ms_handle_reset con 0x562b52941000 session 0x562b57c07340
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:00.376195+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:01.376384+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:02.376590+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:03.376709+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331192 data_alloc: 218103808 data_used: 156129
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:04.376898+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 324 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0xa0c20b/0xbd8000, compress 0x0/0x0/0x0, omap 0x517e4, meta 0x1109e81c), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 63569920 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:05.377218+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.808628559s of 10.056917191s, submitted: 37
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:06.377408+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 63512576 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:07.377614+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 63512576 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:08.377815+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 326 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0xa0f857/0xbdd000, compress 0x0/0x0/0x0, omap 0x522af, meta 0x1109dd51), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3336626 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 63512576 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:09.378182+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:10.378348+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 326 heartbeat osd_stat(store_statfs(0x4edf2f000/0x0/0x4ffc00000, data 0xa0f857/0xbdd000, compress 0x0/0x0/0x0, omap 0x522af, meta 0x1109dd51), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:11.378527+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:12.378711+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 63488000 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:13.378888+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3295730 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:14.379149+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 326 heartbeat osd_stat(store_statfs(0x4ee72f000/0x0/0x4ffc00000, data 0x20f857/0x3dd000, compress 0x0/0x0/0x0, omap 0x52325, meta 0x1109dcdb), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 326 ms_handle_reset con 0x562b538a8400 session 0x562b56c08c40
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:15.379331+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 326 heartbeat osd_stat(store_statfs(0x4ee72f000/0x0/0x4ffc00000, data 0x20f857/0x3dd000, compress 0x0/0x0/0x0, omap 0x52538, meta 0x1109dac8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:16.379586+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:17.379820+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:18.379976+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3294910 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:19.380140+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 326 handle_osd_map epochs [326,327], i have 326, src has [1,327]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.690998077s of 13.717031479s, submitted: 34
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:20.380288+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 63463424 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a1000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:21.380438+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 63463424 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 327 heartbeat osd_stat(store_statfs(0x4ee72a000/0x0/0x4ffc00000, data 0x2112d6/0x3e0000, compress 0x0/0x0/0x0, omap 0x52708, meta 0x1109d8f8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:22.380620+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 327 heartbeat osd_stat(store_statfs(0x4ee72a000/0x0/0x4ffc00000, data 0x2112d6/0x3e0000, compress 0x0/0x0/0x0, omap 0x52708, meta 0x1109d8f8), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:23.380802+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 ms_handle_reset con 0x562b548a1000 session 0x562b56c08e00
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:24.380987+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:25.381246+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:26.381396+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:27.381603+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:28.381878+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:29.384282+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:30.384457+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:31.384624+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:32.384810+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:33.385033+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:34.385187+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:35.385365+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:36.385559+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:37.385901+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:38.386141+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:39.386271+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:40.386441+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:41.386689+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:42.386820+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:43.387005+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:44.387183+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:45.387375+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 63406080 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:46.387542+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:47.387797+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:48.387950+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:49.388238+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:50.388479+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:51.388614+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:52.388817+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:53.388994+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:54.389199+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:55.389409+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:56.389593+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:57.389842+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:58.390041+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:59.390266+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:00.390465+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:01.390739+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:02.391012+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:03.391220+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:04.391413+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:05.391587+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:06.391816+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:07.392108+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:08.392304+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:09.392516+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:10.392675+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:11.392877+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:12.393121+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:13.393327+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:14.393489+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:15.393678+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:16.393881+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:17.394054+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:18.394246+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:19.394418+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:20.394616+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:21.394801+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:22.394967+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:23.395119+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:24.395349+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:25.395517+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:26.395718+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:27.395903+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:28.396169+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc0800
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:29.396353+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 70.411071777s of 70.590003967s, submitted: 17
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 329 ms_handle_reset con 0x562b54dc0800 session 0x562b54fe8000
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:30.396538+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:31.396717+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 329 heartbeat osd_stat(store_statfs(0x4ee724000/0x0/0x4ffc00000, data 0x214a62/0x3e6000, compress 0x0/0x0/0x0, omap 0x52d43, meta 0x1109d2bd), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:32.396893+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:33.397064+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:34.397303+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3303952 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:35.397426+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 329 heartbeat osd_stat(store_statfs(0x4ee724000/0x0/0x4ffc00000, data 0x214a62/0x3e6000, compress 0x0/0x0/0x0, omap 0x52d43, meta 0x1109d2bd), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:36.397597+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:37.397813+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:38.397957+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 329 handle_osd_map epochs [329,330], i have 329, src has [1,330]
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:39.398115+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305782784 unmapped: 62251008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:40.398250+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305782784 unmapped: 62251008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:41.398476+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305782784 unmapped: 62251008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:42.398656+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:43.398789+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:44.398946+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:45.399103+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:46.399313+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:47.399553+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:48.399792+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:49.400039+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:50.400229+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:51.400365+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:52.400561+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:53.400929+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:54.401121+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:55.401278+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:56.401472+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:57.401697+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:58.401913+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:59.402089+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:00.402256+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:01.402507+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:02.402827+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:03.403000+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:04.403171+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:05.403326+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:06.403668+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:07.403947+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:08.404160+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:09.404341+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:10.404581+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:11.404798+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:12.405187+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:13.405312+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:14.405661+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:15.405810+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:16.405991+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:17.406323+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:18.406610+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:19.406800+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:20.406984+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:21.407156+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:22.407311+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:23.407586+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:24.407762+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:25.407997+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:26.408249+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:27.408459+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:28.408666+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:29.408870+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305856512 unmapped: 62177280 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:30.409047+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:31.409260+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:32.409440+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:33.409652+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:34.409926+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:35.410142+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:36.410896+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:37.411142+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:38.411541+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:39.411691+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:40.411954+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:41.412119+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:42.412399+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:43.412639+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:44.412860+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:45.413151+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:46.413331+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:47.413564+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:48.413733+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:49.413994+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 154K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 443 writes, 954 keys, 443 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                           Interval WAL: 443 writes, 192 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:50.414216+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:51.414421+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:52.414579+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:53.414823+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:54.414982+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:55.415115+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:56.415307+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:57.415652+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:58.415834+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:59.416007+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305889280 unmapped: 62144512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:00.416165+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305889280 unmapped: 62144512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:01.416321+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305889280 unmapped: 62144512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:02.416454+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305889280 unmapped: 62144512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:03.416644+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 62062592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:04.416788+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: do_command 'config diff' '{prefix=config diff}'
Dec 13 09:42:37 compute-0 ceph-osd[89221]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 09:42:37 compute-0 ceph-osd[89221]: do_command 'config show' '{prefix=config show}'
Dec 13 09:42:37 compute-0 ceph-osd[89221]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 09:42:37 compute-0 ceph-osd[89221]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 09:42:37 compute-0 ceph-osd[89221]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 09:42:37 compute-0 ceph-osd[89221]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 09:42:37 compute-0 ceph-osd[89221]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 62062592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:05.416948+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 62062592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:42:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:06.417157+0000)
Dec 13 09:42:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306028544 unmapped: 62005248 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:37 compute-0 ceph-osd[89221]: do_command 'log dump' '{prefix=log dump}'
Dec 13 09:42:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4263: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:38 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23240 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} v 0)
Dec 13 09:42:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:42:38 compute-0 ceph-mon[76537]: from='client.23234 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:38 compute-0 ceph-mon[76537]: from='client.23238 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2038311636' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 13 09:42:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:42:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 13 09:42:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2716947977' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 09:42:38 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23244 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} v 0)
Dec 13 09:42:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:42:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 13 09:42:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810040118' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 09:42:38 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23248 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:39 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23252 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 13 09:42:39 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2356441915' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 09:42:39 compute-0 ceph-mon[76537]: pgmap v4263: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:39 compute-0 ceph-mon[76537]: from='client.23240 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:39 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2716947977' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 09:42:39 compute-0 ceph-mon[76537]: from='client.23244 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:39 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:42:39 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/810040118' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 09:42:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4264: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:42:39 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23254 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 09:42:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3732311435' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 09:42:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:42:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:42:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:42:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:42:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:42:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:42:40 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23258 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:40 compute-0 crontab[426866]: (root) LIST (root)
Dec 13 09:42:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 13 09:42:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155563914' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 09:42:40 compute-0 nova_compute[248510]: 2025-12-13 09:42:40.979 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:41 compute-0 ceph-mon[76537]: from='client.23248 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:41 compute-0 ceph-mon[76537]: from='client.23252 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2356441915' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 09:42:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3732311435' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 09:42:41 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23262 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 13 09:42:41 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254522402' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 13 09:42:41 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23266 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4265: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:42 compute-0 nova_compute[248510]: 2025-12-13 09:42:42.138 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:42 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23270 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:42 compute-0 ceph-mgr[76830]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 09:42:42 compute-0 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: 2025-12-13T09:42:42.193+0000 7f1f704e3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:09:56.475783+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 67485696 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:09:57.475995+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 67485696 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456929 data_alloc: 218103808 data_used: 185903
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:09:58.476176+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 67485696 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:09:59.476449+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329465856 unmapped: 67477504 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:00.476618+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329465856 unmapped: 67477504 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:01.476849+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329465856 unmapped: 67477504 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:02.477002+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329465856 unmapped: 67477504 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456929 data_alloc: 218103808 data_used: 185903
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:03.477177+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329465856 unmapped: 67477504 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:04.477321+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329465856 unmapped: 67477504 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:05.477530+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329474048 unmapped: 67469312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561829398a80
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182a1fe8c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc1800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc1800 session 0x561829cac8c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:06.477661+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c5ff000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c5ff000 session 0x56182b708fc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 61.984951019s of 62.126846313s, submitted: 70
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bc5e700
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561829b65a40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329793536 unmapped: 67149824 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:07.477899+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329793536 unmapped: 67149824 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504826 data_alloc: 218103808 data_used: 185903
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa3ccc3/0xbfd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:08.478124+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa3ccc3/0xbfd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329793536 unmapped: 67149824 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:09.478292+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329793536 unmapped: 67149824 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:10.478413+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa3ccc3/0xbfd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329801728 unmapped: 67141632 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:11.478597+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa3ccc3/0xbfd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329801728 unmapped: 67141632 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:12.478729+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa3ccc3/0xbfd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329801728 unmapped: 67141632 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504826 data_alloc: 218103808 data_used: 185903
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:13.478910+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329801728 unmapped: 67141632 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:14.479170+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329801728 unmapped: 67141632 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:15.479393+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329809920 unmapped: 67133440 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:16.479592+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329809920 unmapped: 67133440 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:17.479830+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa3ccc3/0xbfd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329818112 unmapped: 67125248 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504826 data_alloc: 218103808 data_used: 185903
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x561829bf3a40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:18.480065+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc1800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc1800 session 0x56182a0001c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329818112 unmapped: 67125248 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:19.480352+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182cc39800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182cc39800 session 0x56182be2d340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329818112 unmapped: 67125248 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.380788803s of 13.533100128s, submitted: 29
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182b886c40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:20.482060+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbce000/0x0/0x4ffc00000, data 0xa3ccd3/0xbfe000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329818112 unmapped: 67125248 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:21.482277+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329818112 unmapped: 67125248 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:22.482396+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554588 data_alloc: 218103808 data_used: 8312367
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:23.482598+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:24.482746+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbce000/0x0/0x4ffc00000, data 0xa3ccd3/0xbfe000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:25.482895+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:26.483162+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:27.483400+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554588 data_alloc: 218103808 data_used: 8312367
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:28.483580+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:29.483748+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:30.483948+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbce000/0x0/0x4ffc00000, data 0xa3ccd3/0xbfe000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbce000/0x0/0x4ffc00000, data 0xa3ccd3/0xbfe000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:31.484131+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbce000/0x0/0x4ffc00000, data 0xa3ccd3/0xbfe000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.119009972s of 12.188241005s, submitted: 1
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:32.484270+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 333938688 unmapped: 63004672 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601308 data_alloc: 218103808 data_used: 8312367
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:33.484445+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334135296 unmapped: 62808064 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:34.484597+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:35.484742+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb360000/0x0/0x4ffc00000, data 0x12aacd3/0x146c000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:36.484900+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:37.485158+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611734 data_alloc: 234881024 data_used: 9410095
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:38.485329+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb360000/0x0/0x4ffc00000, data 0x12aacd3/0x146c000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:39.485495+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:40.485675+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:41.485864+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:42.486015+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb360000/0x0/0x4ffc00000, data 0x12aacd3/0x146c000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611734 data_alloc: 234881024 data_used: 9410095
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:43.486129+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:44.486288+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:45.486431+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:46.486559+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:47.486758+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611734 data_alloc: 234881024 data_used: 9410095
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:48.486880+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb360000/0x0/0x4ffc00000, data 0x12aacd3/0x146c000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:49.487129+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334028800 unmapped: 62914560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:50.487309+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334028800 unmapped: 62914560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc1800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc1800 session 0x56182b0fc8c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c60c400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c60c400 session 0x56182a1fe540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182c330e00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:51.487468+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829363800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829363800 session 0x561829ace540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.916124344s of 19.432558060s, submitted: 79
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x561831c20700
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561828f05180
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc1800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc1800 session 0x561831c21880
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c60c400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c60c400 session 0x56182bfb8c40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618337d0800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618337d0800 session 0x561829ace1c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334405632 unmapped: 62537728 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:52.487598+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 62472192 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674491 data_alloc: 234881024 data_used: 9414191
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:53.487757+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 62472192 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:54.487930+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eaa13000/0x0/0x4ffc00000, data 0x1bf7cd3/0x1db9000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 62472192 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:55.488138+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 62472192 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:56.488347+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eaa13000/0x0/0x4ffc00000, data 0x1bf7cd3/0x1db9000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:57.488546+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674491 data_alloc: 234881024 data_used: 9414191
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:58.488684+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eaa13000/0x0/0x4ffc00000, data 0x1bf7cd3/0x1db9000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:59.488866+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:00.488999+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:01.489128+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:02.489301+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674491 data_alloc: 234881024 data_used: 9414191
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:03.489424+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.609133720s of 11.786836624s, submitted: 29
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182bc13a40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea9ef000/0x0/0x4ffc00000, data 0x1c1bcd3/0x1ddd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 62308352 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:04.489567+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc1800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 62300160 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:05.489733+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 62300160 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:06.489918+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 62300160 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:07.490130+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3727215 data_alloc: 234881024 data_used: 17754159
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:08.490274+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:09.502335+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea9ed000/0x0/0x4ffc00000, data 0x1c1ccd3/0x1dde000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:10.502503+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:11.502685+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:12.502802+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea9ed000/0x0/0x4ffc00000, data 0x1c1ccd3/0x1dde000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3727215 data_alloc: 234881024 data_used: 17754159
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:13.502943+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:14.503138+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:15.503317+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-mon[76537]: pgmap v4264: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:42 compute-0 ceph-mon[76537]: from='client.23254 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:42 compute-0 ceph-mon[76537]: from='client.23258 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:42 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1155563914' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 09:42:42 compute-0 ceph-mon[76537]: from='client.23262 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:42 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4254522402' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 13 09:42:42 compute-0 ceph-mon[76537]: from='client.23266 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:16.503465+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.139926910s of 13.925807953s, submitted: 5
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:17.503679+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 54566912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3795023 data_alloc: 234881024 data_used: 17813551
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:18.503797+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9ebb000/0x0/0x4ffc00000, data 0x274fcd3/0x2911000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [0,0,0,0,0,0,0,0,15])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343351296 unmapped: 53592064 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:19.503917+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 53460992 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:20.504046+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343605248 unmapped: 53338112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:21.504211+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343826432 unmapped: 53116928 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:22.504348+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343908352 unmapped: 53035008 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818345 data_alloc: 234881024 data_used: 19507611
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:23.504517+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343908352 unmapped: 53035008 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e13000/0x0/0x4ffc00000, data 0x27f6cd3/0x29b8000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:24.504647+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343908352 unmapped: 53035008 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:25.504787+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343908352 unmapped: 53035008 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:26.504901+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:27.505130+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3816841 data_alloc: 234881024 data_used: 19519899
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:28.505310+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x281acd3/0x29dc000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x281acd3/0x29dc000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:29.505740+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x281acd3/0x29dc000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:30.506314+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:31.506464+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x281acd3/0x29dc000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:32.506621+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.503668785s of 15.253365517s, submitted: 133
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182be2d500
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc1800 session 0x56182bc12c40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818569 data_alloc: 234881024 data_used: 19545499
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:33.507205+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c60c400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:34.507383+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344973312 unmapped: 51970048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:35.507513+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c60c400 session 0x56182b855340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9de1000/0x0/0x4ffc00000, data 0x2829cd3/0x29eb000, compress 0x0/0x0/0x0, omap 0x747d8, meta 0x133bb828), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 55361536 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:36.507659+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 55361536 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:37.507843+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 55361536 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:38.508127+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623583 data_alloc: 218103808 data_used: 8891291
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 55361536 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:39.508324+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x561829cac1c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182c331500
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 55361536 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:40.508521+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6001.8 total, 600.0 interval
                                           Cumulative writes: 45K writes, 175K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.74 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3990 writes, 15K keys, 3990 commit groups, 1.0 writes per commit group, ingest: 19.11 MB, 0.03 MB/s
                                           Interval WAL: 3990 writes, 1573 syncs, 2.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.7 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.5 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b5a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b5a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.363       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.363       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.363       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b5a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.8 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb35f000/0x0/0x4ffc00000, data 0x12abcd3/0x146d000, compress 0x0/0x0/0x0, omap 0x74a85, meta 0x133bb57b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 55361536 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:41.508704+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:42.508890+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 55353344 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.131446838s of 10.597537994s, submitted: 44
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:43.509061+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 55353344 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623511 data_alloc: 218103808 data_used: 8891291
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:44.509283+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 55353344 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:45.509507+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x561829398380
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c71/0x428000, compress 0x0/0x0/0x0, omap 0x74ca9, meta 0x133bb357), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:46.509707+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:47.509958+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:48.510131+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:49.510268+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:50.510405+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:51.510575+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:52.510727+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:53.510858+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:54.511042+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:55.511215+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:56.511368+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:57.511542+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:58.511688+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:59.511805+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:00.511942+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:01.512015+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:02.512132+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:03.512282+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:04.512400+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:05.512556+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:06.512712+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:07.512923+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:08.513039+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:09.513191+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:10.513335+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:11.513527+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:12.513716+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:13.513873+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:14.514020+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:15.514217+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:16.514362+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:17.514529+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:18.514657+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:19.514810+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:20.514981+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:21.515142+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338083840 unmapped: 58859520 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:22.515275+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338083840 unmapped: 58859520 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:23.515438+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338083840 unmapped: 58859520 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:24.515592+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338083840 unmapped: 58859520 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:25.515739+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338083840 unmapped: 58859520 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:26.515874+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 58851328 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:27.516061+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 58851328 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:28.516238+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 58851328 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:29.516413+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 58851328 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:30.516554+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 58851328 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:31.518682+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 58851328 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:32.518839+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:33.518973+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x561830d47c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bfb3c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc1800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc1800 session 0x56182bfb96c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182bb11500
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.578239441s of 50.584568024s, submitted: 27
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bb83a40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182bd40fc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182b854c40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c60c400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c60c400 session 0x56182a1fec40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf86000/0x0/0x4ffc00000, data 0x685c71/0x846000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182c08e540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:34.519469+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf86000/0x0/0x4ffc00000, data 0x685c71/0x846000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:35.519630+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:36.519787+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:37.520041+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf03000/0x0/0x4ffc00000, data 0x708c71/0x8c9000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf03000/0x0/0x4ffc00000, data 0x708c71/0x8c9000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:38.520234+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513790 data_alloc: 218103808 data_used: 169785
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf03000/0x0/0x4ffc00000, data 0x708c71/0x8c9000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:39.520476+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:40.520630+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:41.520846+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:42.521049+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:43.521267+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513790 data_alloc: 218103808 data_used: 169785
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:44.521471+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf03000/0x0/0x4ffc00000, data 0x708c71/0x8c9000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:45.521633+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:46.521815+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:47.522020+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf03000/0x0/0x4ffc00000, data 0x708c71/0x8c9000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.418393135s of 14.521083832s, submitted: 6
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182be2d500
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:48.522185+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516668 data_alloc: 218103808 data_used: 169785
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:49.522346+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:50.522556+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:51.522746+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:52.522907+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:53.523115+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3545088 data_alloc: 218103808 data_used: 4920137
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf02000/0x0/0x4ffc00000, data 0x708c94/0x8ca000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:54.523301+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:55.523456+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:56.523612+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:57.523821+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:58.523940+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3545088 data_alloc: 218103808 data_used: 4920137
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf02000/0x0/0x4ffc00000, data 0x708c94/0x8ca000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:59.524107+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf02000/0x0/0x4ffc00000, data 0x708c94/0x8ca000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:00.524342+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.942356110s of 12.959303856s, submitted: 10
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:01.524483+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338231296 unmapped: 58712064 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:02.524681+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 58040320 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:03.524818+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 58040320 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567036 data_alloc: 218103808 data_used: 4929353
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:04.524947+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338911232 unmapped: 58032128 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:05.525089+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa33c94/0xbf5000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:06.525229+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:07.525418+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:08.525575+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573854 data_alloc: 218103808 data_used: 4929353
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:09.525706+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbc9000/0x0/0x4ffc00000, data 0xa41c94/0xc03000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:10.525896+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.332371712s of 10.059819221s, submitted: 28
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:11.526041+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:12.526239+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:13.526375+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573870 data_alloc: 218103808 data_used: 4929353
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:14.526512+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbc9000/0x0/0x4ffc00000, data 0xa41c94/0xc03000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:15.526643+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbc9000/0x0/0x4ffc00000, data 0xa41c94/0xc03000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:16.526786+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 54501376 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6800 session 0x56182cef4e00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a09ac00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a09ac00 session 0x56182b8861c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:17.526977+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:18.527159+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602181 data_alloc: 218103808 data_used: 4929353
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:19.527371+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:20.527599+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:21.527832+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:22.528026+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:23.528214+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602181 data_alloc: 218103808 data_used: 4929353
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:24.528420+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:25.528659+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:26.529141+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:27.529336+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338264064 unmapped: 58679296 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:28.529469+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338264064 unmapped: 58679296 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602181 data_alloc: 218103808 data_used: 4929353
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.482851028s of 17.732896805s, submitted: 27
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:29.529609+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c60dc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338272256 unmapped: 58671104 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:30.529738+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338436096 unmapped: 58507264 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:31.529853+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338436096 unmapped: 58507264 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:32.529995+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338444288 unmapped: 58499072 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:33.530134+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338452480 unmapped: 58490880 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627765 data_alloc: 218103808 data_used: 9248073
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:34.530335+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338460672 unmapped: 58482688 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:35.530464+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:36.531160+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:37.532775+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:38.532906+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627981 data_alloc: 218103808 data_used: 9248073
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:39.533036+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.794153214s of 10.743078232s, submitted: 91
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:40.533124+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb2dc000/0x0/0x4ffc00000, data 0x1325cf6/0x14e8000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341172224 unmapped: 55771136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:41.533223+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:42.533403+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:43.533531+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669113 data_alloc: 218103808 data_used: 9273673
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1c6000/0x0/0x4ffc00000, data 0x1443cf6/0x1606000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:44.533672+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:45.533842+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1c6000/0x0/0x4ffc00000, data 0x1443cf6/0x1606000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:46.533987+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:47.534184+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1a5000/0x0/0x4ffc00000, data 0x1464cf6/0x1627000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:48.534375+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3667489 data_alloc: 218103808 data_used: 9277769
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:49.534545+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:50.534690+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1a5000/0x0/0x4ffc00000, data 0x1464cf6/0x1627000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:51.534819+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:52.534966+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:53.535114+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1a5000/0x0/0x4ffc00000, data 0x1464cf6/0x1627000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3667745 data_alloc: 218103808 data_used: 9285961
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:55.064491+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:56.064663+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.866159439s of 16.156522751s, submitted: 69
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c60dc00 session 0x56182b709a40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x561830d47340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:57.064825+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:58.065002+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eba06000/0x0/0x4ffc00000, data 0xa42c94/0xc04000, compress 0x0/0x0/0x0, omap 0x74d01, meta 0x133bb2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eba06000/0x0/0x4ffc00000, data 0xa42c94/0xc04000, compress 0x0/0x0/0x0, omap 0x74d01, meta 0x133bb2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:59.065144+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3581947 data_alloc: 218103808 data_used: 4929353
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:00.065289+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:01.065440+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182bf65dc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x561831c20700
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561829b40c40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:02.065602+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c84/0x428000, compress 0x0/0x0/0x0, omap 0x750b9, meta 0x133baf47), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:03.065755+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:04.065958+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:05.066152+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:06.066297+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:07.066498+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:08.066750+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:09.066912+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:10.067096+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:11.067258+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:12.067370+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:13.067510+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:14.067636+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:15.067814+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:16.068022+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:17.068164+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:18.068376+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:19.068553+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:20.068840+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:21.069041+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:22.069219+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:23.069400+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:24.069594+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:25.069763+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:26.069948+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:27.070150+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:28.070373+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:29.070518+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:30.070667+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:31.070868+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:32.071025+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:33.071176+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:34.071419+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:35.071604+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:36.071751+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:37.071926+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:38.072190+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:39.072481+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:40.072638+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:41.072873+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:42.073036+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:43.073232+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:44.073402+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:45.073593+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:46.073776+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:47.073954+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:48.074189+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 62169088 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:49.074345+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 62169088 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:50.074490+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 62169088 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:51.074650+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 62169088 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:52.074836+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 62160896 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:53.074982+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 62160896 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:54.075156+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 62160896 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:55.075312+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 62160896 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:56.075494+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334790656 unmapped: 62152704 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a09ac00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a09ac00 session 0x56182bb83500
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182cef41c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182ba7cc40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182c330380
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:57.075649+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 61.001258850s of 61.117053986s, submitted: 73
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 60293120 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182a1fe540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:58.075815+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:59.075933+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576269 data_alloc: 218103808 data_used: 177844
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:00.076120+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:01.076277+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:02.076423+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:03.077166+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:04.077954+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576269 data_alloc: 218103808 data_used: 177844
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:05.078315+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:06.078512+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:07.078681+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:08.078891+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:09.079065+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576269 data_alloc: 218103808 data_used: 177844
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:10.079295+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6800 session 0x561830d46c40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:11.079507+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:12.079698+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335052800 unmapped: 65560576 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182c330e00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:13.079934+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335052800 unmapped: 65560576 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182ba7c8c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.469144821s of 16.633630753s, submitted: 17
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182be2d6c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:14.080141+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335060992 unmapped: 65552384 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3579208 data_alloc: 218103808 data_used: 177844
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:15.080275+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335020032 unmapped: 65593344 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:16.080390+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:17.080631+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:18.080892+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:19.081108+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652168 data_alloc: 234881024 data_used: 12416692
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:20.081236+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:21.081499+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:22.081700+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:23.081956+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:24.082883+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652168 data_alloc: 234881024 data_used: 12416692
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:25.083178+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:26.083375+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.341166496s of 12.350997925s, submitted: 5
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341409792 unmapped: 59203584 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:27.083573+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 58343424 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb079000/0x0/0x4ffc00000, data 0x1591c94/0x1753000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:28.083761+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 58343424 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:29.083934+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716928 data_alloc: 234881024 data_used: 14595764
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:30.084218+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:31.084409+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:32.084579+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:33.084725+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:34.084860+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717184 data_alloc: 234881024 data_used: 14603956
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:35.085053+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:36.085304+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:37.085470+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:38.085706+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:39.085866+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717440 data_alloc: 234881024 data_used: 14612148
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:40.086041+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:41.086208+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:42.086375+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:43.086543+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:44.086727+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343646208 unmapped: 56967168 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717440 data_alloc: 234881024 data_used: 14612148
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:45.086866+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182b709500
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b413c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b413c00 session 0x56182bfb3880
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182b855340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343646208 unmapped: 56967168 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561829cac1c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.109582901s of 19.573698044s, submitted: 104
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561830d47c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182b431180
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6400 session 0x56182bc5ec40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:46.087017+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x561829b65340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561829b65880
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343670784 unmapped: 56942592 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:47.087309+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:48.089721+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:49.089918+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730452 data_alloc: 234881024 data_used: 14612148
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:50.090748+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:51.090922+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:52.091208+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:53.091766+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:54.091918+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730452 data_alloc: 234881024 data_used: 14612148
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:55.092088+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:56.092190+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:57.092530+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.862773895s of 11.917412758s, submitted: 10
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343834624 unmapped: 56778752 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182bb82380
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:58.092719+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadd5000/0x0/0x4ffc00000, data 0x1833cc7/0x19f7000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343842816 unmapped: 56770560 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:59.092885+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344391680 unmapped: 56221696 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740334 data_alloc: 234881024 data_used: 15198916
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:00.093255+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344391680 unmapped: 56221696 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:01.093394+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadd5000/0x0/0x4ffc00000, data 0x1833cc7/0x19f7000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344391680 unmapped: 56221696 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:02.093519+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:03.093676+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:04.093783+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740334 data_alloc: 234881024 data_used: 15198916
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:05.093917+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:06.094053+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadd5000/0x0/0x4ffc00000, data 0x1833cc7/0x19f7000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:07.094240+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:08.094770+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 56205312 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:09.094910+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 56205312 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740334 data_alloc: 234881024 data_used: 15198916
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.189497948s of 12.216222763s, submitted: 13
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:10.095124+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346177536 unmapped: 54435840 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:11.095260+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 54173696 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:12.095423+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea779000/0x0/0x4ffc00000, data 0x1e80cc7/0x2044000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:13.095682+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:14.095877+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791218 data_alloc: 234881024 data_used: 15420612
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:15.096050+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:16.096247+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:17.096492+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:18.096831+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:19.097186+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791234 data_alloc: 234881024 data_used: 15420612
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:20.097298+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:21.097465+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:22.097845+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:23.098051+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:24.098271+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792386 data_alloc: 234881024 data_used: 15502532
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:25.098483+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:26.098689+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:27.098871+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:28.099737+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182be2d340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.931962967s of 18.925085068s, submitted: 81
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182b1e2e00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:29.099937+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788494 data_alloc: 234881024 data_used: 15502532
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:30.100057+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:31.100235+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182cef5340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 53583872 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:32.100399+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 53583872 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:33.100504+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 53583872 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafb2000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x758c9, meta 0x133ba737), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:34.101285+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 53575680 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3724208 data_alloc: 234881024 data_used: 14681780
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:35.101441+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 53575680 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182ba7c540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6800 session 0x561829b41340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:36.101631+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 59138048 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:37.101737+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafb2000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x758c9, meta 0x133ba737), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182b708700
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c84/0x428000, compress 0x0/0x0/0x0, omap 0x75c7a, meta 0x133ba386), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:38.101888+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:39.102015+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:40.102197+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:41.102350+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:42.102473+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:43.102608+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:44.102777+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:45.102960+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:46.103122+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:47.103331+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:48.103584+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:49.103765+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:50.103942+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:51.104151+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b081800 session 0x56182c330fc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:52.104335+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: mgrc ms_handle_reset ms_handle_reset con 0x56182a1c1400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 09:42:42 compute-0 ceph-osd[88086]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: get_auth_request con 0x56182cc39800 auth_method 0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: mgrc handle_mgr_configure stats_period=5
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:53.104496+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b415c00 session 0x56182bd53a40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829362c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a012800 session 0x56182bb836c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b415c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:54.104669+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:55.104984+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:56.105283+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:57.105464+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:58.105692+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:59.105891+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:00.106057+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:01.106260+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:02.106553+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:03.106804+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:04.107024+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:05.107243+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:06.107534+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:07.107716+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:08.108046+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:09.108348+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 59113472 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:10.108550+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 59113472 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:11.108764+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 59113472 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:12.108924+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 59105280 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:13.109111+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182ba7c8c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x561831c216c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561829cac8c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182c330380
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 59105280 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:14.109304+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.973899841s of 45.842048645s, submitted: 73
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 59105280 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527821 data_alloc: 218103808 data_used: 181905
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:15.109493+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182be2d180
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 62676992 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:16.109668+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182b855340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23c8a/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 62676992 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:17.109852+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 62676992 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:18.110117+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 62676992 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:19.110285+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 62668800 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3581350 data_alloc: 218103808 data_used: 185966
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:20.110463+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23cc3/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 62668800 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:21.110618+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 62668800 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:22.110827+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 62660608 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:23.111015+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23cc3/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 62357504 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:24.111213+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620518 data_alloc: 218103808 data_used: 6809198
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:25.111371+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:26.111550+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:27.111793+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:28.112010+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23cc3/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:29.112172+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620518 data_alloc: 218103808 data_used: 6809198
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:30.112332+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:31.112451+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:32.113304+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:33.113534+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.478496552s of 19.308280945s, submitted: 36
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23cc3/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [0,0,1,0,1])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:34.113681+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 61358080 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:35.113840+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 61915136 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3668324 data_alloc: 218103808 data_used: 7542382
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:36.114111+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:37.114294+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:38.114494+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb363000/0x0/0x4ffc00000, data 0x1299cc3/0x145a000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:39.114632+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:40.114758+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681236 data_alloc: 218103808 data_used: 7419502
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:41.114907+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:42.115057+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:43.115240+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0x12b8cc3/0x1479000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:44.115385+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:45.115586+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674556 data_alloc: 218103808 data_used: 7423598
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0x12b8cc3/0x1479000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:46.115788+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0x12b8cc3/0x1479000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.386922836s of 12.756870270s, submitted: 92
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:47.115938+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:48.116300+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:49.116534+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561829b65880
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182a1fe540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182cef41c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c4400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c4400 session 0x56182b0b2000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:50.116654+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561831c20700
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 70705152 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3751276 data_alloc: 218103808 data_used: 7423598
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x561829167dc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bd52000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182bb83a40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcbe000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcbe000 session 0x56182b0fdc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:51.116827+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 70705152 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea739000/0x0/0x4ffc00000, data 0x1ed1cd2/0x2093000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:52.117667+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:53.117820+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:54.118035+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:55.118190+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3751276 data_alloc: 218103808 data_used: 7423598
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:56.118446+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea739000/0x0/0x4ffc00000, data 0x1ed1cd2/0x2093000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:57.118608+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561829b65500
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:58.126285+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182bb708c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x561829bf2c40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.984330177s of 12.172765732s, submitted: 21
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182be2c000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:59.126452+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:00.126648+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 70557696 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753806 data_alloc: 218103808 data_used: 7520878
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:01.126839+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:02.127034+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea738000/0x0/0x4ffc00000, data 0x1ed1ce2/0x2094000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:03.127150+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:04.127333+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:05.127632+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3819854 data_alloc: 234881024 data_used: 18727534
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:06.127842+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:07.128033+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea738000/0x0/0x4ffc00000, data 0x1ed1ce2/0x2094000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:08.128263+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:09.128539+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:10.128736+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3819262 data_alloc: 234881024 data_used: 18731630
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.963133812s of 11.972195625s, submitted: 3
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:11.128888+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 59408384 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:12.129064+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e91000/0x0/0x4ffc00000, data 0x276ace2/0x292d000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 59326464 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:13.129272+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 59858944 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:14.129426+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e57000/0x0/0x4ffc00000, data 0x27a9ce2/0x296c000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:15.129598+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3884820 data_alloc: 234881024 data_used: 19830382
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:16.129823+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e57000/0x0/0x4ffc00000, data 0x27a9ce2/0x296c000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:17.130063+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:18.130286+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e57000/0x0/0x4ffc00000, data 0x27a9ce2/0x296c000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e5d000/0x0/0x4ffc00000, data 0x27acce2/0x296f000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:19.130491+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829499000 session 0x56182bb11880
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:20.130661+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879500 data_alloc: 234881024 data_used: 19830382
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e5d000/0x0/0x4ffc00000, data 0x27acce2/0x296f000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:21.130805+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e5d000/0x0/0x4ffc00000, data 0x27acce2/0x296f000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.312377930s of 10.672085762s, submitted: 111
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:22.130943+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e5d000/0x0/0x4ffc00000, data 0x27acce2/0x296f000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0c00 session 0x561829b41a40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:23.131174+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:24.131340+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182bc5fdc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182bd53c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:25.131464+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bd4d340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689771 data_alloc: 218103808 data_used: 7411310
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:26.131619+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:27.131790+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:28.132129+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa8000/0x0/0x4ffc00000, data 0x12cbcc3/0x148c000, compress 0x0/0x0/0x0, omap 0x76139, meta 0x133b9ec7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:29.132329+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:30.132497+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689771 data_alloc: 218103808 data_used: 7411310
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:31.132668+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa8000/0x0/0x4ffc00000, data 0x12cbcc3/0x148c000, compress 0x0/0x0/0x0, omap 0x76139, meta 0x133b9ec7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:32.132835+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.554950714s of 10.802752495s, submitted: 39
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182b709a40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182b0fc8c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:33.133007+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:34.133147+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:35.133306+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:36.133483+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:37.133705+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:38.134007+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:39.134235+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:40.134421+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:41.134621+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:42.134807+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:43.135022+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:44.135167+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:45.137896+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:46.138115+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:47.138260+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:48.138453+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:49.138665+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:50.138827+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:51.139038+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:52.139248+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:53.139487+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:54.332702+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:55.332863+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:56.333025+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:57.333144+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:58.333339+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:59.333490+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:00.333677+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:01.333855+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:02.334707+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:03.334959+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:04.335337+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:05.335504+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:06.335650+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:07.336360+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:08.336803+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:09.337413+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:10.338162+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:11.338382+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:12.339455+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 65789952 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:13.340402+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 65789952 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:14.341213+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.951286316s of 42.007965088s, submitted: 30
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 65789952 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:15.341366+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553079 data_alloc: 218103808 data_used: 189964
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:16.342036+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:17.342715+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:18.343003+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:19.343191+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:20.343523+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553079 data_alloc: 218103808 data_used: 189964
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 65757184 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:21.343950+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 65757184 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:22.344374+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 65757184 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182cef4540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182cef4a80
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x561831c20c40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182a000e00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:23.344624+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 65757184 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:24.344827+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.872159004s of 10.219871521s, submitted: 29
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 62029824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182bb10380
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182b1e3c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182b0fddc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:25.345032+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611703 data_alloc: 218103808 data_used: 189980
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x561829acefc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347250688 unmapped: 69115904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182bc12fc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9f5000/0x0/0x4ffc00000, data 0xc16c71/0xdd7000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:26.345240+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347250688 unmapped: 69115904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:27.345450+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347250688 unmapped: 69115904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:28.345679+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 69107712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:29.345870+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9f5000/0x0/0x4ffc00000, data 0xc16c71/0xdd7000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 69107712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:30.346133+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9f5000/0x0/0x4ffc00000, data 0xc16c71/0xdd7000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611703 data_alloc: 218103808 data_used: 189980
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 69107712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:31.346275+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bd4c700
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 70713344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9f5000/0x0/0x4ffc00000, data 0xc16c71/0xdd7000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:32.346433+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 70713344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:33.346602+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 70443008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:34.346744+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:35.346889+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675816 data_alloc: 234881024 data_used: 10259996
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9d1000/0x0/0x4ffc00000, data 0xc3ac71/0xdfb000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:36.347038+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9d1000/0x0/0x4ffc00000, data 0xc3ac71/0xdfb000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:37.347330+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:38.347560+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:39.347692+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:40.347978+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675816 data_alloc: 234881024 data_used: 10259996
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:41.348122+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:42.348324+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9d1000/0x0/0x4ffc00000, data 0xc3ac71/0xdfb000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9d1000/0x0/0x4ffc00000, data 0xc3ac71/0xdfb000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:43.348496+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.931827545s of 19.378744125s, submitted: 11
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:44.348630+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 62586880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:45.348818+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759650 data_alloc: 234881024 data_used: 11513372
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da5000/0x0/0x4ffc00000, data 0x16c6c71/0x1887000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:46.349020+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:47.349216+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:48.349394+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da5000/0x0/0x4ffc00000, data 0x16c6c71/0x1887000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:49.349565+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da5000/0x0/0x4ffc00000, data 0x16c6c71/0x1887000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:50.349704+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759650 data_alloc: 234881024 data_used: 11513372
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:51.349852+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:52.350054+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:53.350212+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:54.350366+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:55.350518+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0x16c8c71/0x1889000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753666 data_alloc: 234881024 data_used: 11513372
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:56.350672+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.650314331s of 12.895412445s, submitted: 88
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:57.350862+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da2000/0x0/0x4ffc00000, data 0x16c9c71/0x188a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:58.351161+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:59.351319+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:00.351517+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753866 data_alloc: 234881024 data_used: 11513372
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da2000/0x0/0x4ffc00000, data 0x16c9c71/0x188a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:01.351707+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182cef5c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182a1ffdc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:02.351839+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:03.351998+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:04.352132+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:05.352302+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3794327 data_alloc: 234881024 data_used: 11513372
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:06.352501+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f2000/0x0/0x4ffc00000, data 0x1b78cd3/0x1d3a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:07.352677+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f2000/0x0/0x4ffc00000, data 0x1b78cd3/0x1d3a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:08.352880+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f2000/0x0/0x4ffc00000, data 0x1b78cd3/0x1d3a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:09.353066+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:10.353288+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3794327 data_alloc: 234881024 data_used: 11513372
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.721897125s of 14.075347900s, submitted: 35
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182c08e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:11.353514+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c611800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:12.353628+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:13.353806+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:14.353995+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f1000/0x0/0x4ffc00000, data 0x1b78cf6/0x1d3b000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:15.354180+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f1000/0x0/0x4ffc00000, data 0x1b78cf6/0x1d3b000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817488 data_alloc: 234881024 data_used: 15285788
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:16.354350+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:17.354491+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:18.354661+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:19.354829+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:20.354993+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f1000/0x0/0x4ffc00000, data 0x1b78cf6/0x1d3b000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817872 data_alloc: 234881024 data_used: 15289884
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:21.355242+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 62488576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:22.355476+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.944444656s of 11.960687637s, submitted: 7
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 61833216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:23.355608+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62734336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:24.355733+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7ef4000/0x0/0x4ffc00000, data 0x23c6cf6/0x2589000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x156f98bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:25.355926+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7ef4000/0x0/0x4ffc00000, data 0x23c6cf6/0x2589000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x156f98bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3875166 data_alloc: 234881024 data_used: 15687196
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:26.356157+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:27.356311+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:28.356541+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:29.356683+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355803136 unmapped: 60563456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:30.356798+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7f00000/0x0/0x4ffc00000, data 0x23c9cf6/0x258c000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x156f98bb), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869110 data_alloc: 234881024 data_used: 15691292
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355803136 unmapped: 60563456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:31.356937+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 60555264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:32.357171+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 60555264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:33.357473+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.615018845s of 11.088137627s, submitted: 103
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c611800 session 0x561830d46c40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 60555264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182d6b0400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:34.357620+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182d6b0400 session 0x56182cef4380
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 60530688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:35.358260+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765060 data_alloc: 234881024 data_used: 11513372
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 60530688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:36.358373+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8c00000/0x0/0x4ffc00000, data 0x16c9c71/0x188a000, compress 0x0/0x0/0x0, omap 0x76b7d, meta 0x156f9483), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 60530688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:37.358561+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182b708a80
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182fe8b180
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 60530688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:38.358806+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x5618293988c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:39.359162+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:40.359320+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:41.359487+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:42.359665+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:43.359835+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:44.359977+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:45.360159+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:46.360343+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:47.360519+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:48.360752+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:49.360899+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:50.361123+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:51.361330+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:52.361506+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:53.361680+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:54.361854+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:55.362019+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 64634880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:56.362186+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 64634880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:57.362340+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:58.362522+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:59.362643+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:00.362784+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:01.363288+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:02.363415+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:03.363559+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:04.363723+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:05.363890+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:06.364131+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:07.364350+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:08.364628+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:09.364771+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:10.364993+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:11.365229+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:12.365407+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:13.365513+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:14.365717+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182b854000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182b431c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182ba7c540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182fe8a8c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.068725586s of 41.208274841s, submitted: 78
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 64077824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x561829ace540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:15.365881+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640912 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:16.366144+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:17.366272+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:18.366402+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:19.366583+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ee000/0x0/0x4ffc00000, data 0xbdec61/0xd9e000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:20.366752+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640912 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:21.366919+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:22.367119+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:23.367345+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182bfb9180
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ee000/0x0/0x4ffc00000, data 0xbdec61/0xd9e000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 64528384 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:24.367496+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 64528384 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c611800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:25.367638+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3701496 data_alloc: 234881024 data_used: 10021255
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:26.367836+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ca000/0x0/0x4ffc00000, data 0xc02c61/0xdc2000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:27.367967+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:28.368153+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:29.368377+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:30.368643+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3701496 data_alloc: 234881024 data_used: 10021255
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:31.368802+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ca000/0x0/0x4ffc00000, data 0xc02c61/0xdc2000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:32.368976+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:33.369125+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ca000/0x0/0x4ffc00000, data 0xc02c61/0xdc2000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:34.369285+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:35.369574+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702776 data_alloc: 234881024 data_used: 10060167
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.024909973s of 21.150117874s, submitted: 14
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:36.369703+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 63397888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:37.369924+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e94a6000/0x0/0x4ffc00000, data 0xe26c61/0xfe6000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [0,0,0,0,0,0,0,1,0,27])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62332928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:38.370146+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 62021632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:39.370307+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9139000/0x0/0x4ffc00000, data 0x118bc61/0x134b000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9139000/0x0/0x4ffc00000, data 0x118bc61/0x134b000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:40.370468+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6601.8 total, 600.0 interval
                                           Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2279 writes, 9430 keys, 2279 commit groups, 1.0 writes per commit group, ingest: 11.32 MB, 0.02 MB/s
                                           Interval WAL: 2279 writes, 877 syncs, 2.60 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768766 data_alloc: 234881024 data_used: 11496839
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:41.370700+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:42.370871+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:43.371041+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:44.371160+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:45.371311+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768782 data_alloc: 234881024 data_used: 11496839
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:46.371466+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:47.371608+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:48.371757+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:49.371984+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:50.372173+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768782 data_alloc: 234881024 data_used: 11496839
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:51.372288+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:52.372427+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:53.372538+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:54.372695+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:55.372867+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768782 data_alloc: 234881024 data_used: 11496839
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:56.373049+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:57.373290+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:58.373486+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:59.373648+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:00.373778+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769038 data_alloc: 234881024 data_used: 11505031
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:01.373992+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:02.374218+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:03.374387+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:04.374526+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:05.374698+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 60948480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769038 data_alloc: 234881024 data_used: 11505031
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:06.374827+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 60948480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:07.374989+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 60948480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:08.375125+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 60948480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829f62c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.785942078s of 32.167488098s, submitted: 70
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829f62c00 session 0x561831c20540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:09.375384+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61431808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:10.375558+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61431808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:11.375767+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3825944 data_alloc: 234881024 data_used: 11505031
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61431808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:12.375998+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8846000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8846000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:13.376207+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bfb9500
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:14.376373+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182b855880
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:15.376546+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182b887dc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x561828f04fc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:16.376725+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826205 data_alloc: 234881024 data_used: 11505031
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:17.376888+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8847000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:18.377092+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:19.377253+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:20.377388+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:21.377555+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3873569 data_alloc: 234881024 data_used: 19325831
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8847000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:22.377729+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:23.377908+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:24.378051+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:25.378288+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:26.378449+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8847000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3873569 data_alloc: 234881024 data_used: 19325831
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356892672 unmapped: 59473920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:27.378601+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356892672 unmapped: 59473920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.664230347s of 19.858476639s, submitted: 27
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:28.378898+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 55656448 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8691000/0x0/0x4ffc00000, data 0x1c3bc61/0x1dfb000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:29.379026+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 55631872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:30.379162+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:31.379386+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3930201 data_alloc: 234881024 data_used: 20498311
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:32.379576+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:33.379783+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7fe5000/0x0/0x4ffc00000, data 0x22e7c61/0x24a7000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:34.379952+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:35.380154+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:36.380298+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3929153 data_alloc: 234881024 data_used: 20498311
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 55345152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:37.380480+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7fc6000/0x0/0x4ffc00000, data 0x2306c61/0x24c6000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 55345152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7fc6000/0x0/0x4ffc00000, data 0x2306c61/0x24c6000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:38.380857+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 55345152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7fc6000/0x0/0x4ffc00000, data 0x2306c61/0x24c6000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:39.381025+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 55345152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:40.381214+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 55328768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:41.381497+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3929665 data_alloc: 234881024 data_used: 20559751
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 55328768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182bd53180
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.727113724s of 13.411382675s, submitted: 117
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x561829399c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bb83dc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:42.381694+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:43.381848+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:44.381988+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x11a0c61/0x1360000, compress 0x0/0x0/0x0, omap 0x76e29, meta 0x156f91d7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:45.382393+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x11a0c61/0x1360000, compress 0x0/0x0/0x0, omap 0x76e29, meta 0x156f91d7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:46.382876+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3776487 data_alloc: 234881024 data_used: 11566471
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:47.383691+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:48.384187+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x11a0c61/0x1360000, compress 0x0/0x0/0x0, omap 0x76e29, meta 0x156f91d7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c611800 session 0x561829398a80
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bb82540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:49.384383+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182c331880
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:50.384767+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:51.385107+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:52.385355+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:53.385733+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:54.385989+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:55.386128+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:56.386350+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:57.386533+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:58.386807+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:59.387001+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:00.387195+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:01.387426+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:02.387588+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:03.387749+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:04.387997+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:05.388187+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:06.389015+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:07.389221+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:08.389423+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:09.390290+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:10.391013+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:11.391124+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:12.391286+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:13.391446+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:14.391659+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:15.391807+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:16.391996+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:17.392145+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:18.392586+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:19.393121+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:20.393260+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:21.393447+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:22.393608+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bfb8c40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x56182b887dc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x561829b64700
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c611800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c611800 session 0x561831c21180
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.109107971s of 41.198600769s, submitted: 61
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182bd41180
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182b0fcc40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bd4ddc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x56182bb708c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:23.394004+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bb83dc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:24.394206+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:25.394357+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c2000/0x0/0x4ffc00000, data 0xb09c71/0xcca000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:26.394597+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662357 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:27.394845+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:28.395156+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c2000/0x0/0x4ffc00000, data 0xb09c71/0xcca000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:29.395386+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:30.395546+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c611800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:31.395696+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c2000/0x0/0x4ffc00000, data 0xb09c71/0xcca000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c611800 session 0x561829ace540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3663102 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:32.395919+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.737547874s of 10.004721642s, submitted: 42
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:33.396111+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:34.396385+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:35.396545+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:36.396703+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716094 data_alloc: 218103808 data_used: 9047959
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bfb3dc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:37.396878+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:38.397149+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:39.397286+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:40.397467+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:41.397616+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716094 data_alloc: 218103808 data_used: 9047959
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:42.397791+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:43.397953+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:44.398150+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:45.398298+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:46.398440+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716094 data_alloc: 218103808 data_used: 9047959
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:47.399193+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:48.399488+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x56182c08f180
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:49.399702+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:50.399877+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:51.400166+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716094 data_alloc: 218103808 data_used: 9047959
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:52.400704+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:53.400859+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:54.401024+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182bfb3500
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:55.401185+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.629669189s of 22.757272720s, submitted: 64
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182c331a40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:56.401589+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:57.401831+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:58.402159+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:59.402840+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:00.402979+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:01.403482+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:02.403765+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:03.403992+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:04.404265+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:05.404427+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:06.404703+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:07.405144+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:08.405466+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:09.405701+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:10.405963+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:11.406213+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:12.406446+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:13.406716+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:14.406940+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:15.407053+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:16.407203+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:17.407382+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:18.407816+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:19.407962+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:20.408161+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:21.408440+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:22.408638+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:23.408913+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:24.409134+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:25.409314+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:26.409524+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:27.409707+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:28.409997+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:29.410217+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:30.410340+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:31.410520+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:32.410674+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:33.410840+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:34.410971+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:35.411107+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:36.411273+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:37.411440+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:38.411616+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:39.411756+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:40.411893+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:41.412049+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:42.412258+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:43.412481+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:44.412585+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:45.412725+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:46.412869+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:47.413130+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:48.413316+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:49.413496+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:50.413657+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:51.413821+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:52.414053+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.619342804s of 57.669017792s, submitted: 25
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:53.414222+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182a001180
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182bd40c40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182be2c000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x561829399c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182c3316c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 70803456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:54.414418+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bfb2000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 70803456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:55.414505+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182b854e00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9ed5000/0x0/0x4ffc00000, data 0x3f7c61/0x5b7000, compress 0x0/0x0/0x0, omap 0x77891, meta 0x156f876f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561831c21a40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 70803456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:56.414634+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x56182b4308c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9ed5000/0x0/0x4ffc00000, data 0x3f7c61/0x5b7000, compress 0x0/0x0/0x0, omap 0x77891, meta 0x156f876f), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628047 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 70803456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:57.414786+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:58.415186+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344547328 unmapped: 71819264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:59.415345+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344547328 unmapped: 71819264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:00.415484+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344547328 unmapped: 71819264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:01.415647+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182c08ec40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3637451 data_alloc: 218103808 data_used: 1815908
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344580096 unmapped: 71786496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:02.415775+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77c7d, meta 0x156f8383), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182bb82000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:03.415950+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:04.416203+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:05.416372+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:06.416568+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:07.416759+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:08.416960+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:09.417152+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:10.417333+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:11.417475+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:12.417636+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:13.417874+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:14.418292+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:15.418548+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:16.418872+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:17.419111+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:18.419397+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:19.419652+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:20.419875+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:21.420205+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:22.420447+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:23.420662+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:24.420914+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:25.421134+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:26.421342+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:27.421837+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:28.422265+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:29.422536+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:30.422728+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:31.422966+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:32.423250+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:33.423794+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:34.424176+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:35.424501+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:36.424915+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:37.425233+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:38.425550+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:39.425856+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:40.426061+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:41.426351+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:42.426580+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:43.426816+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344637440 unmapped: 71729152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:44.427191+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344645632 unmapped: 71720960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:45.427515+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:46.427755+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:47.428053+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:48.428432+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:49.428716+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:50.428932+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:51.429401+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:52.429710+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:53.430051+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:54.430409+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:55.430645+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:56.431207+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:57.431681+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:58.432203+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:59.433159+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:00.433478+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 71688192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:01.433760+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 71688192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:02.434214+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 71688192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:03.434831+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:04.435260+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:05.435908+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:06.436749+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:07.437654+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:08.438458+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:09.439550+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:10.440120+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 71671808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:11.440806+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 71671808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:12.441142+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 71671808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 78.625038147s of 79.510787964s, submitted: 41
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:13.441492+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 71655424 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 313 ms_handle_reset con 0x56182b411c00 session 0x56182ba7c540
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:14.441852+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 71655424 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:15.442048+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 71655424 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea062000/0x0/0x4ffc00000, data 0x26970d/0x427000, compress 0x0/0x0/0x0, omap 0x77f1b, meta 0x156f80e5), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:16.442373+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:17.442655+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619786 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:18.442906+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea062000/0x0/0x4ffc00000, data 0x26970d/0x427000, compress 0x0/0x0/0x0, omap 0x77f1b, meta 0x156f80e5), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:19.443237+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:20.443484+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:21.443846+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:22.444223+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619786 data_alloc: 218103808 data_used: 202084
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:23.444494+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:24.444846+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea062000/0x0/0x4ffc00000, data 0x26970d/0x427000, compress 0x0/0x0/0x0, omap 0x77f1b, meta 0x156f80e5), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 313 ms_handle_reset con 0x561829b7e000 session 0x56182ba7c8c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.686821938s of 11.737939835s, submitted: 28
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344743936 unmapped: 71622656 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:25.445062+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344743936 unmapped: 71622656 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:26.445273+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:27.445426+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:28.445844+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:29.446001+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:30.446257+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:31.446448+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:32.446644+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:33.447430+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:34.447714+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:35.447952+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:36.448388+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:37.448958+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344776704 unmapped: 71589888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:38.449792+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344776704 unmapped: 71589888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:39.449982+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344776704 unmapped: 71589888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:40.450162+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344784896 unmapped: 71581696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:41.450548+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344784896 unmapped: 71581696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:42.450736+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344784896 unmapped: 71581696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:43.451055+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:44.451301+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:45.451429+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:46.451639+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:47.451888+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:48.452147+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:49.452420+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:50.452597+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:51.452729+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:52.452933+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:53.453091+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:54.453288+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:55.453410+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:56.453554+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344809472 unmapped: 71557120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:57.453734+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344809472 unmapped: 71557120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:58.454047+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:59.454261+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:00.454413+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:01.454555+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:02.454744+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:03.454895+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:04.455052+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:05.455245+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:06.455406+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:07.455608+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:08.455832+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:09.455981+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:10.456205+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:11.456440+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:12.456648+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:13.456832+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:14.457139+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:15.457351+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:16.457524+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:17.457654+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:18.457905+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:19.458113+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:20.458245+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:21.458389+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:22.458555+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:23.458740+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:24.458923+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:25.459133+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:26.459293+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:27.459488+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 71499776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:28.459717+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:29.459908+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:30.460064+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:31.460331+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:32.460535+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:33.460747+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344883200 unmapped: 71483392 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:34.460978+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344883200 unmapped: 71483392 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:35.461171+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:36.461317+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:37.461477+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:38.462351+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:39.462783+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:40.463046+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:41.463620+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344899584 unmapped: 71467008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:42.463862+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344899584 unmapped: 71467008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:43.464102+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:44.464328+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:45.464547+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:46.464801+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:47.465181+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:48.465467+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:49.465658+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:50.465831+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:51.466020+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:52.466182+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:53.466319+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:54.466448+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:55.466629+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:56.466783+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:57.467012+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:58.467394+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:59.467823+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 71434240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:00.468006+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 71434240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:01.468201+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:02.468475+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:03.468882+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:04.469093+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:05.469360+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:06.469571+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:07.469841+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 71417856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:08.470198+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 71417856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:09.471308+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 71417856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:10.471553+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 71417856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:11.471779+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 ms_handle_reset con 0x56182a00e000 session 0x56182bd416c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 ms_handle_reset con 0x56182a1c6c00 session 0x56182bb11340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 71409664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:12.472053+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206180
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 71409664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:13.472405+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 71409664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:14.472586+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 71409664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:15.472850+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 71401472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:16.473268+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 71401472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:17.473429+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206180
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 71401472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:18.473741+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 71401472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:19.474009+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344973312 unmapped: 71393280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:20.474229+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344973312 unmapped: 71393280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:21.474387+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 117.142692566s of 117.148269653s, submitted: 14
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344973312 unmapped: 71393280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:22.474557+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 315 ms_handle_reset con 0x56182c0e9400 session 0x56182c08efc0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623367 data_alloc: 218103808 data_used: 206129
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 71360512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:23.474708+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345014272 unmapped: 71352320 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:24.474892+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 71344128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:25.475037+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 315 heartbeat osd_stat(store_statfs(0x4ea05e000/0x0/0x4ffc00000, data 0x26cd26/0x42a000, compress 0x0/0x0/0x0, omap 0x78b82, meta 0x156f747e), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 315 heartbeat osd_stat(store_statfs(0x4ea05e000/0x0/0x4ffc00000, data 0x26cd26/0x42a000, compress 0x0/0x0/0x0, omap 0x78b82, meta 0x156f747e), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 71344128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:26.475233+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 315 handle_osd_map epochs [315,316], i have 315, src has [1,316]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 316 ms_handle_reset con 0x56182bcc0400 session 0x56182bc13340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345030656 unmapped: 71335936 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:27.475402+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3625319 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345030656 unmapped: 71335936 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:28.475581+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 316 heartbeat osd_stat(store_statfs(0x4ea05e000/0x0/0x4ffc00000, data 0x26e8f3/0x42c000, compress 0x0/0x0/0x0, omap 0x78c09, meta 0x156f73f7), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345038848 unmapped: 71327744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:29.475766+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345038848 unmapped: 71327744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:30.475979+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345038848 unmapped: 71327744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:31.476180+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.064450264s of 10.179132462s, submitted: 41
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345055232 unmapped: 71311360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:32.476416+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628093 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345055232 unmapped: 71311360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:33.476593+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 317 heartbeat osd_stat(store_statfs(0x4ea05b000/0x0/0x4ffc00000, data 0x27038e/0x42f000, compress 0x0/0x0/0x0, omap 0x78c90, meta 0x156f7370), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 71303168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 317 heartbeat osd_stat(store_statfs(0x4ea05b000/0x0/0x4ffc00000, data 0x27038e/0x42f000, compress 0x0/0x0/0x0, omap 0x78c90, meta 0x156f7370), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:34.476748+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: get_auth_request con 0x56182a213000 auth_method 0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 71286784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:35.476937+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 71286784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:36.477129+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 71286784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:37.477295+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3630867 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 71286784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:38.477561+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345088000 unmapped: 71278592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:39.477702+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 ms_handle_reset con 0x56182a00e000 session 0x56182bb11880
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 71262208 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:40.477856+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:41.478010+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 71262208 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:42.478202+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:43.478380+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:44.478526+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:45.478681+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:46.478853+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:47.479012+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:48.479250+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 71245824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:49.479423+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 71245824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:50.479620+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 71245824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:51.479809+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:52.480024+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:53.480177+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:54.480376+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:55.480602+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:56.480767+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:57.480988+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:58.481206+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:59.481397+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:00.481573+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:01.481810+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:02.482013+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:03.482188+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:04.482378+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:05.482559+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:06.482772+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:07.482936+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:08.483172+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:09.483411+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:10.483673+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:11.483915+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:12.484125+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 71204864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:13.484310+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 71204864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:14.484513+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345169920 unmapped: 71196672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:15.484651+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:16.484867+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:17.485007+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:18.485390+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:19.485516+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:20.485669+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:21.485864+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:22.486197+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:23.486336+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:24.486504+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:25.486656+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:26.486836+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:27.487017+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:28.487269+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:29.487490+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:30.487683+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:31.487909+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:32.488154+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:33.488308+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:34.488595+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:35.488857+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:36.489034+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 71155712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:37.489321+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 71155712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:38.489546+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:39.489726+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:40.489952+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:41.490185+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:42.490324+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:43.490536+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:44.490698+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 71131136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:45.490835+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 71131136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:46.490983+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 71131136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:47.491119+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 71122944 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:48.491332+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 71122944 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:49.491493+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:50.491707+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:51.491989+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:52.492181+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:53.492397+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:54.492582+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:55.492859+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:56.493231+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:57.493447+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:58.493824+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:59.494151+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:00.494456+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 71098368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:01.494747+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 71098368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:02.495046+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 71098368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:03.495438+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:04.495668+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:05.495973+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:06.496233+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:07.496511+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:08.496768+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 71081984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:09.496943+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 71081984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:10.497171+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:11.497390+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:12.497608+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:13.497873+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:14.498142+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:15.498360+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:16.498571+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:17.498766+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:18.498970+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:19.499150+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:20.499297+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:21.499487+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:22.499676+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:23.499863+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 71049216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:24.500011+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 71049216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:25.500188+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 71049216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:26.500353+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 71049216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:27.500525+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 71041024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 116.263618469s of 116.304878235s, submitted: 39
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:28.500704+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679712 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 79429632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 319 handle_osd_map epochs [320,320], i have 320, src has [1,320]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 320 ms_handle_reset con 0x56182a1c6c00 session 0x56182be2c000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:29.500905+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:30.501144+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 78422016 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 ms_handle_reset con 0x56182b411c00 session 0x56182c08ec40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:31.501301+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e9052000/0x0/0x4ffc00000, data 0x127558b/0x143a000, compress 0x0/0x0/0x0, omap 0x7a247, meta 0x156f5db9), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:32.501524+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:33.501761+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:34.501965+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:35.502232+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:36.502421+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:37.502571+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:38.502832+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:39.502962+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 78381056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:40.503147+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 78381056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:41.503322+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 78381056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:42.503506+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 78381056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:43.503709+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:44.503857+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:45.504006+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-12-13T09:30:46.504215+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _finish_auth 0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:46.505451+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:47.504591+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:48.504853+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:49.505014+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:50.505212+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:51.505356+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:52.508113+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:53.508392+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:54.508598+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:55.508752+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:56.508932+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:57.509365+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:58.509643+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:59.509801+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:00.509982+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:01.510171+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:02.510424+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:03.510591+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 78340096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:04.510806+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 78340096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:05.511225+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 78340096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:06.511491+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.994350433s of 38.509582520s, submitted: 26
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 78315520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:07.511691+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 78315520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:08.511918+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728172 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 78315520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:09.512240+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904f000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a571, meta 0x156f5a8f), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 78315520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:10.512414+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 322 ms_handle_reset con 0x56182c0e9400 session 0x56182ba7c000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:11.512567+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:12.512808+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:13.513042+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730846 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:14.513281+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:15.513464+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 322 heartbeat osd_stat(store_statfs(0x4e904a000/0x0/0x4ffc00000, data 0x1278d17/0x1440000, compress 0x0/0x0/0x0, omap 0x7ac4c, meta 0x156f53b4), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 322 heartbeat osd_stat(store_statfs(0x4e904a000/0x0/0x4ffc00000, data 0x1278d17/0x1440000, compress 0x0/0x0/0x0, omap 0x7ac4c, meta 0x156f53b4), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:16.513610+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:17.513757+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:18.513937+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730846 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:19.514119+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 322 handle_osd_map epochs [322,323], i have 322, src has [1,323]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.453020096s of 13.245205879s, submitted: 36
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e904a000/0x0/0x4ffc00000, data 0x1278d17/0x1440000, compress 0x0/0x0/0x0, omap 0x7ac4c, meta 0x156f53b4), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:20.514290+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:21.514433+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:22.514739+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:23.514967+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:24.515152+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:25.515383+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 77201408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:26.515647+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 77201408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:27.515816+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 77201408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:28.516006+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:29.516210+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:30.516934+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:31.517666+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:32.517815+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:33.517985+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347578368 unmapped: 77185024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:34.518181+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347578368 unmapped: 77185024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:35.518406+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:36.518573+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:37.518739+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:38.518948+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:39.519217+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:40.519422+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7201.8 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 997 writes, 3370 keys, 997 commit groups, 1.0 writes per commit group, ingest: 2.81 MB, 0.00 MB/s
                                           Interval WAL: 997 writes, 422 syncs, 2.36 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread fragmentation_score=0.004687 took=0.000062s
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:41.519575+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:42.519719+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:43.519891+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347594752 unmapped: 77168640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:44.520041+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347594752 unmapped: 77168640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:45.520175+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347594752 unmapped: 77168640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:46.520364+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347594752 unmapped: 77168640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:47.520554+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 77160448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:48.520762+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 77160448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:49.520955+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 77160448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:50.521169+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 77160448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:51.521324+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x5618299ef400 session 0x561831c201c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347611136 unmapped: 77152256 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:52.521603+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: mgrc ms_handle_reset ms_handle_reset con 0x56182cc39800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 09:42:42 compute-0 ceph-osd[88086]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: get_auth_request con 0x56182bcc0400 auth_method 0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: mgrc handle_mgr_configure stats_period=5
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x561829362c00 session 0x56182cef4c40
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x56182b415c00 session 0x56182bd4c1c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829109800
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:53.521751+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:54.522043+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:55.522273+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:56.522471+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:57.522636+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:58.523226+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:59.523433+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:00.523609+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:01.523842+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:02.524019+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:03.524232+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:04.524425+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:05.524653+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:06.524887+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:07.525050+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:08.525297+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:09.525535+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:10.525734+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:11.525948+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347635712 unmapped: 77127680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:12.526177+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347635712 unmapped: 77127680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:13.526362+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347635712 unmapped: 77127680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:14.526560+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347635712 unmapped: 77127680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:15.526752+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347643904 unmapped: 77119488 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:16.526988+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347643904 unmapped: 77119488 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:17.527211+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:18.527460+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:19.527673+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:20.527907+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:21.528099+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:22.528251+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:23.528497+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347660288 unmapped: 77103104 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:24.528801+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347660288 unmapped: 77103104 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:25.528991+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:26.529160+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:27.529418+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:28.529631+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:29.531024+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:30.531202+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:31.531891+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:32.532344+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:33.533278+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347676672 unmapped: 77086720 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:34.533658+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347676672 unmapped: 77086720 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:35.534124+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:36.534568+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:37.534847+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:38.535290+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:39.535740+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:40.536052+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:41.536408+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:42.536709+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:43.537222+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:44.537617+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:45.537883+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:46.538194+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:47.538507+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:48.538931+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:49.539260+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:50.539512+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:51.539716+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:52.539946+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:53.540249+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:54.540510+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:55.540758+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:56.541043+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:57.541382+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:58.541746+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:59.543197+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:00.543557+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:01.544554+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:02.545040+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:03.545763+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:04.546479+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:05.546752+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:06.547363+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:07.547649+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:08.548193+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:09.548627+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 77029376 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:10.548873+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 77029376 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:11.549304+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 77029376 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:12.549688+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:13.550222+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:14.550654+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:15.550868+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:16.551200+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:17.551721+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:18.552266+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x56182a1c6000 session 0x56182fe8a380
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:19.552511+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:20.552767+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:21.553047+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x56182ba4bc00 session 0x561830d47880
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:22.553335+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:23.553627+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:24.553875+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:25.554123+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:26.554327+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 127.064315796s of 127.072921753s, submitted: 13
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:27.554504+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347774976 unmapped: 76988416 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:28.554812+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347783168 unmapped: 76980224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:29.555020+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347791360 unmapped: 76972032 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:30.555224+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347799552 unmapped: 76963840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:31.555459+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347799552 unmapped: 76963840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:32.555628+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347799552 unmapped: 76963840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:33.556196+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347799552 unmapped: 76963840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:34.556340+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 76931072 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:35.556498+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:36.557006+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:37.557189+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:38.557361+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:39.557606+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:40.557855+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:41.558038+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 76906496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:42.558209+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 76906496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:43.558385+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 76906496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:44.558596+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 76898304 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:45.558823+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 76898304 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:46.559004+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:47.559216+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:48.559445+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:49.559618+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:50.559750+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:51.559922+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:52.560270+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:53.560519+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:54.560733+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:55.560907+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:56.561056+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:57.561257+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:58.561521+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:59.561680+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 76873728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:00.561868+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 76873728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:01.562035+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 76873728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:02.562230+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:03.562374+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:04.562565+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:05.562761+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:06.562884+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:07.563151+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:08.563347+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:09.563531+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:10.563670+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:11.563942+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:12.564147+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:13.564313+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:14.564483+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.015445709s of 48.233345032s, submitted: 110
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:15.564655+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:16.564894+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:17.565159+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:18.565348+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:19.565524+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:20.565701+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:21.565875+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 76840960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:22.566130+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 76840960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:23.566305+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 76840960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:24.566486+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 76840960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:25.566663+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:26.566892+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:27.567141+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:28.567340+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:29.567523+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:30.567688+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:31.567902+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 76824576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:32.568065+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.083396912s of 17.435754776s, submitted: 6
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 76824576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:33.568304+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 76808192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:34.568493+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 76808192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:35.568698+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 76808192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:36.568917+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:37.569120+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:38.569258+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:39.569432+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:40.569590+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:41.569760+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:42.569942+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:43.570189+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:44.570373+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:45.570550+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:46.570768+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:47.570959+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 76791808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:48.571197+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 76791808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:49.571371+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 76791808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:50.571551+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 76791808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:51.571734+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 76791808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:52.572024+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 76783616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:53.572265+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 76783616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:54.576065+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 76783616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:55.576286+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 76775424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:56.576497+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 76775424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:57.576683+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 76767232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:58.576899+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 76767232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:59.577179+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 76767232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:00.577390+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 76767232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:01.577552+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 76767232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:02.577742+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 76767232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:03.577929+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 76759040 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:04.578182+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 76759040 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:05.578407+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 76759040 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:06.578582+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 76759040 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:07.578777+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 76759040 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:08.579118+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 76750848 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:09.579407+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 76750848 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:10.579644+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 76750848 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:11.579882+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 76742656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:12.580221+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 76742656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:13.580592+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 76734464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:14.580769+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 76734464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:15.581164+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 76734464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:16.581530+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 76734464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:17.581943+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 76734464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:18.582288+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 76734464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:19.582484+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:20.582921+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:21.583261+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:22.583540+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:23.583724+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:24.584026+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:25.584244+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:26.584454+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:27.584652+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 76709888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:28.584916+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 76709888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:29.585404+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 76709888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:30.585808+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 76709888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:31.586058+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 76701696 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:32.586473+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 76701696 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:33.586977+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 76701696 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:34.587454+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 76701696 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:35.587777+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 76701696 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:36.588153+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 76701696 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:37.588383+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 76693504 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:38.588779+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 76693504 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:39.589029+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 76693504 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:40.589285+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 76693504 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:41.589462+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 76693504 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:42.589680+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 76693504 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:43.589861+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 76677120 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:44.590057+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 76677120 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:45.590317+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 76660736 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:46.590514+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 76660736 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:47.590720+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 76660736 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:48.590947+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 76660736 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:49.591146+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 76660736 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:50.591315+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 76660736 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:51.591477+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:52.591650+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:53.591845+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:54.592022+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:55.592953+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:56.593243+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:57.593431+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:58.595232+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:59.595408+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 76644352 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:00.595559+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 76644352 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:01.595762+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 76636160 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:02.595903+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 76636160 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:03.596117+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 76627968 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:04.596279+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 76627968 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:05.596408+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 76627968 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:06.596549+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:07.596757+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 76627968 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:08.597010+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:09.597192+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:10.597341+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:11.597546+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:12.597749+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:13.597855+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:14.598050+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:15.598279+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:16.598432+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 76611584 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:17.598657+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 76611584 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:18.598869+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 76595200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:19.599008+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 76595200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:20.599125+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 76595200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:21.599295+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 76595200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:22.599448+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 76595200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:23.599625+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 76595200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:24.599788+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:25.599967+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:26.600160+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:27.600336+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:28.600577+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:29.600814+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:30.600985+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:31.601160+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:32.601326+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 76578816 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:33.601697+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 76578816 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:34.601876+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 76570624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:35.602056+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 76570624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:36.602280+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 76570624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:37.602469+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 76570624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:38.602737+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 76570624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:39.602874+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 76570624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:40.603122+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 76554240 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:41.603312+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 76554240 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:42.603493+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 76554240 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:43.603644+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 76554240 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:44.603805+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 76554240 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:45.604036+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 76546048 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:46.604340+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 76546048 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:47.604536+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 76546048 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:48.604827+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 76546048 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:49.605034+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 76546048 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:50.605290+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 76537856 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:51.605519+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 76537856 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:52.607358+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 76537856 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:53.607559+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 76537856 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:54.607712+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 76537856 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:55.607868+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 76537856 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:56.608048+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:57.608317+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:58.608546+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:59.608754+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:00.609041+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:01.609296+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:02.609495+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:03.609661+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:04.609856+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348250112 unmapped: 76513280 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:05.610042+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348250112 unmapped: 76513280 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:06.610304+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348250112 unmapped: 76513280 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:07.610515+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348250112 unmapped: 76513280 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:08.610778+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348250112 unmapped: 76513280 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:09.610964+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 76505088 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:10.611134+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 76505088 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:11.611371+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:12.611566+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:13.611735+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:14.611910+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:15.612142+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:16.612373+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:17.612617+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:18.612865+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:19.613116+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 76488704 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:20.613338+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 76488704 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:21.613513+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 76488704 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:22.613730+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 76480512 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:23.613950+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 76480512 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:24.614149+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 76480512 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:25.614338+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 76480512 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:26.614539+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 76480512 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:27.614722+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348291072 unmapped: 76472320 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:28.614941+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348291072 unmapped: 76472320 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:29.615156+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348299264 unmapped: 76464128 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:30.615328+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348307456 unmapped: 76455936 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:31.615523+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:32.615703+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:33.615898+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:34.616186+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:35.616487+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:36.616705+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:37.616890+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:38.617103+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 76439552 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:39.617425+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 76439552 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:40.617603+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 76439552 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:41.617798+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 76439552 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:42.618000+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 76439552 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:43.618206+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 76431360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:44.618480+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 76431360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:45.618708+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 76431360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:46.618905+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 76431360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:47.619129+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 76431360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:48.619352+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 76431360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:49.619558+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 76423168 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:50.619743+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 76414976 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:51.619939+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 76414976 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:52.620150+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 76414976 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:53.620297+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:54.620490+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:55.620704+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:56.620891+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:57.621110+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:58.621410+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:59.621633+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:00.621814+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:01.622054+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 76398592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:02.622265+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 76398592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:03.622456+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 76398592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:04.622724+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 76398592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:05.622901+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 76398592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:06.623052+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 76398592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:07.623245+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348372992 unmapped: 76390400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:08.623429+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348372992 unmapped: 76390400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:09.623618+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348372992 unmapped: 76390400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:10.623784+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348372992 unmapped: 76390400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:11.623995+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 76374016 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:12.624172+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 76374016 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:13.626158+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 76374016 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:14.626334+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 76374016 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:15.626568+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 76357632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:16.626722+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 76357632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:17.626927+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 76357632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:18.627202+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 76357632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:19.627392+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 76357632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:20.627625+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 76357632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:21.627824+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348413952 unmapped: 76349440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:22.627967+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348413952 unmapped: 76349440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:23.628227+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:24.628480+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:25.628726+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:26.628952+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:27.629152+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:28.629354+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:29.629516+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:30.638595+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:31.638809+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348438528 unmapped: 76324864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:32.638969+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348438528 unmapped: 76324864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:33.639165+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 76316672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:34.639333+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 76316672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:35.639553+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 76316672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:36.639757+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 76316672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:37.639989+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 76316672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:38.640235+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 76316672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:39.640424+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348454912 unmapped: 76308480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:40.640618+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348454912 unmapped: 76308480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:41.640852+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 76300288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:42.641347+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 76300288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:43.641561+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 76300288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:44.641780+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 76300288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:45.641975+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 76300288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:46.642218+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 76300288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:47.642438+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 250.118301392s of 255.261520386s, submitted: 16
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 76283904 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:48.642673+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 76283904 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:49.642824+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 76259328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:50.643168+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733452 data_alloc: 218103808 data_used: 210410
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 76259328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:51.643366+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 76259328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:52.643542+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 76259328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:53.643730+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 323 handle_osd_map epochs [323,324], i have 323, src has [1,324]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 76251136 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:54.643919+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 76251136 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:55.644061+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3736040 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348528640 unmapped: 76234752 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:56.644256+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 324 heartbeat osd_stat(store_statfs(0x4e9045000/0x0/0x4ffc00000, data 0x127c363/0x1445000, compress 0x0/0x0/0x0, omap 0x7ad5a, meta 0x156f52a6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348528640 unmapped: 76234752 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:57.644408+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.513689041s of 10.024970055s, submitted: 41
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:58.644622+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:59.644811+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 324 ms_handle_reset con 0x56182c0e9400 session 0x56182bb70a80
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:00.644990+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693904 data_alloc: 218103808 data_used: 210375
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:01.645218+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c60cc00
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:02.645445+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 324 heartbeat osd_stat(store_statfs(0x4e9847000/0x0/0x4ffc00000, data 0xa7c340/0xc44000, compress 0x0/0x0/0x0, omap 0x7ad5a, meta 0x156f52a6), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:03.645644+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:04.645842+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348553216 unmapped: 76210176 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 325 heartbeat osd_stat(store_statfs(0x4e9842000/0x0/0x4ffc00000, data 0xa7ddbf/0xc47000, compress 0x0/0x0/0x0, omap 0x7b965, meta 0x156f469b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:05.645996+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3697334 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348553216 unmapped: 76210176 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:06.646158+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 325 heartbeat osd_stat(store_statfs(0x4e9843000/0x0/0x4ffc00000, data 0xa7ddbf/0xc47000, compress 0x0/0x0/0x0, omap 0x7b965, meta 0x156f469b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 325 handle_osd_map epochs [326,326], i have 326, src has [1,326]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348569600 unmapped: 76193792 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:07.646342+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.852182388s of 10.416975975s, submitted: 26
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348585984 unmapped: 76177408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:08.646606+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348585984 unmapped: 76177408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:09.646819+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 326 heartbeat osd_stat(store_statfs(0x4e9840000/0x0/0x4ffc00000, data 0xa7f9af/0xc4a000, compress 0x0/0x0/0x0, omap 0x7b9ed, meta 0x156f4613), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 76169216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:10.646985+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700108 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 76169216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:11.647240+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348602368 unmapped: 76161024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:12.647476+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348602368 unmapped: 76161024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:13.647683+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 326 heartbeat osd_stat(store_statfs(0x4e9840000/0x0/0x4ffc00000, data 0xa7f9af/0xc4a000, compress 0x0/0x0/0x0, omap 0x7b9ed, meta 0x156f4613), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 76136448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:14.647884+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 326 ms_handle_reset con 0x56182c60cc00 session 0x56182b709340
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 76136448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:15.648166+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659840 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 76136448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:16.648412+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 76136448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:17.648602+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 76136448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:18.649154+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 326 heartbeat osd_stat(store_statfs(0x4ea040000/0x0/0x4ffc00000, data 0x27f9af/0x44a000, compress 0x0/0x0/0x0, omap 0x7b9ed, meta 0x156f4613), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 76136448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:19.649316+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 76120064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.377560616s of 12.102450371s, submitted: 13
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:20.649466+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc1400
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662614 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 76103680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:21.649654+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 67698688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:22.649816+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348692480 unmapped: 76070912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 ms_handle_reset con 0x56182bcc1400 session 0x561829bf3500
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:23.649946+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348692480 unmapped: 76070912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:24.650167+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 76062720 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:25.650393+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 76062720 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:26.650685+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 76062720 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:27.650894+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348708864 unmapped: 76054528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:28.651190+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348708864 unmapped: 76054528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:29.651354+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 76046336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:30.651521+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 76046336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:31.651698+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 76046336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:32.651854+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 76046336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:33.651995+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 76046336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:34.652256+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 76046336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:35.652434+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348733440 unmapped: 76029952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:36.652592+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348733440 unmapped: 76029952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:37.652801+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348733440 unmapped: 76029952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:38.653270+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 76021760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:39.653442+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 76021760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:40.653616+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 76021760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:41.653813+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 76021760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:42.654021+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 76021760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:43.654428+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 76021760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:44.654656+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:45.654835+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:46.655152+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:47.655453+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:48.655732+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:49.655917+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:50.656144+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:51.656270+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:52.656463+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:53.656633+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:54.656783+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:55.656926+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:56.657146+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:57.657401+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:58.657620+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:59.657809+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:00.658005+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348782592 unmapped: 75980800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:01.658184+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348782592 unmapped: 75980800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:02.658339+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 75972608 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:03.658521+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 75972608 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:04.658688+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 75972608 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:05.659113+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 75972608 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:06.659291+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 75972608 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:07.659483+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 75972608 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:08.659749+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:09.659944+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:10.660174+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:11.660411+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:12.660572+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:13.660785+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:14.661000+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:15.661199+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:16.661469+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 75948032 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:17.661695+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 75948032 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:18.661896+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 75939840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:19.662097+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 75939840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:20.662319+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 75939840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:21.662434+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 75939840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:22.662640+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 75939840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:23.662920+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 75939840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:24.663173+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 75923456 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:25.663377+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 75923456 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:26.663552+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 75923456 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:27.663727+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 75923456 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:28.664160+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 75923456 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618337d0000
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 67.868362427s of 68.700660706s, submitted: 25
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:29.664425+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 75923456 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 329 ms_handle_reset con 0x5618337d0000 session 0x56182b8541c0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:30.664612+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348872704 unmapped: 75890688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 329 heartbeat osd_stat(store_statfs(0x4ea037000/0x0/0x4ffc00000, data 0x284bba/0x453000, compress 0x0/0x0/0x0, omap 0x7caed, meta 0x156f3513), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672841 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:31.664793+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348872704 unmapped: 75890688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 329 heartbeat osd_stat(store_statfs(0x4ea037000/0x0/0x4ffc00000, data 0x284bba/0x453000, compress 0x0/0x0/0x0, omap 0x7caed, meta 0x156f3513), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:32.664941+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:33.665061+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:34.665278+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:35.665431+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 329 heartbeat osd_stat(store_statfs(0x4ea037000/0x0/0x4ffc00000, data 0x284bba/0x453000, compress 0x0/0x0/0x0, omap 0x7caed, meta 0x156f3513), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672841 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:36.665598+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:37.665756+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 329 heartbeat osd_stat(store_statfs(0x4ea037000/0x0/0x4ffc00000, data 0x284bba/0x453000, compress 0x0/0x0/0x0, omap 0x7caed, meta 0x156f3513), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:38.666033+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:39.666442+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.121034622s of 11.191111565s, submitted: 41
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:40.666581+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 75849728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:41.666766+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 75849728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:42.667020+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 75849728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:43.667195+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 75849728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:44.667395+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 75849728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:45.667546+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 75841536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:46.667750+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 75841536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:47.667971+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 75841536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:48.668178+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 75841536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:49.668341+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 75841536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:50.668497+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 75833344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:51.668728+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 75833344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:52.668938+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 75833344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:53.669178+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 75833344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:54.669411+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 75833344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:55.669625+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 75833344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:56.669828+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 75825152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:57.670026+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 75825152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:58.670336+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 75825152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:59.670502+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 75816960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:00.670700+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 75816960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:01.670875+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 75816960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:02.671425+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 75816960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:03.671638+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 75808768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:04.672638+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 75808768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:05.673241+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 75808768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:06.673422+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 75800576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:07.673800+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 75800576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:08.674187+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 75800576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:09.674346+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 75800576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:10.674775+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 75800576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:11.675018+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:12.675417+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:13.675618+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:14.675859+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:15.676212+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:16.676472+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:17.676625+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:18.676833+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:19.677210+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 75784192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:20.677456+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 75784192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:21.677716+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 75784192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:22.686564+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 75776000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:23.686729+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 75767808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:24.686921+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 75767808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:25.687167+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 75767808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:26.687373+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 75767808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:27.687537+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 75759616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:28.687795+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 75759616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:29.687943+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 75759616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:30.688162+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 75759616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:31.688334+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 75751424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:32.688521+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 75751424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:33.688698+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 75751424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:34.689167+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 75751424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:35.689473+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 75751424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:36.690551+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:37.691455+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:38.692274+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:39.692545+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:40.693902+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7801.8 total, 600.0 interval
                                           Cumulative writes: 49K writes, 190K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.72 writes per sync, written: 0.18 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 525 writes, 1278 keys, 525 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
                                           Interval WAL: 525 writes, 234 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:41.694419+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:42.695134+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:43.695382+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349036544 unmapped: 75726848 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:44.695677+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:45.695892+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:46.696106+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:47.696293+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:48.696466+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:49.698422+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:50.698598+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:51.698769+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 75710464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:52.698986+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 75710464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:53.699165+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 75710464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:54.699356+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 75710464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:55.699513+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 75710464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:56.699793+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 75710464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:57.700169+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 75702272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:58.700416+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 75702272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:59.700835+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 75694080 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:00.700999+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:01.701184+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:02.701339+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:03.701507+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:04.701679+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:05.701863+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:06.702045+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:07.702227+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349102080 unmapped: 75661312 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:08.702442+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 75571200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: do_command 'config diff' '{prefix=config diff}'
Dec 13 09:42:42 compute-0 ceph-osd[88086]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 09:42:42 compute-0 ceph-osd[88086]: do_command 'config show' '{prefix=config show}'
Dec 13 09:42:42 compute-0 ceph-osd[88086]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:09.702619+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 09:42:42 compute-0 ceph-osd[88086]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 09:42:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:42:42 compute-0 ceph-osd[88086]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 09:42:42 compute-0 ceph-osd[88086]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 75702272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:10.702889+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 75776000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:42:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:11.703057+0000)
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:42:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 75702272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:42 compute-0 ceph-osd[88086]: do_command 'log dump' '{prefix=log dump}'
Dec 13 09:42:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 13 09:42:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/585406901' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 13 09:42:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 13 09:42:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1011349358' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 13 09:42:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 13 09:42:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514135407' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 13 09:42:43 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 09:42:43 compute-0 rsyslogd[1002]: imjournal from <np0005558241:ceph-osd>: begin to drop messages due to rate-limiting
Dec 13 09:42:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 13 09:42:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/15232624' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 13 09:42:43 compute-0 ceph-mon[76537]: pgmap v4265: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:43 compute-0 ceph-mon[76537]: from='client.23270 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/585406901' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 13 09:42:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1011349358' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 13 09:42:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2514135407' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 13 09:42:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/15232624' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 13 09:42:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 13 09:42:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108172072' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 13 09:42:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4266: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 13 09:42:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960063324' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 13 09:42:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 13 09:42:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1355722505' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 13 09:42:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 13 09:42:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3232765828' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 13 09:42:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:42:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 13 09:42:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388265955' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 13 09:42:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 13 09:42:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398863311' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 13 09:42:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 13 09:42:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1838193388' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 13 09:42:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2108172072' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 13 09:42:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1960063324' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 13 09:42:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1355722505' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 13 09:42:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3232765828' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 13 09:42:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 13 09:42:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4264047997' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 13 09:42:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 13 09:42:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3530308985' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 13 09:42:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4267: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:45 compute-0 nova_compute[248510]: 2025-12-13 09:42:45.981 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 13 09:42:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526987252' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 09:42:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 13 09:42:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4027133742' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 13 09:42:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 13 09:42:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3945019973' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 13 09:42:46 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23304 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:46 compute-0 ceph-mon[76537]: pgmap v4266: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1388265955' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 13 09:42:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2398863311' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 13 09:42:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1838193388' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 13 09:42:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4264047997' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 13 09:42:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3530308985' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 13 09:42:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/526987252' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 09:42:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4027133742' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 13 09:42:47 compute-0 nova_compute[248510]: 2025-12-13 09:42:47.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:47 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23308 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:47 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23307 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:08.528027+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341098496 unmapped: 62087168 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:09.528180+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341098496 unmapped: 62087168 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4ea930000/0x0/0x4ffc00000, data 0x1ce4f7c/0x1e9c000, compress 0x0/0x0/0x0, omap 0x72187, meta 0x133bde79), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:10.528417+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341098496 unmapped: 62087168 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:11.528637+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3613037 data_alloc: 218103808 data_used: 792680
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341098496 unmapped: 62087168 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:12.528801+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341098496 unmapped: 62087168 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4ea930000/0x0/0x4ffc00000, data 0x1ce4f7c/0x1e9c000, compress 0x0/0x0/0x0, omap 0x72187, meta 0x133bde79), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:13.528972+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341098496 unmapped: 62087168 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:14.529165+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341106688 unmapped: 62078976 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:15.529369+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341106688 unmapped: 62078976 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:16.529550+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3613037 data_alloc: 218103808 data_used: 792680
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341106688 unmapped: 62078976 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:17.529751+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341106688 unmapped: 62078976 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:18.529925+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4ea930000/0x0/0x4ffc00000, data 0x1ce4f7c/0x1e9c000, compress 0x0/0x0/0x0, omap 0x72187, meta 0x133bde79), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341114880 unmapped: 62070784 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4ea930000/0x0/0x4ffc00000, data 0x1ce4f7c/0x1e9c000, compress 0x0/0x0/0x0, omap 0x72187, meta 0x133bde79), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:19.530158+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4ea930000/0x0/0x4ffc00000, data 0x1ce4f7c/0x1e9c000, compress 0x0/0x0/0x0, omap 0x72187, meta 0x133bde79), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341114880 unmapped: 62070784 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000c8dc00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.350701332s of 13.484014511s, submitted: 23
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000c8dc00 session 0x560004ad6a80
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:20.530483+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4ea930000/0x0/0x4ffc00000, data 0x1ce4f7c/0x1e9c000, compress 0x0/0x0/0x0, omap 0x72187, meta 0x133bde79), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000247f400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d9800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341114880 unmapped: 62070784 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:21.530623+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615747 data_alloc: 218103808 data_used: 792680
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 341114880 unmapped: 62070784 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:22.530754+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 62480384 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:23.530884+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 62480384 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:24.531055+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 62480384 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4ea92f000/0x0/0x4ffc00000, data 0x1ce4f9f/0x1e9d000, compress 0x0/0x0/0x0, omap 0x72310, meta 0x133bdcf0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:25.531276+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 62480384 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:26.531436+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654915 data_alloc: 218103808 data_used: 7306856
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 62480384 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:27.531609+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 62480384 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:28.531792+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 62480384 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:29.531989+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 62480384 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:30.532199+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4ea92f000/0x0/0x4ffc00000, data 0x1ce4f9f/0x1e9d000, compress 0x0/0x0/0x0, omap 0x72310, meta 0x133bdcf0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 62480384 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:31.532434+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654915 data_alloc: 218103808 data_used: 7306856
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 62480384 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.083189964s of 12.243203163s, submitted: 12
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4ea92f000/0x0/0x4ffc00000, data 0x1ce4f9f/0x1e9d000, compress 0x0/0x0/0x0, omap 0x72310, meta 0x133bdcf0), peers [1,2] op hist [0,0,1,0,6])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:32.532563+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343965696 unmapped: 59219968 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:33.532686+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343097344 unmapped: 60088320 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:34.532873+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343556096 unmapped: 59629568 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:35.533022+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343556096 unmapped: 59629568 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e8b68000/0x0/0x4ffc00000, data 0x2903f9f/0x2abc000, compress 0x0/0x0/0x0, omap 0x72310, meta 0x1455dcf0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:36.533177+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733411 data_alloc: 218103808 data_used: 7684712
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343556096 unmapped: 59629568 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:37.533329+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343556096 unmapped: 59629568 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:38.533488+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343556096 unmapped: 59629568 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:39.533649+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343556096 unmapped: 59629568 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:40.533813+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 59686912 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:41.533969+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e8b4f000/0x0/0x4ffc00000, data 0x2924f9f/0x2add000, compress 0x0/0x0/0x0, omap 0x72310, meta 0x1455dcf0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728699 data_alloc: 218103808 data_used: 7684712
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 59686912 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:42.534169+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 59686912 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:43.534341+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 59686912 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:44.534511+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 59686912 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:45.534637+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 59686912 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:46.534785+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728699 data_alloc: 218103808 data_used: 7684712
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343498752 unmapped: 59686912 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e8b4f000/0x0/0x4ffc00000, data 0x2924f9f/0x2add000, compress 0x0/0x0/0x0, omap 0x72310, meta 0x1455dcf0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:47.534950+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 59678720 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:48.535107+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.035373688s of 16.355480194s, submitted: 114
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 59678720 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:49.535339+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343506944 unmapped: 59678720 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:50.535560+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343515136 unmapped: 59670528 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e8b49000/0x0/0x4ffc00000, data 0x292af9f/0x2ae3000, compress 0x0/0x0/0x0, omap 0x72310, meta 0x1455dcf0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:51.535690+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095000 session 0x560002726fc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3776731 data_alloc: 218103808 data_used: 7684712
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343621632 unmapped: 59564032 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e847f000/0x0/0x4ffc00000, data 0x2ff4f9f/0x31ad000, compress 0x0/0x0/0x0, omap 0x72398, meta 0x1455dc68), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:52.535814+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343621632 unmapped: 59564032 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:53.536014+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343621632 unmapped: 59564032 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:54.536249+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343621632 unmapped: 59564032 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:55.536476+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343621632 unmapped: 59564032 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:56.536749+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3776731 data_alloc: 218103808 data_used: 7684712
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343621632 unmapped: 59564032 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:57.536928+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343621632 unmapped: 59564032 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e837c000/0x0/0x4ffc00000, data 0x30f7f9f/0x32b0000, compress 0x0/0x0/0x0, omap 0x72398, meta 0x1455dc68), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:58.537056+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343621632 unmapped: 59564032 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:10:59.537255+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e837c000/0x0/0x4ffc00000, data 0x30f7f9f/0x32b0000, compress 0x0/0x0/0x0, omap 0x72398, meta 0x1455dc68), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343621632 unmapped: 59564032 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:00.537495+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560009881000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560009881000 session 0x560001232c40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343752704 unmapped: 59432960 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:01.537644+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003099400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003099400 session 0x560009d988c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3776731 data_alloc: 218103808 data_used: 7684712
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343891968 unmapped: 59293696 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:02.537783+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e837c000/0x0/0x4ffc00000, data 0x30f7f9f/0x32b0000, compress 0x0/0x0/0x0, omap 0x72398, meta 0x1455dc68), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560003967c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343891968 unmapped: 59293696 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000c8dc00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.455738068s of 14.616323471s, submitted: 13
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:03.537899+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000c8dc00 session 0x5600004abdc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343891968 unmapped: 59293696 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:04.538035+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343900160 unmapped: 59285504 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:05.538207+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e837b000/0x0/0x4ffc00000, data 0x30f7fc2/0x32b1000, compress 0x0/0x0/0x0, omap 0x72684, meta 0x1455d97c), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e837b000/0x0/0x4ffc00000, data 0x30f7fc2/0x32b1000, compress 0x0/0x0/0x0, omap 0x72684, meta 0x1455d97c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343900160 unmapped: 59285504 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:06.538364+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3780029 data_alloc: 218103808 data_used: 7684712
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 343900160 unmapped: 59285504 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:07.538501+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560009881000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344440832 unmapped: 58744832 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:08.538613+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344440832 unmapped: 58744832 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:09.538783+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344440832 unmapped: 58744832 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:10.538981+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344440832 unmapped: 58744832 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e837b000/0x0/0x4ffc00000, data 0x30f7fc2/0x32b1000, compress 0x0/0x0/0x0, omap 0x72684, meta 0x1455d97c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:11.539300+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e837b000/0x0/0x4ffc00000, data 0x30f7fc2/0x32b1000, compress 0x0/0x0/0x0, omap 0x72684, meta 0x1455d97c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3828285 data_alloc: 234881024 data_used: 15852136
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344440832 unmapped: 58744832 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:12.539482+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e837b000/0x0/0x4ffc00000, data 0x30f7fc2/0x32b1000, compress 0x0/0x0/0x0, omap 0x72684, meta 0x1455d97c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344440832 unmapped: 58744832 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:13.539652+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344440832 unmapped: 58744832 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:14.539817+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344440832 unmapped: 58744832 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:15.539991+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.382331848s of 12.260192871s, submitted: 13
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344449024 unmapped: 58736640 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:16.540164+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3830349 data_alloc: 234881024 data_used: 15856232
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344449024 unmapped: 58736640 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:17.540392+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 55713792 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:18.540485+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ecc000/0x0/0x4ffc00000, data 0x35a6fc2/0x3760000, compress 0x0/0x0/0x0, omap 0x726b9, meta 0x1455d947), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 57253888 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:19.559188+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 345931776 unmapped: 57253888 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:20.560439+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 56205312 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:21.560622+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7cbc000/0x0/0x4ffc00000, data 0x37b6fc2/0x3970000, compress 0x0/0x0/0x0, omap 0x726b9, meta 0x1455d947), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3870453 data_alloc: 234881024 data_used: 15859816
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 56205312 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:22.560762+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 56205312 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:23.560929+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 56205312 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:24.561047+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346988544 unmapped: 56197120 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:25.561188+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346988544 unmapped: 56197120 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:26.561353+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7cb4000/0x0/0x4ffc00000, data 0x37befc2/0x3978000, compress 0x0/0x0/0x0, omap 0x726b9, meta 0x1455d947), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3870453 data_alloc: 234881024 data_used: 15859816
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346988544 unmapped: 56197120 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:27.561548+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7cb4000/0x0/0x4ffc00000, data 0x37befc2/0x3978000, compress 0x0/0x0/0x0, omap 0x726b9, meta 0x1455d947), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.391067505s of 12.241982460s, submitted: 45
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346996736 unmapped: 56188928 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:28.561710+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346996736 unmapped: 56188928 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:29.561961+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346996736 unmapped: 56188928 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:30.562163+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7cb4000/0x0/0x4ffc00000, data 0x37befc2/0x3978000, compress 0x0/0x0/0x0, omap 0x726ee, meta 0x1455d912), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346996736 unmapped: 56188928 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:31.562295+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3870709 data_alloc: 234881024 data_used: 15868008
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346996736 unmapped: 56188928 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:32.562623+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7cb4000/0x0/0x4ffc00000, data 0x37befc2/0x3978000, compress 0x0/0x0/0x0, omap 0x726ee, meta 0x1455d912), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560009881000 session 0x5600027e1180
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095000 session 0x5600003efa40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 346996736 unmapped: 56188928 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:33.562765+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 43K writes, 172K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 43K writes, 15K syncs, 2.77 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3460 writes, 13K keys, 3460 commit groups, 1.0 writes per commit group, ingest: 16.23 MB, 0.03 MB/s
                                           Interval WAL: 3459 writes, 1381 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 347004928 unmapped: 56180736 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:34.562926+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 347004928 unmapped: 56180736 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:35.563501+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600003ee8c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ca8000/0x0/0x4ffc00000, data 0x37cafc2/0x3984000, compress 0x0/0x0/0x0, omap 0x72776, meta 0x1455d88a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 57974784 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:36.563866+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e8b3c000/0x0/0x4ffc00000, data 0x2936f9f/0x2aef000, compress 0x0/0x0/0x0, omap 0x72a62, meta 0x1455d59e), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739897 data_alloc: 218103808 data_used: 7680616
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 57974784 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:37.564224+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 57974784 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:38.564983+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 57974784 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:39.565316+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.379004478s of 11.920421600s, submitted: 37
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000247f400 session 0x5600003efc00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d9800 session 0x5600027276c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 57974784 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:40.565666+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 57974784 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:41.566292+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e8b3d000/0x0/0x4ffc00000, data 0x2936f9f/0x2aef000, compress 0x0/0x0/0x0, omap 0x72a62, meta 0x1455d59e), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739269 data_alloc: 218103808 data_used: 7684677
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 57974784 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:42.566685+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 57966592 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:43.566878+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 57966592 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:44.567240+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344055808 unmapped: 59129856 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:45.567399+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f9f/0x181f000, compress 0x0/0x0/0x0, omap 0x72cc7, meta 0x1455d339), peers [1,2] op hist [0,0,0,0,0,0,1])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560004c936c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:46.567926+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589281 data_alloc: 218103808 data_used: 796741
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:47.568115+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:48.568318+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:49.568483+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x72e3e, meta 0x1455d1c2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:50.568662+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:51.568825+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589281 data_alloc: 218103808 data_used: 796741
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:52.568989+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:53.569151+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:54.569442+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:55.569621+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x72e3e, meta 0x1455d1c2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:56.569798+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589281 data_alloc: 218103808 data_used: 796741
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:57.569947+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:58.570157+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x72e3e, meta 0x1455d1c2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:11:59.570278+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:00.570421+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:01.570580+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589281 data_alloc: 218103808 data_used: 796741
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344064000 unmapped: 59121664 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:02.570730+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344072192 unmapped: 59113472 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:03.570874+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x72e3e, meta 0x1455d1c2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344072192 unmapped: 59113472 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:04.571037+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344072192 unmapped: 59113472 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:05.571215+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344072192 unmapped: 59113472 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:06.571350+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589281 data_alloc: 218103808 data_used: 796741
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344072192 unmapped: 59113472 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:07.571588+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:08.571787+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344072192 unmapped: 59113472 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:09.571914+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344072192 unmapped: 59113472 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x72e3e, meta 0x1455d1c2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:10.572115+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344072192 unmapped: 59113472 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:11.572313+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344080384 unmapped: 59105280 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589281 data_alloc: 218103808 data_used: 796741
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:12.572477+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344080384 unmapped: 59105280 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x72e3e, meta 0x1455d1c2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:13.572653+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344080384 unmapped: 59105280 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:14.572866+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344080384 unmapped: 59105280 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:15.573050+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344080384 unmapped: 59105280 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:16.573251+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344080384 unmapped: 59105280 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589281 data_alloc: 218103808 data_used: 796741
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:17.573430+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344080384 unmapped: 59105280 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x72e3e, meta 0x1455d1c2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:18.573625+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344080384 unmapped: 59105280 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:19.573821+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 59097088 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x72e3e, meta 0x1455d1c2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:20.574046+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 59097088 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x72e3e, meta 0x1455d1c2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:21.574296+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 59097088 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589281 data_alloc: 218103808 data_used: 796741
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:22.574433+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 59097088 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:23.574646+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344096768 unmapped: 59088896 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:24.574810+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344096768 unmapped: 59088896 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x72e3e, meta 0x1455d1c2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:25.575000+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344096768 unmapped: 59088896 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:26.575185+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344096768 unmapped: 59088896 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x72e3e, meta 0x1455d1c2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589281 data_alloc: 218103808 data_used: 796741
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:27.575349+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344096768 unmapped: 59088896 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:28.575532+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344096768 unmapped: 59088896 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:29.575689+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344096768 unmapped: 59088896 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:30.575849+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344096768 unmapped: 59088896 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:31.576063+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344113152 unmapped: 59072512 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e9e0d000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x72e3e, meta 0x1455d1c2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589281 data_alloc: 218103808 data_used: 796741
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:32.576353+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344113152 unmapped: 59072512 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:33.576571+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344113152 unmapped: 59072512 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 50.808341980s of 54.187030792s, submitted: 54
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002934700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000c8dc00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000c8dc00 session 0x5600004aa380
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600024bf180
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560001411500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000247f400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000247f400 session 0x560004556fc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:34.576723+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344236032 unmapped: 58949632 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:35.576871+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344236032 unmapped: 58949632 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:36.577194+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344236032 unmapped: 58949632 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:37.577417+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654157 data_alloc: 218103808 data_used: 800802
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344236032 unmapped: 58949632 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e93d7000/0x0/0x4ffc00000, data 0x209cfde/0x2255000, compress 0x0/0x0/0x0, omap 0x730d8, meta 0x1455cf28), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:38.577652+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344236032 unmapped: 58949632 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e93d7000/0x0/0x4ffc00000, data 0x209cfde/0x2255000, compress 0x0/0x0/0x0, omap 0x730d8, meta 0x1455cf28), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:39.577780+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344244224 unmapped: 58941440 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:40.577976+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e93d7000/0x0/0x4ffc00000, data 0x209cfde/0x2255000, compress 0x0/0x0/0x0, omap 0x730d8, meta 0x1455cf28), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344244224 unmapped: 58941440 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:41.578132+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344244224 unmapped: 58941440 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:42.578317+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654157 data_alloc: 218103808 data_used: 800802
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344244224 unmapped: 58941440 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e93d7000/0x0/0x4ffc00000, data 0x209cfde/0x2255000, compress 0x0/0x0/0x0, omap 0x730d8, meta 0x1455cf28), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:43.578532+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344244224 unmapped: 58941440 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:44.578761+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344244224 unmapped: 58941440 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:45.578944+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344244224 unmapped: 58941440 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:46.579182+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d9800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d9800 session 0x560004c92fc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344252416 unmapped: 58933248 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095000 session 0x560009d99c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:47.579346+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654157 data_alloc: 218103808 data_used: 800802
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e93d7000/0x0/0x4ffc00000, data 0x209cfde/0x2255000, compress 0x0/0x0/0x0, omap 0x730d8, meta 0x1455cf28), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344252416 unmapped: 58933248 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560001fb56c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.315941811s of 14.515570641s, submitted: 47
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600024be000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:48.579464+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 58777600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000247f400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d9800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e93b3000/0x0/0x4ffc00000, data 0x20c0fde/0x2279000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x1455cea0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:49.579649+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 58777600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:50.579824+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 58777600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:51.580001+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e93b3000/0x0/0x4ffc00000, data 0x20c0fde/0x2279000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x1455cea0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 58777600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:52.580183+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3711401 data_alloc: 234881024 data_used: 10181154
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 58777600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:53.580338+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 58777600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:54.580538+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 58777600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:55.580754+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 58777600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:56.580927+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 58777600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e93b3000/0x0/0x4ffc00000, data 0x20c0fde/0x2279000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x1455cea0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:57.581167+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3711401 data_alloc: 234881024 data_used: 10181154
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 58777600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:58.581357+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 58777600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e93b3000/0x0/0x4ffc00000, data 0x20c0fde/0x2279000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x1455cea0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:59.581504+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 58777600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:00.581698+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.328557968s of 12.337379456s, submitted: 3
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 344416256 unmapped: 58769408 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e93b3000/0x0/0x4ffc00000, data 0x20c0fde/0x2279000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x1455cea0), peers [1,2] op hist [0,0,0,0,0,0,2])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:01.581824+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 53821440 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:02.581951+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7c84000/0x0/0x4ffc00000, data 0x264ffde/0x2808000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x156fcea0), peers [1,2] op hist [0,0,0,0,0,0,0,3])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765387 data_alloc: 234881024 data_used: 10240546
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 54681600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:03.582111+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 54681600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:04.582309+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 54591488 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7931000/0x0/0x4ffc00000, data 0x29a1fde/0x2b5a000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x156fcea0), peers [1,2] op hist [0,0,0,0,0,0,0,8])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:05.582546+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 54362112 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:06.582695+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 54362112 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:07.582883+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3780615 data_alloc: 234881024 data_used: 11115042
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 54362112 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e78f6000/0x0/0x4ffc00000, data 0x29dcfde/0x2b95000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x156fcea0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:08.583113+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 54362112 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:09.583219+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 54362112 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:10.583491+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 54362112 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.410496235s of 10.694005013s, submitted: 99
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:11.583647+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 54222848 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e78f6000/0x0/0x4ffc00000, data 0x29dcfde/0x2b95000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x156fcea0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:12.583924+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3779471 data_alloc: 234881024 data_used: 11119138
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 54222848 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:13.584122+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 54222848 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:14.584423+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 54222848 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:15.584617+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 54222848 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560009881000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:16.584857+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e78d2000/0x0/0x4ffc00000, data 0x2a01fde/0x2bba000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x156fcea0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 49774592 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560009881000 session 0x560007592e00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560000d76000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026db400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026db400 session 0x560002982700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560000d77c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560009d98380
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:17.585006+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3847922 data_alloc: 234881024 data_used: 11119138
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349306880 unmapped: 57556992 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:18.585203+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349306880 unmapped: 57556992 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:19.585407+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:20.585587+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:21.586034+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6e9e000/0x0/0x4ffc00000, data 0x3435fde/0x35ee000, compress 0x0/0x0/0x0, omap 0x72f9b, meta 0x156fd065), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:22.586349+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3847690 data_alloc: 234881024 data_used: 11119138
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:23.586532+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560009d98e00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:24.586690+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560009881000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560009881000 session 0x560002570fc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:25.586866+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026da800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026da800 session 0x560004c93340
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.456769943s of 14.890123367s, submitted: 32
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560007592000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:26.587000+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:27.587123+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3853640 data_alloc: 234881024 data_used: 11119138
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6e79000/0x0/0x4ffc00000, data 0x345a001/0x3613000, compress 0x0/0x0/0x0, omap 0x731f7, meta 0x156fce09), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:28.587249+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:29.587399+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 57073664 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:30.587585+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 55386112 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:31.588167+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6e79000/0x0/0x4ffc00000, data 0x345a001/0x3613000, compress 0x0/0x0/0x0, omap 0x731f7, meta 0x156fce09), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 55386112 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:32.588344+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3917080 data_alloc: 234881024 data_used: 21725730
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 55386112 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:33.588522+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 53272576 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:34.588680+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 53264384 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5cd9000/0x0/0x4ffc00000, data 0x345a001/0x3613000, compress 0x0/0x0/0x0, omap 0x731f7, meta 0x1689ce09), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:35.588819+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 53248000 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:36.588978+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 53248000 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:37.589112+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3917368 data_alloc: 234881024 data_used: 21725730
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 53248000 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:38.589262+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 53248000 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5cd9000/0x0/0x4ffc00000, data 0x345a001/0x3613000, compress 0x0/0x0/0x0, omap 0x7322b, meta 0x1689cdd5), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:39.589450+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.297067642s of 13.578221321s, submitted: 122
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 52723712 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:40.589618+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 358203392 unmapped: 48660480 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:41.589800+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e94000/0x0/0x4ffc00000, data 0x429f001/0x4458000, compress 0x0/0x0/0x0, omap 0x7322b, meta 0x1689cdd5), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 47448064 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:42.589979+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4027812 data_alloc: 234881024 data_used: 24445474
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 47448064 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e55000/0x0/0x4ffc00000, data 0x42d6001/0x448f000, compress 0x0/0x0/0x0, omap 0x7322b, meta 0x1689cdd5), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:43.590234+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 47448064 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:44.590402+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 47448064 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:45.590568+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 47448064 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e55000/0x0/0x4ffc00000, data 0x42d6001/0x448f000, compress 0x0/0x0/0x0, omap 0x7322b, meta 0x1689cdd5), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:46.590735+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:47.590908+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4021804 data_alloc: 234881024 data_used: 24445474
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e5a000/0x0/0x4ffc00000, data 0x42d9001/0x4492000, compress 0x0/0x0/0x0, omap 0x7322b, meta 0x1689cdd5), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:48.591098+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:49.591254+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:50.591466+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.182911873s of 11.664206505s, submitted: 140
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:51.591605+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:52.591750+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e5a000/0x0/0x4ffc00000, data 0x42d9001/0x4492000, compress 0x0/0x0/0x0, omap 0x7325f, meta 0x1689cda1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4023596 data_alloc: 234881024 data_used: 24547874
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:53.591925+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:54.592112+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e5a000/0x0/0x4ffc00000, data 0x42d9001/0x4492000, compress 0x0/0x0/0x0, omap 0x7325f, meta 0x1689cda1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:55.592272+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560002726700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600012321c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:56.592420+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560009881000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560009881000 session 0x560000e1c540
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 50470912 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:57.592576+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796182 data_alloc: 234881024 data_used: 11127330
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 50470912 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:58.592724+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 50470912 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e66fc000/0x0/0x4ffc00000, data 0x2a12fde/0x2bcb000, compress 0x0/0x0/0x0, omap 0x73717, meta 0x1689c8e9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:59.592955+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e66fc000/0x0/0x4ffc00000, data 0x2a12fde/0x2bcb000, compress 0x0/0x0/0x0, omap 0x73717, meta 0x1689c8e9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 50470912 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:00.593195+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e66fc000/0x0/0x4ffc00000, data 0x2a12fde/0x2bcb000, compress 0x0/0x0/0x0, omap 0x73717, meta 0x1689c8e9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 50470912 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000247f400 session 0x5600027e0c40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d9800 session 0x560004c921c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:01.593372+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.963108063s of 10.171596527s, submitted: 62
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c49500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:02.593571+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:03.593824+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:04.594042+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:05.594344+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:06.594523+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:07.594768+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:08.594967+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:09.595152+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:10.595371+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:11.596012+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:12.596179+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:13.596339+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:14.596472+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:15.596641+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:16.596842+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:17.596994+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:18.597149+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:19.597375+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:20.597592+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:21.597818+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:22.598021+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:23.598246+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:24.598402+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:25.598618+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:26.598819+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:27.599027+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:28.599273+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:29.599510+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:30.599726+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:31.599896+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:32.600205+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:33.600480+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:34.600629+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:35.600883+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:36.601059+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:37.601316+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:38.601508+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:39.601704+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:40.601928+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:41.602117+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:42.602260+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:43.602456+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:44.602613+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:45.602801+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:46.603005+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:47.603273+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:48.603444+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:49.603647+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:50.603838+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:51.604005+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:52.604184+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:53.604328+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:54.604485+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:55.604638+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:56.604835+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560003966fc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560004c92a80
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560009881000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560009881000 session 0x560004c58380
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600030dea80
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 55.332927704s of 55.392375946s, submitted: 35
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 52822016 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x5600027416c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002934700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d9800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d9800 session 0x5600014a2380
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600094f8000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600094f8000 session 0x5600011a6e00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600030df6c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:57.605035+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659110 data_alloc: 218103808 data_used: 808861
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 57524224 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:58.605235+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x73a83, meta 0x1689c57d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 57524224 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:59.605436+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x73a83, meta 0x1689c57d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 57524224 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:00.605647+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 57524224 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:01.605848+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 57524224 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:02.606002+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659110 data_alloc: 218103808 data_used: 808861
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 57524224 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:03.606156+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x73a83, meta 0x1689c57d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:04.606404+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:05.606573+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:06.606754+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:07.606953+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659110 data_alloc: 218103808 data_used: 808861
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:08.607140+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x73a83, meta 0x1689c57d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:09.607319+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 57507840 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:10.607551+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 57507840 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:11.607725+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 57499648 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:12.607969+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659110 data_alloc: 218103808 data_used: 808861
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 57499648 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:13.608194+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.549066544s of 16.679725647s, submitted: 15
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560004c928c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 57491456 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d9800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:14.608419+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 57491456 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:15.608572+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:16.608690+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:17.608851+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688687 data_alloc: 218103808 data_used: 5624237
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:18.609008+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:19.609168+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:20.609388+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:21.609510+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:22.609826+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688687 data_alloc: 218103808 data_used: 5624237
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:23.609994+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:24.610258+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:25.610463+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.406724930s of 12.420920372s, submitted: 7
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 55599104 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:26.610607+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,6])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 57860096 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:27.610866+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717479 data_alloc: 218103808 data_used: 5706157
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 57860096 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e702e000/0x0/0x4ffc00000, data 0x2105f8c/0x22be000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:28.611022+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 56582144 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:29.611228+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6fe8000/0x0/0x4ffc00000, data 0x214af8c/0x2303000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6fa9000/0x0/0x4ffc00000, data 0x2189f8c/0x2342000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 56582144 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:30.611396+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 56582144 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:31.611655+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6fa9000/0x0/0x4ffc00000, data 0x2189f8c/0x2342000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 56582144 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:32.611842+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6fa9000/0x0/0x4ffc00000, data 0x2189f8c/0x2342000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726589 data_alloc: 218103808 data_used: 5697965
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 56582144 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:33.612166+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6fa9000/0x0/0x4ffc00000, data 0x2189f8c/0x2342000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 56582144 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:34.612507+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:35.612770+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:36.612987+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f89000/0x0/0x4ffc00000, data 0x21aaf8c/0x2363000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:37.613132+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3724821 data_alloc: 218103808 data_used: 5697965
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:38.613353+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f89000/0x0/0x4ffc00000, data 0x21aaf8c/0x2363000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:39.613493+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:40.613655+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:41.613819+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f89000/0x0/0x4ffc00000, data 0x21aaf8c/0x2363000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 56836096 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:42.614003+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3724821 data_alloc: 218103808 data_used: 5697965
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 56836096 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:43.614201+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.264011383s of 17.528238297s, submitted: 41
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 56836096 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:44.614366+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000a3bf000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000a3bf000 session 0x560002740c40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x5600030df880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002dd7800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002dd7800 session 0x560002570700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 56836096 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:45.614488+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560003966a80
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f83000/0x0/0x4ffc00000, data 0x21b0f8c/0x2369000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560002571880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560004556c40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000a3bf000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000a3bf000 session 0x5600011a6e00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002489800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002489800 session 0x560004c921c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002570700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 68214784 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:46.614738+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:47.615432+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 68214784 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812375 data_alloc: 218103808 data_used: 5697965
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:48.616493+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 68214784 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:49.616639+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 68214784 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:50.616798+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 68214784 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d3000/0x0/0x4ffc00000, data 0x2f5ffee/0x3119000, compress 0x0/0x0/0x0, omap 0x7436e, meta 0x1689bc92), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:51.616955+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:52.617127+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d3000/0x0/0x4ffc00000, data 0x2f5ffee/0x3119000, compress 0x0/0x0/0x0, omap 0x7436e, meta 0x1689bc92), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812375 data_alloc: 218103808 data_used: 5697965
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:53.617315+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560004557500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:54.617444+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560004556000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:55.617801+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d3000/0x0/0x4ffc00000, data 0x2f5ffee/0x3119000, compress 0x0/0x0/0x0, omap 0x7436e, meta 0x1689bc92), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:56.618137+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000a3bf000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000a3bf000 session 0x560009d99a40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:57.618462+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000308a800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.645063400s of 13.876958847s, submitted: 50
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000308a800 session 0x560004c58000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3813085 data_alloc: 218103808 data_used: 5697965
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:58.618679+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:59.618806+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:00.619031+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:01.619220+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d2000/0x0/0x4ffc00000, data 0x2f5fffe/0x311a000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:02.619452+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3896829 data_alloc: 234881024 data_used: 19757997
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:03.620154+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d2000/0x0/0x4ffc00000, data 0x2f5fffe/0x311a000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:04.620393+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:05.620565+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:06.620763+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:07.620911+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d2000/0x0/0x4ffc00000, data 0x2f5fffe/0x311a000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3896829 data_alloc: 234881024 data_used: 19757997
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:08.621177+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:09.621356+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.166196823s of 12.185412407s, submitted: 5
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:10.621566+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d2000/0x0/0x4ffc00000, data 0x2f5fffe/0x311a000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [0,0,0,0,0,1])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 59465728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:11.621744+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 57892864 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:12.621891+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3980347 data_alloc: 234881024 data_used: 21101485
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:13.622144+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:14.622374+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e54ca000/0x0/0x4ffc00000, data 0x3c67ffe/0x3e22000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:15.622494+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:16.622663+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:17.622826+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3978539 data_alloc: 234881024 data_used: 21105581
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:18.622945+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:19.623063+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:20.623262+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e54a6000/0x0/0x4ffc00000, data 0x3c8bffe/0x3e46000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:21.623415+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:22.623573+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3979179 data_alloc: 234881024 data_used: 21126061
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:23.623715+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e54a6000/0x0/0x4ffc00000, data 0x3c8bffe/0x3e46000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:24.623884+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e54a6000/0x0/0x4ffc00000, data 0x3c8bffe/0x3e46000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:25.624044+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.691843987s of 15.793980598s, submitted: 146
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e54a6000/0x0/0x4ffc00000, data 0x3c8bffe/0x3e46000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:26.624224+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 57614336 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:27.624399+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 57614336 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e548b000/0x0/0x4ffc00000, data 0x3ca6ffe/0x3e61000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:28.624572+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3979211 data_alloc: 234881024 data_used: 21126061
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e548b000/0x0/0x4ffc00000, data 0x3ca6ffe/0x3e61000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [0,0,0,0,0,0,0,0,2])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560004c93c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 57614336 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560004c59500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:29.624742+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 57614336 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:30.624905+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 57614336 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560001232c40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:31.625051+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 61202432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:32.625227+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 61202432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:33.625403+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741268 data_alloc: 218103808 data_used: 5706157
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 61202432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f76000/0x0/0x4ffc00000, data 0x21bcf8c/0x2375000, compress 0x0/0x0/0x0, omap 0x745c7, meta 0x1689ba39), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:34.625530+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 61202432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:35.625686+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560004c92fc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.019501686s of 10.248433113s, submitted: 46
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d9800 session 0x560002934c40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 61202432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:36.625830+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:37.625996+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600030dfdc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:38.626165+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:39.626335+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:40.626515+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:41.626667+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:42.626810+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:43.627099+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:44.627273+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:45.627418+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:46.627606+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:47.627737+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: mgrc ms_handle_reset ms_handle_reset con 0x560002f13800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: get_auth_request con 0x5600026d9800 auth_method 0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: mgrc handle_mgr_configure stats_period=5
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:48.627915+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:49.628102+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:50.628291+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:51.628438+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:52.628666+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560006a00800 session 0x5600027261c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d7800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:53.628830+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:54.628983+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:55.629116+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:56.629254+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:57.629453+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:58.629642+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:59.629806+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:00.630024+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:01.630391+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:02.630532+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:03.630801+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:04.630945+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:05.631149+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:06.631296+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:07.631493+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:08.631674+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:09.631891+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:10.632192+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 64569344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:11.632357+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 64569344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:12.632550+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 64569344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560006a00800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560006a00800 session 0x560002c64380
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560009d996c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560003966e00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000a3bf000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000a3bf000 session 0x5600030defc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:13.632704+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.651481628s of 37.857086182s, submitted: 26
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 64569344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f8c/0x181f000, compress 0x0/0x0/0x0, omap 0x7480e, meta 0x1689b7f2), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:14.632902+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353853440 unmapped: 64561152 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:15.633033+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600030df340
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560002c43180
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 64241664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600029348c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560006a00800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560006a00800 session 0x560002740540
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002f13400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002f13400 session 0x560002982a80
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:16.633213+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 64241664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:17.633401+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 64241664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:18.633548+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560001233500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3713637 data_alloc: 218103808 data_used: 812875
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 64241664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x5600024be1c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:19.633671+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 64241664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7069000/0x0/0x4ffc00000, data 0x20caf8c/0x2283000, compress 0x0/0x0/0x0, omap 0x74ab6, meta 0x1689b54a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560000d77180
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560006a00800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:20.633851+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 64086016 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560006a00800 session 0x560002740700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:21.634003+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002de6800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:22.634187+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:23.634324+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7043000/0x0/0x4ffc00000, data 0x20eefbe/0x22a9000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3782319 data_alloc: 234881024 data_used: 11086682
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:24.634503+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:25.634705+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:26.634963+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7043000/0x0/0x4ffc00000, data 0x20eefbe/0x22a9000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:27.635135+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:28.635317+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3785519 data_alloc: 234881024 data_used: 11616090
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:29.635514+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:30.635774+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:31.635935+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7043000/0x0/0x4ffc00000, data 0x20eefbe/0x22a9000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:32.636152+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:33.636370+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3785903 data_alloc: 234881024 data_used: 11628378
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.954565048s of 20.221033096s, submitted: 39
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 356638720 unmapped: 61775872 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:34.636491+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 58318848 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:35.636610+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 58089472 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:36.636757+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 58089472 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6142000/0x0/0x4ffc00000, data 0x2feffbe/0x31aa000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:37.636900+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 58089472 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:38.637059+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3897989 data_alloc: 234881024 data_used: 13733722
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 57991168 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:39.637208+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6142000/0x0/0x4ffc00000, data 0x2feffbe/0x31aa000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 57991168 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:40.637409+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:41.637583+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6140000/0x0/0x4ffc00000, data 0x2ff1fbe/0x31ac000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:42.637775+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:43.637973+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3890853 data_alloc: 234881024 data_used: 13733722
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:44.638145+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:45.638345+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6140000/0x0/0x4ffc00000, data 0x2ff1fbe/0x31ac000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:46.638499+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.321171761s of 12.761690140s, submitted: 114
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:47.638718+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e613f000/0x0/0x4ffc00000, data 0x2ff2fbe/0x31ad000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:48.638966+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e613f000/0x0/0x4ffc00000, data 0x2ff2fbe/0x31ad000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3891309 data_alloc: 234881024 data_used: 13741914
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000144d000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000144d000 session 0x560003966a80
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:49.639241+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560003967880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x5600004ab6c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560001fb4c40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560006a00800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560006a00800 session 0x560001fbea80
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:50.639474+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:51.639640+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:52.639797+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c73000/0x0/0x4ffc00000, data 0x34befbe/0x3679000, compress 0x0/0x0/0x0, omap 0x74d9a, meta 0x1689b266), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:53.639943+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922455 data_alloc: 234881024 data_used: 13741914
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:54.640213+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c73000/0x0/0x4ffc00000, data 0x34befbe/0x3679000, compress 0x0/0x0/0x0, omap 0x74d9a, meta 0x1689b266), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:55.640414+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:56.640584+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d9400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d9400 session 0x560004c588c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:57.640754+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c73000/0x0/0x4ffc00000, data 0x34befbe/0x3679000, compress 0x0/0x0/0x0, omap 0x74d9a, meta 0x1689b266), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600003ee8c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:58.640936+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560002744e00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.030322075s of 12.168992996s, submitted: 8
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560001fbfa40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3924831 data_alloc: 234881024 data_used: 13741914
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 58212352 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560006a00800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:59.641146+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026dc000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 58130432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:00.641340+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363020288 unmapped: 55394304 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:01.641507+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363020288 unmapped: 55394304 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:02.641695+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363020288 unmapped: 55394304 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c4f000/0x0/0x4ffc00000, data 0x34e2fbe/0x369d000, compress 0x0/0x0/0x0, omap 0x74e22, meta 0x1689b1de), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:03.642268+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3954911 data_alloc: 234881024 data_used: 18773850
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363020288 unmapped: 55394304 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:04.642402+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363020288 unmapped: 55394304 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:05.642561+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c4f000/0x0/0x4ffc00000, data 0x34e2fbe/0x369d000, compress 0x0/0x0/0x0, omap 0x74e22, meta 0x1689b1de), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 55386112 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:06.642821+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 55386112 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:07.643011+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c4d000/0x0/0x4ffc00000, data 0x34e3fbe/0x369e000, compress 0x0/0x0/0x0, omap 0x74e22, meta 0x1689b1de), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 55386112 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:08.643203+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3955559 data_alloc: 234881024 data_used: 18773850
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c4d000/0x0/0x4ffc00000, data 0x34e3fbe/0x369e000, compress 0x0/0x0/0x0, omap 0x74e22, meta 0x1689b1de), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 55386112 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:09.643387+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 55386112 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:10.643578+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.978725433s of 11.995156288s, submitted: 6
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366739456 unmapped: 51675136 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:11.643852+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366739456 unmapped: 51675136 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:12.643975+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:13.644120+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4010013 data_alloc: 234881024 data_used: 19072858
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5463000/0x0/0x4ffc00000, data 0x3ccefbe/0x3e89000, compress 0x0/0x0/0x0, omap 0x74e22, meta 0x1689b1de), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:14.644281+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:15.644439+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:16.644607+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:17.644831+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5463000/0x0/0x4ffc00000, data 0x3ccefbe/0x3e89000, compress 0x0/0x0/0x0, omap 0x74e22, meta 0x1689b1de), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:18.645185+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4010013 data_alloc: 234881024 data_used: 19072858
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:19.645380+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:20.645631+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.403758049s of 10.652852058s, submitted: 44
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:21.645785+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000247f000 session 0x560001232e00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026dc400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:22.645949+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5462000/0x0/0x4ffc00000, data 0x3ccffbe/0x3e8a000, compress 0x0/0x0/0x0, omap 0x74eaa, meta 0x1689b156), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:23.646156+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4009941 data_alloc: 234881024 data_used: 19138394
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026dc000 session 0x560004c59880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560006a00800 session 0x560002745880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 51650560 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:24.646310+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5462000/0x0/0x4ffc00000, data 0x3ccffbe/0x3e8a000, compress 0x0/0x0/0x0, omap 0x74eaa, meta 0x1689b156), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600027e16c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 51634176 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:25.646504+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e613d000/0x0/0x4ffc00000, data 0x2ff4fbe/0x31af000, compress 0x0/0x0/0x0, omap 0x74fba, meta 0x1689b046), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 51634176 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:26.646671+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 51634176 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:27.646877+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e613d000/0x0/0x4ffc00000, data 0x2ff4fbe/0x31af000, compress 0x0/0x0/0x0, omap 0x74fba, meta 0x1689b046), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 51634176 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:28.647038+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3900837 data_alloc: 234881024 data_used: 13807450
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 51634176 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:29.647197+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366788608 unmapped: 51625984 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:30.647406+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366788608 unmapped: 51625984 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:31.647582+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.783339500s of 10.808708191s, submitted: 14
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x5600027448c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002de6800 session 0x560002741180
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e613d000/0x0/0x4ffc00000, data 0x2ff4fbe/0x31af000, compress 0x0/0x0/0x0, omap 0x74fba, meta 0x1689b046), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 51617792 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:32.647703+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560002570700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aa8000/0x0/0x4ffc00000, data 0x168afae/0x1844000, compress 0x0/0x0/0x0, omap 0x752a6, meta 0x1689ad5a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:33.647818+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:34.647955+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:35.648096+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:36.648231+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:37.648369+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:38.648567+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:39.648722+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:40.648895+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:41.649139+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:42.649308+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:43.649476+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:44.649648+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:45.649838+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:46.650057+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:47.650263+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:48.650491+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:49.650651+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:50.650867+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:51.651049+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:52.651293+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:53.651496+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:54.651718+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:55.651929+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:56.652145+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:57.652341+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:58.652510+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:59.652656+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:00.653563+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:01.657630+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:02.658459+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:03.658718+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:04.659375+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:05.659821+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:06.660166+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:07.660361+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:08.660609+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:09.661229+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:10.661849+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:11.662119+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362045440 unmapped: 56369152 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:12.662590+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362045440 unmapped: 56369152 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:13.662933+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362045440 unmapped: 56369152 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:14.663376+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362045440 unmapped: 56369152 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.719680786s of 42.821624756s, submitted: 56
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:15.663731+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:16.663912+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:17.664252+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:18.664567+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670631 data_alloc: 218103808 data_used: 816920
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:19.664836+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:20.665169+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:21.665379+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:22.665689+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002c64fc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560000d76e00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560002741880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560004ad6e00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:23.665900+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3732853 data_alloc: 218103808 data_used: 816920
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:24.666111+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368492544 unmapped: 49922048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x5600029341c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002de6800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002de6800 session 0x5600025701c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x55ffffffe000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.353059769s of 10.003663063s, submitted: 51
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:25.666297+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560002c42a80
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600026b8540
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362160128 unmapped: 56254464 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e73a9000/0x0/0x4ffc00000, data 0x1d8afa5/0x1f43000, compress 0x0/0x0/0x0, omap 0x7575e, meta 0x1689a8a2), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:26.666502+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362160128 unmapped: 56254464 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:27.666707+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362160128 unmapped: 56254464 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:28.666955+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362160128 unmapped: 56254464 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718973 data_alloc: 218103808 data_used: 816920
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:29.667258+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362160128 unmapped: 56254464 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560004c921c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:30.667449+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362160128 unmapped: 56254464 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026dc000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026dc000 session 0x5600004ab880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560000d776c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:31.667586+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x5600027e0c40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e73a9000/0x0/0x4ffc00000, data 0x1d8afde/0x1f43000, compress 0x0/0x0/0x0, omap 0x75792, meta 0x1689a86e), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:32.667709+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:33.667823+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758867 data_alloc: 218103808 data_used: 7178008
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:34.667982+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:35.668134+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e73a8000/0x0/0x4ffc00000, data 0x1d8afee/0x1f44000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:36.668257+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:37.668425+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:38.668588+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e73a8000/0x0/0x4ffc00000, data 0x1d8afee/0x1f44000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e73a8000/0x0/0x4ffc00000, data 0x1d8afee/0x1f44000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758867 data_alloc: 218103808 data_used: 7178008
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:39.669298+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e73a8000/0x0/0x4ffc00000, data 0x1d8afee/0x1f44000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:40.669568+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:41.669768+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:42.669909+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:43.670127+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.259466171s of 18.754104614s, submitted: 11
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3786887 data_alloc: 218103808 data_used: 7796504
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:44.670299+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363020288 unmapped: 55394304 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f85000/0x0/0x4ffc00000, data 0x21a7fee/0x2361000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:45.670444+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:46.670657+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f68000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:47.670843+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f68000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:48.671014+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796901 data_alloc: 218103808 data_used: 7956248
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:49.671131+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:50.671352+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f68000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f68000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:51.671506+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:52.671675+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f68000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:53.671823+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796901 data_alloc: 218103808 data_used: 7956248
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:54.671983+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:55.672160+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:56.672344+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:57.672496+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f68000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:58.672676+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3797157 data_alloc: 218103808 data_used: 7964440
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:59.672824+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364085248 unmapped: 54329344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:00.673175+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364085248 unmapped: 54329344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.286857605s of 17.486074448s, submitted: 64
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x5600030df500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:01.673320+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d6800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d6800 session 0x560000d76c40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002488800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002488800 session 0x560000630700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600011a7c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560000631dc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f77000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7593f, meta 0x1689a6c1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 54190080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:02.673511+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6acf000/0x0/0x4ffc00000, data 0x2663fee/0x281d000, compress 0x0/0x0/0x0, omap 0x759ef, meta 0x1689a611), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 54190080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:03.673677+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 54190080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3821490 data_alloc: 218103808 data_used: 7968536
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:04.673812+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 54190080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:05.673954+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 54190080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:06.674126+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 54190080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:07.674313+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364232704 unmapped: 54181888 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6acf000/0x0/0x4ffc00000, data 0x2663fee/0x281d000, compress 0x0/0x0/0x0, omap 0x759ef, meta 0x1689a611), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d6800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d6800 session 0x560002c42700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:08.674438+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364232704 unmapped: 54181888 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3821490 data_alloc: 218103808 data_used: 7968536
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x5600027e08c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:09.674612+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364232704 unmapped: 54181888 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:10.674817+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003084400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003084400 session 0x560001fb4c40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c64000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 54034432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.996580124s of 10.098914146s, submitted: 19
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d6800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:11.675259+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 54034432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:12.675446+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6aaa000/0x0/0x4ffc00000, data 0x2687ffe/0x2842000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:13.675587+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3854464 data_alloc: 234881024 data_used: 12761880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:14.675738+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:15.675886+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6aaa000/0x0/0x4ffc00000, data 0x2687ffe/0x2842000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:16.676052+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:17.676236+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6aaa000/0x0/0x4ffc00000, data 0x2687ffe/0x2842000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:18.676384+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3854464 data_alloc: 234881024 data_used: 12761880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:19.676526+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6aaa000/0x0/0x4ffc00000, data 0x2687ffe/0x2842000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:20.676712+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:21.676845+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:22.676996+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.363173485s of 11.364879608s, submitted: 1
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365477888 unmapped: 52936704 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:23.677137+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3950244 data_alloc: 234881024 data_used: 13623064
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:24.677309+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5e72000/0x0/0x4ffc00000, data 0x32b7ffe/0x3472000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:25.677494+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5e72000/0x0/0x4ffc00000, data 0x32b7ffe/0x3472000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:26.677631+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:27.677791+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:28.677955+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:29.678175+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3950500 data_alloc: 234881024 data_used: 13631256
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:30.678445+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5e72000/0x0/0x4ffc00000, data 0x32b7ffe/0x3472000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:31.678660+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 51224576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:32.678823+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 51224576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:33.678995+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.801392555s of 11.098257065s, submitted: 90
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560003967880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d6800 session 0x560001410c40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 51224576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x560000d77500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:34.679221+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3805479 data_alloc: 218103808 data_used: 8017688
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 55025664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:35.679439+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 55025664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f77000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x75c79, meta 0x1689a387), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:36.679573+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 55025664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:37.679728+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002c43a40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560004557dc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363397120 unmapped: 55017472 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c656c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:38.679916+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:39.680114+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:40.680362+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:41.680548+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:42.680781+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:43.680928+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:44.681036+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:45.681196+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:46.681395+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:47.681562+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:48.681762+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:49.681917+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:50.682118+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:51.682292+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:52.682417+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:53.682564+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:54.682718+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:55.682825+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:56.682974+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:57.683170+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:58.683345+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:59.683530+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:00.683717+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:01.683880+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:02.684055+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:03.684328+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:04.684632+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:05.684889+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:06.685142+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:07.685329+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:08.685464+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:09.685664+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:10.685949+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:11.686162+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:12.686408+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:13.686579+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:14.686764+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.016468048s of 41.142299652s, submitted: 65
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366968832 unmapped: 58802176 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560002c428c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:15.687062+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362250240 unmapped: 63520768 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:16.687520+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362250240 unmapped: 63520768 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:17.687756+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362250240 unmapped: 63520768 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:18.688140+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7063000/0x0/0x4ffc00000, data 0x20d1f7c/0x2289000, compress 0x0/0x0/0x0, omap 0x75fe5, meta 0x1689a01b), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362250240 unmapped: 63520768 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:19.688382+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757852 data_alloc: 218103808 data_used: 820918
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362258432 unmapped: 63512576 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:20.688601+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362258432 unmapped: 63512576 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:21.688747+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7063000/0x0/0x4ffc00000, data 0x20d1f7c/0x2289000, compress 0x0/0x0/0x0, omap 0x75fe5, meta 0x1689a01b), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362258432 unmapped: 63512576 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600030de700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:22.688923+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d6800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d6800 session 0x560002982a80
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362258432 unmapped: 63512576 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c65500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:23.689094+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560004ad6fc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 63586304 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:24.689240+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762768 data_alloc: 218103808 data_used: 820918
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 63586304 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:25.689358+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:26.689571+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7061000/0x0/0x4ffc00000, data 0x20d1faf/0x228b000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:27.689782+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7061000/0x0/0x4ffc00000, data 0x20d1faf/0x228b000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:28.689982+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:29.690213+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3827152 data_alloc: 234881024 data_used: 11658934
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:30.690465+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:31.690597+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7061000/0x0/0x4ffc00000, data 0x20d1faf/0x228b000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:32.690790+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:33.690993+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 183K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.76 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2584 writes, 10K keys, 2584 commit groups, 1.0 writes per commit group, ingest: 11.17 MB, 0.02 MB/s
                                           Interval WAL: 2585 writes, 1016 syncs, 2.54 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7061000/0x0/0x4ffc00000, data 0x20d1faf/0x228b000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:34.691211+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3827152 data_alloc: 234881024 data_used: 11658934
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:35.691442+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.893293381s of 21.064565659s, submitted: 24
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 61898752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:36.691618+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 61898752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:37.691817+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367263744 unmapped: 58507264 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:38.692004+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61ae000/0x0/0x4ffc00000, data 0x2f7efaf/0x3138000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 59162624 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:39.692179+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3913484 data_alloc: 234881024 data_used: 12236470
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 58933248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:40.692349+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 58933248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:41.692537+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 58933248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e617b000/0x0/0x4ffc00000, data 0x2faffaf/0x3169000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:42.692694+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 58933248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:43.692856+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 58933248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:44.692999+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e617b000/0x0/0x4ffc00000, data 0x2faffaf/0x3169000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3915456 data_alloc: 234881024 data_used: 12215990
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 58933248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:45.693170+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:46.693353+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:47.693507+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:48.693729+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6180000/0x0/0x4ffc00000, data 0x2fb2faf/0x316c000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:49.693949+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911784 data_alloc: 234881024 data_used: 12215990
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:50.694155+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:51.694274+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6180000/0x0/0x4ffc00000, data 0x2fb2faf/0x316c000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:52.694456+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:53.694673+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:54.694877+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911784 data_alloc: 234881024 data_used: 12215990
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:55.695057+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:56.695234+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:57.695409+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6180000/0x0/0x4ffc00000, data 0x2fb2faf/0x316c000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:58.695630+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6180000/0x0/0x4ffc00000, data 0x2fb2faf/0x316c000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:59.695796+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911784 data_alloc: 234881024 data_used: 12215990
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:00.696009+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:01.696141+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:02.696310+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 59056128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:03.696482+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 59056128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:04.696656+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911784 data_alloc: 234881024 data_used: 12215990
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6180000/0x0/0x4ffc00000, data 0x2fb2faf/0x316c000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 59056128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:05.696794+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 59056128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:06.696986+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 59056128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:07.697138+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.367116928s of 32.255329132s, submitted: 118
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 59047936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x560004557880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:08.697281+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca4c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca4c00 session 0x560000630700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd400 session 0x560002c43180
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600030df340
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca4c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca4c00 session 0x560002745880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 59015168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:09.697469+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3951599 data_alloc: 234881024 data_used: 12215990
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 59015168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5afe000/0x0/0x4ffc00000, data 0x3634faf/0x37ee000, compress 0x0/0x0/0x0, omap 0x7647f, meta 0x16899b81), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:10.697626+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 59006976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:11.697836+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 59006976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:12.698031+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 59006976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5afe000/0x0/0x4ffc00000, data 0x3634faf/0x37ee000, compress 0x0/0x0/0x0, omap 0x7647f, meta 0x16899b81), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:13.698314+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 59006976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:14.698460+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3951887 data_alloc: 234881024 data_used: 12215990
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 59006976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x5600024be700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:15.698584+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 58851328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:16.699567+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 58851328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:17.699711+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:18.699860+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5ad9000/0x0/0x4ffc00000, data 0x3658fd2/0x3813000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:19.700029+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3996452 data_alloc: 234881024 data_used: 19041974
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:20.700268+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:21.700415+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:22.700696+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5ad9000/0x0/0x4ffc00000, data 0x3658fd2/0x3813000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:23.700871+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:24.701018+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3996452 data_alloc: 234881024 data_used: 19041974
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5ad9000/0x0/0x4ffc00000, data 0x3658fd2/0x3813000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:25.701138+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.759237289s of 17.944322586s, submitted: 27
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 57090048 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:26.701295+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5ad9000/0x0/0x4ffc00000, data 0x3658fd2/0x3813000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 57090048 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:27.701450+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 370524160 unmapped: 55246848 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:28.701613+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e547c000/0x0/0x4ffc00000, data 0x3cb5fd2/0x3e70000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372178944 unmapped: 53592064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:29.701741+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4075406 data_alloc: 234881024 data_used: 19361462
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372178944 unmapped: 53592064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:30.701929+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372178944 unmapped: 53592064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:31.702118+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372178944 unmapped: 53592064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:32.702269+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e4d000/0x0/0x4ffc00000, data 0x42e4fd2/0x449f000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372211712 unmapped: 53559296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:33.702481+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372211712 unmapped: 53559296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:34.702648+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4083418 data_alloc: 234881024 data_used: 19361462
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372211712 unmapped: 53559296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:35.702805+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372211712 unmapped: 53559296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:36.703005+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e4d000/0x0/0x4ffc00000, data 0x42e4fd2/0x449f000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372211712 unmapped: 53559296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:37.703206+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372219904 unmapped: 53551104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:38.703396+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372219904 unmapped: 53551104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:39.703562+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4083546 data_alloc: 234881024 data_used: 19365558
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e4d000/0x0/0x4ffc00000, data 0x42e4fd2/0x449f000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372228096 unmapped: 53542912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:40.703790+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372228096 unmapped: 53542912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:41.703936+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.653168678s of 15.359889030s, submitted: 89
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x560001232380
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095c00 session 0x560004556000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095c00 session 0x560004c581c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e617f000/0x0/0x4ffc00000, data 0x2fb2faf/0x316c000, compress 0x0/0x0/0x0, omap 0x7667c, meta 0x16899984), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:42.704138+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:43.704366+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:44.704625+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3925462 data_alloc: 234881024 data_used: 12207798
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:45.704758+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:46.704922+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e617e000/0x0/0x4ffc00000, data 0x2fb3faf/0x316d000, compress 0x0/0x0/0x0, omap 0x7667c, meta 0x16899984), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:47.705119+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:48.705604+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x55fffe9d5180
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600027408c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c42fc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:49.705779+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:50.705983+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:51.706258+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:52.706571+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:53.706730+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:54.706916+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:55.707145+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:56.707396+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:57.707648+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:58.707830+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:59.708123+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:00.708346+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:01.708520+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:02.708732+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:03.708903+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:04.709118+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:05.709261+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:06.709411+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:07.709569+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:08.709715+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:09.709906+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:10.710155+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:11.710311+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:12.718568+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:13.718727+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:14.718978+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:15.719168+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:16.719422+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:17.719606+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:18.719818+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:19.720060+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 63700992 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:20.720439+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 63700992 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:21.720651+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 63692800 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca4c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca4c00 session 0x560002c43340
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600025708c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600026b9c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:22.720784+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560002c436c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361816064 unmapped: 63954944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.080917358s of 41.216419220s, submitted: 75
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095c00 session 0x5600027e08c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560009d988c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c49500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002570c40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560000630700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:23.721024+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 63946752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:24.721236+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 63946752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792201 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:25.721507+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 63946752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:26.721677+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 63946752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76c80, meta 0x16899380), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:27.721890+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 63946752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76c80, meta 0x16899380), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:28.722274+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 63946752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:29.722427+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361832448 unmapped: 63938560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791161 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:30.722661+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361832448 unmapped: 63938560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095c00 session 0x560002570700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:31.722812+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361832448 unmapped: 63938560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:32.722980+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361832448 unmapped: 63938560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.708735466s of 10.003569603s, submitted: 73
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:33.723122+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:34.723300+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857085 data_alloc: 234881024 data_used: 11944560
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:35.723472+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:36.723623+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x560004557dc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:37.723744+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:38.723905+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:39.724114+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3856953 data_alloc: 234881024 data_used: 11944560
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:40.724311+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:41.724483+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:42.724625+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:43.724945+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:44.725123+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857085 data_alloc: 234881024 data_used: 11944560
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:45.725412+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:46.725619+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:47.725827+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:48.725990+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.146135330s of 16.288476944s, submitted: 80
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002745dc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:49.727755+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3856953 data_alloc: 234881024 data_used: 11944560
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:50.728217+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:51.728468+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:52.728668+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:53.728927+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:54.729200+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002571500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3856953 data_alloc: 234881024 data_used: 11944560
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:55.729405+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560002c65880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:56.729574+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:57.729724+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:58.729921+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:59.730384+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:00.730682+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:01.730891+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:02.731567+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:03.731935+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:04.732131+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:05.732447+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 66134016 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:06.732605+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 66134016 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:07.732775+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 66134016 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:08.733029+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 66134016 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:09.733185+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 66134016 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:10.733377+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:11.733531+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:12.733682+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:13.733887+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:14.734123+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:15.734284+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:16.734450+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:17.734634+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:18.734815+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:19.734976+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:20.735216+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:21.735490+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:22.735686+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:23.735866+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:24.736129+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:25.736339+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:26.736506+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:27.736658+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:28.736857+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:29.737066+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:30.737287+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:31.737448+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 66101248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:32.737620+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 66101248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:33.737746+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:34.737921+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:35.738179+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:36.738319+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:37.738485+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:38.738592+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:39.738751+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:40.738964+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:41.739188+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:42.739404+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:43.739644+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:44.739754+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:45.739997+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:46.740176+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:47.740337+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 66076672 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:48.740535+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 66076672 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:49.740704+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 66076672 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:50.740907+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 66068480 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:51.741040+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 66068480 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:52.741183+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x5600024be380
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095c00 session 0x560001411500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002982380
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002983a40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 66068480 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 64.068878174s of 64.135406494s, submitted: 35
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560001233500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x560009d99a40
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002dd6000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002dd6000 session 0x560002744540
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:53.741298+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c656c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002982a80
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360980480 unmapped: 64790528 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:54.741479+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e70d1000/0x0/0x4ffc00000, data 0x2061fee/0x221b000, compress 0x0/0x0/0x0, omap 0x77074, meta 0x16898f8c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360980480 unmapped: 64790528 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e70d1000/0x0/0x4ffc00000, data 0x2061fee/0x221b000, compress 0x0/0x0/0x0, omap 0x77074, meta 0x16898f8c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:55.741719+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800904 data_alloc: 218103808 data_used: 833038
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360980480 unmapped: 64790528 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560000d776c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:56.741909+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360923136 unmapped: 64847872 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000715ec00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:57.742166+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 64839680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:58.742326+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360980480 unmapped: 64790528 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:59.742492+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e70d0000/0x0/0x4ffc00000, data 0x2062011/0x221c000, compress 0x0/0x0/0x0, omap 0x772d0, meta 0x16898d30), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360980480 unmapped: 64790528 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:00.742744+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3863274 data_alloc: 234881024 data_used: 10992670
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e70d0000/0x0/0x4ffc00000, data 0x2062011/0x221c000, compress 0x0/0x0/0x0, omap 0x772d0, meta 0x16898d30), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360980480 unmapped: 64790528 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x5600026b9340
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000715ec00 session 0x560002c48e00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:01.742955+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000715ec00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 66256896 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:02.743151+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000715ec00 session 0x560001410fc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:03.743316+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:04.743462+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:05.743592+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:06.743873+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:07.744123+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:08.744327+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:09.744539+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:10.744792+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:11.744918+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:12.745178+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:13.745383+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:14.745791+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:15.745954+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:16.746159+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:17.746324+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:18.746499+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:19.746692+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:20.746882+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:21.747028+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:22.747171+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 66224128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:23.747350+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 66224128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:24.747523+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 66224128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:25.747692+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 66224128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:26.747858+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 66224128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:27.748031+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:28.748265+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:29.748543+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:30.748885+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:31.749126+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:32.749374+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:33.749589+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:34.749788+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 66207744 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:35.750007+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 66207744 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:36.750242+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 66207744 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:37.750450+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 66207744 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:38.750667+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 66199552 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:39.750894+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 66199552 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:40.751130+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 66199552 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:41.751350+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 66199552 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:42.751531+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:43.751724+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:44.751898+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:45.752045+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:46.753265+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:47.753441+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:48.753752+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:49.753932+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:50.754274+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:51.754479+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:52.754663+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:53.754814+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:54.755043+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:55.755276+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:56.755471+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:57.755675+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:58.755803+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359596032 unmapped: 66174976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:59.756485+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359596032 unmapped: 66174976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:00.757171+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 66166784 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:01.758207+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 66166784 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:02.758700+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 66166784 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:03.759227+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 66166784 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:04.759490+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 66166784 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:05.759712+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 66166784 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:06.759988+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 66158592 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:07.760368+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 66158592 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:08.760633+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 66150400 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:09.760984+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 66150400 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:10.761246+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 66150400 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:11.761451+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 66150400 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:12.761619+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 78.070625305s of 79.486984253s, submitted: 93
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 312 handle_osd_map epochs [312,313], i have 313, src has [1,313]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 313 ms_handle_reset con 0x55ffff7b1400 session 0x560004556e00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:13.761778+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e7aca000/0x0/0x4ffc00000, data 0x1668b49/0x1820000, compress 0x0/0x0/0x0, omap 0x78247, meta 0x16897db9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:14.761971+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:15.762187+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3743364 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:16.762331+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e7aca000/0x0/0x4ffc00000, data 0x1668b49/0x1820000, compress 0x0/0x0/0x0, omap 0x78247, meta 0x16897db9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:17.762477+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:18.762642+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:19.762843+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:20.763042+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e7aca000/0x0/0x4ffc00000, data 0x1668b49/0x1820000, compress 0x0/0x0/0x0, omap 0x78247, meta 0x16897db9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3743364 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:21.763203+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:22.763375+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:23.763550+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e7aca000/0x0/0x4ffc00000, data 0x1668b49/0x1820000, compress 0x0/0x0/0x0, omap 0x78247, meta 0x16897db9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 313 handle_osd_map epochs [313,314], i have 314, src has [1,314]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.259491920s of 11.321173668s, submitted: 39
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: get_auth_request con 0x5600002e7400 auth_method 0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:24.763725+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 65028096 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:25.763899+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 65028096 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:26.764057+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 65028096 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:27.764943+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 65019904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:28.765204+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 65019904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:29.765347+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 65019904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:30.765517+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 65019904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:31.765672+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:32.765887+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:33.766182+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:34.766416+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:35.767025+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:36.767667+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:37.767917+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:38.768161+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:39.768547+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:40.769609+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:41.769779+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:42.769986+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:43.770295+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:44.770550+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:45.770832+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:46.771130+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:47.771339+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:48.771529+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:49.771665+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:50.771866+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:51.772000+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:52.772221+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:53.772445+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:54.772721+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:55.772891+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 64970752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:56.773063+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 64970752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:57.773328+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 64970752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:58.773597+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:59.773813+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:00.774033+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:01.774236+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:02.774480+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:03.774619+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:04.774771+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:05.774977+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 64954368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:06.775213+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 64954368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:07.775376+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 64954368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:08.775567+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 64946176 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:09.775755+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 64946176 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:10.775974+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 64946176 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:11.776170+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 64937984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:12.776329+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 64937984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:13.776487+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 64937984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:14.776701+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 64937984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:15.776863+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 64937984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:16.777138+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 64937984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:17.777297+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360841216 unmapped: 64929792 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:18.777481+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360841216 unmapped: 64929792 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:19.777647+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 64921600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:20.777852+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 64921600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:21.778057+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 64921600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:22.778308+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 64913408 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:23.778483+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 64913408 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:24.778643+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 64913408 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:25.778845+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 64913408 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:26.778989+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 64913408 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:27.779186+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:28.779375+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:29.779553+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:30.779765+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:31.779952+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:32.780125+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:33.780311+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:34.780652+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360873984 unmapped: 64897024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:35.780807+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360873984 unmapped: 64897024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:36.781215+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360873984 unmapped: 64897024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:37.781347+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 64888832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:38.781954+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 64888832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:39.782721+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 64888832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:40.783409+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 64888832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:41.783721+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 64888832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:42.783939+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 64888832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:43.784179+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 64880640 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:44.784350+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 64880640 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:45.784508+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360898560 unmapped: 64872448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:46.784755+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360898560 unmapped: 64872448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:47.785017+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360898560 unmapped: 64872448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:48.785365+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360898560 unmapped: 64872448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:49.785512+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360898560 unmapped: 64872448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:50.785857+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360906752 unmapped: 64864256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:51.786330+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360906752 unmapped: 64864256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:52.786544+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360906752 unmapped: 64864256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:53.786704+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 64856064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:54.786856+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 64856064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:55.787027+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 64856064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:56.787297+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 64856064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:57.787498+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 64856064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:58.787735+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 64856064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:59.787918+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360923136 unmapped: 64847872 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:00.788170+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360923136 unmapped: 64847872 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:01.788389+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360923136 unmapped: 64847872 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:02.788549+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 64839680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:03.788758+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 64839680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:04.788958+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 64839680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:05.789117+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 64839680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:06.789295+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 64839680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:07.789481+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 64831488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:08.789733+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 64831488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:09.789972+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 64831488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:10.790197+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360947712 unmapped: 64823296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:11.790337+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 ms_handle_reset con 0x560000638c00 session 0x5600012328c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366682112 unmapped: 59088896 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:12.790509+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366682112 unmapped: 59088896 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:13.790725+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366682112 unmapped: 59088896 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:14.790950+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366682112 unmapped: 59088896 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:15.791160+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 59080704 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:16.791398+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760474 data_alloc: 218103808 data_used: 7640786
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 59080704 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:17.791552+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 59080704 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:18.791710+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 59072512 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:19.791883+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 59072512 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:20.792241+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:21.792415+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760474 data_alloc: 218103808 data_used: 7640786
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 314 handle_osd_map epochs [314,315], i have 315, src has [1,315]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 118.241065979s of 118.251205444s, submitted: 10
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 315 ms_handle_reset con 0x560002ddd000 session 0x5600027401c0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:22.792596+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:23.792820+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:24.793022+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e82c4000/0x0/0x4ffc00000, data 0xe6c1b8/0x1026000, compress 0x0/0x0/0x0, omap 0x786c3, meta 0x1689793d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:25.793177+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 316 ms_handle_reset con 0x560003088000 session 0x560002571dc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e82c4000/0x0/0x4ffc00000, data 0xe6c1b8/0x1026000, compress 0x0/0x0/0x0, omap 0x786c3, meta 0x1689793d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:26.793359+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3638298 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:27.793567+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:28.793757+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e8f31000/0x0/0x4ffc00000, data 0x1fdda8/0x3b9000, compress 0x0/0x0/0x0, omap 0x78a29, meta 0x168975d7), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:29.794010+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:30.794240+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:31.794435+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3641072 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x1ff843/0x3bc000, compress 0x0/0x0/0x0, omap 0x78b3f, meta 0x168974c1), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: get_auth_request con 0x560000089800 auth_method 0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 63021056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:32.794644+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 63021056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:33.794849+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 317 handle_osd_map epochs [317,318], i have 318, src has [1,318]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.712511063s of 11.786905289s, submitted: 39
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 63012864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:34.795036+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 63012864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:35.795237+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 63012864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:36.795412+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643846 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 318 heartbeat osd_stat(store_statfs(0x4e8f2b000/0x0/0x4ffc00000, data 0x2012c2/0x3bf000, compress 0x0/0x0/0x0, omap 0x79220, meta 0x16896de0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 63012864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 318 heartbeat osd_stat(store_statfs(0x4e8f2b000/0x0/0x4ffc00000, data 0x2012c2/0x3bf000, compress 0x0/0x0/0x0, omap 0x79220, meta 0x16896de0), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:37.795556+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 63012864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:38.795704+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 61956096 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 ms_handle_reset con 0x55ffff7b1400 session 0x560000d76700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:39.796524+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:40.797345+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:41.797531+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:42.797676+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:43.797835+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:44.798024+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:45.798218+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:46.798435+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:47.798603+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:48.798788+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:49.798980+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:50.799181+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:51.799358+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:52.799516+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:53.799721+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61931520 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:54.800475+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 61923328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:55.800713+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 61923328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:56.800923+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 61923328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:57.801165+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 61923328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:58.801385+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363855872 unmapped: 61915136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:59.801558+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363855872 unmapped: 61915136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:00.801801+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363855872 unmapped: 61915136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:01.801991+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363855872 unmapped: 61915136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:02.802318+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:03.802457+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:04.802610+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:05.802767+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:06.802916+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:07.803060+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:08.803258+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:09.803436+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 61890560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:10.803640+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 61890560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:11.803810+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 61890560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:12.803968+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 61890560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:13.804109+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 61890560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:14.804240+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 61882368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:15.804429+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 61882368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:16.804589+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 61882368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:17.804746+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 61882368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:18.804912+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:19.805061+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:20.805252+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:21.805405+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:22.805558+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:23.805720+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:24.805882+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:25.806024+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:26.806197+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363913216 unmapped: 61857792 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:27.806368+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363913216 unmapped: 61857792 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:28.806570+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363913216 unmapped: 61857792 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:29.806734+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363913216 unmapped: 61857792 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:30.806956+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363921408 unmapped: 61849600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:31.807148+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363921408 unmapped: 61849600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:32.807308+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363921408 unmapped: 61849600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:33.807507+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363921408 unmapped: 61849600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:34.807747+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363937792 unmapped: 61833216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:35.807931+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363937792 unmapped: 61833216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:36.808090+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363937792 unmapped: 61833216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:37.808246+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363937792 unmapped: 61833216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:38.808409+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363945984 unmapped: 61825024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:39.808559+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363945984 unmapped: 61825024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:40.808749+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363945984 unmapped: 61825024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:41.808914+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363945984 unmapped: 61825024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:42.809173+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:43.809373+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:44.809530+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:45.809668+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:46.809812+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:47.810005+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:48.810147+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:49.810299+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:50.810680+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:51.810852+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:52.811018+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:53.811162+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:54.811306+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:55.811460+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:56.811613+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:57.811827+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:58.811968+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:59.812166+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363978752 unmapped: 61792256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:00.812369+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363978752 unmapped: 61792256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:01.812531+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363978752 unmapped: 61792256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:02.812741+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363978752 unmapped: 61792256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:03.812876+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363986944 unmapped: 61784064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:04.813054+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363986944 unmapped: 61784064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:05.813293+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363986944 unmapped: 61784064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:06.813475+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363995136 unmapped: 61775872 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:07.813702+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364003328 unmapped: 61767680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:08.813885+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364003328 unmapped: 61767680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:09.814128+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364003328 unmapped: 61767680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:10.814333+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364003328 unmapped: 61767680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:11.814493+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364011520 unmapped: 61759488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:12.814691+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364011520 unmapped: 61759488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:13.814810+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364011520 unmapped: 61759488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:14.814959+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364011520 unmapped: 61759488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:15.815156+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364011520 unmapped: 61759488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:16.815366+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364011520 unmapped: 61759488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:17.815525+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364019712 unmapped: 61751296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:18.815669+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364027904 unmapped: 61743104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:19.815805+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364027904 unmapped: 61743104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:20.815979+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364027904 unmapped: 61743104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:21.816181+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364027904 unmapped: 61743104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:22.816373+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364027904 unmapped: 61743104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:23.816520+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 61734912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:24.816697+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 61734912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:25.816836+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 61734912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:26.816980+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 61734912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:27.817176+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 61734912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 113.987884521s of 114.019287109s, submitted: 20
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:28.817337+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364068864 unmapped: 70098944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 319 handle_osd_map epochs [319,320], i have 320, src has [1,320]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 320 heartbeat osd_stat(store_statfs(0x4e8f27000/0x0/0x4ffc00000, data 0x202eb4/0x3c5000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 320 ms_handle_reset con 0x560000638c00 session 0x560007592e00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:29.817510+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364085248 unmapped: 70082560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:30.817738+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364085248 unmapped: 70082560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 ms_handle_reset con 0x560002ddd000 session 0x560007592fc0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e40000/0x0/0x4ffc00000, data 0x12e4a83/0x14aa000, compress 0x0/0x0/0x0, omap 0x7937e, meta 0x16896c82), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:31.817894+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:32.818155+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:33.818333+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:34.818512+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:35.818684+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:36.818888+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:37.819050+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:38.819216+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:39.819415+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364118016 unmapped: 70049792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:40.819639+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364118016 unmapped: 70049792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:41.819785+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364118016 unmapped: 70049792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:42.819983+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:43.820138+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:44.820317+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:45.820487+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-12-13T09:30:46.820652+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _finish_auth 0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:46.821917+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:47.820897+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:48.821122+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:49.821313+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:50.821568+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364142592 unmapped: 70025216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:51.821811+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364142592 unmapped: 70025216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:52.821990+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364142592 unmapped: 70025216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:53.822206+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364142592 unmapped: 70025216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:54.822401+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364142592 unmapped: 70025216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:55.822607+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:56.822825+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:57.823025+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:58.823301+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:59.823555+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:00.823796+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:01.823959+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:02.824200+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:03.824383+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364167168 unmapped: 70000640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:04.824572+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364167168 unmapped: 70000640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:05.824853+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364167168 unmapped: 70000640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:06.825061+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364167168 unmapped: 70000640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000715ec00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:07.825319+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364175360 unmapped: 69992448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.013404846s of 40.179763794s, submitted: 19
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:08.825494+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364183552 unmapped: 69984256 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:09.825667+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364183552 unmapped: 69984256 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 322 ms_handle_reset con 0x56000715ec00 session 0x560007592700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:10.826003+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364216320 unmapped: 69951488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:11.826191+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 322 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e81c9/0x14ae000, compress 0x0/0x0/0x0, omap 0x79d6b, meta 0x16896295), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364216320 unmapped: 69951488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3750746 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:12.826463+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364216320 unmapped: 69951488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:13.826712+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364216320 unmapped: 69951488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:14.826956+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 69943296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:15.827228+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 69943296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:16.827433+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 69943296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3750746 data_alloc: 218103808 data_used: 300754
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:17.827656+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 322 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e81c9/0x14ae000, compress 0x0/0x0/0x0, omap 0x79d6b, meta 0x16896295), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 69943296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 322 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e81c9/0x14ae000, compress 0x0/0x0/0x0, omap 0x79d6b, meta 0x16896295), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:18.827802+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 69943296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:19.827942+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.607131004s of 11.598348618s, submitted: 26
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:20.828192+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:21.828436+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:22.828596+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:23.828803+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:24.829006+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:25.829186+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:26.829335+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:27.829521+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:28.829681+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:29.829829+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:30.831391+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:31.831595+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:32.831795+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 47K writes, 187K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1267 writes, 4441 keys, 1267 commit groups, 1.0 writes per commit group, ingest: 3.93 MB, 0.01 MB/s
                                           Interval WAL: 1267 writes, 514 syncs, 2.46 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread fragmentation_score=0.004811 took=0.000062s
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:33.832209+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:34.832357+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:35.832502+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 69894144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:36.832664+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 69894144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:37.832822+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 69894144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:38.832979+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 69894144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:39.833204+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 69885952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:40.833410+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 69885952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:41.833597+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364290048 unmapped: 69877760 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:42.833791+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364290048 unmapped: 69877760 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:43.833979+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 69869568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:44.834223+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 69869568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:45.834438+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 69869568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:46.834610+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 69869568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:47.834773+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 69869568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: mgrc ms_handle_reset ms_handle_reset con 0x5600026d9800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 09:42:47 compute-0 ceph-osd[87041]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: get_auth_request con 0x560003088000 auth_method 0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: mgrc handle_mgr_configure stats_period=5
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:48.835191+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:49.835471+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:50.835705+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:51.835893+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:52.836152+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 ms_handle_reset con 0x5600026d7800 session 0x560001410a80
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:53.836393+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:54.836594+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:55.836752+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:56.836948+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:57.837189+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:58.837415+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:59.837635+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:00.837930+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:01.838111+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:02.838334+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:03.838581+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:04.838782+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:05.838980+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:06.839161+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:07.839335+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:08.839506+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:09.839653+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:10.839944+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:11.840181+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:12.840440+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:13.840646+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:14.840842+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 69836800 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:15.841053+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 69836800 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:16.841253+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 69828608 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:17.841513+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 69828608 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:18.841719+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 69820416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:19.841943+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 69820416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:20.842164+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 69820416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:21.842331+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 69820416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:22.842492+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 69820416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:23.842688+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364355584 unmapped: 69812224 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:24.842850+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364355584 unmapped: 69812224 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:25.843033+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 69804032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:26.843865+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 69795840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:27.844367+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 69795840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:28.845110+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 69795840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:29.845357+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 69795840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:30.846045+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 69795840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:31.846560+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 69795840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:32.846871+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 69787648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:33.847354+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 69787648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:34.847747+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 69787648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:35.848180+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 69787648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:36.848661+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 69787648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:37.848836+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 69787648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:38.849208+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 69771264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:39.849577+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 69771264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:40.849919+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 69771264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:41.850211+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 69771264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:42.850487+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 69763072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:43.850659+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 69763072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:44.850851+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364412928 unmapped: 69754880 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:45.851002+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364412928 unmapped: 69754880 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:46.851267+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364412928 unmapped: 69754880 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:47.851611+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364421120 unmapped: 69746688 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:48.851848+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364421120 unmapped: 69746688 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:49.852013+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364421120 unmapped: 69746688 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:50.852222+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364429312 unmapped: 69738496 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:51.852459+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364429312 unmapped: 69738496 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:52.852691+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364429312 unmapped: 69738496 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:53.852852+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364437504 unmapped: 69730304 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:54.853044+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:55.853286+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:56.853508+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:57.853732+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:58.853974+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:59.854239+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:00.854540+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:01.854840+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:02.855178+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364462080 unmapped: 69705728 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:03.855385+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364462080 unmapped: 69705728 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:04.855587+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364462080 unmapped: 69705728 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:05.855763+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364462080 unmapped: 69705728 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:06.855940+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 69697536 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:07.856156+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 69697536 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:08.856342+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 69697536 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:09.856525+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 69697536 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:10.856763+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 69689344 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:11.856910+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 69689344 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:12.857158+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 69681152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:13.857613+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 69681152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:14.857884+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 69681152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:15.858137+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 69681152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:16.858306+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 69681152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:17.858466+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 69681152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:18.858774+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 69656576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:19.858989+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 69656576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:20.859219+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 69656576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:21.859365+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 ms_handle_reset con 0x5600026dc400 session 0x55fffffffc00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d7800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 69656576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:22.859609+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 69656576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:23.859782+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 69656576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:24.859967+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364519424 unmapped: 69648384 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:25.860114+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364519424 unmapped: 69648384 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:26.860331+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 127.056221008s of 127.063026428s, submitted: 10
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364593152 unmapped: 69574656 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:27.860521+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364593152 unmapped: 69574656 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:28.860827+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 69558272 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:29.861034+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 69558272 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:30.861252+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 69558272 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:31.861445+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 69558272 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:32.861674+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 69558272 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:33.861895+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364683264 unmapped: 69484544 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:34.862045+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:35.862158+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:36.862347+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:37.862815+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:38.863207+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:39.863439+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:40.863680+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:41.863981+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:42.864298+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:43.864761+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:44.864978+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:45.865191+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:46.865449+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:47.865632+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:48.865819+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:49.866033+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:50.866474+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:51.866680+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:52.866877+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:53.867253+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:54.867449+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:55.867583+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:56.867821+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:57.868188+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:58.868401+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:59.868578+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:00.868760+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:01.869000+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:02.869274+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:03.869399+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:04.869522+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:05.869667+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:06.869847+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:07.869997+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:08.870238+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:09.870488+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:10.870745+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:11.870958+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:12.871200+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:13.871423+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:14.871640+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:15.871897+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 48.884391785s of 49.232563019s, submitted: 132
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:16.872136+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:17.872328+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752822 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:18.880738+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:19.881046+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:20.881304+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:21.881453+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:22.881645+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:23.881823+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:24.882188+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:25.882373+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:26.882647+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:27.882846+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.627572060s of 12.162124634s, submitted: 8
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752822 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:28.883019+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:29.883187+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:30.883434+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:31.883637+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:32.883906+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:33.884155+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:34.884355+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:35.884509+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:36.884684+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:37.884848+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:38.885115+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:39.885273+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364781568 unmapped: 69386240 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.718849659s of 11.986929893s, submitted: 8
Dec 13 09:42:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:40.885550+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:41.885735+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:42.891038+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:43.891180+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:44.891366+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:45.891559+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:46.891945+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:47.892122+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:48.892293+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:49.892454+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:50.892630+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 69345280 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:51.892837+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 69345280 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:52.893172+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 69345280 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:53.893371+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 69345280 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:54.893531+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 69345280 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:55.893701+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 69337088 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:56.893845+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 69337088 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:57.894095+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 69337088 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:58.894272+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:59.894415+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:00.894625+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:01.894814+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:02.895017+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:03.895168+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364847104 unmapped: 69320704 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:04.895367+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364847104 unmapped: 69320704 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:05.895586+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364847104 unmapped: 69320704 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:06.895762+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364855296 unmapped: 69312512 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:07.895947+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364855296 unmapped: 69312512 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:08.896326+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364855296 unmapped: 69312512 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:09.896492+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364855296 unmapped: 69312512 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:10.896686+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364863488 unmapped: 69304320 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:11.896897+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364871680 unmapped: 69296128 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:12.897204+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 69287936 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:13.897535+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 69287936 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:14.897875+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 69287936 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:15.898214+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 69287936 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:16.898358+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 69287936 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:17.898621+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364888064 unmapped: 69279744 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:18.898815+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364888064 unmapped: 69279744 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:19.899135+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364896256 unmapped: 69271552 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:20.899352+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364896256 unmapped: 69271552 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:21.899566+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364896256 unmapped: 69271552 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:22.899742+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:23.899965+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:24.900243+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:25.900473+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:26.900734+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:27.900886+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:28.901041+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:29.901253+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:30.901459+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:31.901695+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:32.901951+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:33.902133+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:34.902323+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364920832 unmapped: 69246976 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:35.902613+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:36.902910+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:37.903177+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:38.903369+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:39.903559+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:40.903782+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:41.903928+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:42.904177+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:43.904374+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:44.904627+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:45.904781+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:46.904943+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:47.905159+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:48.905384+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:49.905521+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:50.905662+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:51.905901+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:52.906140+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:53.906333+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:54.906512+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:55.906735+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:56.906906+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:57.907411+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:58.907768+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:59.908058+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:00.908298+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:01.908502+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:02.908659+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:03.908805+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:04.908993+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:05.909184+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:06.909356+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:07.909516+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:08.909705+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:09.909881+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:10.910095+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:11.910283+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:12.910416+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:13.910569+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:14.910746+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:15.910919+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:16.911182+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:17.911327+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365035520 unmapped: 69132288 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:18.911490+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:19.911631+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:20.911801+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:21.911972+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:22.912181+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:23.912372+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:24.912532+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:25.912708+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:26.912910+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365051904 unmapped: 69115904 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:27.913123+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365051904 unmapped: 69115904 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:28.913265+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365060096 unmapped: 69107712 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:29.913381+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365068288 unmapped: 69099520 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:30.914976+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365068288 unmapped: 69099520 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:31.915181+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365068288 unmapped: 69099520 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:32.915330+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365068288 unmapped: 69099520 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:33.915517+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365068288 unmapped: 69099520 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:34.915680+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:35.915855+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:36.916137+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:37.916316+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:38.916474+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:39.916669+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:40.916876+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:41.917050+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:42.917335+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:43.917540+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 69066752 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:44.917737+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 69066752 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:45.917953+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 69066752 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:46.918164+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:47.918341+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:48.918488+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:49.918677+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:50.918876+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:51.919046+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:52.919324+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:53.919562+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:54.919790+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:55.920015+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:56.920212+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:57.920415+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:58.920604+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:59.920765+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:00.921165+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:01.921320+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365133824 unmapped: 69033984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:02.921503+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:03.921639+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:04.921773+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:05.921947+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 69009408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:06.922163+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 69009408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:07.922345+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 69009408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:08.922561+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 69009408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:09.922754+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 69009408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:10.922990+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 69009408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:11.923147+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:12.923403+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:13.923662+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:14.923896+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:15.924140+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365182976 unmapped: 68984832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:16.924341+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365182976 unmapped: 68984832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:17.924586+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365182976 unmapped: 68984832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:18.924832+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365182976 unmapped: 68984832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:19.925006+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:20.925233+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:21.925440+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:22.929216+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:23.929366+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:24.929558+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:25.929725+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:26.929895+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:27.930058+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:28.930292+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:29.930500+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:30.930699+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:31.930855+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:32.931021+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:33.931192+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:34.931391+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:35.931532+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:36.931738+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:37.931931+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:38.932140+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:39.932385+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:40.932705+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:41.932924+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:42.933152+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:43.934504+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365248512 unmapped: 68919296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:44.934723+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365248512 unmapped: 68919296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:45.934930+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365248512 unmapped: 68919296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:46.935146+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365248512 unmapped: 68919296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:47.935285+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365256704 unmapped: 68911104 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:48.935458+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365256704 unmapped: 68911104 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:49.935615+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365256704 unmapped: 68911104 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:50.935832+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:51.936002+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:52.936206+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:53.936449+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:54.936616+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:55.936763+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365281280 unmapped: 68886528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:56.936910+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365281280 unmapped: 68886528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:57.937177+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365281280 unmapped: 68886528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:58.937377+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365281280 unmapped: 68886528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:59.937549+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:00.937765+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:01.937909+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:02.938054+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:03.938254+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365305856 unmapped: 68861952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:04.938444+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365305856 unmapped: 68861952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:05.939929+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365305856 unmapped: 68861952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:06.940169+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365305856 unmapped: 68861952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:07.940333+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365305856 unmapped: 68861952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:08.940522+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365322240 unmapped: 68845568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:09.940713+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365322240 unmapped: 68845568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:10.940944+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365322240 unmapped: 68845568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:11.941118+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365330432 unmapped: 68837376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:12.941312+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365330432 unmapped: 68837376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:13.941471+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365330432 unmapped: 68837376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:14.941679+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365330432 unmapped: 68837376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:15.941846+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365346816 unmapped: 68820992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:16.942005+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365346816 unmapped: 68820992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:17.942183+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365346816 unmapped: 68820992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:18.942361+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365346816 unmapped: 68820992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:19.942698+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365346816 unmapped: 68820992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:20.942899+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365346816 unmapped: 68820992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:21.943130+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365355008 unmapped: 68812800 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:22.943333+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365355008 unmapped: 68812800 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:23.943518+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365355008 unmapped: 68812800 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:24.943782+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365355008 unmapped: 68812800 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:25.943973+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365363200 unmapped: 68804608 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:26.944225+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365363200 unmapped: 68804608 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:27.944507+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365363200 unmapped: 68804608 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:28.944696+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365363200 unmapped: 68804608 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:29.944985+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365371392 unmapped: 68796416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:30.945334+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365371392 unmapped: 68796416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:31.945488+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365387776 unmapped: 68780032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:32.945639+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365387776 unmapped: 68780032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:33.945802+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365387776 unmapped: 68780032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:34.946048+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365387776 unmapped: 68780032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:35.946302+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365387776 unmapped: 68780032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:36.946528+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365387776 unmapped: 68780032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:37.946726+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365395968 unmapped: 68771840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:38.947017+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365395968 unmapped: 68771840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:39.947208+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365395968 unmapped: 68771840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:40.947447+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 68763648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:41.947658+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 68763648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:42.947799+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 68763648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:43.947971+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 68747264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:44.948160+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 68747264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:45.948494+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 68747264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:46.948764+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 68747264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:47.948938+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 68739072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:48.949108+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 68739072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752864 data_alloc: 218103808 data_used: 305161
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:49.949283+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 68739072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:50.949524+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 68739072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:51.949746+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 68739072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:52.949955+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 68739072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:53.950144+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 253.087219238s of 253.859130859s, submitted: 2
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365453312 unmapped: 68714496 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3756358 data_alloc: 218103808 data_used: 305161
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:54.950295+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365461504 unmapped: 68706304 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:55.950521+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365469696 unmapped: 68698112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:56.950730+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365469696 unmapped: 68698112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:57.950902+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365486080 unmapped: 68681728 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:58.951139+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 324 heartbeat osd_stat(store_statfs(0x4e8aa9000/0x0/0x4ffc00000, data 0x67b828/0x843000, compress 0x0/0x0/0x0, omap 0x7a143, meta 0x16895ebd), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365494272 unmapped: 68673536 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 324 ms_handle_reset con 0x560000638c00 session 0x5600055dd340
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692145 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:59.951324+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365494272 unmapped: 68673536 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:00.951516+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 68665344 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:01.951723+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 68665344 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:02.951921+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 68665344 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:03.952208+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.825212955s of 10.108526230s, submitted: 19
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365510656 unmapped: 68657152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 324 handle_osd_map epochs [324,325], i have 325, src has [1,325]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3695983 data_alloc: 218103808 data_used: 305161
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:04.952402+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 325 heartbeat osd_stat(store_statfs(0x4e8aa4000/0x0/0x4ffc00000, data 0x67d2a7/0x846000, compress 0x0/0x0/0x0, omap 0x7a8e5, meta 0x1689571b), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365535232 unmapped: 68632576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:05.952593+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365535232 unmapped: 68632576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:06.952788+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 325 handle_osd_map epochs [325,326], i have 325, src has [1,326]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365559808 unmapped: 68608000 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:07.952979+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365559808 unmapped: 68608000 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:08.953235+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 326 heartbeat osd_stat(store_statfs(0x4e8aa1000/0x0/0x4ffc00000, data 0x67ee97/0x849000, compress 0x0/0x0/0x0, omap 0x7a996, meta 0x1689566a), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365559808 unmapped: 68608000 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3698669 data_alloc: 218103808 data_used: 305196
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:09.953405+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365559808 unmapped: 68608000 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:10.953584+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365559808 unmapped: 68608000 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 326 heartbeat osd_stat(store_statfs(0x4e8aa3000/0x0/0x4ffc00000, data 0x67ee97/0x849000, compress 0x0/0x0/0x0, omap 0x7aa47, meta 0x168955b9), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:11.953804+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365568000 unmapped: 68599808 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:12.953969+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365568000 unmapped: 68599808 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 326 heartbeat osd_stat(store_statfs(0x4e8aa4000/0x0/0x4ffc00000, data 0x67ee87/0x848000, compress 0x0/0x0/0x0, omap 0x7aba9, meta 0x16895457), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:13.954139+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 68575232 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.014971256s of 10.198164940s, submitted: 32
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674788 data_alloc: 218103808 data_used: 305145
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:14.954313+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 326 ms_handle_reset con 0x560002ddd000 session 0x560006879500
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 68575232 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:15.954497+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 68575232 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:16.954633+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 68575232 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:17.954806+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 326 heartbeat osd_stat(store_statfs(0x4e8f14000/0x0/0x4ffc00000, data 0x20ee64/0x3d7000, compress 0x0/0x0/0x0, omap 0x7ac5a, meta 0x168953a6), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 68575232 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:18.954966+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365608960 unmapped: 68558848 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3677606 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:19.955200+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365608960 unmapped: 68558848 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:20.955419+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000715ec00
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365608960 unmapped: 68558848 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:21.955548+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 68550656 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:22.955695+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e8710000/0x0/0x4ffc00000, data 0xa10916/0xbdc000, compress 0x0/0x0/0x0, omap 0x7ad73, meta 0x1689528d), peers [1,2] op hist [0,0,0,0,0,0,1])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 ms_handle_reset con 0x56000715ec00 session 0x56000550e700
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:23.955875+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:24.956036+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:25.956171+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:26.956354+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:27.956493+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:28.956648+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:29.956812+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:30.957014+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:31.957208+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:32.957372+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:33.957540+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:34.957727+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:35.957873+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:36.958050+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:37.958266+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:38.958481+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:39.958673+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:40.958888+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:41.959039+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364937216 unmapped: 69230592 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:42.959276+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:43.959471+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:44.959637+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:45.959788+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:46.959973+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:47.960167+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:48.960356+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:49.960545+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:50.960761+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:51.960920+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:52.961108+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:53.961247+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:54.961443+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364961792 unmapped: 69206016 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:55.961632+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364961792 unmapped: 69206016 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:56.961836+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364961792 unmapped: 69206016 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:57.961968+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:58.962120+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:59.962284+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:00.962512+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:01.962692+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:02.962951+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:03.963203+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:04.963353+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:05.963537+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:06.963751+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:07.963904+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:08.964133+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:09.964271+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:10.964439+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:11.964653+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:12.964853+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:13.965058+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:14.965327+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:15.965476+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:16.965623+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:17.965752+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:18.965896+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:19.966042+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:20.966355+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:21.966504+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:22.966698+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:23.966839+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:24.967411+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:25.967615+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:26.967831+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:27.967988+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:28.968166+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003892800
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 72.806076050s of 74.684478760s, submitted: 28
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 328 handle_osd_map epochs [328,329], i have 329, src has [1,329]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:29.968313+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 329 ms_handle_reset con 0x560003892800 session 0x56000550f180
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686636 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:30.968576+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:31.968722+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:32.968847+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:33.968985+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 329 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x21406f/0x3e0000, compress 0x0/0x0/0x0, omap 0x7b199, meta 0x16894e67), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:34.969166+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686636 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:35.969298+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365051904 unmapped: 69115904 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:36.969469+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365051904 unmapped: 69115904 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:37.969657+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 329 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x21406f/0x3e0000, compress 0x0/0x0/0x0, omap 0x7b199, meta 0x16894e67), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365060096 unmapped: 69107712 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:38.969823+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:39.969981+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:40.970223+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:41.970391+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:42.970603+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:43.970793+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:44.970982+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:45.971180+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:46.971342+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:47.971697+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:48.971871+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:49.972056+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:50.972324+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:51.972550+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:52.972792+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:53.972966+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:54.973206+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4268: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:55.973361+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:56.973558+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:57.973746+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:58.973963+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:59.974203+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:00.974434+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:01.975277+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:02.975727+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365133824 unmapped: 69033984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:03.975957+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:04.976691+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365133824 unmapped: 69033984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:05.977369+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365133824 unmapped: 69033984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:06.977828+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365133824 unmapped: 69033984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:07.978153+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365142016 unmapped: 69025792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:08.978461+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365142016 unmapped: 69025792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:09.978623+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365142016 unmapped: 69025792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:10.979176+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365142016 unmapped: 69025792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:11.979411+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:12.979823+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:13.980208+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:14.980466+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:15.980667+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:16.980864+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:17.981028+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:18.981184+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:19.981427+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:20.981626+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:21.981850+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:22.982213+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:23.982488+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365174784 unmapped: 68993024 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:24.982724+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365174784 unmapped: 68993024 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:25.982913+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:26.983172+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:27.983362+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:28.983549+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:29.983709+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:30.983994+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:31.984163+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:32.984342+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.74 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 496 writes, 1133 keys, 496 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s
                                           Interval WAL: 496 writes, 229 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:33.984537+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:34.985378+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:35.985941+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365215744 unmapped: 68952064 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:36.986168+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365215744 unmapped: 68952064 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:37.986683+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365215744 unmapped: 68952064 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:38.987216+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365215744 unmapped: 68952064 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:39.987622+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:40.987836+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:41.988143+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:42.988487+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:43.988788+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:44.989028+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:45.989212+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:46.989418+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:47.989639+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365240320 unmapped: 68927488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:48.989845+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365240320 unmapped: 68927488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:49.990056+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365248512 unmapped: 68919296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:50.990354+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365256704 unmapped: 68911104 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:51.990521+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:52.990738+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:53.990897+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:54.991220+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:55.991375+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:56.991597+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:57.991792+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:58.991962+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:59.992151+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:00.992390+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:01.992533+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:02.992658+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:03.992795+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365281280 unmapped: 68886528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:04.992916+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365281280 unmapped: 68886528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:05.993051+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365289472 unmapped: 68878336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:06.993459+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365289472 unmapped: 68878336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:07.993604+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365289472 unmapped: 68878336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:08.993885+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:09.994183+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:10.994388+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:11.994534+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:12.994668+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:13.994885+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365338624 unmapped: 68829184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: do_command 'config diff' '{prefix=config diff}'
Dec 13 09:42:47 compute-0 ceph-osd[87041]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 09:42:47 compute-0 ceph-osd[87041]: do_command 'config show' '{prefix=config show}'
Dec 13 09:42:47 compute-0 ceph-osd[87041]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 09:42:47 compute-0 ceph-osd[87041]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 09:42:47 compute-0 ceph-osd[87041]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:14.995049+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 09:42:47 compute-0 ceph-osd[87041]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:42:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:42:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:15.995227+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 69287936 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:42:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:16.995419+0000)
Dec 13 09:42:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:42:47 compute-0 ceph-osd[87041]: do_command 'log dump' '{prefix=log dump}'
Dec 13 09:42:47 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23310 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:48 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23312 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:48 compute-0 nova_compute[248510]: 2025-12-13 09:42:48.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:42:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} v 0)
Dec 13 09:42:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:42:49 compute-0 ceph-mon[76537]: pgmap v4267: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3945019973' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 13 09:42:49 compute-0 ceph-mon[76537]: from='client.23304 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:49 compute-0 ceph-mon[76537]: from='client.23308 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:49 compute-0 ceph-mon[76537]: from='client.23307 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:49 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23316 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} v 0)
Dec 13 09:42:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:42:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4269: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:42:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 13 09:42:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806974952' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 13 09:42:49 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23320 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:50 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23324 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Dec 13 09:42:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3085580492' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 13 09:42:50 compute-0 ceph-mon[76537]: pgmap v4268: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:50 compute-0 ceph-mon[76537]: from='client.23310 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:50 compute-0 ceph-mon[76537]: from='client.23312 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:50 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:42:50 compute-0 ceph-mon[76537]: from='client.23316 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:50 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:42:50 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2806974952' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 13 09:42:50 compute-0 systemd[1]: Starting Hostname Service...
Dec 13 09:42:50 compute-0 nova_compute[248510]: 2025-12-13 09:42:50.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:42:50 compute-0 podman[427830]: 2025-12-13 09:42:50.837976763 +0000 UTC m=+0.088417047 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 09:42:50 compute-0 podman[427827]: 2025-12-13 09:42:50.852017072 +0000 UTC m=+0.105378418 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 09:42:50 compute-0 systemd[1]: Started Hostname Service.
Dec 13 09:42:50 compute-0 nova_compute[248510]: 2025-12-13 09:42:50.982 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:51 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23326 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 13 09:42:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/683512010' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 13 09:42:51 compute-0 ceph-mon[76537]: pgmap v4269: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:51 compute-0 ceph-mon[76537]: from='client.23320 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:51 compute-0 ceph-mon[76537]: from='client.23324 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3085580492' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 13 09:42:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/683512010' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 13 09:42:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 13 09:42:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605576686' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 13 09:42:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4270: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 13 09:42:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 13 09:42:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 13 09:42:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 13 09:42:52 compute-0 nova_compute[248510]: 2025-12-13 09:42:52.185 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 13 09:42:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1422976373' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 13 09:42:52 compute-0 nova_compute[248510]: 2025-12-13 09:42:52.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:42:52 compute-0 ceph-mon[76537]: from='client.23326 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:42:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1605576686' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 13 09:42:52 compute-0 ceph-mon[76537]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 13 09:42:52 compute-0 ceph-mon[76537]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 13 09:42:52 compute-0 ceph-mon[76537]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 13 09:42:52 compute-0 ceph-mon[76537]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 13 09:42:52 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1422976373' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 13 09:42:52 compute-0 podman[428153]: 2025-12-13 09:42:52.995932251 +0000 UTC m=+0.082120420 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd)
Dec 13 09:42:53 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23342 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 13 09:42:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264461271' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 13 09:42:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4271: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Dec 13 09:42:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1998616083' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 13 09:42:54 compute-0 ceph-mon[76537]: pgmap v4270: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:54 compute-0 ceph-mon[76537]: from='client.23342 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/264461271' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 13 09:42:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:42:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 13 09:42:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2487080936' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 13 09:42:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:42:55.477 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:42:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:42:55.478 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:42:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:42:55.478 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:42:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4272: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:55 compute-0 nova_compute[248510]: 2025-12-13 09:42:55.985 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 13 09:42:56 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513426738' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 13 09:42:56 compute-0 ceph-mon[76537]: pgmap v4271: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1998616083' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 13 09:42:56 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2487080936' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 13 09:42:56 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23352 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:56 compute-0 nova_compute[248510]: 2025-12-13 09:42:56.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:42:57 compute-0 nova_compute[248510]: 2025-12-13 09:42:57.187 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:42:57 compute-0 nova_compute[248510]: 2025-12-13 09:42:57.240 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:42:57 compute-0 nova_compute[248510]: 2025-12-13 09:42:57.241 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:42:57 compute-0 nova_compute[248510]: 2025-12-13 09:42:57.241 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:42:57 compute-0 nova_compute[248510]: 2025-12-13 09:42:57.241 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:42:57 compute-0 nova_compute[248510]: 2025-12-13 09:42:57.242 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:42:57 compute-0 ceph-mon[76537]: pgmap v4272: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:57 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/513426738' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 13 09:42:57 compute-0 ceph-mon[76537]: from='client.23352 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 13 09:42:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817892434' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 13 09:42:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4273: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:42:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1288577377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:42:57 compute-0 nova_compute[248510]: 2025-12-13 09:42:57.848 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:42:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 13 09:42:57 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351467198' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 13 09:42:58 compute-0 nova_compute[248510]: 2025-12-13 09:42:58.033 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:42:58 compute-0 nova_compute[248510]: 2025-12-13 09:42:58.034 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3335MB free_disk=59.987355314195156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:42:58 compute-0 nova_compute[248510]: 2025-12-13 09:42:58.035 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:42:58 compute-0 nova_compute[248510]: 2025-12-13 09:42:58.035 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:42:58 compute-0 nova_compute[248510]: 2025-12-13 09:42:58.130 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:42:58 compute-0 nova_compute[248510]: 2025-12-13 09:42:58.131 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:42:58 compute-0 nova_compute[248510]: 2025-12-13 09:42:58.395 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 09:42:58 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23360 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:58 compute-0 nova_compute[248510]: 2025-12-13 09:42:58.572 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 09:42:58 compute-0 nova_compute[248510]: 2025-12-13 09:42:58.572 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 09:42:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/817892434' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 13 09:42:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1288577377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:42:58 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3351467198' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 13 09:42:58 compute-0 nova_compute[248510]: 2025-12-13 09:42:58.596 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 09:42:58 compute-0 nova_compute[248510]: 2025-12-13 09:42:58.634 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 09:42:58 compute-0 nova_compute[248510]: 2025-12-13 09:42:58.668 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:42:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec 13 09:42:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874487379' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 13 09:42:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:42:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1956683031' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:42:59 compute-0 nova_compute[248510]: 2025-12-13 09:42:59.308 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:42:59 compute-0 nova_compute[248510]: 2025-12-13 09:42:59.314 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:42:59 compute-0 nova_compute[248510]: 2025-12-13 09:42:59.335 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:42:59 compute-0 nova_compute[248510]: 2025-12-13 09:42:59.337 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:42:59 compute-0 nova_compute[248510]: 2025-12-13 09:42:59.337 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:42:59 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23366 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:59 compute-0 ceph-mon[76537]: pgmap v4273: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:59 compute-0 ceph-mon[76537]: from='client.23360 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1874487379' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 13 09:42:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1956683031' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:42:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4274: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:42:59 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23368 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:42:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:43:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Dec 13 09:43:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1628397413' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Dec 13 09:43:00 compute-0 ceph-mon[76537]: from='client.23366 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:43:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1628397413' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Dec 13 09:43:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Dec 13 09:43:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2200477' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Dec 13 09:43:00 compute-0 nova_compute[248510]: 2025-12-13 09:43:00.990 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23374 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:43:01 compute-0 ceph-mon[76537]: pgmap v4274: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:01 compute-0 ceph-mon[76537]: from='client.23368 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:43:01 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2200477' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23376 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5435278116822236e-05 of space, bias 1.0, pg target 0.004630583435046671 quantized to 32 (current 32)
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.826108195861351e-06 of space, bias 1.0, pg target 0.0011478324587584053 quantized to 32 (current 32)
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.968770392745114e-07 of space, bias 4.0, pg target 0.0007162524471294137 quantized to 16 (current 32)
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:43:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4275: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:02 compute-0 nova_compute[248510]: 2025-12-13 09:43:02.189 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Dec 13 09:43:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587814501' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Dec 13 09:43:02 compute-0 ovs-appctl[429585]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 13 09:43:02 compute-0 ovs-appctl[429591]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 13 09:43:02 compute-0 ovs-appctl[429595]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 13 09:43:02 compute-0 nova_compute[248510]: 2025-12-13 09:43:02.338 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:43:02 compute-0 nova_compute[248510]: 2025-12-13 09:43:02.338 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:43:02 compute-0 nova_compute[248510]: 2025-12-13 09:43:02.338 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:43:02 compute-0 nova_compute[248510]: 2025-12-13 09:43:02.360 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:43:02 compute-0 nova_compute[248510]: 2025-12-13 09:43:02.360 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:43:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Dec 13 09:43:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125869209' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Dec 13 09:43:02 compute-0 ceph-mon[76537]: from='client.23374 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:43:02 compute-0 ceph-mon[76537]: from='client.23376 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:43:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1587814501' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Dec 13 09:43:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3125869209' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Dec 13 09:43:03 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23382 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:43:03 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23384 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:43:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4276: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:04 compute-0 ceph-mon[76537]: pgmap v4275: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 13 09:43:04 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3473974810' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 09:43:04 compute-0 nova_compute[248510]: 2025-12-13 09:43:04.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:43:04 compute-0 nova_compute[248510]: 2025-12-13 09:43:04.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:43:04 compute-0 nova_compute[248510]: 2025-12-13 09:43:04.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:43:04 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:43:04 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Dec 13 09:43:04 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:04.996779) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:43:04 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Dec 13 09:43:04 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618984996815, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1113, "num_deletes": 257, "total_data_size": 1543582, "memory_usage": 1569216, "flush_reason": "Manual Compaction"}
Dec 13 09:43:04 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618985016511, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 1516764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84489, "largest_seqno": 85601, "table_properties": {"data_size": 1511192, "index_size": 2839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13056, "raw_average_key_size": 20, "raw_value_size": 1499609, "raw_average_value_size": 2332, "num_data_blocks": 126, "num_entries": 643, "num_filter_entries": 643, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618892, "oldest_key_time": 1765618892, "file_creation_time": 1765618984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 19813 microseconds, and 4784 cpu microseconds.
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:05.016579) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 1516764 bytes OK
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:05.016609) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:05.024103) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:05.024152) EVENT_LOG_v1 {"time_micros": 1765618985024143, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:05.024182) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 1538126, prev total WAL file size 1538126, number of live WAL files 2.
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:05.024882) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373733' seq:72057594037927935, type:22 .. '6C6F676D0034303236' seq:0, type:0; will stop at (end)
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(1481KB)], [203(9625KB)]
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618985024912, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 11372807, "oldest_snapshot_seqno": -1}
Dec 13 09:43:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Dec 13 09:43:05 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527434453' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Dec 13 09:43:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4277: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 10036 keys, 11263965 bytes, temperature: kUnknown
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618985890769, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 11263965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11202403, "index_size": 35337, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25157, "raw_key_size": 265845, "raw_average_key_size": 26, "raw_value_size": 11028943, "raw_average_value_size": 1098, "num_data_blocks": 1351, "num_entries": 10036, "num_filter_entries": 10036, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:43:05 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:43:05 compute-0 nova_compute[248510]: 2025-12-13 09:43:05.993 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:05.891173) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 11263965 bytes
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:06.388841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 13.1 rd, 13.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.4 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(14.9) write-amplify(7.4) OK, records in: 10562, records dropped: 526 output_compression: NoCompression
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:06.388875) EVENT_LOG_v1 {"time_micros": 1765618986388860, "job": 128, "event": "compaction_finished", "compaction_time_micros": 865971, "compaction_time_cpu_micros": 25423, "output_level": 6, "num_output_files": 1, "total_output_size": 11263965, "num_input_records": 10562, "num_output_records": 10036, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618986389378, "job": 128, "event": "table_file_deletion", "file_number": 205}
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618986391382, "job": 128, "event": "table_file_deletion", "file_number": 203}
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:05.024830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:06.391422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:06.391427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:06.391429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:06.391432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:43:06 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:43:06.391433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:43:06 compute-0 ceph-mon[76537]: from='client.23382 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:43:06 compute-0 ceph-mon[76537]: from='client.23384 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:43:06 compute-0 ceph-mon[76537]: pgmap v4276: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:06 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3473974810' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 09:43:06 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2527434453' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Dec 13 09:43:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Dec 13 09:43:06 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1206303473' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Dec 13 09:43:07 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23392 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:07 compute-0 nova_compute[248510]: 2025-12-13 09:43:07.201 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4278: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:07 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Dec 13 09:43:07 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1885249870' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 13 09:43:07 compute-0 ceph-mon[76537]: pgmap v4277: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:07 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1206303473' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Dec 13 09:43:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Dec 13 09:43:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2570749173' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Dec 13 09:43:08 compute-0 ceph-mon[76537]: from='client.23392 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:08 compute-0 ceph-mon[76537]: pgmap v4278: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1885249870' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 13 09:43:08 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2570749173' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Dec 13 09:43:08 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Dec 13 09:43:08 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2430873337' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Dec 13 09:43:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Dec 13 09:43:09 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644018073' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Dec 13 09:43:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:43:09
Dec 13 09:43:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:43:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:43:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'backups', 'volumes', 'images', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data']
Dec 13 09:43:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:43:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4279: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2430873337' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Dec 13 09:43:09 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3644018073' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Dec 13 09:43:09 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23402 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:09 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:43:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:43:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:43:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:43:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:43:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:43:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:43:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Dec 13 09:43:10 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4169990648' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Dec 13 09:43:10 compute-0 ceph-mon[76537]: pgmap v4279: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:10 compute-0 ceph-mon[76537]: from='client.23402 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:10 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4169990648' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Dec 13 09:43:10 compute-0 nova_compute[248510]: 2025-12-13 09:43:10.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Dec 13 09:43:11 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2872426942' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Dec 13 09:43:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:43:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:43:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:43:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:43:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:43:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:43:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:43:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:43:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:43:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:43:11 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23408 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4280: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:12 compute-0 nova_compute[248510]: 2025-12-13 09:43:12.204 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:12 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Dec 13 09:43:12 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191608959' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Dec 13 09:43:12 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2872426942' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Dec 13 09:43:12 compute-0 ceph-mon[76537]: from='client.23408 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:12 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23412 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:12 compute-0 nova_compute[248510]: 2025-12-13 09:43:12.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:43:13 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23414 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4281: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:14 compute-0 ceph-mon[76537]: pgmap v4280: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:14 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/191608959' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Dec 13 09:43:14 compute-0 ceph-mon[76537]: from='client.23412 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Dec 13 09:43:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/33952215' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Dec 13 09:43:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Dec 13 09:43:14 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2998491231' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Dec 13 09:43:14 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:43:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:43:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4183780150' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:43:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:43:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4183780150' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23420 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:15 compute-0 sshd-session[431117]: Invalid user eos from 80.94.92.165 port 52654
Dec 13 09:43:15 compute-0 ceph-mon[76537]: from='client.23414 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:15 compute-0 ceph-mon[76537]: pgmap v4281: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/33952215' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Dec 13 09:43:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2998491231' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4282: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:15 compute-0 sshd-session[431117]: Connection closed by invalid user eos 80.94.92.165 port 52654 [preauth]
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23426 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5435278116822236e-05 of space, bias 1.0, pg target 0.004630583435046671 quantized to 32 (current 32)
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.826108195861351e-06 of space, bias 1.0, pg target 0.0011478324587584053 quantized to 32 (current 32)
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.968770392745114e-07 of space, bias 4.0, pg target 0.0007162524471294137 quantized to 16 (current 32)
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:15 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:43:15 compute-0 nova_compute[248510]: 2025-12-13 09:43:15.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:16 compute-0 virtqemud[248808]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 13 09:43:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Dec 13 09:43:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2667014326' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 13 09:43:16 compute-0 systemd[1]: Starting Time & Date Service...
Dec 13 09:43:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/4183780150' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:43:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/4183780150' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:43:16 compute-0 ceph-mon[76537]: from='client.23420 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2667014326' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 13 09:43:16 compute-0 systemd[1]: Started Time & Date Service.
Dec 13 09:43:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Dec 13 09:43:16 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861444656' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Dec 13 09:43:17 compute-0 nova_compute[248510]: 2025-12-13 09:43:17.208 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:17 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23432 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:17 compute-0 ceph-mon[76537]: pgmap v4282: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:17 compute-0 ceph-mon[76537]: from='client.23426 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:17 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1861444656' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Dec 13 09:43:17 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23434 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4283: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:18 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 09:43:18 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/659971772' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 09:43:19 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Dec 13 09:43:19 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643811994' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Dec 13 09:43:19 compute-0 ceph-mon[76537]: from='client.23432 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:19 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/659971772' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 09:43:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4284: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:20 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:43:20 compute-0 ceph-mon[76537]: from='client.23434 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:43:20 compute-0 ceph-mon[76537]: pgmap v4283: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:20 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/643811994' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Dec 13 09:43:20 compute-0 podman[431720]: 2025-12-13 09:43:20.977034311 +0000 UTC m=+0.063755604 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec 13 09:43:21 compute-0 nova_compute[248510]: 2025-12-13 09:43:20.999 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:21 compute-0 podman[431719]: 2025-12-13 09:43:21.037983885 +0000 UTC m=+0.124474292 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 09:43:21 compute-0 ceph-mon[76537]: pgmap v4284: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4285: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5435278116822236e-05 of space, bias 1.0, pg target 0.004630583435046671 quantized to 32 (current 32)
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.826108195861351e-06 of space, bias 1.0, pg target 0.0011478324587584053 quantized to 32 (current 32)
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.968770392745114e-07 of space, bias 4.0, pg target 0.0007162524471294137 quantized to 16 (current 32)
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:43:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:43:22 compute-0 nova_compute[248510]: 2025-12-13 09:43:22.211 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4286: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:23 compute-0 podman[431763]: 2025-12-13 09:43:23.996014067 +0000 UTC m=+0.083004513 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 09:43:24 compute-0 ceph-mon[76537]: pgmap v4285: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:43:25 compute-0 ceph-mon[76537]: pgmap v4286: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4287: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:26 compute-0 nova_compute[248510]: 2025-12-13 09:43:26.001 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:26 compute-0 sudo[431783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:43:26 compute-0 sudo[431783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:43:26 compute-0 sudo[431783]: pam_unix(sudo:session): session closed for user root
Dec 13 09:43:26 compute-0 sudo[431808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:43:26 compute-0 sudo[431808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:43:26 compute-0 sudo[431808]: pam_unix(sudo:session): session closed for user root
Dec 13 09:43:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:43:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:43:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:43:26 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:43:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:43:26 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:43:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:43:26 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:43:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:43:27 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:43:27 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:43:27 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:43:27 compute-0 sudo[431864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:43:27 compute-0 sudo[431864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:43:27 compute-0 sudo[431864]: pam_unix(sudo:session): session closed for user root
Dec 13 09:43:27 compute-0 sudo[431889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:43:27 compute-0 sudo[431889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:43:27 compute-0 nova_compute[248510]: 2025-12-13 09:43:27.213 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:27 compute-0 podman[431927]: 2025-12-13 09:43:27.434738437 +0000 UTC m=+0.034918649 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:43:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4288: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:28 compute-0 podman[431927]: 2025-12-13 09:43:28.042883321 +0000 UTC m=+0.643063513 container create 17a24e76e3a0d9269bfa73535a98489d183a5d7aa9396752681260b6f44b88ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 09:43:28 compute-0 ceph-mon[76537]: pgmap v4287: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:28 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:43:28 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:43:28 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:43:28 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:43:28 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:43:28 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:43:28 compute-0 systemd[1]: Started libpod-conmon-17a24e76e3a0d9269bfa73535a98489d183a5d7aa9396752681260b6f44b88ea.scope.
Dec 13 09:43:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:43:28 compute-0 podman[431927]: 2025-12-13 09:43:28.274581036 +0000 UTC m=+0.874761268 container init 17a24e76e3a0d9269bfa73535a98489d183a5d7aa9396752681260b6f44b88ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 09:43:28 compute-0 podman[431927]: 2025-12-13 09:43:28.287349293 +0000 UTC m=+0.887529505 container start 17a24e76e3a0d9269bfa73535a98489d183a5d7aa9396752681260b6f44b88ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mirzakhani, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:43:28 compute-0 confident_mirzakhani[431943]: 167 167
Dec 13 09:43:28 compute-0 systemd[1]: libpod-17a24e76e3a0d9269bfa73535a98489d183a5d7aa9396752681260b6f44b88ea.scope: Deactivated successfully.
Dec 13 09:43:28 compute-0 podman[431927]: 2025-12-13 09:43:28.486719355 +0000 UTC m=+1.086899547 container attach 17a24e76e3a0d9269bfa73535a98489d183a5d7aa9396752681260b6f44b88ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mirzakhani, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:43:28 compute-0 podman[431927]: 2025-12-13 09:43:28.48731406 +0000 UTC m=+1.087494252 container died 17a24e76e3a0d9269bfa73535a98489d183a5d7aa9396752681260b6f44b88ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:43:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6d57963697513f95d61d5bce3c41379a47f744c4e9b5a3981c8ed62eb7c259c-merged.mount: Deactivated successfully.
Dec 13 09:43:29 compute-0 podman[431927]: 2025-12-13 09:43:29.219831934 +0000 UTC m=+1.820012126 container remove 17a24e76e3a0d9269bfa73535a98489d183a5d7aa9396752681260b6f44b88ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 09:43:29 compute-0 systemd[1]: libpod-conmon-17a24e76e3a0d9269bfa73535a98489d183a5d7aa9396752681260b6f44b88ea.scope: Deactivated successfully.
Dec 13 09:43:29 compute-0 ceph-mon[76537]: pgmap v4288: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:29 compute-0 podman[431970]: 2025-12-13 09:43:29.437822258 +0000 UTC m=+0.078031149 container create 943da87a1c4bcc4bae02a862c0403503d0c5dffb15d17ef71d88b97863e51c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hodgkin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 09:43:29 compute-0 podman[431970]: 2025-12-13 09:43:29.388201076 +0000 UTC m=+0.028409967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:43:29 compute-0 systemd[1]: Started libpod-conmon-943da87a1c4bcc4bae02a862c0403503d0c5dffb15d17ef71d88b97863e51c11.scope.
Dec 13 09:43:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2592685ffed1e41bf60a475fcbb9284151afb4e352f5b2266d13495686027e90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2592685ffed1e41bf60a475fcbb9284151afb4e352f5b2266d13495686027e90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2592685ffed1e41bf60a475fcbb9284151afb4e352f5b2266d13495686027e90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2592685ffed1e41bf60a475fcbb9284151afb4e352f5b2266d13495686027e90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2592685ffed1e41bf60a475fcbb9284151afb4e352f5b2266d13495686027e90/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:29 compute-0 podman[431970]: 2025-12-13 09:43:29.556457335 +0000 UTC m=+0.196666246 container init 943da87a1c4bcc4bae02a862c0403503d0c5dffb15d17ef71d88b97863e51c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hodgkin, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 09:43:29 compute-0 podman[431970]: 2025-12-13 09:43:29.56634476 +0000 UTC m=+0.206553651 container start 943da87a1c4bcc4bae02a862c0403503d0c5dffb15d17ef71d88b97863e51c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:43:29 compute-0 podman[431970]: 2025-12-13 09:43:29.624122356 +0000 UTC m=+0.264331257 container attach 943da87a1c4bcc4bae02a862c0403503d0c5dffb15d17ef71d88b97863e51c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 09:43:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4289: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:30 compute-0 affectionate_hodgkin[431987]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:43:30 compute-0 affectionate_hodgkin[431987]: --> All data devices are unavailable
Dec 13 09:43:30 compute-0 systemd[1]: libpod-943da87a1c4bcc4bae02a862c0403503d0c5dffb15d17ef71d88b97863e51c11.scope: Deactivated successfully.
Dec 13 09:43:30 compute-0 conmon[431987]: conmon 943da87a1c4bcc4bae02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-943da87a1c4bcc4bae02a862c0403503d0c5dffb15d17ef71d88b97863e51c11.scope/container/memory.events
Dec 13 09:43:30 compute-0 podman[431970]: 2025-12-13 09:43:30.068997345 +0000 UTC m=+0.709206256 container died 943da87a1c4bcc4bae02a862c0403503d0c5dffb15d17ef71d88b97863e51c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 09:43:30 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:43:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2592685ffed1e41bf60a475fcbb9284151afb4e352f5b2266d13495686027e90-merged.mount: Deactivated successfully.
Dec 13 09:43:30 compute-0 podman[431970]: 2025-12-13 09:43:30.5297841 +0000 UTC m=+1.169993021 container remove 943da87a1c4bcc4bae02a862c0403503d0c5dffb15d17ef71d88b97863e51c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hodgkin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:43:30 compute-0 systemd[1]: libpod-conmon-943da87a1c4bcc4bae02a862c0403503d0c5dffb15d17ef71d88b97863e51c11.scope: Deactivated successfully.
Dec 13 09:43:30 compute-0 sudo[431889]: pam_unix(sudo:session): session closed for user root
Dec 13 09:43:30 compute-0 sudo[432021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:43:30 compute-0 sudo[432021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:43:30 compute-0 sudo[432021]: pam_unix(sudo:session): session closed for user root
Dec 13 09:43:30 compute-0 sudo[432046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:43:30 compute-0 sudo[432046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:43:31 compute-0 nova_compute[248510]: 2025-12-13 09:43:31.002 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:31 compute-0 podman[432083]: 2025-12-13 09:43:31.009703811 +0000 UTC m=+0.028838288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:43:31 compute-0 podman[432083]: 2025-12-13 09:43:31.456435766 +0000 UTC m=+0.475570263 container create 690cbc7a70618f749bbeab7f2082f1a49b002e75f322911809cd8dd005f6d7f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 09:43:31 compute-0 ceph-mon[76537]: pgmap v4289: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:31 compute-0 systemd[1]: Started libpod-conmon-690cbc7a70618f749bbeab7f2082f1a49b002e75f322911809cd8dd005f6d7f3.scope.
Dec 13 09:43:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:43:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4290: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:32 compute-0 podman[432083]: 2025-12-13 09:43:32.00940518 +0000 UTC m=+1.028539737 container init 690cbc7a70618f749bbeab7f2082f1a49b002e75f322911809cd8dd005f6d7f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 09:43:32 compute-0 podman[432083]: 2025-12-13 09:43:32.018476116 +0000 UTC m=+1.037610573 container start 690cbc7a70618f749bbeab7f2082f1a49b002e75f322911809cd8dd005f6d7f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:43:32 compute-0 brave_mahavira[432099]: 167 167
Dec 13 09:43:32 compute-0 systemd[1]: libpod-690cbc7a70618f749bbeab7f2082f1a49b002e75f322911809cd8dd005f6d7f3.scope: Deactivated successfully.
Dec 13 09:43:32 compute-0 podman[432083]: 2025-12-13 09:43:32.069955004 +0000 UTC m=+1.089089471 container attach 690cbc7a70618f749bbeab7f2082f1a49b002e75f322911809cd8dd005f6d7f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:43:32 compute-0 podman[432083]: 2025-12-13 09:43:32.070676172 +0000 UTC m=+1.089810639 container died 690cbc7a70618f749bbeab7f2082f1a49b002e75f322911809cd8dd005f6d7f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:43:32 compute-0 nova_compute[248510]: 2025-12-13 09:43:32.217 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-01e3db422e7bbf0575a4b01e92b5cb59aec143b7ffbe531a25aab34e5ad7ac20-merged.mount: Deactivated successfully.
Dec 13 09:43:32 compute-0 podman[432083]: 2025-12-13 09:43:32.264551368 +0000 UTC m=+1.283685825 container remove 690cbc7a70618f749bbeab7f2082f1a49b002e75f322911809cd8dd005f6d7f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mahavira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 09:43:32 compute-0 systemd[1]: libpod-conmon-690cbc7a70618f749bbeab7f2082f1a49b002e75f322911809cd8dd005f6d7f3.scope: Deactivated successfully.
Dec 13 09:43:32 compute-0 podman[432124]: 2025-12-13 09:43:32.428002997 +0000 UTC m=+0.024248743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:43:33 compute-0 podman[432124]: 2025-12-13 09:43:33.164907931 +0000 UTC m=+0.761153647 container create 78ee937436db2da62e061ec32fc7fca0d8be7e9d211089ad90fd531a009a7985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:43:33 compute-0 systemd[1]: Started libpod-conmon-78ee937436db2da62e061ec32fc7fca0d8be7e9d211089ad90fd531a009a7985.scope.
Dec 13 09:43:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c030446b4531efb4b774d2754e917c366bac67c9f5f92992c7a48503d979650e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c030446b4531efb4b774d2754e917c366bac67c9f5f92992c7a48503d979650e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c030446b4531efb4b774d2754e917c366bac67c9f5f92992c7a48503d979650e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c030446b4531efb4b774d2754e917c366bac67c9f5f92992c7a48503d979650e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:33 compute-0 ceph-mon[76537]: pgmap v4290: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:33 compute-0 podman[432124]: 2025-12-13 09:43:33.332703958 +0000 UTC m=+0.928949694 container init 78ee937436db2da62e061ec32fc7fca0d8be7e9d211089ad90fd531a009a7985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_panini, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 09:43:33 compute-0 podman[432124]: 2025-12-13 09:43:33.342774309 +0000 UTC m=+0.939020025 container start 78ee937436db2da62e061ec32fc7fca0d8be7e9d211089ad90fd531a009a7985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:43:33 compute-0 podman[432124]: 2025-12-13 09:43:33.449266654 +0000 UTC m=+1.045512420 container attach 78ee937436db2da62e061ec32fc7fca0d8be7e9d211089ad90fd531a009a7985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:43:33 compute-0 gracious_panini[432141]: {
Dec 13 09:43:33 compute-0 gracious_panini[432141]:     "0": [
Dec 13 09:43:33 compute-0 gracious_panini[432141]:         {
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "devices": [
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "/dev/loop3"
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             ],
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_name": "ceph_lv0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_size": "21470642176",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "name": "ceph_lv0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "tags": {
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.cluster_name": "ceph",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.crush_device_class": "",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.encrypted": "0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.objectstore": "bluestore",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.osd_id": "0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.type": "block",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.vdo": "0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.with_tpm": "0"
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             },
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "type": "block",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "vg_name": "ceph_vg0"
Dec 13 09:43:33 compute-0 gracious_panini[432141]:         }
Dec 13 09:43:33 compute-0 gracious_panini[432141]:     ],
Dec 13 09:43:33 compute-0 gracious_panini[432141]:     "1": [
Dec 13 09:43:33 compute-0 gracious_panini[432141]:         {
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "devices": [
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "/dev/loop4"
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             ],
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_name": "ceph_lv1",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_size": "21470642176",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "name": "ceph_lv1",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "tags": {
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.cluster_name": "ceph",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.crush_device_class": "",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.encrypted": "0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.objectstore": "bluestore",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.osd_id": "1",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.type": "block",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.vdo": "0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.with_tpm": "0"
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             },
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "type": "block",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "vg_name": "ceph_vg1"
Dec 13 09:43:33 compute-0 gracious_panini[432141]:         }
Dec 13 09:43:33 compute-0 gracious_panini[432141]:     ],
Dec 13 09:43:33 compute-0 gracious_panini[432141]:     "2": [
Dec 13 09:43:33 compute-0 gracious_panini[432141]:         {
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "devices": [
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "/dev/loop5"
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             ],
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_name": "ceph_lv2",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_size": "21470642176",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "name": "ceph_lv2",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "tags": {
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.cluster_name": "ceph",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.crush_device_class": "",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.encrypted": "0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.objectstore": "bluestore",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.osd_id": "2",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.type": "block",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.vdo": "0",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:                 "ceph.with_tpm": "0"
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             },
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "type": "block",
Dec 13 09:43:33 compute-0 gracious_panini[432141]:             "vg_name": "ceph_vg2"
Dec 13 09:43:33 compute-0 gracious_panini[432141]:         }
Dec 13 09:43:33 compute-0 gracious_panini[432141]:     ]
Dec 13 09:43:33 compute-0 gracious_panini[432141]: }
Dec 13 09:43:33 compute-0 systemd[1]: libpod-78ee937436db2da62e061ec32fc7fca0d8be7e9d211089ad90fd531a009a7985.scope: Deactivated successfully.
Dec 13 09:43:33 compute-0 podman[432124]: 2025-12-13 09:43:33.693178042 +0000 UTC m=+1.289423768 container died 78ee937436db2da62e061ec32fc7fca0d8be7e9d211089ad90fd531a009a7985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 09:43:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4291: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c030446b4531efb4b774d2754e917c366bac67c9f5f92992c7a48503d979650e-merged.mount: Deactivated successfully.
Dec 13 09:43:33 compute-0 podman[432124]: 2025-12-13 09:43:33.9302516 +0000 UTC m=+1.526497356 container remove 78ee937436db2da62e061ec32fc7fca0d8be7e9d211089ad90fd531a009a7985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 09:43:33 compute-0 systemd[1]: libpod-conmon-78ee937436db2da62e061ec32fc7fca0d8be7e9d211089ad90fd531a009a7985.scope: Deactivated successfully.
Dec 13 09:43:33 compute-0 sudo[432046]: pam_unix(sudo:session): session closed for user root
Dec 13 09:43:34 compute-0 sudo[432165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:43:34 compute-0 sudo[432165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:43:34 compute-0 sudo[432165]: pam_unix(sudo:session): session closed for user root
Dec 13 09:43:34 compute-0 sudo[432190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:43:34 compute-0 sudo[432190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:43:34 compute-0 podman[432230]: 2025-12-13 09:43:34.442174065 +0000 UTC m=+0.049844489 container create c3e8a74b1023f55635f62cfc42ff31da3d6d8154a41912d54ade9281d87aeb5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_montalcini, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:43:34 compute-0 systemd[1]: Started libpod-conmon-c3e8a74b1023f55635f62cfc42ff31da3d6d8154a41912d54ade9281d87aeb5a.scope.
Dec 13 09:43:34 compute-0 podman[432230]: 2025-12-13 09:43:34.419703997 +0000 UTC m=+0.027374511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:43:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:43:34 compute-0 podman[432230]: 2025-12-13 09:43:34.54422849 +0000 UTC m=+0.151899004 container init c3e8a74b1023f55635f62cfc42ff31da3d6d8154a41912d54ade9281d87aeb5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:43:34 compute-0 podman[432230]: 2025-12-13 09:43:34.557743456 +0000 UTC m=+0.165413930 container start c3e8a74b1023f55635f62cfc42ff31da3d6d8154a41912d54ade9281d87aeb5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_montalcini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:43:34 compute-0 podman[432230]: 2025-12-13 09:43:34.563253583 +0000 UTC m=+0.170924037 container attach c3e8a74b1023f55635f62cfc42ff31da3d6d8154a41912d54ade9281d87aeb5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:43:34 compute-0 condescending_montalcini[432247]: 167 167
Dec 13 09:43:34 compute-0 systemd[1]: libpod-c3e8a74b1023f55635f62cfc42ff31da3d6d8154a41912d54ade9281d87aeb5a.scope: Deactivated successfully.
Dec 13 09:43:34 compute-0 podman[432230]: 2025-12-13 09:43:34.565529939 +0000 UTC m=+0.173200373 container died c3e8a74b1023f55635f62cfc42ff31da3d6d8154a41912d54ade9281d87aeb5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_montalcini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 09:43:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fbefa64e9a5a44f340349a57e2f10ea84000f0654d2a243737105adb9042c3c-merged.mount: Deactivated successfully.
Dec 13 09:43:34 compute-0 podman[432230]: 2025-12-13 09:43:34.619623353 +0000 UTC m=+0.227293827 container remove c3e8a74b1023f55635f62cfc42ff31da3d6d8154a41912d54ade9281d87aeb5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_montalcini, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 09:43:34 compute-0 systemd[1]: libpod-conmon-c3e8a74b1023f55635f62cfc42ff31da3d6d8154a41912d54ade9281d87aeb5a.scope: Deactivated successfully.
Dec 13 09:43:34 compute-0 podman[432270]: 2025-12-13 09:43:34.818822821 +0000 UTC m=+0.050932187 container create 5d41c3ecd9ede98c8fd91689a61bf6849a3f0c00f14d511267865311e00df877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 09:43:34 compute-0 systemd[1]: Started libpod-conmon-5d41c3ecd9ede98c8fd91689a61bf6849a3f0c00f14d511267865311e00df877.scope.
Dec 13 09:43:34 compute-0 podman[432270]: 2025-12-13 09:43:34.797925591 +0000 UTC m=+0.030034977 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:43:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cabd3e4213143a45f4042c95e5aeface7c8f4d17239f02f6d6ba26df8fa6c83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cabd3e4213143a45f4042c95e5aeface7c8f4d17239f02f6d6ba26df8fa6c83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cabd3e4213143a45f4042c95e5aeface7c8f4d17239f02f6d6ba26df8fa6c83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cabd3e4213143a45f4042c95e5aeface7c8f4d17239f02f6d6ba26df8fa6c83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:43:34 compute-0 podman[432270]: 2025-12-13 09:43:34.921661974 +0000 UTC m=+0.153771360 container init 5d41c3ecd9ede98c8fd91689a61bf6849a3f0c00f14d511267865311e00df877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lalande, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:43:34 compute-0 podman[432270]: 2025-12-13 09:43:34.930848352 +0000 UTC m=+0.162957738 container start 5d41c3ecd9ede98c8fd91689a61bf6849a3f0c00f14d511267865311e00df877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 09:43:34 compute-0 podman[432270]: 2025-12-13 09:43:34.935215991 +0000 UTC m=+0.167325367 container attach 5d41c3ecd9ede98c8fd91689a61bf6849a3f0c00f14d511267865311e00df877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lalande, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:43:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:43:35 compute-0 ceph-mon[76537]: pgmap v4291: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:35 compute-0 lvm[432367]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:43:35 compute-0 lvm[432367]: VG ceph_vg1 finished
Dec 13 09:43:35 compute-0 lvm[432368]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:43:35 compute-0 lvm[432368]: VG ceph_vg2 finished
Dec 13 09:43:35 compute-0 lvm[432365]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:43:35 compute-0 lvm[432365]: VG ceph_vg0 finished
Dec 13 09:43:35 compute-0 admiring_lalande[432287]: {}
Dec 13 09:43:35 compute-0 systemd[1]: libpod-5d41c3ecd9ede98c8fd91689a61bf6849a3f0c00f14d511267865311e00df877.scope: Deactivated successfully.
Dec 13 09:43:35 compute-0 podman[432270]: 2025-12-13 09:43:35.789100819 +0000 UTC m=+1.021210195 container died 5d41c3ecd9ede98c8fd91689a61bf6849a3f0c00f14d511267865311e00df877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lalande, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:43:35 compute-0 systemd[1]: libpod-5d41c3ecd9ede98c8fd91689a61bf6849a3f0c00f14d511267865311e00df877.scope: Consumed 1.438s CPU time.
Dec 13 09:43:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4292: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:36 compute-0 nova_compute[248510]: 2025-12-13 09:43:36.006 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:37 compute-0 nova_compute[248510]: 2025-12-13 09:43:37.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cabd3e4213143a45f4042c95e5aeface7c8f4d17239f02f6d6ba26df8fa6c83-merged.mount: Deactivated successfully.
Dec 13 09:43:37 compute-0 ceph-mon[76537]: pgmap v4292: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:37 compute-0 podman[432270]: 2025-12-13 09:43:37.268037123 +0000 UTC m=+2.500146499 container remove 5d41c3ecd9ede98c8fd91689a61bf6849a3f0c00f14d511267865311e00df877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lalande, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 09:43:37 compute-0 sudo[432190]: pam_unix(sudo:session): session closed for user root
Dec 13 09:43:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:43:37 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:43:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:43:37 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:43:37 compute-0 systemd[1]: libpod-conmon-5d41c3ecd9ede98c8fd91689a61bf6849a3f0c00f14d511267865311e00df877.scope: Deactivated successfully.
Dec 13 09:43:37 compute-0 sudo[432382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:43:37 compute-0 sudo[432382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:43:37 compute-0 sudo[432382]: pam_unix(sudo:session): session closed for user root
Dec 13 09:43:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4293: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:43:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:43:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4294: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:40 compute-0 ceph-mon[76537]: pgmap v4293: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:43:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:43:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:43:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:43:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:43:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:43:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:43:40 compute-0 nova_compute[248510]: 2025-12-13 09:43:40.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:43:41 compute-0 nova_compute[248510]: 2025-12-13 09:43:41.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:41 compute-0 ceph-mon[76537]: pgmap v4294: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4295: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:42 compute-0 nova_compute[248510]: 2025-12-13 09:43:42.225 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:43 compute-0 ceph-mon[76537]: pgmap v4295: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4296: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:45 compute-0 ceph-mon[76537]: pgmap v4296: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:43:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4297: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:46 compute-0 nova_compute[248510]: 2025-12-13 09:43:46.062 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:46 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 09:43:46 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 09:43:47 compute-0 nova_compute[248510]: 2025-12-13 09:43:47.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:47 compute-0 ceph-mon[76537]: pgmap v4297: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4298: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:48 compute-0 nova_compute[248510]: 2025-12-13 09:43:48.927 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:43:49 compute-0 ceph-mon[76537]: pgmap v4298: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4299: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:43:51 compute-0 nova_compute[248510]: 2025-12-13 09:43:51.062 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:51 compute-0 nova_compute[248510]: 2025-12-13 09:43:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:43:51 compute-0 ceph-mon[76537]: pgmap v4299: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4300: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:51 compute-0 podman[432412]: 2025-12-13 09:43:51.974645082 +0000 UTC m=+0.060720969 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 09:43:52 compute-0 podman[432411]: 2025-12-13 09:43:52.007411546 +0000 UTC m=+0.093435352 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 13 09:43:52 compute-0 nova_compute[248510]: 2025-12-13 09:43:52.231 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:52 compute-0 ceph-mon[76537]: pgmap v4300: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:53 compute-0 nova_compute[248510]: 2025-12-13 09:43:53.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:43:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4301: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:54 compute-0 podman[432453]: 2025-12-13 09:43:54.414231645 +0000 UTC m=+0.085527015 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:43:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:43:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:43:55.478 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:43:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:43:55.478 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:43:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:43:55.478 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:43:55 compute-0 ceph-mon[76537]: pgmap v4301: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4302: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:56 compute-0 nova_compute[248510]: 2025-12-13 09:43:56.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:57 compute-0 nova_compute[248510]: 2025-12-13 09:43:57.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:43:57 compute-0 ceph-mon[76537]: pgmap v4302: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:57 compute-0 nova_compute[248510]: 2025-12-13 09:43:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:43:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4303: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:43:57 compute-0 nova_compute[248510]: 2025-12-13 09:43:57.958 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:43:57 compute-0 nova_compute[248510]: 2025-12-13 09:43:57.959 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:43:57 compute-0 nova_compute[248510]: 2025-12-13 09:43:57.959 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:43:57 compute-0 nova_compute[248510]: 2025-12-13 09:43:57.960 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:43:57 compute-0 nova_compute[248510]: 2025-12-13 09:43:57.960 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:43:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:43:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1456245238' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:43:58 compute-0 nova_compute[248510]: 2025-12-13 09:43:58.578 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:43:58 compute-0 nova_compute[248510]: 2025-12-13 09:43:58.782 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:43:58 compute-0 nova_compute[248510]: 2025-12-13 09:43:58.784 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3254MB free_disk=59.987355314195156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:43:58 compute-0 nova_compute[248510]: 2025-12-13 09:43:58.784 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:43:58 compute-0 nova_compute[248510]: 2025-12-13 09:43:58.785 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:43:58 compute-0 nova_compute[248510]: 2025-12-13 09:43:58.967 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:43:58 compute-0 nova_compute[248510]: 2025-12-13 09:43:58.968 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:43:58 compute-0 nova_compute[248510]: 2025-12-13 09:43:58.989 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:43:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:43:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2358626590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:43:59 compute-0 nova_compute[248510]: 2025-12-13 09:43:59.758 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.769s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:43:59 compute-0 nova_compute[248510]: 2025-12-13 09:43:59.766 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:43:59 compute-0 nova_compute[248510]: 2025-12-13 09:43:59.790 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:43:59 compute-0 nova_compute[248510]: 2025-12-13 09:43:59.793 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:43:59 compute-0 nova_compute[248510]: 2025-12-13 09:43:59.794 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:43:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1456245238' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:43:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4304: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 13 09:44:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:44:00 compute-0 ceph-mon[76537]: pgmap v4303: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2358626590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:44:00 compute-0 ceph-mon[76537]: pgmap v4304: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 13 09:44:01 compute-0 nova_compute[248510]: 2025-12-13 09:44:01.070 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4305: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 13 09:44:02 compute-0 nova_compute[248510]: 2025-12-13 09:44:02.237 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:03 compute-0 ceph-mon[76537]: pgmap v4305: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 13 09:44:03 compute-0 nova_compute[248510]: 2025-12-13 09:44:03.796 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:44:03 compute-0 nova_compute[248510]: 2025-12-13 09:44:03.796 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:44:03 compute-0 nova_compute[248510]: 2025-12-13 09:44:03.796 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:44:03 compute-0 nova_compute[248510]: 2025-12-13 09:44:03.825 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:44:03 compute-0 nova_compute[248510]: 2025-12-13 09:44:03.826 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:44:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4306: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 09:44:04 compute-0 nova_compute[248510]: 2025-12-13 09:44:04.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:44:05 compute-0 sudo[424430]: pam_unix(sudo:session): session closed for user root
Dec 13 09:44:05 compute-0 sshd-session[424429]: Received disconnect from 192.168.122.10 port 58402:11: disconnected by user
Dec 13 09:44:05 compute-0 sshd-session[424429]: Disconnected from user zuul 192.168.122.10 port 58402
Dec 13 09:44:05 compute-0 sshd-session[424426]: pam_unix(sshd:session): session closed for user zuul
Dec 13 09:44:05 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Dec 13 09:44:05 compute-0 systemd[1]: session-59.scope: Consumed 3min 23.716s CPU time, 1.2G memory peak, read 589.3M from disk, written 402.3M to disk.
Dec 13 09:44:05 compute-0 systemd-logind[787]: Session 59 logged out. Waiting for processes to exit.
Dec 13 09:44:05 compute-0 systemd-logind[787]: Removed session 59.
Dec 13 09:44:05 compute-0 sshd-session[432518]: Accepted publickey for zuul from 192.168.122.10 port 56692 ssh2: ECDSA SHA256:LSt7YF7gQCPgR0tIb34gZL5shqra8qE8VX5d32veH0w
Dec 13 09:44:05 compute-0 systemd-logind[787]: New session 60 of user zuul.
Dec 13 09:44:05 compute-0 systemd[1]: Started Session 60 of User zuul.
Dec 13 09:44:05 compute-0 sshd-session[432518]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 09:44:05 compute-0 sudo[432522]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-12-13-dpamlvl.tar.xz
Dec 13 09:44:05 compute-0 sudo[432522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:44:05 compute-0 sudo[432522]: pam_unix(sudo:session): session closed for user root
Dec 13 09:44:05 compute-0 sshd-session[432521]: Received disconnect from 192.168.122.10 port 56692:11: disconnected by user
Dec 13 09:44:05 compute-0 sshd-session[432521]: Disconnected from user zuul 192.168.122.10 port 56692
Dec 13 09:44:05 compute-0 sshd-session[432518]: pam_unix(sshd:session): session closed for user zuul
Dec 13 09:44:05 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Dec 13 09:44:05 compute-0 systemd-logind[787]: Session 60 logged out. Waiting for processes to exit.
Dec 13 09:44:05 compute-0 systemd-logind[787]: Removed session 60.
Dec 13 09:44:05 compute-0 sshd-session[432547]: Accepted publickey for zuul from 192.168.122.10 port 56694 ssh2: ECDSA SHA256:LSt7YF7gQCPgR0tIb34gZL5shqra8qE8VX5d32veH0w
Dec 13 09:44:05 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:44:05 compute-0 systemd-logind[787]: New session 61 of user zuul.
Dec 13 09:44:05 compute-0 nova_compute[248510]: 2025-12-13 09:44:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:44:05 compute-0 nova_compute[248510]: 2025-12-13 09:44:05.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:44:05 compute-0 systemd[1]: Started Session 61 of User zuul.
Dec 13 09:44:05 compute-0 sshd-session[432547]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 09:44:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4307: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Dec 13 09:44:05 compute-0 sudo[432551]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Dec 13 09:44:05 compute-0 sudo[432551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:44:05 compute-0 sudo[432551]: pam_unix(sudo:session): session closed for user root
Dec 13 09:44:05 compute-0 sshd-session[432550]: Received disconnect from 192.168.122.10 port 56694:11: disconnected by user
Dec 13 09:44:05 compute-0 sshd-session[432550]: Disconnected from user zuul 192.168.122.10 port 56694
Dec 13 09:44:05 compute-0 sshd-session[432547]: pam_unix(sshd:session): session closed for user zuul
Dec 13 09:44:05 compute-0 systemd[1]: session-61.scope: Deactivated successfully.
Dec 13 09:44:05 compute-0 systemd-logind[787]: Session 61 logged out. Waiting for processes to exit.
Dec 13 09:44:05 compute-0 systemd-logind[787]: Removed session 61.
Dec 13 09:44:06 compute-0 nova_compute[248510]: 2025-12-13 09:44:06.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:06 compute-0 ceph-mon[76537]: pgmap v4306: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 09:44:07 compute-0 nova_compute[248510]: 2025-12-13 09:44:07.308 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:07 compute-0 ceph-mon[76537]: pgmap v4307: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Dec 13 09:44:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4308: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Dec 13 09:44:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:44:09
Dec 13 09:44:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:44:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:44:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'backups', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data']
Dec 13 09:44:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:44:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4309: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 13 09:44:09 compute-0 ceph-mon[76537]: pgmap v4308: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Dec 13 09:44:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:44:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:44:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:44:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:44:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:44:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:44:10 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:44:10 compute-0 ceph-mon[76537]: pgmap v4309: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 13 09:44:11 compute-0 nova_compute[248510]: 2025-12-13 09:44:11.114 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:44:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:44:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:44:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:44:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:44:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:44:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:44:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:44:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:44:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:44:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4310: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 13 09:44:12 compute-0 nova_compute[248510]: 2025-12-13 09:44:12.311 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:13 compute-0 ceph-mon[76537]: pgmap v4310: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 13 09:44:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4311: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 09:44:14 compute-0 nova_compute[248510]: 2025-12-13 09:44:14.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:44:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:44:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3188312375' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:44:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:44:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3188312375' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:44:15 compute-0 ceph-mon[76537]: pgmap v4311: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 09:44:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3188312375' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:44:15 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/3188312375' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:44:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:44:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4312: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Dec 13 09:44:16 compute-0 nova_compute[248510]: 2025-12-13 09:44:16.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:17 compute-0 nova_compute[248510]: 2025-12-13 09:44:17.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:17 compute-0 ceph-mon[76537]: pgmap v4312: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Dec 13 09:44:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4313: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Dec 13 09:44:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4314: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Dec 13 09:44:20 compute-0 ceph-mon[76537]: pgmap v4313: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Dec 13 09:44:21 compute-0 nova_compute[248510]: 2025-12-13 09:44:21.160 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4315: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5435278116822236e-05 of space, bias 1.0, pg target 0.004630583435046671 quantized to 32 (current 32)
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.826108195861351e-06 of space, bias 1.0, pg target 0.0011478324587584053 quantized to 32 (current 32)
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.968770392745114e-07 of space, bias 4.0, pg target 0.0007162524471294137 quantized to 16 (current 32)
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:44:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:44:22 compute-0 nova_compute[248510]: 2025-12-13 09:44:22.316 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:22 compute-0 ceph-mon[76537]: pgmap v4314: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Dec 13 09:44:22 compute-0 podman[432577]: 2025-12-13 09:44:22.987775603 +0000 UTC m=+0.066052191 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:44:23 compute-0 podman[432576]: 2025-12-13 09:44:23.040882252 +0000 UTC m=+0.126488282 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:44:23 compute-0 ceph-mon[76537]: pgmap v4315: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 09:44:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4316: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 09:44:24 compute-0 podman[432621]: 2025-12-13 09:44:24.985007929 +0000 UTC m=+0.075061674 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible)
Dec 13 09:44:25 compute-0 ceph-mon[76537]: pgmap v4316: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 09:44:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4317: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:26 compute-0 nova_compute[248510]: 2025-12-13 09:44:26.162 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:44:27 compute-0 nova_compute[248510]: 2025-12-13 09:44:27.320 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:27 compute-0 ceph-mon[76537]: pgmap v4317: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4318: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:29 compute-0 ceph-mon[76537]: pgmap v4318: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4319: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:31 compute-0 nova_compute[248510]: 2025-12-13 09:44:31.164 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:44:31 compute-0 ceph-mon[76537]: pgmap v4319: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4320: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:32 compute-0 nova_compute[248510]: 2025-12-13 09:44:32.323 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:33 compute-0 ceph-mon[76537]: pgmap v4320: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4321: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:35 compute-0 ceph-mon[76537]: pgmap v4321: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4322: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:36 compute-0 nova_compute[248510]: 2025-12-13 09:44:36.167 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:44:37 compute-0 nova_compute[248510]: 2025-12-13 09:44:37.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:37 compute-0 sudo[432642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:44:37 compute-0 sudo[432642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:44:37 compute-0 sudo[432642]: pam_unix(sudo:session): session closed for user root
Dec 13 09:44:37 compute-0 sudo[432667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:44:37 compute-0 sudo[432667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:44:37 compute-0 ceph-mon[76537]: pgmap v4322: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4323: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:38 compute-0 sudo[432667]: pam_unix(sudo:session): session closed for user root
Dec 13 09:44:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:44:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:44:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:44:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:44:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:44:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:44:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:44:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:44:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:44:38 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:44:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:44:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:44:38 compute-0 sudo[432724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:44:38 compute-0 sudo[432724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:44:38 compute-0 sudo[432724]: pam_unix(sudo:session): session closed for user root
Dec 13 09:44:38 compute-0 sudo[432749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:44:38 compute-0 sudo[432749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:44:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:44:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:44:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:44:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:44:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:44:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:44:38 compute-0 podman[432786]: 2025-12-13 09:44:38.675459401 +0000 UTC m=+0.049944832 container create 67aa76c53a2e2bf52cb4b9ebda7fc87d95a19bb74dcbfbbe4a60a5f031b9b09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 09:44:38 compute-0 systemd[1]: Started libpod-conmon-67aa76c53a2e2bf52cb4b9ebda7fc87d95a19bb74dcbfbbe4a60a5f031b9b09c.scope.
Dec 13 09:44:38 compute-0 podman[432786]: 2025-12-13 09:44:38.650923241 +0000 UTC m=+0.025408702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:44:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:44:38 compute-0 podman[432786]: 2025-12-13 09:44:38.7905893 +0000 UTC m=+0.165074761 container init 67aa76c53a2e2bf52cb4b9ebda7fc87d95a19bb74dcbfbbe4a60a5f031b9b09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 09:44:38 compute-0 podman[432786]: 2025-12-13 09:44:38.799212175 +0000 UTC m=+0.173697626 container start 67aa76c53a2e2bf52cb4b9ebda7fc87d95a19bb74dcbfbbe4a60a5f031b9b09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 09:44:38 compute-0 podman[432786]: 2025-12-13 09:44:38.804062585 +0000 UTC m=+0.178548026 container attach 67aa76c53a2e2bf52cb4b9ebda7fc87d95a19bb74dcbfbbe4a60a5f031b9b09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Dec 13 09:44:38 compute-0 sad_bell[432802]: 167 167
Dec 13 09:44:38 compute-0 systemd[1]: libpod-67aa76c53a2e2bf52cb4b9ebda7fc87d95a19bb74dcbfbbe4a60a5f031b9b09c.scope: Deactivated successfully.
Dec 13 09:44:38 compute-0 conmon[432802]: conmon 67aa76c53a2e2bf52cb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67aa76c53a2e2bf52cb4b9ebda7fc87d95a19bb74dcbfbbe4a60a5f031b9b09c.scope/container/memory.events
Dec 13 09:44:38 compute-0 podman[432786]: 2025-12-13 09:44:38.808056254 +0000 UTC m=+0.182541695 container died 67aa76c53a2e2bf52cb4b9ebda7fc87d95a19bb74dcbfbbe4a60a5f031b9b09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 09:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-434ea5bf4d8ceaba4b1d0934116cf103cba8007554df1b0efada783b4d193b64-merged.mount: Deactivated successfully.
Dec 13 09:44:38 compute-0 podman[432786]: 2025-12-13 09:44:38.860506567 +0000 UTC m=+0.234992028 container remove 67aa76c53a2e2bf52cb4b9ebda7fc87d95a19bb74dcbfbbe4a60a5f031b9b09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bell, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:44:38 compute-0 systemd[1]: libpod-conmon-67aa76c53a2e2bf52cb4b9ebda7fc87d95a19bb74dcbfbbe4a60a5f031b9b09c.scope: Deactivated successfully.
Dec 13 09:44:39 compute-0 podman[432825]: 2025-12-13 09:44:39.035965325 +0000 UTC m=+0.046667920 container create 8962e23a70ffdd9b1557daa50246fd17dd2ad3b8b5fd97ba32f6d6f1db9082ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 09:44:39 compute-0 systemd[1]: Started libpod-conmon-8962e23a70ffdd9b1557daa50246fd17dd2ad3b8b5fd97ba32f6d6f1db9082ff.scope.
Dec 13 09:44:39 compute-0 podman[432825]: 2025-12-13 09:44:39.015045086 +0000 UTC m=+0.025747701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:44:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f615e5596e4a6ffaa58355a93bd76c16d89e30700665585f954e2ee0dc492805/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f615e5596e4a6ffaa58355a93bd76c16d89e30700665585f954e2ee0dc492805/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f615e5596e4a6ffaa58355a93bd76c16d89e30700665585f954e2ee0dc492805/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f615e5596e4a6ffaa58355a93bd76c16d89e30700665585f954e2ee0dc492805/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f615e5596e4a6ffaa58355a93bd76c16d89e30700665585f954e2ee0dc492805/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:39 compute-0 podman[432825]: 2025-12-13 09:44:39.131644772 +0000 UTC m=+0.142347367 container init 8962e23a70ffdd9b1557daa50246fd17dd2ad3b8b5fd97ba32f6d6f1db9082ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:44:39 compute-0 podman[432825]: 2025-12-13 09:44:39.139438805 +0000 UTC m=+0.150141400 container start 8962e23a70ffdd9b1557daa50246fd17dd2ad3b8b5fd97ba32f6d6f1db9082ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:44:39 compute-0 podman[432825]: 2025-12-13 09:44:39.142751167 +0000 UTC m=+0.153453762 container attach 8962e23a70ffdd9b1557daa50246fd17dd2ad3b8b5fd97ba32f6d6f1db9082ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:44:39 compute-0 kind_zhukovsky[432841]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:44:39 compute-0 kind_zhukovsky[432841]: --> All data devices are unavailable
Dec 13 09:44:39 compute-0 systemd[1]: libpod-8962e23a70ffdd9b1557daa50246fd17dd2ad3b8b5fd97ba32f6d6f1db9082ff.scope: Deactivated successfully.
Dec 13 09:44:39 compute-0 podman[432825]: 2025-12-13 09:44:39.648041157 +0000 UTC m=+0.658743752 container died 8962e23a70ffdd9b1557daa50246fd17dd2ad3b8b5fd97ba32f6d6f1db9082ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_zhukovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 09:44:39 compute-0 ceph-mon[76537]: pgmap v4323: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f615e5596e4a6ffaa58355a93bd76c16d89e30700665585f954e2ee0dc492805-merged.mount: Deactivated successfully.
Dec 13 09:44:39 compute-0 podman[432825]: 2025-12-13 09:44:39.707003611 +0000 UTC m=+0.717706226 container remove 8962e23a70ffdd9b1557daa50246fd17dd2ad3b8b5fd97ba32f6d6f1db9082ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_zhukovsky, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:44:39 compute-0 systemd[1]: libpod-conmon-8962e23a70ffdd9b1557daa50246fd17dd2ad3b8b5fd97ba32f6d6f1db9082ff.scope: Deactivated successfully.
Dec 13 09:44:39 compute-0 sudo[432749]: pam_unix(sudo:session): session closed for user root
Dec 13 09:44:39 compute-0 sudo[432876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:44:39 compute-0 sudo[432876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:44:39 compute-0 sudo[432876]: pam_unix(sudo:session): session closed for user root
Dec 13 09:44:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4324: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:39 compute-0 sudo[432901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:44:39 compute-0 sudo[432901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:44:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:44:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:44:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:44:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:44:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:44:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:44:40 compute-0 podman[432938]: 2025-12-13 09:44:40.23921425 +0000 UTC m=+0.045923171 container create d6e3f74f98e047e241ed4b03b6a56a327e2bae201117eea829948dd2b3b51b13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rubin, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:44:40 compute-0 systemd[1]: Started libpod-conmon-d6e3f74f98e047e241ed4b03b6a56a327e2bae201117eea829948dd2b3b51b13.scope.
Dec 13 09:44:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:44:40 compute-0 podman[432938]: 2025-12-13 09:44:40.220161057 +0000 UTC m=+0.026869998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:44:40 compute-0 podman[432938]: 2025-12-13 09:44:40.333219595 +0000 UTC m=+0.139928536 container init d6e3f74f98e047e241ed4b03b6a56a327e2bae201117eea829948dd2b3b51b13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rubin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:44:40 compute-0 podman[432938]: 2025-12-13 09:44:40.342549737 +0000 UTC m=+0.149258658 container start d6e3f74f98e047e241ed4b03b6a56a327e2bae201117eea829948dd2b3b51b13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rubin, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:44:40 compute-0 podman[432938]: 2025-12-13 09:44:40.345968172 +0000 UTC m=+0.152677093 container attach d6e3f74f98e047e241ed4b03b6a56a327e2bae201117eea829948dd2b3b51b13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rubin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:44:40 compute-0 cranky_rubin[432954]: 167 167
Dec 13 09:44:40 compute-0 systemd[1]: libpod-d6e3f74f98e047e241ed4b03b6a56a327e2bae201117eea829948dd2b3b51b13.scope: Deactivated successfully.
Dec 13 09:44:40 compute-0 conmon[432954]: conmon d6e3f74f98e047e241ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d6e3f74f98e047e241ed4b03b6a56a327e2bae201117eea829948dd2b3b51b13.scope/container/memory.events
Dec 13 09:44:40 compute-0 podman[432938]: 2025-12-13 09:44:40.352102474 +0000 UTC m=+0.158811405 container died d6e3f74f98e047e241ed4b03b6a56a327e2bae201117eea829948dd2b3b51b13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rubin, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:44:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-752adb14f8696970d8a5172b2ed6b396af6fc322dacb19de6b296c9c66696dea-merged.mount: Deactivated successfully.
Dec 13 09:44:40 compute-0 podman[432938]: 2025-12-13 09:44:40.400449425 +0000 UTC m=+0.207158346 container remove d6e3f74f98e047e241ed4b03b6a56a327e2bae201117eea829948dd2b3b51b13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rubin, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 09:44:40 compute-0 systemd[1]: libpod-conmon-d6e3f74f98e047e241ed4b03b6a56a327e2bae201117eea829948dd2b3b51b13.scope: Deactivated successfully.
Dec 13 09:44:40 compute-0 podman[432976]: 2025-12-13 09:44:40.571198716 +0000 UTC m=+0.043724097 container create 0cadb5a987eecc72d8095b49ea96d9c2ce88c701afd4a2cfebb39a486c699d0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 09:44:40 compute-0 systemd[1]: Started libpod-conmon-0cadb5a987eecc72d8095b49ea96d9c2ce88c701afd4a2cfebb39a486c699d0f.scope.
Dec 13 09:44:40 compute-0 podman[432976]: 2025-12-13 09:44:40.552066071 +0000 UTC m=+0.024591482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:44:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3009940dea823f9c4d0440dc03a18485c7ab428e8e646922c98098e64e1b1d68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3009940dea823f9c4d0440dc03a18485c7ab428e8e646922c98098e64e1b1d68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3009940dea823f9c4d0440dc03a18485c7ab428e8e646922c98098e64e1b1d68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3009940dea823f9c4d0440dc03a18485c7ab428e8e646922c98098e64e1b1d68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:40 compute-0 podman[432976]: 2025-12-13 09:44:40.672630966 +0000 UTC m=+0.145156367 container init 0cadb5a987eecc72d8095b49ea96d9c2ce88c701afd4a2cfebb39a486c699d0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 09:44:40 compute-0 podman[432976]: 2025-12-13 09:44:40.680750357 +0000 UTC m=+0.153275738 container start 0cadb5a987eecc72d8095b49ea96d9c2ce88c701afd4a2cfebb39a486c699d0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 09:44:40 compute-0 podman[432976]: 2025-12-13 09:44:40.683846124 +0000 UTC m=+0.156371505 container attach 0cadb5a987eecc72d8095b49ea96d9c2ce88c701afd4a2cfebb39a486c699d0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]: {
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:     "0": [
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:         {
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "devices": [
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "/dev/loop3"
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             ],
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_name": "ceph_lv0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_size": "21470642176",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "name": "ceph_lv0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "tags": {
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.cluster_name": "ceph",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.crush_device_class": "",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.encrypted": "0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.objectstore": "bluestore",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.osd_id": "0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.type": "block",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.vdo": "0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.with_tpm": "0"
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             },
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "type": "block",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "vg_name": "ceph_vg0"
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:         }
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:     ],
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:     "1": [
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:         {
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "devices": [
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "/dev/loop4"
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             ],
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_name": "ceph_lv1",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_size": "21470642176",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "name": "ceph_lv1",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "tags": {
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.cluster_name": "ceph",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.crush_device_class": "",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.encrypted": "0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.objectstore": "bluestore",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.osd_id": "1",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.type": "block",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.vdo": "0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.with_tpm": "0"
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             },
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "type": "block",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "vg_name": "ceph_vg1"
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:         }
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:     ],
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:     "2": [
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:         {
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "devices": [
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "/dev/loop5"
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             ],
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_name": "ceph_lv2",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_size": "21470642176",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "name": "ceph_lv2",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "tags": {
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.cluster_name": "ceph",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.crush_device_class": "",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.encrypted": "0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.objectstore": "bluestore",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.osd_id": "2",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.type": "block",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.vdo": "0",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:                 "ceph.with_tpm": "0"
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             },
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "type": "block",
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:             "vg_name": "ceph_vg2"
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:         }
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]:     ]
Dec 13 09:44:40 compute-0 cranky_mcclintock[432992]: }
Dec 13 09:44:41 compute-0 systemd[1]: libpod-0cadb5a987eecc72d8095b49ea96d9c2ce88c701afd4a2cfebb39a486c699d0f.scope: Deactivated successfully.
Dec 13 09:44:41 compute-0 podman[432976]: 2025-12-13 09:44:41.013853271 +0000 UTC m=+0.486378662 container died 0cadb5a987eecc72d8095b49ea96d9c2ce88c701afd4a2cfebb39a486c699d0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:44:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3009940dea823f9c4d0440dc03a18485c7ab428e8e646922c98098e64e1b1d68-merged.mount: Deactivated successfully.
Dec 13 09:44:41 compute-0 podman[432976]: 2025-12-13 09:44:41.061991597 +0000 UTC m=+0.534516978 container remove 0cadb5a987eecc72d8095b49ea96d9c2ce88c701afd4a2cfebb39a486c699d0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mcclintock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 09:44:41 compute-0 systemd[1]: libpod-conmon-0cadb5a987eecc72d8095b49ea96d9c2ce88c701afd4a2cfebb39a486c699d0f.scope: Deactivated successfully.
Dec 13 09:44:41 compute-0 sudo[432901]: pam_unix(sudo:session): session closed for user root
Dec 13 09:44:41 compute-0 sudo[433012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:44:41 compute-0 nova_compute[248510]: 2025-12-13 09:44:41.200 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:41 compute-0 sudo[433012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:44:41 compute-0 sudo[433012]: pam_unix(sudo:session): session closed for user root
Dec 13 09:44:41 compute-0 sudo[433037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:44:41 compute-0 sudo[433037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:44:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:44:41 compute-0 podman[433075]: 2025-12-13 09:44:41.56943229 +0000 UTC m=+0.027630357 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:44:41 compute-0 podman[433075]: 2025-12-13 09:44:41.730214733 +0000 UTC m=+0.188412780 container create 2190f8b5b260b61ae9039936fb22302a25b14a8c43143bd66f3f64df73531198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:44:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4325: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:41 compute-0 ceph-mon[76537]: pgmap v4324: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:42 compute-0 systemd[1]: Started libpod-conmon-2190f8b5b260b61ae9039936fb22302a25b14a8c43143bd66f3f64df73531198.scope.
Dec 13 09:44:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:44:42 compute-0 podman[433075]: 2025-12-13 09:44:42.179592426 +0000 UTC m=+0.637790493 container init 2190f8b5b260b61ae9039936fb22302a25b14a8c43143bd66f3f64df73531198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_nash, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 09:44:42 compute-0 podman[433075]: 2025-12-13 09:44:42.187219375 +0000 UTC m=+0.645417432 container start 2190f8b5b260b61ae9039936fb22302a25b14a8c43143bd66f3f64df73531198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:44:42 compute-0 strange_nash[433092]: 167 167
Dec 13 09:44:42 compute-0 systemd[1]: libpod-2190f8b5b260b61ae9039936fb22302a25b14a8c43143bd66f3f64df73531198.scope: Deactivated successfully.
Dec 13 09:44:42 compute-0 podman[433075]: 2025-12-13 09:44:42.201625442 +0000 UTC m=+0.659823489 container attach 2190f8b5b260b61ae9039936fb22302a25b14a8c43143bd66f3f64df73531198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_nash, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:44:42 compute-0 podman[433075]: 2025-12-13 09:44:42.202527615 +0000 UTC m=+0.660725662 container died 2190f8b5b260b61ae9039936fb22302a25b14a8c43143bd66f3f64df73531198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 09:44:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e2b86e15c0905ebbc462f2dfc207dd74fbaeab45bcbf77e3aa8046ac64b19a5-merged.mount: Deactivated successfully.
Dec 13 09:44:42 compute-0 nova_compute[248510]: 2025-12-13 09:44:42.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:42 compute-0 podman[433075]: 2025-12-13 09:44:42.445952621 +0000 UTC m=+0.904150668 container remove 2190f8b5b260b61ae9039936fb22302a25b14a8c43143bd66f3f64df73531198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_nash, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:44:42 compute-0 systemd[1]: libpod-conmon-2190f8b5b260b61ae9039936fb22302a25b14a8c43143bd66f3f64df73531198.scope: Deactivated successfully.
Dec 13 09:44:42 compute-0 podman[433115]: 2025-12-13 09:44:42.595111646 +0000 UTC m=+0.025899264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:44:42 compute-0 podman[433115]: 2025-12-13 09:44:42.903291181 +0000 UTC m=+0.334078779 container create ddc2dcbaced95c03730e19078fd663f89626f8f9ca7ec5d7e491b0eab2f8ff7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 09:44:43 compute-0 systemd[1]: Started libpod-conmon-ddc2dcbaced95c03730e19078fd663f89626f8f9ca7ec5d7e491b0eab2f8ff7d.scope.
Dec 13 09:44:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966378af2df0e3c1b8d325ad67cab9f7c9a1dea1b8105b33533d005d03e139d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966378af2df0e3c1b8d325ad67cab9f7c9a1dea1b8105b33533d005d03e139d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966378af2df0e3c1b8d325ad67cab9f7c9a1dea1b8105b33533d005d03e139d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966378af2df0e3c1b8d325ad67cab9f7c9a1dea1b8105b33533d005d03e139d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:44:43 compute-0 podman[433115]: 2025-12-13 09:44:43.404581241 +0000 UTC m=+0.835368859 container init ddc2dcbaced95c03730e19078fd663f89626f8f9ca7ec5d7e491b0eab2f8ff7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:44:43 compute-0 podman[433115]: 2025-12-13 09:44:43.412370834 +0000 UTC m=+0.843158432 container start ddc2dcbaced95c03730e19078fd663f89626f8f9ca7ec5d7e491b0eab2f8ff7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 09:44:43 compute-0 ceph-mon[76537]: pgmap v4325: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:43 compute-0 podman[433115]: 2025-12-13 09:44:43.437051367 +0000 UTC m=+0.867838965 container attach ddc2dcbaced95c03730e19078fd663f89626f8f9ca7ec5d7e491b0eab2f8ff7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:44:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4326: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:44 compute-0 lvm[433210]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:44:44 compute-0 lvm[433210]: VG ceph_vg1 finished
Dec 13 09:44:44 compute-0 lvm[433209]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:44:44 compute-0 lvm[433209]: VG ceph_vg0 finished
Dec 13 09:44:44 compute-0 lvm[433212]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:44:44 compute-0 lvm[433212]: VG ceph_vg2 finished
Dec 13 09:44:44 compute-0 infallible_black[433131]: {}
Dec 13 09:44:44 compute-0 systemd[1]: libpod-ddc2dcbaced95c03730e19078fd663f89626f8f9ca7ec5d7e491b0eab2f8ff7d.scope: Deactivated successfully.
Dec 13 09:44:44 compute-0 systemd[1]: libpod-ddc2dcbaced95c03730e19078fd663f89626f8f9ca7ec5d7e491b0eab2f8ff7d.scope: Consumed 1.324s CPU time.
Dec 13 09:44:44 compute-0 podman[433115]: 2025-12-13 09:44:44.25082015 +0000 UTC m=+1.681607758 container died ddc2dcbaced95c03730e19078fd663f89626f8f9ca7ec5d7e491b0eab2f8ff7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 09:44:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-966378af2df0e3c1b8d325ad67cab9f7c9a1dea1b8105b33533d005d03e139d8-merged.mount: Deactivated successfully.
Dec 13 09:44:44 compute-0 podman[433115]: 2025-12-13 09:44:44.533324106 +0000 UTC m=+1.964111704 container remove ddc2dcbaced95c03730e19078fd663f89626f8f9ca7ec5d7e491b0eab2f8ff7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 09:44:44 compute-0 systemd[1]: libpod-conmon-ddc2dcbaced95c03730e19078fd663f89626f8f9ca7ec5d7e491b0eab2f8ff7d.scope: Deactivated successfully.
Dec 13 09:44:44 compute-0 sudo[433037]: pam_unix(sudo:session): session closed for user root
Dec 13 09:44:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:44:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:44:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:44:44 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:44:44 compute-0 sudo[433229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:44:44 compute-0 sudo[433229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:44:44 compute-0 sudo[433229]: pam_unix(sudo:session): session closed for user root
Dec 13 09:44:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4327: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:45 compute-0 ceph-mon[76537]: pgmap v4326: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:44:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:44:46 compute-0 nova_compute[248510]: 2025-12-13 09:44:46.203 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:44:47 compute-0 nova_compute[248510]: 2025-12-13 09:44:47.334 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:47 compute-0 ceph-mon[76537]: pgmap v4327: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4328: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:49 compute-0 ceph-mon[76537]: pgmap v4328: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4329: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:50 compute-0 nova_compute[248510]: 2025-12-13 09:44:50.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:44:51 compute-0 nova_compute[248510]: 2025-12-13 09:44:51.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:44:51 compute-0 ceph-mon[76537]: pgmap v4329: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:51 compute-0 nova_compute[248510]: 2025-12-13 09:44:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:44:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4330: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:52 compute-0 nova_compute[248510]: 2025-12-13 09:44:52.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:53 compute-0 ceph-mon[76537]: pgmap v4330: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4331: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:53 compute-0 podman[433255]: 2025-12-13 09:44:53.986228365 +0000 UTC m=+0.070785249 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 13 09:44:54 compute-0 podman[433254]: 2025-12-13 09:44:54.053321441 +0000 UTC m=+0.138528672 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 13 09:44:54 compute-0 ceph-mon[76537]: pgmap v4331: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:44:55.479 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:44:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:44:55.479 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:44:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:44:55.479 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:44:55 compute-0 nova_compute[248510]: 2025-12-13 09:44:55.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:44:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4332: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:55 compute-0 podman[433297]: 2025-12-13 09:44:55.994500095 +0000 UTC m=+0.086619472 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true)
Dec 13 09:44:56 compute-0 nova_compute[248510]: 2025-12-13 09:44:56.210 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:56 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:44:56 compute-0 ceph-mon[76537]: pgmap v4332: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:57 compute-0 nova_compute[248510]: 2025-12-13 09:44:57.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:44:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4333: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:58 compute-0 nova_compute[248510]: 2025-12-13 09:44:58.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:44:58 compute-0 nova_compute[248510]: 2025-12-13 09:44:58.827 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:44:58 compute-0 nova_compute[248510]: 2025-12-13 09:44:58.828 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:44:58 compute-0 nova_compute[248510]: 2025-12-13 09:44:58.828 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:44:58 compute-0 nova_compute[248510]: 2025-12-13 09:44:58.828 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:44:58 compute-0 nova_compute[248510]: 2025-12-13 09:44:58.829 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:44:58 compute-0 ceph-mon[76537]: pgmap v4333: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:44:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557364480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:44:59 compute-0 nova_compute[248510]: 2025-12-13 09:44:59.421 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:44:59 compute-0 nova_compute[248510]: 2025-12-13 09:44:59.588 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:44:59 compute-0 nova_compute[248510]: 2025-12-13 09:44:59.590 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3353MB free_disk=59.987355314195156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:44:59 compute-0 nova_compute[248510]: 2025-12-13 09:44:59.590 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:44:59 compute-0 nova_compute[248510]: 2025-12-13 09:44:59.590 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:44:59 compute-0 nova_compute[248510]: 2025-12-13 09:44:59.685 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:44:59 compute-0 nova_compute[248510]: 2025-12-13 09:44:59.685 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:44:59 compute-0 nova_compute[248510]: 2025-12-13 09:44:59.731 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:44:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4334: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:44:59 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1557364480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:45:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:45:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/720829905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:45:00 compute-0 nova_compute[248510]: 2025-12-13 09:45:00.335 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:45:00 compute-0 nova_compute[248510]: 2025-12-13 09:45:00.342 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:45:00 compute-0 nova_compute[248510]: 2025-12-13 09:45:00.709 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:45:00 compute-0 nova_compute[248510]: 2025-12-13 09:45:00.710 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:45:00 compute-0 nova_compute[248510]: 2025-12-13 09:45:00.711 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:45:00 compute-0 ceph-mon[76537]: pgmap v4334: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/720829905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:45:01 compute-0 nova_compute[248510]: 2025-12-13 09:45:01.211 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:45:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4335: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:02 compute-0 nova_compute[248510]: 2025-12-13 09:45:02.385 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:03 compute-0 ceph-mon[76537]: pgmap v4335: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:03 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4336: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:05 compute-0 ceph-mon[76537]: pgmap v4336: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:05 compute-0 nova_compute[248510]: 2025-12-13 09:45:05.712 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:45:05 compute-0 nova_compute[248510]: 2025-12-13 09:45:05.713 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 09:45:05 compute-0 nova_compute[248510]: 2025-12-13 09:45:05.713 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 09:45:05 compute-0 nova_compute[248510]: 2025-12-13 09:45:05.846 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 09:45:05 compute-0 nova_compute[248510]: 2025-12-13 09:45:05.846 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:45:05 compute-0 nova_compute[248510]: 2025-12-13 09:45:05.847 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:45:05 compute-0 nova_compute[248510]: 2025-12-13 09:45:05.847 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 09:45:05 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4337: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:06 compute-0 nova_compute[248510]: 2025-12-13 09:45:06.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:06 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:45:06 compute-0 nova_compute[248510]: 2025-12-13 09:45:06.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:45:07 compute-0 ceph-mon[76537]: pgmap v4337: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:07 compute-0 nova_compute[248510]: 2025-12-13 09:45:07.388 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:07 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4338: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:09 compute-0 ceph-mon[76537]: pgmap v4338: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:45:09
Dec 13 09:45:09 compute-0 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 09:45:09 compute-0 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 09:45:09 compute-0 ceph-mgr[76830]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'default.rgw.log', 'images', '.rgw.root']
Dec 13 09:45:09 compute-0 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 09:45:09 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4339: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:45:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:45:10 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 09:45:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:45:10 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 09:45:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:45:10 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 09:45:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:45:10 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:45:11 compute-0 ceph-mon[76537]: pgmap v4339: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:11 compute-0 nova_compute[248510]: 2025-12-13 09:45:11.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 09:45:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 09:45:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:45:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 09:45:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:45:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 09:45:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:45:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 09:45:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:45:11 compute-0 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 09:45:11 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:45:11 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4340: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:12 compute-0 nova_compute[248510]: 2025-12-13 09:45:12.391 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:13 compute-0 ceph-mon[76537]: pgmap v4340: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.110408) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765619113110464, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1298, "num_deletes": 251, "total_data_size": 2062350, "memory_usage": 2087056, "flush_reason": "Manual Compaction"}
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765619113126228, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 2014752, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85602, "largest_seqno": 86899, "table_properties": {"data_size": 2008505, "index_size": 3449, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13377, "raw_average_key_size": 20, "raw_value_size": 1995996, "raw_average_value_size": 2996, "num_data_blocks": 154, "num_entries": 666, "num_filter_entries": 666, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618986, "oldest_key_time": 1765618986, "file_creation_time": 1765619113, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 15894 microseconds, and 7968 cpu microseconds.
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.126299) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 2014752 bytes OK
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.126329) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.132109) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.132144) EVENT_LOG_v1 {"time_micros": 1765619113132137, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.132173) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 2056445, prev total WAL file size 2056445, number of live WAL files 2.
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.133096) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(1967KB)], [206(10MB)]
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765619113133162, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 13278717, "oldest_snapshot_seqno": -1}
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 10188 keys, 11382561 bytes, temperature: kUnknown
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765619113217348, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 11382561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11319930, "index_size": 36086, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 269795, "raw_average_key_size": 26, "raw_value_size": 11143643, "raw_average_value_size": 1093, "num_data_blocks": 1377, "num_entries": 10188, "num_filter_entries": 10188, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765619113, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.217708) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 11382561 bytes
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.221952) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.5 rd, 135.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 10.7 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(12.2) write-amplify(5.6) OK, records in: 10702, records dropped: 514 output_compression: NoCompression
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.221979) EVENT_LOG_v1 {"time_micros": 1765619113221967, "job": 130, "event": "compaction_finished", "compaction_time_micros": 84324, "compaction_time_cpu_micros": 31055, "output_level": 6, "num_output_files": 1, "total_output_size": 11382561, "num_input_records": 10702, "num_output_records": 10188, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765619113222614, "job": 130, "event": "table_file_deletion", "file_number": 208}
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765619113225877, "job": 130, "event": "table_file_deletion", "file_number": 206}
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.133003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.226110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.226123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.226126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.226130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:45:13 compute-0 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:45:13.226133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 09:45:13 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4341: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:15 compute-0 ceph-mon[76537]: pgmap v4341: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 09:45:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1176812931' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:45:15 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 09:45:15 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1176812931' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:45:15 compute-0 nova_compute[248510]: 2025-12-13 09:45:15.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:45:15 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4342: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:16 compute-0 nova_compute[248510]: 2025-12-13 09:45:16.218 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1176812931' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 09:45:16 compute-0 ceph-mon[76537]: from='client.? 192.168.122.10:0/1176812931' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 09:45:16 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:45:17 compute-0 nova_compute[248510]: 2025-12-13 09:45:17.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:17 compute-0 ceph-mon[76537]: pgmap v4342: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:17 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4343: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:19 compute-0 ceph-mon[76537]: pgmap v4343: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:19 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4344: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:21 compute-0 sshd-session[433363]: Accepted publickey for zuul from 192.168.122.10 port 60282 ssh2: ECDSA SHA256:LSt7YF7gQCPgR0tIb34gZL5shqra8qE8VX5d32veH0w
Dec 13 09:45:21 compute-0 systemd-logind[787]: New session 62 of user zuul.
Dec 13 09:45:21 compute-0 systemd[1]: Started Session 62 of User zuul.
Dec 13 09:45:21 compute-0 sshd-session[433363]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 09:45:21 compute-0 nova_compute[248510]: 2025-12-13 09:45:21.221 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:21 compute-0 sudo[433367]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 13 09:45:21 compute-0 sudo[433367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 09:45:21 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:45:21 compute-0 ceph-mon[76537]: pgmap v4344: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4345: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5435278116822236e-05 of space, bias 1.0, pg target 0.004630583435046671 quantized to 32 (current 32)
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.826108195861351e-06 of space, bias 1.0, pg target 0.0011478324587584053 quantized to 32 (current 32)
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.968770392745114e-07 of space, bias 4.0, pg target 0.0007162524471294137 quantized to 16 (current 32)
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:45:21 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:45:22 compute-0 nova_compute[248510]: 2025-12-13 09:45:22.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:23 compute-0 ceph-mon[76537]: pgmap v4345: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:23 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4346: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:23 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23456 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:24 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23458 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:25 compute-0 podman[433589]: 2025-12-13 09:45:25.005152356 +0000 UTC m=+0.092611642 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 09:45:25 compute-0 podman[433588]: 2025-12-13 09:45:25.014498408 +0000 UTC m=+0.103332768 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:45:25 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 13 09:45:25 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2558118013' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 09:45:25 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4347: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:26 compute-0 ceph-mon[76537]: pgmap v4346: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:26 compute-0 ceph-mon[76537]: from='client.23456 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:26 compute-0 ceph-mon[76537]: from='client.23458 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:26 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2558118013' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 09:45:26 compute-0 nova_compute[248510]: 2025-12-13 09:45:26.223 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:26 compute-0 podman[433662]: 2025-12-13 09:45:26.264586026 +0000 UTC m=+0.073543307 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 09:45:26 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:45:27 compute-0 ceph-mon[76537]: pgmap v4347: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:27 compute-0 nova_compute[248510]: 2025-12-13 09:45:27.400 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:27 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4348: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:28 compute-0 ovs-vsctl[433714]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 13 09:45:29 compute-0 ceph-mon[76537]: pgmap v4348: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:29 compute-0 virtqemud[248808]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 13 09:45:29 compute-0 virtqemud[248808]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 13 09:45:29 compute-0 virtqemud[248808]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 13 09:45:29 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4349: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:30 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: cache status {prefix=cache status} (starting...)
Dec 13 09:45:30 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: client ls {prefix=client ls} (starting...)
Dec 13 09:45:30 compute-0 lvm[434053]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:45:30 compute-0 lvm[434053]: VG ceph_vg0 finished
Dec 13 09:45:30 compute-0 lvm[434060]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:45:30 compute-0 lvm[434060]: VG ceph_vg1 finished
Dec 13 09:45:30 compute-0 lvm[434067]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:45:30 compute-0 lvm[434067]: VG ceph_vg2 finished
Dec 13 09:45:30 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23462 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:31 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: damage ls {prefix=damage ls} (starting...)
Dec 13 09:45:31 compute-0 nova_compute[248510]: 2025-12-13 09:45:31.225 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:31 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump loads {prefix=dump loads} (starting...)
Dec 13 09:45:31 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23464 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:31 compute-0 ceph-mon[76537]: pgmap v4349: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:31 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 13 09:45:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:45:31 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 13 09:45:31 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 13 09:45:31 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 13 09:45:31 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4350: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:31 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23468 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:31 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec 13 09:45:31 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3887153314' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 13 09:45:32 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 13 09:45:32 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 13 09:45:32 compute-0 nova_compute[248510]: 2025-12-13 09:45:32.402 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:32 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23470 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:32 compute-0 ceph-mgr[76830]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 09:45:32 compute-0 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: 2025-12-13T09:45:32.460+0000 7f1f704e3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 09:45:32 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: ops {prefix=ops} (starting...)
Dec 13 09:45:32 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:45:32 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/555948811' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:45:32 compute-0 ceph-mon[76537]: from='client.23462 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:32 compute-0 ceph-mon[76537]: from='client.23464 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:32 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3887153314' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 13 09:45:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 13 09:45:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452951807' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 13 09:45:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec 13 09:45:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186834356' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 13 09:45:33 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: session ls {prefix=session ls} (starting...)
Dec 13 09:45:33 compute-0 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: status {prefix=status} (starting...)
Dec 13 09:45:33 compute-0 ceph-mon[76537]: pgmap v4350: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:33 compute-0 ceph-mon[76537]: from='client.23468 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:33 compute-0 ceph-mon[76537]: from='client.23470 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/555948811' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:45:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1452951807' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 13 09:45:33 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1186834356' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 13 09:45:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 13 09:45:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/347288461' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 09:45:33 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 13 09:45:33 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2301832129' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 13 09:45:33 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4351: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:34 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23484 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 13 09:45:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1561458931' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 09:45:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/347288461' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 09:45:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2301832129' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 13 09:45:34 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1561458931' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 09:45:34 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23486 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:34 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 13 09:45:34 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1708852382' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 09:45:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec 13 09:45:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1728416195' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 13 09:45:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 09:45:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4208574337' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 09:45:35 compute-0 ceph-mon[76537]: pgmap v4351: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:35 compute-0 ceph-mon[76537]: from='client.23484 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:35 compute-0 ceph-mon[76537]: from='client.23486 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1708852382' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 09:45:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1728416195' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 13 09:45:35 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4208574337' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 09:45:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 13 09:45:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3054905262' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 13 09:45:35 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4352: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:35 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 13 09:45:35 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167183673' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 09:45:36 compute-0 nova_compute[248510]: 2025-12-13 09:45:36.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:36 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23498 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:36 compute-0 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: 2025-12-13T09:45:36.301+0000 7f1f704e3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 13 09:45:36 compute-0 ceph-mgr[76830]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 13 09:45:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 13 09:45:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326583552' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 09:45:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:45:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3054905262' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 13 09:45:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1167183673' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 09:45:36 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1326583552' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 09:45:36 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 13 09:45:36 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3446947992' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 13 09:45:36 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23504 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:45.476192+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:46.476459+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b531808c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320829 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b550fda40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:47.476607+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56786944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b55346fc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.317261696s of 14.469831467s, submitted: 12
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b51d11340
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:48.476970+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56778752 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:49.477130+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56778752 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc7f000/0x0/0x4ffc00000, data 0x1cce98f/0x1e8d000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [1])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:50.477298+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:51.477500+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361887 data_alloc: 218103808 data_used: 10613331
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:52.477672+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:53.477803+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:54.477972+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc7f000/0x0/0x4ffc00000, data 0x1cce98f/0x1e8d000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:55.478200+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:56.478439+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361887 data_alloc: 218103808 data_used: 10613331
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:57.478615+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:58.478822+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:59.478956+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc7f000/0x0/0x4ffc00000, data 0x1cce98f/0x1e8d000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:00.479196+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.383279800s of 12.385676384s, submitted: 1
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:01.479368+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 47841280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3430299 data_alloc: 218103808 data_used: 10621523
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec089000/0x0/0x4ffc00000, data 0x28c498f/0x2a83000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:02.479659+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 48250880 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:03.479853+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 48250880 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:04.480043+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 48250880 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:05.480145+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfca000/0x0/0x4ffc00000, data 0x298198f/0x2b40000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [0,0,3,0,0,0,0,0,7,1])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:06.480387+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445819 data_alloc: 218103808 data_used: 11854419
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:07.480612+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:08.480769+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:09.480955+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:10.481151+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:11.481296+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445819 data_alloc: 218103808 data_used: 11854419
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:12.481464+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:13.481641+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:14.481777+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:15.481962+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b547ffc00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494fc00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494fc00 session 0x562b54d2c1c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54d6a380
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b5511ec40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:16.482165+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.426490784s of 15.836899757s, submitted: 131
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 47407104 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b552c7180
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b5250bc00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b57d3e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b57d3e000 session 0x562b555c8c40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3483357 data_alloc: 218103808 data_used: 11854419
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54406e00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b552fc540
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:17.482302+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:18.482466+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:19.482656+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:20.482839+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb918000/0x0/0x4ffc00000, data 0x303499f/0x31f4000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:21.482986+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3483357 data_alloc: 218103808 data_used: 11854419
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:22.483190+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:23.483336+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b5250aa80
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:24.483522+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb918000/0x0/0x4ffc00000, data 0x303499f/0x31f4000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297033728 unmapped: 47382528 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b552fce00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:25.483693+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297033728 unmapped: 47382528 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb918000/0x0/0x4ffc00000, data 0x303499f/0x31f4000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980c00 session 0x562b566bea80
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b51d101c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:26.515583+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 47374336 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3485119 data_alloc: 218103808 data_used: 11854419
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:27.515735+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 47374336 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:28.516000+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 47374336 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.240238190s of 12.527070999s, submitted: 14
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:29.516180+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297066496 unmapped: 47349760 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb917000/0x0/0x4ffc00000, data 0x30349af/0x31f5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb917000/0x0/0x4ffc00000, data 0x30349af/0x31f5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:30.516373+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 46686208 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:31.516530+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 46686208 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518799 data_alloc: 234881024 data_used: 17691219
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:32.516674+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 46686208 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:33.516805+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297738240 unmapped: 46678016 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:34.516950+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297754624 unmapped: 46661632 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:35.517095+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb917000/0x0/0x4ffc00000, data 0x30349af/0x31f5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:36.517267+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518671 data_alloc: 234881024 data_used: 17687123
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:37.517397+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:38.517553+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb917000/0x0/0x4ffc00000, data 0x30349af/0x31f5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:39.517782+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.727138519s of 10.814496040s, submitted: 92
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:40.517963+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297926656 unmapped: 46489600 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:41.518117+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557167 data_alloc: 234881024 data_used: 18235987
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:42.518294+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:43.518403+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:44.518616+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:45.518816+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:46.519869+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557167 data_alloc: 234881024 data_used: 18235987
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:47.520062+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:48.520381+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:49.520532+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:50.520684+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:51.520841+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557295 data_alloc: 234881024 data_used: 18240083
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:52.521001+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:53.521151+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297984000 unmapped: 46432256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:54.521311+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297984000 unmapped: 46432256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:55.521435+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297984000 unmapped: 46432256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b52afdc00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b54951a40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:56.521611+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.452621460s of 16.708354950s, submitted: 31
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b54e0d180
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449686 data_alloc: 218103808 data_used: 11854419
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:57.521764+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:58.521906+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:59.522088+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfc0000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:00.522281+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b55347500
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b50f49c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:01.522441+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b55367880
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfc0000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:02.522627+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:03.522774+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:04.522964+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:05.523152+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:06.523376+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:07.523561+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:08.523710+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:09.523911+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:10.524059+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:11.524245+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:12.524430+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:13.524691+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:14.524861+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:15.525124+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:16.525345+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:17.525546+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:18.525696+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:19.525879+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:20.526114+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:21.526335+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:22.526513+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:23.526707+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:24.526888+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:25.527201+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:26.527422+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:27.527599+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:28.527759+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:29.527927+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:30.528169+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:31.529210+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:32.529384+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:33.529557+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:34.529731+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:35.529964+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:36.530411+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:37.530627+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:38.530838+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:39.531054+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:40.531297+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:41.531563+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:42.531747+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:43.532018+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:44.532198+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:45.532451+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:46.532708+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:47.532991+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:48.533165+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:49.533379+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:50.533552+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:51.533732+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:52.533905+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:53.534118+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:54.534322+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:55.534486+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:56.534676+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 60.426128387s of 60.458377838s, submitted: 21
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 293036032 unmapped: 51380224 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b54db96c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b5498ee00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54980800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b5511efc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54407dc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:57.534821+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b552c6c40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323283 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:58.535031+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:59.535232+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:00.535446+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:01.535618+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:02.535781+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323283 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:03.535934+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:04.536119+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:05.536296+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:06.536489+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:07.536725+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323283 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:08.536908+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:09.537161+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b550fd340
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:10.537304+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:11.537498+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:12.537689+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323283 data_alloc: 218103808 data_used: 4014675
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b59f63dc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:13.537873+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5497fc00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5497fc00 session 0x562b5819ec40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.514007568s of 16.712116241s, submitted: 33
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5250b500
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:14.538028+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:15.538161+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:16.538328+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf24000/0x0/0x4ffc00000, data 0x1a299e1/0x1be8000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:17.538480+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3347951 data_alloc: 218103808 data_used: 7718483
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:18.538642+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf24000/0x0/0x4ffc00000, data 0x1a299e1/0x1be8000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:19.538800+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:20.539022+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:21.539150+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:22.539314+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3347951 data_alloc: 218103808 data_used: 7718483
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf24000/0x0/0x4ffc00000, data 0x1a299e1/0x1be8000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:23.539487+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:24.539676+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:25.539812+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.294775009s of 12.303226471s, submitted: 2
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 45989888 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:26.539982+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297967616 unmapped: 46448640 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:27.540136+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x25a09e1/0x275f000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3425143 data_alloc: 218103808 data_used: 7842387
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 47120384 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:28.540280+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:29.540447+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:30.540620+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec383000/0x0/0x4ffc00000, data 0x25ca9e1/0x2789000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:31.540746+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:32.540925+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429921 data_alloc: 218103808 data_used: 7842387
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec383000/0x0/0x4ffc00000, data 0x25ca9e1/0x2789000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:33.541061+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:34.541246+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:35.541441+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:36.541659+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:37.541786+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428473 data_alloc: 218103808 data_used: 7846483
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:38.541925+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec380000/0x0/0x4ffc00000, data 0x25cd9e1/0x278c000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:39.542040+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec380000/0x0/0x4ffc00000, data 0x25cd9e1/0x278c000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:40.542185+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:41.542361+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:42.542496+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428473 data_alloc: 218103808 data_used: 7846483
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 47087616 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:43.542647+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 47087616 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:44.542759+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec380000/0x0/0x4ffc00000, data 0x25cd9e1/0x278c000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 47087616 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:45.542874+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.759756088s of 19.683139801s, submitted: 100
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b54db8700
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b52afcfc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc1400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc1400 session 0x562b59f62540
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d2c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d2c00 session 0x562b54d2c380
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5498e700
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:46.543062+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:47.544631+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464111 data_alloc: 218103808 data_used: 7846483
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:48.545267+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:49.545445+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda9000/0x0/0x4ffc00000, data 0x2ba49e1/0x2d63000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:50.545581+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:51.545709+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:52.546161+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464111 data_alloc: 218103808 data_used: 7846483
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:53.546331+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda9000/0x0/0x4ffc00000, data 0x2ba49e1/0x2d63000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b543aca80
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:54.546471+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc1400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc1400 session 0x562b54db9340
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:55.546983+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:56.547244+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda9000/0x0/0x4ffc00000, data 0x2ba49e1/0x2d63000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b566be000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:57.548159+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5ad35400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5ad35400 session 0x562b54d2d500
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464111 data_alloc: 218103808 data_used: 7846483
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:58.548326+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:59.548437+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298393600 unmapped: 49700864 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.676366806s of 13.795249939s, submitted: 7
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:00.548949+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:01.549254+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:02.549540+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda7000/0x0/0x4ffc00000, data 0x2ba59e1/0x2d64000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500135 data_alloc: 234881024 data_used: 13920851
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:03.549828+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda7000/0x0/0x4ffc00000, data 0x2ba59e1/0x2d64000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda7000/0x0/0x4ffc00000, data 0x2ba59e1/0x2d64000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:04.549983+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:05.550241+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:06.550434+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:07.550564+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500135 data_alloc: 234881024 data_used: 13920851
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:08.550712+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:09.550897+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda7000/0x0/0x4ffc00000, data 0x2ba59e1/0x2d64000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.781335831s of 10.787143707s, submitted: 3
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:10.551122+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 45432832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:11.551303+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 48185344 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:12.551533+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547725 data_alloc: 234881024 data_used: 14633555
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:13.551696+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:14.551921+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:15.552156+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:16.552370+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:17.552467+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547725 data_alloc: 234881024 data_used: 14633555
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:18.552604+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:19.552749+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:20.552896+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:21.553106+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.882456779s of 11.166961670s, submitted: 31
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:22.553301+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6e6, meta 0x110a291a), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549005 data_alloc: 234881024 data_used: 14719571
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:23.553448+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:24.553653+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:25.553830+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6e6, meta 0x110a291a), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:26.554020+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:27.554140+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547045 data_alloc: 234881024 data_used: 14719571
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:28.554327+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b543801c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b555c8380
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc1400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:29.554480+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 48136192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:30.554682+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 48136192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc1400 session 0x562b54db96c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:31.554798+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fa000/0x0/0x4ffc00000, data 0x31539e1/0x3312000, compress 0x0/0x0/0x0, omap 0x4d6e6, meta 0x110a291a), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:32.554961+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435449 data_alloc: 218103808 data_used: 7846483
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:33.555116+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:34.555214+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:35.555342+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.982597351s of 14.264299393s, submitted: 9
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b532196c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b54407880
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:36.555481+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2c3000/0x0/0x4ffc00000, data 0x168b97f/0x1849000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:37.555596+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b555c88c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:38.555746+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:39.555868+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:40.556122+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:41.556270+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:42.556413+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:43.556556+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:44.556727+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:45.556891+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:46.557214+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:47.557340+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:48.557500+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:49.557647+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:50.557795+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:51.557940+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:52.558365+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:53.558576+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:54.558740+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:55.558897+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:56.559172+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:57.559443+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:58.559699+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:59.559901+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:00.560198+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:01.560410+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:02.562562+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:03.562777+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: mgrc ms_handle_reset ms_handle_reset con 0x562b588df800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 09:45:37 compute-0 ceph-osd[89221]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: get_auth_request con 0x562b54980c00 auth_method 0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: mgrc handle_mgr_configure stats_period=5
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:04.562932+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:05.563208+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:06.563452+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:07.563718+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:08.563931+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:09.564184+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:10.564475+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:11.564629+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:12.564818+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:13.565142+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 50905088 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:14.565333+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 50905088 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.654140472s of 39.253135681s, submitted: 37
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:15.565502+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5511f500
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 50937856 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b55347c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b550fcc40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc1400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc1400 session 0x562b543ac8c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54d2da40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:16.565765+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 50937856 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:17.565959+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 50937856 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3335641 data_alloc: 218103808 data_used: 4018673
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:18.566118+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 50937856 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:19.566309+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:20.566514+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b51d108c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:21.566670+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:22.566794+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3335897 data_alloc: 218103808 data_used: 4050417
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:23.566942+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:24.567157+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:25.567349+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:26.567552+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:27.567737+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354329 data_alloc: 218103808 data_used: 7196145
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:28.567944+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:29.568159+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:30.568305+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:31.568463+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:32.568666+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354329 data_alloc: 218103808 data_used: 7196145
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:33.568895+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.388601303s of 18.978036880s, submitted: 4
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 50544640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:34.569027+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:35.569129+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:36.569306+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:37.569502+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375597 data_alloc: 218103808 data_used: 7561713
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:38.569642+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:39.569779+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:40.569916+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299286528 unmapped: 48807936 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:41.570182+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299286528 unmapped: 48807936 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:42.570451+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299286528 unmapped: 48807936 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375613 data_alloc: 218103808 data_used: 7561713
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:43.570720+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299294720 unmapped: 48799744 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:44.570876+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299294720 unmapped: 48799744 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:45.571186+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 48791552 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:46.571387+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 48791552 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:47.571580+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 48791552 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375613 data_alloc: 218103808 data_used: 7561713
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:48.571761+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 48791552 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:49.571961+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.605512619s of 15.843142509s, submitted: 25
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b51d116c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298852352 unmapped: 49242112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d9c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d9c00 session 0x562b51d11340
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555de800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555de800 session 0x562b5297a000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53180000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b552c7180
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:50.572141+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:51.572278+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:52.572446+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c6000/0x0/0x4ffc00000, data 0x1f879e1/0x2146000, compress 0x0/0x0/0x0, omap 0x4dae8, meta 0x110a2518), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407519 data_alloc: 218103808 data_used: 7561713
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:53.572674+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:54.572888+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:55.573031+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:56.573261+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c6000/0x0/0x4ffc00000, data 0x1f879e1/0x2146000, compress 0x0/0x0/0x0, omap 0x4dae8, meta 0x110a2518), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:57.573469+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c6000/0x0/0x4ffc00000, data 0x1f879e1/0x2146000, compress 0x0/0x0/0x0, omap 0x4dae8, meta 0x110a2518), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407519 data_alloc: 218103808 data_used: 7561713
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:58.573659+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d9c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d9c00 session 0x562b54e0c380
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298885120 unmapped: 49209344 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5ad35800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:59.573794+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298893312 unmapped: 49201152 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:00.573939+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:01.574092+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:02.574234+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436133 data_alloc: 218103808 data_used: 11955185
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:03.574392+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:04.574572+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:05.574877+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:06.575177+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:07.575317+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436133 data_alloc: 218103808 data_used: 11955185
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:08.575467+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:09.575609+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:10.575761+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.689737320s of 20.918289185s, submitted: 50
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 46456832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:11.575899+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 46456832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:12.576172+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301916160 unmapped: 46178304 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496455 data_alloc: 218103808 data_used: 12848113
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:13.576304+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec174000/0x0/0x4ffc00000, data 0x27d8a04/0x2998000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:14.576514+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec174000/0x0/0x4ffc00000, data 0x27d8a04/0x2998000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:15.576694+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:16.576948+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:17.577194+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec174000/0x0/0x4ffc00000, data 0x27d8a04/0x2998000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497911 data_alloc: 218103808 data_used: 12860401
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:18.577323+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec174000/0x0/0x4ffc00000, data 0x27d8a04/0x2998000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:19.577494+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:20.577718+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:21.577890+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5523bc00 session 0x562b552c6a80
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b52941000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:22.578040+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec153000/0x0/0x4ffc00000, data 0x27f9a04/0x29b9000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497663 data_alloc: 218103808 data_used: 12934129
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:23.578189+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.069880486s of 13.417132378s, submitted: 83
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b54950fc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5ad35800 session 0x562b55346000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301940736 unmapped: 46153728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:24.578342+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec153000/0x0/0x4ffc00000, data 0x27f9a04/0x29b9000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [1])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b555c8c40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:25.578493+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:26.578725+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:27.578938+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:28.579110+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384144 data_alloc: 218103808 data_used: 7561713
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:29.579258+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:30.579419+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecaeb000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:31.579591+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b53218000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b552fce00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:32.579746+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecaeb000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [0,0,1,2])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b53234e00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:33.579935+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:34.580175+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:35.580358+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:36.580588+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:37.580744+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:38.580931+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:39.582941+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:40.583089+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:41.583275+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:42.583449+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:43.583612+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:44.583783+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:45.583966+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:46.584181+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 48521216 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:47.584352+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 48521216 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:48.584508+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 48521216 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:49.584726+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:50.584969+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:51.585172+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:52.585328+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:53.585479+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:54.585634+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:55.585804+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:56.586025+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:57.586216+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:58.586370+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:59.586681+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:00.586871+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:01.587130+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:02.588408+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:03.589661+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:04.590280+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:05.590518+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:06.591695+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:07.592453+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:08.593207+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:09.593631+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:10.593876+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:11.594115+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 48488448 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:12.594625+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 48488448 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:13.594938+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 48488448 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:14.595476+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 48488448 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 50.231575012s of 50.706127167s, submitted: 63
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:15.595694+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 48455680 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:16.595982+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 48455680 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:17.596199+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 48455680 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:18.596456+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 48455680 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:19.596778+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 48447488 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:20.596979+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 48447488 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:21.597160+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 48447488 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:22.597390+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 48447488 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d9c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:23.597565+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 48447488 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [0,0,0,0,0,0,0,0,4])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:24.597712+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 44638208 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d9c00 session 0x562b5819ec40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53181dc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b53180700
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:25.597881+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b566bea80
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b532341c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 48308224 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:26.598162+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 48308224 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:27.598447+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299794432 unmapped: 48300032 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:28.598645+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299794432 unmapped: 48300032 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369086 data_alloc: 218103808 data_used: 4026732
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7c000/0x0/0x4ffc00000, data 0x1ad297f/0x1c90000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:29.598834+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299794432 unmapped: 48300032 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:30.599016+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299794432 unmapped: 48300032 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7c000/0x0/0x4ffc00000, data 0x1ad297f/0x1c90000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:31.599227+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.732717514s of 16.663993835s, submitted: 37
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b552fc700
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:32.599363+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:33.599507+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395959 data_alloc: 218103808 data_used: 8353644
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:34.599663+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:35.599818+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:36.600010+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:37.600223+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:38.601314+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395959 data_alloc: 218103808 data_used: 8353644
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:39.601516+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:40.601667+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:41.601905+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:42.602109+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:43.602279+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.296160698s of 12.306211472s, submitted: 4
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411073 data_alloc: 218103808 data_used: 8374124
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec87f000/0x0/0x4ffc00000, data 0x20ce9a2/0x228d000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [1])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:44.602520+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 45539328 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:45.602716+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:46.602957+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:47.603208+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec41c000/0x0/0x4ffc00000, data 0x25299a2/0x26e8000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:48.603607+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465651 data_alloc: 218103808 data_used: 8661868
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:49.603751+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:50.603908+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:51.604097+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:52.604779+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec405000/0x0/0x4ffc00000, data 0x25489a2/0x2707000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:53.604928+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460459 data_alloc: 218103808 data_used: 8661868
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:54.605062+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec405000/0x0/0x4ffc00000, data 0x25489a2/0x2707000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec405000/0x0/0x4ffc00000, data 0x25489a2/0x2707000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:55.605290+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:56.605540+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.590172768s of 12.890510559s, submitted: 92
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 46014464 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3fa000/0x0/0x4ffc00000, data 0x25539a2/0x2712000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:57.605766+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 46014464 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3f9000/0x0/0x4ffc00000, data 0x25549a2/0x2713000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:58.605998+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 46014464 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460587 data_alloc: 218103808 data_used: 8661868
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:59.606300+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 46006272 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3f9000/0x0/0x4ffc00000, data 0x25549a2/0x2713000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:00.606519+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 46006272 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b53219a40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b54406fc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b52afdc00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b54951500
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555cec00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3f9000/0x0/0x4ffc00000, data 0x25549a2/0x2713000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [0,0,0,0,0,0,4])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:01.606660+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555cec00 session 0x562b555c96c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b555c9880
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b59f62540
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b547ff340
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b5511ec40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:02.606885+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:03.607228+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532989 data_alloc: 218103808 data_used: 8661868
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:04.607407+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:05.607592+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:06.607786+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:07.607928+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 52477952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494fc00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494fc00 session 0x562b55347500
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:08.608087+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 52477952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532989 data_alloc: 218103808 data_used: 8661868
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:09.608229+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b5297a1c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 52469760 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:10.608393+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b54d6bdc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 52469760 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.584549904s of 14.069593430s, submitted: 17
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b53219500
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528d9800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:11.608518+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 52461568 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:12.608686+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:13.608830+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580038 data_alloc: 218103808 data_used: 13113962
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:14.608967+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:15.609140+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:16.609388+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:17.609537+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:18.609690+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580542 data_alloc: 218103808 data_used: 13113962
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:19.609855+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb80f000/0x0/0x4ffc00000, data 0x313c9b2/0x32fc000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:20.610047+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:21.610267+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb80f000/0x0/0x4ffc00000, data 0x313c9b2/0x32fc000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:22.610436+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.964795113s of 11.985386848s, submitted: 9
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 52191232 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:23.610584+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 51101696 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3610428 data_alloc: 234881024 data_used: 14183018
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:24.610772+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:25.610943+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:26.611176+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:27.611374+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb37a000/0x0/0x4ffc00000, data 0x35d29b2/0x3792000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:28.611558+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617380 data_alloc: 234881024 data_used: 14252650
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:29.611706+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb359000/0x0/0x4ffc00000, data 0x35f39b2/0x37b3000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:30.611920+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:31.612194+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:32.612382+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:33.612626+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b552fce00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.867029190s of 11.092142105s, submitted: 63
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528d9800 session 0x562b566bf880
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3612536 data_alloc: 234881024 data_used: 14252650
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b555c8e00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:34.612819+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 52658176 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3f8000/0x0/0x4ffc00000, data 0x25559a2/0x2714000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:35.613139+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 52658176 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:36.613397+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 52658176 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:37.613579+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b51d101c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b5250a700
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 52658176 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b553476c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:38.613731+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:39.613870+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:40.613998+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:41.614148+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:42.614278+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:43.614435+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:44.614587+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:45.614735+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:46.614878+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:47.615043+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:48.615264+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:49.615468+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:50.615701+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:51.615902+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:52.616059+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:53.616304+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:54.616482+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:55.616628+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:56.616837+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:57.617032+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:58.617198+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:59.617339+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:00.617493+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:01.617608+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:02.617737+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:03.617982+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 52625408 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:04.618202+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 52625408 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:05.618401+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:06.618698+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:07.618937+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:08.619185+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:09.619453+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:10.619657+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:11.619853+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:12.620030+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:13.620235+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 52609024 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:14.620405+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5494e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b53218a80
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b552c7c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b552c6e00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b54d2c1c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 41.048336029s of 41.130916595s, submitted: 44
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 51535872 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b5375ee00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b598e5400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b552c6c40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53234e00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b55346700
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b550fcc40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:15.620605+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:16.620833+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:17.620988+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:18.621196+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371796 data_alloc: 218103808 data_used: 4010551
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:19.621360+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:20.621511+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:21.621651+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:22.621813+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23508 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:37 compute-0 nova_compute[248510]: 2025-12-13 09:45:37.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:23.622029+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b543801c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371796 data_alloc: 218103808 data_used: 4010551
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d7000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a0c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:24.622159+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:25.622283+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:26.622512+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:27.622672+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:28.622859+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371928 data_alloc: 218103808 data_used: 4010551
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:29.623097+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:30.623272+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:31.623430+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:32.623645+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:33.623798+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371928 data_alloc: 218103808 data_used: 4010551
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:34.623999+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:35.624251+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.683448792s of 20.806020737s, submitted: 41
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:36.624490+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 51322880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:37.624709+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306692096 unmapped: 49274880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:38.624897+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 52928512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396090 data_alloc: 218103808 data_used: 4084279
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:39.625143+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:40.625324+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:41.625552+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eccef000/0x0/0x4ffc00000, data 0x1c5c9f1/0x1e1c000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:42.625736+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eccef000/0x0/0x4ffc00000, data 0x1c5c9f1/0x1e1c000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:43.625902+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3404906 data_alloc: 218103808 data_used: 4227639
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:44.626165+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:45.626382+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:46.626620+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:47.626769+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:48.626969+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402178 data_alloc: 218103808 data_used: 4231735
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:49.627200+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.2 total, 600.0 interval
                                           Cumulative writes: 37K writes, 150K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.82 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1838 writes, 7373 keys, 1838 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s
                                           Interval WAL: 1838 writes, 718 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:50.627397+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:51.627522+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:52.627771+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:53.627919+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402178 data_alloc: 218103808 data_used: 4231735
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:54.628204+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:55.628388+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:56.628608+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:57.628788+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:58.629186+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402178 data_alloc: 218103808 data_used: 4231735
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:59.629355+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:00.629511+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:01.629670+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:02.629842+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:03.630024+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402178 data_alloc: 218103808 data_used: 4231735
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:04.630150+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:05.630293+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:06.630488+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.124912262s of 30.743280411s, submitted: 64
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 51372032 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:07.630672+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54a8e800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54a8e800 session 0x562b566bea80
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b52afc540
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b59f62a80
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b59f621c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 51372032 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:08.630820+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b54381a40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5499b400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5499b400 session 0x562b5250afc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b59f63880
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b53181c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b552fcfc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x225aa2a/0x241c000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442101 data_alloc: 218103808 data_used: 4231735
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:09.630994+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:10.631138+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:11.631360+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:12.631656+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:13.631824+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b566bee00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441205 data_alloc: 218103808 data_used: 4231735
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:14.632011+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x225aa63/0x241c000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5489d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5489d800 session 0x562b51d116c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b555c9dc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:15.632136+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b528556c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:16.632359+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6ef000/0x0/0x4ffc00000, data 0x225aa73/0x241d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:17.632541+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:18.632742+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6ef000/0x0/0x4ffc00000, data 0x225aa73/0x241d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479447 data_alloc: 218103808 data_used: 10342983
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:19.632910+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:20.633092+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:21.633244+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:22.633454+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6ef000/0x0/0x4ffc00000, data 0x225aa73/0x241d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:23.633618+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6ef000/0x0/0x4ffc00000, data 0x225aa73/0x241d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479447 data_alloc: 218103808 data_used: 10342983
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:24.633789+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:25.633928+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.493576050s of 19.711193085s, submitted: 33
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:26.634153+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:27.634319+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305594368 unmapped: 50372608 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:28.634444+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 47267840 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511373 data_alloc: 218103808 data_used: 10801735
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:29.634639+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec2ed000/0x0/0x4ffc00000, data 0x265ca73/0x281f000, compress 0x0/0x0/0x0, omap 0x4e414, meta 0x110a1bec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:30.634887+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:31.635190+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:32.635402+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:33.635567+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec2c9000/0x0/0x4ffc00000, data 0x2680a73/0x2843000, compress 0x0/0x0/0x0, omap 0x4e414, meta 0x110a1bec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:34.635720+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517921 data_alloc: 218103808 data_used: 11022919
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:35.635888+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:36.636274+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:37.636438+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:38.636575+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec2c7000/0x0/0x4ffc00000, data 0x2682a73/0x2845000, compress 0x0/0x0/0x0, omap 0x4e414, meta 0x110a1bec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 47226880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:39.636779+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515601 data_alloc: 218103808 data_used: 11035207
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.758734703s of 13.324839592s, submitted: 61
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 47226880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:40.636939+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 47226880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:41.637205+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b54380c40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b53180000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e5800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e5800 session 0x562b566bf340
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:42.637439+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:43.637624+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eca9e000/0x0/0x4ffc00000, data 0x1c839f1/0x1e43000, compress 0x0/0x0/0x0, omap 0x4e44f, meta 0x110a1bb1), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:44.637854+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410514 data_alloc: 218103808 data_used: 4231735
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eca9e000/0x0/0x4ffc00000, data 0x1c839f1/0x1e43000, compress 0x0/0x0/0x0, omap 0x4e44f, meta 0x110a1bb1), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:45.638025+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:46.638569+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eccc2000/0x0/0x4ffc00000, data 0x1c8a9f1/0x1e4a000, compress 0x0/0x0/0x0, omap 0x4e44f, meta 0x110a1bb1), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:47.639021+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:48.639201+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d7000 session 0x562b5511f500
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a0c00 session 0x562b54db8000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:49.639335+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53234c40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:50.639524+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:51.639710+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:52.640402+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:53.640925+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:54.641292+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:55.641500+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:56.641794+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:57.642111+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:58.642541+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:59.642706+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:00.642863+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:01.643182+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:02.643407+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:03.643601+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:04.643834+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:05.644020+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:06.644399+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:07.644624+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:08.644808+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:09.645003+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:10.645143+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:11.645296+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:12.645437+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:13.645617+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:14.645796+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:15.645962+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:16.646269+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:17.647214+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:18.647481+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:19.647758+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:20.648180+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:21.649160+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:22.649361+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.192054749s of 43.299339294s, submitted: 52
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b5250a700
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a4800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b54950fc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b55346540
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b5819e000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a0c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a0c00 session 0x562b51d108c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:23.649671+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:24.649880+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386400 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0fe000/0x0/0x4ffc00000, data 0x185097f/0x1a0e000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:25.650038+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0fe000/0x0/0x4ffc00000, data 0x185097f/0x1a0e000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:26.650492+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:27.650824+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:28.651279+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d7000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d7000 session 0x562b552fc8c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:29.651516+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385968 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b552fd6c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:30.651696+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b555c8700
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b54d2c700
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0fe000/0x0/0x4ffc00000, data 0x185097f/0x1a0e000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:31.651881+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 49995776 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a0c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:32.652004+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 49995776 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.878920555s of 10.003678322s, submitted: 50
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:33.652192+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:34.652375+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397315 data_alloc: 218103808 data_used: 4811123
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:35.652549+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:36.652789+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b55367500
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a0c00 session 0x562b55366a80
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:37.652983+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:38.653211+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:39.653426+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397183 data_alloc: 218103808 data_used: 4811123
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:40.653661+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:41.653791+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:42.654025+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555d7000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5cc12000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:43.654187+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:44.654371+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397443 data_alloc: 218103808 data_used: 4811139
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:45.654559+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:46.654850+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:47.654991+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:48.655145+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d7000 session 0x562b55366e00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.139425278s of 16.248653412s, submitted: 60
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5cc12000 session 0x562b552fd180
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:49.655317+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397311 data_alloc: 218103808 data_used: 4811139
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:50.655704+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:51.655963+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:52.656156+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:53.656330+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:54.656499+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397443 data_alloc: 218103808 data_used: 4811139
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b555c8c40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b53219500
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:55.656687+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306036736 unmapped: 49930240 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b555c9500
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:56.657146+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:57.657413+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:58.657634+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:59.657948+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:00.658263+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:01.658480+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:02.658632+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:03.658854+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:04.659035+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:05.659314+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:06.659516+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:07.659749+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:08.659979+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:09.660221+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:10.660427+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:11.660575+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:12.660789+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:13.661149+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:14.661318+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:15.661490+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306069504 unmapped: 49897472 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:16.661773+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306069504 unmapped: 49897472 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:17.661979+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306069504 unmapped: 49897472 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:18.662171+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306069504 unmapped: 49897472 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:19.662338+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:20.662499+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:21.662724+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:22.662889+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:23.663105+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:24.663233+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:25.663421+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:26.663717+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:27.663922+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:28.664221+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:29.664396+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:30.664535+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:31.664628+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:32.664741+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:33.664879+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:34.665012+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:35.665139+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:36.665300+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:37.665440+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:38.665577+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:39.665706+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306102272 unmapped: 49864704 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:40.665875+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306102272 unmapped: 49864704 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:41.666049+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306102272 unmapped: 49864704 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:42.666269+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306102272 unmapped: 49864704 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:43.666476+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306110464 unmapped: 49856512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:44.666643+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306110464 unmapped: 49856512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:45.666850+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306110464 unmapped: 49856512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:46.667039+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306110464 unmapped: 49856512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:47.667167+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:48.667330+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:49.667480+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:50.668539+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:51.668684+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:52.668874+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a0c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.132514954s of 64.175086975s, submitted: 21
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a0c00 session 0x562b54380c40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:53.669025+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b55346a80
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b552c6540
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b53234e00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5cc12000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5cc12000 session 0x562b54e0d880
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b555dc000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555dc000 session 0x562b550fddc0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:54.669256+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433682 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:55.669559+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5819f880
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b54407880
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b532196c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:56.670179+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5cc12000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b52946400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:57.670350+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:58.670520+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec90f000/0x0/0x4ffc00000, data 0x203e98f/0x21fd000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306143232 unmapped: 53493760 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:59.670765+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306143232 unmapped: 53493760 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491164 data_alloc: 218103808 data_used: 12978547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:00.670935+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306143232 unmapped: 53493760 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5cc12000 session 0x562b51d101c0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b52946400 session 0x562b53235c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec90f000/0x0/0x4ffc00000, data 0x203e98f/0x21fd000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:01.671158+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b52946400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306143232 unmapped: 53493760 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:02.671318+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b52946400 session 0x562b52afc540
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:03.671452+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:04.671629+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:05.671732+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:06.671983+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:07.672150+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:08.672283+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:09.672459+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:10.672590+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:11.672723+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:12.672867+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:13.673038+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:14.673163+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:15.673307+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:16.673598+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:17.673775+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:18.673962+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:19.674180+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:20.674335+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:21.674495+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:22.674633+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:23.674777+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:24.674944+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:25.675154+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:26.675379+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:27.675580+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:28.676353+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:29.676595+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:30.676786+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:31.676963+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:32.677148+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:33.677306+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:34.677484+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:35.677666+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:36.677910+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:37.678143+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:38.678371+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:39.678630+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:40.678868+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:41.679134+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:42.679336+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:43.679482+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:44.679716+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:45.679880+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:46.680128+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:47.680320+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:48.680487+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:49.680687+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:50.680835+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 58138624 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:51.680972+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 58138624 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:52.681130+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 58138624 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:53.681391+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:54.681597+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:55.681834+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:56.682145+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:57.682305+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:58.682554+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:59.683178+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:00.683760+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:01.684036+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:02.685128+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:03.685467+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:04.685905+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:05.686352+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:06.686752+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:07.687053+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:08.687247+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:09.687477+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:10.687655+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:11.687872+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:12.688185+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 79.888259888s of 80.062774658s, submitted: 22
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 58089472 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 313 ms_handle_reset con 0x562b528e1800 session 0x562b54e0c380
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:13.688342+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ed2e2000/0x0/0x4ffc00000, data 0x166956f/0x1828000, compress 0x0/0x0/0x0, omap 0x4ecc7, meta 0x110a1339), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 58089472 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:14.688706+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ed2e2000/0x0/0x4ffc00000, data 0x166956f/0x1828000, compress 0x0/0x0/0x0, omap 0x4ed3d, meta 0x110a12c3), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385265 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:15.688946+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:16.689185+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:17.689358+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ed2e2000/0x0/0x4ffc00000, data 0x166956f/0x1828000, compress 0x0/0x0/0x0, omap 0x4ed3d, meta 0x110a12c3), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:18.689537+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:19.689748+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385265 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:20.689897+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ed2e2000/0x0/0x4ffc00000, data 0x166956f/0x1828000, compress 0x0/0x0/0x0, omap 0x4ed3d, meta 0x110a12c3), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:21.690175+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:22.690368+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 58064896 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:23.690539+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 58064896 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:24.690768+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.884174347s of 11.914915085s, submitted: 19
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:25.690993+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:26.691224+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:27.691861+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:28.692018+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:29.692247+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:30.692374+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 58048512 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:31.692525+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 58048512 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:32.692759+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:33.693467+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 58048512 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:34.693840+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:35.694183+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:36.694469+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:37.694907+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:38.695225+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:39.695652+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:40.696013+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:41.696337+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:42.696592+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:43.696817+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 58032128 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:44.697060+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 58032128 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:45.697298+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 58032128 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:46.697586+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 58032128 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:47.697808+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:48.698152+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:49.698479+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:50.698731+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:51.698927+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:52.699213+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:53.699498+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:54.699707+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:55.699901+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:56.700348+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:57.700562+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:58.700884+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:59.701236+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:00.701509+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:01.701782+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:02.702007+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:03.702219+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 57999360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:04.702465+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 57999360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:05.702701+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 57999360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:06.702974+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301645824 unmapped: 57991168 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:07.703196+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:08.703441+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:09.703657+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:10.703831+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:11.704042+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:12.704297+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:13.704510+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:14.704701+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:15.704943+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:16.705241+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:17.705549+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:18.705729+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:19.705956+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 57966592 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:20.706270+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:21.706579+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:22.706863+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:23.707217+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:24.707562+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:25.707869+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:26.708200+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:27.708521+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:28.708813+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:29.709059+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:30.709357+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:31.709639+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:32.709941+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:33.710246+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:34.710449+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:35.710635+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:36.710935+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:37.711163+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:38.711910+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 57942016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:39.712276+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 57942016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:40.712561+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 57942016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:41.712806+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 57933824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:42.713364+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 57933824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:43.713629+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301711360 unmapped: 57925632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:44.713837+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301711360 unmapped: 57925632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:45.714050+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301711360 unmapped: 57925632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:46.714484+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:47.714700+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:48.714894+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:49.715109+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:50.715250+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:51.715438+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:52.715585+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 57901056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:53.715755+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 57901056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:54.716294+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:55.716512+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:56.716816+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:57.717133+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:58.717388+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:59.717658+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:00.717863+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:01.718046+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:02.718312+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:03.718529+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:04.718801+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:05.719140+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:06.719427+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:07.719595+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301760512 unmapped: 57876480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:08.719887+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301760512 unmapped: 57876480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:09.720190+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301760512 unmapped: 57876480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:10.720577+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 57868288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:11.720765+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 106.316764832s of 106.328948975s, submitted: 13
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 ms_handle_reset con 0x562b5296d800 session 0x562b54380700
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 ms_handle_reset con 0x562b538a8400 session 0x562b54380540
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:12.720984+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2e1000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4eece, meta 0x110a1132), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:13.721159+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:14.721345+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:15.721641+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387959 data_alloc: 234881024 data_used: 11555187
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:16.721913+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2e1000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4eece, meta 0x110a1132), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:17.722105+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 51855360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:18.722265+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 51855360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:19.722500+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2e1000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4eece, meta 0x110a1132), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 51855360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:20.722673+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 51855360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387959 data_alloc: 234881024 data_used: 11555187
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:21.722863+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5cc12000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.003993034s of 10.013246536s, submitted: 5
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 51838976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed2e1000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ef44, meta 0x110a10bc), peers [0,1] op hist [0,0,1])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 315 ms_handle_reset con 0x562b5cc12000 session 0x562b550fdc00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:22.723048+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 56983552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:23.723270+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 56983552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:24.723433+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 56983552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:25.724147+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 315 heartbeat osd_stat(store_statfs(0x4edf4c000/0x0/0x4ffc00000, data 0x9fcbde/0xbbe000, compress 0x0/0x0/0x0, omap 0x4f41b, meta 0x110a0be5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 56975360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3303249 data_alloc: 218103808 data_used: 151939
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:26.724310+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 316 ms_handle_reset con 0x562b528e1800 session 0x562b54407c00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:27.724573+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ee74a000/0x0/0x4ffc00000, data 0x1fe79b/0x3bf000, compress 0x0/0x0/0x0, omap 0x4fd71, meta 0x110a028f), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:28.724905+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ee74a000/0x0/0x4ffc00000, data 0x1fe79b/0x3bf000, compress 0x0/0x0/0x0, omap 0x4fd71, meta 0x110a028f), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:29.725229+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:30.725613+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262502 data_alloc: 218103808 data_used: 151923
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:31.725896+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b52946400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 316 ms_handle_reset con 0x562b52946400 session 0x562b5250aa80
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.022649765s of 10.216675758s, submitted: 92
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:32.726216+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ee748000/0x0/0x4ffc00000, data 0x200236/0x3c2000, compress 0x0/0x0/0x0, omap 0x4fec5, meta 0x110a013b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:33.726377+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:34.726549+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ee748000/0x0/0x4ffc00000, data 0x200236/0x3c2000, compress 0x0/0x0/0x0, omap 0x4fec5, meta 0x110a013b), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5296d800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 317 ms_handle_reset con 0x562b5296d800 session 0x562b566bee00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 317 handle_osd_map epochs [318,318], i have 318, src has [1,318]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:35.726694+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3267970 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:36.726938+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:37.727196+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 318 heartbeat osd_stat(store_statfs(0x4ee745000/0x0/0x4ffc00000, data 0x201cb5/0x3c5000, compress 0x0/0x0/0x0, omap 0x503f7, meta 0x1109fc09), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:38.727357+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 318 heartbeat osd_stat(store_statfs(0x4ee745000/0x0/0x4ffc00000, data 0x201cb5/0x3c5000, compress 0x0/0x0/0x0, omap 0x503f7, meta 0x1109fc09), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 318 ms_handle_reset con 0x562b538a8400 session 0x562b59f63180
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:39.727520+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:40.727711+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:41.727891+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:42.728088+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:43.728630+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:44.729058+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:45.729245+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:46.729425+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:47.729559+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:48.729801+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:49.730023+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:50.730187+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:51.730333+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:52.730480+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:53.730672+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:54.730865+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:55.731058+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:56.731283+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:57.731689+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:58.731857+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:59.732019+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:00.732209+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:01.732443+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:02.732604+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:03.732770+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:04.732913+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:05.733105+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:06.733325+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:07.733497+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:08.733651+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:09.733910+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:10.734171+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:11.734384+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:12.734547+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:13.734721+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:14.734874+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:15.735039+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:16.735246+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:17.735404+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:18.735606+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:19.735746+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:20.735850+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:21.735951+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:22.736105+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:23.736294+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:24.736423+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:25.736648+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:26.736807+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:27.736959+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:28.737159+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:29.737348+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:30.737491+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:31.737680+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:32.737919+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:33.738136+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:34.738332+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:35.738484+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:36.738713+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:37.738879+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:38.740599+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:39.740787+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:40.740927+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:41.741064+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:42.741245+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:43.741393+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:44.741581+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:45.741780+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:46.742006+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:47.742128+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:48.742262+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:49.742443+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:50.742615+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:51.742757+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:52.742911+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:53.743047+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:54.743249+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:55.743416+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:56.743620+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:57.743829+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:58.744233+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 56852480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:59.744394+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 56852480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:00.744518+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 56852480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:01.744685+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 56852480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:02.744881+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:03.745174+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:04.745343+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:05.745497+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:06.745748+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:07.745928+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:08.746147+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:09.746306+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:10.746429+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:11.746582+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:12.746736+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:13.746925+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:14.747118+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:15.747283+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 56827904 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:16.747503+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 56827904 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:17.747604+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 56819712 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:18.747839+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56811520 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:19.747989+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56803328 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:20.748113+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56803328 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:21.748273+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56803328 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:22.748438+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56803328 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:23.748574+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 56795136 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:24.748726+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 56786944 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:25.748857+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 56786944 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:26.749032+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 56786944 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:27.749801+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 56778752 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc0800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 116.213493347s of 116.241378784s, submitted: 33
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:28.749960+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 56778752 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 320 ms_handle_reset con 0x562b54dc0800 session 0x562b59fc2000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:29.750141+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 56762368 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a1000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 320 heartbeat osd_stat(store_statfs(0x4edf3d000/0x0/0x4ffc00000, data 0x20550e/0x3cd000, compress 0x0/0x0/0x0, omap 0x5097d, meta 0x1109f683), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:30.750304+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 65150976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 ms_handle_reset con 0x562b548a1000 session 0x562b59fc2380
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:31.750456+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 65126400 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:32.750642+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 65126400 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:33.750797+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 65126400 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:34.750947+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:35.751114+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:36.751329+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:37.751466+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:38.751792+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:39.752006+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:40.752228+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:41.752374+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:42.752545+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:43.752704+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:44.752835+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:45.753015+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-12-13T09:30:46.753297+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _finish_auth 0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:46.754630+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:47.753627+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:48.753872+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:49.754064+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:50.754266+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:51.754527+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:52.754708+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:53.754900+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:54.755136+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:55.755280+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:56.755478+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:57.755633+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:58.755803+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:59.756047+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:00.756267+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:01.756429+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:02.756597+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:03.756797+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 65077248 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:04.756951+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 65077248 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:05.757137+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 65077248 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:06.757345+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b5497f000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 65069056 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324731 data_alloc: 218103808 data_used: 156000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:07.758260+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 65060864 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:08.758416+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 65060864 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:09.758592+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 65060864 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.045394897s of 42.178756714s, submitted: 18
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 322 ms_handle_reset con 0x562b5497f000 session 0x562b54d6a540
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:10.758715+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:11.758877+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326012 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:12.759063+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 322 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0xa08b9c/0xbd2000, compress 0x0/0x0/0x0, omap 0x510d2, meta 0x1109ef2e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:13.759332+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 322 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0xa08b9c/0xbd2000, compress 0x0/0x0/0x0, omap 0x510d2, meta 0x1109ef2e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:14.759516+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:15.759690+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:16.759929+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 322 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0xa08b9c/0xbd2000, compress 0x0/0x0/0x0, omap 0x510d2, meta 0x1109ef2e), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326012 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:17.760124+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:18.760291+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:19.760481+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 65003520 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.889834404s of 10.003436089s, submitted: 19
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:20.760612+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:21.760845+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:22.761013+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:23.761187+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:24.761386+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:25.761575+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:26.761875+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:27.762008+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:28.762243+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:29.762451+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:30.762640+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64962560 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:31.762861+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:32.763036+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:33.763226+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:34.763444+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:35.763672+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:36.763967+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:37.764174+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:38.764340+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:39.764538+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:40.764708+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:41.764913+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:42.765146+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:43.765324+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:44.765523+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:45.765746+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:46.765927+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:47.766100+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:48.766291+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 153K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 912 writes, 2666 keys, 912 commit groups, 1.0 writes per commit group, ingest: 1.84 MB, 0.00 MB/s
                                           Interval WAL: 912 writes, 399 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread fragmentation_score=0.004241 took=0.000088s
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:49.766456+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:50.766620+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:51.766765+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 65175552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:52.766900+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 65175552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:53.767044+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 65167360 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:54.767224+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 65167360 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:55.767405+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 65159168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 65159168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:57.275958+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 65159168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:58.276174+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 65159168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:59.276347+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 65150976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:00.276514+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 65150976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:01.276749+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:02.277011+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:03.277191+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:04.277461+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:05.277644+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:06.277769+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:07.277947+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:08.278154+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:09.278293+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:10.278485+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:11.278638+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:12.278818+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:13.278990+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:14.279280+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:15.279460+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:16.279630+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:17.279881+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:18.280050+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:19.280251+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:20.280426+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:21.280694+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:22.280874+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:23.281120+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:24.281290+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:25.281485+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:26.281632+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:27.281869+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:28.282035+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:29.282591+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:30.283109+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:31.283485+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 65069056 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:32.284562+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 65069056 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:33.284961+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 65069056 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:34.285776+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:35.286517+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:36.287225+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:37.287567+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:38.287769+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:39.287930+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:40.288099+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:41.288329+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:42.288727+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:43.289018+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:44.289332+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:45.289525+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:46.290143+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 13 09:45:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2487582962' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:47.290313+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:48.290481+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:49.290808+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:50.291027+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:51.291305+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:52.291529+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:53.291680+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:54.291968+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:55.292195+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:56.292621+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:57.292864+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:58.293042+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:59.293209+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:00.293686+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:01.293922+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:02.294300+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:03.294603+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 65003520 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:04.294951+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64995328 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:05.295372+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64995328 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:06.295676+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:07.295915+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:08.296151+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:09.296423+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:10.296760+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:11.297001+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:12.297183+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:13.297452+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:14.297768+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:15.297981+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:16.298241+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:17.298506+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:18.298664+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:19.298848+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64962560 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:20.299046+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64962560 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:21.299284+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:22.299724+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 ms_handle_reset con 0x562b52941000 session 0x562b54406e00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b528e1800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:23.300011+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:24.300201+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:25.300398+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:26.300574+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 126.705642700s of 126.712867737s, submitted: 15
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:27.300754+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64897024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:28.300892+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64897024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:29.301145+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64897024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:30.301319+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64897024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:31.301541+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64888832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:32.301719+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64888832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:33.301954+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64888832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:34.302287+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 64839680 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:35.302423+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:36.302565+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:37.302769+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:38.302938+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:39.303160+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:40.303307+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:41.303634+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:42.303953+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:43.304237+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:44.304411+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:45.304791+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:46.305010+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:47.305332+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:48.305575+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:49.305817+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:50.306040+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:51.306207+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:52.306386+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:53.306700+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:54.306937+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:55.307162+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:56.307359+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:57.307596+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63750144 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:58.307761+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63750144 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:59.307982+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63750144 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:00.308169+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:01.308363+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:02.308567+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:03.308784+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:04.308930+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:05.309114+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:06.309326+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:07.309506+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:08.309660+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:09.309826+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:10.310005+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:11.310168+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:12.310332+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:13.310502+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:14.310705+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:15.310905+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 63717376 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:16.311122+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 63717376 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:17.311349+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 49.510643005s of 50.603378296s, submitted: 108
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328152 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 63717376 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:18.311540+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:19.311735+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:20.311935+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:21.312135+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [0,0,0,0,0,1])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:22.312328+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:23.312529+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:24.312715+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:25.312898+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:26.313047+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:27.313305+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 2.568279505s of 10.172575951s, submitted: 10
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328152 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:28.313463+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:29.313637+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:30.313822+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:31.313993+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:32.314148+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:33.314327+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:34.314503+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:35.314684+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:36.314872+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:37.315056+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63692800 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.948706627s of 10.349843025s, submitted: 6
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:38.315250+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63684608 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:39.315434+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63684608 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:40.315647+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63684608 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:41.315804+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63684608 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:42.315945+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328152 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:43.316148+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:44.316368+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:45.316571+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:46.316734+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:47.316985+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63668224 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:48.317198+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63660032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:49.317386+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63660032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:50.317593+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63660032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:51.317775+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63651840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:52.317916+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63651840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:53.318162+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63651840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:54.318428+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63651840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:55.318654+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63643648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:56.318928+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63643648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:57.319127+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63643648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:58.319336+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63643648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:59.319529+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:00.319741+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:01.320581+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:02.320710+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:03.320869+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:04.321022+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:05.321161+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:06.321309+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:07.321469+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:08.321612+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:09.321772+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:10.321976+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:11.322175+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63610880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:12.322413+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63610880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:13.322598+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:14.322735+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:15.322921+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:16.323089+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:17.323325+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:18.323577+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:19.323913+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63586304 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:20.324159+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63586304 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:21.324303+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:22.324587+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:23.324825+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:24.324994+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:25.325223+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:26.325421+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:27.325621+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 63569920 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:28.325830+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 63569920 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:29.325982+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 63569920 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:30.326162+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:31.326329+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:32.326504+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:33.326776+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:34.327015+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:35.327242+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:36.327434+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:37.327667+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:38.327813+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:39.327947+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:40.328166+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:41.328340+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:42.328557+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:43.328768+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:44.328955+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 63545344 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:45.329136+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 63545344 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:46.329385+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 63537152 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:47.329707+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:48.329904+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:49.330198+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:50.330365+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:51.330565+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:52.330746+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:53.330985+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:54.331311+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:55.331499+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:56.331667+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:57.331879+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:58.332123+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:59.332290+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:00.332494+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:01.332678+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:02.332838+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:03.333021+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:04.333201+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:05.333421+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:06.333593+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 63488000 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:07.333918+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:08.334183+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:09.334362+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:10.334620+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:11.334833+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:12.335033+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:13.335238+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:14.335446+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:15.335715+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:16.336015+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:17.336294+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 63463424 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:18.336463+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:19.336621+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:20.336795+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:21.337012+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:22.337175+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:23.337397+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:24.337572+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:25.337864+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:26.338010+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:27.338189+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:28.338375+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:29.338581+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:30.338764+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:31.338955+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:32.342249+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:33.342479+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:34.342664+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:35.342789+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:36.342948+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:37.343174+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:38.347866+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:39.348036+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 63406080 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:40.348324+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 63406080 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:41.348508+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 63406080 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:42.348657+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:43.348781+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:44.348964+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:45.349144+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:46.349348+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:47.349589+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:48.349789+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:49.349982+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:50.350152+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:51.350326+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:52.350482+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:53.350650+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:54.350810+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:55.351119+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:56.351304+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:57.351520+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:58.351731+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:59.351915+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:00.352163+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:01.352354+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:02.352505+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:03.352645+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 63340544 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:04.352796+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 63340544 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:05.353003+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 63340544 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:06.353150+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:07.353621+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:08.353766+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:09.353915+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:10.354127+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:11.354341+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:12.354543+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:13.354753+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:14.354906+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:15.355138+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:16.355344+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:17.355588+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:18.355737+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:19.355898+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:20.356046+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:21.356441+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304726016 unmapped: 63307776 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:22.356647+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:23.356828+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:24.356974+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:25.357817+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:26.358008+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:27.358270+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:28.358437+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:29.358589+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:30.358811+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:31.359110+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:32.359256+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 63275008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:33.359518+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 63275008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:34.359719+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 63275008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:35.359903+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 63275008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:36.360057+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:37.360278+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:38.360414+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:39.360617+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:40.360777+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:41.360954+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 63258624 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:42.361135+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 63258624 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:43.361342+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:44.361498+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:45.361637+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:46.361804+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:47.362017+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:48.362123+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:49.362246+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:50.362386+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:51.362563+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:52.362848+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:53.363041+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:54.363283+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:55.363481+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:56.363646+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:57.363852+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:58.364003+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:59.364198+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304807936 unmapped: 63225856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:00.364337+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:01.364510+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:02.364696+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:03.364844+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:04.365003+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:05.365203+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:06.365325+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:07.365492+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:08.365618+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:09.365777+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:10.365928+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:11.366115+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:12.366261+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:13.366487+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:14.366653+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:15.366960+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304840704 unmapped: 63193088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:16.367147+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304840704 unmapped: 63193088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:17.367391+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304840704 unmapped: 63193088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:18.367570+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304840704 unmapped: 63193088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:19.367748+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304848896 unmapped: 63184896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:20.367935+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304848896 unmapped: 63184896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:21.368158+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304857088 unmapped: 63176704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:22.368331+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304857088 unmapped: 63176704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:23.368562+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:24.368715+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:25.368905+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:26.369121+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:27.369366+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:28.369537+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:29.369748+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 63160320 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:30.369917+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 63160320 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:31.370061+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304898048 unmapped: 63135744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:32.370300+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304898048 unmapped: 63135744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:33.370489+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304898048 unmapped: 63135744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:34.370652+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304898048 unmapped: 63135744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:35.370842+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 63127552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:36.371035+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 63127552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:37.371323+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 63127552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:38.371477+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 63127552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:39.371644+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304914432 unmapped: 63119360 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:40.371859+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304914432 unmapped: 63119360 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:41.372041+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:42.372295+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:43.372487+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:44.372643+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:45.372831+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:46.373023+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:47.373248+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:48.373458+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:49.373635+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b52941000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304930816 unmapped: 63102976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:50.373853+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304930816 unmapped: 63102976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:51.374146+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 63094784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:52.374294+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 63094784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:53.374478+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 63094784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:54.374653+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 63094784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:55.374844+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 254.228393555s of 257.374877930s, submitted: 8
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 63070208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:56.375306+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 324 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0xa0c20b/0xbd8000, compress 0x0/0x0/0x0, omap 0x516f8, meta 0x1109e908), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 63070208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:57.375536+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 63070208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:58.375750+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 324 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0xa0c20b/0xbd8000, compress 0x0/0x0/0x0, omap 0x5176e, meta 0x1109e892), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331560 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 63062016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:59.375990+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 324 ms_handle_reset con 0x562b52941000 session 0x562b57c07340
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:00.376195+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:01.376384+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b538a8400
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:02.376590+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:03.376709+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331192 data_alloc: 218103808 data_used: 156129
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:04.376898+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 324 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0xa0c20b/0xbd8000, compress 0x0/0x0/0x0, omap 0x517e4, meta 0x1109e81c), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 63569920 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:05.377218+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.808628559s of 10.056917191s, submitted: 37
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:06.377408+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 63512576 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:07.377614+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 63512576 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:08.377815+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 326 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0xa0f857/0xbdd000, compress 0x0/0x0/0x0, omap 0x522af, meta 0x1109dd51), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3336626 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 63512576 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:09.378182+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:10.378348+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 326 heartbeat osd_stat(store_statfs(0x4edf2f000/0x0/0x4ffc00000, data 0xa0f857/0xbdd000, compress 0x0/0x0/0x0, omap 0x522af, meta 0x1109dd51), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:11.378527+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:12.378711+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 63488000 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:13.378888+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3295730 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:14.379149+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 326 heartbeat osd_stat(store_statfs(0x4ee72f000/0x0/0x4ffc00000, data 0x20f857/0x3dd000, compress 0x0/0x0/0x0, omap 0x52325, meta 0x1109dcdb), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 326 ms_handle_reset con 0x562b538a8400 session 0x562b56c08c40
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:15.379331+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 326 heartbeat osd_stat(store_statfs(0x4ee72f000/0x0/0x4ffc00000, data 0x20f857/0x3dd000, compress 0x0/0x0/0x0, omap 0x52538, meta 0x1109dac8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:16.379586+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:17.379820+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:18.379976+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3294910 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:19.380140+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 326 handle_osd_map epochs [326,327], i have 326, src has [1,327]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.690998077s of 13.717031479s, submitted: 34
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:20.380288+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 63463424 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b548a1000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:21.380438+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 63463424 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 327 heartbeat osd_stat(store_statfs(0x4ee72a000/0x0/0x4ffc00000, data 0x2112d6/0x3e0000, compress 0x0/0x0/0x0, omap 0x52708, meta 0x1109d8f8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:22.380620+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 327 heartbeat osd_stat(store_statfs(0x4ee72a000/0x0/0x4ffc00000, data 0x2112d6/0x3e0000, compress 0x0/0x0/0x0, omap 0x52708, meta 0x1109d8f8), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:23.380802+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 ms_handle_reset con 0x562b548a1000 session 0x562b56c08e00
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:24.380987+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:25.381246+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:26.381396+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:27.381603+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:28.381878+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:29.384282+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:30.384457+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:31.384624+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:32.384810+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:33.385033+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:34.385187+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:35.385365+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:36.385559+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:37.385901+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:38.386141+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:39.386271+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:40.386441+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:41.386689+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:42.386820+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:43.387005+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:44.387183+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:45.387375+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 63406080 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:46.387542+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:47.387797+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:48.387950+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:49.388238+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:50.388479+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:51.388614+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:52.388817+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:53.388994+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:54.389199+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:55.389409+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:56.389593+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:57.389842+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:58.390041+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:59.390266+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:00.390465+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:01.390739+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:02.391012+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:03.391220+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:04.391413+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:05.391587+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:06.391816+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:07.392108+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:08.392304+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:09.392516+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:10.392675+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:11.392877+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:12.393121+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:13.393327+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:14.393489+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:15.393678+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:16.393881+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:17.394054+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:18.394246+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:19.394418+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:20.394616+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:21.394801+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:22.394967+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:23.395119+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:24.395349+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:25.395517+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:26.395718+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:27.395903+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:28.396169+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: handle_auth_request added challenge on 0x562b54dc0800
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:29.396353+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _renew_subs
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 70.411071777s of 70.590003967s, submitted: 17
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 329 ms_handle_reset con 0x562b54dc0800 session 0x562b54fe8000
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:30.396538+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:31.396717+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 329 heartbeat osd_stat(store_statfs(0x4ee724000/0x0/0x4ffc00000, data 0x214a62/0x3e6000, compress 0x0/0x0/0x0, omap 0x52d43, meta 0x1109d2bd), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:32.396893+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:33.397064+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:34.397303+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3303952 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:35.397426+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 329 heartbeat osd_stat(store_statfs(0x4ee724000/0x0/0x4ffc00000, data 0x214a62/0x3e6000, compress 0x0/0x0/0x0, omap 0x52d43, meta 0x1109d2bd), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:36.397597+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:37.397813+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:38.397957+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 329 handle_osd_map epochs [329,330], i have 329, src has [1,330]
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:39.398115+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305782784 unmapped: 62251008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:40.398250+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305782784 unmapped: 62251008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:41.398476+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305782784 unmapped: 62251008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:42.398656+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:43.398789+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:44.398946+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:45.399103+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:46.399313+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:47.399553+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:48.399792+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:49.400039+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:50.400229+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:51.400365+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:52.400561+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:53.400929+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:54.401121+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:55.401278+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:56.401472+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:57.401697+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:58.401913+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:59.402089+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:00.402256+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:01.402507+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:02.402827+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:03.403000+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:04.403171+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:05.403326+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:06.403668+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:07.403947+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:08.404160+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:09.404341+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:10.404581+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:11.404798+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:12.405187+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:13.405312+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:14.405661+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:15.405810+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:16.405991+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:17.406323+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:18.406610+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:19.406800+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:20.406984+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:21.407156+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:22.407311+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:23.407586+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:24.407762+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:25.407997+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:26.408249+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:27.408459+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:28.408666+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:29.408870+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305856512 unmapped: 62177280 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:30.409047+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:31.409260+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:32.409440+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:33.409652+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:34.409926+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:35.410142+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:36.410896+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:37.411142+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:38.411541+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:39.411691+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:40.411954+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:41.412119+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:42.412399+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:43.412639+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:44.412860+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:45.413151+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:46.413331+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:47.413564+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:48.413733+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:49.413994+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 154K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 443 writes, 954 keys, 443 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
                                           Interval WAL: 443 writes, 192 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:50.414216+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:51.414421+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:52.414579+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:53.414823+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:54.414982+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:55.415115+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:56.415307+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:57.415652+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:58.415834+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:59.416007+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305889280 unmapped: 62144512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:00.416165+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305889280 unmapped: 62144512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:01.416321+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305889280 unmapped: 62144512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:02.416454+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305889280 unmapped: 62144512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:03.416644+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 62062592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:04.416788+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'config diff' '{prefix=config diff}'
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'config show' '{prefix=config show}'
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 62062592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:05.416948+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 62062592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:06.417157+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306028544 unmapped: 62005248 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'log dump' '{prefix=log dump}'
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:07.417464+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'perf dump' '{prefix=perf dump}'
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 61833216 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:08.417769+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'perf schema' '{prefix=perf schema}'
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 61833216 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:09.417953+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 61833216 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:10.418120+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 61833216 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:11.418282+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 61833216 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:12.418534+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 61833216 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:13.418732+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 61833216 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:14.418970+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 61833216 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:15.419145+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 61833216 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:16.419331+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306200576 unmapped: 61833216 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:17.419517+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306208768 unmapped: 61825024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:18.419670+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306208768 unmapped: 61825024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:19.419872+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306208768 unmapped: 61825024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:20.420183+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306208768 unmapped: 61825024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:21.420476+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306208768 unmapped: 61825024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:22.420650+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306208768 unmapped: 61825024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:23.420814+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:24.421000+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306208768 unmapped: 61825024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:25.421171+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306208768 unmapped: 61825024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:26.421315+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306216960 unmapped: 61816832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:27.421483+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306216960 unmapped: 61816832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:28.421640+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306216960 unmapped: 61816832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:29.421806+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306216960 unmapped: 61816832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:30.421962+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306216960 unmapped: 61816832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:31.422119+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:32.422264+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:33.422435+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:34.422634+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:35.422812+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:36.422994+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:37.423261+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:38.423419+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:39.423584+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:40.424297+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:41.424444+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:42.424606+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:43.425196+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:44.425401+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306225152 unmapped: 61808640 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:45.425557+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 61792256 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:46.425709+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 61792256 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:47.425899+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306249728 unmapped: 61784064 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:48.426138+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306257920 unmapped: 61775872 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:49.426304+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306257920 unmapped: 61775872 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:50.426586+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306257920 unmapped: 61775872 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:51.426944+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306266112 unmapped: 61767680 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:52.427330+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306266112 unmapped: 61767680 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:53.427521+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306266112 unmapped: 61767680 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:54.427799+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306266112 unmapped: 61767680 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:55.428113+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:56.428309+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:57.428530+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:58.428712+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:59.428882+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:00.429166+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:01.429310+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:02.429479+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:03.429621+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:04.429820+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:05.429959+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:06.430161+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:07.430374+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:08.430541+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:09.430711+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:10.430919+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:11.431164+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:12.431291+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:13.431845+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:14.432336+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306282496 unmapped: 61751296 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:15.432870+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306290688 unmapped: 61743104 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:16.433143+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306298880 unmapped: 61734912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:17.433407+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306298880 unmapped: 61734912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:18.433810+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306298880 unmapped: 61734912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:19.434018+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306298880 unmapped: 61734912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:20.434291+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306298880 unmapped: 61734912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:21.434597+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306298880 unmapped: 61734912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:22.434937+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306307072 unmapped: 61726720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:23.435366+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306307072 unmapped: 61726720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:24.435500+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306307072 unmapped: 61726720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:25.435680+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306307072 unmapped: 61726720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:26.435809+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306307072 unmapped: 61726720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:27.436007+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306315264 unmapped: 61718528 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:28.436188+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306315264 unmapped: 61718528 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:29.436430+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 179.428451538s of 179.474777222s, submitted: 31
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306315264 unmapped: 61718528 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:30.436689+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306315264 unmapped: 61718528 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:31.436925+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306315264 unmapped: 61718528 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:32.437257+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306315264 unmapped: 61718528 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:33.437413+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306323456 unmapped: 61710336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:34.437592+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306372608 unmapped: 61661184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:35.437750+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306372608 unmapped: 61661184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:36.438023+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306372608 unmapped: 61661184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:37.438282+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306380800 unmapped: 61652992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:38.438585+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306405376 unmapped: 61628416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:39.438761+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306405376 unmapped: 61628416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:40.438981+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306405376 unmapped: 61628416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:41.439294+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306405376 unmapped: 61628416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:42.439542+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306405376 unmapped: 61628416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:43.439818+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306405376 unmapped: 61628416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:44.439985+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306405376 unmapped: 61628416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:45.440251+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306405376 unmapped: 61628416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:46.440384+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306405376 unmapped: 61628416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:47.440543+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306405376 unmapped: 61628416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:48.440822+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 61612032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:49.441043+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 61612032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:50.441188+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 61612032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:51.441334+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 61612032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:52.441468+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 61612032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:53.441599+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 61612032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:54.441734+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 61612032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:55.441924+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 61612032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:56.442109+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 61612032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:57.442340+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 61612032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:58.442515+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306421760 unmapped: 61612032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:59.442711+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:00.442861+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:01.443013+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:02.443169+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:03.443360+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:04.443492+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:05.443646+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:06.443835+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:07.444030+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:08.444209+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:09.444397+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:10.444552+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:11.444725+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:12.444920+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:13.445142+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:14.445341+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:15.445520+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:16.445633+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:17.445852+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:18.446013+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:19.446217+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:20.446339+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:21.446553+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:22.446681+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:23.446852+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:24.446995+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:25.447194+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306429952 unmapped: 61603840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:26.447324+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306438144 unmapped: 61595648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:27.447533+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306438144 unmapped: 61595648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:28.447711+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306438144 unmapped: 61595648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:29.447986+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306438144 unmapped: 61595648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:30.448144+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306438144 unmapped: 61595648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:31.448310+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:32.448458+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:33.448640+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:34.448810+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:35.448974+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:36.449132+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:37.449359+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:38.449527+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:39.449795+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:40.449948+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:41.450212+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:42.450374+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:43.450528+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:44.450687+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:45.450883+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:46.451130+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306446336 unmapped: 61587456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:47.451355+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306462720 unmapped: 61571072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:48.451528+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306462720 unmapped: 61571072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:49.451699+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306462720 unmapped: 61571072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:50.451826+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306462720 unmapped: 61571072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:51.451987+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:52.452155+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:53.452306+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:54.452454+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:55.452669+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:56.452877+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:57.453150+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:58.453306+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:59.453552+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:00.453709+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:01.453860+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:02.454003+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:03.454159+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306470912 unmapped: 61562880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:04.454308+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'config diff' '{prefix=config diff}'
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'config show' '{prefix=config show}'
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306339840 unmapped: 61693952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:05.454537+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee723000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:37 compute-0 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306339840 unmapped: 61693952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306006 data_alloc: 218103808 data_used: 156094
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: tick
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_tickets
Dec 13 09:45:37 compute-0 ceph-osd[89221]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:06.454730+0000)
Dec 13 09:45:37 compute-0 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306192384 unmapped: 61841408 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:37 compute-0 ceph-osd[89221]: do_command 'log dump' '{prefix=log dump}'
Dec 13 09:45:37 compute-0 ceph-mon[76537]: pgmap v4352: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:37 compute-0 ceph-mon[76537]: from='client.23498 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:37 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3446947992' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 13 09:45:37 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2487582962' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 13 09:45:37 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23510 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:37 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} v 0)
Dec 13 09:45:37 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:45:37 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4353: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 13 09:45:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3456703974' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 09:45:38 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23514 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} v 0)
Dec 13 09:45:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:45:38 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 13 09:45:38 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3791044403' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 09:45:38 compute-0 ceph-mon[76537]: from='client.23504 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:38 compute-0 ceph-mon[76537]: from='client.23508 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:45:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3456703974' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 09:45:38 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:45:38 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3791044403' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 09:45:38 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23518 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 13 09:45:39 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824866799' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 09:45:39 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23522 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:39 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 09:45:39 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3650349559' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 09:45:39 compute-0 ceph-mon[76537]: from='client.23510 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:39 compute-0 ceph-mon[76537]: pgmap v4353: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:39 compute-0 ceph-mon[76537]: from='client.23514 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:39 compute-0 ceph-mon[76537]: from='client.23518 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:39 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2824866799' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 09:45:39 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3650349559' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 09:45:39 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23526 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:39 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4354: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:45:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:45:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:45:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:45:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 09:45:40 compute-0 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 09:45:40 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23530 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 13 09:45:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/514390721' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 09:45:40 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23532 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:40 compute-0 ceph-mon[76537]: from='client.23522 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:40 compute-0 ceph-mon[76537]: from='client.23526 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:40 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/514390721' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 09:45:40 compute-0 crontab[435394]: (root) LIST (root)
Dec 13 09:45:40 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 13 09:45:40 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1560599478' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 13 09:45:41 compute-0 nova_compute[248510]: 2025-12-13 09:45:41.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:41 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23536 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:41 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:45:41 compute-0 nova_compute[248510]: 2025-12-13 09:45:41.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:45:41 compute-0 ceph-mon[76537]: pgmap v4354: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:41 compute-0 ceph-mon[76537]: from='client.23530 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:41 compute-0 ceph-mon[76537]: from='client.23532 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:41 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1560599478' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 13 09:45:41 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23540 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:41 compute-0 ceph-mgr[76830]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 09:45:41 compute-0 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: 2025-12-13T09:45:41.815+0000 7f1f704e3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 09:45:41 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4355: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 13 09:45:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669622585' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 13 09:45:42 compute-0 nova_compute[248510]: 2025-12-13 09:45:42.408 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 13 09:45:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861990602' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:43.521267+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513790 data_alloc: 218103808 data_used: 169785
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:44.521471+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf03000/0x0/0x4ffc00000, data 0x708c71/0x8c9000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:45.521633+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:46.521815+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:47.522020+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf03000/0x0/0x4ffc00000, data 0x708c71/0x8c9000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.418393135s of 14.521083832s, submitted: 6
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182be2d500
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:48.522185+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516668 data_alloc: 218103808 data_used: 169785
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:49.522346+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:50.522556+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:51.522746+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:52.522907+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:53.523115+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3545088 data_alloc: 218103808 data_used: 4920137
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf02000/0x0/0x4ffc00000, data 0x708c94/0x8ca000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:54.523301+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:55.523456+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:56.523612+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:57.523821+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:58.523940+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3545088 data_alloc: 218103808 data_used: 4920137
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf02000/0x0/0x4ffc00000, data 0x708c94/0x8ca000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:12:59.524107+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf02000/0x0/0x4ffc00000, data 0x708c94/0x8ca000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:00.524342+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.942356110s of 12.959303856s, submitted: 10
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:01.524483+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338231296 unmapped: 58712064 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:02.524681+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 58040320 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:03.524818+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 58040320 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567036 data_alloc: 218103808 data_used: 4929353
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:04.524947+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338911232 unmapped: 58032128 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:05.525089+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa33c94/0xbf5000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:06.525229+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:07.525418+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:08.525575+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573854 data_alloc: 218103808 data_used: 4929353
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:09.525706+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbc9000/0x0/0x4ffc00000, data 0xa41c94/0xc03000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:10.525896+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.332371712s of 10.059819221s, submitted: 28
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:11.526041+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:12.526239+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:13.526375+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573870 data_alloc: 218103808 data_used: 4929353
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:14.526512+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbc9000/0x0/0x4ffc00000, data 0xa41c94/0xc03000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:15.526643+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbc9000/0x0/0x4ffc00000, data 0xa41c94/0xc03000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:16.526786+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 54501376 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6800 session 0x56182cef4e00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a09ac00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a09ac00 session 0x56182b8861c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:17.526977+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:18.527159+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602181 data_alloc: 218103808 data_used: 4929353
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:19.527371+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:20.527599+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:21.527832+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:22.528026+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:23.528214+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602181 data_alloc: 218103808 data_used: 4929353
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:24.528420+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:25.528659+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:26.529141+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:27.529336+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338264064 unmapped: 58679296 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:28.529469+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338264064 unmapped: 58679296 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602181 data_alloc: 218103808 data_used: 4929353
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.482851028s of 17.732896805s, submitted: 27
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:29.529609+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c60dc00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338272256 unmapped: 58671104 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:30.529738+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338436096 unmapped: 58507264 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:31.529853+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338436096 unmapped: 58507264 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:32.529995+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338444288 unmapped: 58499072 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:33.530134+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338452480 unmapped: 58490880 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627765 data_alloc: 218103808 data_used: 9248073
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:34.530335+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338460672 unmapped: 58482688 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:35.530464+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:36.531160+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:37.532775+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:38.532906+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627981 data_alloc: 218103808 data_used: 9248073
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:39.533036+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.794153214s of 10.743078232s, submitted: 91
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:40.533124+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb2dc000/0x0/0x4ffc00000, data 0x1325cf6/0x14e8000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341172224 unmapped: 55771136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:41.533223+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:42.533403+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:43.533531+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669113 data_alloc: 218103808 data_used: 9273673
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1c6000/0x0/0x4ffc00000, data 0x1443cf6/0x1606000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:44.533672+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:45.533842+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1c6000/0x0/0x4ffc00000, data 0x1443cf6/0x1606000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:46.533987+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:47.534184+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1a5000/0x0/0x4ffc00000, data 0x1464cf6/0x1627000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:48.534375+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3667489 data_alloc: 218103808 data_used: 9277769
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:49.534545+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:50.534690+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1a5000/0x0/0x4ffc00000, data 0x1464cf6/0x1627000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:51.534819+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:52.534966+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:53.535114+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1a5000/0x0/0x4ffc00000, data 0x1464cf6/0x1627000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3667745 data_alloc: 218103808 data_used: 9285961
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:55.064491+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:56.064663+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.866159439s of 16.156522751s, submitted: 69
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c60dc00 session 0x56182b709a40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x561830d47340
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:57.064825+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:58.065002+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eba06000/0x0/0x4ffc00000, data 0xa42c94/0xc04000, compress 0x0/0x0/0x0, omap 0x74d01, meta 0x133bb2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eba06000/0x0/0x4ffc00000, data 0xa42c94/0xc04000, compress 0x0/0x0/0x0, omap 0x74d01, meta 0x133bb2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:59.065144+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3581947 data_alloc: 218103808 data_used: 4929353
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:00.065289+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:01.065440+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182bf65dc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x561831c20700
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561829b40c40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:02.065602+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c84/0x428000, compress 0x0/0x0/0x0, omap 0x750b9, meta 0x133baf47), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:03.065755+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:04.065958+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:05.066152+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:06.066297+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:07.066498+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:08.066750+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:09.066912+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:10.067096+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:11.067258+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:12.067370+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:13.067510+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:14.067636+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:15.067814+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:16.068022+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:17.068164+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:18.068376+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:19.068553+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:20.068840+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:21.069041+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:22.069219+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:23.069400+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:24.069594+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:25.069763+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:26.069948+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:27.070150+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:28.070373+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:29.070518+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:30.070667+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:31.070868+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:32.071025+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:33.071176+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:34.071419+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:35.071604+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:36.071751+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:37.071926+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:38.072190+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:39.072481+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:40.072638+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:41.072873+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:42.073036+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:43.073232+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:44.073402+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:45.073593+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:46.073776+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:47.073954+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:48.074189+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 62169088 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:49.074345+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 62169088 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:50.074490+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 62169088 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:51.074650+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 62169088 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:52.074836+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 62160896 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:53.074982+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 62160896 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:54.075156+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 62160896 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:55.075312+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 62160896 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:56.075494+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334790656 unmapped: 62152704 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a09ac00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a09ac00 session 0x56182bb83500
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182cef41c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182ba7cc40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182c330380
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:57.075649+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 61.001258850s of 61.117053986s, submitted: 73
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 60293120 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182a1fe540
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:58.075815+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:59.075933+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576269 data_alloc: 218103808 data_used: 177844
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:00.076120+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:01.076277+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:02.076423+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:03.077166+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:04.077954+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576269 data_alloc: 218103808 data_used: 177844
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:05.078315+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:06.078512+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:07.078681+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:08.078891+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:09.079065+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576269 data_alloc: 218103808 data_used: 177844
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:10.079295+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6800 session 0x561830d46c40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:11.079507+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:12.079698+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335052800 unmapped: 65560576 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182c330e00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:13.079934+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335052800 unmapped: 65560576 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182ba7c8c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.469144821s of 16.633630753s, submitted: 17
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182be2d6c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:14.080141+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335060992 unmapped: 65552384 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3579208 data_alloc: 218103808 data_used: 177844
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:15.080275+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335020032 unmapped: 65593344 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:16.080390+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:17.080631+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:18.080892+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:19.081108+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652168 data_alloc: 234881024 data_used: 12416692
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:20.081236+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:21.081499+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:22.081700+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:23.081956+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:24.082883+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652168 data_alloc: 234881024 data_used: 12416692
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:25.083178+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:26.083375+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.341166496s of 12.350997925s, submitted: 5
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341409792 unmapped: 59203584 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:27.083573+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 58343424 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb079000/0x0/0x4ffc00000, data 0x1591c94/0x1753000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:28.083761+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 58343424 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:29.083934+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716928 data_alloc: 234881024 data_used: 14595764
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:30.084218+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:31.084409+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:32.084579+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:33.084725+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:34.084860+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717184 data_alloc: 234881024 data_used: 14603956
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:35.085053+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:36.085304+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:37.085470+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:38.085706+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:39.085866+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717440 data_alloc: 234881024 data_used: 14612148
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:40.086041+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:41.086208+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:42.086375+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:43.086543+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:44.086727+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343646208 unmapped: 56967168 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717440 data_alloc: 234881024 data_used: 14612148
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:45.086866+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182b709500
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b413c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b413c00 session 0x56182bfb3880
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182b855340
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343646208 unmapped: 56967168 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561829cac1c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.109582901s of 19.573698044s, submitted: 104
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561830d47c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182b431180
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6400 session 0x56182bc5ec40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:46.087017+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x561829b65340
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561829b65880
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343670784 unmapped: 56942592 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:47.087309+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:48.089721+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:49.089918+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730452 data_alloc: 234881024 data_used: 14612148
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:50.090748+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:51.090922+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:52.091208+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:53.091766+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:54.091918+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730452 data_alloc: 234881024 data_used: 14612148
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:55.092088+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:56.092190+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:57.092530+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.862773895s of 11.917412758s, submitted: 10
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343834624 unmapped: 56778752 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182bb82380
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:58.092719+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadd5000/0x0/0x4ffc00000, data 0x1833cc7/0x19f7000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343842816 unmapped: 56770560 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:59.092885+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344391680 unmapped: 56221696 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740334 data_alloc: 234881024 data_used: 15198916
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:00.093255+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344391680 unmapped: 56221696 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:01.093394+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadd5000/0x0/0x4ffc00000, data 0x1833cc7/0x19f7000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344391680 unmapped: 56221696 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:02.093519+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:03.093676+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:04.093783+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740334 data_alloc: 234881024 data_used: 15198916
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:05.093917+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:06.094053+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadd5000/0x0/0x4ffc00000, data 0x1833cc7/0x19f7000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:07.094240+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:08.094770+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 56205312 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:09.094910+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 56205312 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740334 data_alloc: 234881024 data_used: 15198916
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.189497948s of 12.216222763s, submitted: 13
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:10.095124+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346177536 unmapped: 54435840 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:11.095260+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 54173696 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:12.095423+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea779000/0x0/0x4ffc00000, data 0x1e80cc7/0x2044000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:13.095682+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:14.095877+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791218 data_alloc: 234881024 data_used: 15420612
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:15.096050+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:16.096247+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:17.096492+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:18.096831+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:19.097186+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791234 data_alloc: 234881024 data_used: 15420612
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:20.097298+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:21.097465+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:22.097845+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:23.098051+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:24.098271+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792386 data_alloc: 234881024 data_used: 15502532
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:25.098483+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:26.098689+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:27.098871+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:28.099737+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182be2d340
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.931962967s of 18.925085068s, submitted: 81
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182b1e2e00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:29.099937+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788494 data_alloc: 234881024 data_used: 15502532
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:30.100057+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:31.100235+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182cef5340
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 53583872 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:32.100399+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 53583872 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:33.100504+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 53583872 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafb2000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x758c9, meta 0x133ba737), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:34.101285+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 53575680 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3724208 data_alloc: 234881024 data_used: 14681780
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:35.101441+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 53575680 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182ba7c540
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6800 session 0x561829b41340
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:36.101631+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 59138048 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:37.101737+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafb2000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x758c9, meta 0x133ba737), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182b708700
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c84/0x428000, compress 0x0/0x0/0x0, omap 0x75c7a, meta 0x133ba386), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:38.101888+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:39.102015+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:40.102197+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:41.102350+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:42.102473+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:43.102608+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:44.102777+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:45.102960+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:46.103122+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:47.103331+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:48.103584+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:49.103765+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:50.103942+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:51.104151+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b081800 session 0x56182c330fc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618299ef400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:52.104335+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: mgrc ms_handle_reset ms_handle_reset con 0x56182a1c1400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 09:45:42 compute-0 ceph-osd[88086]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: get_auth_request con 0x56182cc39800 auth_method 0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: mgrc handle_mgr_configure stats_period=5
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:53.104496+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b415c00 session 0x56182bd53a40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829362c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a012800 session 0x56182bb836c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b415c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:54.104669+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:55.104984+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:56.105283+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:57.105464+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:58.105692+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:59.105891+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:00.106057+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:01.106260+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:02.106553+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:03.106804+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:04.107024+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:05.107243+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:06.107534+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:07.107716+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:08.108046+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:09.108348+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 59113472 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:10.108550+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 59113472 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:11.108764+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 59113472 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:12.108924+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 59105280 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:13.109111+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182ba7c8c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x561831c216c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561829cac8c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182c330380
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 59105280 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:14.109304+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.973899841s of 45.842048645s, submitted: 73
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 59105280 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527821 data_alloc: 218103808 data_used: 181905
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:15.109493+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182be2d180
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 62676992 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:16.109668+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182b855340
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23c8a/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 62676992 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:17.109852+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 62676992 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:18.110117+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 62676992 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:19.110285+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 62668800 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3581350 data_alloc: 218103808 data_used: 185966
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:20.110463+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23cc3/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 62668800 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:21.110618+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 62668800 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:22.110827+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 62660608 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:23.111015+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23cc3/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 62357504 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:24.111213+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620518 data_alloc: 218103808 data_used: 6809198
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:25.111371+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:26.111550+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:27.111793+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:28.112010+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23cc3/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:29.112172+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620518 data_alloc: 218103808 data_used: 6809198
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:30.112332+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:31.112451+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:32.113304+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:33.113534+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.478496552s of 19.308280945s, submitted: 36
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23cc3/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [0,0,1,0,1])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:34.113681+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 61358080 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:35.113840+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 61915136 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3668324 data_alloc: 218103808 data_used: 7542382
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:36.114111+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:37.114294+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:38.114494+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb363000/0x0/0x4ffc00000, data 0x1299cc3/0x145a000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:39.114632+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:40.114758+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681236 data_alloc: 218103808 data_used: 7419502
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:41.114907+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:42.115057+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:43.115240+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0x12b8cc3/0x1479000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:44.115385+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:45.115586+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674556 data_alloc: 218103808 data_used: 7423598
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0x12b8cc3/0x1479000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:46.115788+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0x12b8cc3/0x1479000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.386922836s of 12.756870270s, submitted: 92
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:47.115938+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:48.116300+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:49.116534+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561829b65880
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182a1fe540
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182cef41c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c4400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c4400 session 0x56182b0b2000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:50.116654+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561831c20700
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 70705152 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3751276 data_alloc: 218103808 data_used: 7423598
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x561829167dc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bd52000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182bb83a40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcbe000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcbe000 session 0x56182b0fdc00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:51.116827+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 70705152 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea739000/0x0/0x4ffc00000, data 0x1ed1cd2/0x2093000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:52.117667+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:53.117820+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:54.118035+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:55.118190+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3751276 data_alloc: 218103808 data_used: 7423598
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:56.118446+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea739000/0x0/0x4ffc00000, data 0x1ed1cd2/0x2093000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:57.118608+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561829b65500
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:58.126285+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182bb708c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x561829bf2c40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.984330177s of 12.172765732s, submitted: 21
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182be2c000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:59.126452+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:00.126648+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 70557696 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753806 data_alloc: 218103808 data_used: 7520878
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:01.126839+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:02.127034+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea738000/0x0/0x4ffc00000, data 0x1ed1ce2/0x2094000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:03.127150+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:04.127333+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:05.127632+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3819854 data_alloc: 234881024 data_used: 18727534
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:06.127842+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:07.128033+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea738000/0x0/0x4ffc00000, data 0x1ed1ce2/0x2094000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:08.128263+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:09.128539+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:10.128736+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3819262 data_alloc: 234881024 data_used: 18731630
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.963133812s of 11.972195625s, submitted: 3
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:11.128888+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 59408384 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:12.129064+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e91000/0x0/0x4ffc00000, data 0x276ace2/0x292d000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 59326464 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:13.129272+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 59858944 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:14.129426+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e57000/0x0/0x4ffc00000, data 0x27a9ce2/0x296c000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:15.129598+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3884820 data_alloc: 234881024 data_used: 19830382
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:16.129823+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e57000/0x0/0x4ffc00000, data 0x27a9ce2/0x296c000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:17.130063+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:18.130286+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e57000/0x0/0x4ffc00000, data 0x27a9ce2/0x296c000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e5d000/0x0/0x4ffc00000, data 0x27acce2/0x296f000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:19.130491+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829499000 session 0x56182bb11880
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:20.130661+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879500 data_alloc: 234881024 data_used: 19830382
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e5d000/0x0/0x4ffc00000, data 0x27acce2/0x296f000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:21.130805+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e5d000/0x0/0x4ffc00000, data 0x27acce2/0x296f000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.312377930s of 10.672085762s, submitted: 111
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:22.130943+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e5d000/0x0/0x4ffc00000, data 0x27acce2/0x296f000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0c00 session 0x561829b41a40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182ba4bc00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:23.131174+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:24.131340+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182bc5fdc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182bd53c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:25.131464+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bd4d340
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689771 data_alloc: 218103808 data_used: 7411310
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:26.131619+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:27.131790+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:28.132129+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa8000/0x0/0x4ffc00000, data 0x12cbcc3/0x148c000, compress 0x0/0x0/0x0, omap 0x76139, meta 0x133b9ec7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:29.132329+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:30.132497+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689771 data_alloc: 218103808 data_used: 7411310
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:31.132668+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa8000/0x0/0x4ffc00000, data 0x12cbcc3/0x148c000, compress 0x0/0x0/0x0, omap 0x76139, meta 0x133b9ec7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:32.132835+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.554950714s of 10.802752495s, submitted: 39
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182b709a40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c602400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182b0fc8c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:33.133007+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:34.133147+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:35.133306+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:36.133483+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:37.133705+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:38.134007+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:39.134235+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:40.134421+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:41.134621+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:42.134807+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:43.135022+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:44.135167+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:45.137896+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:46.138115+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:47.138260+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:48.138453+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:49.138665+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:50.138827+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:51.139038+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:52.139248+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:53.139487+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:54.332702+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:55.332863+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:56.333025+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:57.333144+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:58.333339+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:59.333490+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:00.333677+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:01.333855+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:02.334707+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:03.334959+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:04.335337+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:05.335504+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:06.335650+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:07.336360+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:08.336803+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:09.337413+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:10.338162+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:11.338382+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:12.339455+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 65789952 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:13.340402+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 65789952 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:14.341213+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.951286316s of 42.007965088s, submitted: 30
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 65789952 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:15.341366+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553079 data_alloc: 218103808 data_used: 189964
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:16.342036+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:17.342715+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:18.343003+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:19.343191+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:20.343523+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553079 data_alloc: 218103808 data_used: 189964
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 65757184 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:21.343950+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 65757184 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:22.344374+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 65757184 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182cef4540
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182cef4a80
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x561831c20c40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182a000e00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:23.344624+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 65757184 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:24.344827+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.872159004s of 10.219871521s, submitted: 29
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 62029824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182bb10380
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182b1e3c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182b0fddc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:25.345032+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611703 data_alloc: 218103808 data_used: 189980
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x561829acefc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347250688 unmapped: 69115904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182bc12fc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9f5000/0x0/0x4ffc00000, data 0xc16c71/0xdd7000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:26.345240+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347250688 unmapped: 69115904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:27.345450+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347250688 unmapped: 69115904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:28.345679+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 69107712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:29.345870+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9f5000/0x0/0x4ffc00000, data 0xc16c71/0xdd7000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 69107712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:30.346133+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9f5000/0x0/0x4ffc00000, data 0xc16c71/0xdd7000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611703 data_alloc: 218103808 data_used: 189980
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 69107712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:31.346275+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bd4c700
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 70713344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9f5000/0x0/0x4ffc00000, data 0xc16c71/0xdd7000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:32.346433+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 70713344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:33.346602+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 70443008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:34.346744+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:35.346889+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675816 data_alloc: 234881024 data_used: 10259996
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9d1000/0x0/0x4ffc00000, data 0xc3ac71/0xdfb000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:36.347038+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9d1000/0x0/0x4ffc00000, data 0xc3ac71/0xdfb000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:37.347330+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:38.347560+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:39.347692+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:40.347978+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675816 data_alloc: 234881024 data_used: 10259996
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:41.348122+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:42.348324+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9d1000/0x0/0x4ffc00000, data 0xc3ac71/0xdfb000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9d1000/0x0/0x4ffc00000, data 0xc3ac71/0xdfb000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:43.348496+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.931827545s of 19.378744125s, submitted: 11
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:44.348630+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 62586880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:45.348818+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759650 data_alloc: 234881024 data_used: 11513372
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da5000/0x0/0x4ffc00000, data 0x16c6c71/0x1887000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:46.349020+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:47.349216+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:48.349394+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da5000/0x0/0x4ffc00000, data 0x16c6c71/0x1887000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:49.349565+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da5000/0x0/0x4ffc00000, data 0x16c6c71/0x1887000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:50.349704+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759650 data_alloc: 234881024 data_used: 11513372
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:51.349852+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:52.350054+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:53.350212+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:54.350366+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:55.350518+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0x16c8c71/0x1889000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753666 data_alloc: 234881024 data_used: 11513372
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:56.350672+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.650314331s of 12.895412445s, submitted: 88
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:57.350862+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da2000/0x0/0x4ffc00000, data 0x16c9c71/0x188a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:58.351161+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:59.351319+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:00.351517+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753866 data_alloc: 234881024 data_used: 11513372
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da2000/0x0/0x4ffc00000, data 0x16c9c71/0x188a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:01.351707+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182cef5c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182a1ffdc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:02.351839+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:03.351998+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:04.352132+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:05.352302+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3794327 data_alloc: 234881024 data_used: 11513372
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:06.352501+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f2000/0x0/0x4ffc00000, data 0x1b78cd3/0x1d3a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:07.352677+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f2000/0x0/0x4ffc00000, data 0x1b78cd3/0x1d3a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:08.352880+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f2000/0x0/0x4ffc00000, data 0x1b78cd3/0x1d3a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:09.353066+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:10.353288+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3794327 data_alloc: 234881024 data_used: 11513372
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.721897125s of 14.075347900s, submitted: 35
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182c08e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:11.353514+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c611800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:12.353628+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:13.353806+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:14.353995+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f1000/0x0/0x4ffc00000, data 0x1b78cf6/0x1d3b000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:15.354180+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f1000/0x0/0x4ffc00000, data 0x1b78cf6/0x1d3b000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817488 data_alloc: 234881024 data_used: 15285788
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:16.354350+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:17.354491+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:18.354661+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:19.354829+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:20.354993+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f1000/0x0/0x4ffc00000, data 0x1b78cf6/0x1d3b000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817872 data_alloc: 234881024 data_used: 15289884
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:21.355242+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 62488576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:22.355476+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.944444656s of 11.960687637s, submitted: 7
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 61833216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:23.355608+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62734336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:24.355733+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7ef4000/0x0/0x4ffc00000, data 0x23c6cf6/0x2589000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x156f98bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:25.355926+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7ef4000/0x0/0x4ffc00000, data 0x23c6cf6/0x2589000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x156f98bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3875166 data_alloc: 234881024 data_used: 15687196
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:26.356157+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:27.356311+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:28.356541+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:29.356683+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355803136 unmapped: 60563456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:30.356798+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7f00000/0x0/0x4ffc00000, data 0x23c9cf6/0x258c000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x156f98bb), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869110 data_alloc: 234881024 data_used: 15691292
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355803136 unmapped: 60563456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:31.356937+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 60555264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:32.357171+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 60555264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:33.357473+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.615018845s of 11.088137627s, submitted: 103
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c611800 session 0x561830d46c40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 60555264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182d6b0400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:34.357620+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182d6b0400 session 0x56182cef4380
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 60530688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:35.358260+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765060 data_alloc: 234881024 data_used: 11513372
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 60530688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:36.358373+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8c00000/0x0/0x4ffc00000, data 0x16c9c71/0x188a000, compress 0x0/0x0/0x0, omap 0x76b7d, meta 0x156f9483), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 60530688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:37.358561+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182b708a80
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182fe8b180
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 60530688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:38.358806+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x5618293988c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:39.359162+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:40.359320+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:41.359487+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:42.359665+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:43.359835+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:44.359977+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:45.360159+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:46.360343+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:47.360519+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:48.360752+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:49.360899+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:50.361123+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:51.361330+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:52.361506+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:53.361680+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:54.361854+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:55.362019+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 64634880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:56.362186+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 64634880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:57.362340+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:58.362522+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:59.362643+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:00.362784+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:01.363288+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:02.363415+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:03.363559+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:04.363723+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:05.363890+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:06.364131+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:07.364350+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:08.364628+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:09.364771+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:10.364993+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:11.365229+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:12.365407+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:13.365513+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:14.365717+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182b854000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182b431c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182ba7c540
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182fe8a8c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.068725586s of 41.208274841s, submitted: 78
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 64077824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x561829ace540
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:15.365881+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640912 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:16.366144+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:17.366272+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:18.366402+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:19.366583+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ee000/0x0/0x4ffc00000, data 0xbdec61/0xd9e000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:20.366752+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640912 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:21.366919+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:22.367119+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:23.367345+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182bfb9180
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ee000/0x0/0x4ffc00000, data 0xbdec61/0xd9e000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 64528384 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:24.367496+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 64528384 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c611800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:25.367638+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3701496 data_alloc: 234881024 data_used: 10021255
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:26.367836+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ca000/0x0/0x4ffc00000, data 0xc02c61/0xdc2000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:27.367967+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:28.368153+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:29.368377+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:30.368643+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3701496 data_alloc: 234881024 data_used: 10021255
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:31.368802+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ca000/0x0/0x4ffc00000, data 0xc02c61/0xdc2000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:32.368976+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:33.369125+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ca000/0x0/0x4ffc00000, data 0xc02c61/0xdc2000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:34.369285+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:35.369574+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702776 data_alloc: 234881024 data_used: 10060167
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.024909973s of 21.150117874s, submitted: 14
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:36.369703+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 63397888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:37.369924+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e94a6000/0x0/0x4ffc00000, data 0xe26c61/0xfe6000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [0,0,0,0,0,0,0,1,0,27])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62332928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:38.370146+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 62021632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:39.370307+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9139000/0x0/0x4ffc00000, data 0x118bc61/0x134b000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9139000/0x0/0x4ffc00000, data 0x118bc61/0x134b000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:40.370468+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6601.8 total, 600.0 interval
                                           Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2279 writes, 9430 keys, 2279 commit groups, 1.0 writes per commit group, ingest: 11.32 MB, 0.02 MB/s
                                           Interval WAL: 2279 writes, 877 syncs, 2.60 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768766 data_alloc: 234881024 data_used: 11496839
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:41.370700+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:42.370871+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:43.371041+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:44.371160+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:45.371311+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768782 data_alloc: 234881024 data_used: 11496839
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:46.371466+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:47.371608+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:48.371757+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:49.371984+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:50.372173+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768782 data_alloc: 234881024 data_used: 11496839
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:51.372288+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:52.372427+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:53.372538+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:54.372695+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:55.372867+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768782 data_alloc: 234881024 data_used: 11496839
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:56.373049+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:57.373290+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:58.373486+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:59.373648+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:00.373778+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769038 data_alloc: 234881024 data_used: 11505031
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:01.373992+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:02.374218+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:03.374387+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:04.374526+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:05.374698+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 60948480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769038 data_alloc: 234881024 data_used: 11505031
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:06.374827+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 60948480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:07.374989+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 60948480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:08.375125+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 60948480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829f62c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.785942078s of 32.167488098s, submitted: 70
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829f62c00 session 0x561831c20540
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:09.375384+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61431808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:10.375558+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61431808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:11.375767+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3825944 data_alloc: 234881024 data_used: 11505031
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61431808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:12.375998+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8846000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8846000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:13.376207+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bfb9500
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:14.376373+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182b855880
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:15.376546+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182b887dc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x561828f04fc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:16.376725+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826205 data_alloc: 234881024 data_used: 11505031
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:17.376888+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8847000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:18.377092+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:19.377253+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:20.377388+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:21.377555+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3873569 data_alloc: 234881024 data_used: 19325831
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8847000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:22.377729+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:23.377908+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:24.378051+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:25.378288+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:26.378449+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8847000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3873569 data_alloc: 234881024 data_used: 19325831
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356892672 unmapped: 59473920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:27.378601+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356892672 unmapped: 59473920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.664230347s of 19.858476639s, submitted: 27
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:28.378898+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 55656448 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8691000/0x0/0x4ffc00000, data 0x1c3bc61/0x1dfb000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:29.379026+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 55631872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:30.379162+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:31.379386+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3930201 data_alloc: 234881024 data_used: 20498311
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:32.379576+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:33.379783+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7fe5000/0x0/0x4ffc00000, data 0x22e7c61/0x24a7000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:34.379952+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:35.380154+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:36.380298+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3929153 data_alloc: 234881024 data_used: 20498311
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 55345152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:37.380480+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7fc6000/0x0/0x4ffc00000, data 0x2306c61/0x24c6000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 55345152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7fc6000/0x0/0x4ffc00000, data 0x2306c61/0x24c6000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:38.380857+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 55345152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7fc6000/0x0/0x4ffc00000, data 0x2306c61/0x24c6000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:39.381025+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 55345152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:40.381214+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 55328768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:41.381497+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3929665 data_alloc: 234881024 data_used: 20559751
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 55328768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182bd53180
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.727113724s of 13.411382675s, submitted: 117
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x561829399c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bb83dc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:42.381694+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:43.381848+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:44.381988+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x11a0c61/0x1360000, compress 0x0/0x0/0x0, omap 0x76e29, meta 0x156f91d7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:45.382393+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x11a0c61/0x1360000, compress 0x0/0x0/0x0, omap 0x76e29, meta 0x156f91d7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:46.382876+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3776487 data_alloc: 234881024 data_used: 11566471
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:47.383691+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:48.384187+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x11a0c61/0x1360000, compress 0x0/0x0/0x0, omap 0x76e29, meta 0x156f91d7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c611800 session 0x561829398a80
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bb82540
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:49.384383+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00f800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182c331880
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:50.384767+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:51.385107+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:52.385355+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:53.385733+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:54.385989+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:55.386128+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:56.386350+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:57.386533+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:58.386807+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:59.387001+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:00.387195+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:01.387426+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:02.387588+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:03.387749+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:04.387997+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:05.388187+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:06.389015+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:07.389221+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:08.389423+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:09.390290+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:10.391013+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:11.391124+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:12.391286+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:13.391446+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:14.391659+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:15.391807+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:16.391996+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:17.392145+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:18.392586+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:19.393121+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:20.393260+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:21.393447+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:22.393608+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bfb8c40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x56182b887dc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x561829b64700
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c611800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c611800 session 0x561831c21180
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.109107971s of 41.198600769s, submitted: 61
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182bd41180
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182b0fcc40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bd4ddc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x56182bb708c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:23.394004+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bb83dc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:24.394206+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:25.394357+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c2000/0x0/0x4ffc00000, data 0xb09c71/0xcca000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:26.394597+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662357 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:27.394845+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:28.395156+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c2000/0x0/0x4ffc00000, data 0xb09c71/0xcca000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:29.395386+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:30.395546+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c611800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:31.395696+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c2000/0x0/0x4ffc00000, data 0xb09c71/0xcca000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c611800 session 0x561829ace540
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3663102 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:32.395919+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.737547874s of 10.004721642s, submitted: 42
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:33.396111+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:34.396385+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:35.396545+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:36.396703+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716094 data_alloc: 218103808 data_used: 9047959
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bfb3dc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:37.396878+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:38.397149+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:39.397286+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:40.397467+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:41.397616+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716094 data_alloc: 218103808 data_used: 9047959
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:42.397791+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:43.397953+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:44.398150+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:45.398298+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:46.398440+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716094 data_alloc: 218103808 data_used: 9047959
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:47.399193+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:48.399488+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x56182c08f180
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:49.399702+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:50.399877+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:51.400166+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716094 data_alloc: 218103808 data_used: 9047959
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:52.400704+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:53.400859+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:54.401024+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182bfb3500
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:55.401185+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.629669189s of 22.757272720s, submitted: 64
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182c331a40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:56.401589+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:57.401831+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:58.402159+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:59.402840+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:00.402979+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:01.403482+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:02.403765+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:03.403992+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:04.404265+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:05.404427+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:06.404703+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:07.405144+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:08.405466+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:09.405701+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:10.405963+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:11.406213+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:12.406446+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:13.406716+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:14.406940+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:15.407053+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:16.407203+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:17.407382+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:18.407816+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:19.407962+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:20.408161+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:21.408440+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:22.408638+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:23.408913+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:24.409134+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:25.409314+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:26.409524+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:27.409707+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:28.409997+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:29.410217+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:30.410340+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:31.410520+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:32.410674+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:33.410840+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:34.410971+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:35.411107+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:36.411273+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:37.411440+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:38.411616+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:39.411756+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:40.411893+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:41.412049+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:42.412258+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:43.412481+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:44.412585+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:45.412725+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:46.412869+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:47.413130+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:48.413316+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:49.413496+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:50.413657+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:51.413821+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:52.414053+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.619342804s of 57.669017792s, submitted: 25
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:53.414222+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182a001180
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182bd40c40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182be2c000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x561829399c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182c3316c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 70803456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:54.414418+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bfb2000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 70803456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:55.414505+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182b854e00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9ed5000/0x0/0x4ffc00000, data 0x3f7c61/0x5b7000, compress 0x0/0x0/0x0, omap 0x77891, meta 0x156f876f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561831c21a40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 70803456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:56.414634+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x56182b4308c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9ed5000/0x0/0x4ffc00000, data 0x3f7c61/0x5b7000, compress 0x0/0x0/0x0, omap 0x77891, meta 0x156f876f), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628047 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 70803456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:57.414786+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:58.415186+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344547328 unmapped: 71819264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:59.415345+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344547328 unmapped: 71819264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:00.415484+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344547328 unmapped: 71819264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:01.415647+0000)
Dec 13 09:45:42 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182c08ec40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1207040162' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3637451 data_alloc: 218103808 data_used: 1815908
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344580096 unmapped: 71786496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:02.415775+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77c7d, meta 0x156f8383), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182bb82000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:03.415950+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:04.416203+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:05.416372+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:06.416568+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:07.416759+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:08.416960+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:09.417152+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:10.417333+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:11.417475+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:12.417636+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:13.417874+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:14.418292+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:15.418548+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:16.418872+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:17.419111+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:18.419397+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:19.419652+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:20.419875+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:21.420205+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:22.420447+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:23.420662+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:24.420914+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:25.421134+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:26.421342+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:27.421837+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:28.422265+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:29.422536+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:30.422728+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:31.422966+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:32.423250+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:33.423794+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:34.424176+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:35.424501+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:36.424915+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:37.425233+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:38.425550+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:39.425856+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:40.426061+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:41.426351+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:42.426580+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:43.426816+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344637440 unmapped: 71729152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:44.427191+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344645632 unmapped: 71720960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:45.427515+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:46.427755+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:47.428053+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:48.428432+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:49.428716+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:50.428932+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:51.429401+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:52.429710+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:53.430051+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:54.430409+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:55.430645+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:56.431207+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:57.431681+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:58.432203+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:59.433159+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:00.433478+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 71688192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:01.433760+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 71688192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:02.434214+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 71688192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:03.434831+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:04.435260+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:05.435908+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:06.436749+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:07.437654+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:08.438458+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:09.439550+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:10.440120+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 71671808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:11.440806+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 71671808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:12.441142+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 71671808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 78.625038147s of 79.510787964s, submitted: 41
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:13.441492+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 71655424 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 313 ms_handle_reset con 0x56182b411c00 session 0x56182ba7c540
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:14.441852+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 71655424 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:15.442048+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 71655424 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea062000/0x0/0x4ffc00000, data 0x26970d/0x427000, compress 0x0/0x0/0x0, omap 0x77f1b, meta 0x156f80e5), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:16.442373+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:17.442655+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619786 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:18.442906+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea062000/0x0/0x4ffc00000, data 0x26970d/0x427000, compress 0x0/0x0/0x0, omap 0x77f1b, meta 0x156f80e5), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:19.443237+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:20.443484+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:21.443846+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:22.444223+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619786 data_alloc: 218103808 data_used: 202084
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:23.444494+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:24.444846+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea062000/0x0/0x4ffc00000, data 0x26970d/0x427000, compress 0x0/0x0/0x0, omap 0x77f1b, meta 0x156f80e5), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829b7e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 313 ms_handle_reset con 0x561829b7e000 session 0x56182ba7c8c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.686821938s of 11.737939835s, submitted: 28
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344743936 unmapped: 71622656 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:25.445062+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344743936 unmapped: 71622656 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:26.445273+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:27.445426+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:28.445844+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:29.446001+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:30.446257+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:31.446448+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:32.446644+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:33.447430+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:34.447714+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:35.447952+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:36.448388+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:37.448958+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344776704 unmapped: 71589888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:38.449792+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344776704 unmapped: 71589888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:39.449982+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344776704 unmapped: 71589888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:40.450162+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344784896 unmapped: 71581696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:41.450548+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344784896 unmapped: 71581696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:42.450736+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344784896 unmapped: 71581696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:43.451055+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:44.451301+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:45.451429+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:46.451639+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:47.451888+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:48.452147+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:49.452420+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:50.452597+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:51.452729+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:52.452933+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:53.453091+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:54.453288+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:55.453410+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:56.453554+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344809472 unmapped: 71557120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:57.453734+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344809472 unmapped: 71557120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:58.454047+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:59.454261+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:00.454413+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:01.454555+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:02.454744+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:03.454895+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:04.455052+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:05.455245+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:06.455406+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:07.455608+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:08.455832+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:09.455981+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:10.456205+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:11.456440+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:12.456648+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:13.456832+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:14.457139+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:15.457351+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:16.457524+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:17.457654+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:18.457905+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:19.458113+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:20.458245+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:21.458389+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:22.458555+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:23.458740+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:24.458923+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:25.459133+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:26.459293+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:27.459488+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 71499776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:28.459717+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:29.459908+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:30.460064+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:31.460331+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:32.460535+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:33.460747+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344883200 unmapped: 71483392 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:34.460978+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344883200 unmapped: 71483392 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:35.461171+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:36.461317+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:37.461477+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:38.462351+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:39.462783+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:40.463046+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:41.463620+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344899584 unmapped: 71467008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:42.463862+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344899584 unmapped: 71467008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:43.464102+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:44.464328+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:45.464547+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:46.464801+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:47.465181+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:48.465467+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:49.465658+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:50.465831+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:51.466020+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:52.466182+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:53.466319+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:54.466448+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:55.466629+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:56.466783+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:57.467012+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:58.467394+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:59.467823+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 71434240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:00.468006+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 71434240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:01.468201+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:02.468475+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:03.468882+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:04.469093+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:05.469360+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:06.469571+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:07.469841+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 71417856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:08.470198+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 71417856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:09.471308+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 71417856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:10.471553+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 71417856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:11.471779+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 ms_handle_reset con 0x56182a00e000 session 0x56182bd416c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 ms_handle_reset con 0x56182a1c6c00 session 0x56182bb11340
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 71409664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:12.472053+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206180
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 71409664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:13.472405+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 71409664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:14.472586+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 71409664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:15.472850+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 71401472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:16.473268+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 71401472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:17.473429+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206180
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 71401472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:18.473741+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 71401472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:19.474009+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344973312 unmapped: 71393280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:20.474229+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344973312 unmapped: 71393280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:21.474387+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 117.142692566s of 117.148269653s, submitted: 14
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344973312 unmapped: 71393280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:22.474557+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 315 ms_handle_reset con 0x56182c0e9400 session 0x56182c08efc0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623367 data_alloc: 218103808 data_used: 206129
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 71360512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:23.474708+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345014272 unmapped: 71352320 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:24.474892+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 71344128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:25.475037+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc0400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 315 heartbeat osd_stat(store_statfs(0x4ea05e000/0x0/0x4ffc00000, data 0x26cd26/0x42a000, compress 0x0/0x0/0x0, omap 0x78b82, meta 0x156f747e), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 315 heartbeat osd_stat(store_statfs(0x4ea05e000/0x0/0x4ffc00000, data 0x26cd26/0x42a000, compress 0x0/0x0/0x0, omap 0x78b82, meta 0x156f747e), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 71344128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:26.475233+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 315 handle_osd_map epochs [315,316], i have 315, src has [1,316]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 316 ms_handle_reset con 0x56182bcc0400 session 0x56182bc13340
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345030656 unmapped: 71335936 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:27.475402+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3625319 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345030656 unmapped: 71335936 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:28.475581+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 316 heartbeat osd_stat(store_statfs(0x4ea05e000/0x0/0x4ffc00000, data 0x26e8f3/0x42c000, compress 0x0/0x0/0x0, omap 0x78c09, meta 0x156f73f7), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345038848 unmapped: 71327744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:29.475766+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345038848 unmapped: 71327744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:30.475979+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345038848 unmapped: 71327744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:31.476180+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.064450264s of 10.179132462s, submitted: 41
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345055232 unmapped: 71311360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:32.476416+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628093 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345055232 unmapped: 71311360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:33.476593+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 317 heartbeat osd_stat(store_statfs(0x4ea05b000/0x0/0x4ffc00000, data 0x27038e/0x42f000, compress 0x0/0x0/0x0, omap 0x78c90, meta 0x156f7370), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 71303168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 317 heartbeat osd_stat(store_statfs(0x4ea05b000/0x0/0x4ffc00000, data 0x27038e/0x42f000, compress 0x0/0x0/0x0, omap 0x78c90, meta 0x156f7370), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:34.476748+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: get_auth_request con 0x56182a213000 auth_method 0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 71286784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:35.476937+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 71286784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:36.477129+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 71286784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:37.477295+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3630867 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 71286784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:38.477561+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345088000 unmapped: 71278592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:39.477702+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 ms_handle_reset con 0x56182a00e000 session 0x56182bb11880
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 71262208 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:40.477856+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:41.478010+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 71262208 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:42.478202+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:43.478380+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:44.478526+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:45.478681+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:46.478853+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:47.479012+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:48.479250+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 71245824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:49.479423+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 71245824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:50.479620+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 71245824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:51.479809+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:52.480024+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:53.480177+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:54.480376+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:55.480602+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:56.480767+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:57.480988+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:58.481206+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:59.481397+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:00.481573+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:01.481810+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:02.482013+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:03.482188+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:04.482378+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:05.482559+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:06.482772+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:07.482936+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:08.483172+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:09.483411+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:10.483673+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:11.483915+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:12.484125+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 71204864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:13.484310+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 71204864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:14.484513+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345169920 unmapped: 71196672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:15.484651+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:16.484867+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:17.485007+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:18.485390+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:19.485516+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:20.485669+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:21.485864+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:22.486197+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:23.486336+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:24.486504+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:25.486656+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:26.486836+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:27.487017+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:28.487269+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:29.487490+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:30.487683+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:31.487909+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:32.488154+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:33.488308+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:34.488595+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:35.488857+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:36.489034+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 71155712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:37.489321+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 71155712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:38.489546+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:39.489726+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:40.489952+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:41.490185+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:42.490324+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:43.490536+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:44.490698+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 71131136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:45.490835+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 71131136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:46.490983+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 71131136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:47.491119+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 71122944 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:48.491332+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 71122944 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:49.491493+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:50.491707+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:51.491989+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:52.492181+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:53.492397+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:54.492582+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:55.492859+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:56.493231+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:57.493447+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:58.493824+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:59.494151+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:00.494456+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 71098368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:01.494747+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 71098368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:02.495046+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 71098368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:03.495438+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:04.495668+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:05.495973+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:06.496233+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:07.496511+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:08.496768+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 71081984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:09.496943+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 71081984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:10.497171+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:11.497390+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:12.497608+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:13.497873+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:14.498142+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:15.498360+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:16.498571+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:17.498766+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:18.498970+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:19.499150+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:20.499297+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:21.499487+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:22.499676+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:23.499863+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 71049216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:24.500011+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 71049216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:25.500188+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 71049216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:26.500353+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 71049216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:27.500525+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 71041024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 116.263618469s of 116.304878235s, submitted: 39
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:28.500704+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679712 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 79429632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 319 handle_osd_map epochs [320,320], i have 320, src has [1,320]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 320 ms_handle_reset con 0x56182a1c6c00 session 0x56182be2c000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:29.500905+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:30.501144+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 78422016 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 ms_handle_reset con 0x56182b411c00 session 0x56182c08ec40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:31.501301+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e9052000/0x0/0x4ffc00000, data 0x127558b/0x143a000, compress 0x0/0x0/0x0, omap 0x7a247, meta 0x156f5db9), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:32.501524+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:33.501761+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:34.501965+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:35.502232+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:36.502421+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:37.502571+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:38.502832+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:39.502962+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 78381056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:40.503147+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 78381056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:41.503322+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 78381056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:42.503506+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 78381056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:43.503709+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:44.503857+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:45.504006+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-12-13T09:30:46.504215+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _finish_auth 0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:46.505451+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:47.504591+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:48.504853+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:49.505014+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:50.505212+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:51.505356+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:52.508113+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:53.508392+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:54.508598+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:55.508752+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:56.508932+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:57.509365+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:58.509643+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:59.509801+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:00.509982+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:01.510171+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:02.510424+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:03.510591+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 78340096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:04.510806+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 78340096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:05.511225+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 78340096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:06.511491+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.994350433s of 38.509582520s, submitted: 26
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 78315520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:07.511691+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 78315520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:08.511918+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728172 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 78315520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:09.512240+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904f000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a571, meta 0x156f5a8f), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 78315520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:10.512414+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 322 ms_handle_reset con 0x56182c0e9400 session 0x56182ba7c000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:11.512567+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:12.512808+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:13.513042+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730846 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:14.513281+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:15.513464+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 322 heartbeat osd_stat(store_statfs(0x4e904a000/0x0/0x4ffc00000, data 0x1278d17/0x1440000, compress 0x0/0x0/0x0, omap 0x7ac4c, meta 0x156f53b4), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 322 heartbeat osd_stat(store_statfs(0x4e904a000/0x0/0x4ffc00000, data 0x1278d17/0x1440000, compress 0x0/0x0/0x0, omap 0x7ac4c, meta 0x156f53b4), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:16.513610+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:17.513757+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:18.513937+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730846 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:19.514119+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 322 handle_osd_map epochs [322,323], i have 322, src has [1,323]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.453020096s of 13.245205879s, submitted: 36
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e904a000/0x0/0x4ffc00000, data 0x1278d17/0x1440000, compress 0x0/0x0/0x0, omap 0x7ac4c, meta 0x156f53b4), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:20.514290+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:21.514433+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:22.514739+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:23.514967+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:24.515152+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:25.515383+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 77201408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:26.515647+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 77201408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:27.515816+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 77201408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:28.516006+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:29.516210+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:30.516934+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:31.517666+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:32.517815+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:33.517985+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347578368 unmapped: 77185024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:34.518181+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347578368 unmapped: 77185024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:35.518406+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:36.518573+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:37.518739+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:38.518948+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:39.519217+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:40.519422+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7201.8 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 997 writes, 3370 keys, 997 commit groups, 1.0 writes per commit group, ingest: 2.81 MB, 0.00 MB/s
                                           Interval WAL: 997 writes, 422 syncs, 2.36 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread fragmentation_score=0.004687 took=0.000062s
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:41.519575+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:42.519719+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:43.519891+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347594752 unmapped: 77168640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:44.520041+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347594752 unmapped: 77168640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:45.520175+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347594752 unmapped: 77168640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:46.520364+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347594752 unmapped: 77168640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:47.520554+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 77160448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:48.520762+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 77160448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:49.520955+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 77160448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:50.521169+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 77160448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:51.521324+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x5618299ef400 session 0x561831c201c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a00e000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347611136 unmapped: 77152256 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:52.521603+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: mgrc ms_handle_reset ms_handle_reset con 0x56182cc39800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 09:45:42 compute-0 ceph-osd[88086]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: get_auth_request con 0x56182bcc0400 auth_method 0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: mgrc handle_mgr_configure stats_period=5
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x561829362c00 session 0x56182cef4c40
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x56182b415c00 session 0x56182bd4c1c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x561829109800
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:53.521751+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:54.522043+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:55.522273+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:56.522471+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:57.522636+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:58.523226+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:59.523433+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:00.523609+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:01.523842+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:02.524019+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:03.524232+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:04.524425+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:05.524653+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:06.524887+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:07.525050+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:08.525297+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:09.525535+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:10.525734+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:11.525948+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347635712 unmapped: 77127680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:12.526177+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347635712 unmapped: 77127680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:13.526362+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347635712 unmapped: 77127680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:14.526560+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347635712 unmapped: 77127680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:15.526752+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347643904 unmapped: 77119488 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:16.526988+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347643904 unmapped: 77119488 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:17.527211+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:18.527460+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:19.527673+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:20.527907+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:21.528099+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:22.528251+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:23.528497+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347660288 unmapped: 77103104 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:24.528801+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347660288 unmapped: 77103104 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:25.528991+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:26.529160+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:27.529418+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:28.529631+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:29.531024+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:30.531202+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:31.531891+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:32.532344+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:33.533278+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347676672 unmapped: 77086720 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:34.533658+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347676672 unmapped: 77086720 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:35.534124+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:36.534568+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:37.534847+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:38.535290+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:39.535740+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:40.536052+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:41.536408+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:42.536709+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:43.537222+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:44.537617+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:45.537883+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:46.538194+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:47.538507+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:48.538931+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:49.539260+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:50.539512+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:51.539716+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:52.539946+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:53.540249+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:54.540510+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:55.540758+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:56.541043+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:57.541382+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:58.541746+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:59.543197+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:00.543557+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:01.544554+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:02.545040+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:03.545763+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:04.546479+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:05.546752+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:06.547363+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:07.547649+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:08.548193+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:09.548627+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 77029376 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:10.548873+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 77029376 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:11.549304+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 77029376 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:12.549688+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:13.550222+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:14.550654+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:15.550868+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:16.551200+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:17.551721+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:18.552266+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x56182a1c6000 session 0x56182fe8a380
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182a1c6c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:19.552511+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:20.552767+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:21.553047+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x56182ba4bc00 session 0x561830d47880
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182b411c00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:22.553335+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:23.553627+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:24.553875+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:25.554123+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:26.554327+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 127.064315796s of 127.072921753s, submitted: 13
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:27.554504+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347774976 unmapped: 76988416 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:28.554812+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347783168 unmapped: 76980224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:29.555020+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347791360 unmapped: 76972032 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:30.555224+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347799552 unmapped: 76963840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:31.555459+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347799552 unmapped: 76963840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:32.555628+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347799552 unmapped: 76963840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:33.556196+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347799552 unmapped: 76963840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:34.556340+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 76931072 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:35.556498+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:36.557006+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:37.557189+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:38.557361+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:39.557606+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:40.557855+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:41.558038+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 76906496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:42.558209+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 76906496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:43.558385+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 76906496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:44.558596+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 76898304 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:45.558823+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 76898304 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:46.559004+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:47.559216+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:48.559445+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:49.559618+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:50.559750+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:51.559922+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:52.560270+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:53.560519+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:54.560733+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:55.560907+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:56.561056+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:57.561257+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:58.561521+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:59.561680+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 76873728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:00.561868+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 76873728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:01.562035+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 76873728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:02.562230+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:03.562374+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:04.562565+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:05.562761+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:06.562884+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:07.563151+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:08.563347+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:09.563531+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:10.563670+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:11.563942+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:12.564147+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:13.564313+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:14.564483+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.015445709s of 48.233345032s, submitted: 110
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:15.564655+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:16.564894+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:17.565159+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:18.565348+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:19.565524+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:20.565701+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:21.565875+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 76840960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:22.566130+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 76840960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:23.566305+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 76840960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:24.566486+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 76840960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:25.566663+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:26.566892+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:27.567141+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:28.567340+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:29.567523+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:30.567688+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:31.567902+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 76824576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:32.568065+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.083396912s of 17.435754776s, submitted: 6
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 76824576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:33.568304+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 76808192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:34.568493+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 76808192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:35.568698+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 76808192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:36.568917+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:37.569120+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:38.569258+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:39.569432+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:40.569590+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:41.569760+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:42.569942+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:43.570189+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:44.570373+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:45.570550+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:46.570768+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:47.570959+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 76791808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:48.571197+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 76791808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:49.571371+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 76791808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:50.571551+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 76791808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:51.571734+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 76791808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:52.572024+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 76783616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:53.572265+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 76783616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:54.576065+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 76783616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:55.576286+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 76775424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:56.576497+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 76775424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:57.576683+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 76767232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:58.576899+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 76767232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:59.577179+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 76767232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:00.577390+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 76767232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:01.577552+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 76767232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:02.577742+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347996160 unmapped: 76767232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:03.577929+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 76759040 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:04.578182+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 76759040 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:05.578407+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 76759040 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:06.578582+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 76759040 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:07.578777+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348004352 unmapped: 76759040 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:08.579118+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 76750848 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:09.579407+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 76750848 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:10.579644+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 76750848 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:11.579882+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 76742656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:12.580221+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 76742656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:13.580592+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 76734464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:14.580769+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 76734464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:15.581164+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 76734464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:16.581530+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 76734464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:17.581943+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 76734464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:18.582288+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 76734464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:19.582484+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:20.582921+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:21.583261+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:22.583540+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:23.583724+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:24.584026+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:25.584244+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:26.584454+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 76726272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:27.584652+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 76709888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:28.584916+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 76709888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:29.585404+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 76709888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:30.585808+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348053504 unmapped: 76709888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:31.586058+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 76701696 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:32.586473+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 76701696 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:33.586977+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 76701696 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:34.587454+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 76701696 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:35.587777+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 76701696 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:36.588153+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 76701696 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:37.588383+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 76693504 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:38.588779+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 76693504 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:39.589029+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 76693504 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:40.589285+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 76693504 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:41.589462+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 76693504 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:42.589680+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 76693504 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:43.589861+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 76677120 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:44.590057+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348086272 unmapped: 76677120 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:45.590317+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 76660736 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:46.590514+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 76660736 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:47.590720+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 76660736 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:48.590947+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 76660736 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:49.591146+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 76660736 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:50.591315+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 76660736 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:51.591477+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:52.591650+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:53.591845+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:54.592022+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:55.592953+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:56.593243+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:57.593431+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:58.595232+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 76652544 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:59.595408+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 76644352 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:00.595559+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 76644352 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:01.595762+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 76636160 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:02.595903+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 76636160 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:03.596117+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 76627968 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:04.596279+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 76627968 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:05.596408+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 76627968 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:06.596549+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:07.596757+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 76627968 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:08.597010+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:09.597192+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:10.597341+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:11.597546+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:12.597749+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:13.597855+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:14.598050+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:15.598279+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 76619776 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:16.598432+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 76611584 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:17.598657+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 76611584 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:18.598869+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 76595200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:19.599008+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 76595200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:20.599125+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 76595200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:21.599295+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 76595200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:22.599448+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 76595200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:23.599625+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 76595200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:24.599788+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:25.599967+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:26.600160+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:27.600336+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:28.600577+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:29.600814+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:30.600985+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:31.601160+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 76587008 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:32.601326+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 76578816 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:33.601697+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 76578816 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:34.601876+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 76570624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:35.602056+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 76570624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:36.602280+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 76570624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:37.602469+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 76570624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:38.602737+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 76570624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:39.602874+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348192768 unmapped: 76570624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:40.603122+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 76554240 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:41.603312+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 76554240 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:42.603493+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 76554240 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:43.603644+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 76554240 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:44.603805+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348209152 unmapped: 76554240 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:45.604036+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 76546048 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:46.604340+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 76546048 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:47.604536+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 76546048 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:48.604827+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 76546048 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:49.605034+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348217344 unmapped: 76546048 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:50.605290+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 76537856 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:51.605519+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 76537856 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:52.607358+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 76537856 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:53.607559+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 76537856 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:54.607712+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 76537856 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:55.607868+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348225536 unmapped: 76537856 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:56.608048+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:57.608317+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:58.608546+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:59.608754+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:00.609041+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:01.609296+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:02.609495+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:03.609661+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 76521472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:04.609856+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348250112 unmapped: 76513280 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:05.610042+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348250112 unmapped: 76513280 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:06.610304+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348250112 unmapped: 76513280 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:07.610515+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348250112 unmapped: 76513280 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:08.610778+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348250112 unmapped: 76513280 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:09.610964+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 76505088 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:10.611134+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348258304 unmapped: 76505088 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:11.611371+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:12.611566+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:13.611735+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:14.611910+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:15.612142+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:16.612373+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:17.612617+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:18.612865+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348266496 unmapped: 76496896 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:19.613116+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 76488704 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:20.613338+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 76488704 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:21.613513+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 76488704 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:22.613730+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 76480512 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:23.613950+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 76480512 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:24.614149+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 76480512 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:25.614338+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 76480512 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:26.614539+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 76480512 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:27.614722+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348291072 unmapped: 76472320 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:28.614941+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348291072 unmapped: 76472320 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:29.615156+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348299264 unmapped: 76464128 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:30.615328+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348307456 unmapped: 76455936 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:31.615523+0000)
Dec 13 09:45:42 compute-0 ceph-mon[76537]: from='client.23536 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:42 compute-0 ceph-mon[76537]: from='client.23540 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:42 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3669622585' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 13 09:45:42 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1861990602' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 13 09:45:42 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1207040162' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:32.615703+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:33.615898+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:34.616186+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:35.616487+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:36.616705+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:37.616890+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 76447744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:38.617103+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 76439552 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:39.617425+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 76439552 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:40.617603+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 76439552 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:41.617798+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 76439552 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:42.618000+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 76439552 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:43.618206+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 76431360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:44.618480+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 76431360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:45.618708+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 76431360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:46.618905+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 76431360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:47.619129+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 76431360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:48.619352+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 76431360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:49.619558+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 76423168 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:50.619743+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 76414976 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:51.619939+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 76414976 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:52.620150+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348348416 unmapped: 76414976 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:53.620297+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:54.620490+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:55.620704+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:56.620891+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:57.621110+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:58.621410+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:59.621633+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:00.621814+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 76406784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:01.622054+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 76398592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:02.622265+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 76398592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:03.622456+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 76398592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:04.622724+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 76398592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:05.622901+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 76398592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:06.623052+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348364800 unmapped: 76398592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:07.623245+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348372992 unmapped: 76390400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:08.623429+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348372992 unmapped: 76390400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:09.623618+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348372992 unmapped: 76390400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:10.623784+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348372992 unmapped: 76390400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:11.623995+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 76374016 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:12.624172+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 76374016 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:13.626158+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 76374016 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:14.626334+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348389376 unmapped: 76374016 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:15.626568+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 76357632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:16.626722+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 76357632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:17.626927+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 76357632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:18.627202+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 76357632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:19.627392+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 76357632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:20.627625+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348405760 unmapped: 76357632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:21.627824+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348413952 unmapped: 76349440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:22.627967+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348413952 unmapped: 76349440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:23.628227+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:24.628480+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:25.628726+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:26.628952+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:27.629152+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:28.629354+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:29.629516+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:30.638595+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348422144 unmapped: 76341248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:31.638809+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348438528 unmapped: 76324864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:32.638969+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348438528 unmapped: 76324864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:33.639165+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 76316672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:34.639333+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 76316672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:35.639553+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 76316672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:36.639757+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 76316672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:37.639989+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 76316672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:38.640235+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 76316672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:39.640424+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348454912 unmapped: 76308480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:40.640618+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348454912 unmapped: 76308480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:41.640852+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 76300288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:42.641347+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 76300288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:43.641561+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 76300288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:44.641780+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 76300288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:45.641975+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 76300288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:46.642218+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 76300288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:47.642438+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c0e9400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 250.118301392s of 255.261520386s, submitted: 16
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 76283904 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:48.642673+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 76283904 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:49.642824+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 76259328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:50.643168+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733452 data_alloc: 218103808 data_used: 210410
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 76259328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:51.643366+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 76259328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:52.643542+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 76259328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:53.643730+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 323 handle_osd_map epochs [323,324], i have 323, src has [1,324]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 76251136 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:54.643919+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348512256 unmapped: 76251136 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:55.644061+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3736040 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348528640 unmapped: 76234752 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:56.644256+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 324 heartbeat osd_stat(store_statfs(0x4e9045000/0x0/0x4ffc00000, data 0x127c363/0x1445000, compress 0x0/0x0/0x0, omap 0x7ad5a, meta 0x156f52a6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348528640 unmapped: 76234752 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:57.644408+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.513689041s of 10.024970055s, submitted: 41
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:58.644622+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:59.644811+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 324 ms_handle_reset con 0x56182c0e9400 session 0x56182bb70a80
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:00.644990+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693904 data_alloc: 218103808 data_used: 210375
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:01.645218+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182c60cc00
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:02.645445+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 324 heartbeat osd_stat(store_statfs(0x4e9847000/0x0/0x4ffc00000, data 0xa7c340/0xc44000, compress 0x0/0x0/0x0, omap 0x7ad5a, meta 0x156f52a6), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:03.645644+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348536832 unmapped: 76226560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:04.645842+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348553216 unmapped: 76210176 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 325 heartbeat osd_stat(store_statfs(0x4e9842000/0x0/0x4ffc00000, data 0xa7ddbf/0xc47000, compress 0x0/0x0/0x0, omap 0x7b965, meta 0x156f469b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:05.645996+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3697334 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348553216 unmapped: 76210176 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:06.646158+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 325 heartbeat osd_stat(store_statfs(0x4e9843000/0x0/0x4ffc00000, data 0xa7ddbf/0xc47000, compress 0x0/0x0/0x0, omap 0x7b965, meta 0x156f469b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 325 handle_osd_map epochs [326,326], i have 326, src has [1,326]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348569600 unmapped: 76193792 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:07.646342+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.852182388s of 10.416975975s, submitted: 26
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348585984 unmapped: 76177408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:08.646606+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348585984 unmapped: 76177408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:09.646819+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 326 heartbeat osd_stat(store_statfs(0x4e9840000/0x0/0x4ffc00000, data 0xa7f9af/0xc4a000, compress 0x0/0x0/0x0, omap 0x7b9ed, meta 0x156f4613), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 76169216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:10.646985+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700108 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 76169216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:11.647240+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348602368 unmapped: 76161024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:12.647476+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348602368 unmapped: 76161024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:13.647683+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 326 heartbeat osd_stat(store_statfs(0x4e9840000/0x0/0x4ffc00000, data 0xa7f9af/0xc4a000, compress 0x0/0x0/0x0, omap 0x7b9ed, meta 0x156f4613), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 76136448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:14.647884+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 326 ms_handle_reset con 0x56182c60cc00 session 0x56182b709340
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 76136448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:15.648166+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659840 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 76136448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:16.648412+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 76136448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:17.648602+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 76136448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:18.649154+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 326 heartbeat osd_stat(store_statfs(0x4ea040000/0x0/0x4ffc00000, data 0x27f9af/0x44a000, compress 0x0/0x0/0x0, omap 0x7b9ed, meta 0x156f4613), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348626944 unmapped: 76136448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:19.649316+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348643328 unmapped: 76120064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.377560616s of 12.102450371s, submitted: 13
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:20.649466+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x56182bcc1400
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662614 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348659712 unmapped: 76103680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:21.649654+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 357064704 unmapped: 67698688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:22.649816+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348692480 unmapped: 76070912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 ms_handle_reset con 0x56182bcc1400 session 0x561829bf3500
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:23.649946+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348692480 unmapped: 76070912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:24.650167+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 76062720 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:25.650393+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 76062720 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:26.650685+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 76062720 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:27.650894+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348708864 unmapped: 76054528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:28.651190+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348708864 unmapped: 76054528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:29.651354+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 76046336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:30.651521+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 76046336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:31.651698+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 76046336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:32.651854+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 76046336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:33.651995+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 76046336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:34.652256+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348717056 unmapped: 76046336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:35.652434+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348733440 unmapped: 76029952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:36.652592+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348733440 unmapped: 76029952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:37.652801+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348733440 unmapped: 76029952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:38.653270+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 76021760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:39.653442+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 76021760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:40.653616+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 76021760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:41.653813+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 76021760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:42.654021+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 76021760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:43.654428+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 76021760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:44.654656+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:45.654835+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:46.655152+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:47.655453+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:48.655732+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:49.655917+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:50.656144+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348749824 unmapped: 76013568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:51.656270+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:52.656463+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:53.656633+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:54.656783+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:55.656926+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:56.657146+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:57.657401+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:58.657620+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:59.657809+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348766208 unmapped: 75997184 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:00.658005+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348782592 unmapped: 75980800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:01.658184+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348782592 unmapped: 75980800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:02.658339+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 75972608 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:03.658521+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 75972608 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:04.658688+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 75972608 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:05.659113+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 75972608 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:06.659291+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 75972608 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:07.659483+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 75972608 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:08.659749+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:09.659944+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:10.660174+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:11.660411+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:12.660572+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:13.660785+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:14.661000+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:15.661199+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348807168 unmapped: 75956224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:16.661469+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 75948032 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:17.661695+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348815360 unmapped: 75948032 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:18.661896+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 75939840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:19.662097+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 75939840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:20.662319+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 75939840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:21.662434+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 75939840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:22.662640+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 75939840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:23.662920+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 75939840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:24.663173+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 75923456 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 heartbeat osd_stat(store_statfs(0x4e93c7000/0x0/0x4ffc00000, data 0xef2fed/0x10c1000, compress 0x0/0x0/0x0, omap 0x7c15d, meta 0x156f3ea3), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:25.663377+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 75923456 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734637 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:26.663552+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 75923456 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:27.663727+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 75923456 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:28.664160+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 75923456 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: handle_auth_request added challenge on 0x5618337d0000
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 67.868362427s of 68.700660706s, submitted: 25
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:29.664425+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 75923456 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 329 ms_handle_reset con 0x5618337d0000 session 0x56182b8541c0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:30.664612+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348872704 unmapped: 75890688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 329 heartbeat osd_stat(store_statfs(0x4ea037000/0x0/0x4ffc00000, data 0x284bba/0x453000, compress 0x0/0x0/0x0, omap 0x7caed, meta 0x156f3513), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672841 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:31.664793+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348872704 unmapped: 75890688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 329 heartbeat osd_stat(store_statfs(0x4ea037000/0x0/0x4ffc00000, data 0x284bba/0x453000, compress 0x0/0x0/0x0, omap 0x7caed, meta 0x156f3513), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:32.664941+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:33.665061+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:34.665278+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:35.665431+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 329 heartbeat osd_stat(store_statfs(0x4ea037000/0x0/0x4ffc00000, data 0x284bba/0x453000, compress 0x0/0x0/0x0, omap 0x7caed, meta 0x156f3513), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672841 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:36.665598+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:37.665756+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 329 heartbeat osd_stat(store_statfs(0x4ea037000/0x0/0x4ffc00000, data 0x284bba/0x453000, compress 0x0/0x0/0x0, omap 0x7caed, meta 0x156f3513), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:38.666033+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:39.666442+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 75882496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _renew_subs
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.121034622s of 11.191111565s, submitted: 41
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:40.666581+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 75849728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:41.666766+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 75849728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:42.667020+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 75849728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:43.667195+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 75849728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:44.667395+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348913664 unmapped: 75849728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:45.667546+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 75841536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:46.667750+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 75841536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:47.667971+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 75841536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:48.668178+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 75841536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:49.668341+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 75841536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:50.668497+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 75833344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:51.668728+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 75833344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:52.668938+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 75833344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:53.669178+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 75833344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:54.669411+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 75833344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:55.669625+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 75833344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:56.669828+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 75825152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:57.670026+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 75825152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:58.670336+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348938240 unmapped: 75825152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:59.670502+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 75816960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:00.670700+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 75816960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:01.670875+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 75816960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:02.671425+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 75816960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:03.671638+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 75808768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:04.672638+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 75808768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:05.673241+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 75808768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:06.673422+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 75800576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:07.673800+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 75800576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:08.674187+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 75800576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:09.674346+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 75800576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:10.674775+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 75800576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:11.675018+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:12.675417+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:13.675618+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:14.675859+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:15.676212+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:16.676472+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:17.676625+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:18.676833+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 75792384 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:19.677210+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 75784192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:20.677456+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 75784192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:21.677716+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 75784192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:22.686564+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 75776000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:23.686729+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 75767808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:24.686921+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 75767808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:25.687167+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 75767808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:26.687373+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348995584 unmapped: 75767808 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:27.687537+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 75759616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:28.687795+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 75759616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:29.687943+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 75759616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:30.688162+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 75759616 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:31.688334+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 75751424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:32.688521+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 75751424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:33.688698+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 75751424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:34.689167+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 75751424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:35.689473+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 75751424 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:36.690551+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:37.691455+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:38.692274+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:39.692545+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:40.693902+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7801.8 total, 600.0 interval
                                           Cumulative writes: 49K writes, 190K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.72 writes per sync, written: 0.18 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 525 writes, 1278 keys, 525 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
                                           Interval WAL: 525 writes, 234 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:41.694419+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:42.695134+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 75743232 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:43.695382+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349036544 unmapped: 75726848 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:44.695677+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:45.695892+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:46.696106+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:47.696293+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:48.696466+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:49.698422+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:50.698598+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 75718656 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:51.698769+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 75710464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:52.698986+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 75710464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:53.699165+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 75710464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:54.699356+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 75710464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:55.699513+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 75710464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:56.699793+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 75710464 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:57.700169+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 75702272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:58.700416+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 75702272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:59.700835+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 75694080 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:00.700999+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:01.701184+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:02.701339+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:03.701507+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:04.701679+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:05.701863+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:06.702045+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 75685888 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:07.702227+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349102080 unmapped: 75661312 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:08.702442+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 75571200 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'config diff' '{prefix=config diff}'
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'config show' '{prefix=config show}'
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:09.702619+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 75702272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:10.702889+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 75776000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:11.703057+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349061120 unmapped: 75702272 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'log dump' '{prefix=log dump}'
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:12.703943+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'perf dump' '{prefix=perf dump}'
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 75546624 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'perf schema' '{prefix=perf schema}'
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:13.704142+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 75464704 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:14.704553+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349306880 unmapped: 75456512 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:15.704691+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 75448320 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:16.704858+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 75448320 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:17.705166+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 75448320 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:18.710193+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 75440128 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:19.710385+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 75431936 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:20.710542+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 75431936 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:21.710714+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 75431936 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:22.711036+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 75431936 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:23.711192+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 75431936 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:24.711373+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 75431936 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:25.711612+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 75423744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:26.711799+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 75423744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:27.711976+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 75423744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:28.712184+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 75423744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:29.712342+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 75423744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:30.712503+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 75423744 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:31.712662+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 75407360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:32.712821+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 75407360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:33.712977+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 75407360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:34.713176+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 75407360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:35.713349+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 75407360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:36.713515+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 75407360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:37.713663+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 75407360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:38.713870+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 75407360 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:39.714421+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 75390976 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:40.714562+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 75390976 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:41.714690+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 75382784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:42.714921+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 75382784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:43.715138+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 75382784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:44.715293+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 75382784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:45.715441+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 75382784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:46.715820+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 75382784 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:47.715981+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349388800 unmapped: 75374592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:48.716501+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349388800 unmapped: 75374592 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:49.716960+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 75366400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:50.717155+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 75366400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:51.717438+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 75366400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:52.717818+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 75366400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:53.718219+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 75366400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:54.718551+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 75366400 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:55.718725+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 75358208 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:56.718896+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 75358208 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:57.719215+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 75358208 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:58.719484+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 75358208 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:59.719778+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 75358208 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:00.719958+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 75350016 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:01.720400+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349421568 unmapped: 75341824 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:02.720616+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349421568 unmapped: 75341824 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:03.720788+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 75325440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:04.721016+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 75325440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:05.721752+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 75325440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:06.722020+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 75325440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:07.722236+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 75325440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:08.722553+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 75325440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:09.722817+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 75325440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:10.723117+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 75325440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:11.724325+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349446144 unmapped: 75317248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:12.725182+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 75309056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:13.725699+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 75309056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:14.726034+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 75309056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:15.726218+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:16.726842+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 75309056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:17.727428+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349462528 unmapped: 75300864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:18.727739+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349462528 unmapped: 75300864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:19.728164+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349462528 unmapped: 75300864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:20.728441+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 75292672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:21.728570+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 75292672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:22.729169+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 75292672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:23.729427+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 75292672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:24.730175+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 75292672 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:25.730465+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 75284480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:26.730887+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 75284480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675615 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:27.731156+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 75284480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:28.731356+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 75284480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:29.731527+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 75276288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 169.327667236s of 169.352218628s, submitted: 13
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:30.731822+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 75268096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:31.732158+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 75268096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:32.732501+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 75268096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:33.732752+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 75268096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:34.732956+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 75251712 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:35.733154+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 75251712 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:36.733324+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 75243520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:37.733589+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 75235328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:38.733827+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 75243520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:39.734093+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 75243520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:40.734303+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 75243520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:41.734541+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 75243520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:42.734698+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 75243520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:43.734873+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 75243520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:44.735041+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 75243520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:45.735187+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349519872 unmapped: 75243520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:46.735331+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 75235328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:47.735802+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 75235328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:48.736041+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 75235328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:49.736264+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 75235328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:50.736422+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 75235328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:51.736560+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 75235328 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:52.736717+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 75227136 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:53.736854+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 75227136 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:54.737003+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 75227136 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:55.737151+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 75227136 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:56.737294+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 75227136 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:57.737415+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 75227136 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:58.737618+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 75227136 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:59.737818+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 75227136 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:00.738002+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 75218944 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:01.738155+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 75218944 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:02.738337+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 75218944 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:03.738540+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 75218944 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:04.738691+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 75218944 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:05.738885+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 75210752 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:06.739057+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 75210752 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:07.739330+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 75210752 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:08.739565+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 75210752 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:09.739731+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 75210752 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:10.739922+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349560832 unmapped: 75202560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:11.740064+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349560832 unmapped: 75202560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:12.740247+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349560832 unmapped: 75202560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:13.740388+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349560832 unmapped: 75202560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:14.740564+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349560832 unmapped: 75202560 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:15.740758+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 75194368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:16.740963+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 75194368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:17.741129+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 75194368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:18.741299+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 75194368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:19.741530+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 75194368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:20.741679+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349569024 unmapped: 75194368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:21.741802+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 75186176 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:22.742022+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 75186176 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:23.742265+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 75177984 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:24.742453+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 75177984 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:25.742623+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 75177984 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:26.742782+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 75169792 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:27.742976+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 75169792 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:28.743197+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 75169792 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:29.743416+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 75169792 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:30.743585+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 75169792 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:31.743784+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 75161600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:32.743907+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 75161600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:33.744163+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 75161600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:34.744396+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 75161600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:35.744595+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 75161600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:36.744735+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 75161600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:37.744889+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 75161600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:38.745189+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349601792 unmapped: 75161600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:39.745384+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 75145216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:40.745580+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 75145216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:41.745752+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349618176 unmapped: 75145216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:42.745936+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 75137024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:43.746118+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 75137024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:44.746353+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 75137024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:45.746564+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 75137024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:46.746776+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349626368 unmapped: 75137024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:47.746990+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349634560 unmapped: 75128832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:48.747208+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349634560 unmapped: 75128832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:49.747454+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349634560 unmapped: 75128832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:50.747623+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349642752 unmapped: 75120640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:51.747840+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349642752 unmapped: 75120640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:52.747999+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349642752 unmapped: 75120640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:53.748172+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349642752 unmapped: 75120640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:54.748389+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349642752 unmapped: 75120640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:55.748553+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 75112448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:56.748695+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 75112448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:57.748892+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 75104256 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:58.749207+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 75104256 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:59.749336+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 75104256 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:00.749541+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 75104256 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:01.749657+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 75104256 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:02.749773+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 75104256 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:03.749972+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 75104256 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:04.750147+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349659136 unmapped: 75104256 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:05.750286+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349667328 unmapped: 75096064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:06.750613+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349667328 unmapped: 75096064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:07.750805+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:42 compute-0 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:42 compute-0 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674895 data_alloc: 218103808 data_used: 214436
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349667328 unmapped: 75096064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:08.750994+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349667328 unmapped: 75096064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'config diff' '{prefix=config diff}'
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'config show' '{prefix=config show}'
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:09.751156+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349593600 unmapped: 75169792 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:10.751317+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 75497472 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: osd.1 330 heartbeat osd_stat(store_statfs(0x4ea036000/0x0/0x4ffc00000, data 0x286639/0x456000, compress 0x0/0x0/0x0, omap 0x7cb75, meta 0x156f348b), peers [0,2] op hist [])
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: tick
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_tickets
Dec 13 09:45:42 compute-0 ceph-osd[88086]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:11.751471+0000)
Dec 13 09:45:42 compute-0 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349446144 unmapped: 75317248 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:42 compute-0 ceph-osd[88086]: do_command 'log dump' '{prefix=log dump}'
Dec 13 09:45:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 13 09:45:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640294431' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 13 09:45:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 13 09:45:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4064242558' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 13 09:45:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 13 09:45:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2224825707' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 13 09:45:43 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 13 09:45:43 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1673395776' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 13 09:45:43 compute-0 ceph-mon[76537]: pgmap v4355: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2640294431' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 13 09:45:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4064242558' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 13 09:45:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2224825707' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 13 09:45:43 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1673395776' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 13 09:45:43 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4356: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 13 09:45:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1899346326' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 13 09:45:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 13 09:45:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2301024063' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 13 09:45:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 13 09:45:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2017994000' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 13 09:45:44 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 13 09:45:44 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195732084' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 13 09:45:44 compute-0 sudo[435811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:45:44 compute-0 sudo[435811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:45:44 compute-0 sudo[435811]: pam_unix(sudo:session): session closed for user root
Dec 13 09:45:44 compute-0 ceph-mon[76537]: pgmap v4356: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1899346326' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 13 09:45:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2301024063' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 13 09:45:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2017994000' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 13 09:45:44 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1195732084' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 13 09:45:44 compute-0 sudo[435859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 09:45:44 compute-0 sudo[435859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:45:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 13 09:45:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/857722775' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 13 09:45:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1113628931' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 13 09:45:45 compute-0 sudo[435859]: pam_unix(sudo:session): session closed for user root
Dec 13 09:45:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:45:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 09:45:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 09:45:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:45:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 09:45:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 09:45:45 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 09:45:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:45:45 compute-0 sudo[435990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:45:45 compute-0 sudo[435990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:45:45 compute-0 sudo[435990]: pam_unix(sudo:session): session closed for user root
Dec 13 09:45:45 compute-0 sudo[436015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 09:45:45 compute-0 sudo[436015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:45:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 13 09:45:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2308107918' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/857722775' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1113628931' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:45:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2308107918' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 09:45:45 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4357: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:45 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 13 09:45:45 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3360354539' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 13 09:45:45 compute-0 podman[436077]: 2025-12-13 09:45:45.924089855 +0000 UTC m=+0.042520457 container create afbb0573cf6878e40e1591875379415834a034d72b44cbe63bfbba8a2856eaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_beaver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 09:45:45 compute-0 systemd[1]: Started libpod-conmon-afbb0573cf6878e40e1591875379415834a034d72b44cbe63bfbba8a2856eaeb.scope.
Dec 13 09:45:45 compute-0 podman[436077]: 2025-12-13 09:45:45.90338302 +0000 UTC m=+0.021813642 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:45:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:45:46 compute-0 podman[436077]: 2025-12-13 09:45:46.021961856 +0000 UTC m=+0.140392478 container init afbb0573cf6878e40e1591875379415834a034d72b44cbe63bfbba8a2856eaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:45:46 compute-0 podman[436077]: 2025-12-13 09:45:46.031163914 +0000 UTC m=+0.149594516 container start afbb0573cf6878e40e1591875379415834a034d72b44cbe63bfbba8a2856eaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_beaver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:45:46 compute-0 podman[436077]: 2025-12-13 09:45:46.035248816 +0000 UTC m=+0.153679438 container attach afbb0573cf6878e40e1591875379415834a034d72b44cbe63bfbba8a2856eaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_beaver, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 09:45:46 compute-0 thirsty_beaver[436097]: 167 167
Dec 13 09:45:46 compute-0 systemd[1]: libpod-afbb0573cf6878e40e1591875379415834a034d72b44cbe63bfbba8a2856eaeb.scope: Deactivated successfully.
Dec 13 09:45:46 compute-0 conmon[436097]: conmon afbb0573cf6878e40e15 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-afbb0573cf6878e40e1591875379415834a034d72b44cbe63bfbba8a2856eaeb.scope/container/memory.events
Dec 13 09:45:46 compute-0 podman[436077]: 2025-12-13 09:45:46.039573543 +0000 UTC m=+0.158004145 container died afbb0573cf6878e40e1591875379415834a034d72b44cbe63bfbba8a2856eaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_beaver, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 09:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad0053baa2f71a7e83073a58a3efe88afaad156e6dbc73d4c99cebdf4d8b6906-merged.mount: Deactivated successfully.
Dec 13 09:45:46 compute-0 podman[436077]: 2025-12-13 09:45:46.088117549 +0000 UTC m=+0.206548151 container remove afbb0573cf6878e40e1591875379415834a034d72b44cbe63bfbba8a2856eaeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:45:46 compute-0 systemd[1]: libpod-conmon-afbb0573cf6878e40e1591875379415834a034d72b44cbe63bfbba8a2856eaeb.scope: Deactivated successfully.
Dec 13 09:45:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 13 09:45:46 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2878240809' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 13 09:45:46 compute-0 podman[436147]: 2025-12-13 09:45:46.265390712 +0000 UTC m=+0.041016220 container create bd33221d75377e703df89a81830adbae5509335212678670dbe2f18272242de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_saha, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 09:45:46 compute-0 nova_compute[248510]: 2025-12-13 09:45:46.270 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:46 compute-0 systemd[1]: Started libpod-conmon-bd33221d75377e703df89a81830adbae5509335212678670dbe2f18272242de0.scope.
Dec 13 09:45:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:45:46 compute-0 podman[436147]: 2025-12-13 09:45:46.245484087 +0000 UTC m=+0.021109615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54abfe327451598f993f5ccdb26c4e915a7030c30f8a27df7f46c4c140730ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54abfe327451598f993f5ccdb26c4e915a7030c30f8a27df7f46c4c140730ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54abfe327451598f993f5ccdb26c4e915a7030c30f8a27df7f46c4c140730ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54abfe327451598f993f5ccdb26c4e915a7030c30f8a27df7f46c4c140730ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54abfe327451598f993f5ccdb26c4e915a7030c30f8a27df7f46c4c140730ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:46 compute-0 podman[436147]: 2025-12-13 09:45:46.363022977 +0000 UTC m=+0.138648485 container init bd33221d75377e703df89a81830adbae5509335212678670dbe2f18272242de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 09:45:46 compute-0 podman[436147]: 2025-12-13 09:45:46.371246161 +0000 UTC m=+0.146871669 container start bd33221d75377e703df89a81830adbae5509335212678670dbe2f18272242de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 09:45:46 compute-0 podman[436147]: 2025-12-13 09:45:46.376185544 +0000 UTC m=+0.151811052 container attach bd33221d75377e703df89a81830adbae5509335212678670dbe2f18272242de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_saha, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 09:45:46 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23574 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:46 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:45:46 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23576 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:46 compute-0 ceph-mon[76537]: pgmap v4357: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3360354539' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 13 09:45:46 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2878240809' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 13 09:45:46 compute-0 ceph-mon[76537]: from='client.23574 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:46 compute-0 ceph-mon[76537]: from='client.23576 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:46 compute-0 nervous_saha[436185]: --> passed data devices: 0 physical, 3 LVM
Dec 13 09:45:46 compute-0 nervous_saha[436185]: --> All data devices are unavailable
Dec 13 09:45:46 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23578 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:46 compute-0 systemd[1]: libpod-bd33221d75377e703df89a81830adbae5509335212678670dbe2f18272242de0.scope: Deactivated successfully.
Dec 13 09:45:46 compute-0 podman[436147]: 2025-12-13 09:45:46.896269702 +0000 UTC m=+0.671895210 container died bd33221d75377e703df89a81830adbae5509335212678670dbe2f18272242de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_saha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e54abfe327451598f993f5ccdb26c4e915a7030c30f8a27df7f46c4c140730ac-merged.mount: Deactivated successfully.
Dec 13 09:45:46 compute-0 podman[436147]: 2025-12-13 09:45:46.958007605 +0000 UTC m=+0.733633113 container remove bd33221d75377e703df89a81830adbae5509335212678670dbe2f18272242de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:45:46 compute-0 systemd[1]: libpod-conmon-bd33221d75377e703df89a81830adbae5509335212678670dbe2f18272242de0.scope: Deactivated successfully.
Dec 13 09:45:47 compute-0 sudo[436015]: pam_unix(sudo:session): session closed for user root
Dec 13 09:45:47 compute-0 sudo[436278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:45:47 compute-0 sudo[436278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:45:47 compute-0 sudo[436278]: pam_unix(sudo:session): session closed for user root
Dec 13 09:45:47 compute-0 sudo[436324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- lvm list --format json
Dec 13 09:45:47 compute-0 sudo[436324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:45:47 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23580 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:47 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23582 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} v 0)
Dec 13 09:45:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:45:47 compute-0 nova_compute[248510]: 2025-12-13 09:45:47.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:47 compute-0 podman[436381]: 2025-12-13 09:45:47.494225863 +0000 UTC m=+0.048220249 container create f544a3e664b13cd43fccc3e20ac5b1c45a0d3b1b219cd1722641c7c5d01508da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shtern, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 09:45:47 compute-0 systemd[1]: Started libpod-conmon-f544a3e664b13cd43fccc3e20ac5b1c45a0d3b1b219cd1722641c7c5d01508da.scope.
Dec 13 09:45:47 compute-0 podman[436381]: 2025-12-13 09:45:47.472971095 +0000 UTC m=+0.026965501 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:45:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:45:47 compute-0 podman[436381]: 2025-12-13 09:45:47.59998371 +0000 UTC m=+0.153978126 container init f544a3e664b13cd43fccc3e20ac5b1c45a0d3b1b219cd1722641c7c5d01508da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:45:47 compute-0 podman[436381]: 2025-12-13 09:45:47.608604084 +0000 UTC m=+0.162598470 container start f544a3e664b13cd43fccc3e20ac5b1c45a0d3b1b219cd1722641c7c5d01508da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 09:45:47 compute-0 podman[436381]: 2025-12-13 09:45:47.612463079 +0000 UTC m=+0.166457475 container attach f544a3e664b13cd43fccc3e20ac5b1c45a0d3b1b219cd1722641c7c5d01508da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shtern, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 09:45:47 compute-0 magical_shtern[436410]: 167 167
Dec 13 09:45:47 compute-0 systemd[1]: libpod-f544a3e664b13cd43fccc3e20ac5b1c45a0d3b1b219cd1722641c7c5d01508da.scope: Deactivated successfully.
Dec 13 09:45:47 compute-0 podman[436381]: 2025-12-13 09:45:47.617475364 +0000 UTC m=+0.171469770 container died f544a3e664b13cd43fccc3e20ac5b1c45a0d3b1b219cd1722641c7c5d01508da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shtern, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 09:45:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-bedac40d1d9fca5ce3881fd13ab420704d53e2f1d681ab84f9845216ebfb7950-merged.mount: Deactivated successfully.
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 53821440 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:02.581951+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7c84000/0x0/0x4ffc00000, data 0x264ffde/0x2808000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x156fcea0), peers [1,2] op hist [0,0,0,0,0,0,0,3])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765387 data_alloc: 234881024 data_used: 10240546
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 54681600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:03.582111+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348504064 unmapped: 54681600 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:04.582309+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 54591488 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7931000/0x0/0x4ffc00000, data 0x29a1fde/0x2b5a000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x156fcea0), peers [1,2] op hist [0,0,0,0,0,0,0,8])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:05.582546+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 54362112 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:06.582695+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 54362112 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:07.582883+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3780615 data_alloc: 234881024 data_used: 11115042
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 54362112 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e78f6000/0x0/0x4ffc00000, data 0x29dcfde/0x2b95000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x156fcea0), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:08.583113+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 54362112 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:09.583219+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 54362112 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:10.583491+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348823552 unmapped: 54362112 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.410496235s of 10.694005013s, submitted: 99
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:11.583647+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 54222848 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e78f6000/0x0/0x4ffc00000, data 0x29dcfde/0x2b95000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x156fcea0), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:12.583924+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3779471 data_alloc: 234881024 data_used: 11119138
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 54222848 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:13.584122+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 54222848 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:14.584423+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 54222848 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:15.584617+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 54222848 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560009881000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:16.584857+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e78d2000/0x0/0x4ffc00000, data 0x2a01fde/0x2bba000, compress 0x0/0x0/0x0, omap 0x73160, meta 0x156fcea0), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 49774592 heap: 403185664 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560009881000 session 0x560007592e00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560000d76000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026db400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026db400 session 0x560002982700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560000d77c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560009d98380
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:17.585006+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3847922 data_alloc: 234881024 data_used: 11119138
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349306880 unmapped: 57556992 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:18.585203+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349306880 unmapped: 57556992 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:19.585407+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:20.585587+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:21.586034+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6e9e000/0x0/0x4ffc00000, data 0x3435fde/0x35ee000, compress 0x0/0x0/0x0, omap 0x72f9b, meta 0x156fd065), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:22.586349+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3847690 data_alloc: 234881024 data_used: 11119138
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:23.586532+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560009d98e00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:24.586690+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560009881000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560009881000 session 0x560002570fc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:25.586866+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 57540608 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026da800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026da800 session 0x560004c93340
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.456769943s of 14.890123367s, submitted: 32
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560007592000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:26.587000+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:27.587123+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3853640 data_alloc: 234881024 data_used: 11119138
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6e79000/0x0/0x4ffc00000, data 0x345a001/0x3613000, compress 0x0/0x0/0x0, omap 0x731f7, meta 0x156fce09), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:28.587249+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:29.587399+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 57073664 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:30.587585+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 55386112 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:31.588167+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6e79000/0x0/0x4ffc00000, data 0x345a001/0x3613000, compress 0x0/0x0/0x0, omap 0x731f7, meta 0x156fce09), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 55386112 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:32.588344+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3917080 data_alloc: 234881024 data_used: 21725730
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 55386112 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:33.588522+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 53272576 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:34.588680+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 53264384 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5cd9000/0x0/0x4ffc00000, data 0x345a001/0x3613000, compress 0x0/0x0/0x0, omap 0x731f7, meta 0x1689ce09), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:35.588819+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 53248000 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:36.588978+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 53248000 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:37.589112+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3917368 data_alloc: 234881024 data_used: 21725730
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 53248000 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:38.589262+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 53248000 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5cd9000/0x0/0x4ffc00000, data 0x345a001/0x3613000, compress 0x0/0x0/0x0, omap 0x7322b, meta 0x1689cdd5), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:39.589450+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.297067642s of 13.578221321s, submitted: 122
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 52723712 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:40.589618+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 358203392 unmapped: 48660480 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:41.589800+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e94000/0x0/0x4ffc00000, data 0x429f001/0x4458000, compress 0x0/0x0/0x0, omap 0x7322b, meta 0x1689cdd5), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 47448064 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:42.589979+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4027812 data_alloc: 234881024 data_used: 24445474
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 47448064 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e55000/0x0/0x4ffc00000, data 0x42d6001/0x448f000, compress 0x0/0x0/0x0, omap 0x7322b, meta 0x1689cdd5), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:43.590234+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 47448064 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:44.590402+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 47448064 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:45.590568+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 47448064 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e55000/0x0/0x4ffc00000, data 0x42d6001/0x448f000, compress 0x0/0x0/0x0, omap 0x7322b, meta 0x1689cdd5), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:46.590735+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:47.590908+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4021804 data_alloc: 234881024 data_used: 24445474
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e5a000/0x0/0x4ffc00000, data 0x42d9001/0x4492000, compress 0x0/0x0/0x0, omap 0x7322b, meta 0x1689cdd5), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:48.591098+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:49.591254+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:50.591466+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.182911873s of 11.664206505s, submitted: 140
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:51.591605+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:52.591750+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e5a000/0x0/0x4ffc00000, data 0x42d9001/0x4492000, compress 0x0/0x0/0x0, omap 0x7325f, meta 0x1689cda1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4023596 data_alloc: 234881024 data_used: 24547874
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:53.591925+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:54.592112+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e5a000/0x0/0x4ffc00000, data 0x42d9001/0x4492000, compress 0x0/0x0/0x0, omap 0x7325f, meta 0x1689cda1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:55.592272+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 47439872 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560002726700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600012321c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:56.592420+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560009881000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560009881000 session 0x560000e1c540
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 50470912 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:57.592576+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796182 data_alloc: 234881024 data_used: 11127330
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 50470912 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:58.592724+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 50470912 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e66fc000/0x0/0x4ffc00000, data 0x2a12fde/0x2bcb000, compress 0x0/0x0/0x0, omap 0x73717, meta 0x1689c8e9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:13:59.592955+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e66fc000/0x0/0x4ffc00000, data 0x2a12fde/0x2bcb000, compress 0x0/0x0/0x0, omap 0x73717, meta 0x1689c8e9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 50470912 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:00.593195+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e66fc000/0x0/0x4ffc00000, data 0x2a12fde/0x2bcb000, compress 0x0/0x0/0x0, omap 0x73717, meta 0x1689c8e9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 356392960 unmapped: 50470912 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 podman[436381]: 2025-12-13 09:45:47.661246971 +0000 UTC m=+0.215241357 container remove f544a3e664b13cd43fccc3e20ac5b1c45a0d3b1b219cd1722641c7c5d01508da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000247f400 session 0x5600027e0c40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d9800 session 0x560004c921c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:01.593372+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.963108063s of 10.171596527s, submitted: 62
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c49500
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:02.593571+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:03.593824+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:04.594042+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:05.594344+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:06.594523+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:07.594768+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:08.594967+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:09.595152+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:10.595371+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:11.596012+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:12.596179+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:13.596339+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:14.596472+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:15.596641+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:16.596842+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:17.596994+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:18.597149+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:19.597375+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:20.597592+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:21.597818+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:22.598021+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:23.598246+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:24.598402+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:25.598618+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:26.598819+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:27.599027+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:28.599273+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:29.599510+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:30.599726+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:31.599896+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:32.600205+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:33.600480+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:34.600629+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:35.600883+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:36.601059+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:37.601316+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:38.601508+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:39.601704+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:40.601928+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:41.602117+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:42.602260+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:43.602456+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:44.602613+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:45.602801+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:46.603005+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:47.603273+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:48.603444+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:49.603647+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:50.603838+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:51.604005+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:52.604184+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617678 data_alloc: 218103808 data_used: 804863
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:53.604328+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:54.604485+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:55.604638+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 57548800 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:56.604835+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e752a000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x739fb, meta 0x1689c605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560003966fc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560004c92a80
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560009881000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560009881000 session 0x560004c58380
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600030dea80
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 55.332927704s of 55.392375946s, submitted: 35
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 52822016 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x5600027416c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002934700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d9800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d9800 session 0x5600014a2380
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600094f8000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600094f8000 session 0x5600011a6e00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600030df6c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:57.605035+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659110 data_alloc: 218103808 data_used: 808861
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 57524224 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:58.605235+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x73a83, meta 0x1689c57d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 57524224 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:14:59.605436+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x73a83, meta 0x1689c57d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 57524224 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:00.605647+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 57524224 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:01.605848+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 57524224 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:02.606002+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659110 data_alloc: 218103808 data_used: 808861
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349339648 unmapped: 57524224 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:03.606156+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x73a83, meta 0x1689c57d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:04.606404+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:05.606573+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:06.606754+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:07.606953+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659110 data_alloc: 218103808 data_used: 808861
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:08.607140+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349347840 unmapped: 57516032 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x73a83, meta 0x1689c57d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:09.607319+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 57507840 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:10.607551+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349356032 unmapped: 57507840 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:11.607725+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 57499648 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:12.607969+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659110 data_alloc: 218103808 data_used: 808861
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349364224 unmapped: 57499648 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:13.608194+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.549066544s of 16.679725647s, submitted: 15
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560004c928c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 57491456 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d9800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:14.608419+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 57491456 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:15.608572+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:16.608690+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:17.608851+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688687 data_alloc: 218103808 data_used: 5624237
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:18.609008+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:19.609168+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:20.609388+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:21.609510+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:22.609826+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688687 data_alloc: 218103808 data_used: 5624237
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:23.609994+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:24.610258+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 57483264 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:25.610463+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.406724930s of 12.420920372s, submitted: 7
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 55599104 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:26.610607+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e74ee000/0x0/0x4ffc00000, data 0x1c45f8c/0x1dfe000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,6])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 57860096 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:27.610866+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717479 data_alloc: 218103808 data_used: 5706157
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 349003776 unmapped: 57860096 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e702e000/0x0/0x4ffc00000, data 0x2105f8c/0x22be000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:28.611022+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 56582144 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:29.611228+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6fe8000/0x0/0x4ffc00000, data 0x214af8c/0x2303000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6fa9000/0x0/0x4ffc00000, data 0x2189f8c/0x2342000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 56582144 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:30.611396+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 56582144 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:31.611655+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6fa9000/0x0/0x4ffc00000, data 0x2189f8c/0x2342000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 56582144 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:32.611842+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6fa9000/0x0/0x4ffc00000, data 0x2189f8c/0x2342000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726589 data_alloc: 218103808 data_used: 5697965
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 56582144 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:33.612166+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6fa9000/0x0/0x4ffc00000, data 0x2189f8c/0x2342000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350281728 unmapped: 56582144 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:34.612507+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:35.612770+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:36.612987+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f89000/0x0/0x4ffc00000, data 0x21aaf8c/0x2363000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:37.613132+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3724821 data_alloc: 218103808 data_used: 5697965
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:38.613353+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f89000/0x0/0x4ffc00000, data 0x21aaf8c/0x2363000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:39.613493+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:40.613655+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 56844288 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:41.613819+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f89000/0x0/0x4ffc00000, data 0x21aaf8c/0x2363000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 56836096 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:42.614003+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3724821 data_alloc: 218103808 data_used: 5697965
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 56836096 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:43.614201+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.264011383s of 17.528238297s, submitted: 41
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 56836096 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:44.614366+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000a3bf000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000a3bf000 session 0x560002740c40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x5600030df880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002dd7800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002dd7800 session 0x560002570700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 56836096 heap: 406863872 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:45.614488+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560003966a80
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f83000/0x0/0x4ffc00000, data 0x21b0f8c/0x2369000, compress 0x0/0x0/0x0, omap 0x7415f, meta 0x1689bea1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560002571880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560004556c40
Dec 13 09:45:47 compute-0 systemd[1]: libpod-conmon-f544a3e664b13cd43fccc3e20ac5b1c45a0d3b1b219cd1722641c7c5d01508da.scope: Deactivated successfully.
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000a3bf000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000a3bf000 session 0x5600011a6e00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002489800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002489800 session 0x560004c921c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002570700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 68214784 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:46.614738+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:47.615432+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 68214784 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812375 data_alloc: 218103808 data_used: 5697965
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:48.616493+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 68214784 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:49.616639+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 68214784 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:50.616798+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 68214784 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d3000/0x0/0x4ffc00000, data 0x2f5ffee/0x3119000, compress 0x0/0x0/0x0, omap 0x7436e, meta 0x1689bc92), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:51.616955+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:52.617127+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d3000/0x0/0x4ffc00000, data 0x2f5ffee/0x3119000, compress 0x0/0x0/0x0, omap 0x7436e, meta 0x1689bc92), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3812375 data_alloc: 218103808 data_used: 5697965
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:53.617315+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560004557500
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:54.617444+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560004556000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:55.617801+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d3000/0x0/0x4ffc00000, data 0x2f5ffee/0x3119000, compress 0x0/0x0/0x0, omap 0x7436e, meta 0x1689bc92), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:56.618137+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000a3bf000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000a3bf000 session 0x560009d99a40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:57.618462+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000308a800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.645063400s of 13.876958847s, submitted: 50
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000308a800 session 0x560004c58000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3813085 data_alloc: 218103808 data_used: 5697965
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:58.618679+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 350208000 unmapped: 68206592 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:15:59.618806+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:00.619031+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:01.619220+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d2000/0x0/0x4ffc00000, data 0x2f5fffe/0x311a000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:02.619452+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3896829 data_alloc: 234881024 data_used: 19757997
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:03.620154+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d2000/0x0/0x4ffc00000, data 0x2f5fffe/0x311a000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:04.620393+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:05.620565+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:06.620763+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:07.620911+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d2000/0x0/0x4ffc00000, data 0x2f5fffe/0x311a000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3896829 data_alloc: 234881024 data_used: 19757997
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:08.621177+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:09.621356+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 64880640 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.166196823s of 12.185412407s, submitted: 5
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:10.621566+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61d2000/0x0/0x4ffc00000, data 0x2f5fffe/0x311a000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [0,0,0,0,0,1])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 59465728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:11.621744+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 57892864 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:12.621891+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3980347 data_alloc: 234881024 data_used: 21101485
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:13.622144+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:14.622374+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e54ca000/0x0/0x4ffc00000, data 0x3c67ffe/0x3e22000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:15.622494+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:16.622663+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:17.622826+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3978539 data_alloc: 234881024 data_used: 21105581
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:18.622945+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:19.623063+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:20.623262+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e54a6000/0x0/0x4ffc00000, data 0x3c8bffe/0x3e46000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:21.623415+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:22.623573+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3979179 data_alloc: 234881024 data_used: 21126061
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:23.623715+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e54a6000/0x0/0x4ffc00000, data 0x3c8bffe/0x3e46000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:24.623884+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e54a6000/0x0/0x4ffc00000, data 0x3c8bffe/0x3e46000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:25.624044+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.691843987s of 15.793980598s, submitted: 146
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 57630720 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e54a6000/0x0/0x4ffc00000, data 0x3c8bffe/0x3e46000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:26.624224+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 57614336 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:27.624399+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 57614336 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e548b000/0x0/0x4ffc00000, data 0x3ca6ffe/0x3e61000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:28.624572+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3979211 data_alloc: 234881024 data_used: 21126061
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e548b000/0x0/0x4ffc00000, data 0x3ca6ffe/0x3e61000, compress 0x0/0x0/0x0, omap 0x743f6, meta 0x1689bc0a), peers [1,2] op hist [0,0,0,0,0,0,0,0,2])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560004c93c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 57614336 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca5000 session 0x560004c59500
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:29.624742+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 57614336 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:30.624905+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 57614336 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560001232c40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:31.625051+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 61202432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:32.625227+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 61202432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:33.625403+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741268 data_alloc: 218103808 data_used: 5706157
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 61202432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f76000/0x0/0x4ffc00000, data 0x21bcf8c/0x2375000, compress 0x0/0x0/0x0, omap 0x745c7, meta 0x1689ba39), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:34.625530+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 61202432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:35.625686+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560004c92fc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.019501686s of 10.248433113s, submitted: 46
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d9800 session 0x560002934c40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 357212160 unmapped: 61202432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:36.625830+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:37.625996+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600030dfdc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:38.626165+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:39.626335+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:40.626515+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:41.626667+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:42.626810+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:43.627099+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:44.627273+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:45.627418+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:46.627606+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:47.627737+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: mgrc ms_handle_reset ms_handle_reset con 0x560002f13800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: get_auth_request con 0x5600026d9800 auth_method 0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: mgrc handle_mgr_configure stats_period=5
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:48.627915+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:49.628102+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:50.628291+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:51.628438+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:52.628666+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560006a00800 session 0x5600027261c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d7800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:53.628830+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:54.628983+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:55.629116+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:56.629254+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:57.629453+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:58.629642+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:16:59.629806+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:00.630024+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:01.630391+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:02.630532+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 64585728 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:03.630801+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:04.630945+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:05.631149+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:06.631296+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:07.631493+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:08.631674+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:09.631891+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 64577536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:10.632192+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 64569344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:11.632357+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 64569344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:12.632550+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 64569344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560006a00800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560006a00800 session 0x560002c64380
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560009d996c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560003966e00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000a3bf000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000a3bf000 session 0x5600030defc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:13.632704+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.651481628s of 37.857086182s, submitted: 26
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643648 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x74786, meta 0x1689b87a), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 64569344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f8c/0x181f000, compress 0x0/0x0/0x0, omap 0x7480e, meta 0x1689b7f2), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:14.632902+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 353853440 unmapped: 64561152 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:15.633033+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600030df340
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560002c43180
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 64241664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600029348c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560006a00800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560006a00800 session 0x560002740540
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002f13400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002f13400 session 0x560002982a80
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:16.633213+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 64241664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:17.633401+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 64241664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:18.633548+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560001233500
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3713637 data_alloc: 218103808 data_used: 812875
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 64241664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x5600024be1c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:19.633671+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 64241664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7069000/0x0/0x4ffc00000, data 0x20caf8c/0x2283000, compress 0x0/0x0/0x0, omap 0x74ab6, meta 0x1689b54a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560000d77180
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560006a00800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:20.633851+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 64086016 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560006a00800 session 0x560002740700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:21.634003+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002de6800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:22.634187+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:23.634324+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7043000/0x0/0x4ffc00000, data 0x20eefbe/0x22a9000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3782319 data_alloc: 234881024 data_used: 11086682
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:24.634503+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:25.634705+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:26.634963+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7043000/0x0/0x4ffc00000, data 0x20eefbe/0x22a9000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:27.635135+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:28.635317+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3785519 data_alloc: 234881024 data_used: 11616090
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:29.635514+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:30.635774+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:31.635935+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7043000/0x0/0x4ffc00000, data 0x20eefbe/0x22a9000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:32.636152+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 64069632 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:33.636370+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3785903 data_alloc: 234881024 data_used: 11628378
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.954565048s of 20.221033096s, submitted: 39
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 356638720 unmapped: 61775872 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:34.636491+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 58318848 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:35.636610+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 58089472 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:36.636757+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 58089472 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6142000/0x0/0x4ffc00000, data 0x2feffbe/0x31aa000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:37.636900+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 58089472 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:38.637059+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3897989 data_alloc: 234881024 data_used: 13733722
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 57991168 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:39.637208+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6142000/0x0/0x4ffc00000, data 0x2feffbe/0x31aa000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 57991168 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:40.637409+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:41.637583+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6140000/0x0/0x4ffc00000, data 0x2ff1fbe/0x31ac000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:42.637775+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:43.637973+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3890853 data_alloc: 234881024 data_used: 13733722
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:44.638145+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:45.638345+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6140000/0x0/0x4ffc00000, data 0x2ff1fbe/0x31ac000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:46.638499+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.321171761s of 12.761690140s, submitted: 114
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:47.638718+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e613f000/0x0/0x4ffc00000, data 0x2ff2fbe/0x31ad000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:48.638966+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e613f000/0x0/0x4ffc00000, data 0x2ff2fbe/0x31ad000, compress 0x0/0x0/0x0, omap 0x74d12, meta 0x1689b2ee), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3891309 data_alloc: 234881024 data_used: 13741914
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 58236928 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000144d000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000144d000 session 0x560003966a80
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:49.639241+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560003967880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x5600004ab6c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560001fb4c40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560006a00800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560006a00800 session 0x560001fbea80
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:50.639474+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:51.639640+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:52.639797+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c73000/0x0/0x4ffc00000, data 0x34befbe/0x3679000, compress 0x0/0x0/0x0, omap 0x74d9a, meta 0x1689b266), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:53.639943+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922455 data_alloc: 234881024 data_used: 13741914
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:54.640213+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c73000/0x0/0x4ffc00000, data 0x34befbe/0x3679000, compress 0x0/0x0/0x0, omap 0x74d9a, meta 0x1689b266), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:55.640414+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:56.640584+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d9400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d9400 session 0x560004c588c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:57.640754+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c73000/0x0/0x4ffc00000, data 0x34befbe/0x3679000, compress 0x0/0x0/0x0, omap 0x74d9a, meta 0x1689b266), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600003ee8c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 58220544 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:58.640936+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560002744e00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.030322075s of 12.168992996s, submitted: 8
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560001fbfa40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3924831 data_alloc: 234881024 data_used: 13741914
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 58212352 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560006a00800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:17:59.641146+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026dc000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 58130432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:00.641340+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363020288 unmapped: 55394304 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:01.641507+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363020288 unmapped: 55394304 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:02.641695+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363020288 unmapped: 55394304 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c4f000/0x0/0x4ffc00000, data 0x34e2fbe/0x369d000, compress 0x0/0x0/0x0, omap 0x74e22, meta 0x1689b1de), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:03.642268+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3954911 data_alloc: 234881024 data_used: 18773850
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363020288 unmapped: 55394304 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:04.642402+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363020288 unmapped: 55394304 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:05.642561+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c4f000/0x0/0x4ffc00000, data 0x34e2fbe/0x369d000, compress 0x0/0x0/0x0, omap 0x74e22, meta 0x1689b1de), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 55386112 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:06.642821+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 55386112 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:07.643011+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c4d000/0x0/0x4ffc00000, data 0x34e3fbe/0x369e000, compress 0x0/0x0/0x0, omap 0x74e22, meta 0x1689b1de), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 55386112 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:08.643203+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3955559 data_alloc: 234881024 data_used: 18773850
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5c4d000/0x0/0x4ffc00000, data 0x34e3fbe/0x369e000, compress 0x0/0x0/0x0, omap 0x74e22, meta 0x1689b1de), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 55386112 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:09.643387+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 55386112 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:10.643578+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.978725433s of 11.995156288s, submitted: 6
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366739456 unmapped: 51675136 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:11.643852+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366739456 unmapped: 51675136 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:12.643975+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:13.644120+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4010013 data_alloc: 234881024 data_used: 19072858
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5463000/0x0/0x4ffc00000, data 0x3ccefbe/0x3e89000, compress 0x0/0x0/0x0, omap 0x74e22, meta 0x1689b1de), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:14.644281+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:15.644439+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:16.644607+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:17.644831+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5463000/0x0/0x4ffc00000, data 0x3ccefbe/0x3e89000, compress 0x0/0x0/0x0, omap 0x74e22, meta 0x1689b1de), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:18.645185+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4010013 data_alloc: 234881024 data_used: 19072858
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:19.645380+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:20.645631+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.403758049s of 10.652852058s, submitted: 44
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:21.645785+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000247f000 session 0x560001232e00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026dc400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:22.645949+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5462000/0x0/0x4ffc00000, data 0x3ccffbe/0x3e8a000, compress 0x0/0x0/0x0, omap 0x74eaa, meta 0x1689b156), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 51658752 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:23.646156+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4009941 data_alloc: 234881024 data_used: 19138394
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026dc000 session 0x560004c59880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560006a00800 session 0x560002745880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 51650560 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:24.646310+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5462000/0x0/0x4ffc00000, data 0x3ccffbe/0x3e8a000, compress 0x0/0x0/0x0, omap 0x74eaa, meta 0x1689b156), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600027e16c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 51634176 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:25.646504+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e613d000/0x0/0x4ffc00000, data 0x2ff4fbe/0x31af000, compress 0x0/0x0/0x0, omap 0x74fba, meta 0x1689b046), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 51634176 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:26.646671+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 51634176 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:27.646877+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e613d000/0x0/0x4ffc00000, data 0x2ff4fbe/0x31af000, compress 0x0/0x0/0x0, omap 0x74fba, meta 0x1689b046), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 51634176 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:28.647038+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3900837 data_alloc: 234881024 data_used: 13807450
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 51634176 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:29.647197+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366788608 unmapped: 51625984 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:30.647406+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366788608 unmapped: 51625984 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:31.647582+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.783339500s of 10.808708191s, submitted: 14
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x5600027448c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002de6800 session 0x560002741180
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e613d000/0x0/0x4ffc00000, data 0x2ff4fbe/0x31af000, compress 0x0/0x0/0x0, omap 0x74fba, meta 0x1689b046), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 51617792 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:32.647703+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560002570700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aa8000/0x0/0x4ffc00000, data 0x168afae/0x1844000, compress 0x0/0x0/0x0, omap 0x752a6, meta 0x1689ad5a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:33.647818+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:34.647955+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:35.648096+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:36.648231+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:37.648369+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:38.648567+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:39.648722+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:40.648895+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:41.649139+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:42.649308+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:43.649476+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:44.649648+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:45.649838+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:46.650057+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:47.650263+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:48.650491+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:49.650651+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:50.650867+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:51.651049+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:52.651293+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:53.651496+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:54.651718+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:55.651929+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:56.652145+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:57.652341+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:58.652510+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:18:59.652656+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:00.653563+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:01.657630+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:02.658459+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:03.658718+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:04.659375+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:05.659821+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:06.660166+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:07.660361+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:08.660609+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:09.661229+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:10.661849+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 56377344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:11.662119+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362045440 unmapped: 56369152 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:12.662590+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362045440 unmapped: 56369152 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:13.662933+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362045440 unmapped: 56369152 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670695 data_alloc: 218103808 data_used: 812859
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:14.663376+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362045440 unmapped: 56369152 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.719680786s of 42.821624756s, submitted: 56
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:15.663731+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:16.663912+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:17.664252+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:18.664567+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670631 data_alloc: 218103808 data_used: 816920
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:19.664836+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75502, meta 0x1689aafe), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:20.665169+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:21.665379+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:22.665689+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 56352768 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002c64fc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560000d76e00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560002741880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560004ad6e00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:23.665900+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3732853 data_alloc: 218103808 data_used: 816920
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:24.666111+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368492544 unmapped: 49922048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x5600029341c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002de6800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002de6800 session 0x5600025701c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x55ffffffe000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.353059769s of 10.003663063s, submitted: 51
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:25.666297+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560002c42a80
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600026b8540
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362160128 unmapped: 56254464 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e73a9000/0x0/0x4ffc00000, data 0x1d8afa5/0x1f43000, compress 0x0/0x0/0x0, omap 0x7575e, meta 0x1689a8a2), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:26.666502+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362160128 unmapped: 56254464 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:27.666707+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362160128 unmapped: 56254464 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:28.666955+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362160128 unmapped: 56254464 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718973 data_alloc: 218103808 data_used: 816920
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:29.667258+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362160128 unmapped: 56254464 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560004c921c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:30.667449+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362160128 unmapped: 56254464 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026dc000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026dc000 session 0x5600004ab880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560000d776c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:31.667586+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x5600027e0c40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e73a9000/0x0/0x4ffc00000, data 0x1d8afde/0x1f43000, compress 0x0/0x0/0x0, omap 0x75792, meta 0x1689a86e), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:32.667709+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:33.667823+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758867 data_alloc: 218103808 data_used: 7178008
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:34.667982+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:35.668134+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e73a8000/0x0/0x4ffc00000, data 0x1d8afee/0x1f44000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:36.668257+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:37.668425+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:38.668588+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e73a8000/0x0/0x4ffc00000, data 0x1d8afee/0x1f44000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e73a8000/0x0/0x4ffc00000, data 0x1d8afee/0x1f44000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758867 data_alloc: 218103808 data_used: 7178008
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:39.669298+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e73a8000/0x0/0x4ffc00000, data 0x1d8afee/0x1f44000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:40.669568+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:41.669768+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:42.669909+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:43.670127+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 56238080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.259466171s of 18.754104614s, submitted: 11
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3786887 data_alloc: 218103808 data_used: 7796504
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:44.670299+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363020288 unmapped: 55394304 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f85000/0x0/0x4ffc00000, data 0x21a7fee/0x2361000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:45.670444+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:46.670657+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f68000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:47.670843+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f68000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:48.671014+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796901 data_alloc: 218103808 data_used: 7956248
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:49.671131+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:50.671352+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f68000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f68000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:51.671506+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:52.671675+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f68000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:53.671823+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796901 data_alloc: 218103808 data_used: 7956248
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:54.671983+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:55.672160+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:56.672344+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:57.672496+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f68000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7581a, meta 0x1689a7e6), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:58.672676+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364077056 unmapped: 54337536 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3797157 data_alloc: 218103808 data_used: 7964440
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:19:59.672824+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364085248 unmapped: 54329344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:00.673175+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364085248 unmapped: 54329344 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.286857605s of 17.486074448s, submitted: 64
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x5600030df500
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:01.673320+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d6800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d6800 session 0x560000d76c40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002488800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002488800 session 0x560000630700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600011a7c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560000631dc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f77000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x7593f, meta 0x1689a6c1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 54190080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:02.673511+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6acf000/0x0/0x4ffc00000, data 0x2663fee/0x281d000, compress 0x0/0x0/0x0, omap 0x759ef, meta 0x1689a611), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 54190080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:03.673677+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 54190080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3821490 data_alloc: 218103808 data_used: 7968536
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:04.673812+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 54190080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:05.673954+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 54190080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:06.674126+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 54190080 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:07.674313+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364232704 unmapped: 54181888 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6acf000/0x0/0x4ffc00000, data 0x2663fee/0x281d000, compress 0x0/0x0/0x0, omap 0x759ef, meta 0x1689a611), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d6800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d6800 session 0x560002c42700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:08.674438+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364232704 unmapped: 54181888 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3821490 data_alloc: 218103808 data_used: 7968536
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x5600027e08c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:09.674612+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364232704 unmapped: 54181888 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:10.674817+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003084400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003084400 session 0x560001fb4c40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c64000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 54034432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.996580124s of 10.098914146s, submitted: 19
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d6800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:11.675259+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 54034432 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:12.675446+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6aaa000/0x0/0x4ffc00000, data 0x2687ffe/0x2842000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:13.675587+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3854464 data_alloc: 234881024 data_used: 12761880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:14.675738+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:15.675886+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6aaa000/0x0/0x4ffc00000, data 0x2687ffe/0x2842000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:16.676052+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:17.676236+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6aaa000/0x0/0x4ffc00000, data 0x2687ffe/0x2842000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:18.676384+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3854464 data_alloc: 234881024 data_used: 12761880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:19.676526+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6aaa000/0x0/0x4ffc00000, data 0x2687ffe/0x2842000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:20.676712+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:21.676845+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 54018048 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:22.676996+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.363173485s of 11.364879608s, submitted: 1
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365477888 unmapped: 52936704 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:23.677137+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3950244 data_alloc: 234881024 data_used: 13623064
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:24.677309+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5e72000/0x0/0x4ffc00000, data 0x32b7ffe/0x3472000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:25.677494+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5e72000/0x0/0x4ffc00000, data 0x32b7ffe/0x3472000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:26.677631+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:27.677791+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:28.677955+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:29.678175+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3950500 data_alloc: 234881024 data_used: 13631256
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:30.678445+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5e72000/0x0/0x4ffc00000, data 0x32b7ffe/0x3472000, compress 0x0/0x0/0x0, omap 0x75a77, meta 0x1689a589), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 50569216 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:31.678660+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 51224576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:32.678823+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 51224576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:33.678995+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.801392555s of 11.098257065s, submitted: 90
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560003967880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d6800 session 0x560001410c40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 51224576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x560000d77500
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:34.679221+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3805479 data_alloc: 218103808 data_used: 8017688
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 55025664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:35.679439+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 55025664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6f77000/0x0/0x4ffc00000, data 0x21bbfee/0x2375000, compress 0x0/0x0/0x0, omap 0x75c79, meta 0x1689a387), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:36.679573+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 55025664 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:37.679728+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002c43a40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560004557dc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363397120 unmapped: 55017472 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c656c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:38.679916+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:39.680114+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:40.680362+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:41.680548+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:42.680781+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:43.680928+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:44.681036+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:45.681196+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:46.681395+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:47.681562+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:48.681762+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:49.681917+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:50.682118+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:51.682292+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:52.682417+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:53.682564+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:54.682718+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:55.682825+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:56.682974+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:57.683170+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:58.683345+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:20:59.683530+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:00.683717+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:01.683880+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:02.684055+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:03.684328+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:04.684632+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:05.684889+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:06.685142+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:07.685329+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:08.685464+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:09.685664+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:10.685949+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:11.686162+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:12.686408+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:13.686579+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 56344576 heap: 418414592 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:14.686764+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693216 data_alloc: 218103808 data_used: 816920
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.016468048s of 41.142299652s, submitted: 65
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x75f5d, meta 0x1689a0a3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366968832 unmapped: 58802176 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560002c428c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:15.687062+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362250240 unmapped: 63520768 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:16.687520+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362250240 unmapped: 63520768 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:17.687756+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362250240 unmapped: 63520768 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:18.688140+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7063000/0x0/0x4ffc00000, data 0x20d1f7c/0x2289000, compress 0x0/0x0/0x0, omap 0x75fe5, meta 0x1689a01b), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362250240 unmapped: 63520768 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:19.688382+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757852 data_alloc: 218103808 data_used: 820918
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362258432 unmapped: 63512576 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:20.688601+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362258432 unmapped: 63512576 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:21.688747+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7063000/0x0/0x4ffc00000, data 0x20d1f7c/0x2289000, compress 0x0/0x0/0x0, omap 0x75fe5, meta 0x1689a01b), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362258432 unmapped: 63512576 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600030de700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:22.688923+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d6800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600026d6800 session 0x560002982a80
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362258432 unmapped: 63512576 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c65500
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:23.689094+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560004ad6fc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 63586304 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:24.689240+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762768 data_alloc: 218103808 data_used: 820918
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 63586304 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:25.689358+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:26.689571+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7061000/0x0/0x4ffc00000, data 0x20d1faf/0x228b000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:27.689782+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7061000/0x0/0x4ffc00000, data 0x20d1faf/0x228b000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:28.689982+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:29.690213+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3827152 data_alloc: 234881024 data_used: 11658934
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:30.690465+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:31.690597+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7061000/0x0/0x4ffc00000, data 0x20d1faf/0x228b000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:32.690790+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:33.690993+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 183K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.76 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2584 writes, 10K keys, 2584 commit groups, 1.0 writes per commit group, ingest: 11.17 MB, 0.02 MB/s
                                           Interval WAL: 2585 writes, 1016 syncs, 2.54 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7061000/0x0/0x4ffc00000, data 0x20d1faf/0x228b000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:34.691211+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3827152 data_alloc: 234881024 data_used: 11658934
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 62808064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:35.691442+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.893293381s of 21.064565659s, submitted: 24
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 61898752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:36.691618+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 61898752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:37.691817+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 367263744 unmapped: 58507264 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:38.692004+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e61ae000/0x0/0x4ffc00000, data 0x2f7efaf/0x3138000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 59162624 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:39.692179+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3913484 data_alloc: 234881024 data_used: 12236470
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 58933248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:40.692349+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 58933248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:41.692537+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 58933248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e617b000/0x0/0x4ffc00000, data 0x2faffaf/0x3169000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:42.692694+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 58933248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:43.692856+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 58933248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:44.692999+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e617b000/0x0/0x4ffc00000, data 0x2faffaf/0x3169000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3915456 data_alloc: 234881024 data_used: 12215990
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 58933248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:45.693170+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:46.693353+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:47.693507+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:48.693729+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6180000/0x0/0x4ffc00000, data 0x2fb2faf/0x316c000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:49.693949+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911784 data_alloc: 234881024 data_used: 12215990
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:50.694155+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:51.694274+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6180000/0x0/0x4ffc00000, data 0x2fb2faf/0x316c000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:52.694456+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:53.694673+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:54.694877+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911784 data_alloc: 234881024 data_used: 12215990
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:55.695057+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:56.695234+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:57.695409+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6180000/0x0/0x4ffc00000, data 0x2fb2faf/0x316c000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:58.695630+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6180000/0x0/0x4ffc00000, data 0x2fb2faf/0x316c000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:21:59.695796+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911784 data_alloc: 234881024 data_used: 12215990
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:00.696009+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:01.696141+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:02.696310+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 59056128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:03.696482+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 59056128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:04.696656+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911784 data_alloc: 234881024 data_used: 12215990
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e6180000/0x0/0x4ffc00000, data 0x2fb2faf/0x316c000, compress 0x0/0x0/0x0, omap 0x762d1, meta 0x16899d2f), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 59056128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:05.696794+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 59056128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:06.696986+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 59056128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:07.697138+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.367116928s of 32.255329132s, submitted: 118
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 59047936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x560004557880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:08.697281+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca4c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca4c00 session 0x560000630700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd400 session 0x560002c43180
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600030df340
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca4c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca4c00 session 0x560002745880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 59015168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:09.697469+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3951599 data_alloc: 234881024 data_used: 12215990
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 59015168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5afe000/0x0/0x4ffc00000, data 0x3634faf/0x37ee000, compress 0x0/0x0/0x0, omap 0x7647f, meta 0x16899b81), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:10.697626+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 59006976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:11.697836+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 59006976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:12.698031+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 59006976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5afe000/0x0/0x4ffc00000, data 0x3634faf/0x37ee000, compress 0x0/0x0/0x0, omap 0x7647f, meta 0x16899b81), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:13.698314+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 59006976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:14.698460+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3951887 data_alloc: 234881024 data_used: 12215990
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 59006976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x5600024be700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:15.698584+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 58851328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:16.699567+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 58851328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:17.699711+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:18.699860+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5ad9000/0x0/0x4ffc00000, data 0x3658fd2/0x3813000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:19.700029+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3996452 data_alloc: 234881024 data_used: 19041974
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:20.700268+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:21.700415+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:22.700696+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5ad9000/0x0/0x4ffc00000, data 0x3658fd2/0x3813000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:23.700871+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:24.701018+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3996452 data_alloc: 234881024 data_used: 19041974
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5ad9000/0x0/0x4ffc00000, data 0x3658fd2/0x3813000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 57139200 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:25.701138+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.759237289s of 17.944322586s, submitted: 27
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 57090048 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:26.701295+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e5ad9000/0x0/0x4ffc00000, data 0x3658fd2/0x3813000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 57090048 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:27.701450+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 370524160 unmapped: 55246848 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:28.701613+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e547c000/0x0/0x4ffc00000, data 0x3cb5fd2/0x3e70000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372178944 unmapped: 53592064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:29.701741+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4075406 data_alloc: 234881024 data_used: 19361462
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372178944 unmapped: 53592064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:30.701929+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372178944 unmapped: 53592064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:31.702118+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372178944 unmapped: 53592064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:32.702269+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e4d000/0x0/0x4ffc00000, data 0x42e4fd2/0x449f000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372211712 unmapped: 53559296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:33.702481+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372211712 unmapped: 53559296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:34.702648+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4083418 data_alloc: 234881024 data_used: 19361462
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372211712 unmapped: 53559296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:35.702805+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372211712 unmapped: 53559296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:36.703005+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e4d000/0x0/0x4ffc00000, data 0x42e4fd2/0x449f000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372211712 unmapped: 53559296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:37.703206+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372219904 unmapped: 53551104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:38.703396+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372219904 unmapped: 53551104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:39.703562+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4083546 data_alloc: 234881024 data_used: 19365558
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e4e4d000/0x0/0x4ffc00000, data 0x42e4fd2/0x449f000, compress 0x0/0x0/0x0, omap 0x76507, meta 0x16899af9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372228096 unmapped: 53542912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:40.703790+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 372228096 unmapped: 53542912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:41.703936+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.653168678s of 15.359889030s, submitted: 89
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x560001232380
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095c00 session 0x560004556000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095c00 session 0x560004c581c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e617f000/0x0/0x4ffc00000, data 0x2fb2faf/0x316c000, compress 0x0/0x0/0x0, omap 0x7667c, meta 0x16899984), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:42.704138+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:43.704366+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:44.704625+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3925462 data_alloc: 234881024 data_used: 12207798
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:45.704758+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:46.704922+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e617e000/0x0/0x4ffc00000, data 0x2fb3faf/0x316d000, compress 0x0/0x0/0x0, omap 0x7667c, meta 0x16899984), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:47.705119+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 57450496 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:48.705604+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x55fffe9d5180
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600027408c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c42fc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:49.705779+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:50.705983+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:51.706258+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:52.706571+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:53.706730+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:54.706916+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:55.707145+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:56.707396+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:57.707648+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:58.707830+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:22:59.708123+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:00.708346+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:01.708520+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:02.708732+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:03.708903+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:04.709118+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:05.709261+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:06.709411+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:07.709569+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:08.709715+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:09.709906+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:10.710155+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:11.710311+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:12.718568+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:13.718727+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:14.718978+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:15.719168+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:16.719422+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:17.719606+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:18.719818+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362061824 unmapped: 63709184 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:19.720060+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 63700992 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719581 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:20.720439+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362070016 unmapped: 63700992 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:21.720651+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362078208 unmapped: 63692800 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7ace000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x769f0, meta 0x16899610), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca4c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffffca4c00 session 0x560002c43340
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x5600025708c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x5600026b9c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:22.720784+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560002c436c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361816064 unmapped: 63954944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.080917358s of 41.216419220s, submitted: 75
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095c00 session 0x5600027e08c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600004b2400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x5600004b2400 session 0x560009d988c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c49500
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002570c40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560000630700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:23.721024+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 63946752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:24.721236+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 63946752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792201 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:25.721507+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 63946752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:26.721677+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 63946752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76c80, meta 0x16899380), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:27.721890+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 63946752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76c80, meta 0x16899380), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:28.722274+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 63946752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:29.722427+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361832448 unmapped: 63938560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791161 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:30.722661+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361832448 unmapped: 63938560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095c00 session 0x560002570700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:31.722812+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361832448 unmapped: 63938560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:32.722980+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361832448 unmapped: 63938560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.708735466s of 10.003569603s, submitted: 73
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:33.723122+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:34.723300+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857085 data_alloc: 234881024 data_used: 11944560
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:35.723472+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:36.723623+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x560004557dc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:37.723744+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:38.723905+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:39.724114+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3856953 data_alloc: 234881024 data_used: 11944560
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:40.724311+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:41.724483+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:42.724625+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:43.724945+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:44.725123+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857085 data_alloc: 234881024 data_used: 11944560
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:45.725412+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:46.725619+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:47.725827+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:48.725990+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.146135330s of 16.288476944s, submitted: 80
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002745dc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:49.727755+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3856953 data_alloc: 234881024 data_used: 11944560
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:50.728217+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:51.728468+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:52.728668+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:53.728927+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e701a000/0x0/0x4ffc00000, data 0x2119fde/0x22d2000, compress 0x0/0x0/0x0, omap 0x76d08, meta 0x168992f8), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:54.729200+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 63873024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002571500
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3856953 data_alloc: 234881024 data_used: 11944560
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:55.729405+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560002c65880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:56.729574+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:57.729724+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:58.729921+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:23:59.730384+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:00.730682+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:01.730891+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:02.731567+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:03.731935+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:04.732131+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 66142208 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:05.732447+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 66134016 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:06.732605+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 66134016 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:07.732775+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 66134016 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:08.733029+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 66134016 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:09.733185+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 66134016 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:10.733377+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:11.733531+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:12.733682+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:13.733887+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:14.734123+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:15.734284+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:16.734450+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:17.734634+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 66125824 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:18.734815+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:19.734976+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:20.735216+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:21.735490+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:22.735686+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:23.735866+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:24.736129+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:25.736339+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:26.736506+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:27.736658+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:28.736857+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:29.737066+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:30.737287+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:31.737448+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 66101248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:32.737620+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 66101248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:33.737746+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:34.737921+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:35.738179+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:36.738319+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:37.738485+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:38.738592+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:39.738751+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:40.738964+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:41.739188+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:42.739404+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:43.739644+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:44.739754+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:45.739997+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:46.740176+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 66084864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:47.740337+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 66076672 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:48.740535+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 66076672 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:49.740704+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 66076672 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7aaf000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x76fec, meta 0x16899014), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:50.740907+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728883 data_alloc: 218103808 data_used: 829040
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 66068480 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:51.741040+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 66068480 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:52.741183+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x5600024be380
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003095c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003095c00 session 0x560001411500
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002982380
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002983a40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 66068480 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 64.068878174s of 64.135406494s, submitted: 35
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560001233500
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x560009d99a40
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002dd6000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002dd6000 session 0x560002744540
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:53.741298+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x55ffff7b1400 session 0x560002c656c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560000638c00 session 0x560002982a80
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360980480 unmapped: 64790528 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:54.741479+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e70d1000/0x0/0x4ffc00000, data 0x2061fee/0x221b000, compress 0x0/0x0/0x0, omap 0x77074, meta 0x16898f8c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360980480 unmapped: 64790528 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e70d1000/0x0/0x4ffc00000, data 0x2061fee/0x221b000, compress 0x0/0x0/0x0, omap 0x77074, meta 0x16898f8c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:55.741719+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800904 data_alloc: 218103808 data_used: 833038
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360980480 unmapped: 64790528 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560002ddd000 session 0x560000d776c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:56.741909+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360923136 unmapped: 64847872 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000715ec00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:57.742166+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 64839680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:58.742326+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360980480 unmapped: 64790528 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:24:59.742492+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e70d0000/0x0/0x4ffc00000, data 0x2062011/0x221c000, compress 0x0/0x0/0x0, omap 0x772d0, meta 0x16898d30), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360980480 unmapped: 64790528 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:00.742744+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3863274 data_alloc: 234881024 data_used: 10992670
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e70d0000/0x0/0x4ffc00000, data 0x2062011/0x221c000, compress 0x0/0x0/0x0, omap 0x772d0, meta 0x16898d30), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360980480 unmapped: 64790528 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x560003088000 session 0x5600026b9340
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000715ec00 session 0x560002c48e00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:01.742955+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000715ec00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 66256896 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:02.743151+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 ms_handle_reset con 0x56000715ec00 session 0x560001410fc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:03.743316+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:04.743462+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:05.743592+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:06.743873+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:07.744123+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:08.744327+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:09.744539+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:10.744792+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:11.744918+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:12.745178+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:13.745383+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:14.745791+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:15.745954+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:16.746159+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:17.746324+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:18.746499+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:19.746692+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:20.746882+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:21.747028+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 66232320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:22.747171+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 66224128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:23.747350+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 66224128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:24.747523+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 66224128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:25.747692+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 66224128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:26.747858+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 66224128 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:27.748031+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:28.748265+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:29.748543+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:30.748885+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:31.749126+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:32.749374+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:33.749589+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 66215936 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:34.749788+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 66207744 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:35.750007+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 66207744 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:36.750242+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 66207744 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:37.750450+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 66207744 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:38.750667+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 66199552 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:39.750894+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 66199552 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:40.751130+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 66199552 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:41.751350+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 66199552 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:42.751531+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:43.751724+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:44.751898+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:45.752045+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:46.753265+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:47.753441+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:48.753752+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:49.753932+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 66191360 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:50.754274+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:51.754479+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:52.754663+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:53.754814+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:54.755043+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:55.755276+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:56.755471+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:57.755675+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 66183168 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:58.755803+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359596032 unmapped: 66174976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:25:59.756485+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359596032 unmapped: 66174976 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:00.757171+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 66166784 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:01.758207+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 66166784 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:02.758700+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 66166784 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:03.759227+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 66166784 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:04.759490+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 66166784 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:05.759712+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 66166784 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:06.759988+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 66158592 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:07.760368+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 66158592 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:08.760633+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 66150400 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:09.760984+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 66150400 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:10.761246+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741144 data_alloc: 218103808 data_used: 837036
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 66150400 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:11.761451+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 heartbeat osd_stat(store_statfs(0x4e7acd000/0x0/0x4ffc00000, data 0x1666f7c/0x181e000, compress 0x0/0x0/0x0, omap 0x775b4, meta 0x16898a4c), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 66150400 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:12.761619+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 78.070625305s of 79.486984253s, submitted: 93
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 312 handle_osd_map epochs [312,313], i have 313, src has [1,313]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 313 ms_handle_reset con 0x55ffff7b1400 session 0x560004556e00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:13.761778+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e7aca000/0x0/0x4ffc00000, data 0x1668b49/0x1820000, compress 0x0/0x0/0x0, omap 0x78247, meta 0x16897db9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 66117632 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:14.761971+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:15.762187+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3743364 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:16.762331+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e7aca000/0x0/0x4ffc00000, data 0x1668b49/0x1820000, compress 0x0/0x0/0x0, omap 0x78247, meta 0x16897db9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:17.762477+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:18.762642+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:19.762843+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:20.763042+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e7aca000/0x0/0x4ffc00000, data 0x1668b49/0x1820000, compress 0x0/0x0/0x0, omap 0x78247, meta 0x16897db9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3743364 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:21.763203+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:22.763375+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 66109440 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:23.763550+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 66093056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e7aca000/0x0/0x4ffc00000, data 0x1668b49/0x1820000, compress 0x0/0x0/0x0, omap 0x78247, meta 0x16897db9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 313 handle_osd_map epochs [313,314], i have 314, src has [1,314]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.259491920s of 11.321173668s, submitted: 39
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: get_auth_request con 0x5600002e7400 auth_method 0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:24.763725+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 65028096 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:25.763899+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 65028096 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:26.764057+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 65028096 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:27.764943+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 65019904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:28.765204+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 65019904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:29.765347+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 65019904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:30.765517+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 65019904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:31.765672+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:32.765887+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:33.766182+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:34.766416+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:35.767025+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:36.767667+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:37.767917+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:38.768161+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 65011712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:39.768547+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:40.769609+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:41.769779+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:42.769986+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:43.770295+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:44.770550+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:45.770832+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 64995328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:46.771130+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:47.771339+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:48.771529+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:49.771665+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:50.771866+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:51.772000+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:52.772221+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:53.772445+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:54.772721+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 64987136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:55.772891+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 64970752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:56.773063+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 64970752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:57.773328+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 64970752 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:58.773597+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:26:59.773813+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:00.774033+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:01.774236+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:02.774480+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:03.774619+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:04.774771+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 64962560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:05.774977+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 64954368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:06.775213+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 64954368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:07.775376+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 64954368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:08.775567+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 64946176 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:09.775755+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 64946176 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:10.775974+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 64946176 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:11.776170+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 64937984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:12.776329+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 64937984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:13.776487+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 64937984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:14.776701+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 64937984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:15.776863+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 64937984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:16.777138+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 64937984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:17.777297+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360841216 unmapped: 64929792 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:18.777481+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360841216 unmapped: 64929792 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:19.777647+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 64921600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:20.777852+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 64921600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:21.778057+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 64921600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:22.778308+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 64913408 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:23.778483+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 64913408 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:24.778643+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 64913408 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:25.778845+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 64913408 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:26.778989+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 64913408 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:27.779186+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:28.779375+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:29.779553+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:30.779765+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:31.779952+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:32.780125+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:33.780311+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 64905216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:34.780652+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360873984 unmapped: 64897024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:35.780807+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360873984 unmapped: 64897024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:36.781215+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360873984 unmapped: 64897024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:37.781347+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 64888832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:38.781954+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 64888832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:39.782721+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 64888832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:40.783409+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 64888832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:41.783721+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 64888832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:42.783939+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 64888832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:43.784179+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 64880640 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:44.784350+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 64880640 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:45.784508+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360898560 unmapped: 64872448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:46.784755+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360898560 unmapped: 64872448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:47.785017+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360898560 unmapped: 64872448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:48.785365+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360898560 unmapped: 64872448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:49.785512+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360898560 unmapped: 64872448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:50.785857+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360906752 unmapped: 64864256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:51.786330+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360906752 unmapped: 64864256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:52.786544+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360906752 unmapped: 64864256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:53.786704+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 64856064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:54.786856+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 64856064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:55.787027+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 64856064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:56.787297+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 64856064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:57.787498+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 64856064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:58.787735+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 64856064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:27:59.787918+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360923136 unmapped: 64847872 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:00.788170+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360923136 unmapped: 64847872 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:01.788389+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360923136 unmapped: 64847872 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:02.788549+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 64839680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:03.788758+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 64839680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:04.788958+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 64839680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:05.789117+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 64839680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:06.789295+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 64839680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:07.789481+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 64831488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:08.789733+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 64831488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:09.789972+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 64831488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:10.790197+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 360947712 unmapped: 64823296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:11.790337+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746138 data_alloc: 218103808 data_used: 841097
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 ms_handle_reset con 0x560000638c00 session 0x5600012328c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366682112 unmapped: 59088896 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:12.790509+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366682112 unmapped: 59088896 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:13.790725+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366682112 unmapped: 59088896 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:14.790950+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366682112 unmapped: 59088896 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:15.791160+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 59080704 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:16.791398+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760474 data_alloc: 218103808 data_used: 7640786
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 59080704 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:17.791552+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 59080704 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:18.791710+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 59072512 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:19.791883+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 59072512 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:20.792241+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:21.792415+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760474 data_alloc: 218103808 data_used: 7640786
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e7ac7000/0x0/0x4ffc00000, data 0x166a5c8/0x1823000, compress 0x0/0x0/0x0, omap 0x7835d, meta 0x16897ca3), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 59064320 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 314 handle_osd_map epochs [314,315], i have 315, src has [1,315]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 118.241065979s of 118.251205444s, submitted: 10
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 315 ms_handle_reset con 0x560002ddd000 session 0x5600027401c0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:22.792596+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:23.792820+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:24.793022+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e82c4000/0x0/0x4ffc00000, data 0xe6c1b8/0x1026000, compress 0x0/0x0/0x0, omap 0x786c3, meta 0x1689793d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:25.793177+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003088000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 316 ms_handle_reset con 0x560003088000 session 0x560002571dc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e82c4000/0x0/0x4ffc00000, data 0xe6c1b8/0x1026000, compress 0x0/0x0/0x0, omap 0x786c3, meta 0x1689793d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:26.793359+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3638298 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:27.793567+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:28.793757+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e8f31000/0x0/0x4ffc00000, data 0x1fdda8/0x3b9000, compress 0x0/0x0/0x0, omap 0x78a29, meta 0x168975d7), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:29.794010+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:30.794240+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 63029248 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:31.794435+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3641072 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e8f2e000/0x0/0x4ffc00000, data 0x1ff843/0x3bc000, compress 0x0/0x0/0x0, omap 0x78b3f, meta 0x168974c1), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: get_auth_request con 0x560000089800 auth_method 0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 63021056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:32.794644+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 63021056 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:33.794849+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 317 handle_osd_map epochs [317,318], i have 318, src has [1,318]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.712511063s of 11.786905289s, submitted: 39
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 63012864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:34.795036+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 63012864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:35.795237+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 63012864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:36.795412+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643846 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 318 heartbeat osd_stat(store_statfs(0x4e8f2b000/0x0/0x4ffc00000, data 0x2012c2/0x3bf000, compress 0x0/0x0/0x0, omap 0x79220, meta 0x16896de0), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 63012864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 318 heartbeat osd_stat(store_statfs(0x4e8f2b000/0x0/0x4ffc00000, data 0x2012c2/0x3bf000, compress 0x0/0x0/0x0, omap 0x79220, meta 0x16896de0), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:37.795556+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 63012864 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:38.795704+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffff7b1400
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 61956096 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 ms_handle_reset con 0x55ffff7b1400 session 0x560000d76700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:39.796524+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:40.797345+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:41.797531+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:42.797676+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:43.797835+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:44.798024+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:45.798218+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 61947904 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:46.798435+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:47.798603+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:48.798788+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:49.798980+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:50.799181+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:51.799358+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:52.799516+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61939712 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:53.799721+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61931520 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:54.800475+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 61923328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:55.800713+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 61923328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:56.800923+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 61923328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:57.801165+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 61923328 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:58.801385+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363855872 unmapped: 61915136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:28:59.801558+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363855872 unmapped: 61915136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:00.801801+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363855872 unmapped: 61915136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:01.801991+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363855872 unmapped: 61915136 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:02.802318+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:03.802457+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:04.802610+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:05.802767+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:06.802916+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:07.803060+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:08.803258+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 61906944 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:09.803436+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 61890560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:10.803640+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 61890560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:11.803810+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 61890560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:12.803968+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 61890560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:13.804109+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 61890560 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:14.804240+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 61882368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:15.804429+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 61882368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:16.804589+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 61882368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:17.804746+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 61882368 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:18.804912+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:19.805061+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:20.805252+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:21.805405+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:22.805558+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:23.805720+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:24.805882+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:25.806024+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 61865984 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:26.806197+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363913216 unmapped: 61857792 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:27.806368+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363913216 unmapped: 61857792 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:28.806570+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363913216 unmapped: 61857792 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:29.806734+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363913216 unmapped: 61857792 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:30.806956+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363921408 unmapped: 61849600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:31.807148+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363921408 unmapped: 61849600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:32.807308+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363921408 unmapped: 61849600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:33.807507+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363921408 unmapped: 61849600 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:34.807747+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363937792 unmapped: 61833216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:35.807931+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363937792 unmapped: 61833216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:36.808090+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363937792 unmapped: 61833216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:37.808246+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363937792 unmapped: 61833216 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:38.808409+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363945984 unmapped: 61825024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:39.808559+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363945984 unmapped: 61825024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:40.808749+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363945984 unmapped: 61825024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:41.808914+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363945984 unmapped: 61825024 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:42.809173+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:43.809373+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:44.809530+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:45.809668+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:46.809812+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:47.810005+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:48.810147+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:49.810299+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363954176 unmapped: 61816832 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:50.810680+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:51.810852+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:52.811018+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:53.811162+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:54.811306+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:55.811460+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:56.811613+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:57.811827+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:58.811968+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 61800448 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:29:59.812166+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363978752 unmapped: 61792256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:00.812369+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363978752 unmapped: 61792256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:01.812531+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363978752 unmapped: 61792256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:02.812741+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363978752 unmapped: 61792256 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:03.812876+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363986944 unmapped: 61784064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:04.813054+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363986944 unmapped: 61784064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:05.813293+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363986944 unmapped: 61784064 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:06.813475+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 363995136 unmapped: 61775872 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:07.813702+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364003328 unmapped: 61767680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:08.813885+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364003328 unmapped: 61767680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:09.814128+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364003328 unmapped: 61767680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:10.814333+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364003328 unmapped: 61767680 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:11.814493+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364011520 unmapped: 61759488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:12.814691+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364011520 unmapped: 61759488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:13.814810+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364011520 unmapped: 61759488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:14.814959+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364011520 unmapped: 61759488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:15.815156+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364011520 unmapped: 61759488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:16.815366+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364011520 unmapped: 61759488 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:17.815525+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364019712 unmapped: 61751296 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:18.815669+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364027904 unmapped: 61743104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:19.815805+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364027904 unmapped: 61743104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:20.815979+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364027904 unmapped: 61743104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:21.816181+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364027904 unmapped: 61743104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:22.816373+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364027904 unmapped: 61743104 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:23.816520+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 61734912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:24.816697+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 61734912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 heartbeat osd_stat(store_statfs(0x4e8f26000/0x0/0x4ffc00000, data 0x202ea4/0x3c4000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:25.816836+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 61734912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:26.816980+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 61734912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650161 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:27.817176+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 61734912 heap: 425771008 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 113.987884521s of 114.019287109s, submitted: 20
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:28.817337+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364068864 unmapped: 70098944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 319 handle_osd_map epochs [319,320], i have 320, src has [1,320]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 320 heartbeat osd_stat(store_statfs(0x4e8f27000/0x0/0x4ffc00000, data 0x202eb4/0x3c5000, compress 0x0/0x0/0x0, omap 0x792cf, meta 0x16896d31), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 320 ms_handle_reset con 0x560000638c00 session 0x560007592e00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:29.817510+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364085248 unmapped: 70082560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:30.817738+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364085248 unmapped: 70082560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 ms_handle_reset con 0x560002ddd000 session 0x560007592fc0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e40000/0x0/0x4ffc00000, data 0x12e4a83/0x14aa000, compress 0x0/0x0/0x0, omap 0x7937e, meta 0x16896c82), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:31.817894+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:32.818155+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:33.818333+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:34.818512+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:35.818684+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:36.818888+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:37.819050+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:38.819216+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 70057984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:39.819415+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364118016 unmapped: 70049792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:40.819639+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364118016 unmapped: 70049792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:41.819785+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364118016 unmapped: 70049792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:42.819983+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:43.820138+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:44.820317+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:45.820487+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-12-13T09:30:46.820652+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _finish_auth 0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:46.821917+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:47.820897+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:48.821122+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:49.821313+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364134400 unmapped: 70033408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:50.821568+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364142592 unmapped: 70025216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:51.821811+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364142592 unmapped: 70025216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:52.821990+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364142592 unmapped: 70025216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:53.822206+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364142592 unmapped: 70025216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:54.822401+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364142592 unmapped: 70025216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:55.822607+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:56.822825+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:57.823025+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:58.823301+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:30:59.823555+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:00.823796+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:01.823959+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:02.824200+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364158976 unmapped: 70008832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:03.824383+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364167168 unmapped: 70000640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:04.824572+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364167168 unmapped: 70000640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:05.824853+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364167168 unmapped: 70000640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:06.825061+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364167168 unmapped: 70000640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000715ec00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749150 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:07.825319+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364175360 unmapped: 69992448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.013404846s of 40.179763794s, submitted: 19
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:08.825494+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 heartbeat osd_stat(store_statfs(0x4e7e3d000/0x0/0x4ffc00000, data 0x12e661f/0x14ad000, compress 0x0/0x0/0x0, omap 0x799fb, meta 0x16896605), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364183552 unmapped: 69984256 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:09.825667+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364183552 unmapped: 69984256 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 322 ms_handle_reset con 0x56000715ec00 session 0x560007592700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:10.826003+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364216320 unmapped: 69951488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:11.826191+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 322 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e81c9/0x14ae000, compress 0x0/0x0/0x0, omap 0x79d6b, meta 0x16896295), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364216320 unmapped: 69951488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3750746 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:12.826463+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364216320 unmapped: 69951488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:13.826712+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364216320 unmapped: 69951488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:14.826956+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 69943296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:15.827228+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 69943296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:16.827433+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 69943296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3750746 data_alloc: 218103808 data_used: 300754
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:17.827656+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 322 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e81c9/0x14ae000, compress 0x0/0x0/0x0, omap 0x79d6b, meta 0x16896295), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 69943296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 322 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e81c9/0x14ae000, compress 0x0/0x0/0x0, omap 0x79d6b, meta 0x16896295), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:18.827802+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 69943296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:19.827942+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.607131004s of 11.598348618s, submitted: 26
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:20.828192+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:21.828436+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:22.828596+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:23.828803+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:24.829006+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:25.829186+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:26.829335+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 69910528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:27.829521+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:28.829681+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:29.829829+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:30.831391+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:31.831595+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:32.831795+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 47K writes, 187K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1267 writes, 4441 keys, 1267 commit groups, 1.0 writes per commit group, ingest: 3.93 MB, 0.01 MB/s
                                           Interval WAL: 1267 writes, 514 syncs, 2.46 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread fragmentation_score=0.004811 took=0.000062s
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:33.832209+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:34.832357+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 69902336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:35.832502+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 69894144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:36.832664+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 69894144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:37.832822+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 69894144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:38.832979+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 69894144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:39.833204+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 69885952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:40.833410+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 69885952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:41.833597+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364290048 unmapped: 69877760 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:42.833791+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364290048 unmapped: 69877760 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:43.833979+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 69869568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:44.834223+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 69869568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:45.834438+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 69869568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:46.834610+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 69869568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:47.834773+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 69869568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: mgrc ms_handle_reset ms_handle_reset con 0x5600026d9800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 09:45:47 compute-0 ceph-osd[87041]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: get_auth_request con 0x560003088000 auth_method 0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: mgrc handle_mgr_configure stats_period=5
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:48.835191+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:49.835471+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:50.835705+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:51.835893+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:52.836152+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 ms_handle_reset con 0x5600026d7800 session 0x560001410a80
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x55ffffca5000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:53.836393+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:54.836594+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:55.836752+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:56.836948+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 69861376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:57.837189+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:58.837415+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:31:59.837635+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:00.837930+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:01.838111+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:02.838334+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:03.838581+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:04.838782+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:05.838980+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:06.839161+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 69853184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:07.839335+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:08.839506+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:09.839653+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:10.839944+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:11.840181+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:12.840440+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:13.840646+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 69844992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:14.840842+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 69836800 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:15.841053+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 69836800 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:16.841253+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 69828608 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:17.841513+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 69828608 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:18.841719+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 69820416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:19.841943+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 69820416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:20.842164+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 69820416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:21.842331+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 69820416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:22.842492+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 69820416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:23.842688+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364355584 unmapped: 69812224 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:24.842850+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364355584 unmapped: 69812224 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:25.843033+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 69804032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:26.843865+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 69795840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:27.844367+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 69795840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:28.845110+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 69795840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:29.845357+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 69795840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:30.846045+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 69795840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:31.846560+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 69795840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:32.846871+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 69787648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:33.847354+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 69787648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:34.847747+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 69787648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:35.848180+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 69787648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:36.848661+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 69787648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:37.848836+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 69787648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:38.849208+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 69771264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:39.849577+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 69771264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:40.849919+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 69771264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:41.850211+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 69771264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:42.850487+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 69763072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:43.850659+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 69763072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:44.850851+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364412928 unmapped: 69754880 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:45.851002+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364412928 unmapped: 69754880 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:46.851267+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364412928 unmapped: 69754880 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:47.851611+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364421120 unmapped: 69746688 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:48.851848+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364421120 unmapped: 69746688 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:49.852013+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364421120 unmapped: 69746688 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:50.852222+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364429312 unmapped: 69738496 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:51.852459+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364429312 unmapped: 69738496 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:52.852691+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364429312 unmapped: 69738496 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:53.852852+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364437504 unmapped: 69730304 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:54.853044+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:55.853286+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:56.853508+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:57.853732+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:58.853974+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:32:59.854239+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:00.854540+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:01.854840+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 69722112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:02.855178+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364462080 unmapped: 69705728 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:03.855385+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364462080 unmapped: 69705728 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:04.855587+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364462080 unmapped: 69705728 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:05.855763+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364462080 unmapped: 69705728 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:06.855940+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 69697536 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:07.856156+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 69697536 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:08.856342+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 69697536 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:09.856525+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 69697536 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:10.856763+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 69689344 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:11.856910+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 69689344 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:12.857158+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 69681152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:13.857613+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 69681152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:14.857884+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 69681152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:15.858137+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 69681152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:16.858306+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 69681152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:17.858466+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 69681152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:18.858774+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 69656576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:19.858989+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 69656576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:20.859219+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 69656576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:21.859365+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 ms_handle_reset con 0x5600026dc400 session 0x55fffffffc00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x5600026d7800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 69656576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:22.859609+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753456 data_alloc: 218103808 data_used: 304815
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 69656576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:23.859782+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 69656576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:24.859967+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e39000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364519424 unmapped: 69648384 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:25.860114+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364519424 unmapped: 69648384 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:26.860331+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 127.056221008s of 127.063026428s, submitted: 10
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364593152 unmapped: 69574656 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:27.860521+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364593152 unmapped: 69574656 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:28.860827+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 69558272 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:29.861034+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 69558272 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:30.861252+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 69558272 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:31.861445+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 69558272 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:32.861674+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 69558272 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:33.861895+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364683264 unmapped: 69484544 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:34.862045+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:35.862158+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:36.862347+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:37.862815+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:38.863207+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:39.863439+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:40.863680+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:41.863981+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:42.864298+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:43.864761+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:44.864978+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364716032 unmapped: 69451776 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:45.865191+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:46.865449+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:47.865632+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:48.865819+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:49.866033+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:50.866474+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:51.866680+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:52.866877+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:53.867253+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:54.867449+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:55.867583+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:56.867821+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:57.868188+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:58.868401+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:33:59.868578+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:00.868760+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:01.869000+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:02.869274+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:03.869399+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:04.869522+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:05.869667+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:06.869847+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:07.869997+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:08.870238+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:09.870488+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:10.870745+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:11.870958+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:12.871200+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:13.871423+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:14.871640+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:15.871897+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 48.884391785s of 49.232563019s, submitted: 132
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:16.872136+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:17.872328+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752822 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:18.880738+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:19.881046+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:20.881304+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:21.881453+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:22.881645+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:23.881823+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:24.882188+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:25.882373+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:26.882647+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:27.882846+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.627572060s of 12.162124634s, submitted: 8
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752822 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:28.883019+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:29.883187+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:30.883434+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:31.883637+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:32.883906+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:33.884155+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:34.884355+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:35.884509+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364765184 unmapped: 69402624 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:36.884684+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:37.884848+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:38.885115+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:39.885273+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364781568 unmapped: 69386240 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.718849659s of 11.986929893s, submitted: 8
Dec 13 09:45:47 compute-0 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:40.885550+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:41.885735+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:42.891038+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:43.891180+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:44.891366+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:45.891559+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:46.891945+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 69361664 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:47.892122+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:48.892293+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:49.892454+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:50.892630+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 69345280 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:51.892837+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 69345280 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:52.893172+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 69345280 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:53.893371+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 69345280 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:54.893531+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 69345280 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:55.893701+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 69337088 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:56.893845+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 69337088 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:57.894095+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 69337088 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:58.894272+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:34:59.894415+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:00.894625+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:01.894814+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:02.895017+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:03.895168+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364847104 unmapped: 69320704 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:04.895367+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364847104 unmapped: 69320704 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:05.895586+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364847104 unmapped: 69320704 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:06.895762+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364855296 unmapped: 69312512 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:07.895947+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364855296 unmapped: 69312512 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:08.896326+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364855296 unmapped: 69312512 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:09.896492+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364855296 unmapped: 69312512 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:10.896686+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364863488 unmapped: 69304320 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:11.896897+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364871680 unmapped: 69296128 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:12.897204+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 69287936 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:13.897535+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 69287936 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:14.897875+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 69287936 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:15.898214+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 69287936 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:16.898358+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 69287936 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:17.898621+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364888064 unmapped: 69279744 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:18.898815+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364888064 unmapped: 69279744 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:19.899135+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364896256 unmapped: 69271552 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:20.899352+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364896256 unmapped: 69271552 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:21.899566+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364896256 unmapped: 69271552 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:22.899742+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:23.899965+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:24.900243+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:25.900473+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:26.900734+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:27.900886+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:28.901041+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:29.901253+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:30.901459+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:31.901695+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:32.901951+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:33.902133+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:34.902323+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364920832 unmapped: 69246976 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:35.902613+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:36.902910+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:37.903177+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:38.903369+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:39.903559+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:40.903782+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:41.903928+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:42.904177+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:43.904374+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:44.904627+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:45.904781+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:46.904943+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:47.905159+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:48.905384+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:49.905521+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:50.905662+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:51.905901+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:52.906140+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:53.906333+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:54.906512+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:55.906735+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:56.906906+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:57.907411+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:58.907768+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:35:59.908058+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:00.908298+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:01.908502+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:02.908659+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:03.908805+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:04.908993+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:05.909184+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:06.909356+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:07.909516+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:08.909705+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:09.909881+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:10.910095+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:11.910283+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:12.910416+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:13.910569+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:14.910746+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:15.910919+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:16.911182+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:17.911327+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365035520 unmapped: 69132288 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:18.911490+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:19.911631+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:20.911801+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:21.911972+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:22.912181+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:23.912372+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:24.912532+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:25.912708+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:26.912910+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365051904 unmapped: 69115904 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:27.913123+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365051904 unmapped: 69115904 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:28.913265+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365060096 unmapped: 69107712 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:29.913381+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365068288 unmapped: 69099520 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:30.914976+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365068288 unmapped: 69099520 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:31.915181+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365068288 unmapped: 69099520 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:32.915330+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365068288 unmapped: 69099520 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:33.915517+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365068288 unmapped: 69099520 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:34.915680+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:35.915855+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:36.916137+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:37.916316+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:38.916474+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:39.916669+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:40.916876+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:41.917050+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:42.917335+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:43.917540+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 69066752 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:44.917737+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 69066752 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:45.917953+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 69066752 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:46.918164+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:47.918341+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:48.918488+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:49.918677+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:50.918876+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:51.919046+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:52.919324+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:53.919562+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:54.919790+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:55.920015+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:56.920212+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:57.920415+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:58.920604+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:36:59.920765+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:00.921165+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:01.921320+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365133824 unmapped: 69033984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:02.921503+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:03.921639+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:04.921773+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:05.921947+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 69009408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:06.922163+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 69009408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:07.922345+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 69009408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:08.922561+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 69009408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:09.922754+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 69009408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:10.922990+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 69009408 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:11.923147+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:12.923403+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:13.923662+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:14.923896+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:15.924140+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365182976 unmapped: 68984832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:16.924341+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365182976 unmapped: 68984832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:17.924586+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365182976 unmapped: 68984832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:18.924832+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365182976 unmapped: 68984832 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:19.925006+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:20.925233+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:21.925440+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:22.929216+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:23.929366+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:24.929558+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:25.929725+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:26.929895+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:27.930058+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:28.930292+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:29.930500+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:30.930699+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:31.930855+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:32.931021+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:33.931192+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:34.931391+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:35.931532+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:36.931738+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:37.931931+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:38.932140+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:39.932385+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:40.932705+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:41.932924+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:42.933152+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:43.934504+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365248512 unmapped: 68919296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:44.934723+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365248512 unmapped: 68919296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:45.934930+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365248512 unmapped: 68919296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:46.935146+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365248512 unmapped: 68919296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:47.935285+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365256704 unmapped: 68911104 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:48.935458+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365256704 unmapped: 68911104 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:49.935615+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365256704 unmapped: 68911104 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:50.935832+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:51.936002+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:52.936206+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:53.936449+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:54.936616+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:55.936763+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365281280 unmapped: 68886528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:56.936910+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365281280 unmapped: 68886528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:57.937177+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365281280 unmapped: 68886528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:58.937377+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365281280 unmapped: 68886528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:37:59.937549+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:00.937765+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:01.937909+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:02.938054+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:03.938254+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365305856 unmapped: 68861952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:04.938444+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365305856 unmapped: 68861952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:05.939929+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365305856 unmapped: 68861952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:06.940169+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365305856 unmapped: 68861952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:07.940333+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365305856 unmapped: 68861952 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:08.940522+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365322240 unmapped: 68845568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:09.940713+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365322240 unmapped: 68845568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:10.940944+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365322240 unmapped: 68845568 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:11.941118+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365330432 unmapped: 68837376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:12.941312+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365330432 unmapped: 68837376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:13.941471+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365330432 unmapped: 68837376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:14.941679+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365330432 unmapped: 68837376 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:15.941846+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365346816 unmapped: 68820992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:16.942005+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365346816 unmapped: 68820992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:17.942183+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365346816 unmapped: 68820992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:18.942361+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365346816 unmapped: 68820992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:19.942698+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365346816 unmapped: 68820992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:20.942899+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365346816 unmapped: 68820992 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:21.943130+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365355008 unmapped: 68812800 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:22.943333+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365355008 unmapped: 68812800 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:23.943518+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365355008 unmapped: 68812800 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:24.943782+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365355008 unmapped: 68812800 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:25.943973+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365363200 unmapped: 68804608 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:26.944225+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365363200 unmapped: 68804608 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:27.944507+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365363200 unmapped: 68804608 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:28.944696+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365363200 unmapped: 68804608 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:29.944985+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365371392 unmapped: 68796416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:30.945334+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365371392 unmapped: 68796416 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:31.945488+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365387776 unmapped: 68780032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:32.945639+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365387776 unmapped: 68780032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:33.945802+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365387776 unmapped: 68780032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:34.946048+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365387776 unmapped: 68780032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:35.946302+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365387776 unmapped: 68780032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:36.946528+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365387776 unmapped: 68780032 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:37.946726+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365395968 unmapped: 68771840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:38.947017+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365395968 unmapped: 68771840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:39.947208+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365395968 unmapped: 68771840 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:40.947447+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 68763648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:41.947658+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 68763648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:42.947799+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 68763648 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:43.947971+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 68747264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752736 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:44.948160+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 68747264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:45.948494+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 68747264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:46.948764+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 68747264 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:47.948938+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560000638c00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 68739072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:48.949108+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 68739072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752864 data_alloc: 218103808 data_used: 305161
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:49.949283+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 68739072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:50.949524+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 68739072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:51.949746+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 68739072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:52.949955+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 68739072 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:53.950144+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 heartbeat osd_stat(store_statfs(0x4e7e3b000/0x0/0x4ffc00000, data 0x12e9c48/0x14b1000, compress 0x0/0x0/0x0, omap 0x79e83, meta 0x1689617d), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 253.087219238s of 253.859130859s, submitted: 2
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365453312 unmapped: 68714496 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3756358 data_alloc: 218103808 data_used: 305161
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:54.950295+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365461504 unmapped: 68706304 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:55.950521+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365469696 unmapped: 68698112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:56.950730+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365469696 unmapped: 68698112 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:57.950902+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365486080 unmapped: 68681728 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:58.951139+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 324 heartbeat osd_stat(store_statfs(0x4e8aa9000/0x0/0x4ffc00000, data 0x67b828/0x843000, compress 0x0/0x0/0x0, omap 0x7a143, meta 0x16895ebd), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365494272 unmapped: 68673536 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 324 ms_handle_reset con 0x560000638c00 session 0x5600055dd340
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692145 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:38:59.951324+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365494272 unmapped: 68673536 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:00.951516+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 68665344 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:01.951723+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560002ddd000
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 68665344 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:02.951921+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 68665344 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:03.952208+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.825212955s of 10.108526230s, submitted: 19
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365510656 unmapped: 68657152 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 324 handle_osd_map epochs [324,325], i have 325, src has [1,325]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3695983 data_alloc: 218103808 data_used: 305161
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:04.952402+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 325 heartbeat osd_stat(store_statfs(0x4e8aa4000/0x0/0x4ffc00000, data 0x67d2a7/0x846000, compress 0x0/0x0/0x0, omap 0x7a8e5, meta 0x1689571b), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365535232 unmapped: 68632576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:05.952593+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365535232 unmapped: 68632576 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:06.952788+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 325 handle_osd_map epochs [325,326], i have 325, src has [1,326]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365559808 unmapped: 68608000 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:07.952979+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365559808 unmapped: 68608000 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:08.953235+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 326 heartbeat osd_stat(store_statfs(0x4e8aa1000/0x0/0x4ffc00000, data 0x67ee97/0x849000, compress 0x0/0x0/0x0, omap 0x7a996, meta 0x1689566a), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365559808 unmapped: 68608000 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3698669 data_alloc: 218103808 data_used: 305196
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:09.953405+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365559808 unmapped: 68608000 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:10.953584+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365559808 unmapped: 68608000 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 326 heartbeat osd_stat(store_statfs(0x4e8aa3000/0x0/0x4ffc00000, data 0x67ee97/0x849000, compress 0x0/0x0/0x0, omap 0x7aa47, meta 0x168955b9), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:11.953804+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365568000 unmapped: 68599808 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:12.953969+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365568000 unmapped: 68599808 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 326 heartbeat osd_stat(store_statfs(0x4e8aa4000/0x0/0x4ffc00000, data 0x67ee87/0x848000, compress 0x0/0x0/0x0, omap 0x7aba9, meta 0x16895457), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:13.954139+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 68575232 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.014971256s of 10.198164940s, submitted: 32
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674788 data_alloc: 218103808 data_used: 305145
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:14.954313+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 326 ms_handle_reset con 0x560002ddd000 session 0x560006879500
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 68575232 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:15.954497+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 68575232 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:16.954633+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 68575232 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:17.954806+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 326 heartbeat osd_stat(store_statfs(0x4e8f14000/0x0/0x4ffc00000, data 0x20ee64/0x3d7000, compress 0x0/0x0/0x0, omap 0x7ac5a, meta 0x168953a6), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 68575232 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:18.954966+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365608960 unmapped: 68558848 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3677606 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:19.955200+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365608960 unmapped: 68558848 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:20.955419+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x56000715ec00
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365608960 unmapped: 68558848 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:21.955548+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 68550656 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:22.955695+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e8710000/0x0/0x4ffc00000, data 0xa10916/0xbdc000, compress 0x0/0x0/0x0, omap 0x7ad73, meta 0x1689528d), peers [1,2] op hist [0,0,0,0,0,0,1])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 ms_handle_reset con 0x56000715ec00 session 0x56000550e700
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:23.955875+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:24.956036+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:25.956171+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:26.956354+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:27.956493+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:28.956648+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:29.956812+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:30.957014+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:31.957208+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:32.957372+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:33.957540+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:34.957727+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:35.957873+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:36.958050+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:37.958266+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:38.958481+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:39.958673+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:40.958888+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364929024 unmapped: 69238784 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:41.959039+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364937216 unmapped: 69230592 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:42.959276+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:43.959471+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:44.959637+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:45.959788+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:46.959973+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:47.960167+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:48.960356+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:49.960545+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:50.960761+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:51.960920+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:52.961108+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:53.961247+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:54.961443+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364961792 unmapped: 69206016 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:55.961632+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364961792 unmapped: 69206016 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:56.961836+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364961792 unmapped: 69206016 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:57.961968+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:58.962120+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:39:59.962284+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:00.962512+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:01.962692+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:02.962951+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:03.963203+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:04.963353+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:05.963537+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:06.963751+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:07.963904+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:08.964133+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:09.964271+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:10.964439+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:11.964653+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:12.964853+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:13.965058+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:14.965327+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:15.965476+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:16.965623+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:17.965752+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:18.965896+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:19.966042+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:20.966355+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:21.966504+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:22.966698+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:23.966839+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:24.967411+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725547 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:25.967615+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:26.967831+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:27.967988+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:28.968166+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: handle_auth_request added challenge on 0x560003892800
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 72.806076050s of 74.684478760s, submitted: 28
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365027328 unmapped: 69140480 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 heartbeat osd_stat(store_statfs(0x4e870b000/0x0/0x4ffc00000, data 0xa124b2/0xbdf000, compress 0x0/0x0/0x0, omap 0x7ae24, meta 0x168951dc), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 328 handle_osd_map epochs [328,329], i have 329, src has [1,329]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:29.968313+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 329 ms_handle_reset con 0x560003892800 session 0x56000550f180
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686636 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:30.968576+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:31.968722+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:32.968847+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:33.968985+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 329 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x21406f/0x3e0000, compress 0x0/0x0/0x0, omap 0x7b199, meta 0x16894e67), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:34.969166+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686636 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:35.969298+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365051904 unmapped: 69115904 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:36.969469+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365051904 unmapped: 69115904 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:37.969657+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 329 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x21406f/0x3e0000, compress 0x0/0x0/0x0, omap 0x7b199, meta 0x16894e67), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365060096 unmapped: 69107712 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:38.969823+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:39.969981+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:40.970223+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _renew_subs
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:41.970391+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:42.970603+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:43.970793+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:44.970982+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:45.971180+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:46.971342+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:47.971697+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:48.971871+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:49.972056+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-mon[76537]: from='client.23578 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:47 compute-0 ceph-mon[76537]: from='client.23580 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:47 compute-0 ceph-mon[76537]: from='client.23582 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:47 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:50.972324+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:51.972550+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:52.972792+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:53.972966+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:54.973206+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:55.973361+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:56.973558+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:57.973746+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:58.973963+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:40:59.974203+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:00.974434+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365117440 unmapped: 69050368 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:01.975277+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:02.975727+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365133824 unmapped: 69033984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:03.975957+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:04.976691+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365133824 unmapped: 69033984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:05.977369+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365133824 unmapped: 69033984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:06.977828+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365133824 unmapped: 69033984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:07.978153+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365142016 unmapped: 69025792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:08.978461+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365142016 unmapped: 69025792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:09.978623+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365142016 unmapped: 69025792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:10.979176+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365142016 unmapped: 69025792 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:11.979411+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:12.979823+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:13.980208+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:14.980466+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:15.980667+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:16.980864+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:17.981028+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:18.981184+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:19.981427+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:20.981626+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:21.981850+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:22.982213+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365166592 unmapped: 69001216 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:23.982488+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365174784 unmapped: 68993024 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:24.982724+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365174784 unmapped: 68993024 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:25.982913+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:26.983172+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365191168 unmapped: 68976640 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:27.983362+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:28.983549+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:29.983709+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:30.983994+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:31.984163+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:32.984342+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.74 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 496 writes, 1133 keys, 496 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s
                                           Interval WAL: 496 writes, 229 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:33.984537+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:34.985378+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365199360 unmapped: 68968448 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:35.985941+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365215744 unmapped: 68952064 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:36.986168+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365215744 unmapped: 68952064 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:37.986683+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365215744 unmapped: 68952064 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:38.987216+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365215744 unmapped: 68952064 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:39.987622+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:40.987836+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:41.988143+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:42.988487+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:43.988788+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:44.989028+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:45.989212+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365223936 unmapped: 68943872 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:46.989418+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365232128 unmapped: 68935680 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:47.989639+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365240320 unmapped: 68927488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:48.989845+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365240320 unmapped: 68927488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:49.990056+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365248512 unmapped: 68919296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:50.990354+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365256704 unmapped: 68911104 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:51.990521+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:52.990738+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:53.990897+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:54.991220+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:55.991375+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:56.991597+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:57.991792+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:58.991962+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365264896 unmapped: 68902912 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:41:59.992151+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:00.992390+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:01.992533+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:02.992658+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365273088 unmapped: 68894720 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:03.992795+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365281280 unmapped: 68886528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:04.992916+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365281280 unmapped: 68886528 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:05.993051+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365289472 unmapped: 68878336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:06.993459+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365289472 unmapped: 68878336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:07.993604+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365289472 unmapped: 68878336 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:08.993885+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:09.994183+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:10.994388+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:11.994534+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:12.994668+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 68870144 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:13.994885+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365338624 unmapped: 68829184 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'config diff' '{prefix=config diff}'
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'config show' '{prefix=config show}'
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:14.995049+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:15.995227+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364879872 unmapped: 69287936 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:16.995419+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'log dump' '{prefix=log dump}'
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:17.995578+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'perf dump' '{prefix=perf dump}'
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'perf schema' '{prefix=perf schema}'
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:18.995725+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364658688 unmapped: 69509120 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:19.996004+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364658688 unmapped: 69509120 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:20.996242+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364658688 unmapped: 69509120 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:21.996386+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364658688 unmapped: 69509120 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:22.996769+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364658688 unmapped: 69509120 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:23.996934+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364666880 unmapped: 69500928 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:24.997210+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364666880 unmapped: 69500928 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:25.997417+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364666880 unmapped: 69500928 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:26.997591+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364666880 unmapped: 69500928 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:27.997741+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364666880 unmapped: 69500928 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:28.997893+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364675072 unmapped: 69492736 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:29.998034+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364683264 unmapped: 69484544 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:30.998222+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364683264 unmapped: 69484544 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:31.998371+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364683264 unmapped: 69484544 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:32.998527+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364699648 unmapped: 69468160 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:33.998682+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364699648 unmapped: 69468160 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:34.998869+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364699648 unmapped: 69468160 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:35.999060+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364699648 unmapped: 69468160 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:36.999222+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364699648 unmapped: 69468160 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:37.999368+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364699648 unmapped: 69468160 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:38.999555+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364699648 unmapped: 69468160 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:39.999723+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364707840 unmapped: 69459968 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:40.999919+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:42.000141+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364724224 unmapped: 69443584 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:43.000287+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:44.000449+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:45.000601+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:46.001143+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364732416 unmapped: 69435392 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:47.001305+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:48.001478+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:49.001818+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:50.002010+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:51.002187+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 69427200 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:52.002461+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:53.002712+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364748800 unmapped: 69419008 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:54.002856+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364756992 unmapped: 69410816 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:55.003033+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:56.003225+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:57.003479+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:58.003694+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:42:59.004019+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:00.004252+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:01.004449+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 69394432 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:02.004621+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364781568 unmapped: 69386240 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:03.004795+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364781568 unmapped: 69386240 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:04.005007+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 69378048 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:05.005203+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 69378048 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:06.005419+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 69378048 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:07.005610+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 69369856 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:08.005797+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 69369856 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:09.005951+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 69369856 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:10.006129+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 69369856 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:11.006346+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:12.006620+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:13.006891+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:14.007146+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:15.007329+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:16.008049+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:17.008433+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:18.008688+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 69353472 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:19.009056+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 69345280 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:20.009400+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 69345280 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:21.009663+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 69337088 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:22.009836+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 69337088 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:23.010023+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:24.010466+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:25.010636+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f07000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:26.010989+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688237 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 69328896 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:27.011170+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364847104 unmapped: 69320704 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:28.011316+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364855296 unmapped: 69312512 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:29.011501+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364855296 unmapped: 69312512 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 180.538192749s of 180.600326538s, submitted: 38
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:30.011689+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364863488 unmapped: 69304320 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:31.011932+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364863488 unmapped: 69304320 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:32.012236+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364863488 unmapped: 69304320 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:33.012372+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364863488 unmapped: 69304320 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:34.012516+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364896256 unmapped: 69271552 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:35.012761+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:36.012924+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364912640 unmapped: 69255168 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:37.013058+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364904448 unmapped: 69263360 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:38.013197+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364937216 unmapped: 69230592 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:39.014585+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364937216 unmapped: 69230592 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:40.014813+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:41.015204+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:42.015420+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:43.015595+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:44.015813+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364945408 unmapped: 69222400 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:45.015969+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:46.016096+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:47.016282+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:48.016378+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:49.016554+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:50.016761+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:51.016941+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364953600 unmapped: 69214208 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:52.017181+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364961792 unmapped: 69206016 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:53.017308+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:54.017716+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:55.017878+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:56.018054+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:57.018293+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:58.018415+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364969984 unmapped: 69197824 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:43:59.018646+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364978176 unmapped: 69189632 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:00.018807+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:01.019048+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:02.019260+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:03.019411+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:04.019593+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:05.019730+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:06.019990+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364986368 unmapped: 69181440 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:07.020172+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:08.020422+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:09.020570+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:10.020743+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:11.020943+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:12.021169+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:13.021361+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:14.021511+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 364994560 unmapped: 69173248 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:15.021692+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:16.021873+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:17.022033+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:18.022210+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:19.022394+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:20.022651+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:21.022893+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365002752 unmapped: 69165056 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:22.023034+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 69156864 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:23.023194+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:24.023383+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:25.023574+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:26.023740+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:27.023869+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:28.023979+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:29.024107+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:30.024320+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:31.024551+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:32.024803+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:33.024968+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 69148672 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:34.025195+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365035520 unmapped: 69132288 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:35.025405+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365035520 unmapped: 69132288 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:36.025569+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:37.025755+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:38.025924+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:39.026163+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365043712 unmapped: 69124096 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:40.026438+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365051904 unmapped: 69115904 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:41.026683+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365051904 unmapped: 69115904 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:42.026900+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365051904 unmapped: 69115904 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:43.027260+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365051904 unmapped: 69115904 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:44.027536+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365060096 unmapped: 69107712 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:45.027777+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365060096 unmapped: 69107712 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:46.027984+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365060096 unmapped: 69107712 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 podman[436451]: 2025-12-13 09:45:47.890657399 +0000 UTC m=+0.104119267 container create 927b29631123269a30c51f1ac833b6fbe668c4a327ca14303a80b9591745ace1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:47.028160+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365076480 unmapped: 69091328 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:48.028355+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:49.028517+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:50.028688+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:51.028901+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365084672 unmapped: 69083136 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:52.029053+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:53.029301+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:54.029494+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:55.029679+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365092864 unmapped: 69074944 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:56.029870+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 69066752 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:57.030000+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 69066752 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:58.030150+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 69066752 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:44:59.030359+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 69066752 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:00.030505+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365101056 unmapped: 69066752 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:01.030688+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:02.030838+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:03.031045+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365109248 unmapped: 69058560 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:04.031307+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:05.031544+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:06.031787+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:07.031978+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:08.032169+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4358: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:09.032378+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:10.032586+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365125632 unmapped: 69042176 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:11.032818+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365133824 unmapped: 69033984 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:12.033008+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:13.033220+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:14.033385+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365150208 unmapped: 69017600 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'config diff' '{prefix=config diff}'
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'config show' '{prefix=config show}'
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:15.033531+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365240320 unmapped: 68927488 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:16.033669+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: osd.0 330 heartbeat osd_stat(store_statfs(0x4e8f09000/0x0/0x4ffc00000, data 0x215aee/0x3e3000, compress 0x0/0x0/0x0, omap 0x7b899, meta 0x16894767), peers [1,2] op hist [])
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365248512 unmapped: 68919296 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 09:45:47 compute-0 ceph-osd[87041]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 09:45:47 compute-0 ceph-osd[87041]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687517 data_alloc: 218103808 data_used: 309206
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: tick
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_tickets
Dec 13 09:45:47 compute-0 ceph-osd[87041]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T09:45:17.033845+0000)
Dec 13 09:45:47 compute-0 ceph-osd[87041]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 68722688 heap: 434167808 old mem: 2845415832 new mem: 2845415832
Dec 13 09:45:47 compute-0 ceph-osd[87041]: do_command 'log dump' '{prefix=log dump}'
Dec 13 09:45:47 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23586 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:47 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} v 0)
Dec 13 09:45:47 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:45:47 compute-0 systemd[1]: Started libpod-conmon-927b29631123269a30c51f1ac833b6fbe668c4a327ca14303a80b9591745ace1.scope.
Dec 13 09:45:47 compute-0 podman[436451]: 2025-12-13 09:45:47.852712717 +0000 UTC m=+0.066174605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:45:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/582a3b3cea238740ab369e2fdac6fde39baaa5bea206dfb761dd058388429a28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/582a3b3cea238740ab369e2fdac6fde39baaa5bea206dfb761dd058388429a28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/582a3b3cea238740ab369e2fdac6fde39baaa5bea206dfb761dd058388429a28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/582a3b3cea238740ab369e2fdac6fde39baaa5bea206dfb761dd058388429a28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:47 compute-0 podman[436451]: 2025-12-13 09:45:47.974900242 +0000 UTC m=+0.188362130 container init 927b29631123269a30c51f1ac833b6fbe668c4a327ca14303a80b9591745ace1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mclaren, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 09:45:47 compute-0 podman[436451]: 2025-12-13 09:45:47.984875739 +0000 UTC m=+0.198337607 container start 927b29631123269a30c51f1ac833b6fbe668c4a327ca14303a80b9591745ace1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:45:47 compute-0 podman[436451]: 2025-12-13 09:45:47.988324755 +0000 UTC m=+0.201786623 container attach 927b29631123269a30c51f1ac833b6fbe668c4a327ca14303a80b9591745ace1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 09:45:48 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]: {
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:     "0": [
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:         {
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "devices": [
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "/dev/loop3"
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             ],
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_name": "ceph_lv0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_size": "21470642176",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "name": "ceph_lv0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "tags": {
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.cluster_name": "ceph",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.crush_device_class": "",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.encrypted": "0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.objectstore": "bluestore",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.osd_id": "0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.type": "block",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.vdo": "0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.with_tpm": "0"
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             },
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "type": "block",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "vg_name": "ceph_vg0"
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:         }
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:     ],
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:     "1": [
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:         {
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "devices": [
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "/dev/loop4"
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             ],
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_name": "ceph_lv1",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_size": "21470642176",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "name": "ceph_lv1",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "tags": {
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.cluster_name": "ceph",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.crush_device_class": "",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.encrypted": "0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.objectstore": "bluestore",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.osd_id": "1",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.type": "block",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.vdo": "0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.with_tpm": "0"
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             },
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "type": "block",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "vg_name": "ceph_vg1"
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:         }
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:     ],
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:     "2": [
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:         {
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "devices": [
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "/dev/loop5"
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             ],
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_name": "ceph_lv2",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_size": "21470642176",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "name": "ceph_lv2",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "tags": {
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.cluster_name": "ceph",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.crush_device_class": "",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.encrypted": "0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.objectstore": "bluestore",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.osd_id": "2",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.type": "block",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.vdo": "0",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:                 "ceph.with_tpm": "0"
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             },
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "type": "block",
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:             "vg_name": "ceph_vg2"
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:         }
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]:     ]
Dec 13 09:45:48 compute-0 dazzling_mclaren[436470]: }
Dec 13 09:45:48 compute-0 systemd[1]: libpod-927b29631123269a30c51f1ac833b6fbe668c4a327ca14303a80b9591745ace1.scope: Deactivated successfully.
Dec 13 09:45:48 compute-0 podman[436451]: 2025-12-13 09:45:48.312514017 +0000 UTC m=+0.525975915 container died 927b29631123269a30c51f1ac833b6fbe668c4a327ca14303a80b9591745ace1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mclaren, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 09:45:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-582a3b3cea238740ab369e2fdac6fde39baaa5bea206dfb761dd058388429a28-merged.mount: Deactivated successfully.
Dec 13 09:45:48 compute-0 podman[436451]: 2025-12-13 09:45:48.383221864 +0000 UTC m=+0.596683742 container remove 927b29631123269a30c51f1ac833b6fbe668c4a327ca14303a80b9591745ace1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 09:45:48 compute-0 systemd[1]: libpod-conmon-927b29631123269a30c51f1ac833b6fbe668c4a327ca14303a80b9591745ace1.scope: Deactivated successfully.
Dec 13 09:45:48 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 13 09:45:48 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880447158' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 13 09:45:48 compute-0 sudo[436324]: pam_unix(sudo:session): session closed for user root
Dec 13 09:45:48 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23590 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:48 compute-0 sudo[436543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 09:45:48 compute-0 sudo[436543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:45:48 compute-0 sudo[436543]: pam_unix(sudo:session): session closed for user root
Dec 13 09:45:48 compute-0 sudo[436573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -- raw list --format json
Dec 13 09:45:48 compute-0 sudo[436573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:45:48 compute-0 ceph-mon[76537]: pgmap v4358: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:48 compute-0 ceph-mon[76537]: from='client.23586 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:48 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 09:45:48 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/880447158' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 13 09:45:48 compute-0 ceph-mon[76537]: from='client.23590 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:48 compute-0 podman[436652]: 2025-12-13 09:45:48.931201754 +0000 UTC m=+0.046424164 container create af31500aef8178a688b7b6dd52c5ed7d56cc4b5b0331bcc3a57586fd18d08366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 09:45:48 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23594 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:49 compute-0 podman[436652]: 2025-12-13 09:45:48.908418698 +0000 UTC m=+0.023641138 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:45:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Dec 13 09:45:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3652404214' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 13 09:45:49 compute-0 systemd[1]: Started libpod-conmon-af31500aef8178a688b7b6dd52c5ed7d56cc4b5b0331bcc3a57586fd18d08366.scope.
Dec 13 09:45:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:45:49 compute-0 podman[436652]: 2025-12-13 09:45:49.168852297 +0000 UTC m=+0.284074727 container init af31500aef8178a688b7b6dd52c5ed7d56cc4b5b0331bcc3a57586fd18d08366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hopper, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 09:45:49 compute-0 podman[436652]: 2025-12-13 09:45:49.177799559 +0000 UTC m=+0.293021969 container start af31500aef8178a688b7b6dd52c5ed7d56cc4b5b0331bcc3a57586fd18d08366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Dec 13 09:45:49 compute-0 podman[436652]: 2025-12-13 09:45:49.182443005 +0000 UTC m=+0.297665445 container attach af31500aef8178a688b7b6dd52c5ed7d56cc4b5b0331bcc3a57586fd18d08366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hopper, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 09:45:49 compute-0 stupefied_hopper[436698]: 167 167
Dec 13 09:45:49 compute-0 podman[436652]: 2025-12-13 09:45:49.185290415 +0000 UTC m=+0.300512835 container died af31500aef8178a688b7b6dd52c5ed7d56cc4b5b0331bcc3a57586fd18d08366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 09:45:49 compute-0 systemd[1]: libpod-af31500aef8178a688b7b6dd52c5ed7d56cc4b5b0331bcc3a57586fd18d08366.scope: Deactivated successfully.
Dec 13 09:45:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8cea7d88fb4e167414ad4643a16446228c693cdf0389dca0ce909c4daa2ceb8-merged.mount: Deactivated successfully.
Dec 13 09:45:49 compute-0 podman[436652]: 2025-12-13 09:45:49.232434246 +0000 UTC m=+0.347656656 container remove af31500aef8178a688b7b6dd52c5ed7d56cc4b5b0331bcc3a57586fd18d08366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 09:45:49 compute-0 systemd[1]: libpod-conmon-af31500aef8178a688b7b6dd52c5ed7d56cc4b5b0331bcc3a57586fd18d08366.scope: Deactivated successfully.
Dec 13 09:45:49 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23596 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:49 compute-0 podman[436753]: 2025-12-13 09:45:49.425638145 +0000 UTC m=+0.044741502 container create a72eeb575b43b4427b9b8c4391e103dbc90d887aa6e960eff59ccd80fbc57159 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 09:45:49 compute-0 systemd[1]: Started libpod-conmon-a72eeb575b43b4427b9b8c4391e103dbc90d887aa6e960eff59ccd80fbc57159.scope.
Dec 13 09:45:49 compute-0 podman[436753]: 2025-12-13 09:45:49.406487659 +0000 UTC m=+0.025591036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 09:45:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 09:45:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09d049a4eadfd4e190d15294a24391869b92f758d1bc2f851721e21db848d5ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09d049a4eadfd4e190d15294a24391869b92f758d1bc2f851721e21db848d5ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09d049a4eadfd4e190d15294a24391869b92f758d1bc2f851721e21db848d5ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09d049a4eadfd4e190d15294a24391869b92f758d1bc2f851721e21db848d5ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 09:45:49 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 13 09:45:49 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3461059150' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 13 09:45:49 compute-0 podman[436753]: 2025-12-13 09:45:49.713012443 +0000 UTC m=+0.332115830 container init a72eeb575b43b4427b9b8c4391e103dbc90d887aa6e960eff59ccd80fbc57159 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 09:45:49 compute-0 podman[436753]: 2025-12-13 09:45:49.724317824 +0000 UTC m=+0.343421201 container start a72eeb575b43b4427b9b8c4391e103dbc90d887aa6e960eff59ccd80fbc57159 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 09:45:49 compute-0 podman[436753]: 2025-12-13 09:45:49.731664546 +0000 UTC m=+0.350767933 container attach a72eeb575b43b4427b9b8c4391e103dbc90d887aa6e960eff59ccd80fbc57159 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_pasteur, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 09:45:49 compute-0 ceph-mon[76537]: from='client.23594 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3652404214' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 13 09:45:49 compute-0 ceph-mon[76537]: from='client.23596 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 09:45:49 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3461059150' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 13 09:45:49 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4359: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 13 09:45:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322842429' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 13 09:45:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 13 09:45:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 13 09:45:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 13 09:45:50 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 13 09:45:50 compute-0 lvm[436970]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 09:45:50 compute-0 lvm[436968]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 09:45:50 compute-0 lvm[436968]: VG ceph_vg0 finished
Dec 13 09:45:50 compute-0 lvm[436973]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 09:45:50 compute-0 lvm[436973]: VG ceph_vg2 finished
Dec 13 09:45:50 compute-0 lvm[436970]: VG ceph_vg1 finished
Dec 13 09:45:50 compute-0 nervous_pasteur[436782]: {}
Dec 13 09:45:50 compute-0 systemd[1]: Starting Hostname Service...
Dec 13 09:45:50 compute-0 podman[436753]: 2025-12-13 09:45:50.689579879 +0000 UTC m=+1.308683256 container died a72eeb575b43b4427b9b8c4391e103dbc90d887aa6e960eff59ccd80fbc57159 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 09:45:50 compute-0 systemd[1]: libpod-a72eeb575b43b4427b9b8c4391e103dbc90d887aa6e960eff59ccd80fbc57159.scope: Deactivated successfully.
Dec 13 09:45:50 compute-0 systemd[1]: libpod-a72eeb575b43b4427b9b8c4391e103dbc90d887aa6e960eff59ccd80fbc57159.scope: Consumed 1.546s CPU time.
Dec 13 09:45:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-09d049a4eadfd4e190d15294a24391869b92f758d1bc2f851721e21db848d5ab-merged.mount: Deactivated successfully.
Dec 13 09:45:50 compute-0 podman[436753]: 2025-12-13 09:45:50.744693018 +0000 UTC m=+1.363796375 container remove a72eeb575b43b4427b9b8c4391e103dbc90d887aa6e960eff59ccd80fbc57159 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 09:45:50 compute-0 systemd[1]: libpod-conmon-a72eeb575b43b4427b9b8c4391e103dbc90d887aa6e960eff59ccd80fbc57159.scope: Deactivated successfully.
Dec 13 09:45:50 compute-0 systemd[1]: Started Hostname Service.
Dec 13 09:45:50 compute-0 sudo[436573]: pam_unix(sudo:session): session closed for user root
Dec 13 09:45:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 09:45:50 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:45:50 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 09:45:50 compute-0 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:45:50 compute-0 ceph-mon[76537]: pgmap v4359: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:50 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2322842429' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 13 09:45:50 compute-0 ceph-mon[76537]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 13 09:45:50 compute-0 ceph-mon[76537]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 13 09:45:50 compute-0 ceph-mon[76537]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 13 09:45:50 compute-0 ceph-mon[76537]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 13 09:45:50 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:45:50 compute-0 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 09:45:50 compute-0 sudo[437024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 09:45:50 compute-0 sudo[437024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 09:45:50 compute-0 sudo[437024]: pam_unix(sudo:session): session closed for user root
Dec 13 09:45:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 13 09:45:51 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508054510' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 13 09:45:51 compute-0 nova_compute[248510]: 2025-12-13 09:45:51.317 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:51 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23612 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:51 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:45:51 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4360: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:51 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2508054510' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 13 09:45:51 compute-0 ceph-mon[76537]: from='client.23612 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 13 09:45:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/48050044' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 13 09:45:52 compute-0 nova_compute[248510]: 2025-12-13 09:45:52.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:52 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Dec 13 09:45:52 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3871802743' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 13 09:45:52 compute-0 nova_compute[248510]: 2025-12-13 09:45:52.796 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:45:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 13 09:45:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307441391' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 13 09:45:53 compute-0 ceph-mon[76537]: pgmap v4360: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/48050044' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 13 09:45:53 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3871802743' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 13 09:45:53 compute-0 nova_compute[248510]: 2025-12-13 09:45:53.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:45:53 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 13 09:45:53 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1761523698' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 13 09:45:53 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4361: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:54 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23622 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:54 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 13 09:45:54 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3855031411' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 13 09:45:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/307441391' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 13 09:45:54 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1761523698' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 13 09:45:55 compute-0 podman[437516]: 2025-12-13 09:45:55.150959479 +0000 UTC m=+0.100098798 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 09:45:55 compute-0 podman[437514]: 2025-12-13 09:45:55.153468811 +0000 UTC m=+0.101756298 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 09:45:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:45:55.480 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:45:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:45:55.481 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:45:55 compute-0 ovn_metadata_agent[158414]: 2025-12-13 09:45:55.481 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:45:55 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4362: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:55 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 13 09:45:55 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554505665' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 13 09:45:56 compute-0 nova_compute[248510]: 2025-12-13 09:45:56.318 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:56 compute-0 nova_compute[248510]: 2025-12-13 09:45:56.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:45:57 compute-0 podman[437692]: 2025-12-13 09:45:57.002973179 +0000 UTC m=+0.096814936 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 09:45:57 compute-0 nova_compute[248510]: 2025-12-13 09:45:57.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:45:57 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:45:57 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4363: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:45:58 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23628 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:58 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec 13 09:45:58 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148167940' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 13 09:45:58 compute-0 nova_compute[248510]: 2025-12-13 09:45:58.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 09:45:58 compute-0 nova_compute[248510]: 2025-12-13 09:45:58.873 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:45:58 compute-0 nova_compute[248510]: 2025-12-13 09:45:58.874 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:45:58 compute-0 nova_compute[248510]: 2025-12-13 09:45:58.874 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:45:58 compute-0 nova_compute[248510]: 2025-12-13 09:45:58.874 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 09:45:58 compute-0 nova_compute[248510]: 2025-12-13 09:45:58.874 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:45:59 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23632 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:59 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:45:59 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862404405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:45:59 compute-0 nova_compute[248510]: 2025-12-13 09:45:59.433 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:45:59 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23636 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:45:59 compute-0 nova_compute[248510]: 2025-12-13 09:45:59.597 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 09:45:59 compute-0 nova_compute[248510]: 2025-12-13 09:45:59.598 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3266MB free_disk=59.987355314195156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 09:45:59 compute-0 nova_compute[248510]: 2025-12-13 09:45:59.599 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 09:45:59 compute-0 nova_compute[248510]: 2025-12-13 09:45:59.599 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 09:45:59 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4364: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:46:00 compute-0 nova_compute[248510]: 2025-12-13 09:46:00.157 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 09:46:00 compute-0 nova_compute[248510]: 2025-12-13 09:46:00.158 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 09:46:00 compute-0 nova_compute[248510]: 2025-12-13 09:46:00.190 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 09:46:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Dec 13 09:46:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1641851574' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Dec 13 09:46:00 compute-0 ceph-mon[76537]: pgmap v4361: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:46:00 compute-0 ceph-mon[76537]: from='client.23622 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:46:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3855031411' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 13 09:46:00 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/3554505665' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 13 09:46:00 compute-0 ovs-appctl[438685]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 13 09:46:00 compute-0 ovs-appctl[438689]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 13 09:46:00 compute-0 ovs-appctl[438693]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 13 09:46:00 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 09:46:00 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223542628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:46:00 compute-0 nova_compute[248510]: 2025-12-13 09:46:00.964 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.774s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 09:46:00 compute-0 nova_compute[248510]: 2025-12-13 09:46:00.975 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 09:46:01 compute-0 nova_compute[248510]: 2025-12-13 09:46:01.007 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 09:46:01 compute-0 nova_compute[248510]: 2025-12-13 09:46:01.010 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 09:46:01 compute-0 nova_compute[248510]: 2025-12-13 09:46:01.010 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 09:46:01 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Dec 13 09:46:01 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279226296' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Dec 13 09:46:01 compute-0 nova_compute[248510]: 2025-12-13 09:46:01.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:46:01 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23644 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:46:01 compute-0 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4365: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:46:02 compute-0 ceph-mon[76537]: pgmap v4362: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:46:02 compute-0 ceph-mon[76537]: pgmap v4363: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:46:02 compute-0 ceph-mon[76537]: from='client.23628 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:46:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/148167940' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 13 09:46:02 compute-0 ceph-mon[76537]: from='client.23632 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:46:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/862404405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:46:02 compute-0 ceph-mon[76537]: from='client.23636 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:46:02 compute-0 ceph-mon[76537]: pgmap v4364: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:46:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/1641851574' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Dec 13 09:46:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2223542628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 09:46:02 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/4279226296' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23646 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5435278116822236e-05 of space, bias 1.0, pg target 0.004630583435046671 quantized to 32 (current 32)
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.826108195861351e-06 of space, bias 1.0, pg target 0.0011478324587584053 quantized to 32 (current 32)
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.968770392745114e-07 of space, bias 4.0, pg target 0.0007162524471294137 quantized to 16 (current 32)
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 09:46:02 compute-0 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 09:46:02 compute-0 nova_compute[248510]: 2025-12-13 09:46:02.453 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 09:46:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Dec 13 09:46:02 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2130995380' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Dec 13 09:46:02 compute-0 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 09:46:03 compute-0 ceph-mon[76537]: from='client.23644 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:46:03 compute-0 ceph-mon[76537]: pgmap v4365: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 09:46:03 compute-0 ceph-mon[76537]: from='client.23646 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 09:46:03 compute-0 ceph-mon[76537]: from='client.? 192.168.122.100:0/2130995380' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Dec 13 09:46:03 compute-0 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Dec 13 09:46:03 compute-0 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1397388509' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
